#include "../../src/positioning/qgeopositioninfo.h"
Maikki Uotila Personal life Uotila was born on 25 February 1977 in Espoo, Finland. She received a Bachelor of Arts in dance from Sarah Lawrence College in New York. In 2004, she settled in Vancouver, British Columbia, Canada. She married Victor Kraatz on June 19, 2004. They have two sons – Oliver (born September 14, 2006) and Henry (born July 10, 2010). Competitive Uotila competed as a single skater until 1994, winning the junior silver medal at the 1992 Nordic Championships. In 1994, she switched to ice dancing, teaming up with Toni Mattila. They competed in the free dance at the 1996 European Championships in Sofia, Bulgaria, finishing 23rd. They also appeared at the 1996 World Championships in Edmonton, Alberta, Canada, but were eliminated after the compulsory dances. After two silver medals, Uotila/Mattila won the Finnish national title in the 1996–1997 season. They placed 26th at the 1997 European Championships in Paris, France. They were coached by Arja Wuorivirta and Martin Skotnicky. They ended their partnership in 1997 after three seasons together. Uotila competed with Michel Bigras in the 1997–1998 season. Post-competitive Uotila coaches ice dancing at the BC Centre of Excellence. She has also worked as a dance instructor at the Shadbolt Centre for the Arts in Burnaby.
Virulence factors of the coagulase-negative staphylococci. Coagulase-negative staphylococci (CNS) have gained substantial interest as pathogens involved in nosocomial, particularly catheter-related infections. The pathogenic potential of CNS is mainly due to their capacity to form biofilms on indwelling medical devices. In a biofilm, the bacteria are protected against antibiotics and from attacks by the immune system. The factors contributing to biofilm formation are among the best-studied virulence factors of CNS and comprise factors involved in the adhesion to a catheter surface and in cell accumulation. CNS usually persist in the host in relative silence, but may cause sepsis, for which the recently found inflammatory peptides called phenol-soluble modulins are prime candidates. Many CNS also produce several lipases, proteases, and other exoenzymes, which possibly contribute to the persistence of CNS in the host and may degrade host tissue. We are also beginning to understand how regulators of virulence trigger the expression of virulence factors in CNS. A better conception of the mechanisms underlying the pathogenicity and the frequently encountered antibiotic resistance of CNS may help to develop novel, efficient anti-staphylococcal therapeutics.
I think a lot of our country buys into authoritarianism without realizing or admitting it. It has a certain comfort, as long as you aren't the one(s) being oppressed or harmed. Breaking due process means that individuals don't actually have rights. edit - my opinions below I'm conflicted on the Hillary thing, because it's barely more than a clerical mistake. Has our criminal justice system overpunished people for similar crimes? Definitely. I didn't agree with it then, so I don't agree with it now. There are a lot of conflicting facts, and virtually nothing harmful arose from her mistakes; the entire investigation is just another waste of time by the GOP, intent on creating a scapegoat to rile up their voters. As a die-hard Bernie supporter, I knew I was feeding into the hope that she would be disqualified from the primary. Because of that, I understand why Trump supporters also buy into it. What bothers me is that the Trump supporters are cheering for an Old Testament punishment. I simply thought Bernie was the best candidate, and hoped Clinton's questionable activities would show that. There are some massive problems with this country, and this is highlighting one in our criminal justice system. Trump has no desire to solve any of the problems; his only goal is to win by dragging the competition down. He has hardly said anything about what he will do, just vague suggestions that he will "fix things" by any means necessary. Either he wins this election and the USA becomes some pseudo-dictatorship, or Hillary wins and we-the-people can make a push to address and fix the problems we have.
Michael Jose Blanco, MD, who was honored with several awards for humanism, compassion and community service, receives his hood during graduation. Outstanding members of the Class of 2014 received special recognition during the medical school’s Honors Convocation, held May 2 at Slee Concert Hall on the North Campus. These award-winning MD candidates also were recognized during the school’s 168th commencement ceremony, held the same day.
Influencing Preferences for Different Types of Causal Explanation of Complex Events Objective: We examined preferences for different forms of causal explanations for indeterminate situations. Background: Klein and Hoffman distinguished several forms of causal explanations for indeterminate, complex situations: single-cause explanations, lists of causes, and explanations that interrelate several causes. What governs our preferences for single-cause (simple) versus multiple-cause (complex) explanations? Method: In three experiments, we examined the effect of target audience, explanatory context, participant nationality, and explanation type. All participants were college students. Participants were given two scenarios, one regarding the U.S. economic collapse in 2007 to 2008 and the other about the sudden success of the U.S. military in Iraq in 2007. The participants were asked to assess various types of causal explanations for each of the scenarios, with reference to one or more purposes or audiences for the explanations. Results: Participants preferred simple explanations for presentation to less sophisticated audiences. Malaysian students of Chinese ethnicity preferred complex explanations more than did American students. The form of presentation made a difference: Participants preferred complex to simple explanations when given a chance to compare the two, but the preference for simple explanations increased when there was no chance for comparison, and the difference between Americans and Malaysians disappeared. Conclusions: Preferences for explanation forms can vary with the context and with the audience, and they depend on the nature of the alternatives that are provided. Application: Guidance for decision-aiding technology and training systems that provide explanations needs to involve consideration of the form and depth of the accounts provided as well as the intended audience.
The newly observed open-charm states in the quark model Comparing the measured properties of the newly observed open-charm states D, D, D, D, $D_{s1}$, $D_{sJ}$, and $D_{sJ}$ with our predicted spectroscopy and strong decays in a constituent quark model, we find that: the $D(2\,^1S_0)$ assignment to the D remains open because the width determined by experiment is too broad; the D and the $D_{s1}$ can be identified as $2\,^3S_1$–$1\,^3D_1$ mixtures; if the D and D are indeed the same resonance, they would be the $D(1\,^3D_3)$; otherwise, they could be assigned as the $D(1\,^3D_3)$ and $D^\prime_2(1D)$, respectively; the $D_{sJ}$ could be either the $D_{s1}$'s partner or the $D_s(1\,^3D_3)$; and both the $D_{s1}(2P)$ and $D^\prime_{s1}(2P)$ interpretations for the $D_{sJ}$ seem likely. The $E1$ and $M1$ radiative decays of these states are also studied. Further experimental efforts are needed to test the present quarkonium assignments for these new open-charm states. Because information on the higher excitations of the D and $D_s$ mesons is poor, the discovery of these open-charm states is clearly important for completing the D and $D_s$ spectra. To understand their observed properties, various efforts have been carried out under the assumption that all the observed open-charm states are dominated by the simple $q\bar{q}$ quark content. It is natural and necessary to exhaust the possible conventional $q\bar{q}$ descriptions before resorting to more exotic interpretations. Further theoretical efforts are still required in order to satisfactorily explain the data concerning these open-charm states. In this work, we investigate the masses as well as the strong and radiative decays of these newly observed states in the nonrelativistic constituent quark model and try to clarify their possible quarkonium assignments by comparing our predictions with experiment. The organization of this paper is as follows. In Sec. II, we calculate the open-charm meson masses in a nonrelativistic constituent quark model and give the possible assignments for these open-charm states based on their observed masses and decay modes. In Sec. III, we investigate, with the ${}^3P_0$ decay model, the strong decays of these states for different possible assignments. The radiative transitions of these states are given in Sec. IV. The summary and conclusion are given in Sec. V. II. Masses To estimate the masses of the $c\bar{q}$ and $c\bar{s}$ states, we employ a simple nonrelativistic constituent quark model which was proposed by Lakhina and Swanson and turns out to describe the heavy-light meson and charmonium masses with reasonable accuracy. In this model, the Hamiltonian is $H = H_0 + H_{sd} + C_{q\bar{q}}$, where $H_0$ is the zeroth-order Hamiltonian, $H_{sd}$ is the spin-dependent Hamiltonian, and $C_{q\bar{q}}$ is a constant. In $H_0$, $r = |\mathbf{r}|$ is the $q\bar{q}$ separation and $M_r = 2m_q m_{\bar{q}}/(m_q + m_{\bar{q}})$; $m_q$ and $S_q$ ($m_{\bar{q}}$ and $S_{\bar{q}}$) are the mass and spin of the constituent quark $q$ (antiquark $\bar{q}$), respectively. In $H_{sd}$, $\mathbf{L}$ is the relative orbital angular momentum between $q$ and $\bar{q}$, $\gamma_E = 0.5772$, and the scale has been set to 1.3 GeV. The constituent quark masses are also used in both the strong and radiative decay computations. The heavy-light mesons are not charge-conjugation eigenstates, and hence mixing can occur between the two states with $J = L$. This mixing can be parameterized by a mixing angle $\theta_{nL}$ for the $c\bar{q}(nL)$ states, where the primed $c\bar{q}(nL)'$ refers to the higher-mass state.
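The explicit expressions for $H_0$ and $H_{sd}$ were lost in extraction. For orientation only, a hedged Cornell-type reconstruction consistent with the definitions above (the radial coefficients $c_{SS}$, $c_{LS}$, and $c_T$ are placeholders for the paper's actual one-gluon-exchange-plus-confinement expressions, not quoted from it):
$$H_0 = \frac{\mathbf{p}^2}{M_r} - \frac{4\alpha_s(r)}{3r} + b\,r, \qquad M_r = \frac{2m_q m_{\bar{q}}}{m_q + m_{\bar{q}}},$$
$$H_{sd} = c_{SS}(r)\,\mathbf{S}_q\cdot\mathbf{S}_{\bar{q}} + c_{LS}(r)\,\mathbf{L}\cdot\mathbf{S} + c_T(r)\,T_{q\bar{q}},$$
with $\mathbf{S} = \mathbf{S}_q + \mathbf{S}_{\bar{q}}$ and $T_{q\bar{q}}$ the usual tensor operator.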
With the help of the Mathematica program, solving the Schrödinger equation with the Hamiltonian $H_0$ and evaluating $H_{sd}$ in leading-order perturbation theory, one can obtain the open-charm meson masses shown in Tables 1–2. For comparison, the corresponding masses predicted by some other approaches, such as the Blankenbecler–Sugar equation and the relativistic quark model, are also listed. It is clear from Tables 1 and 2 that the quark model can reasonably account for the masses of the observed ground-state S- and P-wave open-charm mesons, and the overall agreement between the expectations from the quark model and those from other approaches, especially the Blankenbecler–Sugar equation and the relativistic quark model, is good (the mixing angles in radians are …), which encourages us to discuss the possible assignments for the newly observed open-charm states based on the expectations of our quark model. Among these newly observed open-charm states, the $J^P$ of the $D_{s1}$ has been determined to be $1^-$ experimentally, while the spin-parity quantum numbers of the other states are still unsettled. According to the observed decay modes, the possible spin-parity quantum numbers of these open-charm states are listed in Table 3. We shall discuss the possible quarkonium assignments for these open-charm states based on Tables 1–3. … turns out to be consistent with the predictions for the $D(2\,^1S_0)$. Therefore, the $0^-$ assignment to the D seems the most plausible. The possible $J^P$ of the D are $1^-$, $3^-$, …. The D mass is very close to the predicted mass for the $1^-$ state (2636 MeV). Also, the $D(2\,^3S_1)$ and $D(1\,^3D_1)$ have the same $J^P$ and similar masses, and hence can in general mix to produce two physical $1^-$ states (hereafter, we assign the $1^-$ physical states as $2\,^3S_1$–$1\,^3D_1$ mixtures). Therefore, the D is most likely a $2\,^3S_1$–$1\,^3D_1$ mixture; the helicity-angle distribution of the D is also found to be consistent with this assignment. Below, we shall focus on these possible assignments for the observed open-charm states as shown in Table 4. The mass information alone is insufficient to classify these new open-charm states. Their decay properties also need to be compared with model expectations. We shall discuss the decay dynamics of these states in the next section. III. Strong decays A. Model parameters In this section, we shall employ the ${}^3P_0$ model to evaluate the two-body open-flavor strong decays of the initial state. The ${}^3P_0$ model, also known as the quark pair creation model, has been extensively applied to evaluate the strong decays of mesons from light $q\bar{q}$ to heavy $c\bar{b}$, since it gives a considerably good description of many observed decay amplitudes and partial widths of hadrons. Some detailed reviews on the ${}^3P_0$ model can be found in the literature. Also, the simple harmonic oscillator (SHO) approximation for the spatial wave functions of mesons is used in the strong-decay computations. This is typical of strong-decay calculations. The SHO wave functions have the advantage that decay amplitudes and widths can be determined analytically, and it has been demonstrated that the numerical results are usually not strongly dependent on the details of the spatial wave functions of mesons. The explicit expression for the decay width employed in this work can be found in the references. The parameters involved in the ${}^3P_0$ model include the constituent quark masses, the SHO wave-function scale parameters $\beta$, and the light nonstrange quark pair-creation strength $\gamma$ (the pair-creation operator is sketched below). The $\gamma$ and the strange quark pair-creation strength $\gamma_{s\bar{s}}$ can be related by $\gamma_{s\bar{s}} \approx \gamma/\sqrt{3}$.
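The ${}^3P_0$ pair-creation operator itself is not written out in this copy; in its standard form (conventions differ slightly across references) it reads
$$T = -3\gamma \sum_m \langle 1m; 1\,{-m} \,|\, 00 \rangle \int d^3p\, d^3\bar{p}\; \delta^3(\mathbf{p} + \bar{\mathbf{p}})\; \mathcal{Y}_1^m\!\left(\frac{\mathbf{p} - \bar{\mathbf{p}}}{2}\right) \chi_{1,-m}\, \phi_0\, \omega_0\, b^\dagger(\mathbf{p})\, d^\dagger(\bar{\mathbf{p}}),$$
where $\gamma$ is the pair-creation strength, $\mathcal{Y}_1^m$ is a solid harmonic, and $\chi_{1,-m}$, $\phi_0$, $\omega_0$ are the spin-triplet, flavor-singlet, and color-singlet wave functions of the created quark pair.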
The constituent quark masses $m_u$, $m_d$, $m_s$, and $m_c$ are the same as those used in the constituent quark model. The SHO wave-function scale parameters are taken to be the effective $\beta$'s obtained by equating the root-mean-square radius of the SHO wave function to that obtained from the nonrelativistic quark model. The meson effective $\beta$'s used in this work are listed in Table 5. The remaining parameter $\gamma$ is an overall factor in the width. By fitting to 19 well-established experimental decay widths, we obtain $\gamma = 0.452 \pm 0.105$, consistent with the $0.485 \pm 0.15$ obtained by Close and Swanson from their model. This uncertainty in $\gamma$ means that the theoretical width has a relative uncertainty of $\simeq 0.47$. It is perhaps no surprise that the prediction has a larger uncertainty, due to the larger errors of the data as well as the approximations inherent in the decay model. The $n\,^3L_J$–$n\,^1L_J$ mixing angles are taken as those determined in the mass estimates. B. D The decay widths of the D as the $D(2\,^1S_0)$ are shown in Table 6. The predicted total width is about 45 MeV, about 70 MeV lower than the lower limit of the measured $\Gamma(D) = 130 \pm 12 \pm 13$ MeV. Recent calculations in a ${}^3P_0$ model and a chiral quark model also give a rather narrow width for the $D(2\,^1S_0)$. The upper limit of the $D(2\,^1S_0)$'s width is expected to be about 66 MeV, still about 50 MeV lower than the lower limit of the measurement. This inconsistency between the theoretical and experimental results could imply that the experimental analysis has overestimated the width of the D if this state is indeed the $2\,^1S_0$ charmed meson, as has been suggested. Further confirmation of its resonance parameters is required to establish the $D(2\,^1S_0)$ assignment to the D. The ratio $\Gamma(D_0)/\Gamma(D^*)$ is expected to be about 0.22; it is independent of the parameter $\gamma$ and can also provide a consistency check for this assignment. Without additional information on the D, the $D(2\,^1S_0)$ assignment to it remains open. C. D In the 2S–1D mixing scenario, the eigenvectors of the D and its partner $D(M_X)$ can be written as $|D\rangle = \cos\theta\,|2\,^3S_1\rangle + \sin\theta\,|1\,^3D_1\rangle$ and $|D(M_X)\rangle = -\sin\theta\,|2\,^3S_1\rangle + \cos\theta\,|1\,^3D_1\rangle$, where $\theta$ is the $D(2\,^3S_1)$–$D(1\,^3D_1)$ mixing angle and $M_X$ denotes the mass of the physical state. The predicted decay widths of the D are listed in Table 7. The variations of the decay widths and of the branching ratio $\Gamma(D^+\pi^-)/\Gamma(D^{*+}\pi^-)$ with the mixing angle are illustrated in Fig. 1. It is clear that for about $0.364 \le \theta \le 0.4$ radians, both the total width and the branching ratio $\Gamma(D^+\pi^-)/\Gamma(D^{*+}\pi^-)$ of the D can be well reproduced (see Fig. 1(a)). Also in this mixing-angle range, the $D^*$, $D_1$, and D channels are the dominant decay modes, and the mode $D^{*+}\pi^-$ dominates $D^+\pi^-$ (see Fig. 1(b)), consistent with the observation. The helicity-angle distribution of the D is also found to be consistent with the predictions for this assignment. Therefore, the interpretation of the D as a mixture of the $D(2\,^3S_1)$ and $D(1\,^3D_1)$ seems convincing. It is expected that $\Gamma(D_1)/\Gamma(D^*)$ is around 1.0 and that … . Further experimental study of the D in the $D_1$, …, and $D^*$ channels can provide a consistency check for this interpretation. … GeV (see Table 1). The total width and the branching ratio $\Gamma(D^+\pi^-)/\Gamma(D^{*+}\pi^-)$ of the $D(M_X)$ as functions of the initial-state mass $M_X$ and the mixing angle are illustrated in Fig. 2 … . The $D(1\,^3D_3)$ interpretation for the D therefore appears suitable. The width of the $D(2\,^3P_0)$ is predicted to be about 135 MeV, about 70 MeV higher than the measured $60.9 \pm 5.6 \pm 3.1$ MeV.
However, the lower limit of the $D(2\,^3P_0)$'s total width is expected to be about 72 MeV, compatible with the measurement, which makes the $D(2\,^3P_0)$ assignment for the D also plausible. The decay widths of the D as the $D(1\,^3D_3)$, $D_2(1D)$, and $D'_2(1D)$ are listed in Table 9. The expressions for the decay widths of the $D'_2(1D)$ are not listed but are the same as those of the $D_2(1D)$ except that $\theta_{c,1D}$ is replaced by $\theta_{c,1D} + \pi/2$. The dependence of the total widths of the $D_2(1D)$ and $D'_2(1D)$ on the mixing angle $\theta_{c,1D}$ is illustrated in Fig. 3. … system where the $D_1(1P)$ is broader than the $D'_1(1P)$. From Fig. 3, one can see that, at 0.697 radians, the lower limit of the $D_2(1D)$'s total width is substantially larger than the upper limit of the measurement, while the lower limit of the $D'_2(1D)$'s total width is close to the upper limit of the experiment. Therefore, if the D is indeed a $2^-$ state, the favorable quarkonium assignment would be the $D'_2(1D)$ rather than the $D_2(1D)$. Estimates of decay widths containing $\theta_{c,1D}$ are given in terms of $\theta_{c,1D} = 0.697$ radians. A symbol "" indicates that a decay mode is forbidden. The ratio $\Gamma(D \to D^+\pi^-)/\Gamma(D \to D^{*+}\pi^-)$ is independent of $\gamma$ and is therefore crucial to further clarify … . In the 2S–1D mixing scenario, the eigenvectors of the $D_{s1}$ and its partner $D_{s1}(M_Y)$ can be written analogously, where $\theta_1$ is the $D_s(2\,^3S_1)$–$D_s(1\,^3D_1)$ mixing angle and $M_Y$ denotes the mass of the physical state. The decay widths of the $D_{s1}$ are listed in Table 11. The variations of the decay widths and of $\Gamma(D^*K)/\Gamma(DK)$ with the mixing angle $\theta_1$ are illustrated in Fig. 4. Clearly, with $1.06 \le \theta_1 \le 1.34$ radians, both the total width and $\Gamma(D^*K)/\Gamma(DK)$ of the $D_{s1}$ can be well reproduced (see Fig. 4(a)). Also, in this mixing-angle range, the main decay modes are $DK$ and $D^*K$ (see Fig. 4(b)), in accord with the observation of the $D_{s1}$ in the $DK$ and $D^*K$ channels. Therefore, the picture of the $D_{s1}$ being in fact a mixture of the $D_s(2\,^3S_1)$ and $D_s(1\,^3D_1)$ seems convincing. Studies in a chiral quark model and a ${}^3P_0$ model also favor this interpretation. … (see Table 2). The total width and the branching ratio $\Gamma(D^*K)/\Gamma(DK)$ for the $D_{s1}(M_Y)$ as functions of the initial … model also favors this assignment. The $D_{sJ}$ could also be the $D_s(1\,^3D_3)$, as shown in Table 4. In this case, the decay widths are listed in Table 12. This assignment is also favored by studies in the ${}^3P_0$ model and lattice QCD. Experimental study of the …, $D_s^*$, and $DK^*$ channels is crucial to distinguish these two possible assignments. F. $D_{sJ}$ The decay widths of the $D_{sJ}$ as the $D_{s1}(2P)$ or $D'_{s1}(2P)$ are listed in Table 13. The expressions for the decay widths of the $D'_{s1}(2P)$ are not listed but are the same as those of the $D_{s1}(2P)$ except that $\theta_{cs,2P}$ is replaced by $\theta_{cs,2P} + \pi/2$. The dependence of the total width of the $D_{sJ}$, as a $1^+$ state, on the mixing angle $\theta_{cs,2P}$ and on A is illustrated in Figs. 6 and 7. A similar behavior also exists at about A = 306 MeV, as shown in Fig. 7. Within the theoretical and experimental errors, the predicted total widths for both the $D_{s1}(2P)$ and $D'_{s1}(2P)$ are comparable with experiment. Therefore, both the $D_{s1}(2P)$ and $D'_{s1}(2P)$ assignments for the $D_{sJ}$ seem likely based on its measured total width. It should be noted that since the experimental errors of $\Gamma(D_{sJ})$ are large, an improved measurement of $\Gamma(D_{sJ})$ is needed to confirm our present assignment.
Also, Fig. 7 indicates that in the vicinity of A = 300 MeV (270 ≤ A ≤ 330 MeV), the $D_{s1}(2P)$ is expected to be about 100 ∼ 150 MeV broader than the $D'_{s1}(2P)$, which is consistent with the prediction from heavy quark effective theory. In the framework of heavy quark effective theory, the $D_{s1}(2P)$ is the $1^+$ state belonging to the $S = (0^+, 1^+)$ doublet, while the $D'_{s1}(2P)$ corresponds to the $1^+$ state of the $T = (1^+, 2^+)$ doublet, and the $1^+$ state of the S doublet is predicted to be broader than that of the T doublet. A similar conclusion has been reached in calculations from the ${}^3P_0$ model and the chiral quark model. IV. Radiative decays It is well known that radiative transitions can probe the internal charge structure of hadrons, and therefore they will likely play an important role in determining the quantum numbers of these states. The $E1$ and $M1$ widths between $n\,^{2S+1}L_J$ $c\bar{q}$ states in the nonrelativistic quark model are given by standard expressions (sketched at the end of this excerpt) involving the effective charges $e_Q = \frac{m_q Q_c + m_c Q_q}{m_q + m_c}$ and $e'_Q = \frac{m_q Q_c + m_c Q_q}{m_q m_c}$, where $Q_c$ and $Q_q$ denote the charges of the quark $c$ and of $q$ in units of $|e|$, respectively; $\alpha = 1/137$ is the fine-structure constant, $E_\gamma$ is the final photon energy, $E_f$ is the energy of the final state $n\,^{2S+1}L_J$, $M_i$ is the initial-state mass, and the remaining factor is an angular matrix element. The wave functions used to evaluate the matrix elements $\langle v'|r|v \rangle$ and $\langle v'|j_0(E_\gamma r/2)|v \rangle$ are obtained from the nonrelativistic quark model. According to the PDG, the well-established … (Table 18). As can be seen in Table 15, the $D_1$ and $D'_1$ channels are clearly of great interest for discriminating between the $2^-$ and $3^-$ interpretations of the D, since these modes are forbidden for a $3^-$ state while allowed for a $2^-$ state. In particular, $\Gamma(D_2(1D) \to D_1\,\gamma)$ is expected to be about 757 keV and thus becomes an experimentally promising process. Similarly, from Table 16, experimental information on the $D_{sJ}$ in the $D_{s0}\,\gamma$, $D_{s1}\,\gamma$, and $D'_{s1}\,\gamma$ channels would be important for discriminating between the $1^-$ and $3^-$ interpretations, since these decay modes are forbidden for the $3^-$ state while allowed for the $1^-$ state. As for the $M1$ transitions, experimental study of the ratio $R = B(D_{sJ} \to D_{s1}\,\gamma)/B(D_{sJ} \to D'_{s1}\,\gamma)$ would be useful to discriminate between the $D_{s1}(2P)$ and $D'_{s1}(2P)$ interpretations, since it is expected that … . Estimates of decay widths containing $\theta$ are given in terms of $\theta = 0.4$ radians. A symbol "" indicates that a decay mode is forbidden. Mass spectra alone are insufficient to determine the quantum numbers of these open-charm states.
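The radiative-width formulas referenced above were garbled in extraction. A hedged sketch of the standard nonrelativistic $E1$ expression that the quoted quantities fit (the factor $C_{fi}$ stands in for the paper's angular matrix element):
$$\Gamma_{E1}(i \to f + \gamma) = \frac{4}{3}\,\alpha\, e_Q^2\, E_\gamma^3\, \frac{E_f}{M_i}\, C_{fi}\, \big|\langle f\,|\,r\,|\,i \rangle\big|^2,$$
with the $M1$ widths involving the overlap $\langle f\,|\,j_0(E_\gamma r/2)\,|\,i \rangle$ and the magnetic effective charge in place of the dipole matrix element and $e_Q$.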
def inspect_bands_dos(self):
    """Inspect the band-structure and DOS sub-workchains, attach their
    outputs, and return an error exit code if either of them failed
    (``None`` otherwise)."""
    exit_code = None
    if 'bands_workchain' in self.ctx:
        bands = self.ctx.bands_workchain
        if not bands.is_finished_ok:
            self.report(
                'Bands calculation {} finished with error, exit_status: {}'
                .format(bands, bands.exit_status))
            exit_code = self.exit_codes.ERROR_SUB_PROC_BANDS_FAILED
        # Attach the band structure with high-symmetry labels re-applied
        self.out(
            'band_structure',
            compose_labelled_bands(bands.outputs[out_ln['bands']],
                                   self.ctx.bands_kpoints))
    else:
        bands = None
    if 'dos_workchain' in self.ctx:
        dos = self.ctx.dos_workchain
        if not dos.is_finished_ok:
            self.report(
                'DOS calculation finished with error, exit_status: {}'.
                format(dos.exit_status))
            exit_code = self.exit_codes.ERROR_SUB_PROC_DOS_FAILED
        # Attach both the raw DOS bands and the smeared DOS computed from them
        self.out('dos_bands', dos.outputs[out_ln['bands']])
        self.out(
            'dos',
            dos_from_bands(dos.outputs[out_ln['bands']],
                           smearing=orm.Float(
                               self.ctx.options.get('dos_smearing', 0.05)),
                           npoints=orm.Int(
                               self.ctx.options.get('dos_npoints', 2000))))
    else:
        dos = None
    return exit_code
Decimal Dust, Significant Digits, and the Search for Stars The practice of rounding statistical results to two decimal places is one of a large number of heuristics followed in the social sciences. In evaluating this heuristic, the authors conducted simulations to investigate the precision of simple correlations. They considered a true correlation of .15 and ran simulations in which the sample sizes were 60, 100, 200, 500, 1,000, 10,000, and 100,000. They then looked at the digits in the correlations' first, second, and third decimal places to determine their reproducibility. They conclude that when n < 500, the habit of reporting a result to two decimal places seems unwarranted, and it never makes sense to report the third digit after the decimal place unless one has a sample size larger than 100,000. Similar results were found with rhos of .30, .50, and .70. The results offer an important qualification to what is otherwise a misleading practice.
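A minimal sketch of the kind of simulation described, for readers who want to reproduce the flavor of the result (function and variable names are mine, not the authors'; the exact design of their simulations may differ):

import numpy as np

rng = np.random.default_rng(0)

def digit(value, place):
    """Return the digit of `value` at the given decimal place (1-based)."""
    return int(abs(value) * 10**place) % 10

def digit_stability(rho=0.15, n=200, reps=2000):
    """Estimate how often each of the first three decimal digits of a
    sample correlation matches the corresponding digit of the true rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    rs = np.array([
        np.corrcoef(*rng.multivariate_normal([0, 0], cov, size=n).T)[0, 1]
        for _ in range(reps)
    ])
    return {place: np.mean([digit(r, place) == digit(rho, place) for r in rs])
            for place in (1, 2, 3)}

# Smaller grid than the paper's (their largest n of 100,000 is slow here)
for n in (60, 100, 200, 500, 1000, 10000):
    print(n, digit_stability(n=n))

Under this setup the second decimal digit of r stabilizes only for the larger sample sizes, which is the effect the abstract summarizes.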
############################################################ # MORUS 640 ############################################################ ciphers_640 = [ { '[0]': [27]}, # alpha { '[0]': [0]}, # beta {}, # gamma {}, # delta {}, # epsil { '[0]': [27], # appr1 '[1]': [0,26,8], '[2]': [31,13,7], '[3]': [12]}, { '[1]': [2], # appr2 '[2]': [1,7,15,27], '[3]': [6,20,14], '[4]': [19]}, { '[0]': [27], # full '[1]': [0,2,8,26], '[2]': [31,27,15,13,1], '[3]': [20,14,12,6], '[4]': [19]} ] states_640 = [ { '[1][0]': [0]}, # alpha { '[0][0]': [0], # beta '[0][1]': [0]}, { '[0][1]': [0], # gamma '[0][4]': [0], '[1][1]': [31]}, { '[0][4]': [0], # delta '[1][2]': [0], '[1][4]': [13]}, { '[0][2]': [0], # epsil '[1][0]': [0], '[1][2]': [7]}, { '[2][2]': [0]}, # appr1 { '[2][2]': [0]}, # appr2 {} # full ] weight_mini_640 = [ 1, # alpha 1, # beta 1, # gamma 1, # delta 1, # epsil 7, # appr1 9, # appr2 16, # full ] weight_640 = [ 10, # alpha 10, # beta 10, # gamma 10, # delta 10, # epsil # 14, # appr1 # 18, # appr2 # 16, # full ] ############################################################ # MORUS 1280 ############################################################ ciphers_1280 = [ { '[0]': [51]}, # alpha { '[0]': [0]}, # beta {}, # gamma {}, # delta {}, # epsil { '[0]': [51], # appr1 '[1]': [0,33,55], '[2]': [4,37,46], '[3]': [50]}, { '[1]': [25], # appr2 '[2]': [7,29,38,51], '[3]': [11,20,42], '[4]': [24]}, { '[0]': [51], # full '[1]': [0,25,33,55], '[2]': [4,7,29,37,38,46,51], '[3]': [11,20,42,50], '[4]': [24]} ] states_1280 = [ { '[1][0]': [0]}, # alpha { '[0][0]': [0], # beta '[0][1]': [0]}, { '[0][1]': [0], # gamma '[0][4]': [0], '[1][1]': [46]}, { '[0][4]': [0], # delta '[1][2]': [0], '[1][4]': [4]}, { '[0][2]': [0], # epsil '[1][0]': [0], '[1][2]': [38]}, { '[2][2]': [0]}, # appr1 { '[2][2]': [0]}, # appr2 {} # full ] weight_mini_1280 = [ 1, # alpha 1, # beta 1, # gamma 1, # delta 1, # epsil 7, # appr1 9, # appr2 16, # full ] # weight_1280 = [ # 1, # alpha # 1, # beta # 1, # gamma # 1, # delta # 1, # epsil # 7, # appr1 # 9, # appr2 # 16, # full # ] masks_list = [ {'kind': 1, 'width': 32, 'states': states_640, 'ciphers': ciphers_640, 'weight': weight_mini_640}, {'kind': 1, 'width': 64, 'states': states_1280, 'ciphers': ciphers_1280, 'weight': weight_mini_1280}, {'kind': 2, 'width': 32, 'states': states_640, 'ciphers': ciphers_640, 'weight': weight_640}, # {'kind': 2, 'width': 64, 'states': states_1280, 'ciphers': ciphers_1280, 'weight': weight_1280}, ]
<reponame>julianschick/flipdot-brose-code #include "flipdotdriver.h" #include <algorithm> //std::min/std::max FlipdotDriver::FlipdotDriver(int module_width_, int module_height_, int device_count_, flipdot_driver_pins_t* pins_, flipdot_driver_timing_config_t* timing_) : module_width(module_width_), module_height(module_height_), device_count(device_count_) { pins = *pins_; timing = *timing_; bound_timing(); total_width = module_width * device_count; total_height = module_height; init_gpio(); init_spi(); // Activate config register output, can now operate gpio_set_level(pins.oe_conf, 0); } void FlipdotDriver::init_spi() { esp_err_t ret; spi_bus_config_t bus_config; bus_config.miso_io_num = -1; bus_config.mosi_io_num = pins.ser; bus_config.sclk_io_num = pins.serclk; bus_config.quadwp_io_num = -1; bus_config.quadhd_io_num = -1; bus_config.max_transfer_sz = 0; bus_config.flags = SPICOMMON_BUSFLAG_MASTER; bus_config.intr_flags = 0; spi_device_interface_config_t device_config; device_config.command_bits = 0; device_config.address_bits = 0; device_config.dummy_bits = 0; device_config.mode=0; device_config.duty_cycle_pos = 128; // 50%/50% device_config.cs_ena_pretrans = 0; device_config.cs_ena_posttrans = 0; device_config.clock_speed_hz=APB_CLK_FREQ / 2; // 40 MHz (max Speed for 74HC595) device_config.input_delay_ns = 0; device_config.spics_io_num=-1; device_config.flags = 0; device_config.queue_size=1; device_config.pre_cb = 0; device_config.post_cb = 0; ret = spi_bus_initialize(HSPI_HOST, &bus_config, 1); ESP_ERROR_CHECK(ret); ret = spi_bus_add_device(HSPI_HOST, &device_config, &spi); ESP_ERROR_CHECK(ret); } void FlipdotDriver::init_gpio() { uint64_t mask = 0x00; mask |= (uint64_t) 0x01 << (uint64_t) pins.clr; mask |= (uint64_t) 0x01 << (uint64_t) pins.rclk_sel; mask |= (uint64_t) 0x01 << (uint64_t) pins.rclk_conf; mask |= (uint64_t) 0x01 << (uint64_t) pins.oe_sel; mask |= (uint64_t) 0x01 << (uint64_t) pins.oe_conf; gpio_config_t io_conf; io_conf.intr_type = GPIO_INTR_DISABLE; io_conf.mode = GPIO_MODE_OUTPUT; io_conf.pin_bit_mask = mask; io_conf.pull_down_en = GPIO_PULLDOWN_DISABLE; io_conf.pull_up_en = GPIO_PULLUP_DISABLE; ESP_ERROR_CHECK(gpio_config(&io_conf)); // Disable config and select outputs gpio_set_level(pins.oe_conf, 1); gpio_set_level(pins.oe_sel, 1); // Disable clear flag gpio_set_level(pins.clr, 1); // RCLK lines on HIGH by default gpio_set_level(pins.rclk_sel, 1); gpio_set_level(pins.rclk_conf, 1); // Zero all shift-registers clear_registers(); // ... 
which means no device is selected selected_device = 0; } void FlipdotDriver::flip(PixelCoord& coord, bool show) { flip(coord.x, coord.y, show); } void FlipdotDriver::flip(int x, int y, bool show) { if (x < 0 || x >= total_width) return; if (y < 0 || y >= total_height) return; int device = (x / module_width) + 1; int device_x = x % module_width; if (selected_device != device) { select_device(device); } uint16_t row_mask = encode_row(y); uint8_t buffer[5]; // set if (show) { buffer[0] = (uint8_t) (row_mask >> 8); buffer[1] = (uint8_t) row_mask; buffer[2] = 0x00; buffer[3] = 0x00; buffer[4] = encode_status(device_x, 0); // reset } else { buffer[0] = 0x00; buffer[1] = 0x00; buffer[2] = (uint8_t) (row_mask >> 8); buffer[3] = (uint8_t) row_mask; buffer[4] = encode_status(device_x, 1); } spi_transaction_t tx; tx.flags = 0; tx.cmd = 0; tx.addr = 0; tx.length = 5 * 8; tx.rxlength = 0; tx.rx_buffer = 0; tx.tx_buffer = buffer; gpio_set_level(pins.rclk_conf, 0); ESP_ERROR_CHECK(spi_device_transmit(spi, &tx)); gpio_set_level(pins.rclk_conf, 1); ets_delay_us(1); gpio_set_level(pins.oe_sel, 0); ets_delay_us(show ? timing.set_usecs : timing.reset_usecs); gpio_set_level(pins.oe_sel, 1); } void FlipdotDriver::select_device(int device) { uint8_t mask = 0x01; mask <<= (device - 1); spi_transaction_t tx; tx.flags = 0; tx.cmd = 0; tx.addr = 0; tx.length = 8; tx.rxlength = 0; tx.rx_buffer = 0; tx.tx_buffer = &mask; gpio_set_level(pins.rclk_sel, 0); ESP_ERROR_CHECK(spi_device_transmit(spi, &tx)); gpio_set_level(pins.rclk_sel, 1); selected_device = device; } uint8_t FlipdotDriver::encode_column(int x) { if (x < 0 || x >= module_width) return 0x00; int col = x + 1; if (col <= 7) return col; if (col <= 14) return col + 1; if (col <= 21) return col + 2; return col + 3; } uint8_t FlipdotDriver::encode_status(int x, int dir) { uint8_t result = ~encode_column(x) << 3; if (dir == 0) { result |= BIT1; } return result; } uint16_t FlipdotDriver::encode_row(int y) { if (y < 0 || y >= module_height) { return 0x0000; } return 0x0001 << y; } void FlipdotDriver::clear_registers() { gpio_set_level(pins.clr, 0); gpio_set_level(pins.rclk_conf, 0); gpio_set_level(pins.rclk_sel, 0); ets_delay_us(1); gpio_set_level(pins.rclk_conf, 1); gpio_set_level(pins.rclk_sel, 1); gpio_set_level(pins.clr, 1); } void FlipdotDriver::set_timing(int usecs) { timing.set_usecs = usecs; timing.reset_usecs = usecs; bound_timing(); } void FlipdotDriver::bound_timing() { timing.set_usecs = std::min(timing.set_usecs, 1000); timing.set_usecs = std::max(timing.set_usecs, 0); timing.reset_usecs = std::min(timing.reset_usecs, 1000); timing.reset_usecs = std::max(timing.reset_usecs, 0); }
/******************************************************************************* * Copyright (c) 2009, 2023 Mountainminds GmbH & Co. KG and Contributors * This program and the accompanying materials are made available under * the terms of the Eclipse Public License 2.0 which is available at * http://www.eclipse.org/legal/epl-2.0 * * SPDX-License-Identifier: EPL-2.0 * * Contributors: * Evgeny Mandrikov - initial API and implementation * *******************************************************************************/ package org.jacoco.core.internal.analysis.filter; import java.io.BufferedReader; import java.io.IOException; import java.io.StringReader; import java.util.BitSet; import java.util.regex.Matcher; import java.util.regex.Pattern; import org.objectweb.asm.tree.AbstractInsnNode; import org.objectweb.asm.tree.LineNumberNode; import org.objectweb.asm.tree.MethodNode; /** * Filters out instructions that were inlined by Kotlin compiler. */ public final class KotlinInlineFilter implements IFilter { private int firstGeneratedLineNumber = -1; public void filter(final MethodNode methodNode, final IFilterContext context, final IFilterOutput output) { if (context.getSourceDebugExtension() == null) { return; } if (!KotlinGeneratedFilter.isKotlinClass(context)) { return; } if (firstGeneratedLineNumber == -1) { firstGeneratedLineNumber = getFirstGeneratedLineNumber( context.getSourceFileName(), context.getSourceDebugExtension()); } int line = 0; for (final AbstractInsnNode i : methodNode.instructions) { if (AbstractInsnNode.LINE == i.getType()) { line = ((LineNumberNode) i).line; } if (line >= firstGeneratedLineNumber) { output.ignore(i, i); } } } private static int getFirstGeneratedLineNumber(final String sourceFileName, final String smap) { try { final BufferedReader br = new BufferedReader( new StringReader(smap)); expectLine(br, "SMAP"); // OutputFileName expectLine(br, sourceFileName); // DefaultStratumId expectLine(br, "Kotlin"); // StratumSection expectLine(br, "*S Kotlin"); // FileSection expectLine(br, "*F"); final BitSet sourceFileIds = new BitSet(); String line; while (!"*L".equals(line = br.readLine())) { // AbsoluteFileName br.readLine(); final Matcher m = FILE_INFO_PATTERN.matcher(line); if (!m.matches()) { throw new IllegalStateException( "Unexpected SMAP line: " + line); } final String fileName = m.group(2); if (fileName.equals(sourceFileName)) { sourceFileIds.set(Integer.parseInt(m.group(1))); } } if (sourceFileIds.isEmpty()) { throw new IllegalStateException("Unexpected SMAP FileSection"); } // LineSection int min = Integer.MAX_VALUE; while (true) { line = br.readLine(); if (line.equals("*E") || line.equals("*S KotlinDebug")) { break; } final Matcher m = LINE_INFO_PATTERN.matcher(line); if (!m.matches()) { throw new IllegalStateException( "Unexpected SMAP line: " + line); } final int inputStartLine = Integer.parseInt(m.group(1)); final int lineFileID = Integer .parseInt(m.group(2).substring(1)); final int outputStartLine = Integer.parseInt(m.group(4)); if (sourceFileIds.get(lineFileID) && inputStartLine == outputStartLine) { continue; } min = Math.min(outputStartLine, min); } return min; } catch (final IOException e) { // Must not happen with StringReader throw new AssertionError(e); } } private static void expectLine(final BufferedReader br, final String expected) throws IOException { final String line = br.readLine(); if (!expected.equals(line)) { throw new IllegalStateException("Unexpected SMAP line: " + line); } } private static final Pattern LINE_INFO_PATTERN = 
Pattern.compile("" // + "([0-9]++)" // InputStartLine + "(#[0-9]++)?+" // LineFileID + "(,[0-9]++)?+" // RepeatCount + ":([0-9]++)" // OutputStartLine + "(,[0-9]++)?+" // OutputLineIncrement ); private static final Pattern FILE_INFO_PATTERN = Pattern.compile("" // + "\\+ ([0-9]++)" // FileID + " (.++)" // FileName ); }
Goddard v. Google, Inc. Facts and procedural history While on Google's search results page, Plaintiff clicked on advertisements that led her to allegedly fraudulent websites. She then entered her cell phone number on the allegedly fraudulent sites to download ringtones, an action for which she was unknowingly charged. Plaintiff filed a lawsuit against Google on April 13, 2008, claiming that she was an intended third-party beneficiary of Google's AdWords Content Policy that Google failed to adequately enforce by aiding and abetting the fraud sites. Google asserted that each of Plaintiff's claims was barred by the CDA, which prevents a website from being treated as the "publisher or speaker" of third-party content. The court rejected Plaintiff's "artful" pleading and dismissed her complaint with leave to amend in a decision issued on December 17, 2008. In her amended complaint, Plaintiff alleged that "Google's involvement in creating the allegedly fraudulent advertisements was so pervasive that the company controlled much of the underlying commercial activity engaged in by the third-party advertisers." She further asserted that Google "not only encourages illegal conduct, but collaborates in the development of the illegal content and, effectively, requires its advertiser customers to engage in it." Section 230(c) of the CDA Protection for "good samaritan" blocking and screening of offensive material (1) Treatment of publisher or speaker—No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Decision The court relied on Carafano v. Metrosplash.com, which found that the CDA provides "robust" immunity for internet service providers and websites where courts have adopted "a relatively expansive definition of 'interactive computer service' and a relatively restrictive definition of 'information content provider.'" Therefore, a website operator is not liable as an "information content provider" merely by augmenting the content of online material generally. Rather, the website must contribute "materially ... to its alleged unlawfulness." A website does not so contribute when it merely provides third parties with neutral tools to create web content, even if the website has knowledge that third parties are using such tools to create illegal content. Developer liability Plaintiff alleged that Google's Keyword Tool is not a "neutral tool" because when a potential advertiser enters the word "ringtone" into Google's Keyword Tool, the tool suggests the word "free." According to Plaintiff, this suggestion is "neither innocuous nor neutral" because Google is aware of the "mobile content industry's unauthorized charge problems." The court rejected Plaintiff's argument that the Keyword Tool materially contributes to the alleged illegality and thereby establishes developer liability. The court cited Carafano v. Metrosplash.com, where it ruled that "if a particular tool 'facilitates the expression of information,' it generally will be considered 'neutral' so long as users ultimately determine what content to post, such that the tool merely provides 'a framework that could be utilized for proper or improper purposes.'" Thus, like the menus in Carafano, Google's Keyword Tool is a neutral tool that merely provides options to advertisers. Plaintiff further alleged that Google effectively requires advertisers to engage in illegal conduct. 
The court held that Plaintiff's use of the word "requires" is inconsistent with the allegation that Google "suggests" words to the advertisers. The court referred to the Ninth Circuit ruling in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, where CDA immunity was denied because the website forced subscribers to disclose protected characteristics and discriminatory preferences as "a condition of using its services." The court thereby concluded that Plaintiff's reasoning failed to disclose a "requirement" of any kind or suggest the type of "direct and palpable" involvement that otherwise is required to obviate CDA immunity. Contract claims in light of Barnes v. Yahoo! Plaintiff alleged that she and similarly situated individuals were intended third-party beneficiaries of Google's Advertising Terms, which include a Content Policy requiring that mobile subscription service advertisers display certain information about their products, including whether downloading the products will result in charges to the consumer. In Barnes v. Yahoo!, Inc., the court addressed the immunity of Yahoo! against Barnes' claim that it was either negligent in undertaking to remove, or breached an oral contract to remove, offensive and unauthorized content posted about the plaintiff by her ex-boyfriend on one of Yahoo!'s public profile pages. The court found that claims alleging that a website negligently undertook to remove harmful content are barred by the CDA. However, the court noted that certain promissory conduct by a defendant may remove it from the protections of the CDA even where the alleged promise was to remove or screen third-party content. Barnes implies that when a party engages in conduct giving rise to an independent and enforceable contractual obligation, that party may be liable not as a publisher or speaker of third-party content but instead under contract law. Here, however, there was no allegation that Google ever promised Plaintiff or anyone else that it would enforce its Content Policy. Also, even if Google had promised to enforce its Content Policy, Plaintiff would not be a third-party beneficiary of that promise: Google would be the promisor and the allegedly fraudulent MSSP would be a promisee. Thus, the court also rejected Plaintiff's contract claim. Holding Federal Judge Jeremy Fogel reasoned that "a plaintiff may not establish developer liability merely by alleging that the operator of a website should have known that the availability of certain tools might facilitate the posting of improper content." The court held that Plaintiff's claims treat Google as the publisher or speaker of third-party content, but she failed to substantiate the "labels and conclusions" by which she attempted to escape the scope of the CDA. Further, the court emphasized that the CDA "must be interpreted to protect websites not merely from ultimate liability, but from having to fight costly and protracted legal battles." For these reasons, the court dismissed the Plaintiff's complaint without leave to amend.
<filename>pkg/apis/options/util/util.go package util import ( "encoding/base64" "errors" "io/ioutil" "os" "github.com/oauth2-proxy/oauth2-proxy/v7/pkg/apis/options" ) // GetSecretValue returns the value of the Secret from its source func GetSecretValue(source *options.SecretSource) ([]byte, error) { switch { case len(source.Value) > 0 && source.FromEnv == "" && source.FromFile == "": value := make([]byte, base64.StdEncoding.DecodedLen(len(source.Value))) decoded, err := base64.StdEncoding.Decode(value, source.Value) return value[:decoded], err case len(source.Value) == 0 && source.FromEnv != "" && source.FromFile == "": return []byte(os.Getenv(source.FromEnv)), nil case len(source.Value) == 0 && source.FromEnv == "" && source.FromFile != "": return ioutil.ReadFile(source.FromFile) default: return nil, errors.New("secret source is invalid: exactly one entry required, specify either value, fromEnv or fromFile") } }
Soliton binding and low-lying singlets in frustrated odd-legged S=1/2 spin tubes Motivated by the intriguing properties of the vanadium spin tube Na2V3O7, we show that an effective spin-chirality model similar to that of standard Heisenberg odd-legged S=1/2 spin tubes can be derived for frustrated inter-ring couplings, but with a spin-chirality coupling constant alpha that can be arbitrarily small. Using density matrix renormalization group and analytical arguments, we show that, while spontaneous dimerization is always present, solitons become bound into low-lying singlets as alpha is reduced. Experimental implications for strongly frustrated tubes are discussed. Spin ladders, systems which consist of a finite number of coupled chains, have attracted considerable attention recently. Spin-1/2 ladders with an even number of legs are expected to have a spin gap, while ladders with an odd number of legs behave like spin chains at low energy. Both predictions have been largely confirmed experimentally. Here we have taken the ladders to have open boundary conditions in the rung direction, which is the most natural definition. In comparison, spin ladders with periodic boundary conditions in the rung direction (see Fig. 1), often referred to as spin tubes, have received much less attention, mostly because the prospect for experimental realizations was remote. Nonetheless, it was noticed early on that spin tubes with an odd number of legs are not expected to behave in the same way as their ladder counterparts. The crucial observation is that the ground state of a ring with an odd number of sites is not just two-fold degenerate, as for a rung in an odd-leg ladder, but is four-fold degenerate: In addition to the Kramers degeneracy of the spin-1/2 ground state, there is a degeneracy due to the two possible signs of the ground-state momentum, leading to an extra degree of freedom on top of the total spin, often called the chirality by extension of the case of a triangle. As a consequence, a standard L-leg spin tube (Fig. 1(a)), defined by a Heisenberg Hamiltonian in which $\vec S_{r,l}$ is a spin-1/2 operator on ring $r$ and leg $l$, can be described in the strong-ring limit ($J' \ll J$) by an effective model, valid to first order in the inter-ring coupling $J'$ (both Hamiltonians are sketched after this excerpt). Here $\vec S_r$ are the usual spin-1/2 operators which describe the total spin of ring $r$, while $\tau_r$ are pseudo-spin-1/2 operators acting on the chirality. The parameters of the model are an overall coupling constant $K = J'/L$ and a parameter $\alpha$ that measures the strength of the coupling between spin and chirality. For ordinary spin tubes, this coupling is always strong: $\alpha$ is equal to 4 for three-leg spin tubes and increases with the number of legs. Using bosonization arguments, Schulz predicted that the ground state should be spontaneously dimerized and the spectrum gapped in all sectors, a prediction supported by further numerical and analytical work. In addition, the excitations have been argued to be unbound solitons, and the gap is always a significant fraction of J. All of this remarkable physics still awaits an experimental realization. In this context, the recent synthesis of Na2V3O7, whose structure may be regarded as a spin-1/2 nine-leg spin tube, has opened up new perspectives. However, the properties reported so far do not match the properties predicted for standard odd-legged spin tubes. In particular, no spin gap could be detected in zero external field.
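Both Hamiltonians referred to above were lost in extraction. Hedged reconstructions consistent with the definitions just given (the effective form follows the standard spin-chirality model of the spin-tube literature; sign and normalization conventions may differ from the paper's):
$$H = J \sum_{r} \sum_{l=1}^{L} \vec S_{r,l} \cdot \vec S_{r,l+1} + J' \sum_{r} \sum_{l=1}^{L} \vec S_{r,l} \cdot \vec S_{r+1,l}, \qquad \vec S_{r,L+1} \equiv \vec S_{r,1},$$
$$H_{\rm eff} = K \sum_r \vec S_r \cdot \vec S_{r+1} \left[ 1 + \alpha \left( \tau_r^+ \tau_{r+1}^- + \tau_r^- \tau_{r+1}^+ \right) \right], \qquad K = J'/L.$$
The coefficient $\alpha = 4$ for three legs quoted in the text fixes the normalization of the chirality term in this convention.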
This might not be too surprising, however: although the overall topology of Na2V3O7 is indeed that of a nine-leg spin tube, the actual geometry is quite different from that of Fig. 1(a). Although ab-initio calculations have not yet reached a consensus, it is likely that the inter-ring coupling exhibits some kind of frustration. Since the tubes in Na2V3O7 only have a $C_3$ axis, a frustrated model of the type of Fig. 1(b) might be more appropriate. In this Letter, we use extensive Density Matrix Renormalization Group (DMRG) simulations supported by several analytical arguments to show that inter-ring frustration can have dramatic consequences for the properties of odd-legged spin tubes. In particular, we show that it can reduce the spin gap and bind solitons into low-lying singlets. This picture might resolve some of the puzzles of Na2V3O7. Our starting point is to notice that, as long as the inter-ring coupling does not break the rotational symmetry of the tube, the effective Hamiltonian is still given by the same expression, but with a parameter $\alpha$ that can take on arbitrarily small values if frustration is allowed. For instance, for the three-leg spin tube of Fig. 1, this leads to $\alpha = 2$ if each site is coupled to two neighbors (i.e., J = 0), and to $\alpha = 0$ if each site is coupled to all sites of neighboring rings (J = J). Therefore, we concentrate on the model with $K = 1$ and consider all values of $\alpha \ge 0$ in the following. In a previous DMRG study of the effective model, Kawano and Takahashi reported the finite-size scaling of the spin gap for the triangular tube with $\alpha = 4$. Using White's DMRG algorithm as well, we extend their numerical analysis to the range $0 \le \alpha \le 20$. For this purpose, we classify the lowest-lying excitations according to their quantum numbers and study three symmetry sectors for open chains with up to N = 200 sites. From an analysis of the truncation dependence of the gaps, we find that convergence is reached by keeping 250 states and thus perform the calculations up to that limit within six finite-system sweeps. The sum of the discarded density-matrix eigenvalues is smaller than $10^{-5}$ in all cases. Above $\alpha \simeq 1.4$, a range that includes all non-frustrated spin tubes, we found that a gap is indeed present, in agreement with Kawano and Takahashi's analysis of the case $\alpha = 4$, but interestingly enough, we find that the first excitation appears in all sectors. In other words, spin and chirality gaps are equal above $\alpha \simeq 1.4$. The situation changes dramatically upon reducing $\alpha$. Below $\alpha \simeq 1.4$, the first excitation is no longer degenerate but appears only in one of the sectors. The first excitation is thus a chirality excitation, and the spin gap $\Delta_S$ and the chirality gap $\Delta_\tau$ are no longer equal. To examine this point further, we have performed a systematic analysis, including careful finite-size scaling. The first excitation is always in this sector and is non-degenerate. Below $\alpha \simeq 0.5$, the second excitation is also non-degenerate and appears in a second sector. In this parameter range, the first excitation that has a nonzero spin quantum number is the third excited state. This excitation manifests itself in all sectors. By following this excitation as $\alpha$ is increased, one can determine that it becomes the second excitation at $\alpha \simeq 0.5$, and then the first excitation at $\alpha \simeq 1.4$. The gaps corresponding to these excitations are plotted in Fig. 2. In extracting the gaps, some care had to be taken regarding finite-size effects. The results were fitted with polynomials in 1/N, where N is the length of the tube.
Good fits could be obtained with third-order polynomials. The extrapolated values lie between those obtained with quadratic and quartic polynomials. The differences between these fits were used to define the error bars shown in the inset of Fig. 2. That these gaps correspond to very different excitations is confirmed by their $\alpha$-dependences. As shown in the inset, the results for $\alpha \le 0.1$ can be fitted with power laws of the form $\Delta \propto \alpha^b$ with exponents $b = 1.54 \pm 0.06$, $1.36 \pm 0.07$, and $1.11 \pm 0.12$ for the two singlet excitations and for the first spin excitation, respectively. These exponents are consistent with the simple fractions $b = 3/2$, $4/3$, and $1$. The large error bar and the value significantly larger than $b = 1$ for the spin excitation are very probably a finite-size effect. Since the spin gap follows a linear finite-size scaling for spin tubes with up to 200 rings when $\alpha$ is very small, the extrapolations underestimate the gap, resulting in an exponent larger than the actual one. The nature of the excitations can be further explored by examining the nearest-neighbor expectation values of the spin and pseudo-spin interactions, $\langle \vec S_i \cdot \vec S_{i+1} \rangle$ and $\langle \tau_i^+ \tau_{i+1}^- + \mathrm{h.c.} \rangle$. In Fig. 3, these quantities are shown for the lowest-lying states in the important sectors. The spin and chirality degrees of freedom alternate synchronously. As expected, the ground state, shown in Fig. 3(a), is uniformly dimerized in both the spin and the chirality channels. Excitations in a dimerized spin chain can be described in terms of solitons, which can be viewed as domain walls between two dimer coverings. In a chain with an even number of sites, solitons always appear in pairs, which can either be bound or unbound. The bound states can also be interpreted as excited bonds, i.e., as spin-triplet states in a background of dimers, and would therefore leave the dimer pattern unchanged. In an open chain, a bound-soliton state is located with maximum probability at the center of the system, whereas its unbound counterpart tries to maximize both the distance between the solitons and the distance to the ends of the chain. The nearest-neighbor expectation values of excited states in these two sectors for $\alpha = 0.1$ are shown in Figs. 3(b) and (c). The unaltered dimer patterns and the constrictions at the center of the chain indicate the presence of bound-soliton states. The very pronounced constriction in Fig. 3(b) suggests that the excitation in the ground-state sector is close to unbinding, which we find to occur in the parameter range $0.5 < \alpha < 0.6$. For larger values of $\alpha$, all expectation values have a structure similar to that shown in Fig. 3(d), representing the lowest-lying spin excitation. Here one can clearly identify two domain walls at around 1/3 and 2/3 of the chain length, suggesting the presence of two unbound solitons. We have checked that these features are independent of the chain length by studying systems of up to 200 sites. From a complete analysis of the expectation values throughout the whole range of $\alpha$, we conclude that lowest-lying bound states only exist in these two sectors, for which the unbinding can be observed at $0.5 < \alpha < 0.6$ and $1.3 < \alpha < 1.4$, respectively. The lowest-lying pure spin and combined spin-chirality excitations are always unbound and are therefore degenerate in the thermodynamic limit.
Interestingly, the same hierarchy of states for the bound and unbound soliton excitations as a function of $\alpha$ can be obtained analytically by the variational approach of Wang for the model that includes a next-nearest-neighbor coupling along the legs with relative strength 0.5. Since it was shown by Kawano and Takahashi that Wang's model remains in the same phase as the additional interaction is turned off for $\alpha = 4$, we conjecture that the two models are in the same phase over the whole range of $\alpha$. In the same spirit, we note that the effective Hamiltonian can also be seen as a special case of the recently investigated spin-orbital models, with great similarities in the roles of chirality and orbital degrees of freedom. Next, we show that the scaling of the chirality gap $\Delta_\tau \propto \alpha^{3/2}$ can be recovered by a simple mean-field decoupling of the interaction terms $\vec S_r \cdot \vec S_{r+1}\, \tau_r^\pm \tau_{r+1}^\mp$. Since, according to our numerical results, the ground state is spontaneously dimerized, we look for a dimerized solution by starting with alternating expectation values $\langle \vec S_r \cdot \vec S_{r+1} \rangle = C_S - (-1)^r \delta_S$ and $\langle \tau_r^+ \tau_{r+1}^- + \tau_r^- \tau_{r+1}^+ \rangle = C_\tau - (-1)^r \delta_\tau$, where $\delta_{S/\tau}$ is the alternation parameter in the corresponding channel. The mean-field Hamiltonian then describes Heisenberg and XY chains with alternating bond strengths. Now, for Heisenberg and XY chains with alternating exchange $J(1 \pm \delta)$, the scalings of the gap and of the alternation parameters as a function of $\delta$ are well known. Up to logarithmic corrections, they are given by: $\Delta_S \propto J\,\delta^{2/3}$, $\delta_S \propto \delta^{1/3}$, $\Delta_\tau \propto J\,\delta$, and $\delta_\tau \propto \delta$. The mean-field decoupling then leads to a $J$ of order one and $\delta \propto \alpha\,\delta_\tau$ for the spin part, and to $J \propto \alpha$ and $\delta \propto \delta_S$ for the chirality. Self-consistency then requires that $\delta_S, \delta_\tau \propto \alpha^{1/2}$ and $\Delta_\tau \propto \alpha^{3/2}$ (the chain of substitutions is spelled out at the end of this excerpt). This last scaling is in very good agreement with our DMRG result for the chirality gap, for which no logarithmic correction could be extracted within our numerical accuracy. As a further check, we have also extracted $\delta_S$ and $\delta_\tau$ as functions of $\alpha$. They are not strictly proportional, but we believe this is due to logarithmic corrections. Indeed, the ground-state energy for small $\alpha$ reads … , where A, B, C, and D are constants independent of $\alpha$. The alternation parameters minimizing the ground-state energy obey the relation $\delta_\tau \propto \delta_S (\log \delta_S + \mathrm{const.})$, leading to a very good fit of our numerical results (not shown). In order to illustrate the influence of the chirality degrees of freedom, we have calculated the specific-heat density c as a function of temperature using exact diagonalization of small systems. We have used weighted means between systems with an even and odd number of sites to extrapolate the curves to the bulk limit. Extrapolations for systems with up to 8 sites and two particular values of $\alpha$ are shown in Fig. 4. The double-peak structure for small $\alpha$ is due to the well-separated low-lying chirality excitations. The distinct low-temperature peak progressively disappears as the chirality excitations get closer to the spin excitations and is completely absent at $\alpha \approx 1$. We also note that these low-lying chirality excitations, which are singlets, should be detectable in Raman spectroscopy. Both predictions are expected to apply to Na2V3O7 if the absence of any detectable zero-field spin gap in that compound is indeed a consequence of frustration. In conclusion, we have shown that the low-energy properties of odd-legged spin tubes change dramatically if the coupling between spin and chirality degrees of freedom is reduced, as would be the case for frustrated spin tubes.
In conclusion, we have shown that the low-energy properties of odd-legged spin tubes change dramatically if the coupling between the spin and chirality degrees of freedom is reduced, as would be the case for frustrated spin tubes. While our results confirm that excitations in non-frustrated or weakly frustrated spin tubes are unbound solitons, they show that solitons bind into singlet bound states if the coupling between spin and chirality is sufficiently reduced by frustration. As a consequence, the spin and chirality gaps are no longer equal, and the low-lying chirality excitations, which lie below the spin gap, are expected to lead to a number of interesting consequences, such as an additional low-temperature peak in the specific heat. We hope that these conclusions will encourage further experimental investigations of Na 2 V 3 O 7 and of other odd-legged spin tubes.
/**
 * Updates the BlockState of the TankBlock belonging to this TileEntity.
 */
public void updateBlockState()
{
	BlockState state = world.getBlockState(pos);

	// Part of a multiblock tank: mirror the actual connectivity of each face.
	// Standalone block: clear every connection flag.
	BlockState newState = (isPartOfTank)
			? state
				.with(TankBlock.DOWN, isConnected(Direction.DOWN))
				.with(TankBlock.UP, isConnected(Direction.UP))
				.with(TankBlock.NORTH, isConnected(Direction.NORTH))
				.with(TankBlock.SOUTH, isConnected(Direction.SOUTH))
				.with(TankBlock.WEST, isConnected(Direction.WEST))
				.with(TankBlock.EAST, isConnected(Direction.EAST))
				.with(TankBlock.CONNECTED, true)
			: state
				.with(TankBlock.DOWN, false)
				.with(TankBlock.UP, false)
				.with(TankBlock.NORTH, false)
				.with(TankBlock.SOUTH, false)
				.with(TankBlock.WEST, false)
				.with(TankBlock.EAST, false)
				.with(TankBlock.CONNECTED, false);

	// Flag 3 = notify neighboring blocks (1) and sync the change to clients (2).
	world.setBlockState(pos, newState, 3);
}
Perceived coercion among jail diversion participants in a multisite study. OBJECTIVE Although jail diversion is considered an appropriate and humane response to the disproportionately high volume of people with mental illness who are incarcerated, little is known regarding the perceptions of jail diversion participants, the extent to which they feel coerced into participating, and whether perceived coercion reduces involvement in mental health services. This study addressed perceived coercion among participants in postbooking jail diversion programs in a multisite study and examined characteristics associated with the perception of coercion. METHODS Data collected in interviews with 905 jail diversion participants from 2003 to 2005 were analyzed with random-effects proportional odds models. RESULTS Ten percent of participants reported a high level of coercion, and another 26% reported a moderate level of coercion. Having a drug charge was associated with lower perceived coercion to enter the program. In addition, an interaction between sexual abuse and substance abuse indicated that recent sexual abuse was associated with higher levels of perceived coercion, but only among those without current substance abuse. At the 12-month follow-up (N=398), variables associated with higher perceived coercion to receive behavioral health services included spending more time in jail and higher perceived coercion at baseline. The amount of behavioral health service use was not predicted by perceived coercion at baseline. Rather, being older, having greater symptom severity, and having a history of sexual abuse but no substance abuse and no history of physical abuse were associated with higher levels of outpatient service use. CONCLUSIONS Overall, one-third of jail diversion participants reported some level of perceived coercion. Important determinants of perceived coercion included charge type, length of time in jail, and sexual abuse history. Engagement in treatment was not affected by perceived coercion.
/* projects/util_libs/libplatsupport/src/plat/fvp/sp804.c */
/*
 * Copyright 2019, Data61
 * Commonwealth Scientific and Industrial Research Organisation (CSIRO)
 * ABN 41 687 119 230.
 *
 * This software may be distributed and modified according to the terms of
 * the BSD 2-Clause license. Note that NO WARRANTY is provided.
 * See "LICENSE_BSD2.txt" for details.
 *
 * @TAG(DATA61_BSD)
 */
#include <stdio.h>
#include <assert.h>
#include <errno.h>
#include <utils/util.h>
#include <utils/time.h>
#include <platsupport/plat/sp804.h>

#include "../../ltimer.h"

/* This file is mostly the same as the dmt.c file for the hikey.
 * Consider merging the two files into a single driver file for
 * the SP804. */

#define TCLR_ONESHOT    BIT(0)
#define TCLR_VALUE_32   BIT(1)
#define TCLR_INTENABLE  BIT(5)
#define TCLR_AUTORELOAD BIT(6)
#define TCLR_STARTTIMER BIT(7)

/* It looks like the FVP does not emulate time accurately. Thus, pick
 * a small Hz that triggers interrupts in a reasonable time */
#define TICKS_PER_SECOND 35000
#define TICKS_PER_MS    (TICKS_PER_SECOND / MS_IN_S)

static void sp804_timer_reset(sp804_t *sp804)
{
    assert(sp804 != NULL && sp804->sp804_map != NULL);
    sp804_regs_t *sp804_regs = sp804->sp804_map;
    sp804_regs->control = 0;

    sp804->time_h = 0;
}

int sp804_stop(sp804_t *sp804)
{
    if (sp804 == NULL) {
        return EINVAL;
    }

    assert(sp804->sp804_map != NULL);
    sp804_regs_t *sp804_regs = sp804->sp804_map;
    sp804_regs->control = sp804_regs->control & ~TCLR_STARTTIMER;
    return 0;
}

int sp804_start(sp804_t *sp804)
{
    if (sp804 == NULL) {
        return EINVAL;
    }

    assert(sp804->sp804_map != NULL);
    sp804_regs_t *sp804_regs = sp804->sp804_map;
    sp804_regs->control = sp804_regs->control | TCLR_STARTTIMER;
    return 0;
}

uint64_t sp804_ticks_to_ns(uint64_t ticks)
{
    /* Divide first to avoid overflow; this trades away sub-millisecond precision. */
    return ticks / TICKS_PER_MS * NS_IN_MS;
}

bool sp804_is_irq_pending(sp804_t *sp804)
{
    if (sp804) {
        assert(sp804->sp804_map != NULL);
        return !!sp804->sp804_map->ris;
    }
    return false;
}

int sp804_set_timeout(sp804_t *sp804, uint64_t ns, bool periodic, bool irqs)
{
    uint64_t ticks64 = ns * TICKS_PER_MS / NS_IN_MS;
    if (ticks64 > UINT32_MAX) {
        return ETIME;
    }
    return sp804_set_timeout_ticks(sp804, ticks64, periodic, irqs);
}

int sp804_set_timeout_ticks(sp804_t *sp804, uint32_t ticks, bool periodic, bool irqs)
{
    if (sp804 == NULL) {
        return EINVAL;
    }

    int flags = periodic ? TCLR_AUTORELOAD : TCLR_ONESHOT;
    flags |= irqs ? TCLR_INTENABLE : 0;

    assert(sp804->sp804_map != NULL);
    sp804_regs_t *sp804_regs = sp804->sp804_map;
    sp804_regs->control = 0;

    if (flags & TCLR_AUTORELOAD) {
        sp804_regs->bgload = ticks;
    } else {
        sp804_regs->bgload = 0;
    }
    sp804_regs->load = ticks;

    /* The TIMERN_VALUE register is read-only.
     */
    sp804_regs->control = TCLR_STARTTIMER | TCLR_VALUE_32 | flags;
    return 0;
}

static void sp804_handle_irq(void *data, ps_irq_acknowledge_fn_t acknowledge_fn, void *ack_data)
{
    assert(data != NULL);
    sp804_t *sp804 = data;
    if (sp804->user_cb_event == LTIMER_OVERFLOW_EVENT) {
        sp804->time_h++;
    }

    sp804_regs_t *sp804_regs = sp804->sp804_map;
    sp804_regs->intclr = 0x1;
    ZF_LOGF_IF(acknowledge_fn(ack_data), "Failed to acknowledge the timer's interrupts");
    if (sp804->user_cb_fn) {
        sp804->user_cb_fn(sp804->user_cb_token, sp804->user_cb_event);
    }
}

uint64_t sp804_get_ticks(sp804_t *sp804)
{
    assert(sp804 != NULL && sp804->sp804_map != NULL);
    sp804_regs_t *sp804_regs = sp804->sp804_map;
    return sp804_regs->value;
}

uint64_t sp804_get_time(sp804_t *sp804)
{
    uint32_t high, low;

    /* timer must be being used for timekeeping */
    assert(sp804->user_cb_event == LTIMER_OVERFLOW_EVENT);

    /* sp804 is a down counter, invert the result */
    high = sp804->time_h;
    low = UINT32_MAX - sp804_get_ticks(sp804);

    /* check after fetching low to see if we've missed a high bit */
    if (sp804_is_irq_pending(sp804)) {
        high += 1;
        assert(high != 0);
    }

    uint64_t ticks = (((uint64_t) high << 32llu) | low);
    return sp804_ticks_to_ns(ticks);
}

void sp804_destroy(sp804_t *sp804)
{
    int error;
    if (sp804->irq_id != PS_INVALID_IRQ_ID) {
        error = ps_irq_unregister(&sp804->ops.irq_ops, sp804->irq_id);
        ZF_LOGF_IF(error, "Failed to unregister IRQ");
    }
    if (sp804->sp804_map != NULL) {
        sp804_stop(sp804);
        ps_pmem_unmap(&sp804->ops, sp804->pmem, (void *) sp804->sp804_map);
    }
}

int sp804_init(sp804_t *sp804, ps_io_ops_t ops, sp804_config_t config)
{
    int error;

    if (sp804 == NULL) {
        ZF_LOGE("sp804 cannot be null");
        return EINVAL;
    }

    sp804->ops = ops;
    sp804->user_cb_fn = config.user_cb_fn;
    sp804->user_cb_token = config.user_cb_token;
    sp804->user_cb_event = config.user_cb_event;

    error = helper_fdt_alloc_simple(
                &ops, config.fdt_path,
                SP804_REG_CHOICE, SP804_IRQ_CHOICE,
                (void *) &sp804->sp804_map, &sp804->pmem, &sp804->irq_id,
                sp804_handle_irq, sp804
            );
    if (error) {
        ZF_LOGE("Simple fdt alloc helper failed");
        return error;
    }

    sp804_timer_reset(sp804);
    return 0;
}
KITCHENER — Party on, Laurier. You won't hear any complaints from the university president. "You know, we have a bit of a reputation as a party school," Wilfrid Laurier University president Max Blouw told The Record's editorial board this week. "Well, some people view that negatively. I actually view it very positively because it is truly an opportunity to engage, to learn who you are, learn what excites you, what scares you. "And at Laurier, students really engage with one another. They really engage with student clubs, with volunteers and in the community — the purple and gold, the spirit — it's absolutely fabulous. I believe that's enormously valuable." Blouw said social gatherings breed learning. "We often take students just when they're leaving their parents and before they have mortgages and kids and a spouse and responsibilities. It gives a period of time for real human development to take place. "I believe it's an incredibly formative time in anybody's life. When I talk to our alumni, very often they will say to me, 'It's the most important thing that ever happened to me … to leave home, go to the university, find out who I am and then hit the world running.' I believe that Laurier encourages engagement of one student with another — social engagement."
package edu.itu.csc.quakenweather.models; import android.graphics.Bitmap; /** * Created by <NAME> on 3/4/2018. */ public class Weather { private String city; private int date; private double morningTemperature; private double dayTemperature; private double eveningTemperature; private double nightTemperature; private double pressure; private int humidity; private double windSpeed; private String weather; private Bitmap icon; public Weather(String city, int date, double morningTemperature, double dayTemperature, double eveningTemperature, double nightTemperature, double pressure, int humidity, double windSpeed, String weather, Bitmap icon) { this.city = city; this.date = date; this.morningTemperature = morningTemperature; this.dayTemperature = dayTemperature; this.eveningTemperature = eveningTemperature; this.nightTemperature = nightTemperature; this.pressure = pressure; this.humidity = humidity; this.windSpeed = windSpeed; this.weather = weather; this.icon = icon; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public int getDate() { return date; } public void setDate(int date) { this.date = date; } public double getMorningTemperature() { return morningTemperature; } public void setMorningTemperature(double morningTemperature) { this.morningTemperature = morningTemperature; } public double getDayTemperature() { return dayTemperature; } public void setDayTemperature(double dayTemperature) { this.dayTemperature = dayTemperature; } public double getEveningTemperature() { return eveningTemperature; } public void setEveningTemperature(double eveningTemperature) { this.eveningTemperature = eveningTemperature; } public double getNightTemperature() { return nightTemperature; } public void setNightTemperature(double nightTemperature) { this.nightTemperature = nightTemperature; } public double getPressure() { return pressure; } public void setPressure(double pressure) { this.pressure = pressure; } public int getHumidity() { return humidity; } public void setHumidity(int humidity) { this.humidity = humidity; } public double getWindSpeed() { return windSpeed; } public void setWindSpeed(double windSpeed) { this.windSpeed = windSpeed; } public String getWeather() { return weather; } public void setWeather(String weather) { this.weather = weather; } public Bitmap getIcon() { return icon; } public void setIcon(Bitmap icon) { this.icon = icon; } @Override public String toString() { return "Weather{" + "city='" + city + '\'' + ", date=" + date + ", morningTemperature=" + morningTemperature + ", dayTemperature=" + dayTemperature + ", eveningTemperature=" + eveningTemperature + ", nightTemperature=" + nightTemperature + ", pressure=" + pressure + ", humidity=" + humidity + ", windSpeed=" + windSpeed + ", weather='" + weather + '\'' + ", icon=" + icon + '}'; } }
1. Field Of The Invention
This invention relates to a new apparatus used as a bubble forming and stabilizing device in a continuous extrusion process for making a blown film, and a process for using same. Blown films may be made by any one of several processes, one of which is commonly referred to as a blown continuous extrusion process. The invention discloses an internal air deflector and a bubble forming and stabilizing mandrel for use in internally cooling and stabilizing a bubble of blown film during the extrusion process. The device allows for increased production rates, improved stabilization and improved physical properties of the forming bubble by effectively forming the bubble over an internal mandrel, enabling a high-velocity cooling air stream to be directed between the under and outer surfaces of the mandrel and the inner surface of the forming bubble, usually formed of a polymer. The invention enables more efficient heat transfer from the extrusion polymer to the cooling air stream, causing the molten polymer to drop in temperature more quickly in the blowing process, which in turn improves the stability of the process and further allows even higher internal and external air velocities to be introduced, allowing for increased productivity and improved product quality due to improved stability. The device also provides support for the molten polymer during its most unstable state.
2. Description Of The Prior Art
The device of the present invention is particularly suitable for use in a continuous process for the production of blown film. In many cases, the blown film will be formed from a polymer resin, although other materials may be used to produce a blown bubble. For ease of reference, and not for purposes of limitation, the following description will be made with reference to a bubble formed from a polymer. In a typical process, a hot polymer melt is fed to a die from which it is extruded in the form of a tube, which is nipped at a desired point after cooling to form a bubble. The extruded polymer is generally expanded by using internal air pressure to blow the polymer into a bubble, and the bubble should be of uniform and constant thickness subsequent to the frost line. However, the tube which emerges from the die is itself generally unstable due to low melt strength until its temperature is reduced sufficiently to improve the melt strength and eventually solidify the polymer, that is, at its frost line. To increase the rate at which the molten bubble reaches the point of solidifying at the frost line, the temperature of the forming bubble is reduced as quickly as possible while still maintaining the desired stability. This may be done in several ways. One of several known methods is to use an external air ring which directs cooling air onto the outer surface of the forming bubble as it emerges from the die. Additional cooling can also be achieved by cooling the inside of the bubble, such as is disclosed in U.S. Pat. No. 4,236,884 granted on Dec. 2, 1980 to Gloucester Engineering Co., Inc. The amount of cooling is generally limited by the temperature of the cooling air, the melt strength of the extrusion polymer, the blow-up ratio of the bubble size to the die size, and the volume and velocity of cooling air that can be introduced to the inner and outer surfaces of the forming bubble without destroying the stability of the forming bubble. These limitations directly affect the production line speed and the product quality through the extrusion process.
Various devices have been proposed which attempt to reduce the temperature of the air within the forming bubble to improve the extrusion rate, which in turn reduces production costs. Cooling of the forming bubble can be achieved by cooling from the inside of the forming bubble, by outside cooling of the bubble, or by both. An example of exterior cooling is shown in U.S. Pat. No. 4,259,947 granted to Robert J. Cole, the inventor herein. In this patent, there is disclosed a dual-lip air ring wherein the exterior air is blown radially outwardly, away from the forming bubble emerging from the die. The resulting venturi effect and low-pressure zone cause the forming bubble to draw away from the medial line as it emerges from the die and allow a non-impinging, relatively high-velocity air stream to be introduced to the exterior wall of the forming bubble, cooling it faster than direct impingement cooling. By cooling the forming bubble faster while maintaining the stability of the bubble, it is possible to increase the rate of extrusion of the bubble and maintain good quality, thus reducing production time and costs. Additional cooling can also be achieved from the inside of the bubble. As shown in U.S. Pat. No. 4,236,884, there is proposed a device which exchanges the hot interior air within the forming bubble with cooler air via ports located within the die mandrel itself. Air is supplied to a series of internal nozzles which blow the air radially outwardly at the internal surface of the forming bubble. These and other processes of the prior art have clear limitations due to the effect of the impingement of the air and the low melt strength of the polymer during the blowing process. Further, as the forming bubble itself is increased in size in relation to the die size, the radial distance between the internal air nozzles and the wall of the forming bubble will also increase, which has the undesired effect of reducing the efficiency of the cooling process.
Comparative phylogenomic analyses of teleost fish Hox gene clusters: lessons from the cichlid fish Astatotilapia burtoni: comment
A reanalysis of the sequences reported by Hoegg et al. has highlighted the presence of a putative HoxC1a gene in Astatotilapia burtoni. We discuss the evolutionary history of the HoxC1a gene in the teleost fish lineages and suggest that the HoxC1a gene was lost twice independently in the Neoteleosts. This comment points out that combining several gene-finding methods and a Hox-dedicated program can improve the identification of Hox genes.
Background
The identification of individual Hox genes is an essential basis for their study in evolutionary research fields. It is even more important in teleost fish, where the unravelling of the zebrafish and pufferfish Hox clusters has contributed to the establishment of the fish-specific genome duplication hypothesis. In a recent study, Hoegg et al. present the Hox gene content of the cichlid fish Astatotilapia burtoni, characterised from the complete sequence of the seven Hox clusters. The availability of these sequences is extremely valuable in helping the community better understand the evolutionary plasticity of Hox genes and their regulatory elements in teleosts. A total of 46 Hox coding sequences were identified, using common gene detection methods relying on sequence similarity. Identification of Hox multigenic family members is hampered by the high conservation of the homeodomain. We believe that a set of complementary approaches is thus required to correctly annotate all members of the Hox family. Here, we complement the analysis of the Hox gene content of Astatotilapia burtoni reported by Hoegg et al., using a combination of sequence similarity methods, de-novo gene predictions, and a program we have developed that specifically classifies Hox proteins into their homology groups. In addition, we collect a comprehensive set of HoxC1a sequences that allows us to re-investigate the presence of HoxC1a pseudogenes in various teleost species. In light of our findings, we discuss a revised version of the HoxC1a gene loss events in teleost lineages.
Results and Discussion
HoxC1a gene detection
Astatotilapia burtoni Hox cluster genomic sequences were collected from GenBank and submitted to the de-novo gene prediction program GENSCAN. A total of 102 putative coding sequences were localised on the genomic sequences. We applied HoxPred to each putative peptide, and 37 sequences were predicted as Hox proteins. We also applied HoxPred to the protein sequences detected by Hoegg et al., and the resulting classification into homology groups is concordant with their results in both cases. We observed that GENSCAN predictions sometimes encompass two or three genes in a single predicted peptide. This method alone thus detects fewer Hox genes than reported by Hoegg et al. With this method, we located a paralogous group (PG) 1 prediction on the HoxCa cluster. A detailed analysis of the predicted gene shows that GENSCAN, and other de-novo gene prediction programs, fail to correctly predict the C-terminal portion of its homeodomain. Alignment of the zebrafish HoxC1a peptide to the HoxCa cluster with GeneWise (global mode) supports the prediction of the first exon and completes the homeodomain sequence. The resulting putative peptide (Additional file 1) comprises two exons and one intron, its length is 295 residues, and its genomic position downstream of HoxC3a (Additional file 2) strongly suggests a potential HoxC1a. Expression data would be needed to support this prediction.
HoxPred has previously been applied to the teleost fish Gasterosteus aculeatus proteome to characterise its Hox gene content. We have shown that this fish comprises a putative HoxC1a gene partially supported by EST evidence. We performed pairwise alignments between full-length HoxC1a protein sequences detected in teleosts. The Neoteleost A. burtoni and G. aculeatus HoxC1a are very similar, with 66% identity. On the contrary, A. burtoni HoxC1a shares only 30% identity with the Ostariophysii zebrafish HoxC1a, mostly within the homeodomain.
Phylogenetic analyses in paralogous group 1
Phylogenetic reconstructions of PG1 homeodomains from teleost species were conducted as described previously, with the addition of A. burtoni and Fundulus heteroclitus sequences. The HoxC1a sequence recently reported in the Ostariophysii Megalobrama amblycephala was not included, as the PCR fragment does not comprise the homeodomain. Phylogenetic analyses confirm that Astatotilapia burtoni comprises a putative HoxC1a gene (Figure 1). In the frame of novel HoxC1a sequences, these phylogenetic tree reconstructions refine the analysis of the F. heteroclitus PG1 PCR fragments and provide additional evidence to confirm the classification previously proposed.
Fishing HoxC1a pseudogenes out of teleost genomes
The Neoteleost HoxC1a genes we have identified provide a more comprehensive set of sequences that can be used to investigate HoxC1a pseudogenes in teleost sequences. We have analysed the region downstream of HoxC3a of the medaka Oryzias latipes in search of a putative HoxC1a gene; the EnsEMBL GENSCAN prediction that spans this region retains only remnants of a HoxC1a pseudogene.
Figure 1. Phylogenetic tree of Paralogous group 1 in selected vertebrates. Phylogenetic tree reconstructions were conducted with homeodomain sequences. The represented tree is obtained by bayesian inference (BI) using MrBayes. Rooting is arbitrary. The first numbers above the internal branches are posterior probabilities obtained by BI. The second numbers correspond to bootstrap values produced by the program PHYML of maximum-likelihood (ML) tree reconstruction. Only statistical support values > 50 for at least one of the methods used (ML or BI) are shown. Marginal probabilities at each internal branch were taken as a measure of statistical support. All the alignments and the trees are available upon request. Abbreviations: LATME: Latimeria menadoensis, BRARE: Danio rerio, ASTBU: Astatotilapia burtoni, GASAC: Gasterosteus aculeatus, fox: Fundulus heteroclitus, ORYLA: Oryzias latipes.
We performed similar analyses on the genomic sequences of pufferfishes (Takifugu rubripes and Tetraodon nigroviridis) and observed imprints of HoxC1a pseudogenes in both cases. For T. rubripes, this finding is in agreement with the HoxC1a pseudogene described previously. For T. nigroviridis, no HoxC1a pseudogene has been reported yet, and a previous attempt to identify a functional HoxC1a gene was unsuccessful. We have constructed mVista plots to highlight conserved non-coding sequences downstream of HoxC3a with a comparative genomic approach (Additional file 3). As previously noted by Hoegg et al., we observe a high similarity between G. aculeatus and A. burtoni. This plot also shows a high similarity between the Neoteleost pseudogenes and putative genes, whereas the zebrafish HoxC1a sequence is clearly less similar.
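For readers wanting to reproduce the kind of pairwise-identity figures quoted above, here is a minimal sketch using Biopython; the FASTA file names are hypothetical, and the identity convention (matches over the longer sequence length) is one of several reasonable choices, not necessarily the one used in this comment.

from Bio import SeqIO, pairwise2

# Hypothetical input files holding two full-length HoxC1a peptides
seq_a = str(SeqIO.read("astbu_hoxc1a.fasta", "fasta").seq)
seq_b = str(SeqIO.read("gasac_hoxc1a.fasta", "fasta").seq)

# Global alignment with simple match scoring and no gap penalties
aln = pairwise2.align.globalxx(seq_a, seq_b, one_alignment_only=True)[0]

matches = sum(a == b for a, b in zip(aln.seqA, aln.seqB) if a != "-")
identity = 100.0 * matches / max(len(seq_a), len(seq_b))
print(f"percent identity: {identity:.1f}%")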
HoxC1a gene loss in the teleost Hox clusters
Based on the sole presence of the HoxC1a gene in zebrafish, Hoegg et al. suggest that HoxC1a was lost once in the lineage leading to Neoteleosts (as illustrated in figure 3 of their study). The presence of this gene in both G. aculeatus and A. burtoni rejects this hypothesis. Figure 2 is a comprehensive overview of the current HoxC1a set of orthologs in teleosts, according to our results based on publicly available data. We have mapped the HoxC1a gene loss events on the published phylogeny. It indicates that HoxC1a has been lost independently among Neoteleosts, in both lineages leading to O. latipes and to the pufferfishes. Whether an additional HoxC1a gene loss has occurred in the lineage leading to the cichlid fish Oreochromis niloticus remains to be investigated.
Conclusion
A. burtoni was reported to contain 46 Hox genes. We have complemented the Hox gene content of this fish with a putative HoxC1a gene. Combined with the detection of HoxC1a orthologs in G. aculeatus and F. heteroclitus, we introduce here a more comprehensive set of HoxC1a genes in teleosts. These Neoteleost genes facilitate the investigation of pseudogenes in O. latipes and pufferfishes in comparison with the more distant zebrafish ortholog. We report two novel HoxC1a pseudogenes, in O. latipes and T. nigroviridis respectively. In addition, this case study illustrates the annotation challenge posed by the Hox multigenic family. We have shown that Hox identification can be improved by combining several gene-finding methods and a Hox-dedicated program. This comment has hopefully given new insights into the gene loss events presented by Hoegg et al. with regard to HoxC1a. Our results modify their conclusions and rule out the hypothesis of a unique HoxC1a gene loss event in the lineage leading to Neoteleosts. We propose that HoxC1a was independently lost in the lineage leading to O. latipes and in the lineage leading to pufferfishes. Our findings do not affect other aspects of the Hoegg et al. study, especially the fact that each teleost species studied so far contains a different Hox gene set. Rather, we believe that this contribution reinforces their conclusions about non-essential Hox genes that can be easily and repeatedly lost, like HoxB7a or HoxC1a.
Figure 2. Overview of HoxC1a content in teleost species and gene loss events mapped on a phylogeny. HoxC1a genes are depicted with stars. Dashed lines indicate that the corresponding species were not reported previously and that their position in the phylogeny is hypothetical.
Australia's Bundamba Plant First Step in Large Water Recycling Scheme
This article announces that the Bundamba Advanced Water Treatment Plant near Brisbane, Australia, has produced its first purified recycled water, achieving a significant milestone for one of Australia's largest water schemes, the $2.4 billion Western Corridor Recycled Water project. The continual water flow will reduce the power station's reliance on the drought-affected Wivenhoe Dam and will ensure that the power station remains available to support the growing electricity needs of the southeastern portion of the state of Queensland during severe water shortages.
def gen_error_codes(args):
    """Yield the lines of a C array initializer that maps OpenSSL reason
    names to (library, reason-code) pairs, using the symbolic constants
    when the target OpenSSL defines them and numeric fallbacks otherwise."""
    yield "static struct py_ssl_error_code error_codes[] = {"
    for reason, libname, errname, num in args.reasons:
        yield f"  #ifdef {reason}"
        yield f'    {{"{errname}", ERR_LIB_{libname}, {reason}}},'
        yield "  #else"
        yield f'    {{"{errname}", {args.lib2errnum[libname]}, {num}}},'
        yield "  #endif"

    yield "  {{ NULL }}"
    yield "};"
    yield ""
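A hypothetical invocation of the generator with a single reason tuple; the field layout of args is inferred from what the function reads, and the library number 20 matches OpenSSL's ERR_LIB_SSL:

from types import SimpleNamespace

args = SimpleNamespace(
    reasons=[
        # (reason macro, library name, reason name, fallback numeric code)
        ("SSL_R_BAD_LENGTH", "SSL", "BAD_LENGTH", 271),
    ],
    lib2errnum={"SSL": 20},
)
print("\n".join(gen_error_codes(args)))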
Michael Justin Allen Sexton
Every Nvidia GeForce RTX 2080, RTX 2080 Ti Card You Can Pre-Order Now
Interested in a GeForce RTX 2080 or RTX 2080 Ti graphics card? Here are the first 23 sweet slices of Nvidia 'Turing' silicon and all the details you'll need before you buy.
Nvidia's next generation of graphics cards, dubbed "GeForce RTX," are nearly here, and we've spotted almost two dozen of these cards for pre-order. You won't be able to lay your hands on any until Sept. 20, but if you simply have to be one of the first to try out Nvidia's new "Turing" architecture, you need to act fast. Certain cards have sold out of their initial allocations, and while more are coming, the waiting periods could be long, if the 2016 launch of Nvidia's "Pascal" cards is any indication. Below, check out our analysis of all the GeForce RTX 2080 and RTX 2080 Ti cards you can pre-order today. One note: Some of the specs for these cards haven't been released yet. Most crucially, most of Nvidia's board partners have not released clock-speed information for their respective RTX 2000-series cards. Currently, only Nvidia, PNY, and Zotac have reported the speeds at which their cards operate. Nvidia's RTX 2080 Founders Edition is essentially a factory-overclocked reference-version card. Many of the graphics cards produced by Nvidia's board partners use this as the starting point when designing their own versions of the graphics card. The Founders Edition has a 5 percent factory overclock, and it is cooled by two dual-axial fans that have 13 blades each. The thermal solution also has a vapor chamber that further improves thermal performance. The card's improved power-delivery system consists of eight power phases. The RTX 2080 Founders Edition is available for pre-order for $799. This price is $100 above the RTX 2080's starting MSRP for third-party cards. Nvidia is pricing its RTX cards this way, in part, due to the presumed high quality of the components used on its card, but also to keep its "official" Founders Edition cards from competing directly with the company's OEM board partners. Several of the board partners, however, have priced their RTX 2080s above $799. As a result, the RTX 2080 Founders Edition may offer the best price/performance ratio of the RTX 2080 models available at launch. Nvidia's RTX 2080 Ti Founders Edition deploys a similar thermal solution to what's on the RTX 2080 Founders Edition: two dual-axial 13-blade fans and a vapor chamber. The board has 13 power phases, five more than the RTX 2080 Founders Edition, which should be more than capable of handling the increased power draw associated with the RTX 2080 Ti's higher core count. The GPU also has a green LED that illuminates the side of the card, as well as an aluminum backplate. Nvidia slightly overclocked this card, raising its boost clock from 1,545MHz to 1,635MHz. The RTX 2080 Ti Founders Edition is priced at $1,199, $200 above the RTX 2080 Ti's $999 MSRP, in an attempt to avoid competing with Nvidia's OEM board partners. All of the board partners, however, have priced their RTX 2080 Ti graphics cards close to $1,200, throwing them into direct competition with the Founders Edition model. Many of the board-partner cards also feature larger coolers, several being triple-fan thermal solutions.
As a result, the RTX 2080 Ti Founders Edition may not be the best card to pre-order until more nitty-gritty emerges on the partner cards, especially given what an investment the RTX 2080 Ti is. Asus designed this RTX 2080 to serve as a baseline for its 2000-series graphics cards. The thermal solution comprises two high-performance wing-blade fans over a large aluminum heatsink. According to Asus, the heatsink's surface area is more than 50 percent larger than its previous iteration of this cooler, which should help keep the card extra-cool. The card is also equipped with an aluminum backplate to help protect and cool the back of the PCB. The rear I/O panel has a single HDMI 2.0b port alongside a lone USB Type-C port and a complement of three DisplayPort 1.4 outputs. This will be a familiar loadout through these cards. Need more airflow, lots more lights, and a visual sign that your fancy new card might be getting too hot? The ROG Strix OC Edition of the RTX 2080 GPU features a beefier triple-fan thermal solution than its Dual OC Edition card shown above. The fans use a special blade design to increase air pressure through the cooler and improve thermal efficiency. The cooler is also equipped with RGB LEDs that can be controlled with the company's Aura Sync software. In addition to static and fixed lighting patterns, the software also supports a feature to change the lights' color based on the temperature of your graphics card. Or the software can sync the lights to music, turning your graphics card into a beat-stompin' light show. The clock speed this GPU operates at is currently unknown, but given the name and the thermal solution here, it ought to be clocked higher than Asus's RTX 2080 Dual OC Edition card. On the card's rear I/O panel are two HDMI 2.0b ports, a pair of DisplayPort 1.4 jacks, and a single USB Type-C port. Now this is a more sedate version of the RTX 2080 Ti. Asus built its RTX 2080 Ti Turbo Edition with a straight-through "blower"-style thermal solution. This design uses a single 80mm dual-ball-bearing fan that pulls air from the case’s interior and expels it out the back through the PCI Express slot backplane. A single RGB LED strip lights up the side of the cooler’s shroud, giving you a touch of classy bling without lighting up your case like an out-of-control RGB carnival. The card has a single HDMI 2.0b port, one USB Type-C port, and two DisplayPort 1.4 hookups on the I/O panel. The RTX 2080 Ti Dual OC Edition is, at least on the surface, similar to the company's RTX 2080 Dual OC Edition. (The fans, the ports, and the cooler are all identical.) The key difference? The GPU's core and memory. The GPU core on this graphics card has 1,408 more cores than its non-Ti counterpart, and it also has an additional 3GB of RAM that connects to the core over a wider 352-bit bus. In short: It's far faster, but as it uses the same thermal solution, it may have a harder time remaining cool if you try to push it beyond its (presumably) factory-overclocked settings. Dig that groovy transparent face! EVGA's RTX 2080 XC Gaming card comes equipped with a redesigned version of the company's iCX2 cooling solution that, according to EVGA, is 19 percent quieter and provides 14 percent better cooling than the previous iteration. The improvements are partially down to the use of new hydro-dynamic fan bearings, which EVGA claims have an exceedingly long lifespan. A metal backplate adorns the back of the card to help cool the GPU further. 
The graphics card also has controllable RGB LEDs, and it comes with interchangeable trim in red, black, white, and carbon colors. That's the most control over customizing any graphics card that you're likely to get. On the rear I/O plate you'll find one USB Type-C port, one HDMI 2.0b port, and a trio of DisplayPort 1.4 jacks. EVGA equipped its RTX 2080 Ti XC Gaming with the same redesigned iCX2 cooler that it used on its RTX 2080 XC Gaming. Of course, the RTX 2080 Ti model will likely need to operate at a higher temperature while under heavy load, as it has more cores and VRAM. This card also has RGB LED mood lighting and interchangeable trim, but in this card's case, the trim is available in red, black, and white only. Video-output options are the same here as on EVGA's RTX 2080 GPUs: one USB Type-C port, one HDMI 2.0b port, and three DisplayPort 1.4 ports. EVGA's RTX 2080 XC Ultra Gaming is physically identical to the non-Ultra version of the card. At this writing, their spec sheets also showed them to be completely identical in every way; we can only conjecture on the difference at this point. Our guess: The Ultra version will likely operate at higher clock speeds, giving it a slight performance advantage over its less expensive sibling. Why are we showing this card edge-on? That's because EVGA's RTX 2080 Ti XC Ultra Gaming looks almost identical to its less-expensive counterpart, but it comes in the form of a wider triple-slot (yes, triple-slot) add-on card. This wider design allows EVGA to jam in even more raw materials to help keep the GPU cool. Presumably, this GPU will also operate at higher clock speeds, and it should overclock better, thanks to its improved thermal solution. All of Gigabyte's RTX 2080 and 2080 Ti cards currently available for pre-order are physically identical, so we'll show the next few from various angles. All of these GPUs feature a large triple-fan cooler that consists of a large aluminum heatsink and several copper heatpipes. Six of the heat pipes make direct contact with the GPU core, and they also aid in cooling the VRAM and MOSFETs. These GPUs also have solid metal backplates and feature customizable RGB LED illumination. The Gigabyte GeForce RTX 2080 Windforce OC serves as the base model for Gigabyte's 2080 product lineup, and it is the least expensive, at a $789.99 MSRP. Gigabyte's RTX 2080 Gaming OC is identical to Gigabyte's RTX 2080 Windforce OC, but it's targeted as a slightly higher-end product. Clock speeds are unknown at this time, but the Gaming OC model will likely be up-clocked slightly from the Windforce OC. Gigabyte's RTX 2080 Ti Windforce OC is the company's least-expensive RTX 2080 Ti available in its first wave of Turing cards. It is identical to the company's RTX 2080 products, except that it has an additional 1,408 cores and an additional 3GB of memory. The MSRP at launch is $1,169.99. Gigabyte's RTX 2080 Ti Gaming OC is Gigabyte's flagship RTX 2080 Ti graphics card. It is slightly more expensive than the company's RTX 2080 Ti Windforce OC graphics card (MSRP is $1,199), and it will likely come factory-overclocked at a higher speed. Here, you can see the solid, full-coverage backplate that will adorn all of the initial Gigabyte cards. MSI's RTX 2080 Duke OC features a large cooler that consists of three double-ball-bearing Torx fans that help to cool a large aluminum heatsink. These fans are designed to be exceedingly quiet and use a special blade design to increase airflow. 
The graphics card also has an aluminum backplate and controllable RGB LEDs on its side. The graphics card's rear I/O panel has the requisite one HDMI 2.0b port, one USB Type-C jack, and three DisplayPort connections. No, the card image you see here is not distorted; that leftmost fan is supposed to be that size. MSI's RTX 2080 Gaming X Trio uses a larger thermal solution that is centered around three of MSI's double-ball-bearing Torx fans. Compared to the RTX 2080 Duke OC, the Gaming X Trio model is 20mm wider, 13mm longer, and 5mm thicker. Overall, the thermal solution is beefier and should result in better temps while gaming. The card's shroud also has more RGB LED bling, for those that enjoy projecting light shows inside of their case. This card has the standard port loadout: single HDMI 2.0b port, lone USB Type-C port, and three DisplayPort jacks. The MSI GeForce RTX 2080 Ti Gaming X Trio makes use of the same thermal solution as the non-Ti model of the card with the same name. The cards are physically identical, but this model will presumably be much faster given the RTX 2080 Ti GPU's larger core count and the card's additional 3GB of RAM. Due to the exceedingly large size of the thermal solution and its fans, this will likely be one of the faster RTX 2080 Ti graphics cards at launch. Also, we suspect it will be hard for any other card to top that crazy backplate, judging by what we see here. PNY's version of the RTX 2080 uses a dual-fan thermal solution that has an RGB LED strip on top of the card. Although PNY marks this as the "Overclocked Edition" of its RTX 2080 GPU, the card actually isn't factory-overclocked. It comes clocked identically to Nvidia's reference spec, but it is likely designed to overclock better than the company's standard RTX 2080, should you dare to try. The GPU comes with one HDMI 2.0b port, one USB Type-C port, and three DisplayPort 1.4 jacks on the rear I/O panel. More metal, please! PNY's GeForce RTX 2080 Ti flagship features a large triple-fan cooler covering a big, big radiator. This card is not factory-overclocked, shipping at a base clock speed of 1,350MHz and a boost clock of 1,545MHz. We'd expect this card to overclock well, due to its gigantic thermal solution, but as the architecture hasn't been tested and we don't know how sensitive the RTX 2080 Ti will be to overclocking, we can't say for sure...yet. Once again, we see one HDMI 2.0b port, a USB Type-C port, and three DisplayPort 1.4 jacks on the rear I/O panel of this card. Zotac engineered this graphics card with a straight-through blower-style cooler that uses just a single fan, which might give this card an acoustic advantage. An aluminum plate is attached to the back of the card to further improve thermal efficiency. Zotac adhered to Nvidia's reference design with this card and gave it a boost clock of 1,710MHz. The card's rear I/O panel supports the usual array: a trio of DisplayPort 1.4 ports, one HDMI 2.0b port, and a single USB Type-C port. Zotac designed its RTX 2080 Gaming Amp graphics card with a whopper of a triple-fan thermal solution. The card, technically, is still a dual-slot PCI Express card, but the cooler's thickness extends past the mounting bracket and will likely prevent you from installing anything in the add-on slot directly beside it. Underneath the cooler's shroud is a hefty-looking aluminum heatsink with several copper heatpipes that make direct contact with the GPU.
Zotac also equipped this card with an aluminum backplate and its Spectra Lighting technology, which is the company's name for its RGB LED scheme. This card's rear I/O panel is identical to that of the Zotac card above. Zotac's GeForce RTX 2080 Ti Gaming Triple Fan is, as the name suggests, cooled by a series of three fans. The radiator portion of the cooler overhangs the PCB considerably, as you can see here. The card supports Zotac's RGB LED Spectra Lighting technology, so expect some bling, and it comes with the standard RTX 2080 Ti rear I/O panel. The last of our pre-order lot, Zotac's GeForce RTX 2080 Ti Gaming Amp graphics card is identical in every detail to the company's RTX 2080 Ti Triple Fan card. Although its name doesn't explicitly spell it out, we can see this card also uses an XXL-size thermal solution with three full-diameter fans. As this card is priced slightly higher than the non-Amp, we can also assume, given the history of previous Zotac Amp cards, that it will be clocked higher than the Triple Fan model.
Experimental Verification and Comparison of MAFC Method and D-Q Method for Selective Harmonic Detection
This paper reviews two popular harmonic detection methods for power electronics applications. One is the d-q method, which dominates three-phase active filter and STATCOM applications; the other is the multiple adaptive feedforward cancellation (MAFC) method, also called the adaptive neuron, neural network, or Adaline method. The novelty of this paper is that it presents experimental results, which are rare in the literature. The paper first introduces the MAFC method from an adaptive-control point of view. Real-time (RT) implementations of the two methods are then described, and real-time hardware-in-the-loop (HIL) tests are performed to compare them. Finally, experimental results using these two methods on a diode-rectifier front-end motor drive and a thyristor-rectifier dc drive are presented. Both the steady-state and dynamic performances of the two methods are compared. The comparison results give guidance for using these two methods in power electronics applications such as active filters and STATCOMs.
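To make the MAFC/Adaline idea concrete, below is a minimal sketch of the scheme as commonly described in the adaptive-control literature: one LMS-adapted cosine/sine weight pair per tracked harmonic, with a shared residual driving all updates. The sampling rate, adaptation gain, and test signal are illustrative choices, not values from the paper.

import numpy as np

fs = 10_000                # sampling rate in Hz (assumed)
f0 = 50.0                  # fundamental frequency in Hz
harmonics = [1, 5, 7]      # harmonic orders tracked by the filter bank
mu = 0.05                  # LMS adaptation gain (assumed)

t = np.arange(0, 0.2, 1.0 / fs)
# Synthetic load current: fundamental plus 5th and 7th harmonics
i_load = (10.0 * np.sin(2 * np.pi * f0 * t)
          + 2.0 * np.sin(2 * np.pi * 5 * f0 * t + 0.3)
          + 1.2 * np.sin(2 * np.pi * 7 * f0 * t - 0.7))

w = np.zeros((len(harmonics), 2))   # one (cos, sin) weight pair per harmonic
for n, tn in enumerate(t):
    basis = np.array([(np.cos(2 * np.pi * h * f0 * tn),
                       np.sin(2 * np.pi * h * f0 * tn)) for h in harmonics])
    e = i_load[n] - np.sum(w * basis)   # residual after subtracting the estimate
    w += mu * e * basis                 # adapt all weight pairs in parallel

for h, (wc, ws) in zip(harmonics, w):
    print(f"harmonic {h}: amplitude ~ {np.hypot(wc, ws):.2f}")

After convergence, each weight pair gives the amplitude and phase of one tracked harmonic, which is why the method performs selective detection without the coordinate transformation the d-q method requires.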
from argparse import Namespace

from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR


@classmethod
def from_hparams(
    cls, optimizer: Optimizer, hparams: Namespace, num_training_steps: int
) -> LambdaLR:
    """Build the warmup schedule from parsed hyperparameters.

    LinearWarmup is assumed to be defined elsewhere in the same module.
    """
    return LinearWarmup(optimizer, hparams.warmup_steps, num_training_steps)
// Global Imports
import { css, FlattenSimpleInterpolation } from 'styled-components';
// End Global Imports

interface IdefaultOptions {
  black?: string;
  fontFamilyBase?: string;
  fontFamilyMonospace?: string;
  fontSizeBase?: string;
  fontWeightBase?: number | string;
  fontWeightBolder?: number | string;
  lineHeightBase?: number;
  bodyColor?: string;
  bodyBg?: string;
  headingsMarginBottom?: string;
  paragraphMarginBottom?: string;
  labelMarginBottom?: string;
  dtFontWeight?: number | string;
  linkDecoration?: string;
  tableCellPadding?: string;
  tableCaptionColor?: string;
}

export const defaultOptions: IdefaultOptions = {
  black: '#000',
  bodyBg: '#fff',
  bodyColor: '#212529',
  dtFontWeight: 700,
  fontFamilyBase:
    '-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"',
  fontFamilyMonospace:
    'SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace',
  fontSizeBase: '1rem',
  fontWeightBase: 400,
  fontWeightBolder: 'bolder',
  headingsMarginBottom: '0.5rem',
  labelMarginBottom: '0.5rem',
  lineHeightBase: 1.5,
  linkDecoration: 'none',
  paragraphMarginBottom: '1rem',
  tableCaptionColor: '#6c757d',
  tableCellPadding: '0.75rem'
};

export const reset = (
  options?: IdefaultOptions
): FlattenSimpleInterpolation => {
  const {
    black,
    fontFamilyBase,
    fontFamilyMonospace,
    fontSizeBase,
    fontWeightBase,
    fontWeightBolder,
    lineHeightBase,
    bodyColor,
    bodyBg,
    headingsMarginBottom,
    paragraphMarginBottom,
    dtFontWeight,
    linkDecoration,
    tableCellPadding,
    tableCaptionColor,
    labelMarginBottom
  } = { ...defaultOptions, ...options };

  return css`
    *,
    *::before,
    *::after {
      box-sizing: border-box;
    }
    html {
      font-family: sans-serif;
      line-height: 1.15;
      -webkit-text-size-adjust: 100%;
      /* Sass-style rgba(hex, alpha) is not valid plain CSS, so the
         transparent value is written out literally here. */
      -webkit-tap-highlight-color: rgba(0, 0, 0, 0);
    }
    article,
    aside,
    figcaption,
    figure,
    footer,
    header,
    hgroup,
    main,
    nav,
    section {
      display: block;
    }
    body {
      margin: 0;
      font-family: ${fontFamilyBase};
      font-size: ${fontSizeBase};
      font-weight: ${fontWeightBase};
      line-height: ${lineHeightBase};
      color: ${bodyColor};
      text-align: left;
      background-color: ${bodyBg};
    }
    [tabindex='-1']:focus:not(:focus-visible) {
      outline: 0 !important;
    }
    hr {
      box-sizing: content-box;
      height: 0;
      overflow: visible;
    }
    h1,
    h2,
    h3,
    h4,
    h5,
    h6 {
      margin-top: 0;
      margin-bottom: ${headingsMarginBottom};
    }
    p {
      margin-top: 0;
      margin-bottom: ${paragraphMarginBottom};
    }
    abbr[title],
    abbr[data-original-title] {
      text-decoration: underline;
      text-decoration: underline dotted;
      cursor: help;
      border-bottom: 0;
      text-decoration-skip-ink: none;
    }
    address {
      margin-bottom: 1rem;
      font-style: normal;
      line-height: inherit;
    }
    ol,
    ul,
    dl {
      margin-top: 0;
      margin-bottom: 1rem;
    }
    ol ol,
    ul ul,
    ol ul,
    ul ol {
      margin-bottom: 0;
    }
    dt {
      font-weight: ${dtFontWeight};
    }
    dd {
      margin-bottom: 0.5rem;
      margin-left: 0;
    }
    blockquote {
      margin: 0 0 1rem;
    }
    b,
    strong {
      font-weight: ${fontWeightBolder};
    }
    small {
      font-size: 80%;
    }
    sub,
    sup {
      position: relative;
      font-size: 75%;
      line-height: 0;
      vertical-align: baseline;
    }
    sub {
      bottom: -0.25em;
    }
    sup {
      top: -0.5em;
    }
    a {
      text-decoration: ${linkDecoration};
      background-color: transparent;
    }
    a:not([href]) {
      color: inherit;
      text-decoration: none;
    }
    a:not([href]):hover {
      color: inherit;
      text-decoration: none;
    }
    pre,
    code,
    kbd,
    samp {
      font-family: ${fontFamilyMonospace};
      font-size: 1em;
    }
    pre {
      margin-top: 0;
      margin-bottom: 1rem;
      overflow: auto;
    }
    figure {
      margin: 0 0 1rem;
    }
    img {
      vertical-align: middle;
      border-style: none;
    }
    svg {
      overflow: hidden;
      vertical-align: middle;
    }
    table {
      border-collapse: collapse;
    }
    caption {
      padding-top: ${tableCellPadding};
      padding-bottom: ${tableCellPadding};
      color: ${tableCaptionColor};
      text-align: left;
      caption-side: bottom;
    }
    th {
      text-align: inherit;
    }
    label {
      display: inline-block;
      margin-bottom: ${labelMarginBottom};
    }
    button {
      border-radius: 0;
    }
    button:focus {
      outline: 1px dotted;
      outline: 5px auto -webkit-focus-ring-color;
    }
    input,
    button,
    select,
    optgroup,
    textarea {
      margin: 0;
      font-family: inherit;
      font-size: inherit;
      line-height: inherit;
    }
    button,
    input {
      overflow: visible;
    }
    button,
    select {
      text-transform: none;
    }
    select {
      word-wrap: normal;
    }
    button,
    [type='button'],
    [type='reset'],
    [type='submit'] {
      -webkit-appearance: button;
    }
    button:not(:disabled),
    [type='button']:not(:disabled),
    [type='reset']:not(:disabled),
    [type='submit']:not(:disabled) {
      cursor: pointer;
    }
    button::-moz-focus-inner,
    [type='button']::-moz-focus-inner,
    [type='reset']::-moz-focus-inner,
    [type='submit']::-moz-focus-inner {
      padding: 0;
      border-style: none;
    }
    input[type='radio'],
    input[type='checkbox'] {
      box-sizing: border-box;
      padding: 0;
    }
    input[type='date'],
    input[type='time'],
    input[type='datetime-local'],
    input[type='month'] {
      -webkit-appearance: listbox;
    }
    textarea {
      overflow: auto;
      resize: vertical;
    }
    fieldset {
      min-width: 0;
      padding: 0;
      margin: 0;
      border: 0;
    }
    legend {
      display: block;
      width: 100%;
      max-width: 100%;
      padding: 0;
      margin-bottom: 0.5rem;
      font-size: 1.5rem;
      line-height: inherit;
      color: inherit;
      white-space: normal;
    }
    progress {
      vertical-align: baseline;
    }
    [type='number']::-webkit-inner-spin-button,
    [type='number']::-webkit-outer-spin-button {
      height: auto;
    }
    [type='search'] {
      outline-offset: -2px;
      -webkit-appearance: none;
    }
    [type='search']::-webkit-search-decoration {
      -webkit-appearance: none;
    }
    ::-webkit-file-upload-button {
      font: inherit;
      -webkit-appearance: button;
    }
    output {
      display: inline-block;
    }
    summary {
      display: list-item;
      cursor: pointer;
    }
    template {
      display: none;
    }
    [hidden] {
      display: none !important;
    }
  `;
};

export default reset;
Integrating Analytical Frameworks to Investigate Land-Cover Regime Shifts in Dynamic Landscapes
Regime shifts, rapid long-term transitions between stable states, are well documented in ecology but remain controversial and understudied in land use and land cover change (LUCC). In particular, uncertainty surrounds the prevalence and causes of regime shifts at the landscape level. We studied LUCC dynamics in the Tanintharyi Region (Myanmar), which contains one of the last remaining significant contiguous forest areas in Southeast Asia but was heavily deforested between 1992 and 2015. By combining remote sensing methods and a literature review of historical processes leading to LUCC, we identified a regime shift from a forest-oriented state to an agriculture-oriented state between 1997 and 2004. The regime shift was triggered by a confluence of complex political and economic conditions within Myanmar, notably the ceasefires between various ethnic groups and the military government, coupled with its enhanced business relations with Thailand and China. Government policies and foreign direct investment enabling the establishment of large-scale agro-industrial concessions reinforced the new agriculture-oriented regime and prevented reversion to the original forest-dominated regime. Our approach of integrating complementary analytical frameworks to identify and understand land-cover regime shifts can help policymakers to preempt future regime shifts in Tanintharyi, and can be applied to the study of land change in other regions.
Introduction
Escalating human domination of Earth's ecosystems over the course of the Anthropocene has led to adverse global environmental impacts through changes in climate, biogeochemical cycles, ecosystem functions, and biodiversity. Human land-use activities have transformed natural landscapes through agricultural expansion and intensification, tropical deforestation, and urban sprawl. Transformations in land systems (the terrestrial component of earth systems, comprising all the processes, activities, and socioeconomic outcomes of the human use of land) have profound consequences for local environments and human well-being and are among the most important drivers of global environmental change. Although land systems can change gradually, the changes may also occur abruptly between two stable states in the system. This process is referred to as a regime shift, a concept used in ecology to describe rapid transitions between different stable states of ecological systems (e.g., forest to savannah). Reorganisation of land systems may occur during brief periods of abrupt, non-linear change driven by exogenous events that are difficult to anticipate. The period of rapid change between two stable states of land systems (or land-system regimes) is a 'land-system regime shift.' Regime shifts may be catalysed by reaching a threshold after cumulative pressure with associated feedbacks (a 'tipping point') or by influential events (a 'punctuation') applied to the land system. Recognition of land-system regime shifts as significant components of land-system change has emerged only recently, through local-scale studies with long temporal coverage of land use and land cover data (e.g., 27-32 years) that have focused on the implications of rapid land-system changes for long-term socio-ecological interactions and human well-being.
At broader scales (thousands of km²), however, studies of rapid land transformations remain challenging and have been limited by the availability of long-term land use and land cover datasets. Limits to the temporal resolution and land-cover class differentiation of available land cover data products have precluded studies that could potentially detect and investigate regime shifts at broad scales. For example, the CORINE Land Cover products, developed by the European Environment Agency, have been widely used for land cover change assessments in Europe, but their coarse time intervals (i.e., ≥6 years) limit their applicability for detecting rapid changes. Moreover, although dense time-series historical satellite data are now beginning to provide comprehensive quantified records of global-scale land-change dynamics, to date there has been no direct, quantitative evidence of broad-scale (e.g., landscape- or regional-level) regime shifts. Insights into whether broad-scale regime shifts occur, and under what conditions, can inform national-level environmental policy and enhance dialogue on the drivers of widespread land change. Data requirements for landscape-level investigation of regime shifts mirror those recognised at the local scale. First, spatial data need to be available on a near-continuous timeline in order to unambiguously identify the occurrence of a regime shift. Second, complete information on land cover transitions is necessary to produce insights into the drivers of a regime shift. Ramankutty and Coomes, for example, identified the main underlying factors and triggers resulting in the shifts in the production of soybean and shrimp, but were unable to elucidate whether soybean or shrimp farms expanded at the expense of natural habitats or other categories of cultivated land. Müller et al. showed net changes in land use/cover and could only infer the drivers based on presumed relationships among the land use/cover categories. Hence, while previous research may have identified regime shifts, and even the net changes across sites, those studies did not have the deep dataset required to understand the transitions and, therefore, the drivers of the regime shift. Moreover, quantitative analysis must be supported by a deep understanding of the context of the landscape(s) in question. While village-level participatory mapping is a feasible means to gather historical and contextual information at local scales, it is not so at the scale of thousands of square kilometers. Literature reviews, national-level interviews, and expert opinion must therefore form the basis for interpreting the outputs generated from quantitative analysis. Therefore, a complete and holistic analysis of landscape-level regime shifts should be broad-scale, based on high-quality satellite-based land cover data over frequent time intervals with sufficient categorical resolution for transitions and their drivers to be evaluated, and accompanied by in-depth historical/contextual analysis that links land change patterns to the underlying processes driving them. Here, we identified and investigated a land-cover regime shift across a ~43,300 km² landscape of southern Myanmar. We focused on a regime shift in land cover, as opposed to a land-system regime shift, given the limitations on land use data at various scales (such as land use intensity and land management, which are largely unavailable in Myanmar) that would be needed to complement the land cover data and fully characterise the entire land system.
Our approach, nevertheless, allowed us to describe the macro-level drivers that influenced broad-scale changes and patterns of land use and land cover. We accomplished this through the integration of two complementary analytical frameworks to identify, characterise, and explain the presence and drivers of the regime shift. Through these frameworks, we identified the presence of a regime shift over a specific time period; quantified the land cover transitions during the regime shift versus during stable land-cover regimes; and described the preconditions, triggers, and self-reinforcing processes that facilitated the regime shift.

Conceptual/Methodological Framework

The methodological flow consisted of three distinct steps: identifying the occurrence of a land-cover regime shift; characterising the dynamics of the land cover transitions; and explaining the causes and drivers that facilitated the land-cover regime shift (Figure 1). We used multiple methods to identify and characterise the patterns and dynamics of the regime shift, all of which relied on a validated quantification of land cover transitions. This included complex time-series analyses of land cover transitions such as Sankey diagramming and Intensity Analysis, the latter of which extracted information at multiple levels (see Section 3.3, Data Analysis). We explained the processes driving the regime shift by adopting the land-use regime shift analytical framework from Ramankutty and Coomes, which enabled us to develop structured narratives by identifying the role of preconditions, triggers, and self-reinforcing processes governing the regime shift. Through this systematic process, we linked the complementary information generated from the two analytical frameworks to holistically and robustly investigate a land-cover regime shift.

Study Area

Our study area was the Tanintharyi Region (hereafter "Tanintharyi") of southern Myanmar (43,345 km²; Figure 2), a landscape situated in the biologically rich transition zone of the Indochinese and Sundaic regions of Southeast Asia. Tanintharyi supports tropical evergreen broadleaf forests, mixed deciduous forests in the north and northeast parts, and mangrove forests in the coastal intertidal zone. Various globally important threatened species, such as tiger (Panthera tigris), Sunda pangolin (Manis javanica), Asian elephant (Elephas maximus), and Malayan tapir (Tapirus indicus), inhabit this landscape. Tanintharyi is also an important economic region for trade in natural resources, and specifically, large-scale agricultural businesses promoted as a dual economic development and poverty alleviation strategy under Myanmar's 30-year Agricultural Master Plan.
These conditions have, in turn, resulted in Tanintharyi becoming a tropical biodiversity hotspot undergoing extensive forest conversion and land change for agricultural plantation development. A complex array of pressures to convert its biologically diverse forests is expected to intensify over at least the next decade, as Myanmar transitions towards a more democratic and capitalist economy. Evidence indicates extensive conversion of forests to agricultural plantations (oil palm, rubber) occurred in Tanintharyi between 1990 and 2015. Those previous studies, however, only quantified land cover changes between two time-points with available 30-m spatial resolution satellite data (1990-2000 for Leimgruber et al.; 2002-2014 for Bhagwat et al.; 1995-2015 for De Alban et al.), leaving the characterisation of land cover dynamics at finer temporal frequencies largely uninvestigated within these periods.

Data Preparation

The main spatial data source for this investigation was the 24-year annual time-series global land cover product developed by the European Space Agency Climate Change Initiative (ESA CCI). These globally consistent land cover maps, delivered at 300-m spatial resolution, were interpreted from a suite of satellite data acquired by various sensors (i.e., MERIS, SPOT-VGT, AVHRR, PROBA-V) using the standardised hierarchical Land Cover Classification System developed by the Food and Agriculture Organisation of the United Nations. We used these land cover maps to quantify annual land cover transitions as a follow-up to a previous study that evaluated land cover change across two time-points between 1995 and 2015, with the initial intention of quantifying annual dynamics of land cover transitions and generating input data for a predictive land cover change model. However, after discovering a land-cover regime shift (see Section 4, Results), the objectives of this study shifted towards the investigation of landscape-level land-cover regime shifts. To prepare the time-series land cover maps for analysis, we developed a script using Google Earth Engine (see Supplementary Materials). We defined the geographic areas of interest, particularly Tanintharyi (and its districts Dawei, Myeik, and Kawthoung), using the Global Administrative Database. We aggregated the 23 detailed land cover categories present in Tanintharyi into six broad classes, namely, Forest, Mosaic Vegetation, Shrubland, Cropland, Other Vegetation, and Non-Vegetation (Table 1; see Supplementary Materials), to show broad patterns for analysing land cover transitions and to keep the analysis tractable. After reclassification, we then masked out the pixels outside each specific area of interest (e.g., whole region, each district) prior to exporting the land cover maps for further data processing.
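To make the preparation step concrete, the sketch below shows how such a reclassification and masking could look in the Google Earth Engine Python API. It is a minimal illustration, not the study's actual script: the asset paths, the ESA CCI class codes listed, and their groupings into the six broad classes are assumptions for demonstration only.

import ee

ee.Initialize()

# One annual ESA CCI land cover map (300 m). The asset path is hypothetical.
lc_1997 = ee.Image('users/example/esacci_lc_1997')

# Illustrative (assumed) lookup from detailed ESA CCI codes to six broad
# classes: 1 Forest, 2 Mosaic Vegetation, 3 Shrubland, 4 Cropland,
# 5 Other Vegetation, 6 Non-Vegetation.
from_codes = [10, 20, 30, 50, 60, 100, 110, 120, 130, 180, 190, 200, 210]
to_codes   = [ 4,  4,  2,  1,  1,   2,   2,   3,   5,   5,   6,   6,   6]
broad_1997 = lc_1997.remap(from_codes, to_codes).rename('lc_broad')

# Mask pixels outside the area of interest (GADM boundary; filter assumed).
gadm1 = ee.FeatureCollection('users/example/gadm36_level1')
tanintharyi = gadm1.filter(ee.Filter.eq('NAME_1', 'Tanintharyi'))
broad_1997 = broad_1997.clip(tanintharyi)

# Export for further processing (e.g., cross-tabulation in QGIS/R).
task = ee.batch.Export.image.toDrive(
    image=broad_1997, description='lc_broad_1997',
    region=tanintharyi.geometry(), scale=300, maxPixels=1e9)
task.start()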
Data Analysis

We generated stacked area plots, Sankey diagrams, and conducted Intensity Analysis to identify and characterise the patterns and dynamics of the land-cover regime shift. Thereafter, we used the land-use regime shift analytical framework to explain the processes driving the regime shift. We calculated the area of each land cover category per year using the Semi-Automatic Classification Plugin in QGIS v.2.18, and used the calculated outputs to generate stacked area plots using R v.3.4. The plots tracked the proportion of total map area comprising each land cover category over the 24-year period (cf. land-use transition curves in previous studies). We then generated transition (or cross-tabulation) matrices by calculating the annual area of change for all land cover transitions in QGIS. We then used the transition matrices (see Supplementary Materials) to conduct Intensity Analysis, a quantitative method for analysing land cover change over time within an area of interest, which summarises change within time intervals whilst allowing the user to determine whether the changes observed in the maps are due to real transitions or map errors. We conducted Intensity Analysis (hereafter "IA") using a Microsoft Excel macro developed by Aldwaik and Pontius. We extracted information at three levels of analysis, interval, category, and transition, progressing from general to increasingly detailed analysis. For the identification step, we conducted an interval-level IA, in which we calculated the total landscape change (percentage of all map pixels changing category) for each time interval, from which we identified time intervals with either faster or slower rates of change than the interval-level uniform intensity. We then examined how the overall annual rates of change varied across time intervals to determine whether a regime shift occurred. We defined a regime shift as the period during which the overall annual rate of landscape change exceeded the uniform intensity for the entire interval range. Next, we characterised the land cover transitions during the regime shift and the (stable) land-cover regimes. We used output transition matrices to produce Sankey diagrams, which illustrated the flows and patterns of gross land cover transitions in three time periods: pre-regime shift, during the regime shift, and post-regime shift. We generated the Sankey diagrams using an online generator developed by Csala. Additionally, category-level IA quantified the size (area) and intensity (gross change relative to the category's own area) of gross losses and gross gains for each land cover category per interval, from which we identified active or dormant land cover categories, defined as being greater than or less than, respectively, the category-level uniform intensity. For example, for each category, a loss intensity above the uniform represents a gross loss higher than the average change observed across all categories in an interval, meaning that such categories are "actively" losing for that interval. Conversely, a loss intensity below the uniform represents a gross loss lower than the average change observed across all categories in an interval, meaning that these categories are "dormantly" losing in that interval. This concept of active and dormant categories applies similarly to the gross gains, providing information on whether a category is actively or dormantly gaining.
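To illustrate the interval- and category-level computations just described, the following minimal Python sketch operates on two reclassified annual maps stored as integer NumPy arrays (masked pixels already removed, classes coded 0-5). It reflects our reading of the standard Intensity Analysis definitions and is a demonstration, not the Excel macro actually used:

import numpy as np

def transition_matrix(lc_t0, lc_t1, n_classes=6):
    """Cross-tabulate pixel counts: rows = initial class, cols = final class."""
    idx = lc_t0.ravel() * n_classes + lc_t1.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

def interval_intensity(m):
    """Annual % of the landscape changing category (S_t, one-year interval)."""
    return 100.0 * (m.sum() - np.trace(m)) / m.sum()

def uniform_intensity(matrices):
    """Uniform intensity U: total change spread evenly over all intervals."""
    changed = sum(m.sum() - np.trace(m) for m in matrices)
    return 100.0 * changed / (matrices[0].sum() * len(matrices))

def category_trends(m):
    """Label each category active/dormant for losses and gains versus S_t."""
    st = interval_intensity(m)
    trends = {}
    for j in range(m.shape[0]):
        initial, final = m[j].sum(), m[:, j].sum()
        loss_int = 100.0 * (initial - m[j, j]) / initial if initial else 0.0
        gain_int = 100.0 * (final - m[j, j]) / final if final else 0.0
        trends[j] = ('active' if loss_int > st else 'dormant',   # as loser
                     'active' if gain_int > st else 'dormant')   # as gainer
    return trends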
Therefore, at the category level, IA characterises land cover categories into four trends: active losers-active gainers (AL/AG), categories experiencing active losses and active gains; active losers-dormant gainers (AL/DG), categories experiencing active losses yet dormant gains; dormant losers-active gainers (DL/AG), categories experiencing dormant losses yet active gains; and dormant losers-dormant gainers (DL/DG), categories experiencing dormant losses and dormant gains. Category-level IA revealed that Forest actively lost area over the period of the regime shift (see Section 4, Results). We therefore conducted a transition-level IA to quantify the size, intensity, and specific destination land cover categories of Forest losses during each interval. From these transitions, we determined systematic transitions based on reciprocity, defined as whether the loss from category A and the gain to category B matched, and on an evaluation of hypothesised commission and omission errors. "Systematic transitions" were those that deviated from a hypothesised transition-level uniform intensity. As an example, if losses from category A were intensively targeted by category B, and the gains of category B intensively targeted category A, then we would conclude that there was a systematic transition from A to B. We applied this concept to Forest losses (i.e., deforestation) to determine whether Forest was systematically transitioning into another land cover category. This was a critical component in identifying systematic land cover transitions, and by extension the drivers, of forest conversion. We visualised IA outputs using the tidyverse package in R. We adopted Ramankutty and Coomes' analytical framework for understanding land-use regime shifts to develop a structured complementary narrative that identified and explained the preconditions, triggers, and self-reinforcing processes governing the regime shift. Preconditions are defined as the necessary conditions that set the stage for a regime shift to occur. Triggers are specific events that are the immediate cause of the regime shift. Self-reinforcing processes maintain the new land-cover regime in a stable state, resisting a shift back to the previous regime. To develop the narrative, we identified relevant key events taken from the literature as either preconditions, triggers, or self-reinforcing processes based on the timing and relationships of the events. In addition to the literature review, the narrative was informed by the extensive first-hand research and observational expertise of the authors (especially KMW) on land issues in Tanintharyi. A narrative perspective is appropriate for understanding regime shifts, and land-system change in general, since it adopts long time horizons, focuses on critical events and abrupt transitions, and seeks depth of understanding through historical detail and interpretation to tell the story of land change for a particular area. Narrative storylines also form the basis of scenarios that are designed for developing projections of future land changes. We summarised the narrative by constructing a ball-and-valley diagram adapted from Müller et al. using Sketch v.51.2.
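Returning to the transition-level test described above, it can be sketched in the same vein. The function below asks, for a losing category (Forest), which gaining categories receive its losses more intensively than a uniform allocation would predict; it reflects our reading of the Aldwaik-Pontius definitions and builds on transition_matrix() from the previous sketch:

def targeted_or_avoided(m, loser):
    """For each other category j: does the loser's loss target or avoid j?

    Uniform intensity = the loser's gross loss spread across all other
    categories in proportion to their areas at the final time.
    """
    others = [j for j in range(m.shape[0]) if j != loser]
    gross_loss = m[loser].sum() - m[loser, loser]
    uniform = gross_loss / sum(m[:, j].sum() for j in others)
    return {
        j: 'targets' if m[loser, j] / m[:, j].sum() > uniform else 'avoids'
        for j in others
    }

# Reciprocity check for a systematic A -> B transition: the loss of A
# targets B *and* the gain of B targets A (cf. the example in the text).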
We subsequently conducted a land cover assessment through visual interpretation of available high-resolution satellite imagery between 2000 and 2017, using 150 random sample points for each of the major forest transitions (see Supplementary Materials), to validate the connection between the land cover change results and the proximate causes of deforestation identified in the narratives (see Sections 4.2 and 4.3). We used Open Foris Collect Earth, an integrated tool that enables systematic land cover data collection through augmented visual interpretation of historical time-series high-resolution satellite imagery.

Identifying the Occurrence of the Land-Cover Regime Shift

Examination of the stacked area plots revealed that land cover change between 1992-2015 in Tanintharyi consisted of two periods of relatively low rates of land cover change, which bookended a period of drastic change characterised by a relatively rapid decrease in Forest (76.58% to 60.20%) coupled with large increases in Shrubland (12.51% to 23.06%) and Mosaic Vegetation (4.94% to 9.16%) (Figure 3). Interval-level IA confirmed that the fastest annual rates of landscape change occurred between 1997-2004 (1.18% to 3.25%, notwithstanding the aberrant 2002-2003 interval at 0.37%), which were more than twice the uniform intensity over the 24-year period in Tanintharyi, compared to the slower rates of change during the 1992-1997 and 2004-2015 intervals, which were below the uniform intensity (Figure 4). These characteristics confirmed the occurrence of a land-cover regime shift during the 1997-2004 period. Similarly, the highest annual rates of change occurred between 1997-2004 across the three Tanintharyi districts, albeit with varying intensities and minor differences specific to some districts (see Supplementary Materials). The annual rates of change in each district ranged from 1.60% to 4.57% for Dawei (except 2002-2003 at 0.20%); 1.27% to 2.83% for Myeik (except 2002-2003 at 0.21%); and 0.97% to 3.58% for Kawthoung (except 1998-1999 at 0.31%). In summary, both a broader analysis of net changes (stacked area plots) and a detailed analysis of total landscape change (interval-level IA) revealed the existence of a land-cover regime shift in Tanintharyi.

Characterising the Dynamics of the Land-Cover Regime Shift

Sankey diagramming of land cover transitions revealed that during the regime shift, Forest was primarily converted into Shrubland (4216 km²; 9.90%) and Mosaic Vegetation (1424 km²; 3.34%), and to a lesser extent Cropland (330 km²; 0.78%) (Figure 5). Transitions between different land cover categories did occur during the "quiescent" periods of 1992-1997 and 2004-2015, but the changes were not as extensive as those observed during the regime shift. The narratives, as seen in the next section (see Section 4.3, Results), showed that the regime shift resulted in rapid and extensive deforestation from 1997-2004, from a predominantly Forest landscape to Shrubland, Mosaic Vegetation, and Cropland. The land cover assessment using visual interpretation of high-resolution imagery showed that the three major transitions during the regime shift, from Forest to Shrubland, Forest to Mosaic Vegetation, and Forest to Cropland, were predominantly conversions of forest to oil palm and rubber plantations, mixed vegetation, and rice paddies, and to a smaller extent to logged/cleared areas and bare ground.
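Expressed compactly, the identification rule applied above flags intervals whose annual change rate exceeds the uniform intensity. The rates and uniform value below are invented toy numbers loosely shaped like the reported pattern, not the study's actual outputs:

# Toy annual change rates (% of landscape changing category per interval).
rates = {1996: 0.6, 1997: 1.2, 1998: 2.4, 1999: 3.1, 2000: 2.8,
         2001: 1.9, 2002: 0.4, 2003: 1.5, 2004: 0.9}
U = 1.1  # assumed uniform intensity over the full interval range

fast = sorted(y for y, s in rates.items() if s > U)
print("Intervals above uniform intensity:", fast)
# A sustained run of such intervals (here 1997-2001 plus 2003, with one
# aberrant slow year inside the run) delimits the regime-shift period.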
The approximate land cover composition (based on the percentage of assessed random samples) of each of the forest transitions is elaborated as follows. First, samples that indicated a transition from Forest to Mosaic Vegetation consisted of mixed vegetation (41%), plantations (33%), rice paddy (10%), and forest (10%), with the remainder identified as logged areas, built-up area, or bare ground (6%). Second, samples that indicated a transition from Forest to Shrubland consisted of plantations (43%), mixed vegetation (33%), and forest (20%), with the remainder identified as rice paddy, logged areas, or bare ground (4%). Third, samples that indicated a transition from Forest to Cropland consisted of plantations (49%), rice paddy (24%), mixed vegetation (17%), and forest (4%), with the remainder identified as logged areas, bare ground, or built-up area (6%). Hence, the visual assessment of land cover helped to establish the connection between the land cover change results and the proximate causes of deforestation identified in the narratives. During the regime shift, Forest was consistently an active loser-dormant gainer, whereas Mosaic Vegetation and Shrubland were consistently dormant losers-active gainers at both the regional and district levels. Cropland was also a dormant loser-active gainer, consistently in Kawthoung, and across some intervals in Myeik and the entire Tanintharyi Region. Outside the regime shift, trends varied across intervals at both the regional and district levels. Forest was an active loser-dormant gainer in general (with the exception of some intervals) at both the regional and district levels. Categories that were dormant losers-active gainers were Mosaic Vegetation across most intervals in Tanintharyi and its three districts, followed by Shrubland, Cropland, and Other Vegetation (only intermittently) at varying intervals at both the regional and district levels. Other Vegetation and Non-Vegetation were dormant losers-dormant gainers across most intervals during and outside the regime shift at both the regional and district levels. Finally, the active loser-active gainer trend, which indicates swapping or almost equivalent gross losses and gross gains in an interval, was the least observed in the region, occurring mostly in Kawthoung and Myeik for Shrubland, Mosaic Vegetation, Cropland, and Other Vegetation (Figure 6; see Supplementary Materials). Transition-level IA identified systematic transitions involving Forest (Table 2). For example, in 1995-1996, Forest loss was intensively targeted by Mosaic Vegetation, and the gain of Mosaic Vegetation intensively targeted Forest; hence this transition was systematic. We concluded that the conversion from Forest to Mosaic Vegetation in 1995-1996 was truly systematic after both testing for reciprocity and comparing the hypothesised commission errors involved in this transition. In summary, five systematic transitions were determined prior to the regime shift, of which three categories systematically targeted Forest and one category systematically avoided Forest. Seven systematic transitions were determined during the land-cover regime shift, of which two categories systematically targeted Forest and three categories systematically avoided Forest. After the regime shift, 21 systematic transitions were determined, of which two categories systematically targeted Forest, two categories systematically avoided Forest, and one category either targeted or avoided Forest in different intervals. Table 2 summarises the systematic transitions for the pre-regime shift, regime shift, and post-regime shift periods.
"Targeted Forest" refers to categories that gained from Forest at an intensity greater than the uniform intensity for all Forest transitions in that interval. "Avoided Forest" refers to categories that gained from forest at an intensity less than the uniform intensity for all Forest transitions in that interval. Period Interval Targeted Explaining the Preconditions, Triggers, and Self-Reinforcing Processes of the Land-Cover Regime Shift The analysis of the land-cover regime shift (Figure 7; see Supplementary Materials for timeline of key events) focuses on the landscape level (Tanintharyi Region) where the literature made sufficient references to country/region/state dynamics while references to district levels were largely unavailable. Preconditions The transformed political landscape of Myanmar in the 1990s after the end of the Cold War and socialism led to corresponding economic reforms that set the stage for a land-cover regime shift in Tanintharyi. The disintegration of the Communist Party of Burma in 1989 brought a series of ceasefire deals with ethno-nationalist armed organisations along the northern border with China. As a result, Myanmar's Union Armed Forces, the Tatmadaw, dedicated more troops along the southeast border with Thailand where the Karen National Union (KNU) and their armed wing the Karen National Liberation Army (KNLA) continued their armed political struggle against the military government despite the ceasefire agreements in the north. The KNU, as the country's longest-standing rebel group who took up arms in the early 1950s in response to ethnic-based exclusions from the post-colonial state, controlled much of southeastern Myanmar's borderland forests. KNU and KNLA authorities governed over Karen (Kayin) villages in Karen State, extending to the southern Tanintharyi Region along the Thai border (in addition to other regions where Karen populations resided). During the Cold War, the KNU was viewed by governments in Thailand and the United States as a buffer against possible incursions from the Burmese communists or the Burmese military. The KNU's refusal to join a ceasefire against the Tatmadaw after the end of the Cold War sparked a series of military offensives against KNLA hotspot posts, eventually leading to the overthrow of their rebel capital, Manerplaw on the Thai border in 1995. Throughout the 1990s the KNU and its KNLA battalions grew weaker with more tenuous control over its Karen territory. Morale suffered and corruption rose among KNU leaders as a result, especially in the KNU 4th Brigade in Tanintharyi. Coincidentally, regional state governments in China, Myanmar, and Thailand worked to enhance economic integration across borders. For example, the Yunnan government in China saw the Wa and Kokang rebel groups of Myanmar as useful borderland agents to advance China's business and security interests in the region. However, as business relations among the national governments warmed, China and Thailand's diplomacy found the rebel groups along Myanmar's borderlands, from southern Tanintharyi Region to Kachin State, less useful to their political and security interests than in decades past. Meanwhile, Myanmar's military government responded to burgeoning street protests in Yangon and other cities calling for better economic conditions and democracy by reforming their impoverished socialist economy and selectively privatising a few lucrative economic sectors, in particular logging and mining. 
As a result, a nascent military-backed crony capitalist class was formed. The conditions for "ceasefire capitalism" had thereby solidified in Myanmar's resource-rich borderlands, and this, along with Thailand's "battlefields to marketplaces" policy and China's "Beijing Consensus", came to define Myanmar's eastern borderlands in the 1990s and into the 2000s, setting the preconditions for a land-cover regime shift by the turn of the century.

Triggers

The preconditions established in the 1990s, in particular policies of turning battlefields into marketplaces, set the stage for a series of events that triggered the land-cover regime shift in Tanintharyi. The importance of China and Thailand as Myanmar's trading partners increased when the United States and the European Union imposed economic sanctions on Myanmar in response to the human rights abuses of the military government. Immediately after Myanmar's military signed a series of bilateral resource trade and infrastructure deals with China in 1988, the Commander in Chief of the Thai Armed Forces became the first foreign dignitary to visit Myanmar since the military gunned down hundreds of pro-democracy student activists earlier that same year (the 8888 Uprising). The meeting resulted in 35 Thai companies receiving 47 logging concessions in border areas claimed by Karen villagers and the KNU. Within a few months of signing the logging deals, the Thai government passed a domestic logging ban (ostensibly in response to logging-induced landslides and flooding) that furthered Thai logging interests with its neighbors, resulting in an explosion of demand for logs from Myanmar and facilitating timber deals among KNU leaders, Thai businessmen and state officials, and Burmese commanders and government officials for timber from KNU-controlled territory along the Thai border. The logging business further weakened KNU security by increasing opportunities for corruption, which facilitated yet more resource deals. The Tatmadaw also used the logging roads to move troops closer to the Thai border, setting up military units in these borderlands for the first time. Timber was not the only resource extracted by Thai companies in Tanintharyi, however. In the early 1990s, a consortium of foreign oil companies signed a deal with the Burmese military to allow an oil/gas pipeline to run overland across the northern portion of Tanintharyi to Thailand. To prepare for the construction and security of what became the Yetagun/Yadana pipeline, the Tatmadaw led a military offensive against the KNU, which had territorial control over some of the pipeline area, forcibly displacing villagers from a wide swath around the pipeline route. International civil society groups pressured the oil/gas consortium to establish the Tanintharyi Nature Reserve in the forests that immediately bordered the pipeline to the southeast. In practice, the creation of the reserve curtailed local forest use and customary management practices of the Karen, while also adding a further security buffer between the pipeline and Karen villagers and KNU members. At the height of the logging boom along the Thai borderlands in the 1990s, and as the military was negotiating with the KNU to sign a ceasefire, the Tatmadaw initiated a large military offensive in the northern stretches of the KNU 4th Brigade. The Tatmadaw pushed down from the pipeline area, where they had secured their first territorial inroads into the region, forcing KNLA troops to the Thai border.
Karen villagers fled into forests as internally displaced persons and across the Thai border as refugees. Remaining Karen villagers were resettled (some forcibly, some voluntarily) into "model villages" along the government-controlled Union Road that ran north-south along the western edge of Tanintharyi. As a result of the ceasefire agreements of 1989-1995, direct warfare was replaced with "ceasefire capitalism", whereby land concessions were strategically allocated in previously contested areas to extract rent and make the territory legible to the state, leading to deforestation and the displacement of local communities. The military setbacks that greatly weakened the KNU during the mid-1990s led to a contraction of the area under their control, thus reducing their revenue from cross-border trade, and to the displacement of thousands of Karen people along the border with Thailand. Hence, after the ceasefires, the increase in Tatmadaw-controlled areas enabled greater infrastructure development (e.g., roads) and led to an increase in logging concessions awarded during the early part of the land-cover regime shift, particularly along the Thailand-Myanmar border. The timber sector became a lucrative area for informal foreign investment, for example for Thai logging companies that received concessions from both the military government and ethnic armed groups in ethnic regions along the Thai border (Karen, Kayah, and Shan States, and the Tanintharyi Region), where the most intensive logging occurred. At the turn of the century, the military encouraged domestic companies to help industrialise the agricultural sector, which aligned with its experimentation with crony capitalism. By 1999, Myanmar's military government had launched the first oil palm development programme, with the top military leader proclaiming Tanintharyi the future "edible oil palm big pot of the nation". By the time the second oil palm development programme concluded in 2013, the government claimed that a total of nearly 360,000 acres (~1457 km²) had been planted out of the 1.9 M acres (~7689 km²) awarded. The 1.9 M acres of oil palm concessions represented nearly 20% of the total land area of Tanintharyi, making it the highest concentration of land allocated for agribusiness in the country and representing more than one-third of the total area of Myanmar's agribusiness estates. While the Tatmadaw's most significant military offensives occurred in the north of Tanintharyi, oil palm concessions were allocated primarily in the southern and eastern areas. Oil palm concessions were allocated mostly throughout the 2000s in forested areas of Tanintharyi that had little previous state military presence, as they were located within forest reserves demarcated by the state or the KNU (which has its own forest department). The forests provided cover for KNLA soldiers still engaged in guerrilla fighting. In some cases, the oil palm concessionaires, especially those crony companies that also operated logging subsidiaries, cleared forests inside their oil palm concessions, although they usually never substantially planted oil palm. In addition to "conversion timber" coming off oil palm estates, the military government also allocated numerous logging concessions in forest reserves to the same cronies who received oil palm concessions. The official annual log volume quota for Tanintharyi was 30,000 cubic tons, one of the highest for any state or region in the country.
In addition to oil palm, regional businessmen expanded rubber plantations by tens of thousands of acres into northern Tanintharyi from the mid-2000s, oftentimes backed by the Tatmadaw or the Mon rebel group. The process of reallocating land from individual smallholders to crony companies was legally enabled by the passage of the 1991 Wastelands Act, which allowed land to be reallocated for the production of agricultural commodities for domestic self-sufficiency or foreign exchange earnings. Together with the edible oils self-sufficiency policy, these policies enabled the development of oil palm concessions that led to widespread deforestation and internal displacement, with a 900% increase in planted area between 2000 and 2010, coinciding with the latter part of the land-cover regime shift. For example, the Yuzana company owned by Htay Myint, a well-connected crony, was awarded an oil palm concession in the Pachan reserve forest (the Yuzana 1 plantation), while Karen villagers near Lenya were evicted to make way for the Yuzana 2 plantation.

Self-Reinforcing Processes

Throughout the early to mid-2000s, the new agricultural production-oriented regime created by these triggering events began to stabilise, continually reinforced by the industrial agricultural system, roadside village settlements, militarised state perimeters, infrastructure development, and the in-migration of agricultural laborers. By the 2000s, many Karen villages located in KNU territories had been forcibly emptied. For the larger oil palm concessions in operation, companies had built roads into these areas for the first time. Migrant laborers arrived to work the plantations, living in company villages inside the concession boundaries. Other landless migrants arrived in this newly opened land frontier in search of new opportunities. In 2011, Myanmar underwent another significant political transition, with a quasi-democratically elected government put in place. The new government administration quickly passed a series of policies and laws that sought to open the state economy to foreign investors, particularly in agribusiness and natural resources. Tanintharyi, in particular, received ample attention from foreign investors, especially neighboring Thailand, for its strategic ocean access and trade routes to Thailand. The economic interests of large-scale agricultural businesses were entrenched, at the expense of smallholders, by the Farmland Law and the Vacant, Fallow, and Virgin Land Management Law of 2012. The latter, in particular, allowed areas of up to 50,000 acres (~202 km²) outside the Permanent Forest Estate, including forested land and land occupied by farmers unable to obtain formal tenure recognition, to be leased for agricultural concessions for 30 years. The new 2012 land laws also legally supported the previous allocation of oil palm concessions to private companies, allowing the concessionaires in most cases to retain concession rights. Foreign investors interested in palm oil processing (and rubber latex) started to make deals with the government and local companies in Tanintharyi. Regional governments (Thailand, China, Japan) drew up plans for the Dawei Special Economic Zone, which would be Southeast Asia's largest if fully developed.
In addition to these various development initiatives since 2010 that helped to reinforce Tanintharyi's new agriculture-oriented regime, thereby preventing deforested areas from reverting to forest, renewed international conservation efforts in Tanintharyi are working to maintain existing forest cover. International support to the government's and the KNU's forest departments aims to achieve better conservation outcomes for Tanintharyi's state and rebel forest reserves, with at least one to be upgraded to a national park. Forest-based communities in areas targeted by international conservation efforts will come under increasing scrutiny, with the need to conform to more forest-friendly management practices. These conservation efforts, matched by other global environmental mechanisms (REDD+, FLEGT), will further reinforce the agriculture-oriented regime currently in place.

Discussion

Our study is the first to identify a regional-scale land-cover regime shift, which could indicate that a regime shift in the overall land system may be occurring at a previously unrecognised scale. Previous documentation of land-system regime shifts has been local in scale (e.g., community level for Müller et al.; 14-72 km² for Zaehringer et al.; and 6,348 km² for Trincsi). Our study demonstrates that identification of a landscape-level land-cover regime shift highlights areas of particular interest: where land cover change was most rapid, corresponding changes in land use patterns at various spatial and temporal scales have most likely occurred, thereby indicating where investigations into the preconditions, triggers, and self-reinforcing processes of the potential regime shift in the land system should be focused.

Systematic Investigation of Land-Cover Regime Shifts using Complementary Analytical Frameworks

We investigated the complex dynamics of land-cover regime shifts through a novel integration of complementary analytical frameworks. Through the spatially explicit quantitative Intensity Analysis framework, we identified the temporal occurrence of a land-cover regime shift in the Tanintharyi Region, Myanmar, and further characterised its land cover transitions. Analysis of land change intensities at the interval, category, and transition levels quantified the spatial and temporal extent of the regime shift, which we then contextualised through a structured narrative of the explanatory preconditions, triggers, and self-reinforcing processes driving it. The two analytical frameworks we integrated had never previously been applied together to study land-cover regime shifts, which remain a poorly understood phenomenon of land change globally. While the analytical framework developed by Ramankutty and Coomes was designed to direct the attention of scientists and researchers to the study of the processes governing land-use regime shifts, the Intensity Analysis framework developed by Aldwaik and Pontius has been applied to the study of the patterns of land cover change, which then guides the articulation of the processes driving these changes to gain insights into the dynamics of land change. Our study, therefore, provides the first example that integrates these two complementary frameworks, applied to the investigation of land-cover regime shifts through a systematic process.
As a result, the spatially explicit, mixed-methods approach we present here represents a benchmark for future studies investigating regime shifts, and also presents an opportunity to accelerate our knowledge of regime shifts in other geographic areas through its application. Moreover, the approach is scalable for the same study area of interest, thereby permitting analyses at multiple spatial domains or scales. Previous studies have identified the occurrence of regime shifts at the village level or the national level, but no study has analysed multiple scales over the same study area of interest. Here, we showed that the occurrence of the regime shift was detected at both the regional (Tanintharyi) and district (i.e., Dawei, Myeik, Kawthoung) levels. These results provide empirical evidence that land-cover regime shifts occur at multiple spatial scales. Although we did not carry out an analysis at the township or village levels, evidence of the regime shift in two villages in Dawei District has been documented, further confirming the scale independence of regime-shift detection. Our analysis also showed that while broad land change dynamics were similar at the regional and district levels, nuanced and spatially heterogeneous land cover transitions were also detected among the three Tanintharyi districts, thereby highlighting areas that could be the subject of more detailed investigation at local levels.

Resolution is Crucial but Is a Double-Edged Sword

Sufficient resolution in two aspects is critical when investigating the dynamics of land-cover regime shifts. First is temporal resolution; specifically, land cover maps need to be available at a frequency that allows the identification of the regime shift. The present study emerged from a previous analysis of land cover change in the Tanintharyi Region, which used two time-points (i.e., 1995 and 2015); as a result, the regime shift remained undetected due to the low temporal resolution. It was only when an annual analysis of land cover change was conducted on the ESA CCI data that the regime shift became detectable. It is therefore plausible that other studies of land cover change, and of deforestation in particular, that used low temporal resolution (i.e., large time intervals) failed to detect important high-frequency dynamics such as regime shifts. Indeed, the dearth of annual land cover datasets (or of the capacity to develop annual datasets efficiently) has constrained the temporal resolution of previous studies. The availability of annual land cover datasets offers new opportunities to discover previously unrecognised regime shifts. Second, the analysis of land-cover regime shifts requires sufficient resolution of land cover categories to permit the characterisation of land cover transitions (and therefore of the drivers). Although a global forest change dataset may be used to identify regime shifts in forest cover from 2000 onwards given its annual temporal resolution, and may allow finer-scale detection of regime shifts than our study (30-m versus 300-m pixel size, respectively), a binary forest/non-forest classification does not permit the characterisation of land cover transitions that is possible with a land cover product containing detailed land cover categories. Studies with low land-cover class resolution remain severely constrained in both determining systematic land cover transitions and identifying the drivers of a regime shift.
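A toy calculation illustrates the temporal-resolution point: sampling the same landscape at only two time-points can average away a burst of change. All numbers below are invented:

# Invented annual change rates (% of landscape changing per year).
annual = [0.5, 0.6, 3.0, 3.2, 2.9, 0.4, 0.5, 0.6, 0.5, 0.4]

# A two-time-point study sees, at best, the mean rate over the whole span;
# in practice it sees even less, because pixels that change and then revert
# cancel out in a net two-date comparison.
print(sum(annual) / len(annual))  # ~1.26 %/yr; the 3 %/yr burst is invisible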
Despite the more sophisticated land cover analysis possible with greater land cover category resolution, our dataset exhibited constraints that required additional analysis. In this study, in order to identify timber extraction by logging concessions and the expansion of agro-industrial plantations (primarily oil palm and rubber) as proximate causes of deforestation during the early and latter stages of the 1997-2004 regime shift, a subsequent visual land cover assessment was required to further differentiate sub-classes of Mosaic Vegetation, Shrubland, and Cropland. Thus, the current ESA CCI global land cover map product is constrained in detecting very specific land covers (such as oil palm and rubber, in our case), given that its spatial, spectral, and temporal resolution necessitates the broad generalisation of land cover categories applicable across the globe. Future land cover map products with high spatial, temporal, and categorical resolutions will enable more efficient discovery and interpretation of land-cover regime shifts, and will greatly facilitate improved land change research. Previous research on land-system regime shifts used long historical timelines. Time-series spatial data derived from remote sensing are historically limited, so our research is, in reality, limited to recent land-cover regime shifts, and potentially to future ones as data become available. In fact, time-series land cover data have only become available very recently (e.g., the 24-year ESA CCI annual land cover product was released to the public in April 2017), potentially opening the opportunity to revisit well-known cases of land change and to investigate the prevalence and significance of regime shifts in global land change. High-resolution data allowed us to precisely identify and characterise the land-cover regime shift and its dynamics. However, the systematic transitions were challenging to link with the processes driving them simply because these transitions varied from year to year, making their interpretation difficult. It is possible that systematic transitions in other regions of the world may be more consistent, in which case the drivers of forest loss, for example, would be more consistent as well. In our case, the variability of systematic transitions showed us that forest transitioned into a range of land cover categories that varied over time. We recognise that attempting to explain annual processes with a narrative is infeasible when systematic transitions vary year on year, a difficulty further compounded by the limited literature available at sub-regional to local levels. While we used a narrative perspective in this study, participatory approaches carried out at local scales (e.g., village, household), such as those employed in other studies of land-system regime shifts, provide an agent-based perspective focused on understanding how the agency of actors involved in and excluded from land use decision-making shapes both short- and long-term land-system transformations. The integration of perspectives is therefore essential in understanding land-system change, since each perspective deals with specific organisational levels and temporal scales of coupled human-environment systems.
Combining these perspectives could, for example, help identify probable causes of the exceptionally slow rate of change during the 2002-2003 interval within the regime shift, which could not be explained even with an extensive review of the literature. Complementing the quantitative landscape-scale approach presented in this study with qualitative local-level methods can integrate these different perspectives and maximise the detailed information afforded by high resolution.

Historical and Possible Future Drivers of Land-Cover Regime Shifts in Tanintharyi

Our systematic approach for investigating a land-cover regime shift through the integration of complementary analytical frameworks afforded a deeper appreciation of the underlying processes that determined the patterns of land change in Tanintharyi. Previous studies that detected land-system regime shifts were constrained because their land use transition curves only showed net changes; they therefore failed to reveal the total gross landscape change and the gross inter-category transitions that constitute the most systematic landscape changes, and likely underappreciated the complex underlying processes driving the changes. In contrast, our systematic approach enabled the linkage of patterns to processes: we detected the principal signals of the land-cover regime shift in Tanintharyi, where forest transitioned into different land cover categories that varied over time, by first identifying the spatial and temporal extent of the regime shift, then characterising the gross inter-category transitions, which finally directed our focus to the possible underlying drivers that explained the regime shift. As land change is spatially and temporally heterogeneous, driven by dynamic forces and their interactions that arise from specific environmental, social, economic, political, and historical contexts, our approach allowed us to understand the dynamics of the feedback mechanisms that linked the patterns of land cover change with the processes of land use change. Müller et al. identified two distinct pathways that can induce land-system regime shifts (which may also act in combination): a "tipping point" pathway, wherein subtle perturbations from underlying drivers accrue and eventually tip over a critical threshold, resulting in a systemic shift to a new land-system regime; or a "punctuation" pathway, wherein influential external events punctuate the period of equilibrium of an existing land-system regime, resulting in a drastic change to a new stable land-system regime. In Tanintharyi, our results showed that the land-cover regime shift was induced by a punctuation pathway, wherein the equilibrium was punctuated by decisive political events (armed conflict ceasefires, bilateral trade deals, road infrastructure development, the granting of resource concessions, and enabling policies on land allocation and edible oils production), thereby transforming the formerly forest-dominated landscape into a new agricultural production-oriented landscape.
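To make the net-versus-gross point above concrete, consider a toy two-category transition matrix (areas in km², all numbers invented): large areas swap between the categories while the net change is nearly zero.

import numpy as np

# Rows = initial category (A, B); columns = final category.
m = np.array([[800, 200],
              [190, 810]])

net_a = abs(m[:, 0].sum() - m[0].sum())  # |final A - initial A| = 10 km²
gross = m.sum() - np.trace(m)            # 200 + 190 = 390 km² actually moved
print(net_a, gross)                      # a net-change curve reports 10, not 390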
Moreover, evidence indicated that the regime shift is irreversible given several prominent developments: increasing foreign investment following the initiation of democratic and capitalist transitions; the subsequent lifting of Western sanctions, removing obstacles for foreign companies to invest in Myanmar; and the 2012 Foreign Investment Law, which included very significant liberalisation measures to encourage foreign direct investment in the natural resource extraction and agribusiness production sectors; all of which cause the agricultural production-oriented landscape to persist. Beyond providing a historical contextual analysis to explain a robust quantitative land cover change analysis, our study also suggests that another regime shift could be on the horizon for Tanintharyi. In Myanmar, the interplay of underlying factors, particularly armed conflict in ethnic borderland regions, weak land tenure, and economic interests, has cemented the role of formal land concessions (e.g., logging, agribusiness) in deforestation. While nearly 20% of the region had been allocated to oil palm concessions by the end of 2013, and at present self-reinforcing processes appear to have stabilised land change since 2004, less than 20% of the oil palm concession area had actually been planted. Full development of these concessions represents a new trigger that would further accelerate a transition to a potentially new agriculture-dominated landscape, causing more widespread deforestation (including in existing or proposed protected areas) and internal displacement. Also, full development of the Dawei Special Economic Zone, although stalled since 2008, could catalyse further infrastructure development in the region, particularly along the forested Thailand-Myanmar border, and thus lead to further deforestation. A simulation study along the "Road to Dawei", for example, demonstrated this possibility: a conventional approach to road construction was likely to have positive economic impacts in the region, especially in the short term, but also negative consequences for the integrity of the ecosystem, which in turn might negatively impact the investment itself and its economic outcomes in the medium and longer term.

Conclusions and Recommendations

Land-system regime shifts have been recognised as globally significant, although still poorly understood, land change phenomena. In this study, we investigated the complex dynamics of land-cover regime shifts by integrating complementary analytical frameworks, and applied this approach to the dynamic and rapidly transitioning landscape of the Tanintharyi Region in southern Myanmar. We found that the land-cover regime shift resulted in rapid and extensive deforestation from 1997-2004, due to timber extraction by logging concessions and the expansion of agro-industrial plantations during the early and latter stages of the regime shift, respectively. Our study therefore provides the first direct, quantitative evidence of a broad-scale regime shift, emphasising that the land cover changes were non-linear, a dynamic that had not been detected by previous studies that reported the extensive land cover changes in the region. Our study also detected the occurrence of the regime shift at both the regional and district levels, providing evidence that regime shifts occur at multiple spatial scales and confirming scale independence in the detection of regime shifts.
The political and economic conditions that developed within Myanmar and its neighboring countries, primarily Thailand and China, as well as the socio-economic and political interactions between Myanmar and these two countries, set the stage for the land-cover regime shift. Government policies that facilitated the establishment of large-scale agro-industrial concessions, which emerged through state-mediated capitalism and politico-business complexes and the influx of foreign direct investment, reinforced the new agricultural production-oriented regime and prevented it from reverting to the previous forest-dominated regime. These social, economic, and political complexities necessitated a deeper investigation and treatment of the narratives regarding the preconditions, triggers, and self-reinforcing processes that explained the regime shift, which allowed us to connect the spatial patterns with the processes driving land cover change. Our study provides a template for future studies investigating land-cover regime shifts using a spatially explicit, scalable, and quantifiable approach based on the integration of complementary analytical frameworks, and directs further attention to uncovering the dynamics of a potential regime shift in the overall land system. The future work we envision includes understanding land-cover regime shifts, and land change more broadly, in terms of the dynamics and variation of spatial determinants and landscape patterns across space and time; this could help resolve some of the challenges of developing model projections of future land change, as well as support approaches that further characterise landscape changes and their driving processes. Next, integration of the telecoupling framework for analysing regime shifts is important to further explore cases such as the Tanintharyi Region, where regional-global external forces (e.g., markets, policies) have influenced local land use and land cover change. Finally, our approach offers new opportunities to study previously unrecognised regime shifts in other geographic areas, thereby advancing our understanding of the land-system dynamics that drive global environmental change.
#pragma once

#include <mbgl/gl/object.hpp> // UniqueVertexArray
#include <mbgl/util/optional.hpp>

#include <cstddef>
#include <vector>

namespace mbgl {
namespace gl {

// Describes a drawable slice of a shared vertex/index buffer: where the
// slice starts (offsets) and how many elements it currently spans (lengths).
class Segment {
public:
    Segment(std::size_t vertexOffset_,
            std::size_t indexOffset_,
            std::size_t vertexLength_ = 0,
            std::size_t indexLength_ = 0)
        : vertexOffset(vertexOffset_),
          indexOffset(indexOffset_),
          vertexLength(vertexLength_),
          indexLength(indexLength_) {}

    const std::size_t vertexOffset;
    const std::size_t indexOffset;
    std::size_t vertexLength;
    std::size_t indexLength;

private:
    friend class Context;
    // Lazily created and cached by Context the first time the segment is drawn.
    mutable optional<UniqueVertexArray> vao;
};

// The template parameter tags the segment vector with its attribute layout;
// it is not used at runtime.
template <class Attributes>
class SegmentVector : public std::vector<Segment> {
public:
    SegmentVector() = default;
};

} // namespace gl
} // namespace mbgl
A judge has refused an appeal that would delay a trans woman prisoner’s gender confirmation surgery. Michelle-Lael Norsworthy was the first inmate in California to be granted surgery by a court. The judge found that while she had been allowed hormone treatment, and had been recommended for surgery by her psychologist, the prison “chose to ignore the clear recommendations of her mental health provider” and “instead of following his recommendations, they removed her from his care.” The Department of Corrections and Rehabilitation have had their request for a delay denied, and will now appeal to the 9th Circuit Court of Appeals. Kris Hayashi, Executive Director of the Transgender Law Center, said before the ruling: “The state provides essential medical care to all people being held in prison, and everyone – transgender or not – should find it troubling that the state is trying to take that away from Michelle just because of who she is.” Judge John Tigar said that he accepted the legal complications of the case, but believed any further delay would cause Ms Norsworthy serious psychological harm. Hayashi welcomed the statement: “Judge Tigar recognized the urgency for Michelle in receiving the care that all the evidence shows is critical for her health.” Earlier this month, musicians Michael Stipe and Sir Elton John spoke out about “horrific” treatment of trans women prisoners.
import matplotlib
matplotlib.use('Agg')

import sys

import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt

from keyname import keyname as kn
from fileshash import fileshash as fsh

# open-type fonts
matplotlib.rcParams['pdf.fonttype'] = 42

dataframe_filename = sys.argv[2]

df_key = pd.read_csv(sys.argv[1])
df_data = pd.read_csv(dataframe_filename)

print("data loaded!")

# Map each metric to its descriptor columns from the key file.
key = {
    row['Metric'] : {
        col : val
        for col, val in row.items()
        if col != 'Metric'
    }
    for idx, row in df_key.iterrows()
}

df_data['Dimension'] = df_data.apply(
    lambda x: key[x['Metric']]['Dimension'],
    axis=1
)
df_data['Dimension Type'] = df_data.apply(
    lambda x: key[x['Metric']]['Dimension Type'],
    axis=1
)
df_data['Dimension'] = df_data.apply(
    lambda x: x['Dimension Type'] + " " + str(x['Dimension']),
    axis=1
)
df_data['Metric'] = df_data.apply(
    lambda x: (
        ('Sliding ' if key[x['Metric']]['Sliding'] else '')
        + key[x['Metric']]['Base Metric']
    ),
    axis=1
)
# Add tiny jitter so kernel density estimation doesn't fail on
# degenerate (all-identical) score distributions.
df_data['Tag Mean Match Score'] = df_data.apply(
    lambda x: x['Tag Mean Match Score'] + np.random.normal(0, 1e-8),
    axis=1
)

print("data crunched!")

g = sns.FacetGrid(
    df_data,
    col='Metric',
    row='Dimension',
    hue='Dimension Type',
    margin_titles=True,
    sharey=False,
    row_order=(
        sorted(
            [x for x in df_data['Dimension'].unique() if 'Mean' in x],
            key=lambda s: next(int(tok) for tok in s.split() if tok.isdigit())
        )
        + sorted(
            [x for x in df_data['Dimension'].unique() if 'Minimum' in x],
            key=lambda s: next(int(tok) for tok in s.split() if tok.isdigit())
        )
    )
).set(xlim=(0, 1))

g.map(sns.distplot, "Tag Mean Match Score", rug=True, hist=False)

outfile = kn.pack({
    'title' : kn.unpack(dataframe_filename)['title'],
    'bitweight' : kn.unpack(dataframe_filename)['bitweight'],
    'seed' : kn.unpack(dataframe_filename)['seed'],
    '_data_hathash_hash' : fsh.FilesHash().hash_files([dataframe_filename]),
    '_script_fullcat_hash' : fsh.FilesHash(
        file_parcel="full_parcel",
        files_join="cat_join"
    ).hash_files([sys.argv[0]]),
    # '_source_hash' :kn.unpack(dataframe_filename)['_source_hash'],
    'ext' : '.pdf'
})

plt.savefig(
    outfile,
    transparent=True,
    bbox_inches='tight',
    pad_inches=0
)

print("output saved to", outfile)
import iraf

no = iraf.no
yes = iraf.yes

from axe import axesrc

# Point to default parameter file for task
_parfile = 'axe$backest.par'
_taskname = 'backest'

######
# Set up Python IRAF interface here
######
def backest_iraf(grism, config, np, interp, niter_med, niter_fit, kappa,
                 smooth_length, smooth_fwhm, old_bck, mask, in_af, out_back):
    # properly format the strings
    grism = axesrc.straighten_string(grism)
    config = axesrc.straighten_string(config)
    in_af = axesrc.straighten_string(in_af)
    out_back = axesrc.straighten_string(out_back)

    # transform the IRAF booleans to Python booleans
    old_bck = (old_bck == iraf.yes)
    mask = (mask == iraf.yes)

    # check whether there is something to start
    if grism is not None and config is not None:
        axesrc.backest(grism=grism,
                       config=config,
                       np=np,
                       interp=interp,
                       niter_med=niter_med,
                       niter_fit=niter_fit,
                       kappa=kappa,
                       smooth_length=smooth_length,
                       smooth_fwhm=smooth_fwhm,
                       old_bck=old_bck,
                       mask=mask,
                       in_af=in_af,
                       out_bck=out_back)
    else:
        # print the help
        iraf.help(_taskname)

# Initialize IRAF Task definition now...
# (PkgName and PkgBinary are expected to be provided by the enclosing
# IRAF package when this module is loaded.)
parfile = iraf.osfn(_parfile)
a = iraf.IrafTaskFactory(taskname=_taskname, value=parfile, pkgname=PkgName,
                         pkgbinary=PkgBinary, function=backest_iraf)
/* Fills R with a NaN whose significand is described by STR. If QUIET, we force a QNaN, else we force an SNaN. The string, if not empty, is parsed as a number and placed in the significand. Return true if the string was successfully parsed. */ bool real_nan (REAL_VALUE_TYPE *r, const char *str, int quiet, enum machine_mode mode) { const struct real_format *fmt; fmt = REAL_MODE_FORMAT (mode); gcc_assert (fmt); if (*str == 0) { if (quiet) get_canonical_qnan (r, 0); else get_canonical_snan (r, 0); } else { int base = 10, d; memset (r, 0, sizeof (*r)); r->cl = rvc_nan; while (ISSPACE (*str)) str++; if (*str == '-') str++; else if (*str == '+') str++; if (*str == '0') { str++; if (*str == 'x' || *str == 'X') { base = 16; str++; } else base = 8; } while ((d = hex_value (*str)) < base) { REAL_VALUE_TYPE u; switch (base) { case 8: lshift_significand (r, r, 3); break; case 16: lshift_significand (r, r, 4); break; case 10: lshift_significand_1 (&u, r); lshift_significand (r, r, 3); add_significands (r, r, &u); break; default: gcc_unreachable (); } get_zero (&u, 0); u.sig[0] = d; add_significands (r, r, &u); str++; } if (*str != 0) return false; lshift_significand (r, r, SIGNIFICAND_BITS - fmt->pnan); r->sig[SIGSZ-1] &= ~SIG_MSB; r->signalling = !quiet; } return true; }
import { Product } from "@shopware-pwa/shopware-6-client/src/interfaces/models/content/product/Product";
import { UiProductOption } from "@shopware-pwa/helpers";

interface ProductOptions {
  [attribute: string]: UiProductOption[];
}

/**
 * Collect the distinct option values of all product variants,
 * grouped by the option group name (e.g. "color", "size").
 */
export function getProductOptions({
  product
}: {
  product?: Product;
} = {}): ProductOptions {
  const typeOptions: ProductOptions = {};
  product?.children?.forEach(variant => {
    if (!variant?.options?.length) {
      return;
    }
    for (const option of variant.options) {
      if (option.group?.name) {
        if (!typeOptions.hasOwnProperty(option.group.name)) {
          typeOptions[option.group.name] = [];
        }
        // add each option value only once per group
        if (
          !typeOptions[option.group.name].find(
            (valueOption: UiProductOption) => option.id === valueOption.code
          )
        ) {
          typeOptions[option.group.name].push({
            label: option.name,
            code: option.id,
            value: option.name
          } as UiProductOption);
        }
      }
    }
  });
  return typeOptions;
}
import Money from './Money' import Expression from './Expression' const currenciesToKey = (currencies: CurrencyPair): string => { return `${currencies.from}-${currencies.to}` } interface Rates { [index: string]: number } interface CurrencyPair { from: string to: string } export default class Bank { private rates: Rates = {} public reduce = (source: Expression, to: string): Money => { return source.reduce(this, to) } public rate = (from: string, to: string): number => { if (from === to) { return 1 } if (!this.rates[currenciesToKey({ from, to })]) { throw new Error('Exchange rate not registered') } return this.rates[currenciesToKey({ from, to })] } public addRate = (currencies: CurrencyPair, rate: number): void => { this.rates[currenciesToKey(currencies)] = rate } }
package org.ipso.lbc.common.utils;

import org.ipso.lbc.common.frameworks.logging.LoggingFacade;

/**
 * Created by <NAME> (<NAME>, iPso), 2016/3/17 12:58. Contact <EMAIL>.<br>
 */
public class StringUtils {

    /** Returns how many times target occurs within src. */
    public static int existCount(String src, String target) {
        int r = 0;
        int targetLength = target.length();
        while (src.contains(target)) {
            r++;
            src = src.substring(src.indexOf(target) + targetLength);
        }
        return r;
    }

    public static String removeFromTail(String src, String target, int th) {
        return removeFromTail(src, target, th, false);
    }

    /** Cuts src at the th-th occurrence of target, counted from the tail. */
    public static String removeFromTail(String src, String target, int th, Boolean includeTarget) {
        th = validate(th, existCount(src, target));
        String r = new String(src);
        for (int i = 0; i < th - 1; i++) {
            r = r.substring(0, r.lastIndexOf(target));
        }
        r = r.substring(0, r.lastIndexOf(target) + (includeTarget ? 0 : 1));
        return r;
    }

    public static String removeFromHead(String src, String target, int th) {
        return removeFromHead(src, target, th, false);
    }

    /** Cuts src at the th-th occurrence of target, counted from the head. */
    public static String removeFromHead(String src, String target, int th, Boolean includeTarget) {
        th = validate(th, existCount(src, target));
        String r = new String(src);
        for (int i = 0; i < th - 1; i++) {
            r = r.substring(r.indexOf(target) + 1);
        }
        r = r.substring(r.indexOf(target) + (includeTarget ? 1 : 0));
        return r;
    }

    /** Clamps th to the actual number of occurrences. */
    private static Integer validate(Integer th, Integer existCount) {
        if (th > existCount) {
            LoggingFacade.debug("The th is an invalid value because it's greater than c, we make th=c here.(original c = "
                    + existCount + ", th = " + th + ")");
            return existCount;
        } else {
            return th;
        }
    }
}
The Validity and Clinical Utility of the Personality Inventory for DSM-5 Response Inconsistency Scale ABSTRACT The Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012) is a self-report instrument designed to assess the personality traits of the alternative model of personality disorders (AMPD) in Section III of the DSM-5. Despite its relatively recent introduction to the field, the instrument is frequently and widely used. One criticism of this instrument is that it does not include validity scales to detect potentially invalidating response styles, including noncredible over- and underreporting and inconsistent (random) responding. Keeley, Webb, Peterson, Roussin, and Flanagan constructed an inconsistency scale (the PID-5-INC) to assess random responding on the PID-5 and proposed a number of potential cut scores that could be applied. In this study, we attempted to cross-validate the PID-5-INC, including whether the scale could detect randomly generated protocols and distinguish them from nonrandom protocols produced by two student and two clinical samples. The PID-5-INC successfully distinguished random from nonrandom protocols, and the best cut scores were similar to those reported by Keeley et al. We also found that a relatively low amount of random responding compromised the psychometric validity of the PID-5 trait scales, which extended previous work on this instrument.
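As a minimal sketch of how a scale of this kind is typically scored (my own illustration; the item pairs and cut score below are placeholders, not the actual PID-5-INC values): an inconsistency scale sums the absolute response differences across pairs of highly correlated items, so random protocols score high while attentive ones score low.

import numpy as np

# Hypothetical item pairs; each pair holds two items that normally receive
# similar answers. These indices are illustrative, not the real PID-5-INC pairs.
ITEM_PAIRS = [(0, 17), (3, 42), (5, 88), (12, 103)]
CUT_SCORE = 7  # placeholder; the study evaluates several candidate cut scores

def inconsistency_score(responses):
    """Sum of absolute within-pair differences for one protocol."""
    return sum(abs(int(responses[i]) - int(responses[j])) for i, j in ITEM_PAIRS)

def is_flagged(responses, cut=CUT_SCORE):
    return inconsistency_score(responses) >= cut

# Random protocols should score high, consistent ones low.
rng = np.random.default_rng(0)
random_protocol = rng.integers(0, 4, size=220)    # the PID-5 has 220 items scored 0-3
uniform_protocol = np.zeros(220, dtype=int)       # an (unrealistically) uniform responder
print(inconsistency_score(random_protocol), inconsistency_score(uniform_protocol))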
// Copyright 2023 Fraunhofer AISEC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // // $$\ $$\ $$\ $$\ // $$ | $$ |\__| $$ | // $$$$$$$\ $$ | $$$$$$\ $$\ $$\ $$$$$$$ |$$\ $$$$$$\ $$$$$$\ $$$$$$\ // $$ _____|$$ |$$ __$$\ $$ | $$ |$$ __$$ |$$ |\_$$ _| $$ __$$\ $$ __$$\ // $$ / $$ |$$ / $$ |$$ | $$ |$$ / $$ |$$ | $$ | $$ / $$ |$$ | \__| // $$ | $$ |$$ | $$ |$$ | $$ |$$ | $$ |$$ | $$ |$$\ $$ | $$ |$$ | // \$$$$$$\ $$ |\$$$$$ |\$$$$$ |\$$$$$$ |$$ | \$$$ |\$$$$$ |$$ | // \_______|\__| \______/ \______/ \_______|\__| \____/ \______/ \__| // // This file is part of Clouditor Community Edition. package api import ( "clouditor.io/clouditor/internal/api" ) // PayloadRequest describes any kind of requests that carries a certain payload. // This is for example a Create/Update request carrying an embedded message, // which should be updated or created. type PayloadRequest = api.PayloadRequest // CloudServiceRequest represents any kind of RPC request, that contains a // reference to a cloud service. // // Note: GetCloudServiceId() is already implemented by the generated protobuf // code for the following messages because they directly have a cloud_service id // field: // - orchestrator.RemoveControlFromScopeRequest // - orchestrator.ListControlsInScopeRequest // - orchestrator.GetCloudServiceRequest // - orchestrator.RemoveCloudServiceRequest // - orchestrator.UpdateMetricConfigurationRequest // - orchestrator.GetMetricConfigurationRequest // - orchestrator.ListMetricConfigurationRequest // - orchestrator.MetricChangeEvent // - orchestrator.TargetOfEvaluation // - orchestrator.RemoveTargetOfEvaluationRequest // - orchestrator.GetTargetOfEvaluationRequest // - orchestrator.ListTargetsOfEvaluationRequest // - orchestrator.Certificate // // All other requests, especially in cases where the cloud service ID is // embedded in a sub-field need to explicitly implement this interface in order. // This interface is for example used by authorization checks. type CloudServiceRequest = api.CloudServiceRequest
/*
 *  Copyright © 2006-2012 SplinterGU (Fenix/Bennugd)
 *  Copyright © 2002-2006 Fenix Team (Fenix)
 *  Copyright © 1999-2002 <NAME> (Fenix)
 *
 *  This file is part of Bennu - Game Development
 *
 *  This software is provided 'as-is', without any express or implied
 *  warranty. In no event will the authors be held liable for any damages
 *  arising from the use of this software.
 *
 *  Permission is granted to anyone to use this software for any purpose,
 *  including commercial applications, and to alter it and redistribute it
 *  freely, subject to the following restrictions:
 *
 *     1. The origin of this software must not be misrepresented; you must not
 *     claim that you wrote the original software. If you use this software
 *     in a product, an acknowledgment in the product documentation would be
 *     appreciated but is not required.
 *
 *     2. Altered source versions must be plainly marked as such, and must not be
 *     misrepresented as being the original software.
 *
 *     3. This notice may not be removed or altered from any source
 *     distribution.
 *
 */

#ifndef __MODEFFECTS_SYMBOLS_H
#define __MODEFFECTS_SYMBOLS_H

#include <bgddl.h>

#ifdef __BGDC__

#define BLUR_NORMAL  0
#define BLUR_3x3     1
#define BLUR_5x5     2
#define BLUR_5x5_MAP 3

#define GSCALE_RGB  0
#define GSCALE_R    1
#define GSCALE_G    2
#define GSCALE_B    3
#define GSCALE_RG   4
#define GSCALE_RB   5
#define GSCALE_GB   6
#define GSCALE_OFF -1

DLCONSTANT __bgdexport( mod_effects, constants_def )[] =
{
    { "BLUR_NORMAL" , TYPE_INT, BLUR_NORMAL  },
    { "BLUR_3x3"    , TYPE_INT, BLUR_3x3     },
    { "BLUR_5x5"    , TYPE_INT, BLUR_5x5     },
    { "BLUR_5x5_MAP", TYPE_INT, BLUR_5x5_MAP },
    { "GSCALE_RGB"  , TYPE_INT, GSCALE_RGB   },
    { "GSCALE_R"    , TYPE_INT, GSCALE_R     },
    { "GSCALE_G"    , TYPE_INT, GSCALE_G     },
    { "GSCALE_B"    , TYPE_INT, GSCALE_B     },
    { "GSCALE_RG"   , TYPE_INT, GSCALE_RG    },
    { "GSCALE_RB"   , TYPE_INT, GSCALE_RB    },
    { "GSCALE_GB"   , TYPE_INT, GSCALE_GB    },
    { "GSCALE_OFF"  , TYPE_INT, GSCALE_OFF   },
    { NULL          , 0       , 0            }
} ;

DLSYSFUNCS __bgdexport( mod_effects, functions_exports )[] =
{
    { "GRAYSCALE" , "IIB"   , TYPE_INT , 0    },
    { "RGBSCALE"  , "IIFFF" , TYPE_INT , 0    },
    { "BLUR"      , "IIB"   , TYPE_INT , 0    },
    { "FILTER"    , "IIP"   , TYPE_INT , 0    },
    { NULL        , NULL    , 0        , NULL }
};

char * __bgdexport( mod_effects, modules_dependency )[] =
{
    "libgrbase",
    NULL
};

#else

extern DLCONSTANT __bgdexport( mod_effects, constants_def )[];
extern DLSYSFUNCS __bgdexport( mod_effects, functions_exports )[];
/* match the definition above: an array of char pointers */
extern char * __bgdexport( mod_effects, modules_dependency )[];

#endif

#endif
Mechanical loading and how it affects bone cells: the role of the osteocyte cytoskeleton in maintaining our skeleton. Lack of physical activity causes bone loss and fractures not only in elderly people, but also in bedridden patients or otherwise inactive youth. This is fast becoming one of the most serious healthcare problems in the world. Osteocytes, cells buried within our bones, stimulate bone formation in the presence of mechanical stimuli, as well as bone degradation in the absence of such stimuli. As yet, we do not fully comprehend how osteocytes sense mechanical stimuli, and only know a fraction of the whole range of molecules that osteocytes subsequently produce to regulate bone formation and degradation in response to mechanical stimuli. This dramatically hampers the design of bone loss prevention strategies. In this review we will focus on the first step in the cascade of events leading to adaptation of bone mass to mechanical loading, i.e., on how osteocytes are able to perceive mechanical stimuli placed on whole bones. We will place particular emphasis on the role of the osteocyte cytoskeleton in mechanosensing. Given the crucial importance of osteocytes in maintaining a proper resistance against bone fracture, greater knowledge of the molecular mechanisms that govern the adaptive response of osteocytes to mechanical stimuli may lead to the development of new strategies towards fracture prevention and enhanced bone healing.
package p024io.fabric.sdk.android.p348a.p353e; import org.json.JSONObject; /* renamed from: io.fabric.sdk.android.a.e.h */ /* compiled from: CachedSettingsIo */ public interface C13894h { /* renamed from: a */ JSONObject mo43300a(); /* renamed from: a */ void mo43301a(long j, JSONObject jSONObject); }
'''
Consider the following numpy array:
    import numpy as np
    x = np.arange(10,21)
i) What will be the output of the following commands:
    a) print(x)
    b) print(x[-3])
    c) print(x[-4,:])
ii) Write the command to print all elements from index 1 to index 9 with a difference of 2.
iii) Write the command to print all elements from index 7 to index 2 using negative indexing.
'''
import numpy as np

x = np.arange(10, 21)   # [10 11 12 13 14 15 16 17 18 19 20]

# i)
print(x)       # a) the whole array
print(x[-3])   # b) 18, the third element from the end
# c) raises IndexError: a 1-D array cannot be indexed with two indices
# print(x[-4, :])

# ii) elements from index 1 through index 9 with a step of 2
print(x[1:10:2])   # [11 13 15 17 19]

# iii) elements from index 7 down to index 2 using negative indexing
print(x[7:1:-1])   # [17 16 15 14 13 12]
# Cube, the SNAKE/GRID colours and the D_RIGHT direction constant are
# assumed to be defined elsewhere in the game's own modules.

class Snake:
    '''Class that represents the main entity of the game.'''

    def __init__(self):
        self.body = [Cube((0, 0), SNAKE)]
        self.dirs = [D_RIGHT]
        self.belly = []

    def hit(self, c: Cube):
        # True when the head occupies the same position as cube c
        return self.body[0].position == c.position

    def eat(self):
        # queue a new segment at the head position; it joins the body
        # once the snake has moved off of it (see draw)
        self.belly.append(Cube(self.body[0].position, GRID))
        self.dirs.append(self.dirs[0])

    def collided(self):
        # True when the head hits any other body segment
        for i in range(1, len(self.body)):
            if self.hit(self.body[i]):
                return True
        return False

    def move(self, dir_=None):
        # ignore reversals (opposite directions differ by 2) and keep going
        if dir_ is None or abs(dir_ - self.dirs[0]) == 2:
            dir_ = self.dirs[0]
        self.dirs.insert(0, dir_)
        self.dirs = self.dirs[:-1]
        for i, c in enumerate(self.body):
            c.move(self.dirs[i])

    def draw(self, surface):
        # append a queued segment once the body has cleared its position
        if len(self.belly) > 0:
            b = self.belly[0]
            hit_ = False
            for c in self.body:
                if c.position == b.position:
                    hit_ = True
                    break
            if not hit_:
                self.belly.remove(b)
                self.body.append(b)
        for c in self.body:
            c.draw(surface)
Proton pump inhibitor use and the risk of fatty liver disease: A nationwide cohort study Proton pump inhibitor (PPI)-induced hypochlorhydria can change the composition of the gut microbiota, inducing overgrowth of small bowel bacteria, which has been suggested to promote the development of fatty liver disease through the gut-liver axis. In this study, we aimed to investigate the association between PPI use and the risk of fatty liver disease.
from collections import Counter

def _chart_locations(graph):
    # depth of every unnamed node in the graph
    precoord1 = [graph[i].depth for i in range(len(graph)) if len(graph[i].name) == 0]
    max_height = max(Counter(precoord1).values())
    max_depth = max(precoord1) + 1
    # enumerate the nodes within each depth level (0, 1, ... per depth);
    # this assumes precoord1 lists the nodes grouped by depth
    precoord0 = []
    for number in Counter(precoord1).values():
        precoord0 += [i for i in range(number)]
    return list(zip(precoord0, precoord1)), max_depth, max_height
# noinspection PyUnresolvedReferences import unreal as ue from strgen import StringGenerator @ue.uclass() class PythonBridgeImplementation(ue.PythonBridge): @ue.ufunction(override=True) def generate_string_from_regex(self, regex: str) -> str: return StringGenerator(regex).render() if __name__ == '__main__': ue.log_warning("BLT plugin Python bridge has been initiated!")
1. Field of the Invention
The present invention relates to data transmission over computer networks. More particularly, it relates to improving the throughput of network controllers using a bus mastering architecture.
2. Description of the Prior Art
A computer network is a system of hardware and software that allows two or more computers to communicate with each other. Networks are of several different kinds. For example, local area networks ("LAN") connect computers within a work-group or department. There are campus networks which extend to multiple buildings on a campus. There are metropolitan area networks ("MAN") which span a city or metropolitan area. There are wide area networks ("WAN") that make connections between nodes in different cities, different states and different countries. An enterprise network connects all of the computers within an organization regardless of where they are located or of what kind they are. Networks operate under a network operating system ("NOS") whose architecture is typically layered. That is, a layered architecture specifies different functions at different levels in a hierarchy of software functions. A typical layered architecture can be conceptualized as having five layers: a user interface layer at the top of the hierarchy followed by an upper protocol layer, a lower protocol layer, a driver layer and finally a physical layer at the bottom. The user interface layer is the layer in which the data to be transmitted is created. For example, the user interface layer may be a word processor and the data to be sent is a file that was created by the user with the word processor. The upper protocol layer specifies the destination of the data to be transmitted. It also passes the data to be transmitted to the lower protocol layer. Because the lower protocol layer cannot handle an unlimited amount of data at any given time, the upper protocol layer passes data to the lower protocol layer in predetermined quantities called packets. The lower protocol layer includes the communications services, which are a set of conventions that define how communication over the network will be structured. In general, data passed from the upper protocol layer as packets are broken down further by the lower protocol layer into frames. A frame is a data structure for transmitting data over a serial communication channel and typically includes a flag that indicates the start of the frame followed by an address, a control field, a data field and a frame check sequence field for error correction. The data field may be either fixed or variable in size. In the case of Ethernet, the frame is of variable size with a maximum size of 1,514 bytes. Also, the functions of sequencing of frames, pacing of frames, routing, and the like are done in the lower protocol layer. In performing these functions, the lower protocol layer establishes various descriptor and buffer fields in main memory. The next layer down is typically called the driver layer. This layer is a software module that is specific to the network controller hardware. The purpose of a driver is to isolate the hardware-specific software functions in one module to facilitate interchangeability of hardware and software components designed by different organizations. The driver programs the network controller to carry out functions and transfers data between the network controller and the lower protocol layer. In doing so, the driver layer passes various descriptor and buffer fields on to the physical layer.
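Purely as an illustrative sketch (not part of the original disclosure), the frame layout just described (address, control information, a variable data field, and a trailing frame check sequence) can be modeled as follows; the 6-byte addresses, 2-byte length field and CRC-32 check are Ethernet-style assumptions, not taken from the text:

import struct
import zlib

def build_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    # destination and source addresses, a 2-byte length field, a variable
    # data field, and a CRC-32 frame check sequence (FCS) at the end
    assert len(dest) == len(src) == 6   # Ethernet-style addresses (assumption)
    assert len(payload) <= 1500         # keeps the frame within the ~1,514-byte cap
    header = dest + src + struct.pack('!H', len(payload))
    fcs = struct.pack('!I', zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + fcs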
The physical layer is the hardware, which in a network includes the network controller and the physical link. If the physical link is linear, such as Ethernet, a carrier sense multiple access/collision detection (CSMA/CD) system is used, in which a node sends a signal that every other node detects but only the addressed node interprets as useful data. If two nodes send signals at the same time, a collision occurs and both back off, wait for a unique random amount of time and then try again. FIG. 1 is a block diagram of the general setting of the invention. Referring now to FIG. 1, a CPU 2, a main memory 6 and a bus mastering network controller 8 are connected to system bus 4. Bus mastering network controller 8 consists of a parallel data side 10, a buffer memory 11 and a serial side 12. Parallel side 10 is connected to system bus 4 and serial side 12 is connected to network physical link 14. Bus mastering network controller 8 is specific to a particular type of network, such as Ethernet or token ring, and provides the attachment point for the network physical link, such as coaxial cable, fiber optic cable, or a wireless link (where an antenna and base station are needed). Bus mastering network controllers are a class of network controllers that are capable of transferring data from main memory to the physical link directly, without requiring any interaction by the host CPU. When a bus mastering network controller is used, a data frame is communicated from CPU 2 to bus mastering network controller 8 by having the driver layer set up transmit buffers and descriptors in main memory 6 that contain all of the information about the frame to be transmitted, such as frame length, frame header and pointers to application data fragments. The bus mastering network controller is then able to transfer the data directly from the application fragments without requiring any data copy by the CPU. In order to do this, bus mastering controller 8 gains control of system bus 4 and reads or writes data directly to and from main memory 6. FIG. 2 is an event chart showing the operation of a prior art bus mastering network controller 16. In FIG. 2 the events run vertically from top to bottom in order of their occurrence. The events of CPU 2, parallel side 10 and serial side 12 are shown on separate event lines for clarity. The sequence of events shown in FIG. 2 is accurate, but the time between events as illustrated is not intended to be to scale. Referring now to FIG. 2, at time 101, CPU 2 issues a transmit command (Tx) which is sent out over bus 4 to bus mastering network controller 8. At time 102, bus mastering network controller 8 receives the transmit command. At time 103, bus mastering network controller 8 completes acquisition of bus 4, at which point it drives all signals on bus 4. At time 104, the transfer of a frame of information from main memory 6 to buffer memory 11 is commenced. The frame transfer is in parallel over bus 4. In the case of modern computer architectures, bus 4 may be 32 or 64 bits wide. The data transmission rate from main memory 6 over bus 4 to buffer memory 11 is much greater than the transmission of data over network physical link 14. For example, on a 100 Mbps Fast Ethernet link, it takes 122 microseconds to transmit a 1,500-byte frame, but it takes only about 11 microseconds (a theoretical minimum for a system with a 32-bit, 33 MHz PCI bus and 0 wait state memory) to copy the same frame across bus 4.
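Those two figures are easy to sanity-check (a quick sketch using only the numbers quoted above):

# Transmit time of a maximum-size frame on 100 Mbps Fast Ethernet:
frame_bytes = 1514
link_bits_per_sec = 100e6
print(frame_bytes * 8 / link_bits_per_sec * 1e6)   # ~121 us, matching the ~122 us above

# Theoretical minimum copy time over a 32-bit, 33 MHz PCI bus (0 wait states):
pci_bytes_per_sec = 4 * 33e6                       # 4 bytes per clock = 132 MB/s
print(frame_bytes / pci_bytes_per_sec * 1e6)       # ~11 us, matching the text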
At time 105, serial side 12 commences transfer of data from buffer memory 11 onto network physical link 14. The difference in time between times 104 and 105 is known as the threshold period, which is a programmable parameter measured in units of bytes stored in buffer memory 11. This parameter is chosen to balance the two objectives of starting transmissions over the physical link as soon as possible and avoiding an underrun condition. At time 106, the copying of a complete frame from main memory 6 to buffer memory 11 is complete. However, the transmission by serial side 12 over network physical link 14 has not yet been completed. It is not until time 107 that the transmission of the first frame of data is complete. The time between the completion of the frame copy at time 106 and the completion of frame transmission at time 107 may vary substantially, primarily because the serial side is slow compared to the bus speed, and also because serial side 12 may not be able to transmit immediately or there may be failures in transmission that require several retries. Thus the actual interval between the events at times 106 and 107 may be very long. At time 108, serial side 12 issues an indication that the transmission is complete. The indication may be in the form of an interrupt, a write to a particular location in main memory, or the setting of a flag. At time 109, the transmission complete indication is acknowledged by parallel side 10. At time 110, it is acknowledged by the CPU at the driver layer. At time 111, it is acknowledged in the CPU at the lower protocol layer. At time 112, it is acknowledged by the CPU at the upper protocol layer. At this point, transmission of one frame is complete. A packet of data is the largest quantity of data that the lower protocol layer can handle at any one time. A packet may consist of one or more frames. If there are additional frames under the control of the lower protocol layer, they will be sent at this time. If there are no additional frames under the control of the lower protocol layer, the lower protocol layer will send a request for the next packet to be passed to it. At time 113, the upper protocol layer transfers the packet to the lower protocol layer. At time 114, the lower protocol layer transfers a frame to the driver layer. At time 115, the driver layer programs the physical layer by passing various descriptor and buffer fields to it. At time 116, the CPU issues the transmit command to bus mastering network controller 8. Thereafter the process is a repeat of what was previously described. Data throughput is affected in two ways by the architecture of the bus mastering network controller. One way is the time between frames being put out on the physical link by serial side 12. The second way is the time required to move data from main memory 6 to buffer memory 11. This includes the time to move data from either: 1, the lower protocol layer to buffer memory 11, if there are one or more frames under the control of the lower protocol layer; or 2, the upper protocol layer to buffer memory 11, if there are no packets under the control of the lower protocol layer. In FIG. 2, the activities of CPU 2, parallel side 10 and serial side 12 are connected. In general, the driver layer programs the bus mastering network controller to copy data from application fragments and then returns to the NOS. With this approach, as can be seen from examining FIG. 2,
there is a substantial period of time between the completion of the frame copy at time 106 and the issuance of the Tx complete indication at time 108. During this period, CPU 2 is idle with respect to transmission of data over the network. This limits the data transfer rate for frame transmissions.
Lp(R) Associated with Differential Equations: Existence, Invertibility Conditions and Inversion A usual problem in analog signal processing is to ascertain the existence of a continuous single-input single-output linear time-invariant input-output stable system associated with a linear differential equation, i.e., of a continuous system such that, for every input signal in a given space of signals, it yields an output, in the same space, which satisfies the equation with the input as its known term, and to ascertain the existence of its inverse system. In this paper, we consider, as the space of signals, the usual Banach space of Lp functions, or the space of distributions spanned by Lp functions and by their distributional derivatives, of any order (input spaces which include signals with not necessarily left-bounded support); we give a systematic theoretical analysis of the existence, uniqueness and invertibility of continuous linear time-invariant input-output stable systems (both causal and non-causal ones) associated with the differential equation and, in the case of invertibility, we characterize the continuous inverse system. We also give necessary and sufficient conditions for causality. As an application, we consider the problem of finding a suitable almost-inverse of a causal continuous linear time-invariant input-output stable non-invertible system, defined on the space of finite-energy functions, associated with a simple differential equation.
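For orientation, here is a standard fact of the kind this setting builds on (my sketch in standard notation, not the paper's own results): convolution systems with integrable kernels are input-output stable on every Lp space.

% Sketch (standard background facts, stated here as assumptions):
% an LTI system realized by convolution with a kernel h,
\[
  (Tx)(t) = (h * x)(t) = \int_{\mathbb{R}} h(t-\tau)\, x(\tau)\, d\tau,
\]
% is a bounded (input-output stable) operator on $L^p(\mathbb{R})$ for every
% $1 \le p \le \infty$ whenever $h \in L^1(\mathbb{R})$, by Young's inequality:
\[
  \|h * x\|_{p} \le \|h\|_{1}\, \|x\|_{p}.
\]
% Such a system is causal precisely when h vanishes almost everywhere on $(-\infty, 0)$.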
Chinese companies will be encouraged to buy, or take on lease, farmland abroad to help guarantee food security, under a plan being considered by Beijing. The move comes amid a food crisis in China. China has about 40 per cent of the world's farmers, but just 9 per cent of the world's arable land. Africa and South America are among the most likely destinations. Russia is also on the list; however, under the country's constitution, foreign companies are not allowed to buy Russian land, though leasing is still possible. Acquiring farmland abroad is becoming a trend around the globe. Oil-rich but food-poor countries in the Middle East and North Africa are exploring similar options. Libya is now in talks with Ukraine about growing wheat in the former Soviet republic, while Saudi Arabia says it will invest in agricultural and livestock projects abroad to ensure food security and control commodity prices.
import matplotlib.pyplot as plt

# build time measured for projects of increasing size
files = [2500, 4500, 8500, 16500]
buildtime = [57.79, 1*60 + 41.46, 3*60 + 11.77, 6*60 + 12.55]

# label the series so the legend has an entry to show
plt.plot(files, buildtime, marker='o', color='b', label='make')
plt.xlabel('C Files')
plt.ylabel('Time [s]')
plt.title('make Build')
plt.legend(loc='upper left')
plt.show()
A rare case of osteogenesis imperfecta combined with complete tooth loss Abstract Osteogenesis imperfecta (OI) is a heritable disorder of the connective tissue characterized by blue sclerae, osteoporosis and bone fragility. Dentinogenesis imperfecta type I is commonly seen in OI patients, but other dental impairments, such as tooth agenesis or complete tooth loss, are rarely reported for these patients. Here, we report the case of a 37-year-old female Chinese OI patient whose tooth loss began before puberty and progressed to complete tooth loss in early adulthood. The patient has a family history of OI and her father has a history of tooth loss. She showed obvious OI phenotypes, including a dwarfed stature, blue sclerae, scoliosis, pigeon chest and a history of fractures. Tooth loss began at the age of 6 years and continued until complete tooth loss at 20 years; this occurred in the absence of dental decay, gum disease, accidents or drug usage. Radiological studies revealed osteoporosis of the lower limbs and an underdeveloped scapula. Type I collagen gene analysis identified a known c.2314G>A (p.Gly772Ser) substitution in the COL1A2 gene, which we suggest affects the interaction between type I collagen and extracellular matrix proteins, including cartilage oligomeric matrix protein, phosphophoryn and SPARC (secreted protein acidic and rich in cysteine). In silico prediction indicated a relatively mild effect of the mutation, so it is conceivable that the severity of the clinical phenotype may result from additional mutations in candidate genes responsible for abnormal dental phenotypes in this family. To our knowledge, this is the first report of an OI patient with a phenotype of complete tooth loss at a young age.
Representative Isovalue Detection and Isosurface Segmentation Using Novel Isosurface Measures Interval volume is the volume of the region between two isosurfaces. This paper proposes a novel measure, called VOA measure, that is computed based on interval volume and isosurface area. This measure represents the rate of change of distance between isosurfaces with respect to isovalue. It can be used to detect representative isovalues of the dataset since two isosurfaces near material boundaries tend to be much closer to each other than two isosurfaces in material interiors, assuming they have the same isovalue difference. For the same isosurface, some portion of it may pass through the boundary of two materials and some portion of it may pass through the interior of a material. To separate the portions of an isosurface that represent different features of the dataset, another novel isosurface measure is introduced. This measure is calculated based on the Euclidean distance of individual sample points on two isosurfaces. The effectiveness of the two new measures in detecting significant isovalues and segmenting isosurfaces are demonstrated in the paper.
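To make the idea concrete, here is a rough numerical sketch (my own illustration, not the paper's algorithm): on a sampled scalar field, the mean separation of two nearby isosurfaces can be approximated as interval volume divided by isosurface area, with the area estimated via the coarea formula; small values then point to material boundaries.

import numpy as np

def mean_isosurface_separation(field, isovalues, spacing=1.0):
    # For consecutive isovalues (lo, hi), approximate:
    #   separation ~ interval volume / isosurface area,
    # with the area estimated from the coarea formula:
    #   integral of |grad f| over the band ~ area * (hi - lo).
    # Dividing the separation by (hi - lo) gives the rate of change
    # the VOA measure tracks; small values suggest material boundaries.
    gx, gy, gz = np.gradient(field, spacing)
    gmag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12
    voxel = spacing**3
    out = []
    for lo, hi in zip(isovalues[:-1], isovalues[1:]):
        band = (field >= lo) & (field < hi)
        volume = band.sum() * voxel
        area = gmag[band].sum() * voxel / (hi - lo)
        out.append(volume / max(area, 1e-12))
    return np.array(out)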
/** * An Hyperbola, which is represented as a curve set of two boundary curves * which are instances of GJHyperbolaBranch2D. */ public class GJHyperbola2D extends GJContourArray2D<GJHyperbolaBranch2D> implements GJConic2D, Cloneable { // =================================================================== // Static factories public static GJHyperbola2D create(GJPoint2D center, double a, double b, double theta) { return new GJHyperbola2D(center.x(), center.y(), a, b, theta, true); } public static GJHyperbola2D create(GJPoint2D center, double a, double b, double theta, boolean d) { return new GJHyperbola2D(center.x(), center.y(), a, b, theta, d); } // =================================================================== // static methods /** * Creates a new Hyperbola by reducing the conic coefficients, assuming * conic type is Hyperbola, and hyperbola is centered. * * @param coefs an array of double with at least 3 coefficients containing * coefficients for x^2, xy, and y^2 factors. If the array is * longer, remaining coefficients are ignored. * @return the GJHyperbola2D corresponding to given coefficients */ public static GJHyperbola2D reduceCentered(double[] coefs) { double A = coefs[0]; double B = coefs[1]; double C = coefs[2]; // Compute orientation angle of the hyperbola double theta; if (abs(A - C) < GJShape2D.ACCURACY) { theta = PI / 4; } else { theta = atan2(B, (A - C)) / 2.0; if (B < 0) theta -= PI; theta = GJAngle2D.formatAngle(theta); } // compute ellipse in isothetic basis double[] coefs2 = GJConics2D.transformCentered(coefs, GJAffineTransform2D.createRotation(-theta)); // extract coefficient f if present double f = 1; if (coefs2.length > 5) f = abs(coefs[5]); assert abs(coefs2[1] / f) < GJShape2D.ACCURACY : "Second conic coefficient should be zero"; assert coefs2[0] * coefs2[2] < 0 : "Transformed conic is not an Hyperbola"; // extract major and minor axis lengths, ensuring r1 is greater double r1, r2; if (coefs2[0] > 0) { // East-West hyperbola r1 = sqrt(f / coefs2[0]); r2 = sqrt(-f / coefs2[2]); } else { // North-South hyperbola r1 = sqrt(f / coefs2[2]); r2 = sqrt(-f / coefs2[0]); theta = GJAngle2D.formatAngle(theta + PI / 2); theta = Math.min(theta, GJAngle2D.formatAngle(theta + PI)); } // Return the new Hyperbola return new GJHyperbola2D(0, 0, r1, r2, theta, true); } /** * Transforms an hyperbola, by supposing both the hyperbola is centered * and the transform has no translation part. 
 *
 * @param hyper an hyperbola
 * @param trans an affine transform
 * @return the transformed hyperbola, centered around the origin
 */
public static GJHyperbola2D transformCentered(GJHyperbola2D hyper,
        GJAffineTransform2D trans) {
    // Extract inner parameter of hyperbola
    double a = hyper.a;
    double b = hyper.b;
    double theta = hyper.theta;

    // precompute some parts
    double aSq = a * a;
    double bSq = b * b;
    double cot = cos(theta);
    double sit = sin(theta);
    double cotSq = cot * cot;
    double sitSq = sit * sit;

    // compute coefficients of the centered conic
    double A = cotSq / aSq - sitSq / bSq;
    double B = 2 * cot * sit * (1 / aSq + 1 / bSq);
    double C = sitSq / aSq - cotSq / bSq;
    double[] coefs = new double[] { A, B, C };

    // Compute coefficients of the transformed conic, still centered
    double[] coefs2 = GJConics2D.transformCentered(coefs, trans);

    // reduce conic coefficients to an hyperbola
    return GJHyperbola2D.reduceCentered(coefs2);
}

// ===================================================================
// class variables

/** Center of the hyperbola */
protected double xc = 0;
protected double yc = 0;

/** first focal parameter */
protected double a = 1;

/** second focal parameter */
protected double b = 1;

/** angle of rotation of the hyperbola */
protected double theta = 0;

/** a flag indicating whether the hyperbola is direct or not */
protected boolean direct = true;

/** The negative branch of the hyperbola */
protected GJHyperbolaBranch2D branch1 = null;

/** The positive branch of the hyperbola */
protected GJHyperbolaBranch2D branch2 = null;

// ===================================================================
// constructors

/**
 * Assume centered hyperbola, with a = b = 1 (orthogonal hyperbola), theta=0
 * (hyperbola is oriented East-West), and direct orientation.
 */
public GJHyperbola2D() {
    this(0, 0, 1, 1, 0, true);
}

/**
 * Copy constructor
 * @param hyp the hyperbola to copy
 */
public GJHyperbola2D(GJHyperbola2D hyp) {
    this(hyp.xc, hyp.yc, hyp.a, hyp.b, hyp.theta, hyp.direct);
}

public GJHyperbola2D(GJPoint2D center, double a, double b, double theta) {
    this(center.x(), center.y(), a, b, theta, true);
}

public GJHyperbola2D(GJPoint2D center, double a, double b, double theta,
        boolean d) {
    this(center.x(), center.y(), a, b, theta, d);
}

public GJHyperbola2D(double xc, double yc, double a, double b, double theta) {
    this(xc, yc, a, b, theta, true);
}

/** Main constructor */
public GJHyperbola2D(double xc, double yc, double a, double b, double theta,
        boolean d) {
    this.xc = xc;
    this.yc = yc;
    this.a = a;
    this.b = b;
    this.theta = theta;
    this.direct = d;
    branch1 = new GJHyperbolaBranch2D(this, false);
    branch2 = new GJHyperbolaBranch2D(this, true);
    this.add(branch1);
    this.add(branch2);
}

// ===================================================================
// methods specific to GJHyperbola2D

/**
 * Transforms a point in local coordinates (i.e. the orthogonal centered
 * hyperbola with a=b=1) to the global coordinate system.
 */
public GJPoint2D toGlobal(GJPoint2D point) {
    point = point.transform(GJAffineTransform2D.createScaling(a, b));
    point = point.transform(GJAffineTransform2D.createRotation(theta));
    point = point.transform(GJAffineTransform2D.createTranslation(xc, yc));
    return point;
}

public GJPoint2D toLocal(GJPoint2D point) {
    point = point.transform(GJAffineTransform2D.createTranslation(-xc, -yc));
    point = point.transform(GJAffineTransform2D.createRotation(-theta));
    point = point.transform(GJAffineTransform2D.createScaling(1.0 / a, 1.0 / b));
    return point;
}

/**
 * Changes coordinates of the line to correspond to a standard hyperbola.
 * Standard hyperbola is such that x^2-y^2=1 for every point.
 *
 * @param line the line to convert
 * @return the line expressed in the coordinate frame of the standard hyperbola
 */
private GJLinearShape2D formatLine(GJLinearShape2D line) {
    line = line.transform(GJAffineTransform2D.createTranslation(-xc, -yc));
    line = line.transform(GJAffineTransform2D.createRotation(-theta));
    line = line.transform(GJAffineTransform2D.createScaling(1.0/a, 1.0/b));
    return line;
}

/**
 * Returns the center of the Hyperbola. This point does not belong to the
 * Hyperbola.
 * @return the center point of the Hyperbola.
 */
public GJPoint2D getCenter() {
    return new GJPoint2D(xc, yc);
}

/**
 * Returns the angle made by the first direction vector with the horizontal
 * axis.
 */
public double getAngle() {
    return theta;
}

/** Returns a */
public double getLength1() {
    return a;
}

/** Returns b */
public double getLength2() {
    return b;
}

public boolean isDirect() {
    return direct;
}

public GJVector2D getVector1() {
    return new GJVector2D(cos(theta), sin(theta));
}

public GJVector2D getVector2() {
    return new GJVector2D(-sin(theta), cos(theta));
}

/**
 * Returns the focus located on the positive side of the main hyperbola
 * axis.
 */
public GJPoint2D getFocus1() {
    double c = hypot(a, b);
    return new GJPoint2D(xc + c * cos(theta), yc + c * sin(theta));
}

/**
 * Returns the focus located on the negative side of the main hyperbola
 * axis.
 */
public GJPoint2D getFocus2() {
    double c = hypot(a, b);
    return new GJPoint2D(xc - c * cos(theta), yc - c * sin(theta));
}

public GJHyperbolaBranch2D positiveBranch() {
    return branch2;
}

public GJHyperbolaBranch2D negativeBranch() {
    return branch1;
}

public Collection<GJHyperbolaBranch2D> branches() {
    ArrayList<GJHyperbolaBranch2D> array = new ArrayList<GJHyperbolaBranch2D>(2);
    array.add(branch1);
    array.add(branch2);
    return array;
}

/**
 * Returns the asymptotes of the hyperbola.
 */
public Collection<GJStraightLine2D> asymptotes() {
    // Compute base direction vectors
    GJVector2D v1 = new GJVector2D(a, b);
    GJVector2D v2 = new GJVector2D(a, -b);

    // rotate by the angle of the hyperbola with Ox axis
    GJAffineTransform2D rot = GJAffineTransform2D.createRotation(this.theta);
    v1 = v1.transform(rot);
    v2 = v2.transform(rot);

    // init array for storing lines
    ArrayList<GJStraightLine2D> array = new ArrayList<GJStraightLine2D>(2);

    // add each asymptote
    GJPoint2D center = this.getCenter();
    array.add(new GJStraightLine2D(center, v1));
    array.add(new GJStraightLine2D(center, v2));

    // return the array of asymptotes
    return array;
}

// ===================================================================
// methods inherited from GJConic2D interface

public double[] conicCoefficients() {
    // scaling coefficients
    double aSq = this.a * this.a;
    double bSq = this.b * this.b;
    double aSqInv = 1.0 / aSq;
    double bSqInv = 1.0 / bSq;

    // angle of hyperbola with horizontal, and trigonometric formulas
    double sint = sin(this.theta);
    double cost = cos(this.theta);
    double sin2t = 2.0 * sint * cost;
    double sintSq = sint * sint;
    double costSq = cost * cost;

    // coefs from hyperbola center
    double xcSq = xc * xc;
    double ycSq = yc * yc;

    /*
     * Compute the coefficients. These formulae are the transformations on
     * the unit hyperbola written out long hand
     */
    double a = costSq / aSq - sintSq / bSq;
    double b = (bSq + aSq) * sin2t / (aSq * bSq);
    double c = sintSq / aSq - costSq / bSq;
    double d = -yc * b - 2 * xc * a;
    double e = -xc * b - 2 * yc * c;
    double f = -1.0 + (xcSq + ycSq) * (aSqInv - bSqInv) / 2.0
            + (costSq - sintSq) * (xcSq - ycSq) * (aSqInv + bSqInv) / 2.0
            + xc * yc * (aSqInv + bSqInv) * sin2t;
    // Equivalent to:
    // double f = (xcSq*costSq + xc*yc*sin2t + ycSq*sintSq)*aSqInv
    //         - (xcSq*sintSq - xc*yc*sin2t + ycSq*costSq)*bSqInv - 1;

    // Return array of results
    return new double[] { a, b, c, d, e, f };
}

public GJConic2D.Type conicType() {
    return GJConic2D.Type.HYPERBOLA;
}

public double eccentricity() {
    // e = c/a = sqrt(a*a + b*b)/a = sqrt(1 + (b/a)^2)
    return hypot(1, b / a);
}

// ===================================================================
// methods implementing the GJCurve2D interface

@Override
public GJHyperbola2D reverse() {
    return new GJHyperbola2D(this.xc, this.yc, this.a, this.b,
            this.theta, !this.direct);
}

@Override
public Collection<GJPoint2D> intersections(GJLinearShape2D line) {
    Collection<GJPoint2D> points = new ArrayList<GJPoint2D>();

    // format to 'standard' hyperbola
    GJLinearShape2D line2 = formatLine(line);

    // Extract formatted line parameters
    GJPoint2D origin = line2.origin();
    double dx = line2.direction().x();
    double dy = line2.direction().y();

    // extract line parameters
    // different strategy depending if line is more horizontal or more
    // vertical
    if (abs(dx) > abs(dy)) {
        // Line is mainly horizontal

        // slope and intercept of the line: y(x) = k*x + yi
        double k = dy / dx;
        double yi = origin.y() - k * origin.x();

        // compute coefficients of second order equation
        double a = 1 - k * k;
        double b = -2 * k * yi;
        double c = -yi * yi - 1;
        double delta = b * b - 4 * a * c;
        if (delta <= 0) {
            System.out.println("Intersection with horizontal line should always give positive delta");
            return points;
        }

        // x coordinate of intersection points
        double x1 = (-b - sqrt(delta)) / (2 * a);
        double x2 = (-b + sqrt(delta)) / (2 * a);

        // support line of formatted line
        GJStraightLine2D support = line2.supportingLine();

        // check first point is on the line
        double pos1 = support.project(new GJPoint2D(x1, k * x1 + yi));
        if (line2.contains(support.point(pos1)))
            points.add(line.point(pos1));

        // check second point is on the line
        double pos2 = support.project(new GJPoint2D(x2, k * x2 + yi));
        if (line2.contains(support.point(pos2)))
            points.add(line.point(pos2));

    } else {
        // Line is mainly vertical

        // slope and intercept of the line: x(y) = k*y + xi
        double k = dx / dy;
        double xi = origin.x() - k * origin.y();

        // compute coefficients of second order equation
        double a = k * k - 1;
        double b = 2 * k * xi;
        double c = xi * xi - 1;
        double delta = b * b - 4 * a * c;
        if (delta <= 0) {
            // No intersection with the hyperbola
            return points;
        }

        // y coordinate of intersection points
        double y1 = (-b - sqrt(delta)) / (2 * a);
        double y2 = (-b + sqrt(delta)) / (2 * a);

        // support line of formatted line
        GJStraightLine2D support = line2.supportingLine();

        // check first point is on the line
        double pos1 = support.project(new GJPoint2D(k * y1 + xi, y1));
        if (line2.contains(support.point(pos1)))
            points.add(line.point(pos1));

        // check second point is on the line
        double pos2 = support.project(new GJPoint2D(k * y2 + xi, y2));
        if (line2.contains(support.point(pos2)))
            points.add(line.point(pos2));
    }

    return points;
}

// ===================================================================
// methods implementing the GJShape2D interface

@Override
public boolean contains(GJPoint2D point) {
    return this.contains(point.x(), point.y());
}

@Override
public boolean contains(double x, double y) {
    GJPoint2D point = toLocal(new GJPoint2D(x, y));
    double xa = point.x() / a;
    double yb = point.y() / b;
    double res = xa * xa - yb * yb - 1;
    return abs(res) < 1e-6;
}

/**
 * Transforms this Hyperbola by an affine transform.
 */
@Override
public GJHyperbola2D transform(GJAffineTransform2D trans) {
    GJHyperbola2D result = GJHyperbola2D.transformCentered(this, trans);
    GJPoint2D center = this.getCenter().transform(trans);
    result.xc = center.x();
    result.yc = center.y();
    // TODO: check convention for transform with indirect transform, see GJCurve2D.
    result.direct = this.direct ^ !trans.isDirect();
    return result;
}

/** Returns a bounding box with infinite bounds in every direction */
@Override
public GJBox2D boundingBox() {
    return GJBox2D.INFINITE_BOX;
}

/** Throws a GJUnboundedShape2DException, since the curve is unbounded */
@Override
public void draw(Graphics2D g) {
    throw new GJUnboundedShape2DException(this);
}

// ===================================================================
// methods implementing the GJGeometricObject2D interface

/* (non-Javadoc)
 * @see math.geom2d.GJGeometricObject2D#almostEquals(math.geom2d.GJGeometricObject2D, double)
 */
public boolean almostEquals(GJGeometricObject2D obj, double eps) {
    if (this == obj)
        return true;
    if (!(obj instanceof GJHyperbola2D))
        return false;

    // Cast to hyperbola
    GJHyperbola2D that = (GJHyperbola2D) obj;

    // check if each parameter is the same
    if (abs(that.xc - this.xc) > eps)
        return false;
    if (abs(that.yc - this.yc) > eps)
        return false;
    if (abs(that.a - this.a) > eps)
        return false;
    if (abs(that.b - this.b) > eps)
        return false;
    if (abs(that.theta - this.theta) > eps)
        return false;
    if (this.direct != that.direct)
        return false;

    // same parameters, then same hyperbola
    return true;
}

// ===================================================================
// methods implementing the Object interface

/**
 * Tests whether this hyperbola equals another object.
 */
@Override
public boolean equals(Object obj) {
    if (!(obj instanceof GJHyperbola2D))
        return false;

    // Cast to hyperbola
    GJHyperbola2D that = (GJHyperbola2D) obj;

    // check if each parameter is the same
    if (!GJEqualUtils.areEqual(this.xc, that.xc))
        return false;
    if (!GJEqualUtils.areEqual(this.yc, that.yc))
        return false;
    if (!GJEqualUtils.areEqual(this.a, that.a))
        return false;
    if (!GJEqualUtils.areEqual(this.b, that.b))
        return false;
    if (!GJEqualUtils.areEqual(this.theta, that.theta))
        return false;
    if (this.direct != that.direct)
        return false;

    // same parameters, then same hyperbola
    return true;
}

/**
 * @deprecated use copy constructor instead (0.11.2)
 */
@Deprecated
@Override
public GJHyperbola2D clone() {
    return new GJHyperbola2D(xc, yc, a, b, theta, direct);
}
}
package aws

import (
	"bytes"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAwsVolumeAttachment() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsVolumeAttachmentCreate,
		Read:   resourceAwsVolumeAttachmentRead,
		Update: resourceAwsVolumeAttachmentUpdate,
		Delete: resourceAwsVolumeAttachmentDelete,

		Schema: map[string]*schema.Schema{
			"device_name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},

			"instance_id": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},

			"volume_id": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},

			"force_detach": {
				Type:     schema.TypeBool,
				Optional: true,
			},
			"skip_destroy": {
				Type:     schema.TypeBool,
				Optional: true,
			},
		},
	}
}

func resourceAwsVolumeAttachmentCreate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn
	name := d.Get("device_name").(string)
	iID := d.Get("instance_id").(string)
	vID := d.Get("volume_id").(string)

	// Find out if the volume is already attached to the instance, in which case
	// we have nothing to do
	request := &ec2.DescribeVolumesInput{
		VolumeIds: []*string{aws.String(vID)},
		Filters: []*ec2.Filter{
			{
				Name:   aws.String("attachment.instance-id"),
				Values: []*string{aws.String(iID)},
			},
			{
				Name:   aws.String("attachment.device"),
				Values: []*string{aws.String(name)},
			},
		},
	}

	vols, err := conn.DescribeVolumes(request)
	if (err != nil) || (len(vols.Volumes) == 0) {
		// This handles the situation where the instance is created by
		// a spot request and whilst the request has been fulfilled the
		// instance is not running yet
		stateConf := &resource.StateChangeConf{
			Pending:    []string{"pending", "stopping"},
			Target:     []string{"running", "stopped"},
			Refresh:    InstanceStateRefreshFunc(conn, iID, []string{"terminated"}),
			Timeout:    10 * time.Minute,
			Delay:      10 * time.Second,
			MinTimeout: 3 * time.Second,
		}

		_, err = stateConf.WaitForState()
		if err != nil {
			return fmt.Errorf(
				"Error waiting for instance (%s) to become ready: %s",
				iID, err)
		}

		// not attached
		opts := &ec2.AttachVolumeInput{
			Device:     aws.String(name),
			InstanceId: aws.String(iID),
			VolumeId:   aws.String(vID),
		}

		log.Printf("[DEBUG] Attaching Volume (%s) to Instance (%s)", vID, iID)
		_, err := conn.AttachVolume(opts)
		if err != nil {
			if awsErr, ok := err.(awserr.Error); ok {
				return fmt.Errorf("[WARN] Error attaching volume (%s) to instance (%s), message: \"%s\", code: \"%s\"",
					vID, iID, awsErr.Message(), awsErr.Code())
			}
			return err
		}
	}

	stateConf := &resource.StateChangeConf{
		Pending:    []string{"attaching"},
		Target:     []string{"attached"},
		Refresh:    volumeAttachmentStateRefreshFunc(conn, name, vID, iID),
		Timeout:    5 * time.Minute,
		Delay:      10 * time.Second,
		MinTimeout: 3 * time.Second,
	}

	_, err = stateConf.WaitForState()
	if err != nil {
		return fmt.Errorf(
			"Error waiting for Volume (%s) to attach to Instance: %s, error: %s",
			vID, iID, err)
	}

	d.SetId(volumeAttachmentID(name, vID, iID))
	return resourceAwsVolumeAttachmentRead(d, meta)
}

func volumeAttachmentStateRefreshFunc(conn *ec2.EC2, name, volumeID, instanceID string) resource.StateRefreshFunc {
	return func() (interface{}, string, error) {
		request := &ec2.DescribeVolumesInput{
			VolumeIds: []*string{aws.String(volumeID)},
			Filters: []*ec2.Filter{
				{
					Name:   aws.String("attachment.device"),
					Values: []*string{aws.String(name)},
				},
				{
					Name:
aws.String("attachment.instance-id"), Values: []*string{aws.String(instanceID)}, }, }, } resp, err := conn.DescribeVolumes(request) if err != nil { if awsErr, ok := err.(awserr.Error); ok { return nil, "failed", fmt.Errorf("code: %s, message: %s", awsErr.Code(), awsErr.Message()) } return nil, "failed", err } if len(resp.Volumes) > 0 { v := resp.Volumes[0] for _, a := range v.Attachments { if a.InstanceId != nil && *a.InstanceId == instanceID { return a, *a.State, nil } } } // assume detached if volume count is 0 return 42, "detached", nil } } func resourceAwsVolumeAttachmentRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn request := &ec2.DescribeVolumesInput{ VolumeIds: []*string{aws.String(d.Get("volume_id").(string))}, Filters: []*ec2.Filter{ { Name: aws.String("attachment.device"), Values: []*string{aws.String(d.Get("device_name").(string))}, }, { Name: aws.String("attachment.instance-id"), Values: []*string{aws.String(d.Get("instance_id").(string))}, }, }, } vols, err := conn.DescribeVolumes(request) if err != nil { if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVolume.NotFound" { d.SetId("") return nil } return fmt.Errorf("Error reading EC2 volume %s for instance: %s: %#v", d.Get("volume_id").(string), d.Get("instance_id").(string), err) } if len(vols.Volumes) == 0 || *vols.Volumes[0].State == "available" { log.Printf("[DEBUG] Volume Attachment (%s) not found, removing from state", d.Id()) d.SetId("") } return nil } func resourceAwsVolumeAttachmentUpdate(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Attaching Volume (%s) is updating which does nothing but updates a few params in state", d.Id()) return nil } func resourceAwsVolumeAttachmentDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn if _, ok := d.GetOk("skip_destroy"); ok { return nil } name := d.Get("device_name").(string) vID := d.Get("volume_id").(string) iID := d.Get("instance_id").(string) opts := &ec2.DetachVolumeInput{ Device: aws.String(name), InstanceId: aws.String(iID), VolumeId: aws.String(vID), Force: aws.Bool(d.Get("force_detach").(bool)), } _, err := conn.DetachVolume(opts) if err != nil { return fmt.Errorf("Failed to detach Volume (%s) from Instance (%s): %s", vID, iID, err) } stateConf := &resource.StateChangeConf{ Pending: []string{"detaching"}, Target: []string{"detached"}, Refresh: volumeAttachmentStateRefreshFunc(conn, name, vID, iID), Timeout: 5 * time.Minute, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } log.Printf("[DEBUG] Detaching Volume (%s) from Instance (%s)", vID, iID) _, err = stateConf.WaitForState() if err != nil { return fmt.Errorf( "Error waiting for Volume (%s) to detach from Instance: %s", vID, iID) } return nil } func volumeAttachmentID(name, volumeID, instanceID string) string { var buf bytes.Buffer buf.WriteString(fmt.Sprintf("%s-", name)) buf.WriteString(fmt.Sprintf("%s-", instanceID)) buf.WriteString(fmt.Sprintf("%s-", volumeID)) return fmt.Sprintf("vai-%d", hashcode.String(buf.String())) }
#!/usr/bin/env python3
import sys

TABLE = dict()


def generate_string(text):
    # Return the obfuscated expression for `text`, composing per-character
    # atoms from TABLE; unknown characters fall back to quoted literals.
    global TABLE
    if text in TABLE:
        return TABLE[text]
    else:
        res = []
        for val in text:
            res.append(TABLE.get(val, "'" + val + "'"))
        return "+".join(res)


def generate_ascii_array(text):
    return [str(ord(i)) for i in text]


def generate_table():
    global TABLE
    # Numbers built from boolean/array coercion.
    TABLE[0] = '+[]'
    TABLE[1] = '+!![]'
    TABLE[2] = '!![]+!![]'
    TABLE[3] = '!![]+!![]+!![]'
    TABLE[4] = '!![]+!![]+!![]+!![]'
    TABLE[5] = '!![]+!![]+!![]+!![]+!![]'
    TABLE[6] = '!![]+!![]+!![]+!![]+!![]+!![]'
    TABLE[7] = '!![]+!![]+!![]+!![]+!![]+!![]+!![]'
    TABLE[8] = '!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]'
    TABLE[9] = '!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]'
    # The same digits as strings (number + [] coerces to string).
    TABLE['0'] = '('+TABLE[0]+'+[])'
    TABLE['1'] = '('+TABLE[1]+'+[])'
    TABLE['2'] = '('+TABLE[2]+'+[])'
    TABLE['3'] = '('+TABLE[3]+'+[])'
    TABLE['4'] = '('+TABLE[4]+'+[])'
    TABLE['5'] = '('+TABLE[5]+'+[])'
    TABLE['6'] = '('+TABLE[6]+'+[])'
    TABLE['7'] = '('+TABLE[7]+'+[])'
    TABLE['8'] = '('+TABLE[8]+'+[])'
    TABLE['9'] = '('+TABLE[9]+'+[])'
    # Well-known strings obtained by coercing primitives.
    TABLE[''] = '([]+[])'
    TABLE['true'] = '(!![]+[])'
    TABLE['false'] = '(![]+[])'
    TABLE['NaN'] = '(+[![]]+[])'
    TABLE['NaN'] = '(+{}+[])'  # the second assignment wins; both forms evaluate to "NaN"
    TABLE['undefined'] = '([][[]]+[])'
    TABLE['[object Object]'] = '({}+[])'
    # Single letters indexed out of the strings above.
    TABLE['a'] = TABLE['false']+'['+TABLE[1]+']'
    TABLE['b'] = TABLE['[object Object]']+'['+TABLE[2]+']'
    TABLE['c'] = TABLE['[object Object]']+'['+TABLE[5]+']'
    TABLE['d'] = TABLE['undefined']+'['+TABLE[2]+']'
    TABLE['e'] = TABLE['true']+'['+TABLE[3]+']'
    TABLE['Infinity'] = '(+('+TABLE[1]+'+'+TABLE['e']+'+('+TABLE[1]+')+('+TABLE[0]+')+('+TABLE[0]+')+('+TABLE[0]+'))+[])'
    TABLE['f'] = TABLE['false']+'['+TABLE[0]+']'
    TABLE['i'] = TABLE['undefined']+'['+TABLE[5]+']'
    TABLE['j'] = TABLE['[object Object]']+'['+TABLE[3]+']'
    TABLE['l'] = TABLE['false']+'['+TABLE[2]+']'
    TABLE['n'] = TABLE['undefined']+'['+TABLE[1]+']'
    TABLE['o'] = TABLE['[object Object]']+'['+TABLE[1]+']'
    TABLE['r'] = TABLE['true']+'['+TABLE[1]+']'
    TABLE['s'] = TABLE['false']+'['+TABLE[3]+']'
    TABLE['t'] = TABLE['true']+'['+TABLE[0]+']'
    TABLE['u'] = TABLE['true']+'['+TABLE[2]+']'
    TABLE['y'] = TABLE['Infinity']+'['+TABLE[7]+']'
    TABLE['I'] = TABLE['Infinity']+'['+TABLE[0]+']'
    TABLE['N'] = TABLE['NaN']+'['+TABLE[0]+']'
    TABLE['O'] = TABLE['[object Object]']+'['+TABLE[8]+']'
    # Punctuation.
    TABLE[','] = '[[],[]]+[]'
    TABLE['['] = TABLE['[object Object]']+'['+TABLE[0]+']'
    TABLE[']'] = TABLE['[object Object]']+'['+TABLE['1']+'+('+TABLE[4]+')]'
    TABLE[' '] = TABLE['[object Object]']+'['+TABLE[7]+']'
    TABLE['"'] = TABLE['']+'['+generate_string('fontcolor')+']()['+TABLE['1']+'+('+TABLE[2]+')]'
    TABLE['<'] = TABLE['']+'['+generate_string('sub')+']()['+TABLE[0]+']'
    TABLE['='] = TABLE['']+'['+generate_string('fontcolor')+']()['+TABLE['1']+'+('+TABLE[1]+')]'
    TABLE['>'] = TABLE['']+'['+generate_string('sub')+']()['+TABLE[4]+']'
    TABLE['/'] = TABLE['']+'['+generate_string('sub')+']()['+TABLE[6]+']'
    TABLE['+'] = '(+('+TABLE[1]+'+'+TABLE['e']+'+['+TABLE[1]+']+('+TABLE[0]+')+('+TABLE[0]+'))+[])['+TABLE[2]+']'
    TABLE['.'] = '(+('+TABLE[1]+'+['+TABLE[1]+']+'+TABLE['e']+'+('+TABLE[2]+')+('+TABLE[0]+'))+[])['+TABLE[1]+']'
    TABLE[','] = '([]['+generate_string('slice')+']['+generate_string('call')+']'+TABLE['[object Object]']+'+[])['+TABLE[1]+']'  # replaces the '[[],[]]+[]' form above
    # Characters that need a constructed Function() call.
    TABLE['[object Window]'] = '([]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return self')+')()+[])'
    TABLE['W'] = TABLE['[object Window]']+'['+TABLE[8]+']'
    TABLE['h'] = '([]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return location')+')()+[])['+TABLE[0]+']'
    TABLE['p'] = '([]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return location')+')()+[])['+TABLE[3]+']'
    TABLE['m'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return typeof 0')+')()['+TABLE[2]+']'
    TABLE['C'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return escape')+')()('+TABLE['1']+'['+generate_string("sub")+']())['+TABLE[2]+']'
    TABLE['('] = '([]['+generate_string('filter')+']+[])['+generate_string('trim')+']()['+TABLE['1']+'+('+TABLE[5]+')]'
    TABLE[')'] = '([]['+generate_string('filter')+']+[])['+generate_string('trim')+']()['+TABLE['1']+'+('+TABLE[6]+')]'
    TABLE['{'] = '([]['+generate_string('filter')+']+[])['+generate_string('trim')+']()['+TABLE['1']+'+('+TABLE[8]+')]'
    TABLE['g'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return typeof""')+')()['+TABLE[5]+']'
    TABLE['%'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return escape')+')()({})['+TABLE[0]+']'
    TABLE['B'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return escape')+')()({})['+TABLE[2]+']'
    TABLE['S'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return unescape')+')()('+TABLE['%']+'+'+TABLE['5']+'+('+TABLE[3]+'))'
    TABLE['x'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return unescape')+')()('+TABLE['%']+'+'+TABLE['7']+'+('+TABLE[8]+'))'
    TABLE['v'] = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return unescape')+')()('+TABLE['%']+'+'+TABLE['7']+'+('+TABLE[6]+'))'
    return


def obfuscate(code):
    global TABLE
    if len(TABLE) == 0:
        generate_table()
    # Encode every character of the input as an obfuscated char code, then
    # rebuild the source with String.fromCharCode and execute it via Function().
    payload = ','.join(generate_string(str(x)) for x in generate_ascii_array(code))
    payload = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+generate_string('return String')+')()['+generate_string('fromCharCode')+']('+payload+')'
    payload = '[]['+generate_string('filter')+']['+generate_string('constructor')+']('+payload+')()'
    return payload


def main():
    if len(sys.argv) != 2:
        print('Usage: python main.py <filepath>')
        return
    with open(sys.argv[1], 'r') as f:
        before = f.read()
    after = obfuscate(before)
    print(after)


if __name__ == '__main__':
    main()
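A minimal usage sketch, assuming the script above is saved as main.py on the import path (the module name and payload string are hypothetical):

# Hypothetical usage: obfuscate a one-line JavaScript payload in-process.
from main import obfuscate

js = 'alert(1)'
payload = obfuscate(js)
# The result is JavaScript built from []()!+{} characters (plus quoted
# fallbacks for any character missing from TABLE) that executes alert(1).
print(payload[:120], '...')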
Parkinson's disease presenting with oculogyric crisis in the off period. We herein report the case of a 67-year-old Japanese man, diagnosed with sporadic Parkinson's disease (PD) at 52 years of age, who presented with oculogyric crisis (OGC) in the off period. Ordinarily, OGC is caused by postencephalitic parkinsonism or the chronic use of antidopaminergic medications. The OGC began at 65 years of age and was associated with the wearing-off of symptoms. The dominant OGC feature was tonic deviation of eye posture, induced by looking upward, with prominent retrocollis. Administering dopaminergic medications to control the wearing-off led to improvements in both the wearing-off phenomenon and the OGC. This observation confirms that sporadic PD can induce OGC in the off period.
import logging

# `prereqs` and `check_path_for` are expected to be defined elsewhere in this module.
def check_prereqs():
    logging.info("Checking for dependencies...")
    for req in prereqs:
        if not check_path_for(req):
            logging.error("Could not find dependency: {}".format(req))
            return False
    return True
Of the winning legislators, 23 had declared assets over Rs 5 crore while five had assets below Rs 5 lakh. New Delhi: Meghalaya has 66 per cent 'crorepati' MLAs, with 39 of the 59 members elected to the 2018 state legislative assembly declaring individual wealth of over Rs 1 crore, a report said on Sunday. In the previous elections in 2013, 60 per cent of MLAs had wealth over Rs 1 crore. According to a report by the Meghalaya Election Watch (MEW) and the Association for Democratic Reforms (ADR), seven MLAs did not declare their sources of income. Only one MLA, Bendic R Marak of the National People's Party (NPP), the state's second largest party, has declared criminal cases registered against him: one for criminal trespass, one for voluntarily causing hurt, and a third for acts done by several persons in furtherance of common intention. The Meghalaya elections were held on February 27 for 59 of the 60 seats. According to the MEW and the ADR, the Congress, which emerged as the largest party with 21 seats, has the highest number of rich MLAs at 15. Other parties whose MLAs declared assets worth over one crore rupees include the NPP (12 of 19), the United Democratic Party (UDP, four of six), the Bharatiya Janata Party (BJP, both of its two MLAs), the People's Democratic Front (PDF, two of four), the Hill State People's Democratic Party (HSPDP, one of two), the Nationalist Congress Party (NCP, its lone MLA) and the Independents (two of three). According to the report, the average assets per MLA contesting the Meghalaya Assembly Elections 2018 are Rs 7.18 crore. In 2013, the average assets of MLAs worked out to Rs 7.77 crore. The top three richest MLAs are Metbah Lyngdoh of the UDP with assets over Rs 87 crore, Dasakhiatbha Lamare of the NPP with assets over Rs 40 crore and Renikton Lyngdoh Tongkhar of the HSPDP with assets worth over Rs 29 crore. NPP MLA Pongseng Marak declared the lowest assets, of Rs 2.99 lakh.
1. Technical Field The present invention relates to digital communication systems, and more particularly relates to a system and method for modeling distortion as a function of a quantization parameter. 2. Related Art With the advent of personal computing and the Internet, a huge demand has been created for the transmission of digital data, and in particular, digital video data. However, the ability to transmit video data over low capacity communication channels, such as telephone lines, remains an ongoing challenge. To address this issue, systems are being developed in which coded representations of video signals are broken up into video elements or objects that can be independently manipulated. For example, MPEG-4 is a compression standard developed by the Moving Picture Experts Group (MPEG) that utilizes a set of video objects. Using this technique, different bit rates can be assigned to different visual objects. Thus, the more important data (e.g., facial features) can be encoded and transmitted at a higher bit rate, and therefore lower distortion, than the unimportant data (e.g., background features). Because bandwidth is at a premium, one of the important challenges is to be able to efficiently select bit rates that will meet the distortion requirements for each visual object. Ideally, a bit rate should be selected no higher than is necessary to ensure that the distortion for the visual object does not exceed a selected threshold. Unfortunately, because of the number of parameters that can be introduced in such encoding schemes, predicting distortion levels for corresponding bit rates is a complex problem. Accordingly, the process of selecting bit rates for visual objects remains a challenge. One solution was taught in the paper entitled "Rate Control and Bit Allocation for MPEG-4," by Ronda et al., IEEE Transactions on Circuits and Systems For Video Technology, Vol. 9, No. 8, December 1999, which is hereby incorporated by reference (hereinafter "Ronda"). In Ronda, a model was taught in which distortion D was defined as: D(q) = a1q^2 + a2q + a3 + N(0, σ^2), where q is a quantization parameter, N is a Gaussian distribution, and a1, a2, a3 are distortion model parameters. (In MPEG-4 systems, the bit rate is a function of q.) One of the problems associated with this model, however, is that it provides a Gaussian distribution having a polynomial mean and constant variance. Thus, it assumes a constant-variance Gaussian distribution regardless of the value of q, which is generally inaccurate, particularly in the case of a low bit rate. Accordingly, a need exists for a system that can more accurately model distortion in an encoding system. This invention overcomes the above-mentioned problems, as well as others, by providing a distortion model in which distortion D(q) is calculated as a random variable that has a general Gaussian distribution that is a function of the quantization parameter q. In a first aspect, the invention provides a method for determining a quantization parameter q that meets a predetermined quality level, comprising the steps of: providing a distortion model D(q) = N(a1q^2 + a2q + a3, b1q^2 + b2q + b3), wherein N is a Gaussian distribution and a1, a2, a3, b1, b2 and b3 are distortion model parameters; selecting a target distortion level; and calculating the quantization parameter q such that an upper bound of the distortion model D(q) is less than or equal to the target distortion level.
In a second aspect, the invention provides an encoding system having quality level selection capabilities, comprising: a system for selecting a target distortion level; a distortion model, wherein the distortion model determines a distortion level as a function of a quantization parameter, and wherein the distortion model includes a Gaussian distribution having a variance that is a function of the quantization parameter; and a system for calculating the quantization parameter such that the distortion level does not exceed the target distortion level. In a third aspect, the invention provides a video encoder that allows for the selection of a distortion level, comprising: a selection system for selecting a target distortion level; and a system for determining a quantization parameter that will ensure compliance with the selected target distortion level, wherein the system includes an algorithm for calculating distortion that utilizes a Gaussian distribution having a variance that is a function of the quantization parameter. In a fourth aspect, the invention provides a program product, stored on a recordable medium, which when executed allows for the selection of a distortion level in an encoding operation, comprising: a selection system for selecting a target distortion level; and a system for determining a quantization parameter that will ensure compliance with the selected target distortion level, wherein the system includes an algorithm for calculating distortion that utilizes a Gaussian distribution having a variance that is a function of the quantization parameter.
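To make the first aspect concrete, the sketch below picks the largest q whose distortion bound stays under a target. The coefficient values are hypothetical (the specification does not fix them), and the "upper bound" is read here as the mean plus two standard deviations:

import math

# Hypothetical distortion-model parameters, for illustration only.
a1, a2, a3 = 0.02, 0.5, 1.0    # polynomial mean coefficients
b1, b2, b3 = 0.01, 0.1, 0.2    # polynomial variance coefficients

def distortion_upper_bound(q, k=2.0):
    """Mean plus k standard deviations of D(q) = N(mean(q), var(q))."""
    mean = a1 * q**2 + a2 * q + a3
    var = b1 * q**2 + b2 * q + b3
    return mean + k * math.sqrt(var)

def max_q_for_target(target, q_range=range(1, 32)):
    """Largest quantization parameter whose distortion bound meets the target.
    A larger q means fewer bits but more distortion, so the largest feasible q
    minimizes the bit rate while respecting the quality level."""
    feasible = [q for q in q_range if distortion_upper_bound(q) <= target]
    return max(feasible) if feasible else None

print(max_q_for_target(target=20.0))  # prints 17 with the parameters above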
// CreateDeploySmartContractGenesisOutState creates some OutStates using shard and txHash.
func CreateDeploySmartContractGenesisOutState(shard shard.Index, txHash *chainhash.Hash) []*wire.OutState {
	setTotalAmountOfEachShard()
	genesisAddr := multivacaddress.GenerateAddress(GenesisPublicKeys[shard], multivacaddress.UserAddress)
	data := wire.MTVCoinData{
		Value: new(big.Int).Div(TotalAmountOfEachShard, big.NewInt(int64(NumberOfOutsInEachGenesisBlock))),
	}
	dataBytes, _ := rlp.EncodeToBytes(data)

	var outStates []*wire.OutState
	var outState *wire.OutState

	sc := wire.SmartContract{}
	sc.ContractAddr = txHash.FormatSmartContractAddress()
	sc.APIList = []string{}
	sc.Code = []byte{}
	scHash := sc.SmartContractHash()
	scBytes := scHash.CloneBytes()

	outState = &wire.OutState{
		OutPoint: wire.OutPoint{
			TxHash:          *txHash,
			Index:           0,
			Shard:           shard,
			UserAddress:     multivacaddress.GenerateAddress(GenesisPublicKeys[shard], multivacaddress.UserAddress),
			ContractAddress: sc.ContractAddr,
			Data:            scBytes,
		},
		State: wire.StateUnused,
	}
	outStates = append(outStates, outState)

	outState = &wire.OutState{
		OutPoint: wire.OutPoint{
			TxHash:          *txHash,
			Index:           1,
			Shard:           shard,
			UserAddress:     multivacaddress.GenerateAddress(GenesisPublicKeys[shard], multivacaddress.UserAddress),
			ContractAddress: sc.ContractAddr,
			Data:            []byte{},
		},
		State: wire.StateUnused,
	}
	outStates = append(outStates, outState)

	outState = &wire.OutState{
		OutPoint: wire.OutPoint{
			TxHash:      *txHash,
			Index:       2,
			Shard:       shard,
			UserAddress: genesisAddr,
			Data:        dataBytes,
		},
		State: wire.StateUnused,
	}
	outStates = append(outStates, outState)

	return outStates
}
/* * Copyright 2014,2017 柏大衛 * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.davidjohnburrowes.io; import com.davidjohnburrowes.formats.jpeg.test.TestUtils; import java.io.IOException; import static org.junit.Assert.*; import org.junit.Before; import org.junit.Test; public class ByteBufferTest { private TestUtils utils; private ByteBuffer bb; @Before public void setUp() { utils = new TestUtils(); bb = new ByteBuffer(); } private void assertState_NoMark() { assertFalse(bb.canAdd()); assertFalse(bb.mustRead()); } private void assertState_MarkWithNoBuffer() { assertTrue(bb.canAdd()); assertFalse(bb.mustRead()); } private void assertState_MarkWithBuffer(int expectedValue) throws IOException { assertTrue(bb.canAdd()); assertFalse(bb.mustRead()); bb.reset(); assertEquals(expectedValue, bb.readByte()); } private void assertState_MarkWithFullBuffer() { assertTrue(bb.canAdd()); assertFalse(bb.mustRead()); bb.addByte((byte)99); assertState_NoMark(); } private void assertState_MarkWithReadBuffer(int expectedValue) { assertFalse(bb.canAdd()); assertTrue(bb.mustRead()); assertEquals(expectedValue, bb.readByte()); } // State: No mark @Test public void initially_inNoMark() { assertState_NoMark(); } @Test public void mark_inNoMark_toMarkWithNoBuffer() { bb.mark(3); assertState_MarkWithNoBuffer(); } @Test(expected=IOException.class) public void reset_inNoMark_throwsException() throws IOException { bb.reset(); } @Test public void canAdd_inNoMark_false() { assertFalse(bb.canAdd()); } @Test public void mustRead_inNoMark_false() { assertFalse(bb.mustRead()); } @Test(expected=IllegalStateException.class) public void addByte_inNoMark_throwsException() { bb.addByte((byte)2); } @Test(expected=IllegalStateException.class) public void readByte_inNoMark_throwsException() { bb.readByte(); } // State: Mark with no buffer @Test public void mark_inMarkWithNoBuffer_toInMarkWithNoBuffer() { bb.mark(3); bb.mark(3); assertState_MarkWithNoBuffer(); } @Test public void reset_inMarkWithNoBuffer_toInMarkWithNoBuffer() throws IOException { bb.mark(3); bb.reset(); assertState_MarkWithNoBuffer(); } @Test public void canAdd_inMarkWithNoBuffer_true() { bb.mark(3); assertTrue(bb.canAdd()); assertState_MarkWithNoBuffer(); } @Test public void mustRead_inMarkWithNoBuffer_false() { bb.mark(2); assertFalse(bb.mustRead()); } @Test public void addByte_inMarkWithNoBuffer_toInMarkWithBuffer() throws IOException { bb.mark(3); bb.addByte((byte)1); assertState_MarkWithBuffer(1); } @Test(expected=IllegalStateException.class) public void readByte_inMarkWithNoBuffer_throwsException() { bb.mark(2); bb.readByte(); } // State: Mark with buffer @Test public void mark_inMarkWithBuffer_toInMarkNoBuffer() { bb.mark(3); bb.addByte((byte)1); bb.mark(3); assertState_MarkWithNoBuffer(); } @Test public void reset_inMarkWithBuffer_toMarkWithReadBuffer() throws IOException { bb.mark(3); bb.addByte((byte)1); bb.reset(); assertState_MarkWithReadBuffer(1); } @Test public void canAdd_inMarkWithBuffer_true() throws IOException { bb.mark(3); bb.addByte((byte)1); 
assertTrue(bb.canAdd()); assertState_MarkWithBuffer(1); } @Test public void mustRead_inMarkWithBuffer_false() throws IOException { bb.mark(3); bb.addByte((byte)1); assertFalse(bb.mustRead()); assertState_MarkWithBuffer(1); } @Test public void addByte_inMarkWithBuffer_toInMarkWithBuffer() throws IOException { bb.mark(3); bb.addByte((byte)1); bb.addByte((byte)2); assertState_MarkWithBuffer(1); } @Test public void addByte_inMarkWithBuffer_toInMarkWithFullBuffer() { bb.mark(2); bb.addByte((byte)1); bb.addByte((byte)2); assertState_MarkWithFullBuffer(); } @Test(expected=IllegalStateException.class) public void readByte_inMarkWithBuffer_throwsException() { bb.mark(2); bb.addByte((byte)1); bb.readByte(); } // State: Mark with full buffer @Test public void mark_inMarkWithFullBuffer_toInMarkNoBuffer() { bb.mark(1); bb.addByte((byte)1); bb.mark(3); assertState_MarkWithNoBuffer(); } @Test public void reset_inMarkWithFullBuffer_toMarkWithReadBuffer() throws IOException { bb.mark(1); bb.addByte((byte)1); bb.reset(); assertState_MarkWithReadBuffer(1); } @Test public void canAdd_inMarkWithFullBuffer_true() throws IOException { bb.mark(1); bb.addByte((byte)1); assertTrue(bb.canAdd()); assertState_MarkWithBuffer(1); } @Test public void mustRead_inMarkWithFullBuffer_false() throws IOException { bb.mark(1); bb.addByte((byte)1); assertFalse(bb.mustRead()); assertState_MarkWithBuffer(1); } @Test public void addByte_inMarkWithFullBuffer_toNoMark() { bb.mark(1); bb.addByte((byte)1); bb.addByte((byte)2); assertState_NoMark(); } @Test(expected=IllegalStateException.class) public void readByte_inMarkWithFullBuffer_throwsException() { bb.mark(1); bb.addByte((byte)1); bb.readByte(); } // State: Mark with read buffer @Test public void mark_inMarkWithReadBuffer_toInMarkWithReadBuffer() throws IOException { bb.mark(2); bb.addByte((byte)1); bb.reset(); bb.mark(3); assertState_MarkWithReadBuffer(1); } @Test public void reset_inMarkWithReadBuffer_toMarkWithReadBuffer() throws IOException { bb.mark(2); bb.addByte((byte)1); bb.reset(); bb.reset(); assertState_MarkWithReadBuffer(1); } @Test public void canAdd_inMarkWithReadBuffer_false() throws IOException { bb.mark(2); bb.addByte((byte)1); bb.reset(); assertFalse(bb.canAdd()); assertState_MarkWithReadBuffer(1); } @Test public void mustRead_inMarkWithReadBuffer_true() throws IOException { bb.mark(2); bb.addByte((byte)1); bb.reset(); assertTrue(bb.mustRead()); assertState_MarkWithReadBuffer(1); } @Test(expected=IllegalStateException.class) public void addByte_inMarkWithReadBuffer_throwsException() throws IOException { bb.mark(2); bb.addByte((byte)1); bb.reset(); bb.addByte((byte)2); } @Test public void readByte_inMarkWithReadBuffer_toMarkWithReadBuffer() throws IOException { bb.mark(3); bb.addByte((byte)1); bb.addByte((byte)2); bb.reset(); assertEquals(1, bb.readByte()); assertState_MarkWithReadBuffer(2); } @Test public void readByte_inMarkWithReadBuffer_toNoMark() throws IOException { bb.mark(1); bb.addByte((byte)1); bb.reset(); assertEquals(1, bb.readByte()); assertState_NoMark(); } }
# This only works for LND.
# Only do this if you understand the process.
# Never enter secrets into online webpages.
# This guide is based on https://www.lightningnode.info/technicals/restorelndonchainfundsinelectrum
# Before starting, download chantools by guggero: <EMAIL>:guggero/chantools.git
# This script extracts the BIP32 root key of your LND node; make sure you are
# offline and on a privacy-preserving OS (Tails).
# Get the root key with `chantools showrootkey`.
# Type your 24-word seed phrase into the terminal, plus the cipher password if you used one.
# You will get the BIP32 root key encoded in Base58.
xpriv_bip32rootkey = '<KEY>'  # redacted in the original; paste your own root key here

import base58
from cryptotools.BTC import Xprv
from cryptotools.ECDSA.secp256k1 import PrivateKey, PublicKey
from bip32 import BIP32

HARDENED_INDEX = 0x80000000
ENCODING_PREFIX = {
    "main": {
        "private": 0x0488ADE4,
        "public": 0x0488B21E,
    },
    "test": {
        "private": 0x04358394,
        "public": 0x043587CF,
    },
}


def _serialize_extended_key(key, depth, parent, index, chaincode, network="main"):
    """Serialize an extended private *OR* public key, as spec by bip-0032.

    :param key: The public or private key to serialize. Note that if this is
                a public key it MUST be compressed.
    :param depth: 0x00 for master nodes, 0x01 for level-1 derived keys, etc..
    :param parent: The parent pubkey used to derive the fingerprint, or the
                   fingerprint itself. None if master.
    :param index: The index of the key being serialized. 0x00000000 if master.
    :param chaincode: The chain code.

    :return: The serialized extended key.
    """
    for param in {key, chaincode}:
        assert isinstance(param, bytes)
    for param in {depth, index}:
        assert isinstance(param, int)
    if parent:
        assert isinstance(parent, bytes)
        if len(parent) == 33:
            # _pubkey_to_fingerprint (hash160 of the pubkey) must be provided
            # elsewhere; it is only needed when a parent pubkey is passed.
            fingerprint = _pubkey_to_fingerprint(parent)
        elif len(parent) == 4:
            fingerprint = parent
        else:
            raise ValueError("Bad parent, a fingerprint or a pubkey is"
                             " required")
    else:
        fingerprint = bytes(4)  # master
    # A privkey or a compressed pubkey
    assert len(key) in {32, 33}
    if network not in {"main", "test"}:
        raise ValueError("Unsupported network")
    is_privkey = len(key) == 32
    prefix = ENCODING_PREFIX[network]["private" if is_privkey else "public"]
    extended = prefix.to_bytes(4, "big")
    extended += depth.to_bytes(1, "big")
    extended += fingerprint
    extended += index.to_bytes(4, "big")
    extended += chaincode
    if is_privkey:
        extended += b"\x00"
    extended += key
    return extended


def _unserialize_extended_key(extended_key):
    """Unserialize an extended private *OR* public key, as spec by bip-0032.

    :param extended_key: The extended key to unserialize __as bytes__

    :return: network (str), depth (int), fingerprint (bytes), index (int),
             chaincode (bytes), key (bytes)
    """
    assert isinstance(extended_key, bytes) and len(extended_key) == 78
    prefix = int.from_bytes(extended_key[:4], "big")
    network = None
    if prefix in list(ENCODING_PREFIX["main"].values()):
        network = "main"
    elif prefix in list(ENCODING_PREFIX["test"].values()):
        network = "test"
    depth = extended_key[4]
    fingerprint = extended_key[5:9]
    index = int.from_bytes(extended_key[9:13], "big")
    chaincode, key = extended_key[13:45], extended_key[45:]
    return network, depth, fingerprint, index, chaincode, key


zprv_prefix = b'\<KEY>'  # redacted in the original (the four zprv version bytes)
zpriv_bip32rootkey = base58.b58encode_check(zprv_prefix + base58.b58decode_check(xpriv_bip32rootkey)[4:]).decode('ascii')

extended_key = base58.b58decode_check(xpriv_bip32rootkey)
(network, depth, fingerprint, index, chaincode, key) = _unserialize_extended_key(extended_key)

private_key_bip32rootkey = PrivateKey(key)
public_key_bip32rootkey = private_key_bip32rootkey.to_public()
public_key_serialized = public_key_bip32rootkey.encode(compressed=True)

extended_pubkey = _serialize_extended_key(public_key_serialized, 0x00, None, 0x00000000, chaincode)
xpub_bip32rootkey = base58.b58encode_check(extended_pubkey).decode()

zpub_prefix = b'\<KEY>'  # redacted in the original (the four zpub version bytes)
zpub_bip32rootkey = base58.b58encode_check(zpub_prefix + base58.b58decode_check(xpub_bip32rootkey)[4:]).decode('ascii')
print("zpub_bip32rootkey: %s" % zpub_bip32rootkey)

bip32 = BIP32.from_xpriv(xpriv_bip32rootkey)

# Derivation path for native segwit addresses.
xpub_extended_key_bip84 = bip32.get_xpub_from_path("m/84'/0'/0'")
zpub_extended_key_bip84 = base58.b58encode_check(zpub_prefix + base58.b58decode_check(xpub_extended_key_bip84)[4:]).decode('ascii')

# Include this output in a watch-only wallet in Electrum.
print("zpub_extended_key_bip84: %s" % zpub_extended_key_bip84)
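As a sanity check before importing the zpub into Electrum, the same python-bip32 package can derive the first BIP84 receive pubkey directly. A minimal sketch follows, assuming python-bip32's get_pubkey_from_path helper; the printed key should match what Electrum derives for its first address:

# Minimal sanity check: derive the first BIP84 external pubkey from the root.
from bip32 import BIP32

xpriv_bip32rootkey = '<KEY>'  # same redacted root key as above
root = BIP32.from_xpriv(xpriv_bip32rootkey)

# m/84'/0'/0'/0/0 is the first receive key of the account exported above.
pubkey = root.get_pubkey_from_path("m/84'/0'/0'/0/0")
print("first receive pubkey:", pubkey.hex())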
import torch
import torch.autograd as autograd
import torch.nn.functional as F
import gc
import pprint
from collections import Counter

import sklearn.metrics.pairwise as pairwise
import numpy as np


def get_optimizer(models, args):
    '''
    -models: List of models (such as Generator, classif, memory, etc)
    -args: experiment level config

    returns: torch optimizer over models
    '''
    params = []
    for model in models:
        params.extend([param for param in model.parameters() if param.requires_grad])
    return torch.optim.Adam(params, lr=args.lr, weight_decay=args.weight_decay)


def get_x_indx(batch, args, eval_model):
    x_indx = autograd.Variable(batch['x'], volatile=eval_model)
    return x_indx


def get_hard_mask(z, return_ind=False):
    '''
    -z: torch Tensor where each element is the probability of that element being selected
    -args: experiment level config

    returns: A torch variable that is a binary mask of z >= .5
    '''
    max_z, ind = torch.max(z, dim=-1)
    if return_ind:
        del z
        return ind
    masked = torch.ge(z, max_z.unsqueeze(-1)).float()
    del z
    return masked


def get_gen_path(model_path):
    '''
    -model_path: path of encoder model

    returns: path of generator
    '''
    return '{}.gen'.format(model_path)


def one_hot(label, num_class):
    vec = torch.zeros((1, num_class))
    vec[0][label] = 1
    return vec


def gumbel_softmax(input, temperature, cuda):
    # Gumbel noise: -log(-log(U)) with U ~ Uniform(0, 1).
    noise = torch.rand(input.size())
    noise.add_(1e-9).log_().neg_()
    noise.add_(1e-9).log_().neg_()
    noise = autograd.Variable(noise)
    if cuda:
        noise = noise.cuda()
    x = (input + noise) / temperature
    x = F.softmax(x.view(-1, x.size()[-1]), dim=-1)
    return x.view_as(input)
import Survey from '../models/survey';
import BaseCtrl from './base';

export default class SurveyCtrl extends BaseCtrl {
  model = Survey;

  getByUser = (req, res) => {
    this.model.find({ userId: req.params.userId }, (err, item) => {
      if (err) {
        return console.error(err);
      }
      res.status(200).json(item);
    });
  }

  getActive = (req, res) => {
    this.model.find({ active: true }, (err, item) => {
      if (err) {
        return console.error(err);
      }
      res.status(200).json(item);
    });
  }
}
from IPython.display import display, Javascript
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
import numpy as np
import inspect

from live.figure import Figure

figures = {}


def figure(name=None, width=30, memory_enabled=True):
    context = get_current_context()
    if context not in figures.keys():
        figures[context] = Figure(name, width, memory_enabled)
    return figures[context]


def imshow(img, width=30, memory_enabled=True):
    fig = figure(width=width, memory_enabled=memory_enabled)
    fig.imshow(img)


def figshow(fig=None, width=30, memory_enabled=True):
    # The original shadowed the `fig` argument with the live figure; keep the
    # two separate so the passed-in matplotlib figure is actually shown.
    live_fig = figure(width=width, memory_enabled=memory_enabled)
    live_fig.figshow(fig)


def vidshow(vid, width=30, fps=10):
    fig = figure(width=width, memory_enabled=False)
    fig.vidshow(vid, fps)


def repeat(shape=None, fps=10):
    fig = figure()
    fig.repeat(shape, fps)


def get_current_context():
    for frame in inspect.stack():
        if "ipython-input" in frame.filename:
            return frame.filename
    return None
/**
 * Reason code of the message status
 */
public static class ReasonCode {

    /**
     * No specific reason code specified.
     */
    public static final int UNSPECIFIED = 0;

    /**
     * Sending of the message failed.
     */
    public static final int FAILED_SEND = 1;

    /**
     * Delivering of the message failed.
     */
    public static final int FAILED_DELIVERY = 2;

    /**
     * Displaying of the message failed.
     */
    public static final int FAILED_DISPLAY = 3;

    /**
     * Incoming one-to-one message was detected as spam.
     */
    public static final int REJECTED_SPAM = 4;
}
1. Field of the Invention The present invention relates to a method for controlling the pausing period of a defrosting operation, for minimizing the temperature deviation in a refrigerator through optimal variation of the pausing period based on the present temperature in the refrigerator and the present pressure at the discharge outlet of a compressor. 2. Description of the Prior Art Generally, a refrigerator includes a heater for a defrosting operation for removing frost formed around an evaporator. When the accumulated operating time of a compressor reaches a first predetermined time (for example, 8 hours) or more, a controlling apparatus for the refrigerator selects a defrosting mode unconditionally and carries out the defrosting operation. In addition, when the accumulated operating time of the compressor reaches a second predetermined time (for example, 5 hours) or more, the controlling apparatus reads the accumulated door-opening time, the operating ratio of the compressor, etc., and carries out the defrosting operation. When the defrosting mode is selected, the controlling apparatus operates the heater for the defrosting operation and detects the temperature around the evaporator through a defrosting sensor. When the detected temperature is at or above a predetermined restoring temperature for the defrosting operation (for example, 13°C), the operation of the heater (i.e., the heat-generating operation) is stopped. Meanwhile, when the detected temperature is at or below the restoring temperature, the heater is operated for a predetermined time (for example, 90 minutes) and then stopped. After operation in the defrosting mode as described above, the temperature in the refrigerator is usually -10°C or higher, whereas during common operation it is approximately -16°C to -20°C. After the defrosting operation, and during the pausing period of the defrosting operation, which is fixed at a predetermined time interval (for example, 4 to 7 minutes), the temperature in the refrigerator can increase further. In order to lower the increased temperature back to the common operating temperature, the operating time of the compressor must be increased, which increases the power consumption of the refrigerator. In addition, because the temperature in the common operating mode differs from that in the defrosting mode, it is difficult to keep food stored in the refrigerator in an optimally fresh state. Accordingly, the controlling method described above fixes the pausing period of the defrosting operation irrespective of the temperature in the refrigerator after the defrosting operation, and thereby induces an increase in the temperature of the refrigerator. A method for automatically controlling the operation of the refrigerator is disclosed in U.S. Pat. No. 5,228,300 (granted to Shim). In this patent, the operations of the compressor and a fan motor are delayed after the completion of a defrosting cycle to minimize the increase of the temperature in the refrigerator.
In the disclosed controlling method, the temperature setting of a chamber, the defrost cycle, and the operation of a compressor and a fan motor are automatically controlled according to the door open/close frequency and the door-open time. According to the present state of a temperature adjusting apparatus and the mean temperature of the refrigerator, the temperature in the refrigerator is lowered, and the defrosting cycle is operated for a predetermined time interval or an auxiliary time interval according to the number of uses of the refrigerator and the door-open time before the defrost cycle. The operations of the compressor and the fan motor are delayed so that the temperature in the refrigerator does not increase after the completion of the defrosting cycle, minimizing the temperature variation in the refrigerator and protecting the stored food. However, after the defrosting operation, the controlling method described above could not actively cope with the increase in the temperature in the refrigerator or with the increase in the power consumption of the refrigerator, because the pausing period of the defrosting operation could not be optimally varied.
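To illustrate the variable pausing period the invention aims at, here is a minimal sketch in Python, assuming hypothetical thresholds and a linear interpolation rule; none of the constants below come from the patent:

# Hypothetical sketch: choose a post-defrost pausing period from the cabinet
# temperature and the compressor discharge pressure. All thresholds are
# illustrative, not taken from the patent.

MIN_PAUSE_MIN = 4.0    # shortest allowed pausing period (minutes)
MAX_PAUSE_MIN = 7.0    # longest allowed pausing period (minutes)

def pausing_period(cabinet_temp_c: float, discharge_pressure_kpa: float) -> float:
    """A warmer cabinet or higher discharge pressure yields a shorter pause,
    so the compressor restarts sooner and limits the temperature rise."""
    # Normalize each input to 0..1 over an assumed operating range.
    temp_urgency = min(max((cabinet_temp_c - (-20.0)) / 10.0, 0.0), 1.0)
    pressure_urgency = min(max((discharge_pressure_kpa - 800.0) / 400.0, 0.0), 1.0)
    urgency = max(temp_urgency, pressure_urgency)
    return MAX_PAUSE_MIN - urgency * (MAX_PAUSE_MIN - MIN_PAUSE_MIN)

print(pausing_period(-12.0, 1100.0))  # a warm cabinet shortens the pause (4.6 min here)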
Zaha Hadid, best known in the UK for the London 2012 Aquatics Centre, the architectural centrepiece of the summer games, has taken first place in a competition to design the new Tokyo National Stadium. The visually striking submission will replace the current, ageing structure, built in 1958, which served as the main venue for the 1964 Summer Olympics. The 54-year-old stadium, designed by Mitsuo Katayama and described by the jury chair Tadao Ando as "announcing the birth of a modern Japanese architecture", will make way for a new venue which Ando said would see a modern Japan "reborn". The stadium is scheduled for completion in 2018, and will play host to the 2019 Rugby World Cup, as well as forming the centrepiece of the 2020 Summer Olympics should Tokyo's bid prove successful. The choice of Hadid may prove to be a controversial one, however, as the single major sports venue her firm has produced was subject to significant budget overruns. The Aquatics Centre was anticipated to cost £72m, but the final figure spiralled to £270m, a figure which may have been even higher had earlier designs for the venue's temporary "wings" been followed. With Japan's self-taught, Pritzker-laureate Ando overseeing a jury containing such architecture heavyweights as Lords Foster and Rogers, the former Japan Football Association President Junji Ogura and Ichiro Kono, who led the unsuccessful 2016 Tokyo Olympic bid, the contest required a number of specific criteria to be met, notably the use of adjustable seating and a retractable roof, while establishing a dialogue with its physical surroundings, particularly the Meiji Shrine. Despite Japan's staggering national debt, the stadium is set to become the world's most expensive venue at current exchange rates. Assuming a construction budget of 130bn yen ($1.62bn/£1.02bn) is fully utilised, it will surpass the $1.6bn paid for the New Meadowlands Stadium, home of both the New York Giants and New York Jets. Moreover, it will not become the home of any of Japan's major professional sports teams, instead playing host only to a potential future Fifa World Cup, IAAF World Championships and concerts by performers of sufficient standing to fill what will become an 80,000-capacity arena. The question of whether a return on investment is a viable prospect when the lifespan of modern venues is 40 years or less remains open, although it can be assumed that it will succeed Saitama Stadium 2002 as the home of Japan's national football team, as the latter continues to be bedevilled by public-transport problems. The competition, in which submissions were invited from firms which had both experience in the design of a minimum 15,000-capacity stadium and had won one of five major architecture awards, carried a prize of 20m yen ($249,532/£157,431), although the prestige associated with being chosen for such a landmark project may outweigh any immediate financial benefits. The 48 entries were eventually whittled down to a shortlist of 11, including bids from the sport specialist Populous, designers of Wembley Stadium, Arsenal's Emirates Stadium and the London 2012 Olympic Stadium; Australia's Cox Architecture, a major player in the design of Sydney's Olympics venues, and Japan's 2010 Pritzker Prize winners SANAA, one of the two favourites for the competition according to Kenplatz, a Japanese engineering and architecture publication.
Personality as the Predictor of Treatment Experiences: A Combined Focus on Relaxation and Catharsis Examines the relationship between fifty-eight participants' pre-treatment personality scores and subsequent ratings of either relaxation or catharsis. Participants completed the Multidimensional Personality Questionnaire prior to their treatment workshops and the Phenomenology of Consciousness Inventory as the measure of their treatment-experience within the workshops. Multivariate Multiple Regression Analyses show personality as a significant predictor of treatment-experience. Univariate analyses reveal different aspects of the treatment-experience are predicted by different functions of personality described as either dispositional-mood or style. As a mood-measure, high Negative Emotionality predicts high Internal Dialogue and low Rationality. Style variables of high Absorption, low Constraint, low Harmavoidance, and high Social Closeness predict the self-altering features of the treatment-experiences. The implication is that personality, through the vicissitudes of mood and the stability of style, provides the structure for our experiences.
Today a political agreement was reached between the Commission, the Council and the European Parliament on the modernisation of the EU's trade defence instruments. The changes agreed today to the EU's anti-dumping and anti-subsidy regulations will make the EU's trade defence instruments more adapted to the challenges of the global economy: they'll become more effective, transparent and easier to use for companies, and in some cases will enable the EU to impose higher duties on dumped products. The deal culminates a process launched by the Commission in 2013 and represents a balanced outcome, taking into account the interests of EU producers, users and importers. President Jean-Claude Juncker said: "Our actions to defend European producers and workers against unfair trading practices must be bold and efficient and today's agreement will provide us with an additional tool to do just that. We are not naïve free traders and the set of changes agreed today confirms that once again. Europe will continue to stand for open markets and rules-based trade but we will not hesitate to resort to our trade defence toolbox to ensure a level playing field for our companies and workers." Trade Commissioner Cecilia Malmström said: "Better late than never. It took us some time to get here, but today's deal means that the EU will have the necessary tools to tackle quickly and effectively unjust trading practices. Together with the recently-agreed changes to the anti-dumping methodology, the EU's tool box of trade defence instruments is in shape to deal with global challenges. The EU stands for open and rules-based trade, but we must ensure that others do not take advantage of our openness. We are and we will continue to stand up for companies and workers suffering from unfair competition." The new rules will shorten the current 9 month investigation period for the imposition of provisional measures and make the system more transparent. Companies will benefit from an early warning system that will help them adapt to the new situation in case duties are imposed. Smaller companies will also get assistance from a specific help desk, to make it easier for them to trigger and participate in trade defence proceedings. Also, in some cases, the EU will adapt its 'lesser duty rule' and may impose higher duties. This will apply to cases targeting imports of unfairly subsidised or dumped products from countries where raw materials and energy prices are distorted. The political agreement reached today will enter into force once the Council and the European Parliament give their final green light. Background Together with the new anti-dumping methodology, this is the first major overhaul of the EU's anti-dumping and anti-subsidy instruments in 15 years. It is the fruit of more than 4 years' labour, including broad consultations with multiple stakeholders and negotiations with member states and the European Parliament. The Commission first proposed a reform of the EU's trade defence instruments in 2013. The Council reached a compromise in December 2016 which allowed for three-way negotiations between it, the Commission, and the European Parliament. More information: EU Trade Defence; New anti-dumping methodology
package yaml

import (
	"bytes"
	"io"

	"github.com/pkg/errors"
	"github.com/valyala/bytebufferpool"
	"gopkg.in/yaml.v2"

	"github.com/why444216978/codec"
)

type YamlCodec struct{}

var _ codec.Codec = (*YamlCodec)(nil)

func (c YamlCodec) Encode(data interface{}) (io.Reader, error) {
	b, err := yaml.Marshal(data)
	if err != nil {
		return nil, err
	}
	return bytes.NewBuffer(b), nil
}

func (c YamlCodec) Decode(r io.Reader, dst interface{}) error {
	if r == nil {
		return errors.New("reader is nil")
	}
	buf := bytebufferpool.Get()
	defer bytebufferpool.Put(buf)
	if _, err := buf.ReadFrom(r); err != nil {
		return err
	}
	return yaml.Unmarshal(buf.Bytes(), dst)
}
import math

# Distance between the tips of a clock's hour and minute hands of lengths
# a and b, after h hours and m minutes.
a, b, h, m = map(int, input().split())

x = 360 * (m / 60)             # minute hand angle, degrees
y = 360 * ((h + m / 60) / 12)  # hour hand angle, degrees
rad = abs(x - y)               # angle between the hands, degrees

# Law of cosines.
ans = pow(a, 2) + pow(b, 2) - 2 * a * b * math.cos(math.radians(rad))
print(pow(ans, 0.5))
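The last two lines are the law of cosines; written out, with hand lengths a and b and angle θ between the hands:

\theta = \left|\, 360^{\circ} \cdot \frac{m}{60} \;-\; 360^{\circ} \cdot \frac{h + m/60}{12} \,\right|,
\qquad
d = \sqrt{a^{2} + b^{2} - 2ab\cos\theta}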
Thymoquinone alleviates the experimentally induced Alzheimer's disease inflammation by modulation of TLRs signaling Alzheimer's disease (AD) is characterized by a robust inflammatory response elicited by the accumulation and deposition of amyloid-β (Aβ) within the brain. Aβ induces detrimental inflammatory responses through the toll-like receptors (TLRs) signaling pathway. Thymoquinone (TQ), the main active constituent of Nigella sativa oil, has been reported in several previous studies for its potent anti-inflammatory effect. The aim of this study is to elucidate the effect of TQ in improving learning and memory, using a rat model of AD induced by a combination of aluminum chloride (AlCl3) and d-galactose (d-Gal). TQ was administered orally at doses of 10, 20, and 40 mg/kg/day for 14 days after AD induction. Memory functions were assessed using the step-through passive avoidance test. Amyloid plaques were shown to be present using hematoxylin and eosin staining. Tumor necrosis factor-alpha (TNF-α) and interleukin-1 beta (IL-1β) levels in the brain were assessed via ELISA, and the expressions of TLR-2, TLR-4, myeloid differentiation factor 88 (MyD88), toll/interleukin-1 receptor domain-containing adapter-inducing interferon-β (TRIF), interferon regulatory factor 3 (IRF-3), and nuclear factor-κB (NF-κB) were profiled via real-time polymerase chain reaction. TQ improved the cognitive decline of the AD rats, decreased Aβ formation and accumulation, significantly decreased TNF-α and IL-1β at all dose levels, and significantly downregulated the expression of the TLRs pathway components as well as their downstream effectors NF-κB and IRF-3 mRNAs at all dose levels (p < 0.05). We concluded that TQ reduced the inflammation induced by the d-Gal/AlCl3 combination. It is therefore reasonable to attribute the anti-inflammatory responses to the modulation of the TLRs pathway.
/*
 * Copyright 2014 Team APPetizer
 * Author: <NAME>
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include "selectconnectiontypescreen.h"

#include "bundle.h"

SayCheesePhotoManager::SelectConnectionTypeScreen::SelectConnectionTypeScreen(Bundle *bundle, MainWindow *mainWindow) :
    SelectConnectionTypeScreenView(bundle, mainWindow)
{
    this->connect(this->backButton, &QPushButton::pressed, this, &SelectConnectionTypeScreen::handleBackButtonPressed);
    this->connect(this->wifiButton, &QPushButton::pressed, this, &SelectConnectionTypeScreen::handleWifiButtonPressed);
}

void SayCheesePhotoManager::SelectConnectionTypeScreen::handleUsbButtonPressed()
{
}

void SayCheesePhotoManager::SelectConnectionTypeScreen::handleWifiButtonPressed()
{
    this->mainWindow->loadScreen(Screen::ServerConfiguration);
}

void SayCheesePhotoManager::SelectConnectionTypeScreen::handleBackButtonPressed()
{
    this->mainWindow->loadPreviousScreen();
}
// DISAMBIGUATION
public class J1_methodInvocationQualified {
    public int bar() {
        return 123;
    }

    public J1_methodInvocationQualified() {}

    public static int test() {
        J1_methodInvocationQualified foo = new J1_methodInvocationQualified();
        return foo.bar();
    }
}
package cn.dagongren8.teamplus.entity;

import java.io.Serializable;
import java.util.Date;

/**
 * <p>
 *
 * </p>
 *
 * @author wanghaihua
 * @since 2021-01-12
 */
public class Message implements Serializable {

    private static final long serialVersionUID = 1L;

    private Integer messageId;
    private Subject subject;
    private User user;
    private String messageContent;
    private Date messageCreatetime;

    public static long getSerialVersionUID() {
        return serialVersionUID;
    }

    public Integer getMessageId() {
        return messageId;
    }

    public void setMessageId(Integer messageId) {
        this.messageId = messageId;
    }

    public Subject getSubject() {
        return subject;
    }

    public void setSubject(Subject subject) {
        this.subject = subject;
    }

    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }

    public String getMessageContent() {
        return messageContent;
    }

    public void setMessageContent(String messageContent) {
        this.messageContent = messageContent;
    }

    public Date getMessageCreatetime() {
        return messageCreatetime;
    }

    public void setMessageCreatetime(Date messageCreatetime) {
        this.messageCreatetime = messageCreatetime;
    }

    @Override
    public String toString() {
        return "Message{" +
                "messageId=" + messageId +
                ", subject=" + subject +
                ", user=" + user +
                ", messageContent='" + messageContent + '\'' +
                ", messageCreatetime=" + messageCreatetime +
                '}';
    }
}
The metabolism of cancer cells: moonlighting proteins and growth control. Cancer is a disease of uncontrolled cell growth that results in cells that pile up into a nonstructured mass, or tumor. Cancer cells exhibit clear metabolic differences from normal cells. More than fifty years ago, the biochemist Otto Warburg noted that tumor cells have high rates of anaerobic glycolysis, and that these high rates persist despite an ample supply of oxygen. Although Warburg and others documented the glycolytic shift of tumor cells, it was not clear that it represented more than a secondary event. The role of metabolism as a determinant of, or contributor to, tumorigenesis was assumed to be unlikely. Recent findings have prompted a revisiting of these assumptions and generated renewed interest in the Warburg effect and its implications with respect to carcinogenesis.
// create subaccount with different client settings
private static void createSubAccountBySettingClient() {
    try {
        SubaccountCreateResponse subaccount = Subaccount.creator("Test 2")
                .enabled(true)
                .client(client)
                .create();
        System.out.println(subaccount);
    } catch (PlivoRestException | IOException e) {
        e.printStackTrace();
    }
}
// RGBToHSL converts the provided rgb value to HSL format.
func RGBToHSL(r, g, b int) (float64, float64, float64) {
	c := colorful.Color{
		R: float64(r) / 255,
		G: float64(g) / 255,
		B: float64(b) / 255,
	}
	return c.Hsl()
}
""" Sphindexer ~~~~~~~~~~ A Sphinx Indexer """ import re from unicodedata import normalize from typing import Any, List, Tuple, Pattern, cast from docutils import nodes from sphinx.domains.index import IndexDomain from sphinx.errors import NoUri from sphinx.locale import _, __ from sphinx.util import logging # Update separately from the package version, since 2021-11-07 __version__ = "3.2.20220227" # x.y.YYYYMMDD[.HHMI] # - x: changes that need to be addressed by the user. # - y: changes that do not require a response from the user. logger = logging.getLogger(__name__) # ------------------------------------------------------------ class Character(object): def chop_mark(self, rawtext): text = normalize('NFD', rawtext) if text.startswith('\N{RIGHT-TO-LEFT MARK}'): text = text[1:] return text def sort_key(self, text): if not text: return (0, '') elif text[0].upper().isalpha() or text.startswith('_'): return (1, text.upper()) else: return (0, text.upper()) class Represent(object): def represent(self, data, end=None): name = self.__class__.__name__ rpr = f"<{name}: {data}" if end: for o in self[0:end]: rpr += repr(o) if self[end].astext(): rpr += repr(self[end]) else: for o in self: rpr += repr(o) return rpr + ">" class Convert(object): _type_to_link = {'see': 1, 'seealso': 2, 'uri': 3} _main_to_code = {'main': 1, '': 2} _code_to_main = {1: 'main', 2: ''} def _type2link(self, link): return self._type_to_link[link] def _main2code(self, main): return self._main_to_code[main] def _code2main(self, code): return self._code_to_main[code] # ------------------------------------------------------------ class Text(Character, nodes.Text): whatiam = 'term' def assort(self): text = self.chop_mark(self) if self.whatiam == 'classifier' and text == _('Symbols'): return (0, text) return self.sort_key(text) class Subterm(Represent, Character, nodes.Element): def __init__(self, link, *terms): if link == 1: template = _('see %s') elif link == 2: template = _('see also %s') else: template = None _terms = [] for term in terms: if term.astext(): _terms.append(term) super().__init__(''.join([repr(term) for term in terms]), *_terms, delimiter=' ', template=template) def __repr__(self, attr=""): attr += f"len={len(self)} " if self['delimiter'] != ' ': attr += f"delimiter='{self['delimiter']}' " if self['template']: attr += f"tpl='{self['template']}' " return self.represent(attr) def __str__(self): """Jinja2""" return self.astext() def __eq__(self, other): """unittest、IndexRack.generate_genindex_data.""" try: return self.astext() == other.astext() except AttributeError: return self.astext() == other def astext(self): if self['template'] and len(self) == 1: return self['template'] % self[0].astext() text = "" for subterm in self: text += subterm.astext() + self['delimiter'] return text[:-len(self['delimiter'])] def assort(self): text = self.chop_mark(self.astext()) return self.sort_key(text) # ------------------------------------------------------------ UNIT_CLSF = 0 # a classifier. The index_key and category_key are variable names. UNIT_TERM = 1 # a primary term. UNIT_SBTM = 2 # a secondary term. class IndexUnit(Represent, nodes.Element): def __init__(self, term, subterm, link_type, main, file_name, target, index_key): super().__init__(repr(term) + repr(subterm), # rawsource used for debug. nodes.Text(''), term, subterm, link_type=link_type, main=main, file_name=file_name, target=target, index_key=index_key) # Text is used to avoid errors in Element.__init__. 
# Since it is always overwritten in IndexRack, consideration for extensibility # isn't needed. def __repr__(self, attr=""): if self['main']: attr += "main " if self['file_name']: attr += f"file_name='{self['file_name']}' " if self['target']: attr += f"target='{self['target']}' " return self.represent(attr, 2) def set_subterm_delimiter(self, delimiter=', '): self[UNIT_SBTM]['delimiter'] = delimiter # ------------------------------------------------------------ _each_words = re.compile(r' *; *') class IndexEntry(Convert, Represent, nodes.Element): other_entry_types = ('list') textclass = Text packclass = Subterm unitclass = IndexUnit def __init__(self, rawtext, entry_type='single', file_name=None, target=None, main='', index_key=''): """ - textclass is to expand functionality for multi-byte language. - textclass is given by IndexRack class. """ self.delimiter = '; ' rawwords = _each_words.split(rawtext) terms = [] for rawword in rawwords: terms.append(self.textclass(rawword, rawword)) super().__init__(rawtext, *terms, entry_type=entry_type, file_name=file_name, target=target, main=main, index_key=index_key) def __repr__(self, attr=""): if self['entry_type']: attr += f"entry_type='{self['entry_type']}' " if self['main']: attr += "main " if self['file_name']: attr += f"file_name='{self['file_name']}' " if self['target']: attr += f"target='{self['target']}' " if self['index_key']: attr += f"index_key='{self['index_key']}' " return self.represent(attr) def astext(self): """ >>> entry = IndexEntry('sphinx; python', 'single', 'document', 'term-1', None) >>> entry.astext() 'sphinx; python' """ text = self.delimiter.join(k.astext() for k in self) return text def make_index_units(self): """ The parts where the data structure changes between IndexEntry and IndexUnit will be handled here. 
>>> entry = IndexEntry('sphinx', 'single', 'document', 'term-1') >>> entry.make_index_units() [<IndexUnit: main file_name='document' target='term-1' <#text: ''><#text: 'sphinx'>>] """ etype = self['entry_type'] fn = self['file_name'] tid = self['target'] main = self['main'] index_key = self['index_key'] def _index_unit(term, sub1, sub2): if etype in ('see', 'seealso'): link = self._type2link(etype) else: link = self._type2link('uri') emphasis = self._main2code(main) if not sub1: sub1 = self.textclass('') if not sub2: sub2 = self.textclass('') subterm = self.packclass(link, sub1, sub2) index_unit = self.unitclass(term, subterm, link, emphasis, fn, tid, index_key) return index_unit index_units = [] try: # _index_unit(term, subterm1, subterm2) if etype == 'single': try: index_units.append(_index_unit(self[0], self[1], '')) except IndexError: index_units.append(_index_unit(self[0], '', '')) elif etype == 'pair': index_units.append(_index_unit(self[0], self[1], '')) index_units.append(_index_unit(self[1], self[0], '')) elif etype == 'triple': index_units.append(_index_unit(self[0], self[1], self[2])) # ' ' index_units.append(_index_unit(self[1], self[2], self[0])) # ' ' index_units.append(_index_unit(self[2], self[0], self[1])) # ' ' index_units[1].set_subterm_delimiter() # the delimiter became ', ' elif etype == 'see': index_units.append(_index_unit(self[0], self[1], '')) elif etype == 'seealso': index_units.append(_index_unit(self[0], self[1], '')) elif etype in self.other_entry_types: for i in range(len(self)): index_units.append(_index_unit(self[i], '', '')) else: logger.warning(__('unknown index entry type %r'), etype, location=fn) except IndexError as err: raise IndexError(str(err), repr(self)) except ValueError as err: logger.warning(str(err), location=fn) return index_units class IndexRack(Convert, Character, nodes.Element): """ 1. self.__init__() Initialization. Reading from settings. 2. self.append() Importing the IndexUnit object. Preparing for self.update_units(). 3. self.update_units() Update each IndexUnit object and prepare for self.sort_units(). 4. self.sort_units() Sorting. 5. self.generate_genindex_data() Generating data for genindex. """ textclass = Text packclass = Subterm unitclass = IndexUnit entryclass = IndexEntry def __init__(self, builder): # Save control information. self.env = builder.env self.config = builder.config self.get_relative_uri = builder.get_relative_uri def create_index(self, group_entries: bool = True, _fixre: Pattern = re.compile(r'(.*) ([(][^()]*[)])') ) -> List[Tuple[str, List[Tuple[str, Any]]]]: """see sphinx/environment/adapters/indexentries.py""" # Save the arguments. self._group_entries = group_entries self._fixre = _fixre # Initialize the container. self._rack = [] # [IndexUnit, IndexUnit, ...] self._classifier_catalog = {} # {term: classifier} self._function_catalog = {} # {function name: number of homonymous funcion} domain = cast(IndexDomain, self.env.get_domain('index')) entries = domain.entries # entries: Dict{file name: List[Tuple(type, value, tid, main, index_key)]} for fn, entries in entries.items(): for entry_type, value, tid, main, ikey in entries: entry = self.entryclass(value, entry_type, fn, tid, main, ikey) index_units = entry.make_index_units() self += index_units self.update_units() self.sort_units() return self.generate_genindex_data() def append(self, unit): """ Gather information for the update process, which will be determined by looking at all units. """ # Gather information. 
self.put_in_classifier_catalog(unit['index_key'], unit[UNIT_TERM].astext()) unit[UNIT_TERM].whatiam = 'term' # Gather information. if self._group_entries: self.put_in_function_catalog(unit, self._fixre) # Put the unit on the rack. self._rack.append(unit) def extend(self, units): for unit in units: self.append(unit) def put_in_classifier_catalog(self, index_key, word): if not index_key: return if not word: raise ValueError(repr(self)) if word not in self._classifier_catalog: # No overwriting. (To make the situation in "make clean" true) self._classifier_catalog[word] = index_key def put_in_function_catalog(self, unit, _fixre): m = _fixre.match(unit[UNIT_TERM].astext()) if m: try: self._function_catalog[m.group(1)] += 1 except KeyError: self._function_catalog[m.group(1)] = 1 else: pass def update_units(self): """Update with the catalog.""" for unit in self._rack: assert [unit[UNIT_TERM]] # Update multiple functions of the same name. if self._group_entries: self.update_unit_with_function_catalog(unit) # Set the classifier. self.update_unit_with_classifier_catalog(unit) def update_unit_with_classifier_catalog(self, unit): ikey = unit['index_key'] term = unit[UNIT_TERM] word = term.astext() # Important: The order in which if/elif decisions are made. if ikey: _key, _raw = self.chop_mark(ikey), ikey elif word in self._classifier_catalog: _key, _raw = self.chop_mark(self._classifier_catalog[word]), word else: _key, _raw = self.make_classifier_from_first_letter(term), word unit[UNIT_CLSF] = self.textclass(_key, _raw) unit[UNIT_CLSF].whatiam = 'classifier' def make_classifier_from_first_letter(self, term): text = self.chop_mark(term.astext()) if text[0].upper().isalpha() or text.startswith('_'): return text[0].upper() else: return _('Symbols') def update_unit_with_function_catalog(self, unit): """ fixup entries: transform func() (in module foo) func() (in module bar) into func() (in module foo) (in module bar) """ i_tm = unit[UNIT_TERM] m = self._fixre.match(i_tm.astext()) # If you have a function name and a module name in the format that _fixre expects, # and you have multiple functions with the same name. if m and self._function_catalog[m.group(1)] > 1: unit[UNIT_TERM] = self.textclass(m.group(1)) if unit[UNIT_SBTM].astext(): subterm = unit[UNIT_SBTM].astext() term = self.textclass(m.group(2) + ', ' + subterm) else: term = self.textclass(m.group(2)) unit[UNIT_SBTM] = self.packclass(unit['link_type'], term) def sort_units(self): """ What is done in Text is done in Text, and what is done in IndexUnit is done in IndexUnit.""" self._rack.sort(key=lambda unit: ( unit[UNIT_CLSF].assort(), # classifier unit[UNIT_TERM].assort(), # primary term unit['link_type'], # see Convert. 1:'see', 2:'seealso', 3:'uri'. unit[UNIT_SBTM].assort(), # secondary term unit['main'], # see Convert. 3:'main', 4:''. unit['file_name'], unit['target']), ) # about x['file_name'], x['target']. # Reversing it will make it dependent on the presence of "make clean". def generate_genindex_data(self): rtnlist = [] _clf, _tm, _sub = -1, -1, -1 for unit in self._rack: # take a unit from the rack. i_clf = unit[UNIT_CLSF] i_tm = unit[UNIT_TERM] i_sub = unit[UNIT_SBTM] i_em = unit['main'] i_lnk = unit['link_type'] i_fn = unit['file_name'] i_tid = unit['target'] i_iky = unit['index_key'] if len(rtnlist) == 0 or not rtnlist[_clf][0] == i_clf: # Enter a clsssifier. rtnlist.append((i_clf, [])) # Post-processing. _clf, _tm, _sub = _clf + 1, -1, -1 # Update _clf to see "(clf, [])" added. Reset the others. 
            r_clsfr = rtnlist[_clf]   # [classifier, [term, term, ...]]
            # r_clfnm = r_clsfr[0]    # classifier is a KanaText object.
            r_terms = r_clsfr[1]      # [term, term, ...]

            if len(r_terms) == 0 or not r_terms[_tm][0] == i_tm:
                # Enter a term.
                r_terms.append((i_tm, [[], [], i_iky]))

                # Post-processing. Update _tm to point at the
                # "(i_tm, [[], [], i_iky])" just added; reset _sub.
                _tm, _sub = _tm + 1, -1

            r_term = r_terms[_tm]        # [term, [[links], [subterm, ...], index_key]]
            # r_term_value = r_term[0]   # term_value is a KanaText object.
            r_term_links = r_term[1][0]  # [(main, uri), (main, uri), ...]
            r_subterms = r_term[1][1]    # [subterm, subterm, ...]

            # Only 'uri' links (code 3) get a target uri; see/seealso entries
            # get none. See Convert.
            if i_lnk == 3:
                # Change the code to a string.
                r_main = self._code2main(i_em)

                # uri
                try:
                    r_uri = self.get_relative_uri('genindex', i_fn) + '#' + i_tid
                except NoUri:
                    continue
            else:
                r_uri = None

            # i_sub (class Subterm): [], [KanaText], or [KanaText, KanaText].
            if len(i_sub) == 0:
                if r_uri:
                    r_term_links.append((r_main, r_uri))
            elif len(r_subterms) == 0 or not r_subterms[_sub][0] == i_sub:
                # Enter a subterm.
                r_subterms.append((i_sub, []))

                # Post-processing.
                _sub = _sub + 1
                r_subterm = r_subterms[_sub]
                # r_subterm_value = r_subterm[0]
                r_subterm_links = r_subterm[1]

                # Enter a link.
                if r_uri:
                    r_subterm_links.append((r_main, r_uri))
            else:
                # Enter a link on the current subterm.
                r_subterm_links = r_subterms[_sub][1]
                if r_uri:
                    r_subterm_links.append((r_main, r_uri))

        return rtnlist

# ------------------------------------------------------------
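For orientation, here is the shape of the nested structure that generate_genindex_data() returns, written out with plain strings. The entry names and uri are illustrative only; the real objects are Text/KanaText instances, not strings.

# Hypothetical example of the genindex data produced above.
genindex_data = [
    ('S',                                    # classifier
     [('sphinx',                             # term
       [[('main', 'document.html#term-1')],  # links: (main, uri) pairs
        [],                                  # subterms: (subterm, [links]) pairs
        None])]),                            # index_key
]

Each classifier groups the terms that sort under it, and each term carries its own link list, its subterm list, and the index_key it was filed under.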
Last week, I pointed out that there is no such thing as a natural social-conservative skew among Latino Americans. But that leaves open a rejoinder, expressed by several readers: The GOP doesn't need to get all of the Latino vote, just its fair share. That's true, and I should have made my point clearer. In the wake of the election, some social conservatives have tried a new version of the old Silent Majority argument, contending that Republicans can continue to make their candidates pass litmus tests on abortion and gay marriage and still win national elections if only the party taps the natural social conservatism of Latinos. Exposing that illusion was the point of the numbers I presented. This time I will explicitly offer a broader argument and then give the numbers.

My thesis is that the GOP is in trouble across the electoral board because it has become identified in the public mind with social conservatism. Large numbers of Independents and Democrats who are naturally attracted to arguments of fiscal discipline, less government interference in daily life, greater personal responsibility, and free enterprise refuse to vote for Republicans because they are so put off by the positions and rhetoric of social conservatives, whom they take to represent the spirit of the "real" GOP. I use Asian-Americans as an example of how powerfully this antipathy can alienate a naturally conservative voting bloc. Let it be clear: The causal link with social conservatism is asserted here, not proved. But the GOP had better take the hypothesis seriously.

Let's start with data from the Current Population Survey from 2003 on some key socioeconomic indicators for adults ages 30–49. (The CPS first started identifying Asians separately from other ethnic groups in 2003.) Politically, a college education is a wash—in the General Social Survey, almost identical proportions of college graduates identify themselves as liberals and conservatives. But Asians are also richer, more often in conservative-skewed professions, equally married, and less often divorced than non-Latino whites—all indicators that normally identify disproportionately conservative voters.

Now let's turn to the political indicators provided by the General Social Survey. Asians are only half as likely to identify themselves as "conservative" or "very conservative" as whites, and less than half as likely to identify themselves as Republicans. Asians are not only a lot more liberal than whites; a higher percentage of Asians identify themselves as "liberal" or "extremely liberal" (22%) than do blacks (19%) or Latinos (17%). And depending on which poll you believe, somewhere in the vicinity of 70% of Asians voted for Barack Obama in the last presidential election.

Something's wrong with this picture. It's not just that the income, occupations, and marital status of Asians should push them toward the right. Everyday observation of Asians around the world reveals them to be conspicuously entrepreneurial, industrious, family-oriented, and self-reliant. If you're looking for a natural Republican constituency, Asians should define "natural."

Can the Republicans write them off as a special case in the same way that Jews have been a special case? That's hard to do, because their stories are so different.
Many of the Jews who immigrated to America had been socialists, trade-union activists, or otherwise committed to the Left in their native lands, and those family traditions have sometimes perpetuated themselves. The great majority of non-political Jewish immigrants came from places where they had been systematically persecuted for being Jews, and it is easy to see how Jews might have an enduring propensity to side with the underdog. In contrast, virtually no Asian Americans came here because they were fleeing persecution for being Asian. They sometimes fled political persecution by the Communists, especially from Vietnam, but that experience tends to produce conservative immigrants, not liberal ones. Usually, Asians came to the United States for the traditional reason: America was the land of opportunity where they could rise in the world. Asian immigrants overwhelmingly succeeded, another experience that tends to produce conservative immigrants. Beyond that, Asian minorities everywhere in the world, including America, tend to be underrepresented in politics—they’re more interested in getting ahead commercially or in non-political professions than in running for office or organizing advocacy groups. Lack of interest in politics ordinarily translates into a “just don’t bother us” attitude that trends conservative. Further, there are reasons for Asian Americans not to like Democrats. Asians who became successful because everyone in the family worked two or three jobs (a common strategy behind Asian success) are likely to be offended by the liberal “You didn’t build that” mentality. Unlike every other minority group, Asians owe nothing to the Democrats for affirmative action. On the contrary, Asians are penalized by affirmative action, especially in the universities, where discrimination against Asian applicants (relative to their superb academic qualifications) has been documented in the technical literature. And yet something has happened to define conservatism in the minds of Asians as deeply unattractive, despite all the reasons that should naturally lead them to vote for a party that is identified with liberty, opportunity to get ahead, and economic growth. I propose that the explanation is simple. Those are not the themes that define the Republican Party in the public mind. Republicans are seen by Asians—as they are by Latinos, blacks, and some large proportion of whites—as the party of Bible-thumping, anti-gay, anti-abortion creationists. Factually, that’s ludicrously inaccurate. In the public mind, except among Republicans, that image is taken for reality.
from django.shortcuts import render

from landing.models import Data


def dataset_preview(request, id):
    template = 'pages/data-preview.html'
    model = Data.objects.get(id=id)
    context = {
        'dataset_id': model,
    }
    return render(request, template, context)


def data_filter(request):
    template = 'pages/data-filter.html'
    type = request.GET['type']
    province = request.GET['province']
    district = request.GET['district']

    if type == 'None' and province == 'None':
        dataset = Data.objects.filter(district=district)
    elif type == 'None' and district == 'None':
        dataset = Data.objects.filter(province=province)
    elif province == 'None' and district == 'None':
        dataset = Data.objects.filter(type=type)
    elif type == 'None':
        dataset = Data.objects.filter(district=district, province=province)
    elif district == 'None':
        dataset = Data.objects.filter(type=type, province=province)
    elif province == 'None':
        dataset = Data.objects.filter(district=district, type=type)
    else:
        dataset = Data.objects.filter(district=district, type=type, province=province)

    context = {
        'dataset': dataset,
    }
    return render(request, template, context)
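The eight-way branch above can be collapsed by building the filter keyword arguments dynamically. Here is a sketch of an equivalent view, assuming the same Data model and the same 'None' sentinel values the form already sends:

def data_filter(request):
    template = 'pages/data-filter.html'

    # Keep only the parameters that were actually chosen in the form.
    params = {
        field: request.GET[field]
        for field in ('type', 'province', 'district')
        if request.GET.get(field) not in (None, 'None')
    }
    dataset = Data.objects.filter(**params)

    return render(request, template, {'dataset': dataset})

One behavioral note: when all three parameters are 'None', this version returns the unfiltered queryset rather than filtering on the literal string 'None' as the first branch of the original does, which is probably the intended behavior.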
2. The Dilemmas of Diffusion: Institutional Transfer and the Remaking of Vocational Training Practices in Eastern Germany

The process of unification, it is thus far clear, is more than a simple transfer of economic and political institutions from West Germany to East Germany. In West Germany, these institutions are embedded in the social structure. In East Germany, without this social structure, these institutions exist as a set of parameters that constrain social and political action. There is no reason to assume that the sum of these actions will produce institutions that are identical or even similar to what we have known in West Germany or that they will not transform the institutions that characterize the Federal Republic as a whole.

Introduction

One of the most important debates in contemporary industrial relations theory and policy is whether or not (and if so, how) institutional practices developed in one setting can be transferred to and implemented effectively in another context. This debate takes place both at the level of company practice - witness the debate over "lean production" in several countries (Womack, Jones and Roos 1990) - and in broader policy discussions over the relevance of works councils in the United States. Yet there are good theoretical reasons why institutional transfer may be difficult to achieve. No institutional practice stands alone; rather, as many have argued already, each practice is situated in a broader institutional and cultural context which shapes its outcomes. In addition to the issue of whether or not different practices complement one another and "fit" together into a coherent system, each of these institutional arrangements also rests on and interacts with distinct sociopolitical relations which shape how these institutions actually work (Locke 1995).

Through a case study of the diffusion of the acclaimed West German "dual system" of vocational training to the former German Democratic Republic, now known as the new federal states (neue Bundesländer), we develop the argument about the importance of local sociopolitical relations for the successful transfer and implementation of institutional arrangements. More specifically, we argue that notwithstanding massive levels of government funding, the presence of the same complementary institutional supports, and the concerted efforts of the country's major social partners, dualistic training arrangements are experiencing significant difficulties taking root in the new federal states. This is due not simply to the particular politics of unification (which entailed the wholesale transfer of West German arrangements regardless of whether or not they were appropriate to Eastern Germany) or even to the paucity of dynamic private firms capable of and willing to train
There are particular tissue repair procedures that require the delivery of therapeutics to a target site in a patient's body. Optimally, these repair procedures are performed in a way that minimizes damage caused by the repair procedure itself while maximizing the accuracy of placement of the therapeutic relative to the damage site. Often these repair sites are difficult to reach within a patient's body, or are sites in tissue where any disturbance of the surrounding environment can exacerbate the injury and limit the repair process. In this light, although repair procedures utilizing biologic therapeutics have become more prominent, delivery procedures have not kept pace. For example, stem cell therapies directed at cartilage or bone repair are now being widely researched, and procedures are being developed to maximize the therapeutic's capacity for a particular target tissue. However, placement of the stem cells at the target tissue site is generally taken for granted, relying on direct placement of the cells by a surgeon, injection of the cells into the site using an 18 g or 20 g needle, or intravenous infusion of the cells into the patient (relying on the cells' inherent capacity to find the correct environment, or on sheer numbers to gain a foothold at the site).

A particular tissue repair site of interest, the intervertebral disc, is utilized herein to further illustrate the concepts discussed above. Intervertebral discs, or discs herein, lie between and separate the vertebrae of the spine. Vertebrae within the spine are referred to as being in the cervical, thoracic, lumbar or sacral regions. The vertebrae together form the spinal column, or spine, whose function is to protect the spinal cord and support the body and head. Discs make up approximately one fourth of the spine's length, each disc acting as a cushion or shock absorber to protect the vertebrae and other aspects of the spine and brain during movement. Discs are generally non-vascular, fibrocartilaginous tissue composed of a nucleus pulposus and an annulus fibrosus. The nucleus pulposus is centrally located in the disc and composed of a mucoprotein gel that resists compression and provides the cushion of the disc. The annulus fibrosus is a series of concentric sheets of collagen fibers that surround and enclose the nucleus pulposus. Since the annulus fibrosus surrounds and thereby encloses the nucleus pulposus, the nucleus pulposus is capable of providing an even distribution of pressure across the disc. The annulus fibrosus also provides a tethering point between the disc itself and the endplates of adjacent vertebrae. Manipulation of the disc environment, annulus or pulposus, can lead to additional damage and can further limit the capability of the disc to be repaired by a therapeutic.

Back pain often results from disruption of one or more discs in a patient's spine. Disc disruption is typically caused by trauma, inflammation, herniation, and/or instability of adjacent vertebral bodies. Conventional therapies address the severity of the disc injury while attempting to minimize risk and cost to the patient. Often, non-surgical approaches are utilized to treat disc-involved back pain; for example, rest, therapeutic exercise and medications are often a first-line defense in the treatment of back pain. These non-surgical approaches are targeted at a gradual and progressive improvement in symptoms for a patient. However, in some circumstances a damaged disc requires surgical intervention to facilitate repair of the damaged tissue.
Surgical intervention includes invasive and/or minimally invasive procedures, where the type of procedure depends on the severity of the injury or damage. With regard to minimally invasive procedures, a number of endoscopes or endoscope-like devices tailored for use in the spine have been developed. For example, disc repair procedures that utilize an endoscope (or other like instrument) include chemonucleolysis, laser-directed techniques, and mechanically directed techniques.

Recently, procedures have been proposed for utilizing biologic therapies in disc repair. However, little advancement has been made to facilitate these new therapies, especially with regard to the placement of the therapeutics at the damage site. These procedures require delivery of materials into the disc, for example delivery of stem cells into a site within the disc, and little progress has been made in such stem cell or therapeutic delivery techniques.

As such, there is a need in the art for improved therapeutic delivery devices and methods for the delivery of a therapeutic to a site in a patient: delivery with high accuracy while minimizing disturbance to the environment of the damage site. These devices and methods can be used in the treatment of discs, ligaments, labra and other like sites. Against this backdrop the present disclosure is provided.
SAN DIEGO, Calif. (CBS 8) -- District Attorney Bonnie Dumanis announced Wednesday she plans to meet with the family of a young mother, murdered last week on the campus of San Diego City College.

The DA's office has come under fire for not filing felony charges in an earlier domestic violence case involving the estranged husband of the murder victim, 19-year-old Diana Gonzalez.

Dumanis appeared at the Mid-City police department in City Heights Wednesday as part of a news conference to promote domestic violence awareness and prevention. "All of our hearts go out to Diana's family and friends," Dumanis said as she opened the event. "I can't imagine the pain that they are feeling right now."

For the first time since Gonzalez's murder on the evening of October 12, Dumanis fielded questions from reporters. Dumanis said she wants to meet with family members of the victim in private to address their concerns about her office's handling of the case. "I would discuss it with the victim's family. In fact, I have plans to see the victim's family," Dumanis said.

The DA would not answer questions from reporters in any detail, however, about why her office declined to file charges in the earlier case last month, when Gonzalez's estranged husband – 37-year-old Armando Perez – was arrested on suspicion of assaulting, kidnapping and sexually assaulting Gonzalez. Perez is now believed to be hiding in Mexico. The district attorney filed murder charges against Perez Friday.

Three weeks before her death, Gonzalez reported to police that Perez had choked her to unconsciousness near the City College campus, kidnapped her, and repeatedly raped her in two different motel rooms over the course of three days. The police report notes, "there was redness to (Gonzalez's) entire face, and her forehead and cheeks were covered with small red spots." The report also details a half-inch diameter bruise on Gonzalez's inner wrist. A sexual assault exam was performed on Gonzalez at the time, according to the report.

Officers arrested Perez September 24, but prosecutors did not file charges and he was released from jail five days later. "We don't file cases where there is insufficient evidence to support a conviction beyond a reasonable doubt," Dumanis said. "That's our threshold in every case and, as you know, that was not met in this case and that's why charges were not filed."

Dumanis declined to answer questions about the supporting evidence in the kidnapping case and whether Gonzalez had been cooperating with the investigation before her murder. Gonzalez obtained a restraining order against Perez one day before he got out of jail.

At Wednesday's news conference, one reporter asked Dumanis, "So, you're not prepared at this time to say that your office made any mistakes in dropping these charges?"

Dumanis responded, "I'm not prepared to discuss this case at all. We do our own internal review and we will continue to do that."

The victim's sister, Janette Gonzalez, believes prosecutors dropped the ball by not pressing charges in the kidnapping case. "There was enough evidence for them to do something about it," she said. "I think if they would have pressed charges on him, she would still be (alive). I don't understand why or what they needed."

As for meeting with Bonnie Dumanis, Janette Gonzalez said she mainly wants to know what's being done to bring Armando Perez to justice. She said, at this point, nothing can bring her sister back. "All we want is for this not to happen again," she said.
"I mean, hopefully, they get the message and this doesn't happen to anybody else." Dumanis said she will take questions from the media after the case has been adjudicated. Meanwhile, San Diego Crimestoppers is offering a $1,000 reward for information that leads to the arrest of Armando Perez.
from aiohttp import web


def _mount_routes(self, app: web.Application):
    # Register every collected endpoint on the aiohttp application.
    # self._endpoints maps (url, method) pairs to handler callables.
    for (url, method), fn in self._endpoints.items():
        self._mount_one_route(method, url, fn, app)
    # Expose the system health-check route last.
    self._mount_system_health(app)
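The helpers _mount_one_route and _mount_system_health are not shown in this fragment. A minimal sketch of what they might look like, assuming aiohttp's standard router API (both method bodies and the '/health' path are hypothetical, not taken from the original source):

async def _system_health(request: web.Request) -> web.Response:
    # Hypothetical liveness endpoint.
    return web.json_response({'status': 'ok'})


def _mount_one_route(self, method: str, url: str, fn, app: web.Application):
    # aiohttp accepts the HTTP method as an upper-case string, e.g. "GET".
    app.router.add_route(method, url, fn)


def _mount_system_health(self, app: web.Application):
    app.router.add_get('/health', _system_health)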
Hawks-Bulls Preview
By MATT BECKER
Posted Jan 03 2012 12:28AM

The Chicago Bulls have won three in a row and are coming off a 40-point victory. The Atlanta Hawks likely won't be too intimidated.

In their first meeting since last season's conference semifinals, the surging Bulls host a Hawks team coming off an impressive win in their first game of a daunting stretch.

After opening the season with a 3-1 road trip, Chicago (4-1) made its long-awaited home opener Sunday night against Memphis, and didn't disappoint the United Center fans. The Bulls jumped out to a 26-point halftime advantage, led by as much as 46 in the third quarter and cruised to a 104-64 victory. The game was so lopsided, Carlos Boozer and Joakim Noah spent more time on the bench than on the court and Derrick Rose played just 25 minutes.

There were a few concerns for the Bulls, though, as starting shooting guard Richard Hamilton was scratched shortly before tip-off with a sore left groin and backup point guard C.J. Watson sprained his left elbow late in the game. Hamilton will likely be a game-time decision against the Hawks (4-1), while Watson is unlikely to play.

Ronnie Brewer, who started in place of Hamilton, and Boozer led the Bulls with 17 points apiece Sunday and Rose finished with 16 points and six assists. Chicago is averaging 108.7 points during its three-game winning streak, while outrebounding opponents by an average of 9.3 boards.

"I thought we had a chance to win (the championship) last year. I think we have a chance to win this year," Boozer told the Bulls' official website. "We're going to have a chance every year. Aside from Olympics and All-Star teams, this is the most talented team I've been on in the NBA."

Boozer and the Bulls got past Indiana and Atlanta in last season's playoffs before being eliminated by Miami in five games. The Heat opened this season with five straight victories and went into Monday's game against Atlanta coming off a 39-point win over Charlotte the night before, but couldn't keep their perfect record intact as the Hawks won 100-92.

Joe Johnson had 21 points and Tracy McGrady hit a pair of 3-pointers in the final 2:26 of the fourth quarter and scored 13 of his 16 points in the final period. Al Horford also had 16 points for the Hawks, who finished with a season-low 10 turnovers.

"Great execution," McGrady said. "That's what you have to do when you're playing great teams."

Atlanta will see two of the Eastern Conference's top teams this week. After facing Chicago, the Hawks return home to open a back-to-back-to-back stretch with a rematch against the Heat on Thursday. They'll then play in Charlotte the next night and conclude the three-games-in-three-nights stretch Saturday at home against the Bulls.

Atlanta took Game 1 of last season's playoff series with Chicago at the United Center, but the Bulls went on to win the series in six games. Rose led Chicago, averaging 29.8 points and 9.8 assists. Johnson scored 34 on 12-of-18 shooting in upsetting the Bulls in Game 1, but averaged 16.8 points in the other five games.

Copyright 2012 by STATS LLC and Associated Press. Any commercial use or distribution without the express written consent of STATS LLC and Associated Press is strictly prohibited.
Deng's layup with 3.7 left leads Bulls past Hawks
Posted Jan 04 2012 12:13AM

CHICAGO (AP) With Derrick Rose having a big fourth quarter for the Chicago Bulls, everyone expected the reigning NBA MVP to take the final shot with the score tied. Instead, it was Luol Deng who got the game-winner in the closing seconds.

Deng's layup with 3.7 seconds left lifted the Bulls to a 76-74 victory over the Atlanta Hawks on Tuesday night. Rose scored 17 of his 30 points in the fourth quarter to rally the Bulls from a 19-point deficit.

Coming out of a timeout, Deng cut along the baseline and took a feed from Joakim Noah to put Chicago ahead.

"At the end of practice, we always run (the play)," Deng said. "(Joakim) made a great pass. We had a feeling both of (the defenders) would go with Derrick."

Turns out the play was designed for Rose after all.

"Obviously, we're trying to get it to Derrick," Bulls coach Tom Thibodeau said. "They did a good job taking the first and second option away. Derrick set a great screen, (Noah) made a great pass and Luol made a great cut."

The play caught the Hawks off guard as they were anticipating Rose to get the last shot.

"We all assumed that the ball was coming back to Rose," Atlanta's Al Horford said. "We were going to come and trap. That was a great play by their coach designed to take the pressure from Rose and hit Deng on the back. We didn't expect that at all."

Joe Johnson shot an airball at the buzzer, giving the Bulls the win. Deng finished with 21 points for the Bulls and scored eight over the last four minutes after Atlanta re-established a lead.

"You can't say enough about Luol," Thibodeau said. "He does everything."

Rose missed a runner with 21 seconds left, but Atlanta's Jeff Teague missed two free throws. Rose then drove past Teague and scored over the Hawks' Josh Smith with 9.9 seconds left to put the Bulls up 74-73.

Horford, who led the Hawks with 16 points, was fouled when Deng ran into him near the top of the key. Horford split two free throws with 7.7 seconds remaining, tying the score and setting up Deng's heroics.

"We stopped defending with the intensity that we did in the first three quarters," Hawks coach Larry Drew said. "We didn't make our free throws going down the stretch. If you don't make your free throws on the road you can't expect to win."

Marvin Williams added 14 points and Josh Smith had 13 for Atlanta, which shot just 35 percent from the floor but held Chicago to 34 percent. The Bulls have won four straight and their 5-1 start is their best since opening 12-0 in 1996-97.

Rose scored 11 points early in the fourth, including three 3-pointers. His pullup 3 cut Atlanta's lead to two with 8:15 to play and capped the Bulls' 20-3 run.

"We were right there with them," Horford said. "We dominated for most of the game. Just Derrick Rose happened."

Two minutes later, Rose passed to Deng for a 3 in the corner, evening the score at 62 for the game's first tie.

"I was saying something the whole game, cursing and everything," Rose said when asked if he said anything between the third and fourth quarters to fire his team up. "We got things together at the right time."

Johnson answered with a 3 of his own to spark a six-point Atlanta run. Chicago closed the gap to one on a series of free throws by Deng, setting up the stretch run.
The Bulls had won three straight coming in, including a 40-point rout of Memphis on Sunday, but didn't lead until Rose's spectacular crossover move and layup put Chicago up 72-71 with 57.9 seconds to play.

The Hawks played in a back-to-back game for the third time in the first 10 days of the season and for the fourth time in five nights, all games in different cities, so perhaps it wasn't surprising that they faltered down the stretch.

"It was disappointing the way we started the game," Thibodeau said. "We played a low energy game. They played great and we caught a break (because) this was the second night for them in a back to back."

Chicago shot 2 for 21 in the second period and went more than 8 minutes without a field goal during one stretch, missing 14 straight shots. Atlanta led 38-26 at halftime as Chicago scored just three points more than its franchise record for fewest points in a half, set on April 10, 1999, against Miami.

After winning 62 games last season, the Bulls aren't satisfied with ugly wins this time around.

"A win is a win and it's hard to win in this league," Rose said. "But we know we're 10 times better than what we showed out there. I felt bad for our fans to see us play that bad."

NOTES: Ronnie Brewer started in place of Richard Hamilton, who sat out his second straight game with a strained left groin. Hamilton was a game-time scratch from the starting lineup. ... Chicago's C.J. Watson missed the game with a strained left elbow suffered on Sunday. Watson is listed as day-to-day by the team but was seen before the game with his arm in a sling. ... Atlanta handed Miami its first loss on Monday and was trying to knock off one of last season's Eastern Conference finalists for the second straight night. ... After a day off on Wednesday, the Hawks open a stretch of three games in three nights at home, starting with a matchup against Miami on Thursday. The last game of the stretch is against the Bulls.

Copyright 2012 by STATS LLC and Associated Press. Any commercial use or distribution without the express written consent of STATS LLC and Associated Press is strictly prohibited.
Application of Probabilistic Robustness Framework: Risk Assessment of Multi-Storey Buildings under Extreme Loading

Abstract Risk assessment is a requirement for robustness design of high consequence class structures, yet very little guidance is offered in practice for performing this type of assessment. This paper demonstrates the application of the probabilistic risk assessment framework arising from COST Action TU0601 to multi-storey buildings subject to extreme loading. A brief outline of the probabilistic framework is first provided, including the main requirements of describing uncertainty in the hazards and the associated local damage as well as the consequences of global failure. From a practical application perspective, it is emphasised that there is a need for (a) computationally efficient deterministic models of global failure for specific local damage scenarios, and (b) effective probabilistic simulation methods that can establish the probability of global failure conditional on local damage. In this respect, this work utilises a recently developed multi-level deterministic assessment framework for multi-storey buildings subject to sudden column loss, which is coupled with a response surface approach utilising first-order reliability methods to establish the conditional probability of failure. The application of the proposed approach is illustrated for a multi-storey steel-composite building, where it is demonstrated that probabilistic risk assessment is a practical prospect. The paper concludes with a critical appraisal of probabilistic risk assessment, highlighting areas of future improvement.
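For readers new to this family of frameworks, the risk decomposition that such assessments rest on is commonly written as a sum over hazards and local damage states; the notation below is a standard sketch, not taken from the paper:

\[
R = \sum_{H} \sum_{D} C(F)\, P(F \mid D)\, P(D \mid H)\, P(H)
\]

where \(H\) ranges over hazard scenarios, \(D\) over local damage states, \(F\) denotes global (disproportionate) failure, and \(C(F)\) its consequences. The coupled response-surface/first-order reliability step described in the abstract targets the \(P(F \mid D)\) term.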
/*
 * Copyright 2015-2017 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package kr.gooroom.gpms.common.controller;

import java.security.Principal;
import java.text.SimpleDateFormat;
import java.util.Date;

import javax.annotation.Resource;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.codehaus.jettison.json.JSONException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.propertyeditors.CustomDateEditor;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.InitBinder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.servlet.ModelAndView;

import kr.gooroom.gpms.account.service.AccountVO;
import kr.gooroom.gpms.account.service.LoginService;
import kr.gooroom.gpms.client.service.ClientService;
import kr.gooroom.gpms.client.service.ClientSummaryVO;
import kr.gooroom.gpms.common.GPMSConstants;
import kr.gooroom.gpms.common.service.GpmsCommonService;
import kr.gooroom.gpms.common.service.ResultVO;
import kr.gooroom.gpms.common.service.StatusVO;
import kr.gooroom.gpms.common.service.impl.EmailServiceImpl;
import kr.gooroom.gpms.common.utils.MessageSourceHelper;
import kr.gooroom.gpms.user.service.AdminUserService;
import kr.gooroom.gpms.user.service.AdminUserVO;

/**
 * Handles requests for main process
 * <p>
 * home page service.
* * @author HNC * @version 1.0 * @since 1.8 */ @Controller public class MainController { private static final Logger logger = LoggerFactory.getLogger(MainController.class); @Resource(name = "loginService") private LoginService loginService; @Resource(name = "adminUserService") private AdminUserService adminUserService; @Resource(name = "clientService") private ClientService clientService; @Resource(name = "gpmsCommonService") private GpmsCommonService gpmsCommonService; @Autowired public EmailServiceImpl emailService; /** * initialize binder for date format * <p> * ex) date format : 2017-10-04 * * @param binder WebDataBinder * @return void * */ @InitBinder public void initBinder(WebDataBinder binder) { SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); binder.registerCustomEditor(Date.class, new CustomDateEditor(dateFormat, true)); } /** * show(generate) home(main) page * * @param req HttpServletRequest * @param res HttpServletResponse * @param principal Principal * @return ModelAndView home * @throws JSONException */ // @GetMapping(value = "/home") // public ModelAndView loginSuccess(HttpServletRequest request, HttpServletResponse response, Principal principal) // throws JSONException { // // String userId = principal.getName(); // try { // request.getSession().setAttribute("AccountVO", (AccountVO) loginService.getLoginInfo(userId)); // } catch (Exception e1) { // e1.printStackTrace(); // } // // ModelAndView mv = new ModelAndView("/"); // // try { // ResultVO vo = adminUserService.selectAdminUserData(userId); // if (vo != null && vo.getData() != null && vo.getData().length > 0) { // AdminUserVO user = (AdminUserVO) vo.getData()[0]; // if (user != null) { // mv.addObject("adminName", user.getAdminNm()); // mv.addObject("adminId", user.getAdminId()); // } // } // } catch (Exception e) { // e.printStackTrace(); // } // // return mv; // } @GetMapping(value = "/testMail") public ModelAndView testMail(HttpServletRequest request, HttpServletResponse response, Principal principal) throws JSONException { ModelAndView mv = new ModelAndView("/"); String cont = "<html><h1>Hello</h1><br>world!</html>"; emailService.sendSimpleMessage("<EMAIL>", "mail title", cont); return mv; } /** * show(generate) dashboard page * * @param req HttpServletRequest * @param res HttpServletResponse * @param model ModelMap * @return ModelAndView home * @throws JSONException */ @GetMapping(value = "/pageDashboard") public ModelAndView pageDashboard(HttpServletRequest req, HttpServletResponse res, ModelMap model) { ModelAndView mv = new ModelAndView("pageDashboard"); ClientSummaryVO vo = new ClientSummaryVO(); // dashboard items try { // 1. client status ResultVO result = clientService.getClientStatusSummary(); if (GPMSConstants.MSG_SUCCESS.equals(result.getStatus().getResult())) { ClientSummaryVO reVO = (ClientSummaryVO) result.getData()[0]; vo.setTotalCount(reVO.getTotalCount()); vo.setOnCount(reVO.getOnCount()); vo.setOffCount(reVO.getOffCount()); } // 2. user login status result = clientService.getLoginStatusSummary(); if (GPMSConstants.MSG_SUCCESS.equals(result.getStatus().getResult())) { ClientSummaryVO reVO = (ClientSummaryVO) result.getData()[0]; vo.setLoginCount(reVO.getLoginCount()); vo.setUserCount(reVO.getUserCount()); } // 3. 
package status result = clientService.getUpdatePackageSummary(); if (GPMSConstants.MSG_SUCCESS.equals(result.getStatus().getResult())) { ClientSummaryVO reVO = (ClientSummaryVO) result.getData()[0]; vo.setNoUpdateCount(reVO.getNoUpdateCount()); vo.setUpdateCount(reVO.getUpdateCount()); vo.setMainUpdateCount(reVO.getMainUpdateCount()); } mv.addObject("clientSummary", vo); } catch (Exception e) { e.printStackTrace(); } mv.addObject("clientData", "{\"total\":\"123\"}"); return mv; } /** * response available network data for gpms connect * * @param req HttpServletRequest * @param res HttpServletResponse * @param model ModelMap * @return ResultVO result data bean * */ @PostMapping(value = "/readGpmsAvailableNetwork") public @ResponseBody ResultVO readGpmsAvailableNetwork(HttpServletRequest req, HttpServletResponse res, ModelMap model) { ResultVO resultVO = new ResultVO(); try { resultVO = gpmsCommonService.getGpmsAvailableNetwork(); } catch (Exception ex) { logger.error("error in readGpmsAvailableNetwork : {}, {}, {}", GPMSConstants.CODE_SYSERROR, MessageSourceHelper.getMessage(GPMSConstants.MSG_SYSERROR), ex.toString()); if (resultVO != null) { resultVO.setStatus(new StatusVO(GPMSConstants.MSG_FAIL, GPMSConstants.CODE_SYSERROR, MessageSourceHelper.getMessage(GPMSConstants.MSG_SYSERROR))); } } return resultVO; } /** * show(generate) refuse page * <p> * not available network client * * @param req HttpServletRequest * @param res HttpServletResponse * @param model ModelMap * @return ModelAndView home * */ @PostMapping(value = "/refusePage") public ModelAndView pageClient(HttpServletRequest req, HttpServletResponse res, ModelMap model) { ModelAndView mv = new ModelAndView("refusePage"); return mv; } }
import warnings

warnings.warn(
    "Templatetags `djangular_tags` have been renamed to `djng_tags`."
)

from .djangular_tags import *
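As written, the shim emits a plain UserWarning attributed to this module rather than to the caller. A slightly more conventional version of the same shim, assuming the same module layout, would pass an explicit DeprecationWarning category and a stacklevel so the warning points at the importing module:

import warnings

warnings.warn(
    "Templatetags `djangular_tags` have been renamed to `djng_tags`.",
    DeprecationWarning,
    stacklevel=2,
)

from .djangular_tags import *  # noqa: E402,F401,F403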
Exports of manufactured goods in Colorado grew 5.7 percent during the past decade, generating $6.7 billion in 2011. And those in the industry agree that the market likely will continue to expand as more companies see the benefits of making goods locally.

Mark Manger's newest invention is the GrOpener, advertised as the world's fastest one-handed bottle opener. Manger, a photographer, applied artist and inventor from Denver, knew exactly how he wanted to manufacture the GrOpener: locally. "I want to make something here," Manger explains.

In 2007, Manger invented what he calls the Zoot Snoot, a tool for photographers. Manger wanted to build the product in the United States, and so started making calls to local manufacturers to obtain price quotes. The result, he says, was plenty of costly options that wouldn't have made financial sense. Finally, he found a company in California that could connect him with a low-cost manufacturing option in Taiwan. The overall experience was not ideal, Manger says.

"It was fine in the end, it just took a long time," Manger says, noting that it took months to receive a prototype and the design process was difficult because the manufacturer didn't offer any feedback. "I would have liked to be able to talk to someone and have input."

Ultimately, Manger concludes: "That was a tedious way to do it."

For the GrOpener, Manger decided to do things differently, so he started making some calls. Manger eventually found that he could keep costs low if he had the extrusions for the gadget done in Utah and then had those components shipped to Denver. Then Manger found separate but inexpensive locations in Denver for the necessary cutting, tumbling, anodization and logo application. Manger said he adds the GrOpener's magnets himself. "Once I started calling and asking for quotes … I found it was very affordable to have it done here," he says. Today the GrOpener is available online and at seven Denver retail locations.

Manufacturing -- and re-shoring -- on the rise

According to the U.S. Commerce Department, American exports totaled $2.2 trillion in 2012, eclipsing the previous record of $2.1 trillion in exports in 2011. The Obama administration says the figures offer "proof that 'Made in the USA' products are in demand all over the world." Separately, the National Association of Manufacturers found that manufacturers in Colorado accounted for 7.8 percent of the state's total output in 2011, or $20.6 billion, and employed 5.7 percent of the total workforce -- figures that are also an increase over previous years.

And perhaps most importantly, Boston Consulting Group predicts that 2.5 million to 5 million direct and indirect manufacturing jobs will return to the U.S. from abroad by 2020.

Manger's GrOpener is part of that "re-shoring" trend, and is by no means alone. Tim Nakari, Director of Marketing for Denver-based manufacturing company Intertech Plastics, explains that "companies are bringing things back to the U.S."
Nakari says that, in some cases, it's cheaper for companies to build products locally. For example, he says one of Intertech's customers decided to build its plastic storage bins in Denver because it was less expensive to build them locally and then distribute them in the western part of the country than ship them all the way from China.

Further, some designers -- like Manger -- are concerned that overseas manufacturers might steal their designs. "There's less of an anticompetitive threat if they stay domestic," Nakari explains.

Finally, Nakari says that financing is more readily available in the U.S. "It's easier to finance things in the U.S. rather than wiring money overseas," he notes, explaining that Chinese manufacturers generally only accept cash while U.S.-based manufacturing companies can work on credit. "That's something that not everyone realizes when they say, 'Oh, I'll just manufacture this in China.'"

"We get inquiries from companies [looking to return manufacturing operations to the United States] almost every day now," Nakari concludes. "It just makes you feel good as a manufacturer."

'Made in the USA' a big selling point

Aside from cost, some manufacturers see domestic operations as a key marketing message.

"We do all the manufacturing of all our fishhooks here," says Chris Russell, marketing director for Wright & McGill Co., owners of the Eagle Claw-brand fishhook. "It gives us a point of differentiation. That's a point we can really take to market."

Eagle Claw, which has manufactured its fishhooks in Denver since 1925, outsourced the manufacture of its Eagle Claw-branded rods and reels in the 1960s to save costs on the labor-intensive products. But Russell said the company still makes fishhooks locally, which he says saves the company money on shipping, and "the quality we can deliver we feel is a huge benefit."

"Our company has always said, 'We're going to make them here,'" Russell says.

The "Made in USA" stamp is also important in the relatively new business of snowboarding. While rivals like K2 and Burton have moved manufacturing overseas, snowboard maker Never Summer Industries plans to continue building its snowboards and skis here in Denver. "As for outsourcing, that's not really an option for us," says Vince Sanders, product developer for Never Summer.

In fact, Never Summer is moving into a new manufacturing facility here in Denver, expanding to 26,000 square feet from the company's previous location of 19,000 square feet. The company currently churns out 120 snowboards, 60 longboards and 50 pairs of skis per day. Sanders says that most of Never Summer's materials are local too: The company's raw wood blocks come from Fort Collins, for example.
Insights into organic waste management practices followed by dairy farmers of Ludhiana District, Punjab: Policy challenges and solutions

The study was conducted in Ludhiana District of Punjab (India) to understand the organic waste management practices followed by dairy farmers of the area. To investigate these practices, an ex-post facto research design was used, and a total of 80 dairy farmers, grouped as small and large dairy farmers, were selected randomly for the study. Results revealed that the majority of the farmers were using paddy straw as animal bedding, followed by in situ burning. As far as paddy stubbles were concerned, most of the farmers were mulching them, followed by in situ burning. All farmers were found to be using wheat straw as livestock feed and mulching wheat stubbles. For household waste, the majority of the farmers were found to be feeding kitchen waste to their livestock and preparing farmyard manure from garden waste and paper waste. For dairy waste management, all the farmers were preparing farmyard manure from dung and discarding livestock urine in drains. A little more than half of the farmers were producing biogas from the dairy waste. The majority of the dairy farmers of the research area were found to have low organic waste utilization scores. Relational analysis showed that social participation and knowledge level had a highly significant (p<0.01) positive effect on the organic waste utilization score. The study therefore concluded that the knowledge level of the farmers needs to be enhanced for better and more effective utilization of organic waste.
ANTONIO CONTE has sung Liverpool's praises ahead of Chelsea's pre-season friendly.

The two Premier League giants face each other in the International Champions Cup in California tomorrow. Conte, who has overseen one win and one defeat in pre-season, thinks Jurgen Klopp needs time to transmit his ideas at Anfield.

"Jurgen Klopp worked already for six months with this team and it's very important for him and Liverpool to understand each other," Conte said. "Liverpool is a great team and can fight at the end to win the title with the other teams. It's important to work and have the time to transfer your methods and ideas of football."

The Reds reached the finals of the Capital One Cup and Europa League last term. And Conte is convinced the Merseyside outfit will be title contenders next season.

"Liverpool is one of the best teams in the league and I'm sure Liverpool can fight to win the title," he said. "It is a good game for us, but winning is not the most important thing. We try to work on different aspects, and we've worked a lot. But we want to play a good game and I want to see improvement from my players. A game against a good opponent is a good game and I'm very curious to see my team against Liverpool."
The Human Genome Project: the role of analytical chemists.

The Human Genome Project (HGP) is the most ambitious and important effort in the history of biology. It has provided a complete genetic blueprint for human life and will provide important insights into human health and development. The HGP involves a huge amount of data that is stored on computers all over the world. More than just vast amounts of DNA sequence, the project is about developing sets of integrated maps that combine genetic, physical, and sequence data. The data can be sorted, annotated and organized in many different ways using different types of database software, different analysis algorithms and different forms of interfaces. The genomic sequence of the human and substantial portions of the mouse genome are expected to be finished by 2005. Analytical chemists seized the opportunity, addressing the problem of achieving high throughput with good sensitivity. This paper discusses how analytical chemists saved the Human Genome Project, or at least gave it a helping hand.
package top.wilsonlv.jaguar.oauth2.component;

import lombok.RequiredArgsConstructor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.data.redis.core.BoundValueOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.stereotype.Component;
import top.wilsonlv.jaguar.oauth2.Oauth2Constant;
import top.wilsonlv.jaguar.oauth2.config.security.FeignSecurityConfigurer;
import top.wilsonlv.jaguar.oauth2.model.SecurityAuthority;
import top.wilsonlv.jaguar.oauth2.model.SecurityUser;

import java.io.Serializable;
import java.util.Collections;

/**
 * @author lvws
 * @since 2021/12/2
 */
@Component
@RequiredArgsConstructor
@ConditionalOnMissingBean(AuthorizationServerConfigurerAdapter.class)
public class RedisResourceServerServiceImpl implements UserDetailsService {

    private final RedisTemplate<String, Serializable> redisTemplate;

    @Override
    public SecurityUser loadUserByUsername(String serverId) throws UsernameNotFoundException {
        // Look up the cached resource server by its id.
        BoundValueOperations<String, Serializable> operations =
                redisTemplate.boundValueOps(Oauth2Constant.RESOURCE_SERVER_CACHE_KEY_PREFIX + serverId);
        Serializable resourceServer = operations.get();
        if (resourceServer == null) {
            throw new UsernameNotFoundException("Invalid serverId: " + serverId);
        }

        // Grant the fixed feign permission to the cached resource server.
        SecurityUser securityUser = (SecurityUser) resourceServer;
        securityUser.setAuthorities(Collections.singleton(
                new SecurityAuthority(FeignSecurityConfigurer.FEIGN_PERMISSION)));
        return securityUser;
    }

}
use std::process;

pub fn show_result(arg_weight: &[f64], arg_title: &[String]) {
    println!("\nResult:\n");

    if arg_weight.len() != arg_title.len() {
        println!("\nERROR !!!\n");
        println!("Length of (*arg_weight) != Length of (*arg_title)");
        println!("\nProgram halted !!!\n");
        process::exit(1);
    }

    println!("num\tweight\ttitle");
    for (i, (weight, title)) in arg_weight.iter().zip(arg_title).enumerate() {
        if title.len() > 80 {
            // Truncate overlong titles to their first 80 characters.
            let head: String = title.chars().take(80).collect();
            println!("{}\t{:.3}\t{} ...", i + 1, weight, head);
        } else {
            println!("{}\t{:.3}\t{}", i + 1, weight, title);
        }
    }
}
package tryan.inq.overhead;

public class QGameConstants {
    public static final int UNIT_DISTANCE = 25;
    public static final int DEF_ANIM_PRIORITY = 3;

    // Private so one cannot be created on accident!
    private QGameConstants() {}
}
Draft of Paper to Appear in a Festschrift For

0. In An Internalist Theory of Normative Grounds, Robert Audi provides what his title promises. His account is characteristically nuanced and ecumenical; it therefore constitutes an excellent basis for an appraisal, not merely ad hominem, of one kind of internalism. With admirable generality, Audi treats the normative grounds for both belief and action. For simplicity, this paper concentrates on his account of the justification of belief. Its arguments, if sound, extend to the justification of action too.

Audi explains what he means by normative in the case of belief: cognitive (epistemic) normativity is a matter of what ought to be believed, where the force of the ought is in part to attribute liability to criticism and negative (disapproving) attitudes toward the person(s) in question.
In a question-and-answer session held on the program’s official website, as well as another on its Twitter account, Florida Gators athletic director Jeremy Foley discussed a number of topics including how the recently-passed cost of attendance legislation will affect his program next season. “The cost of attendance that passed is obviously an opportunity for us to do something that we’ve all talked about for our student-athletes for a number of years now. In essence, it increases the value of their scholarship. There’s going to be a financial impact for us, a little over a million dollars,” he explained via the school’s website. “But it’s money well spent and certainly I think the new autonomy structure has allowed that to happen. It excites us that we had the flexibility to do something like that, and I’m sure there will be more changes in the future.” The cost of attendance (COA) legislation will pay out the difference between what student-athletes earn via scholarship and the average of what it costs for a student at a school to attend it for a full academic year, including tuition and applicable fees, room and board, books, supplies, transportation and other expenses. On average, the gap is estimated at $3,500 per athlete, though each school’s COA is different and therefore the amount of money paid to each school’s athletes will vary. Foley is not alone in estimating an expenditure of approximately $1 million for cost of attendance in 2015 with a number of athletic directors across the country projecting similar figures. Other topics Foley discussed with the school’s website… » On new head football coach Jim McElwain and his first Gators staff: “I’m very impressed with the staff that he has put together. I’m very impressed by how he did it, very thoughtful, very deliberate. He found pieces that fit, not only good coaches but good people and really good recruiters. … I think Mac’s got a plan and he is going to enact that plan thoughtfully and deliberately.” » On finally deciding to build an indoor practice facility: “The fact of the matter is that with the renovation of the O’Connell Center, what was our backup no longer existed. So you have no backup, in essence you’re telling your football program that you’ve got no place to go. Obviously you could not do that. I think when you look at some of the weather changes, especially the heat early in the year, this facility is just going to enhance our program and give our coaches more options. We’re excited about doing it. I think there is an opinion that we were dragging our feet on it, and I can understand that and accept that, but you know everything done here is done for a reason, done with a purpose, done with some thought behind it, and at the end of the day, we built the facility not only because we thought it would enhance our football program, maybe enhance recruiting, but our backup was gone. You certainly can’t have a program of this magnitude and not have a backup for inclement weather.” » On the struggling Florida basketball team: “My mother used to tell me that her father would go to the horse races and he always bet on the jockey, not the horse. I’ll bet on our jockey every single day. He’ll get this thing exactly the way he wants it, the way the university wants it, to what our fans expect. I don’t worry one second about that. 
He’s the best in the business and obviously we’re fortunate to have him.” Topics Foley touched on during his Twitter question-and-answer session… » Foley said no head coaching contract at Florida, including Will Muschamp’s, includes mitigation to reduce the amount of buyout owed if said coach is fired by the school. » On whether Gators football will adopt “more modernized” uniforms: “Not at this time.” » It has not been determined where the camera platforms will be placed in the redesigned Stephen C. O’Connell Center and whether Florida plans to move them across the arena to the alumni side in order to capture the Rowdy Reptiles during the entire game. “That is being discussed as part of the design,” he said. » Bringing WiFi to Ben Hill Griffin Stadium is “on the list for future improvements” but does not appear to be a likely addition for the 2015 season. » There are no plans to allow fans to tour the new indoor practice facility, as it was something that the Gators had not considered yet, but Foley plans to “review” whether to allow fans to get a look at the facility after it is built.