Barak and Ayalon to face each other in runoff elections for Labour leadership. Both Barak and Ayalon have warned that they will pull Labour out of Olmert's coalition unless the prime minister steps down because of the scathing inquiry into his handling of last year's war in Lebanon. Yossi Alpher, an Israeli commentator, said: "One way or another, it means in about a year there will be early elections." Barak spent nearly six years in a political wilderness after he was beaten by Ariel Sharon, also a former prime minister, in a 2001 election. Barak's short premiership collapsed amid failed efforts at making peace with Syria and the Palestinians. But Barak has played on his experience, saying on Monday: "I tell voters only two things: I tell them to think about who they want more in a time of war, and I tell them that only with me heading our team can we beat Netanyahu." Ayalon, who only entered parliament last year, said on Monday: "I think many people understand that we are, in fact, not just voting on the future of the Labour Party but to a very large extent on the future leadership of the state of Israel." With the possibility of a challenge to his leadership, Olmert could face three options - to resign, to try to form a new coalition with an ultra-Orthodox or a right-wing party, or call for early elections. About 104,000 party members were eligible to vote in Monday's contest.
Relationship Between Expectation of Death and Location of Death Varies by Race/Ethnicity. Background: Older black and Latino Americans are more likely than white Americans to die in the hospital. Whether ethnic differences in expectation of death account for this disparity is unknown. Objectives: To determine whether surviving family members' expectation of death has a differential association with site of death according to race or ethnicity. Methods: We conducted an analysis of decedents from the Health and Retirement Study, a nationally representative study of US older adults. Telephone surveys were conducted with family members for 5979 decedents (55% of decedents were women, 85% white, 9% black, and 6% Latino). The outcome of interest was death in the hospital; the predictor variable was race/ethnicity, and the intervening variable was expectation of death. Covariates included sociodemographics (gender, age, household net worth, educational attainment level, religion) and health factors (chronic conditions, symptoms, health-care utilization). Results: Decedents' race/ethnicity was statistically related to the expectation of death and death in the hospital. When death was not expected, whites and Latinos were more likely to die in the hospital than when death was expected (49% vs 29% for whites and 55% vs 37% for Latinos; P < .001). There was no difference in site of death according to the family's expectation of death among blacks. Conclusion: Expectation of death did not fully account for site of death and played a greater role among whites and Latinos than among black Americans. Discussing prognosis by itself is unlikely to address ethnic disparities; other factors appear to play an important role as well.
// codeview/src/main/java/io/github/kbiakov/codeview/classifier/CodeProcessor.java

package io.github.kbiakov.codeview.classifier;

import android.content.Context;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * @class CodeProcessor
 *
 * Provides an easy interface to the code classifier. It is responsible for
 * training the code classifier and for classifying code snippets. Both tasks
 * are delegated to the code classifier, but wrapped in a deliberately simple
 * interface to avoid misuse.
 *
 * @author <NAME>
 */
public class CodeProcessor {

    private static final String TAG = "CodeClassifier";

    private static volatile CodeProcessor sInstance;
    private static Future<CodeProcessor> sTrainingTaskFuture;

    /**
     * Thread-safe code processor getter.
     *
     * If the instance has not been created or trained yet, it performs the
     * necessary operations.
     *
     * @param context Context
     * @return Code processor instance
     */
    public static CodeProcessor getInstance(Context context) {
        if (notInstanceAvailable()) {
            synchronized (CodeProcessor.class) {
                if (notInstanceAvailable()) {
                    sInstance = new CodeProcessor(context);
                }
            }
        }
        return sInstance;
    }

    /**
     * Private (and only) constructor.
     *
     * Creating a code processor starts the code classifier training.
     *
     * @param context Context
     */
    private CodeProcessor(Context context) {
        CodeClassifier.INSTANCE.train(context);
    }

    /**
     * The code processor should be created once at start, but processor
     * creation does not guarantee that the classifier is available. An
     * untrained classifier is not ready to use and must be trained as soon
     * as possible.
     *
     * The main cases in which the code processor is unavailable are:
     * 1) the processor has not been created yet and the classifier is untrained;
     * 2) the processor was created, but an error occurred during training;
     * 3) the processor was created and training started, but has not finished.
     *
     * (The 3rd case is fine: it is a temporary unavailability while training
     * completes.)
     *
     * In the 3rd case the caller waits for training to finish to obtain the
     * code processor and can then use the classifier to detect the language
     * of a snippet (see below).
     *
     * @return Flag indicating that no classifier instance is available.
     */
    private static boolean notInstanceAvailable() {
        if (sInstance != null) {
            return false;
        }
        if (sTrainingTaskFuture == null) {
            // init() was never called, so no training task exists (case 1).
            return true;
        }
        try {
            // Blocks until the training task completes (case 3).
            sInstance = sTrainingTaskFuture.get();
            return false;
        } catch (InterruptedException | ExecutionException e) {
            // Training failed (case 2).
            e.printStackTrace();
            return true;
        }
    }

    /**
     * If the training task future exists, then classifier training was
     * started or has already finished and the classifier is ready to use.
     *
     * @return Whether the classifier was trained (or training was started).
     */
    public static boolean isTrained() {
        return sTrainingTaskFuture != null;
    }

    /**
     * Entry point for apps that use code classification. Called once at app
     * start. It creates the training task for the code classifier.
     *
     * @param context Context
     */
    public static void init(Context context) {
        if (sInstance == null && sTrainingTaskFuture == null) {
            final ExecutorService service = Executors.newSingleThreadExecutor();
            sTrainingTaskFuture = service.submit(new TrainingTask(context));
            service.shutdown();
        } else {
            throw new IllegalStateException("Attempt to train code classifier twice.\n"
                    + "It should be initialized once at start to perform training asynchronously.");
        }
    }

    /**
     * Creates a language-classification task for a code snippet.
     *
     * @param snippet Code snippet to classify.
     * @return Detected language, wrapped in a Future.
     */
    public Future<String> classify(String snippet) {
        final ExecutorService service = Executors.newSingleThreadExecutor();
        final Future<String> result = service.submit(new ClassifyingTask(snippet));
        service.shutdown();
        return result;
    }

    /**
     * @class TrainingTask
     *
     * Classifier training task.
     *
     * @author <NAME>
     */
    private static class TrainingTask implements Callable<CodeProcessor> {

        private final Context context;

        TrainingTask(Context context) {
            this.context = context;
        }

        @Override
        public CodeProcessor call() {
            return new CodeProcessor(context);
        }
    }

    /**
     * @class ClassifyingTask
     *
     * Language classification task for the presented code snippet.
     *
     * @author <NAME>
     */
    private static class ClassifyingTask implements Callable<String> {

        private final String snippet;

        ClassifyingTask(String snippet) {
            this.snippet = snippet;
        }

        @Override
        public String call() {
            return CodeClassifier.INSTANCE.classify(snippet);
        }
    }
}
Calibrating Climate Models Using Inverse Methods: Case Studies with HadAM3, HadAM3P and HadCM3

Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss-Newton line-search algorithm: a standard Gauss-Newton algorithm in which, in each iteration, all parameters were perturbed, and a randomised block-coordinate variant in which, in each iteration, a random subset of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. Several of the 14-parameter cases failed to converge, with the random variant in which 6 parameters were perturbed being most successful.
Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (third Hadley Centre Coupled model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss-Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest that the computational cost of the Gauss-Newton algorithm scales between P and P^2, where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed, and in which three out of seven parameters were randomly perturbed, produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.

The metrics used in traditional model tuning are often opaque, with the main approach being trial and error. Consequently, expensive person time is needed to calibrate or tune climate models. Methods that could automatically calibrate model parameters would allow easier development of parametrisations, objective discussion of the observed targets and more rapid development of climate models.
Such an approach would also facilitate uncertainty analysis and would improve understanding of the contribution of parametrisation compared to resolved dynamics in model properties, including model error. Tett et al. (2013b) (T13 from here on) outlined an approach to model parameter calibration by considering it as an inverse optimisation problem in which the aim is to find the parameter values that produce an atmospheric model with the smallest error relative to a predetermined set of weighted observations. T13 focused on only two observations, global mean outgoing longwave and reflected shortwave radiation, and modified only four parameters. They were able to calibrate the model parameters to several different observational targets. In this paper we further develop the approach taken by T13 to increase the number of observations and parameters used. We then couple some of the resulting atmospheric models to an ocean model to test whether the resulting coupled model is stable. Various approaches have been taken to optimising model parameter values. Golaz et al. hand-tuned the GFDL CM3 model to radiation balance by adjusting several parameters in the cloud scheme, finding a significant impact on aerosol forcing but not on greenhouse gas forcing or on "Cess" climate sensitivity. They found very large differences during the 20th century due to the perturbed impact of aerosols. Bellprat et al. (2012, 2015) generated a model emulator for three climate variables from a regional model. From this emulator, by Latin hypercube sampling, they found the parameter combinations that minimised error. Their earlier work focused on five parameters, while their more recent paper used eight parameters and considered North American and European regions. They found the calibrated model improved the simulation of summers in both regions. Williamson et al. used a combination of emulation and ruling out implausible observations to construct models.
They used four observational constraints: global average surface air temperature (SAT), Northern Hemisphere meridional temperature gradient, seasonal cycle of temperature in the Northern Hemisphere and global average precipitation. They found that SAT was the most important constraint. Later they included the strength of the Antarctic Circumpolar Current (ACC) in their analysis and found parameter combinations where the model had a good simulation of both the ACC and SAT (). Irvine et al. generated 200 variants of HadCM3 using a Latin hypercube experimental design and splitting each parameter range into 200 bins. They then ran the resulting coupled models and found that about 10 % were acceptable. Tomassini et al., using a low-resolution version of the MPI-ESM model, perturbed eight parameters randomly across their plausible range and generated coupled models with a broad range of global average temperatures. They then examined the different feedbacks and mechanisms for those feedbacks in their model, finding that four convective parameters related to convective mixing had strong impacts on both the mean tropical circulation and on climate sensitivity. Such brute force approaches become extremely expensive as the dimensionality of the problem increases, though the use of emulators may help. Attempts have been made using data assimilation techniques to calibrate parameters. Such systems simultaneously estimate the atmospheric state and the parameter values. Schirber et al. reported on a study in which they used that approach but found no improvement in the model climatology. Ruiz and Pulido used a similar algorithm and found an improvement in medium-range forecast skill but did not report on the impact on model climatology. Another approach is to use forecast error. Ollinaho et al. updated four parameter values and their covariances iteratively using a set of 3-day forecasts of ECHAM5 and found a modest reduction in forecast error. 
When they ran the model with observed sea surface temperature and sea ice they found a reduction in top-of-atmosphere flux errors. They followed up this study with one in which they minimised forecast errors in the total energy. They also applied the technique to the ECMWF forecasting system and found a modest change in parameter values and an increase in forecast skill in the tropics. The approach we consider is optimisation via direct evaluation of the model, something attempted by Jones et al. for a low-resolution version of HadCM3. Yang et al., building on Jackson et al., applied the SSAA algorithm to tune parameters in CAM5 to improve the simulation of the partitioning between convective and large-scale precipitation. Zou et al. applied a similar approach to an East Asian regional model by modifying seven parameters and optimising only mean precipitation. They found a significant improvement in both the rainfall pattern and the daily rainfall distribution. Here we update T13 to include a larger number of observations and parameters. The observations we use, like those of T13, are multi-annual, large-scale spatial averages. As before we continue to use a Gauss-Newton algorithm, but include a randomised block-coordinate variant in which, on each iteration, a random subset of the parameters is perturbed. Our objectives are as follows:

1. test how well a Gauss-Newton algorithm does in minimising error in the HadAM3 N48 model with 7 and 14 parameters and multiple observations;

2. test for equifinality, in which models with different parameter values have similar observed values (Beven and Freer, 2001);

3. see how coupled model variants of HadCM3, with the parameters taken from the optimisation, behave;

4. test these algorithms with the N96 HadAM3P model.

The remainder of this paper first describes the models, the optimisation method and the observational metrics used.
We next describe the results of the optimisation, the properties of the atmospheric models and how the coupled models behave. We discuss our results before concluding.

Methods

In this section we outline our methods. We first describe the two related atmospheric models we use. Next we outline the Gauss-Newton algorithm and a randomised block-coordinate variant of it, deal with the need to regularise matrices and describe how the algorithm terminates. We then describe the choices we made in parameter selection and parameter perturbation, as well as the observations and covariance matrices we used. Finally we describe how we evaluate the optimised configurations and estimate uncertainties in the parameter values.

Models

We use the N48 (3.75° × 2.5°) resolution configuration of HadAM3, which uses a 360-day calendar, driven with the same package of forcings used by T13. Simulations were run from 1 December 1998 to 1 March 2005 (6.25 years), and the period 1 March 2000 to 30 February 2005 was compared with observations. In addition we use the N96 (1.875° × 1.25°) configuration of HadAM3P with a similar package of forcings to that used in the N48 configuration. This model was run from 1 December 1999 to 1 March 2005 (5.25 years). We use the standard land-surface dataset rather than the time-varying dataset used in the N48 case, include both the direct and indirect effects of SO2 aerosols on clouds, and use, after interpolation, the same ozone dataset as in the N48 case. Some results from the default configuration are described in Tett et al. (2013a).

Gauss-Newton and line search

We build on the approach used by T13, which minimised an objective function given by the root mean square error in global average outgoing longwave radiation and reflected shortwave radiation. We extend this to a larger number of observations, taking account of both observational error and simulated internal variability.
As we focus on large-scale, multi-annual averages, we assume that both terms can be represented by multivariate Gaussian distributions characterised by covariance matrices C_O (observational error) and C_i (internal variability), respectively. If the model were perfect we would expect (S − O) ~ N(0, C), where C = C_O + 2 C_i. Therefore, the cost function F(p), depending on parameters p, that we minimise is

F(p) = sqrt( (S − O)^T C^-1 (S − O) / N ),

where N is the number of observations, S is the simulated observations, and O is the target observations. This requires that C is invertible and, if necessary, we regularise it (see below). Defining F(p) this way allows covariance between observations to be taken into account. For example, internal variability might generate a large correlation between total outgoing radiation and precipitation, and ignoring that covariance would give greater weight to configurations with small errors in outgoing radiation and precipitation than is justified. We also want to reduce the importance of observations with high uncertainty and, conversely, increase the weight of observations with small uncertainty. We follow Sexton and Murphy and generally use a crude estimate of observational error based on the difference between two different observational datasets. Our aim in this paper is the application of inverse methods to parameter calibration, not the production of good estimates of observational error; that is a matter for the groups that produce the observational datasets and so is beyond the scope of the work reported here. We estimate C_i from 100 simulations of the standard N48 HadAM3 model configuration. Estimating observational error is more difficult. For the radiation observations we use the fractional error estimates from Loeb et al. and apply them to each regional value. For other datasets we define the error as the difference between the default values and the equivalent from another observational dataset.
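As a concrete illustration, the cost function defined above can be sketched in a few lines of NumPy. The function name, array shapes and example values are assumptions for illustration, not the authors' code:

```python
import numpy as np

def cost_function(S, O, C_O, C_i):
    """Cost F(p): residual (S - O) weighted by the total covariance
    C = C_O + 2*C_i and normalised by the number of observations N."""
    C = C_O + 2.0 * C_i          # total covariance: observational error + internal variability
    r = S - O                    # simulated minus target observations
    N = len(O)
    return float(np.sqrt(r @ np.linalg.solve(C, r) / N))

# A perfect model (S == O) has zero cost; with C equal to the identity
# the cost reduces to the ordinary RMS error.
S = np.array([240.0, 100.0, 3.0])   # hypothetical OLR, RSR, precipitation values
O = np.array([240.0, 100.0, 3.0])
print(cost_function(S, O, np.eye(3), np.zeros((3, 3))))  # 0.0
```

Using `np.linalg.solve` rather than explicitly inverting C mirrors the requirement in the text that C be invertible (and, if necessary, regularised) without forming C^-1 directly.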
We explored applying a covariance structure to the observational error but found this did not work well (see discussion), nor was there strong objective justification for any particular covariance structure.

The Gauss-Newton algorithm is an iterative two-step algorithm. The first step is to compute the Jacobian J:

J_ij = ∂S_i(p) / ∂p_j,

where S_i(p) is the i-th simulated observation when the model is run with the vector of parameters p, and p_j is the j-th parameter. We approximate this using finite differences (Nocedal and Wright, 2006):

J_ij ≈ ( S_i(p + Δp_j e_j) − S_i(p) ) / Δp_j,

with Δp_j being a suitably small perturbation to the j-th parameter and e_j the j-th coordinate vector, whose j-th element is 1 and all other elements 0 (e.g. (0, ..., 0, 1, 0, ..., 0)). In order to avoid using parameter values outside the expert range, we chose, at each iteration, the sign of Δp_j so as to perturb towards the middle of the allowed range. Note that Δp_j is ideally chosen such that the response in the Jacobian is above internal variability and not, as is common, set close to machine precision. The choice of Δp_j follows our noise estimates, and we use ideas from implicit filtering techniques for derivative-free optimisation (Nocedal and Wright, 2006, Sect. 9).

Having computed the Jacobian, the algorithm proceeds by computing the line-search vector s along which to proceed to minimise the cost function F, through solving the linear problem

H s = J^T C^-1 (O − S(p)),

where H = J^T C^-1 J is the finite-difference approximation to the Hessian matrix. Having computed the line-search vector s, we then evaluate F^2(p) at several steps along it (the "line search"). The values, and number, of the line-search steps are defined when we describe the algorithms we trial later in the paper. If any of the chosen line-search parameter values are outside the expert-defined plausible range, we project them onto the appropriate boundary. The minimum value of F^2 from the line search is used as the starting point for the next iteration.
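The two-step iteration described above (finite-difference Jacobian, then a line search along the Gauss-Newton direction) can be sketched on a toy problem. This is a minimal illustration under assumed names and a deterministic toy model, not the climate-model implementation:

```python
import numpy as np

def gauss_newton_step(model, p, dp, O, C, steps=(0.3, 0.7, 1.0)):
    """One iteration: forward-difference Jacobian, solve H s = J^T C^-1 (O - S),
    then evaluate the squared cost at several points along s and keep the best."""
    S0 = model(p)
    J = np.empty((len(O), len(p)))
    for j in range(len(p)):
        e = np.zeros_like(p)
        e[j] = dp[j]                              # perturb one parameter at a time
        J[:, j] = (model(p + e) - S0) / dp[j]     # forward-difference column of J
    H = J.T @ np.linalg.solve(C, J)               # approximate Hessian J^T C^-1 J
    s = np.linalg.solve(H, J.T @ np.linalg.solve(C, O - S0))  # line-search direction

    def sq_cost(q):                               # F^2 evaluated during the line search
        r = model(q) - O
        return r @ np.linalg.solve(C, r)

    return min((p + a * s for a in steps), key=sq_cost)

# Linear toy model: a single Gauss-Newton step recovers the target exactly.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
model = lambda q: A @ q
O = np.array([4.0, 9.0])                          # produced by parameters (2, 3)
p1 = gauss_newton_step(model, np.zeros(2), np.array([0.1, 0.1]), O, np.eye(2))
print(np.round(p1, 6))                            # [2. 3.]
```

For a linear model the forward difference is exact, so the full step (a = 1.0) lands on the minimum; for the chaotic climate model, noise in the evaluations is what motivates trying several step lengths and keeping the smallest cost.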
The Gauss-Newton algorithm can be modified to include an additional constraint by modifying the cost function to

F_c^2(p) = F^2(p) + μ (S_c − O_c)^2,

where O_c and S_c are the values we wish to constrain, and μ is a user choice to be decided after experimentation. This can be rewritten in the same form as the unconstrained cost function. Building on ideas of Nesterov and of Kim and Lee, we also tried a randomised block-coordinate version of Gauss-Newton in which, in each iteration, P_rand different parameters were chosen at random and used in both the Gauss-Newton and line-search steps. Non-perturbed parameters kept the values from the previous iteration.

Scaling and regularisation

Our algorithm could suffer from ill-conditioned matrices in two places. First, if the Hessian matrix is singular or ill-conditioned, defined as having a condition number greater than 10^10, we use Tikhonov regularisation (Nocedal and Wright, 2006), in which we add a small multiple of the identity matrix to the Hessian. We iteratively increase the identity-matrix scaling by a factor of 10, starting from 10^-7, until the regularised Hessian is no longer ill-conditioned or the scaling reaches 10^-2. In the latter case our algorithm terminates with an error. This regularisation introduces a scale dependence into the algorithm. Each time we compute the Jacobian, we scale all parameters whose magnitudes are less than 1 so that they have magnitude 1, and invert this scaling when computing the line-search direction. Secondly, we also regularise C. Rather than adding the identity matrix, we scale the diagonal of the covariance matrix by increasing factors of 2 until the condition number of the entire matrix is less than 5 times the condition number of the diagonal matrix. We apply this regularisation after scaling all values and before computing the Jacobian. For the bulk of our work C is well conditioned, so this regularisation is not applied. Algorithm termination We need criteria to terminate the algorithm.
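The iterative Tikhonov step described in the scaling-and-regularisation discussion above maps directly to a short loop. The thresholds (condition number 10^10, scaling from 10^-7 to 10^-2 in factors of 10) come from the text; the function name is an assumption:

```python
import numpy as np

def regularise_hessian(H, cond_max=1e10, scale0=1e-7, scale_max=1e-2):
    """Add lam*I to H, increasing lam by factors of 10 from scale0,
    until the condition number drops below cond_max; fail past scale_max."""
    n = H.shape[0]
    lam = 0.0
    while np.linalg.cond(H + lam * np.eye(n)) > cond_max:
        lam = scale0 if lam == 0.0 else lam * 10.0
        if lam > scale_max:
            # Mirrors the text: the algorithm terminates with an error here.
            raise RuntimeError("Hessian could not be regularised")
    return H + lam * np.eye(n)

# A singular (rank-1) Hessian gains a small diagonal; a well-conditioned
# matrix passes through unchanged.
H_sing = np.array([[1.0, 1.0], [1.0, 1.0]])
H_reg = regularise_hessian(H_sing)
print(np.linalg.cond(H_reg) < 1e10)  # True
```

Note that this is why the text says the regularisation introduces a scale dependence: adding lam*I treats all parameters equally, which is only sensible after the parameters have been rescaled to comparable magnitudes.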
Classical Gauss-Newton terminates when sufficiently close to a stationary point of the cost function F(p), i.e. when F stops reducing (Nocedal and Wright, 2006). However, the climate is a chaotic system, which introduces noise into the model evaluations. The algorithm may therefore continue to iterate even when it is not making significant progress, or terminate because this noise masks an improvement. The algorithm terminates at iteration k when one or more of the following holds:

1. F(p_{k-1}) − F(p_k) < c, where p_k are the parameter values at iteration k; that is, F(p) has not reduced by a critical amount c.

2. (S_k − S_{k-1})^T C^-1 (S_k − S_{k-1}) < c_i, where S_k is the simulated observations at iteration k and c_i is a critical value from a χ² distribution with N degrees of freedom. This checks that the new and previous simulated observations (S) are statistically similar.

3. (S_k − O)^T C^-1 (S_k − O) < c_o, where c_o is a critical value from a χ² distribution with N degrees of freedom. This checks that the current simulated observations are in statistical agreement with the target observations.

In our implementation c, c_i and c_o are all choices to be made in the algorithm. For the random variant of the algorithm, if the cost function did not reduce by c, the algorithm was restarted from the previous best parameter set by rerunning that case with another set of random perturbations. If the error again failed to reduce by c, the algorithm terminated. This means that the random variant requires at least two iterations before it terminates. This approach results in some duplicate simulations, although, because of model chaos, the simulated observations differ.

Table 1. Parameters, default values, allowed ranges and perturbations. Shown for each parameter are the component of HadAM3 it belongs to, the default value, the allowed range, the perturbations used in the HadAM3-7 cases (1) and the perturbations used in all HadAM3-14 and HadAM3P cases (2). For more information on the parameters see Yamazaki et al.
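The three termination tests described above can be combined into a single check. This is a hedged sketch: the function name is an assumption, and the critical values c, c_i and c_o are passed in explicitly since in the paper they are algorithm choices:

```python
import numpy as np

def should_terminate(F_k, F_km1, S_k, S_km1, O, C, c, c_i, c_o):
    """Return True if any of the three termination criteria fires."""
    d_prev = S_k - S_km1
    d_obs = S_k - O
    no_progress = (F_km1 - F_k) < c                          # criterion 1: F not reduced by c
    same_state = d_prev @ np.linalg.solve(C, d_prev) < c_i   # criterion 2: S_k ~ S_{k-1}
    fits_obs = d_obs @ np.linalg.solve(C, d_obs) < c_o       # criterion 3: S_k ~ O
    return bool(no_progress or same_state or fits_obs)

# Example: clear progress, clearly distinct states, still far from the
# observations -> none of the criteria fires and iteration continues.
C = np.eye(2)
stop = should_terminate(F_k=1.0, F_km1=5.0,
                        S_k=np.array([0.0, 0.0]), S_km1=np.array([10.0, 10.0]),
                        O=np.array([50.0, 50.0]), C=C, c=0.1, c_i=5.99, c_o=5.99)
print(stop)  # False
```

Here 5.99 is the familiar 95% χ² critical value for 2 degrees of freedom, used only to make the example concrete.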
Some inefficiency results from this, which could be reduced by keeping track of all cases that have been run and not rerunning them. For ease of implementation we did not do this; future work could implement such an optimisation.

Parameter selection and step size

We used up to 14 parameters from the analysis of Yamazaki et al. but restricted our analysis to parameters that vary continuously. Some of those parameters are "meta-parameters" in that changes in one affect other parameters. We used the same algorithms as Williamson et al. to derive parameters from the meta-parameters. Ranges of allowed parameter values were taken from Murphy et al. We carried out three cases:

1. We adjusted seven parameters using HadAM3. Step sizes for the Jacobian calculation were taken from T13 for ENTCOEFF, VF1, CT and RHCRIT. For the remaining three parameters we used 10% of their range.

2. We adjusted 14 parameters, again using HadAM3. To compute the step size for the additional parameters, we set each value to whichever of the upper or lower range limits was most different from the standard value. Then, for all 14 parameters, we computed a signal-to-noise measure d_i of the resulting change in simulated observations. Where d_i was greater than 100 we reduced Δp_i by a factor of approximately sqrt(d_i / 100). Where d_i was less than 100 we increased Δp_i, limiting the increase to 50% of the allowed range, so that d_i would, assuming linearity, be greater than 100.

3. We adjusted 7 and 13 parameters using HadAM3P, with the same step sizes as in the 14-parameter HadAM3 cases.

Parameters, ranges, default values and step sizes for the Jacobian computations are shown in Table 1.

Observations, covariance matrices and optimisation choices

Here we describe the choices we made in our optimisation study. We focus on large-scale properties of the climate system and so consider the northern hemispheric extra-tropical (> 30° N), tropical (30° S to 30° N) and southern hemispheric extra-tropical (< 30° S) means.
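The step-size rescaling rule used for the 14-parameter cases above can be sketched as follows. The exact definition of the signal-to-noise measure d_i is not reproduced here, so it is an input; the target of 100 and the 50%-of-range cap come from the text, and the function name is an assumption:

```python
import numpy as np

def rescale_steps(dp, d, allowed_range, target=100.0, max_frac=0.5):
    """Rescale Jacobian step sizes dp so that each signal-to-noise measure
    d[i] would, assuming linearity, reach the target of 100."""
    dp = np.asarray(dp, dtype=float).copy()
    for i, di in enumerate(d):
        scale = np.sqrt(di / target)
        if di > target:
            dp[i] /= scale                                   # shrink an over-large step
        else:
            # grow the step, but cap the result at 50% of the allowed range
            dp[i] = min(dp[i] / scale, max_frac * allowed_range[i])
    return dp

# d = 400 (4x target): the step shrinks by sqrt(4) = 2.
# d = 25 (1/4 target): the step grows by 2, staying under the cap.
print(rescale_steps([1.0, 1.0], [400.0, 25.0], [10.0, 10.0]))  # [0.5 2. ]
```

Since the simulated response scales linearly with the step for a locally linear model, and d_i is quadratic in the response, multiplying the step by sqrt(100 / d_i) is exactly what moves d_i to the target of 100.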
We do this for seven variables, all as averages over the 5-year period March 2000 to February 2005 (inclusive), as follows.

Land air temperature (LAT): Land temperature affects simulated biology, evaporation, snow and other important parts of the Earth system, and changes in it are a significant impact of climate change. We use the observed CRU TS vn3.21 dataset and the HadAM3 N48 land-sea mask to determine land, and restrict the data to north of 60° S. For a second estimate of LAT we use ERA-Interim data.

Land precipitation (LP): This is a key measure of the hydrological cycle. We again use the CRU TS vn3.21 dataset and the HadAM3 N48 land-sea mask, restricted to data north of 60° S. For a second estimate we use the vn6 GPCC dataset. All simulated and observed values were converted to millimetres per day (mm day^-1).

Mean sea level pressure (SLP): We use this as a measure of the planetary-scale circulation. To correct for model mass loss we used sea-level pressure differences between the global-average value and the extra-tropical Northern Hemisphere and tropics. We did not include the southern extra-tropics, as that provided no new information and consequently made the covariance matrix uninvertible. We used values from ERA-Interim as observations and, for a second estimate, the NCEP reanalysis. All observations and simulations were converted to hPa.

Reflected shortwave radiation (RSR): This measures the reflectivity of the Earth and is driven by clouds, snow, sea ice and other surface properties. We compute values, and uncertainties, from the vn2.8 EBAF dataset.

Outgoing longwave radiation (OLR): This is a measure of the outgoing thermal radiation from the Earth and is driven by atmospheric temperatures and clouds. We also use the vn2.8 EBAF dataset.

Temperature at 500 hPa (T500): This gives an estimate of the temperature lapse rate.
We use ERA-Interim data as observations and, for a second estimate, the NCEP reanalysis.

Relative humidity at 500 hPa (q500): This provides a measure of mid-troposphere water vapour, which is an important greenhouse gas. We also estimate values from ERA-Interim and use the NCEP reanalysis as a second estimate.

See Table 2 for the target values used in all our studies. We need to estimate a total covariance matrix (C) and a covariance matrix for internal variability (C_i). We estimated the observational uncertainty for each regional OLR and RSR value from the fractional uncertainties in Loeb et al. For other datasets we estimated the standard deviation (SD) as the difference between two different datasets. We assumed no correlation in observational error, so C_O is diagonal. The diagonal values of C_O were significantly larger than the equivalent values of C_i (internal variability), so observational error is the dominant term in the total error-covariance matrix C. We also applied a constraint (see Sect. 2.6) in order to generate atmospheric models with a net radiative flux close to the observed value; this double-counts the OLR and RSR observations (or at least their sum). After some experimentation we settled on a value of 0.01 for the constraint weight, corresponding to an observational error of 0.015 W m^-2, close to the observational error of about 0.2 W m^-2 that Tett et al. (2013c) estimated from the difference of observational datasets. When producing the datasets for the 7-parameter cases we made two errors in the computation of C: first, we computed it as C_i + C_O, and secondly, we mis-specified the three precipitation components. Given that the focus of our work was on optimisation rather than the exact definition of the cost function, we do not believe these errors are very significant.

Evaluation

We evaluate the inverse approach in several different ways.
For the algorithm we consider the expected number of iterations, evaluations and final error, following the approach of T13 of repeatedly running the Gauss-Newton algorithm after it failed until convergence. This gives the expected number of model evaluations (E):

E = E_c + E_f f / (1 − f),

where E_c, E_f and f are, respectively, the mean number of evaluations (or simulations) for studies that were comparable to, or better than, the standard configuration, the mean number of evaluations for studies that failed, and the fraction that failed. The expected number of iterations is computed similarly, except that iterations rather than simulations are used. The line-search component of the algorithm has a selection effect, as it takes the parameter combination that produced the smallest cost function. Because chaos in the model produces pseudo-random noise, the smallest cost-function values may have arisen by chance. To avoid this effect, and to examine the properties of the resulting models, we take the final optimised parameter sets and for each one run an ensemble of two simulations from December 1998 to April 2010. Each simulation was started from the same initial state but with a different small perturbation. We compare results of these independent simulations for 2000-2005 with the standard configuration and with each other, looking for evidence of equifinality (Beven and Freer, 2001), in which different parameter combinations produce models that appear similar. For greater out-of-sample comparison we compare differences between the 2005-2010 and 2000-2005 periods from the observations we use and from the standard and independent optimised simulations. For the HadAM3 cases we also carry out 20-year simulations of HadCM3 () using the converged parameter sets.
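To make the line-search selection effect concrete, here is a minimal sketch of a Gauss-Newton-style loop with a line search over several scalings of the search vector. The quadratic toy cost, the exact search direction and all values are illustrative stand-ins for the noisy, expensive climate-model evaluation.

```python
import numpy as np

def cost(p):
    """Toy stand-in for the cost function F(p)."""
    return float(np.sum((p - 1.0) ** 2))

def line_search(p, direction, scalings=(1.0, 0.7, 0.3)):
    """Evaluate several scalings of the search vector; keep the cheapest."""
    trials = [p + s * direction for s in scalings]
    return min(trials, key=cost)

p = np.array([4.0, -2.0])   # extreme initial parameter choice (toy)
c = 0.0                     # required cost reduction to keep iterating
for _ in range(20):
    direction = -(p - 1.0)  # exact Gauss-Newton step for this toy cost
    p_new = line_search(p, direction)
    if cost(p) - cost(p_new) <= c:
        break               # failed to make progress: terminate
    p = p_new
print(np.allclose(p, 1.0))  # the loop finds the minimum at p = (1, 1)
```

Because the smallest of several noisy evaluations is kept, the reported minimum is biased low, which is the selection effect the independent ensemble simulations are designed to remove.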
We compare results from the last 10 years of the 20-year simulation with the standard control simulation of HadCM3, all started from the same initial state corresponding to about 5000 years of spin-up.

Parameter covariance

Assuming that the parameter perturbations are small, we can compute the covariance matrix for the parameter error (C_p). We do this following Nocedal and Wright by a linear transformation of the total observational covariance matrix:

C_p = P C P^T, where P = (J^T C^−1 J)^−1 J^T C^−1

is a transformation matrix. From these parameter error covariance matrices we can compute a distance between two parameter sets (p_i and p_j) as follows:

d²_ij = (p_i − p_j)^T (C_p_i + C_p_j)^−1 (p_i − p_j),

where C_p_i and C_p_j are the parameter error covariance matrices for sets i and j, respectively. d²_ij is roughly χ²-distributed, though given the crudeness of our observational error estimates we err on the conservative side: we claim that the parameter sets are different if d²_ij > 100.

Results

In this section we present our results. We tried several different algorithms using the HadAM3 and HadAM3P atmospheric models. We first present numerical results on the convergence behaviour of those algorithms, then compare some aspects of the climatologies of the modified models with the standard model. Finally, we report on results of variants of the coupled atmosphere-ocean HadCM3 model that use the optimal parameter sets from the HadAM3 test cases.

Atmospheric model convergence

We carried out several case studies. The first was one in which we perturbed seven parameters using the Gauss-Newton algorithm. Using 14 parameters, we tested the Gauss-Newton algorithm and two random-parameter variants. Finally, we tested three algorithms using the HadAM3P model configuration. In no case did the algorithm terminate because the cost function was small. Given the crudeness of the observational covariance used in our cost function, we do not draw any inference from this.
That would require a much better estimate of observational error than we made. Instead we take the pragmatic view that a perturbed model is comparable to (or substantially better than) the standard configuration, in the simulation of the observations we used, if its cost function is less than 120 % of the standard model's cost function. We stress that this is a subjective choice that we made.

HadAM3 7-parameter case

For the 7-parameter (HadAM3-7) trials, we generated 12 random initial parameter choices by selecting values from their extreme limits (Table 3). For this algorithm we tried five line-search evaluations at scalings of 1.0, 0.7, 0.3, 0.1 and 0.01 of the search vector and required F(p) to decrease to keep iterating (i.e. c = 0). Two cases failed in the first iteration with a model error; the remaining 10 cases terminated when they failed to make progress. All those 10 had cost values similar to the default model's value of 5.0 (Fig. 1a). These cases took between 3 and 12 iterations, requiring 37 to 145 model evaluations, to terminate. As in our earlier study (T13), the cost function reduces rapidly over the first one to two iterations, with slow reduction after that (Fig. 1a). We carried out five line searches partly to test whether any of the scalings on the search vector were preferred. We found no strongly preferred scaling value (Table 4). In the rest of the paper we use scalings of 1, 0.7 and 0.3 on the search vector.

HadAM3 14-parameter cases

We trialled three related algorithms to perturb 14 parameters. The algorithms we tested were the standard Gauss-Newton algorithm (HadAM3-14) and two variants with random perturbations: in one we perturbed six random parameters (HadAM3-14r6) and in the other eight (HadAM3-14r8). For each algorithm we carried out five studies, each started from the same random extreme parameter choices (Table 3). As described above, we corrected the error in the computation of C and adjusted the parameter perturbations (Table 1).
For the random variants we required the cost function to reduce by 0.2 to continue iterating. Many of the simulations failed due to being marginally unstable, in which case we perturbed parameters by about 1 part in 1000 and reran that case. An operational system would instead restart the model from a slightly perturbed earlier state and run past the failure point. Unlike the HadAM3-7 cases, the HadAM3-14 cases did not all produce cost-function values comparable to the default model (Fig. 1b), with three cases failing and two succeeding. The successful cases took between four and six iterations (evaluations), with the unsuccessful cases taking one to four iterations. Neither of the successful cases is obviously better than the standard configuration. Next we turn to the HadAM3-14r6 cases. This algorithm performed well, with four out of the five cases succeeding, taking between 6 and 9 iterations (evaluations). Three of the cases had cost functions less than the standard configuration's, though not substantially so (Fig. 1b). In contrast, the HadAM3-14r8 algorithm performed poorly, with only one case having a cost function comparable with the standard configuration. This case took 5 iterations (evaluations) to terminate. The unsuccessful cases took four iterations to terminate.

HadAM3P cases

The HadAM3P cases differ from the standard configuration not only in increased resolution but also in the addition of a cloud-anvil parametrisation and the indirect effects of aerosols on cloud optical properties (). One approach to model development would be to take the parameters from the previous model version and then re-calibrate them using inverse methods with the new model. We tested three algorithms, with all cases starting from the default HadAM3 parameters. Our comparison case is the default HadAM3P configuration.
Unless stated otherwise, all studies used the same choices of covariance matrices, observations, parameter perturbations and other settings as the HadAM3 14-parameter studies (Table 1). So, for example, at each iteration the cost function would need to reduce by 0.2 for the algorithm to continue. The three algorithms were as follows:

HadAM3P-13r6. The diffusion parameter was kept at its default HadAM3P value but all remaining 13 parameters were changed, with 6 chosen at random in each iteration.

HadAM3P-7. The same parameters as in the HadAM3 seven-parameter cases were perturbed, and termination occurred immediately if the cost function did not decrease by 0.2.

HadAM3P-7r3. As HadAM3P-13r6 but with, at each iteration, three parameters, of the seven used in the seven-parameter HadAM3 case, perturbed at random.

The standard configuration of HadAM3P (Fig. 1c) is substantially worse, using our metric, than the standard HadAM3 configuration (Fig. 1b). Starting from the standard parameters, the cost function reduces less than for the HadAM3 cases, which all started from extreme parameter choices. The HadAM3P-7 and HadAM3P-7r3 cases produced configurations comparable with the standard HadAM3P model. The HadAM3P-7r3 study took 5 iterations with 31 evaluations; the HadAM3P-7 case took 3 iterations, also needing 31 evaluations of the model. The HadAM3P-13r6 case failed to converge, taking 3 iterations to fail.

Algorithm performance

For each algorithm we tested using HadAM3 we characterised its performance using Eq. (). For each of the three HadAM3P algorithms we carried out only one case, so algorithm performance is evaluated from that single case. As discussed earlier, there is a potential selection effect in that from the line-search evaluations we chose the case with minimum error. To examine this effect we compared the average cost from the optimised cases with the independent runs and with the cost values for the standard cases.
Note that the independent and optimised cases have identical parameter sets, but the 14- and 7-parameter algorithms use slightly different cost functions. The mean cost from the independent simulations is, except for the HadAM3P-7r3 algorithm, larger than the mean cost for the optimised simulations (Table 5). The mean difference between the independent and best optimisation depends on the algorithm but ranges from 0.2 to 0.6 (5 to 15 %) of the cost function for the standard configurations. The expected number of iterations increases from the HadAM3-7 to the HadAM3-14 algorithm but does not double. Our earlier work (T13) found that optimisation using two observations and four parameters required a median of between three and five iterations. This suggests that the cost of increasing the number of parameters is not excessive, with the iteration count increasing less than linearly in P (the number of parameters). As each iteration needs P model evaluations, the total number of evaluations likely increases between P and P². The six-random-parameter (HadAM3-14r6) algorithm worked well, with an average cost function slightly better than the standard configuration's (Table 5). Though requiring 60 % more iterations than the seven-parameter case, it needs only 20 % more expected evaluations for twice as many parameters. Random selection of 6 parameters requires many fewer expected evaluations than perturbing all 14 parameters on each iteration. However, perturbing 8 parameters at random performs very much worse than perturbing 6 at random or all 14 parameters. We explore possible reasons for this later. For the HadAM3P cases the HadAM3P-13r6 algorithm failed, while both the HadAM3P-7r3 and HadAM3P-7 algorithms succeeded. To summarise this subsection: we find that a relatively simple Gauss-Newton algorithm works well to automatically calibrate parameters in an atmospheric model.
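The restart accounting behind the expected-evaluation numbers in this subsection can be sketched as follows: if a fraction f of studies fail, the expected number of failed attempts before a success is f / (1 − f). The numbers below are illustrative, not the table values.

```python
def expected_evaluations(E_c, E_f, f):
    """Expected model evaluations when failed studies are rerun until
    one converges: E = E_c + E_f * f / (1 - f)."""
    if not 0.0 <= f < 1.0:
        raise ValueError("failure fraction must be in [0, 1)")
    return E_c + E_f * f / (1.0 - f)

# e.g. successful studies average 60 evaluations, failed ones 40,
# and 20 % of studies fail:
print(expected_evaluations(60.0, 40.0, 0.2))  # 60 + 40 * 0.25 = 70.0
```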
The algorithm did not reduce the error to zero and so terminated when it stopped improving. We found that the expected number of iterations increases, though less than linearly, as we increased the number of parameters. Random selection of 6 out of 14 parameters worked well, though random selection of 8 from 14 worked poorly. We were also able to reduce the cost function of the HadAM3P model relative to the standard configuration of that model.

Atmospheric model evaluation

We now investigate the behaviour of the optimised HadAM3 and HadAM3P models by first focusing on the optimal parameters, then examining the simulation of the target observations in the independent simulations, before comparing the model fields of key variables with observations. We aim to test for equifinality (Beven and Freer, 2001), where different parameter sets can lead to very similar outputs. This could arise from multiple minima or from a single broad, flat minimum. We normalise the parameter values by their expert-based plausible ranges, with 0 being the minimum and 1 the maximum. We find for both the 7- and 14-parameter HadAM3 case studies that many of the parameters have a broad range of optimal values (Fig. 2). For each parameter we test whether the distribution of optimal values differs significantly from a 0-1 uniform distribution using a Kolmogorov-Smirnov test. For 3 parameters (RHCRIT, ENTCOEFF and CW_LAND) in the 7-parameter case we can reject this null hypothesis. For the 14-parameter cases we can reject the null hypothesis of a uniform distribution for the same 3 parameters, and a further 5 parameters have distributions inconsistent with a uniform distribution. These results suggest that minimising the cost function does provide a weak constraint on some individual parameters. Using Eqs. () and () we can test whether the optimised 7-parameter values are within parameter error of one another.
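A numpy sketch of that test, assuming the transformation P = (J^T C^−1 J)^−1 J^T C^−1, the propagated covariance C_p = P C P^T and the squared distance described earlier; the Jacobian and covariance here are toy values, not the model's.

```python
import numpy as np

def param_covariance(J, C):
    """Propagate observational covariance C to parameter space."""
    Cinv = np.linalg.inv(C)
    P = np.linalg.inv(J.T @ Cinv @ J) @ J.T @ Cinv  # transformation matrix
    return P @ C @ P.T                              # C_p

def dist2(p_i, p_j, Cp_i, Cp_j):
    """Squared distance between parameter sets; roughly chi-squared."""
    d = p_i - p_j
    return float(d @ np.linalg.inv(Cp_i + Cp_j) @ d)

rng = np.random.default_rng(0)
J = rng.normal(size=(10, 3))  # 10 observations, 3 parameters (toy)
C = np.eye(10)                # unit observational covariance (toy)
Cp = param_covariance(J, C)
d2 = dist2(np.zeros(3), np.ones(3), Cp, Cp)
print(round(d2, 2))           # positive; compared against the threshold of 100
```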
We compute, for each optimised parameter set, the Jacobian from the final iteration and use this to compute squared distances between all 10 parameter sets. We find that the minimum value of this is 1.7 × 10¹⁰. It may be that the Jacobian has significant noise contamination, so we repeat the calculations with the mean parameter error covariance and the mean Jacobian. We find minimum squared distances of 10⁹ and 3 × 10⁸, respectively. This suggests that the 10 parameter sets found through optimisation are all significantly different from one another. We now consider how the independent simulations behave for the successfully optimised HadAM3-7, HadAM3-14 and HadAM3P parameter sets. These, to remind the reader, are two simulations run with the same parameter set as the successful optimised case. All model-observation differences are normalised by the diagonal elements of the covariance matrix, which is dominated by our crude estimate of observational error. For the HadAM3 7- and 14-parameter cases the optimised simulations are, for many target observations, similar to the standard configuration (Fig. 3), with little scatter across the best cases. The 14-parameter cases have larger scatter than the 7-parameter cases, suggesting the additional parameters lead to more ways to produce an optimised model. The medians are generally, though not always, a small improvement (closer to zero) on the standard cases. However, for the optimised and standard parameter sets several simulated observations are outside the ±2σ uncertainty range, suggesting that further model improvement would need better representation of processes, either through new parametrisations or higher resolution. Reflected shortwave radiation biases show the greatest variation across the optimised cases, with Northern Hemisphere extra-tropical land air temperature and tropical relative humidity at 500 hPa also showing large variation. We now turn to the two optimised HadAM3P cases.
These configurations have, like the standard HadAM3P, smaller biases in land air temperature across the three large regions we consider. This is particularly so in the northern hemispheric extra-tropics, suggesting that enhanced resolution improves this particular observation. However, this model has a much worse simulation of precipitation in the tropics, even with tuning, than does the HadAM3 case. Optimising the parameters does reduce biases in the HadAM3P model, but not enough to support the claim that it is better than its lower-resolution and computationally cheaper HadAM3 cousin. Comparison of the optimised cases with the initial extreme random parameter choices gives a sense of how important variation in the parameters is for those observational biases. One thing that stands out is that large-scale biases in the tropics (Fig. 4) are sensitive to parameter values. In contrast, biases in extra-tropical relative humidity at 500 hPa are insensitive to changes in parameter values, suggesting this is driven by the large-scale resolved dynamics rather than the parametrisations. In the extra-tropics, biases in RSR and OLR are the most sensitive to parameter variation, with temperature at 500 hPa, MSLP and northern hemispheric precipitation being least sensitive. This suggests that the behaviour of these latter variables is mainly driven by the large-scale resolved dynamics rather than the parametrisations. We now examine how the bias changes when we consider a period outside that used to calibrate the model. Here we compare changes in bias between March 2005-February 2010 and March 2000-February 2005. We normalise by the expected internal variability. For most observations and optimised configurations the bias does not significantly change between the two periods (Fig. 5), with the standard configurations and optimised cases behaving similarly.
However, the extra-tropical relative humidity shows significant changes in bias between the two periods, with all simulations showing a significant increase in bias. As all models behave similarly, this suggests either a lack of homogenisation in the ERA-Interim reanalysis or some systematic bias in all models. So far we have focused on large-scale biases. We use Taylor diagrams to examine how fields from the independent simulations compare with the observations. We focus on the same fields and observational datasets as those used to compute the biases described above. Taylor diagrams summarise field similarity by computing field correlations and centred field SDs. We use the normalised variant, in which the centred field SDs are scaled by the equivalent values from the observed field. This allows us to compare fields with different units. We find that for land air temperature, 500 hPa temperature and outgoing longwave radiation there is little variation in the location on the Taylor diagram (Fig. 6a and b). For SLP patterns the scatter does not appear much greater than would be expected by chance for both HadAM3 and HadAM3P. Precipitation is generally slightly worse for the HadAM3 optimised cases than the standard configuration, with spread to smaller correlations and larger RMS differences. For the HadAM3P configuration the optimisation slightly improves the spatial patterns of precipitation (Fig. 6a). For RSR, pattern correlations and centred RMS differences show the largest spread across the variables we consider, with some of the seven-parameter optimised cases an improvement on the standard configuration. For HadAM3P the centred RSR patterns are worse than in the standard HadAM3P case. The optimised HadAM3 cases for relative humidity at 500 hPa scatter around the standard cases, with some better and some worse, though as with other variables the differences are small. Overall, the HadAM3 optimised and standard values are very similar.
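The two quantities a Taylor diagram plots, the pattern correlation and the centred SD normalised by the observed field's, can be computed as below. The synthetic "fields" stand in for model and observed maps, and area weighting is omitted for brevity.

```python
import numpy as np

def taylor_stats(model, obs):
    """Pattern correlation and normalised centred standard deviation."""
    m = model - model.mean()   # centre both fields
    o = obs - obs.mean()
    corr = float(np.sum(m * o) / np.sqrt(np.sum(m * m) * np.sum(o * o)))
    norm_sd = float(m.std() / o.std())
    return corr, norm_sd

rng = np.random.default_rng(2)
obs = rng.normal(size=(36, 48))                      # toy "observed" map
model = 0.8 * obs + 0.3 * rng.normal(size=(36, 48))  # toy "simulated" map
corr, norm_sd = taylor_stats(model, obs)
print(round(corr, 2), round(norm_sd, 2))
```

A perfect simulation sits at correlation 1 and normalised SD 1 on the diagram; the normalisation is what allows fields with different units to share one plot.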
Coupled model results

To test whether calibrating atmospheric parameters results in reasonable coupled models, we used the calibrated parameters from all successful 7- and 14-parameter cases in a set of control simulations of HadCM3. The surface temperature adjusts in the first decade (Fig. 7a), though the deep ocean is still adjusting during the 20-year simulations (Fig. 7b). Williamson et al. estimated that pre-industrial temperatures were 13.6 °C, with a robust error estimate of ±0.5 °C. We claim that a coupled model is "good" if the global and time average of its surface air temperature for years 10-19 is consistent with that estimate. The standard configuration is, just, within this range and, as noted by Gordon et al., HadCM3 is somewhat too cool. For the HadAM3-7 cases we find that eight of the parameter combinations produce temperatures within the target range (Table 5). For the HadAM3-14 cases five out of seven parameter combinations give temperatures within the target range. All four of the cases that failed to produce acceptable coupled models did so because they were too cold rather than too warm. As we start from the standard configuration, we may be better able, in the 20-year simulations we carried out, to identify cooling rather than warming biases. Though all atmospheric models were constrained to be in rough energy balance, the individual fluxes are less constrained. For three of the cases that cooled, RSR rapidly increases over the first 5 years, with OLR decreasing over the same period. However, the RSR increases by more than the OLR decreases, so the coupled model is out of balance and cools (Fig. 7). This may be due to negative cloud feedbacks in these model configurations. The remaining coupled models show a range of OLR and RSR values but are generally stable. We now examine whether there is any relationship between properties in the atmospheric model simulation and the coupled model simulation.
Above we showed that RSR changes were somewhat larger than OLR changes and that, across the optimised parameter sets, RSR variability was larger, relative to its uncertainty, than OLR variability (Fig. 3). Thus, we focus on relationships between global-average RSR and various properties of the coupled models. We examine the 10-year global average for 2001-2010 from the independent atmospheric simulations and years 10-19 from the control simulations. For surface air temperature and volume-average ocean temperature there is a relationship between atmospheric-model RSR and coupled-model values, with an increase in atmospheric RSR leading to cooling in the coupled model (Fig. 8), though with some scatter around this general relationship. Uncertainties on the regression are small. We also find an inverse relationship between the strength of the Atlantic Meridional Overturning Circulation (AMOC) in the control simulation and the atmospheric RSR, likely because cold models have a stronger AMOC. Similar results hold for northern hemispheric snow area and sea-ice area. For land precipitation the scatter is too large to conclude there is a strong linear relationship. We repeated this analysis using OLR from the atmospheric simulations and found similar, though opposite-signed and weaker, results. This likely arises from the constraint on the net flux, meaning that enhanced RSR must be balanced by reduced OLR. Note that the range of atmospheric RSR values is within the estimated uncertainty for RSR (see Fig. 1 of T13), and so all cases (after running the atmospheric optimisation) are "good".

Discussion

Our results suggest that calibrating the atmospheric component of a coupled model to multiple observations is computationally feasible, with the resulting coupled models behaving well much, but not all, of the time. However, we found that calibration of 14 parameters was less successful than that of 7 parameters.
We now investigate potential reasons for this by examining the Jacobian matrices from all 7- and non-random 14-parameter studies. We also examine the Jacobian of the HadAM3P 7-parameter cases to see whether changing resolution affects the Jacobian, which might explain the failure of the HadAM3P-13r6 case. We computed Jacobians for each iteration with the parameters normalised by their range, so that 0 is the minimum and 1 the maximum value, and normalised each bias by its simulated internal variability. To see which parameters have the strongest effect on simulated observations, we compute the mean, over all iterations, of the absolute Jacobian values. We compare this to internal variability by comparison with a folded normal distribution () using a 90 % critical value. To derive the parameters of this distribution, we assume that the underlying normal distribution arises from the difference of two normally distributed variables with unit variance and zero mean (σ = √2, μ = 0). We see in the 7-parameter cases (Fig. 9a) that all parameters, except ICE_SIZE, have a significant impact on the net flux and the cost function (F). ICE_SIZE affects both OLR and RSR outside the northern hemispheric extra-tropics, but changes in OLR and RSR must offset one another, leading to a small impact on net flux and on F. All parameters affect RSR in the tropics and almost all affect it in the extra-tropics. In contrast, tropical OLR is significantly affected by only three parameters (ICE_SIZE, VF1 and ENTCOEFF), with the remaining four parameters having little impact on OLR. Northern hemispheric extra-tropical land precipitation and SLP, and tropical SLP, are not significantly affected by any of the parameter perturbations. In the Southern Hemisphere, land temperature is only weakly affected by changes in VF1, while precipitation is not significantly impacted. These likely reflect the small land area in the Southern Hemisphere and the resulting increase in internal variability.
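The 90 % screen follows from the folded normal: if a normalised Jacobian entry behaved like the absolute difference of two unit-variance, zero-mean normal draws (so σ = √2, μ = 0), it would exceed √2 · Φ⁻¹(0.95) ≈ 2.33 only 10 % of the time by chance. A stdlib-only sketch of that threshold:

```python
import math

def erfinv(y, lo=0.0, hi=6.0):
    """Inverse error function by bisection (ample accuracy here)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def folded_normal_quantile(q, sigma):
    """t with P(|X| <= t) = q for X ~ N(0, sigma^2).
    P(|X| <= t) = 2 * Phi(t / sigma) - 1, so t = sigma * sqrt(2) * erfinv(q)."""
    return sigma * math.sqrt(2.0) * erfinv(q)

threshold = folded_normal_quantile(0.90, math.sqrt(2.0))
print(round(threshold, 2))  # about 2.33; mean |J| above this is "significant"
```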
In terms of relative importance, we see that changes in the ENTCOEFF parameter have the most impact on the cost function, with RSR being the observation most affected by parameter changes. In contrast, ICE_SIZE has the least impact on the cost function, and extra-tropical land precipitation and pressure gradients are unaffected by parameter perturbations. Examining the 14-parameter Jacobians (Fig. 9b), we see that 4 of the additional 7 parameters have a significant impact on the cost function. However, of these only DYNDIFF has a more than small impact on the cost function. For these six parameters, which generated small or insignificant perturbations to the cost function, our preliminary tuning (see above) had led to parameter perturbations that were large relative to the range (Table 1). As with the seven-parameter cases, ENTCOEFF has the largest impact, with RHCRIT the second most important. However, with this larger set of parameters all simulated observations, except Southern Hemisphere land temperature and precipitation, are affected. The CHARNOCK, ICE_SIZE and ALPHAM parameters have no significant impact on the cost function. Further, the CHARNOCK parameter was perturbed by about one-third of its range, meaning there is little freedom to perturb it further. The mean of the absolute Jacobians shows some differences in detail between the 14- and 7-parameter cases (compare Fig. 9a with b), suggesting that the Jacobians are, as expected, not constant. More detailed examination (not shown) suggests that within an individual study, after the first iteration, the Jacobians are fairly stable, but in different parts of parameter space the Jacobians differ even if the final states appear quite similar. Looking at the absolute Jacobians from the HadAM3P-7 computations (Fig. 9c), we see differences from the two HadAM3 results, with VF1 and RHCRIT no longer having a significant impact on the cost function.
This likely arises from the smaller impacts on the net flux than in HadAM3, which has, in our constrained optimisation, a large effect on the cost function. In contrast, the effect of ENTCOEFF and CT on the cost function is much larger in HadAM3P than it is in HadAM3. Regarding the poor performance of the HadAM3-14r8 algorithm, it is unclear at this stage precisely what caused it, given that HadAM3-14r6 behaves very well. We speculate that it may be caused by noise contamination, and that the fewer parameters we perturb in the algorithm, the smaller the chance of seeing the effect of noise. Alternatively, there could be instability in the randomised algorithmic variant, again due to noise. We note that if the cost function is smooth and accurate derivatives are available, one can easily observe improving rates of convergence for randomised block Gauss-Newton variants the more parameters one chooses in the block. As part of the development of our approach we carried out four trial cases in which we started from the parameter sets (Table 3) with the largest climate sensitivities. We present results from them to explore the sensitivity of our results to changes in the algorithm and cost function. We also carried out a case parallel to the HadAM3-7 cases in which we started the optimisation from the standard HadAM3 parameters and used the correct cost-function calculation (as done for the 14-parameter cases). The five cases are as follows:

Figure 10. (a) Cost as a function of iteration for the four trial 7-parameter and stdopt cases (lines and symbols). Also shown are the cost functions for the standard model for each case (coloured horizontal lines), with the colour corresponding to the trial case. Large symbols for each trial case show the cost from the independent atmospheric simulation; other details as in Fig. 1.
(b) Scatter plot (symbols as in a) of optimal-minus-standard 2000-2010 mean RSR from the atmosphere-only simulations against the coupled control mean 1.5 m temperature for years 10-19. The vertical grey region shows estimates of the difference between pre-industrial global-average air temperature and the standard configuration; configurations in this region are "good".

All trials (Fig. 10) converged to states with cost functions similar to, though slightly larger than, the reference model's. Independent simulations have cost functions slightly larger than the cases from the optimisation, with the difference being largest for trial7#15m. However, all cases produced models that cooled and had temperatures outside the range of acceptable coupled models. This suggests that the Gauss-Newton algorithm converges for a range of cost functions, but not necessarily to a case that produces an acceptable coupled model. Starting from the default parameter set, we found that the algorithm produced a model with a slightly improved cost function, taking four iterations before terminating. The resulting coupled model is just outside the acceptable range. Zhang et al. reported successful optimisation of the IAP LASG version 2 atmospheric model. They focused on only seven parameters and, unlike us, used a root-mean-square error between simulation and observations normalised by the SD of the standard simulation. They considered a broader range of variables than we did, though they used the older ERBE data rather than the recent CERES data. Unlike us, they screened out three of the parameters using the Morris method. Starting from the default parameter set, they improved their skill score by a small amount, although unlike us they did not test whether this was a selection effect. Their best algorithm took about 60 iterations, broadly consistent with our expected number of about 70 iterations. Various other studies have attempted to produce stable coupled models. Yamazaki et al.
used emulation to find parameter sets that would be expected to produce, in HadCM3, RSR and OLR values that, relative to the standard configuration of HadCM3, are within the uncertainty limit of Tett et al. (2013c). They found global-average temperatures of 289.9 ± 3.6 K, a range larger than that of the CMIP3 and CMIP5 ensembles. The uncertainty estimate used in their study includes several sources of uncertainty in addition to observational error. Restricting their analysis to model configurations that have RSR and OLR values within 20 % of that uncertainty range, corresponding to a net TOA flux range of ±1.1 W m−2, they found that those models had a broad range of climate sensitivities and a global mean temperature range of 286-291 K (Fig. 5 of ). Irvine et al. used a Latin hypercube design to produce 200 versions of the coupled atmosphere-ocean HadCM3 model, with 8 parameters being perturbed. They ran each version for 20 years, estimated the final equilibrium temperature and discarded cases outside the range 13.6 ± 2 °C. From their 200 initial versions they found 20 cases that met that criterion. How does the computational cost of this compare with our approach of perturbing the atmospheric model and then coupling the perturbed atmosphere to the ocean model? The nearest cases we have are the HadAM3-7 cases, which need an expected 68 evaluations (Table 5), each of 6.25 years, for a total of 425 years of atmospheric simulation. As the atmospheric model is about half the cost of the coupled model, this is equivalent to about 210 years of coupled simulation. We then need to carry out, on average, 10/8 coupled-model simulations, each of 20 years, to obtain one that is within observational uncertainty, for a grand total of about 225 coupled-model years. This is approximately the same computational resource as Irvine et al. needed but produces coupled models in better agreement with the pre-industrial temperature estimates.
Conclusions Using multi-annual, large-spatial-scale observations, we have automatically calibrated HadAM3 and HadAM3P. Much of the time we ended up with models that have cost functions similar to the standard configuration or, for HadAM3P, better than the standard configuration. We used two variants of the Gauss-Newton algorithm: one in which all parameters were varied, and a second, random block-coordinate variant in which a subset of the parameters, chosen at random on each iteration, was varied. For the studies in which we perturbed 7 parameters in HadAM3 we found that all cases converged, taking an average of 68 evaluations for a total of 425 simulated years. For the 14-parameter cases we used both the standard Gauss-Newton algorithm and a variant in which a random subset of parameters was selected. We tried two random cases: one in which 6 parameters were perturbed and another in which 8 were perturbed. For each algorithm, five studies starting from the same initial parameter choices were carried out. We find large differences in the performance of these algorithms, with the 6-random-perturbation algorithm performing best, the 8-random-perturbation cases worst, and the standard Gauss-Newton algorithm performing intermediately. The 6-random case needs an expected number of 82 evaluations (or 512 simulated years) and, on average, produces models that are slightly better than the standard configuration. We found that the total number of iterations needed to produce acceptable models is considerably sensitive to the number of random parameters. This suggests that further work is needed to determine how many parameters should be perturbed. As discussed above, the poor performance in the 14-parameter case seems to be due to some of the parameter perturbations having only a small impact on the cost function, leading to noise contamination of the line-search vector and causing the algorithm to head in random directions.
The poor performance of the random variant that perturbed 8 of the 14 parameters at random may also be due to noise contamination, arising again from unimportant parameters being included, as in the full 14-parameter case, or to some kind of algorithmic instability. We recall that Eizenberg illustrated numerically that, for smooth problems with available derivatives, the randomised variants' rates of convergence improve continuously as more parameters are included in the blocks being perturbed. We also found that several different parameter combinations led to models that were broadly comparable with the standard configurations. This suggests that HadAM3 exhibits equifinality (Beven and Freer, 2001), with different parameter sets leading to models that appear similar. Further, many, though not all, of the resulting coupled models are consistent with pre-industrial temperatures, without any need for flux correction. This is a significant advance on previous work using perturbed-physics models, which has generally had to flux-correct the resulting models. If these techniques could be successfully applied to state-of-the-art models, it would be practical to do the following: generate perturbed models to test whether an observationally constrained ensemble has a narrow range of climate feedbacks; add new parametrisations of processes to a model and then recalibrate the model; and explore the effect of changing resolution without large changes in the simulation of large-scale climate. Though our algorithm works reasonably well for a modest number of parameters, it would benefit from a better understanding of the effect of noise on it. Both the line search, through a selection effect, and the computation of the Jacobian/Hessian matrices are affected by noise. A better algorithm would identify parameters that did not appear to affect the cost function and remove them from the analysis, as done by Zhang et al.
Another potential approach might be to update the components of the Jacobian from the previous iteration's values depending on the relative amount of noise contamination in them. We hope that our derivative-based experience with randomised block variants of Gauss-Newton, where the rates improve the larger the size of the block of parameters being perturbed, would then be observed here as well. This would further allow us to quantify the trade-offs of the lower evaluation cost per iteration of the small-block randomised variants against their respective global rates of convergence. Our work focused on optimisation rather than the cost function. We used a cost function based on crude estimates of observational uncertainty and a subjective choice of large-scale observations. Future work would benefit from much better estimates of observational uncertainty and an objective means of selecting observations. One approach might be to choose observations for which we have good evidence that they matter for climate feedbacks or other properties of the model we are concerned about. Nevertheless, our results suggest that it is possible and computationally feasible to automatically calibrate the atmospheric component of a climate model and generate a plausible coupled model. We implemented and developed the algorithms described above using bash shell scripts and ipython (Pérez and Granger, 2007) with the numpy (van der Walt et al.), pandas (http://pandas.pydata.org/) and iris (http://scitools.org.uk/iris/docs/latest/index.html) modules. Each iteration was managed using Grid Engine, with runs of the climate models each being followed by a job that computed the simulated observables. A final job in each iteration tested for termination and, if required, set up the next iteration. Visualisation was done using Matplotlib. [Table legend: N, number of converged cases with cost function similar to the standard model; N Control, number of coupled control simulations consistent with pre-industrial temperatures.]
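The random block-coordinate Gauss-Newton iteration described above can be sketched on a toy least-squares problem. This is a minimal illustration, not the authors' code: the residual function, the 7-parameter setup echoing the HadAM3-7 case, and all names are invented, and in the real application each residual evaluation is a multi-year atmosphere-only simulation rather than a cheap function call.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for "run the model, compute simulated observables minus
# observations". Mildly nonlinear so Gauss-Newton needs several iterations.
TARGET = np.array([0.3, -0.2, 0.5, 0.1, -0.4, 0.25, 0.0])

def residuals(p):
    d = p - TARGET
    return d + 0.3 * d**2

def cost(p):
    return float(np.sum(residuals(p) ** 2))

def gn_block_step(p, k, h=1e-4, steps=(0.1, 0.3, 0.7, 1.0)):
    """One randomised block-coordinate Gauss-Newton iteration: perturb a
    random subset of k parameters (one evaluation each) to build a
    finite-difference Jacobian for that block, solve the linearised
    least-squares problem, then line-search along the resulting direction."""
    n = len(p)
    block = rng.choice(n, size=k, replace=False)
    r0 = residuals(p)
    J = np.zeros((len(r0), k))
    for j, idx in enumerate(block):
        q = p.copy()
        q[idx] += h
        J[:, j] = (residuals(q) - r0) / h
    dp_block, *_ = np.linalg.lstsq(J, -r0, rcond=None)
    direction = np.zeros(n)
    direction[block] = dp_block
    # Crude line search: keep whichever fixed step fraction is cheapest.
    best = min((p + s * direction for s in steps), key=cost)
    return best if cost(best) < cost(p) else p

p = np.zeros(7)
for _ in range(30):
    p = gn_block_step(p, k=4)
print(round(cost(p), 6))  # 0.0 (converged to the target parameters)
```

The greedy line search here mirrors the vulnerability discussed in the text: when a block contains parameters with negligible effect on the cost, their Jacobian columns are dominated by noise and the search direction degrades.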
Bad dancing. Worse fashion. A stupidly catchy song. Yes, friends, it's time for a new Psy video. "Daddy," which the South Korean pop star just dropped, has a pretty simple premise: CL of girl group 2NE1 would like to know "Hey, where'd you get that body from?" The answer is, naturally, "I got it from my daddy." Commence crazy dance interludes, costume changes, and an insane hyped-up beat. (Skrillex, is that you?) The video may not have the same surprise-hit factor that Psy’s video for "Gangnam Style" had, but it’s just as bonkers, and it’s already gotten nearly 6 million views in less than 24 hours. So now the real question is: Where do you get that viral magic from, Psy?
//===-- InstrumentationRuntimeStopInfo.h ------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#ifndef LLDB_TARGET_INSTRUMENTATIONRUNTIMESTOPINFO_H
#define LLDB_TARGET_INSTRUMENTATIONRUNTIMESTOPINFO_H

#include <string>

#include "lldb/Target/StopInfo.h"
#include "lldb/Utility/StructuredData.h"

namespace lldb_private {

class InstrumentationRuntimeStopInfo : public StopInfo {
public:
  ~InstrumentationRuntimeStopInfo() override = default;

  lldb::StopReason GetStopReason() const override {
    return lldb::eStopReasonInstrumentation;
  }

  const char *GetDescription() override;

  bool DoShouldNotify(Event *event_ptr) override { return true; }

  static lldb::StopInfoSP CreateStopReasonWithInstrumentationData(
      Thread &thread, std::string description,
      StructuredData::ObjectSP additional_data);

private:
  InstrumentationRuntimeStopInfo(Thread &thread, std::string description,
                                 StructuredData::ObjectSP additional_data);
};

} // namespace lldb_private

#endif // LLDB_TARGET_INSTRUMENTATIONRUNTIMESTOPINFO_H
Hackers can take advantage of certain aspects of the Android ecosystem. Google's Android operating system may be built on a secure Linux kernel, but the mobile operating system is far from immune to security risks. Threats to your company's Android devices can come from the open nature of the Android app store on Google Play, from users deactivating certain security mechanisms, or even from the way carriers control security updates from Google. The more aware you are of potential security issues, the greater your chances of avoiding malware infections in your company's IT infrastructure. Giving your permission for an Android device to install an app infected with malware allows that malicious code to bypass most of the system's built-in security. Likewise, if you root your device to bypass Android's layer of protective restrictions, you are sidelining the security mechanisms that protect your system from malicious code you haven't given permission to execute on your device. This makes Android devices more vulnerable to being infected by Internet-based attacks, and to spreading those infections on company networks to which they connect. One of the major security risks in the Android ecosystem is the risk of downloading apps from the Google Play store that harbor malware. Google makes it easy for developers to add their apps to Google's app store, which creates a large and diverse selection of apps for Android users. However, the loose curation of the store makes it easier for programmers to upload apps containing malicious code to Google Play. These malware apps can pretend to be anything from games to Android anti-virus utilities. Another security threat to Android devices comes from the open nature of app submission for Google Play: apps that do not contain malicious code but employ an insecure software design. When app developers leave security vulnerabilities in their code, hackers or malware can exploit those vulnerabilities to compromise your device.
The malicious code uses the permission you gave the insecure app to slip past your device's security, similar to the way thieves in movies steal key-cards from unwitting employees. While Google incorporates the latest security fixes into the latest version of Android, not every Android device runs the latest version of the operating system. Different devices across different carriers run different versions of Android, and individual carriers control when their customers can upgrade from version to version, or even install updates from Google for their current version of Android. This version fragmentation creates a situation in which wide segments of the Android ecosystem remain vulnerable to security holes that Android's developers have already closed. McDunnigan, Micah. "Security Risks of Androids." Small Business - Chron.com, http://smallbusiness.chron.com/security-risks-androids-68511.html. Accessed 23 April 2019.
package org.sunbird.models;

import java.io.Serializable;
import java.util.Date;

/**
 * Created by JUSPAY\nikith.shetty on 24/11/17.
 */
public class Notification implements Serializable {

    private long expiryTime;
    private String displayTime;
    private Date receivedAt;
    private String notificationJson;
    private String status;
    private String msgid;
    private String title;
    private String msg;
    private int relativetime;
    private String icon;
    private String time;
    private int validity;
    private int actionid;
    private ActionData actiondata;
    private String dispbehavior;
    private int isRead;

    public Notification() {
    }

    public Date getReceivedAt() { return this.receivedAt; }
    public void setReceivedAt(Date receivedAt) { this.receivedAt = receivedAt; }

    public String getNotificationJson() { return this.notificationJson; }
    public void setNotificationJson(String notificationJson) { this.notificationJson = notificationJson; }

    public String getMsgid() { return this.msgid; }
    public void setMsgid(String msgid) { this.msgid = msgid; }

    public long getExpiryTime() { return this.expiryTime; }
    public void setExpiryTime(long expiryTime) { this.expiryTime = expiryTime; }

    public String getDisplayTime() { return this.displayTime; }
    public void setDisplayTime(String displayTime) { this.displayTime = displayTime; }

    public String getStatus() { return this.status; }
    public void setStatus(String status) { this.status = status; }

    public String getTitle() { return this.title; }
    public void setTitle(String title) { this.title = title; }

    public String getMsg() { return this.msg; }
    public void setMsg(String msg) { this.msg = msg; }

    public int getRelativetime() { return this.relativetime; }
    public void setRelativetime(int relativetime) { this.relativetime = relativetime; }

    public int getActionid() { return this.actionid; }
    public void setActionid(int actionid) { this.actionid = actionid; }

    public String getIcon() { return this.icon; }
    public void setIcon(String icon) { this.icon = icon; }

    public String getTime() { return this.time; }
    public void setTime(String time) { this.time = time; }

    public int getValidity() { return this.validity; }
    public void setValidity(int validity) { this.validity = validity; }

    public ActionData getActiondata() { return this.actiondata; }
    public void setActiondata(ActionData actiondata) { this.actiondata = actiondata; }

    public String getDispbehavior() { return this.dispbehavior; }
    public void setDispbehavior(String dispbehavior) { this.dispbehavior = dispbehavior; }

    public int isRead() { return this.isRead; }
    public void setIsRead(int isRead) { this.isRead = isRead; }
}
Emerging cyberworld attack vectors: Modification, customization, secretive communications, and digital forensics in PC video games Complexity of customization in video games threatens to provide people with malicious intent a new vector for the secretive transmission of messages as well as data. This paper explores six different games, including some of the most popular games of early 2013: World of Warcraft (WoW), League of Legends (LoL), Defense of the Ancients 2 (DotA 2), StarCraft 2 (SC2), Battlefield 3 (BF3), and Garry's Mod (GMod). Our research has shown that each of these games has at least one feature that an attacker may exploit in order to transfer information. Since video game forensics is still in an infantile stage, an investigator may not suspect video games and their data files as accomplices to crime. Within this paper, we describe methods and methodology for hiding, displaying, and transferring data in video games and their related applications. Additionally, we offer recommendations on how an investigator might search for hidden data, such as comparing hashes of unaltered game files to the altered game files on a suspect's machine. To the best of our knowledge, this is the first systematic research on the modification and forensics of popular games.
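The hash-comparison check recommended above can be sketched in a few lines. This is a minimal illustration with invented file names and contents; in a real investigation the reference digests would come from a vendor-published manifest or a freshly installed copy of the game.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large game assets never load
    fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on throwaway files standing in for a clean install and a suspect copy.
with tempfile.TemporaryDirectory() as d:
    clean = os.path.join(d, "game.pak")          # hypothetical asset name
    suspect = os.path.join(d, "game_suspect.pak")
    with open(clean, "wb") as f:
        f.write(b"original asset data")
    with open(suspect, "wb") as f:
        f.write(b"original asset data with smuggled payload")
    altered = sha256_of(clean) != sha256_of(suspect)

print(altered)  # True: the suspect file no longer matches the reference hash
```

Any mismatch flags a file for closer inspection; it does not by itself prove hidden data, since legitimate patches and user-made mods also change hashes.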
Identification, Characterization, and Evaluation of Nematophagous Fungal Species of Arthrobotrys and Tolypocladium for the Management of Meloidogyne incognita Root-knot nematodes belonging to the genus Meloidogyne are agriculturally important pests, and biocontrol strategies offer safer alternatives for their management. In the present study, two fungal species from Indian soils were identified as Arthrobotrys thaumasia and Tolypocladium cylindrosporum based on morphological characteristics and further confirmed using molecular markers. In vitro evaluation of A. thaumasia against M. incognita and Caenorhabditis elegans showed 82 and 73% parasitism, respectively, whereas T. cylindrosporum gave 65.2 and 57.7% parasitism, respectively. Similarly, culture filtrates of A. thaumasia caused 57.7 and 53.7% mortality of M. incognita and C. elegans, respectively, whereas T. cylindrosporum caused higher mortality of 87.3 and 64%, respectively. Besides, greenhouse evaluation of both fungi against M. incognita infecting tomato significantly reduced the nematode disease burden, reflecting parasitic success measured as the total number of galls, egg masses, eggs per egg mass, and the derived nematode multiplication factor. Application of A. thaumasia and T. cylindrosporum reduced the nematode multiplication factor by 80 and 95%, respectively, compared with control. General metabolite profiling of the tested fungi using gas chromatography-mass spectrometry and ultra-performance liquid chromatography-quadrupole/time-of-flight mass spectrometry, reported for the first time here, showed the presence of various volatile and non-volatile compounds with nematicidal activity, viz., trimethyl-heptadiene, methyl-hexadecanol, dodecadienal, decane, terpendole E, dodecane, acetamido-6-anthraquinone, and hexadecanol.
Also, other compounds such as undecane, dibutyl-disulfide, octadecenal, paganin, talathermophilin, dactylarin, tolypyridone A, tolypyridone B, pyridoxatin, and destruxin were identified, reported in the literature to possess antibacterial, antifungal, and insecticidal properties. This is the first report of the occurrence of both fungi from India and a pioneer demonstration of T. cylindrosporum for root-knot nematode management. INTRODUCTION Plant-parasitic nematodes (PPNs) are considered hidden enemies and pose a major threat to both agricultural and horticultural crops. They have a universal distribution and cause an estimated yield loss of US$ 173 billion every year. Amongst the top 10 economically important PPN species worldwide, root-knot nematodes (RKNs) belonging to the genus Meloidogyne are considered to be the most severe ones. The preparasitic J2s of the most important RKN species, Meloidogyne incognita, penetrate and infect the root tip using a hollow, needle-like, protrusible stylet. The stylet is used for probing the plant tissue and entering the vascular cylinder, where the nematode injects esophageal gland secretions that induce the formation of specialized feeding cells known as giant cells by suppressing the plant host immune system. Eventually, the feeding J2s undergo consecutive molts to the J3 (third-stage juvenile) and J4 (fourth-stage juvenile) stages and become young females that develop into reproducing mature females that lay eggs. Despite the enormous damage caused by PPNs in various crops, there is still a dearth of effective and efficacious nematode management options. Traditionally, management of PPNs relied on integrated cultural and physical tactics such as clean cultivation practices, crop rotation, solarization of the soil before planting, adequate fertilization, and removal of infected plants and weeds.
Additionally, one of the most effective, economical, and environmentally safe methods to reduce yield losses from nematode diseases is to use nematode-resistant cultivars. However, commercially viable resistant varieties and/or cultivars may not be available for all crops, or are limited in number for only specific crops, due to a lack of the resistant sources required for varietal development. Chemical nematicides are the mainstay due to their ability to reduce high densities of nematodes in the soil in a short period to avoid significant yield losses (Regmi and Desaeger, 2020). However, due to their high toxicity and possible environmental and health hazards, most of the chemical nematicides, fumigants, and insecticides have been withdrawn or banned from the global market. In addition, the limited label claim of some of the recently introduced synthetic chemicals, such as fluensulfone and fluopyram, restricts their usage in all crops and also against different nematodes. Therefore, using novel comprehensive approaches is the need of the hour for sustainable agricultural production. Nematode management using biocontrol strategies has been known to be a safer alternative and practical approach. This is reflected by a considerable investment of venture capital in the research required for developing biocontrol options. One of the biocontrol approaches is to regulate nematode populations using nematophagous fungi, which have antagonistic activity against infective juveniles. Nematophagous fungi and/or endophytic fungi can directly attack, kill, immobilize, or repel nematodes, confuse them while finding their host, interfere with giant cell development, compete for resources, or use a combination of these options. They can also capture, parasitize, or paralyze nematodes and act as natural enemies of plant- and animal-parasitic nematodes.
They are divided into four groups, i.e., endoparasitic fungi, nematode-trapping fungi (NTF), opportunistic fungi, and toxin-producing fungi (Siddiqui and Shaukat, 2004). NTFs are a unique group of soil-inhabiting fungi that can switch from a saprophytic to a pathogenic lifestyle once they come in contact with nematodes, as a response to nutrient depletion, and the predatory behavior they have adopted is remarkable. Arthrobotrys oligospora Fres. 1852 (Orbiliaceae: Orbiliales) was the first recognized nematode-trapping fungus and is the most abundant in the environment. Study of the nematode-trapping process of the network structure of A. oligospora demonstrated that a specialized mycelial structure traps the nematodes, followed by penetration of the nematode cuticle, after which the fungus digests the body contents. Zhang et al. revealed that A. scaphoides, isolated from soil using Panagrellus redivivus nematodes as bait, formed three-dimensional (3D) adhesive networks that trapped nematodes within 2 days. The utilization of NTFs will be an attractive alternative for the biological control of infective larvae. The physicochemical processes of NTF are of immense interest, and researchers have collectively revealed that trapping and/or immobilization are associated with the upregulation of several signaling pathways, intercellular communications, the production of adhesive proteins and organic metabolites, as well as nitrate assimilation. Besides, Wang B.L. et al. highlighted that Caenorhabditis elegans was attracted toward A. oligospora due to three fungal metabolites, namely 2(5H)-furanone, furan-2-yl methanol, and furan-2-carbaldehyde. However, the compound 3-hydroxy-2-methyl-4H-pyran-4-one (known as maltol) displayed a significant increase in the formation of 3D traps.
Likewise, the quantities of four fungal metabolites, i.e., desferriferrichrome, linoleyl alcohol, nonadecanamide, and citicoline, were found to increase when the fungi switch their lifestyle to the predatory stage, and these compounds also showed considerable nematicidal activity. Metabolite profiling of 100 wild isolates of NTF from three different species, A. oligospora, Arthrobotrys thaumasia, and Arthrobotrys musiformis, revealed the production of thousands of metabolites belonging to various structural families such as peptides, siderophores, fatty alcohols, and fatty acid amides during their interaction with C. elegans, as demonstrated by liquid chromatography-mass spectrometry (LC-MS) analyses. The endophytic entomopathogenic fungus Tolypocladium spp. is known to survive in the soil in the absence of insects by using nematodes as alternate hosts (Samson and Soares, 1984). T. cylindrosporum is a saprotroph and an entomopathogenic fungus studied as a biological control agent against insects of several orders but, to the best of our knowledge, so far not known to be useful against PPNs. Nematophagous fungi serve as both predators and decomposers in the environment, and there might be regional differences in the effectiveness of different fungal isolates (Wang F.H. et al., 2017). Hence, isolation, identification, and characterization of native strains of fungi with predatory activity are crucial to identify potential fungi, along with an understanding of their ecology, biology, mode of action, and interactions, to exploit them successfully against target PPNs. In view of the importance of nematophagous fungi as biocontrol agents for nematode management, we had earlier isolated and identified 81 fungal isolates up to the generic level from 17 soil samples collected across different states of India, using C. elegans and M. incognita as bait.
In continuation, the present study aimed to identify the species of two important fungal isolates, Arthrobotrys and Tolypocladium, using morphological characters and molecular markers. Furthermore, the effects of nutrition, temperature, and pH on growth rate and trap formation were studied. Besides, fungal parasitization of M. incognita and C. elegans was evaluated under in vitro and in vivo conditions. We have also profiled the volatile and non-volatile chemical compounds produced by these fungal isolates. Nematode Cultures C. elegans strain N95 was maintained on a nematode growth medium with an Escherichia coli strain OP50 lawn. A chunk of agar containing hundreds of worms was cut and transferred onto an overnight-grown OP50 lawn in a fresh Petri plate. Plates were incubated at 25 °C for 3 days for nematode multiplication. The authenticated population of M. incognita was maintained and multiplied on tomato roots (Solanum lycopersicum L. cv. Pusa Ruby) in the greenhouse. Approximately 35-day-old plants were harvested, roots were washed free of soil and used for collecting fresh egg masses, which were hatched via the modified Baermann's technique (Whitehead and Hemming, 1965) to obtain the infective second-stage juveniles (J2s) required for all the experiments. Morphological and Molecular Classification of the Fungal Isolates In the present study, we have taken the two Indian isolates of Arthrobotrys and Tolypocladium for species identification. Pure cultures of both fungal isolates were grown separately in potato dextrose broth (PDB) at 25 °C for 1 week. Cultures of Arthrobotrys spp. and Tolypocladium spp. were observed under an Olympus BX50 compound microscope, and morphological measurements of conidia, conidiophores, and phialides (length and width) were recorded. For molecular characterization, genomic DNA was extracted from a 1-week-old mycelial mat using the CTAB method as previously described by Wang et al.
Polymerase chain reaction (PCR) analyses were carried out with genomic DNA extracted from the fungal isolates to amplify two markers, the internal transcribed spacers (ITSs) and β-tubulin, using the universal primers ITS-1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS-4 (5′-TCCTCCGCTTATTGATATGC-3′) as well as βt2a (5′-GGTAACCAAATCGGTGCTGCTTTC-3′) and βt2b (5′-ACCCTCAGTGTAGTGACCCTTGGC-3′), respectively. PCR amplification was carried out as per the procedure of Glass and Donaldson. The amplified products were analyzed by electrophoresis on a 1.2% (w/v) agarose gel (Sigma Aldrich, United States) and visualized using a gel documentation system (Alpha Image Analyzer, United States). PCR products were further purified using a PCR clean-up kit (Macherey-Nagel, Germany) and sequenced (Applied Biosystems, United States). The sequences of ITS and β-tubulin generated for the tested fungi were compared with previously reported sequences in the GenBank database. All the sequences were imported for alignment, with default parameters, into the ClustalW algorithm in the MEGA X software application package. To search for homologs of both genes, sequences from each isolate were subjected to the Nucleotide Basic Local Alignment Search Tool of the National Center for Biotechnology Information. The sequences were further analyzed and submitted to GenBank. Additionally, phylogenetic trees using the sequences of ITS and β-tubulin of the tested isolates were constructed in MEGA X using the maximum composite likelihood approach, considering 1,000 bootstrap replications, under the Kimura two-parameter distance model. For this, Neurospora crassa and Cordyceps ophioglossoides were used as out-groups. In addition, the identified fungal isolates were submitted to the Indian Type Culture Collection (ITCC), which is an affiliate member of the World Federation for Culture Collections and is registered with the World Data Centre for Microorganisms (registration number 430).
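For illustration, the Kimura two-parameter distance underlying the tree construction can be computed from a pair of aligned sequences as follows. This is a minimal sketch, not the MEGA X implementation, and the example sequences are invented; it counts transitions (A↔G, C↔T) and transversions separately and applies d = −½ ln((1 − 2P − Q)·√(1 − 2Q)).

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned sequences.
    P = proportion of transition sites, Q = proportion of transversions."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    transitions = transversions = sites = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguity codes
        sites += 1
        if a == b:
            continue
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / sites, transversions / sites
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# One transition in 20 aligned sites: P = 0.05, Q = 0, d = -0.5 * ln(0.9)
d = k2p_distance("ACGTACGTACGTACGTACGA", "ACGTACGTACGTACGTACGG")
```

The correction diverges as sequences saturate (when 2P + Q approaches 1), which is why it is applied to closely related ITS/β-tubulin sequences rather than deeply divergent ones.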
Effect of Nutrition, Temperatures, and pH on Growth Rate and Trap Formation of the Selected Fungi Pure cultures of both fungi were grown on six different media, viz., potato dextrose agar (PDA), cornmeal agar (CMA), Czapek malt agar (CzMA), rose bengal agar (RBA), peptone-yeast-glucose, and synthetischer nährstoffarmer agar (SNA) (HiMedia, India) in 90-mm-diameter Petri plates at 25 °C to examine the differences in structure, color, growth rate, and sporulation among colonies. For this, a small piece of agar measuring around 5 × 5 mm was cut from a well-established colony and placed upside down at the center of a fresh Petri plate containing each medium. Three replicates (n = 3) were used for each medium, and observations were made for all the replicates separately. Fungal growth was determined at 3, 6, 9, and 12 days post-inoculation (dpi) by measuring the colony diameter. The growth rate was estimated as growth rate per day (mm/day) = (colony diameter − inoculum disc diameter)/days of culture, as described earlier by Wang F.H. et al. Furthermore, the medium that supported the maximum growth rate was used to evaluate the effect of different temperatures, viz., 15, 20, 25, 30, and 35 °C, following the procedures described earlier. Additionally, the combination of medium and temperature supporting maximum fungal growth was used to evaluate the effect of different pH levels (ranging from 4 to 10). Three replicates (n = 3) were used for each treatment. The growth rate was estimated at 3, 6, 9, and 12 days after inoculation by a formula adopted from Wang F.H. et al.: the mean colony diameter was measured in each group and used for calculating the growth rate after deducting the original diameter of the fungal disc used for sub-culturing. In vitro Efficacy of Fungal Filtrate on Nematode Mortality The two selected fungi were grown in PDB medium for 10 days at 25 °C at 180 rpm in an incubator shaker.
Approximately 100 surface-sterilized nematodes (both M. incognita and C. elegans) were added separately into 5-ml Eppendorf tubes containing 3 ml of fungal filtrate (FF) of each isolate. The tubes were incubated on the shaker at 120 rpm at 25 °C for 3 days. The worms were washed thrice with sterile water (SW), followed by re-incubation in SW for 24 h at 25 °C for revival of nematode movement. Then, the nematodes were examined under a stereo binocular microscope to record dead and live nematodes, and the nematode mortality percentage was calculated as % mortality = (number of dead nematodes/total number of nematodes) × 100. There were three replicates (n = 3) for each treatment, and observations were made for all the replicates independently. Worms in PDB medium served as the control for comparison. In vitro Evaluation of Direct Fungal Parasitism Against M. incognita and C. elegans Fungal parasitism against M. incognita J2s and C. elegans L3s was evaluated in water-agar plates. Briefly, fungi were grown on PDA at 25 °C for 6 days, and then 5-mm-diameter discs were transferred onto 2% water-agar plates containing 1% ampicillin (100 µg ml−1). The plates were incubated at 25 °C for 3 days, and then 100 surface-sterilized worms were added to the plates separately. The number of captured larvae was scored after 3 days using a light microscope (40×). Three replicates (n = 3) were used for each treatment and compared with the control (nematodes only in water-agar medium). Parasitization was calculated as previously described by Siddiqui and Shaukat as % parasitization = (number of parasitized nematodes/total number of nematodes) × 100. Captured nematodes of both M. incognita J2s and C. elegans L3s were also examined under a scanning electron microscope. For this, worms were fixed in 2% glutaraldehyde in 0.1-M phosphate buffer (pH 7.2) overnight, followed by 2% osmium tetroxide fixation for 6 h and dehydration using an ethanol series, as described by Den Belder et al. Greenhouse Evaluation Against M. incognita One-month-old healthy tomato seedlings (S. lycopersicum L. cv. Pusa Ruby) were transplanted into 4-inch-diameter pots filled with 400 g of autoclaved soil, sand, and farmyard manure (50:25:25), which were kept in the greenhouse. At the same time, fungal suspensions were prepared by culturing the colonies on the medium that showed a high sporulation level and incubating at 25 °C for 10 days. Subsequently, 5 ml of sterile distilled water was added to each plate, and the spores were scraped using a spatula. The mixture was placed in a small beaker, stirred for 10 min, filtered through cheesecloth, and quantified using a hemocytometer on a light microscope. Finally, the fungal suspensions were adjusted to 1 × 10⁶ spores/ml and used for inoculation around the root zone during transplantation. After 1 week, 800 J2s of M. incognita were inoculated at the rate of 2 J2s per gram of soil, and the pots were maintained in the greenhouse. Plants that received sterile distilled water served as the control. Five replicates (n = 5) were used for each treatment, and observations were made for all the replicates individually. Plants were carefully harvested at 45 dpi, and roots were washed free of soil. Plant growth parameters, viz., plant length, fresh weight, and dry weight, were recorded. Additionally, the nematode disease burden was determined as the number of galls, egg masses, and eggs per egg mass and used for deriving the nematode multiplication factor (MF), as described previously by Hada et al. (2020, 2021a). Extraction of Fungal Volatiles for Gas Chromatography-Mass Spectrometry Analysis The selected fungal isolates were grown on PDA media (HiMedia, India) in Petri plates (90 mm diameter) for 10 days. Fungal mats along with the media were taken out separately and extracted thrice with hexane (50 ml × 3) using a bath sonicator at 30-Hz amplitude for 30 min. The extracts were filtered, pooled, and passed through anhydrous sodium sulfate (20 g) to remove traces of water, if any.
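The assay metrics used in the text reduce to simple ratios, sketched below. The multiplication factor formula is our assumption of the usual final-population/initial-inoculum definition, since the text cites Hada et al. rather than spelling it out, and the example egg counts are invented.

```python
def pct_mortality(dead, total):
    """% mortality = (number of dead nematodes / total nematodes) * 100."""
    return 100.0 * dead / total

def pct_parasitization(parasitized, total):
    """% parasitization = (parasitized nematodes / total nematodes) * 100."""
    return 100.0 * parasitized / total

def multiplication_factor(egg_masses, eggs_per_egg_mass, initial_inoculum):
    """ASSUMED definition: final egg population / initial inoculum
    (800 J2s per pot in the greenhouse experiment described above)."""
    return (egg_masses * eggs_per_egg_mass) / initial_inoculum

print(pct_parasitization(82, 100))          # 82.0, the A. thaumasia figure
# Invented example: 40 egg masses of 350 eggs each against the 800-J2 inoculum
print(multiplication_factor(40, 350, 800))  # 17.5
```

A treatment that "reduced MF by 95%" then simply means its MF is 5% of the control's MF under the same inoculum.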
The extracts were concentrated separately under reduced pressure in a rotary evaporator (Heidolph, Germany) and dissolved in gas chromatography-mass spectrometry (GC-MS) grade hexane for further analysis (a). Analysis by Gas Chromatography-Mass Spectrometry GC-MS analysis was carried out using a 7890A GC (Agilent Technologies, United States) equipped with an HP-5MS column (30 m × 0.25 mm, 0.25 µm film; Agilent Co., United States), which was directly connected to a triple-axis HED-EM 5975C mass spectrometer (Agilent Co., United States). The injection volume was 1 µl with flow mode in split control. The carrier gas (helium) flow was set at 1 ml min⁻¹. The oven temperature was initially held at 40°C for 2 min. Thereafter, the temperature was raised at 3°C min⁻¹ to 130°C and held for 2 min. Again, the temperature was raised at 5°C min⁻¹ to 220°C and held for 1 min. Finally, the oven temperature was raised to 280°C at 10°C min⁻¹. The total runtime was 59 min. The MS acquisition parameters were ion source temperature 175°C, electron ionization 70 eV, full scan mode (50-550 mass units), and transfer line temperature 250°C. Volatile organic compounds (VOCs) were identified by matching their mass spectra against the National Institute of Standards and Technology Mass Spectral Library (a). Extraction of Fungal Metabolites for Ultra-Performance Liquid Chromatography-Quadrupole/Time of Flight-Electrospray Ionization-Mass Spectrometry Analysis The selected fungi were again grown on PDA for 10 days. Fungal mats along with media were taken out separately and extracted thrice with methanol (50 ml each) using a bath sonicator at 30-Hz amplitude for 30 min. The extracts were filtered and pooled, and the solvent was evaporated under vacuum in a rotary evaporator, which yielded the respective concentrates.
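The stated 59-min total GC runtime can be reconstructed directly from the oven program given above (initial hold, three ramps, and intermediate holds):

```python
# Reconstructing the total GC runtime from the oven program.
segments_min = [
    2.0,               # hold at 40 C
    (130 - 40) / 3,    # ramp at 3 C/min to 130 C -> 30 min
    2.0,               # hold at 130 C
    (220 - 130) / 5,   # ramp at 5 C/min to 220 C -> 18 min
    1.0,               # hold at 220 C
    (280 - 220) / 10,  # ramp at 10 C/min to 280 C -> 6 min
]
total_min = sum(segments_min)
print(total_min)  # 59.0, matching the stated total runtime
```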
The concentrates obtained for each fungus were dissolved in LC-MS grade methanol separately for further analysis (b). Analysis by Ultra-Performance Liquid Chromatography-Quadrupole/Time of Flight-Electrospray Ionization-Mass Spectrometry The analysis was performed on an ultra-performance liquid chromatography-quadrupole/time of flight mass spectrometer (QToF-MS, Synapt G2 HDMS, Waters Corporation, Manchester, United Kingdom). The QToF-MS was operated with electrospray ionization (ESI) at a nominal mass resolution of 20,000 and controlled by MassLynx 4.1 software. Data acquisition was made with the MSᴱ function in continuum mode in the range of m/z 50-1,200. Chromatographic separation was performed on an ACQUITY Ultra-Performance Liquid Chromatography (UPLC) BEH C18 column (2.1 × 100 mm, 1.8 µm; Waters India Pvt. Ltd., Bangalore) at 35°C. The mobile phase consisted of phase A, methanol-water (20:80), and phase B, methanol-water (80:20), with 0.1% formic acid in both phases. A gradient program was used at a 0.4 ml/min flow rate: 0-4.0 min 100% A, 4.0-7.0 min 70% A, 7.0-12.0 min 50% A, 12-15 min 30% A, and 15-25.0 min 100% A. The injection volume was 5 µl, and the samples were maintained at 25°C throughout the analysis (b). Statistical Analysis Data from laboratory experiments were analyzed in a completely randomized design. Greenhouse experiments were conducted in a randomized complete block design, and data were subjected to analysis of variance and Duncan's multiple range test at 1 and 5% levels of significance using SPSS version 16.0 (IBM Corp., United States). All the experiments were conducted thrice to validate the final outcome. Morphological and Molecular Classification of the Fungal Isolates Species identification was primarily made using morphological characters along with the measurement of taxonomically useful features.
The Arthrobotrys isolate exhibited straw-white colonies with raised concentric rings along with thin, hairy rings on PDA media. The length and width of conidia of A. thaumasia were approximately 24.58-60 and 10.15-22.88 µm, and those of conidiophores 211-446 and 2.3-5.4 µm (n = 50), respectively. Furthermore, the conidium was shaped like an inverted pear with 1-3 septa. All these characteristics matched the reported description of the fungus A. thaumasia (Supplementary Table 1 and Figure 1A), and we designated our isolate as A. thaumasia At_RK. Similarly, in the case of the Tolypocladium isolate, the colony appeared hairy and whitish cream, with the reverse yellow to pale, on PDA media. The conidium was 2-4.3 and 1.3-1.7 µm in length and width, respectively. Likewise, the length and width of conidiophores were approximately 31-44 and 1.1-2.8 µm, respectively. The phialide length and width were 4.5-8.5 and 2-3.2 µm, respectively. Finally, conidia were hyaline, smooth-walled, short cylindrical, straight and/or slightly curved, with both ends obtusely rounded. All these characteristics matched the reported description of T. cylindrosporum (Supplementary Table 1 and Figure 1B), and our isolate was designated as T. cylindrosporum Tc_RK. To further confirm the species identity, the fungi discussed earlier were characterized using two molecular markers, ITS and β-tubulin. The amplicon sizes of ITS and β-tubulin in both isolated fungi were 580 and 370 bp, respectively. Purified PCR products were sequenced and submitted to the National Center for Biotechnology Information database (Supplementary Table 2). Homology searches of the ITS and β-tubulin sequences using the BLAST program showed that the A. thaumasia At_RK sequences were 99% (accession: KT215216.1) and 97% (accession: EU977531.1) identical to earlier reported sequences of A. thaumasia, respectively. Similarly, the ITS sequence of T.
cylindrosporum Tc_RK was 99.06% identical (accession: NR_167967.1) to a previously reported sequence of T. cylindrosporum. Furthermore, we constructed evolutionary trees based on the ITS and β-tubulin sequences, which demonstrated that A. thaumasia At_RK was closest by ITS to A. thaumasia strain CBS 376.97 (accession: KT215216.1), whereas its β-tubulin sequence was close to A. thaumasia isolate 111 (accession: EU977531). Similarly, the ITS sequence of T. cylindrosporum Tc_RK was closest to T. cylindrosporum isolate TCDAs18R1A9 (accession: MT911434.1), whereas that of β-tubulin was close to T. tropicale strain IQ214 (accession: KF747166.1) and T. tropicale strain MX338 (accession: KF747190.1). The results of the phylogenetic analysis based on the ITS and β-tubulin sequences indicated that the isolates in the present study could be different geographical strains of A. thaumasia and T. cylindrosporum (Supplementary Figure 1). Additionally, the studied fungal isolates were submitted to ITCC with strain/accession numbers ITCC8969 for T. cylindrosporum Tc_RK and ITCC8970 for A. thaumasia At_RK (Supplementary Table 2). Effect of Nutrition, Temperature, and pH on Growth Rate and Trap Formation of the Fungal Colonies The selected fungal isolates cultured on different nutrient media showed clear variations in growth rate and trap formation at different time intervals. After 3 days, the growth rate of A. thaumasia was significantly highest on SNA media (2.00 mm/day), followed by CMA (1.375 mm/day) and CzMA (1.272 mm/day), compared with the other tested media (Supplementary Figure 2A). Likewise, at 6 dpi, the growth rates were significantly higher on SNA (1.432 mm/day), CMA (1.405 mm/day), PDA (1.405 mm/day), and CzMA (1.395 mm/day) media, respectively. Finally, the hyphae covered the entire Petri plate, and the growth rates of the fungus were equal in all media after 9 dpi (0.94 mm/day) and 12 dpi (0.71 mm/day).
Concurrently, sporulation was observed, and the number of spores was counted using a hemocytometer. The maximum sporulation was noticed on RBA (30.3 × 10⁴ spores/ml), followed by CzMA medium (9.1 × 10⁴ spores/ml) (Supplementary Table 3). The response to different media shows that A. thaumasia mycelia are very sensitive to salts and amino acids, because these promote trap formation at specific nutrient combinations and inhibit it partially or completely at others. There were significant differences among the different media compositions (p < 0.01). In the case of the T. cylindrosporum isolate, growth was slow in all the tested media (Supplementary Figure 2B), but PDA media comparatively showed a considerable growth rate (0.64 mm/day) at 3 and 6 dpi. Subsequently, the maximum growth rate was observed on PDA media (0.525 mm/day) and the lowest on CMA media (0.328 mm/day) after 9 and 12 dpi. Consequently, fungal sporulation was also observed in all the tested media, and the maximum sporulation was obtained on SNA media (9006.3 × 10⁴ spores/ml), followed by PDA (7203.3 × 10⁴ spores/ml) and CMA media (3427.2 × 10⁴ spores/ml) (Supplementary Table 3 and Supplementary Figure 3A). Furthermore, A. thaumasia grown on SNA media, which provided the maximum growth rate, was used to evaluate the effect of different temperatures. The data indicate that the growth rate of A. thaumasia was significantly higher at 30, 25, 20, and 15°C (0.293, 0.634, 0.61, and 0.523 mm/day) after 3, 6, 9, and 12 dpi, respectively. T. cylindrosporum grown on PDA media exhibited the quickest growth rate at 20°C after 3 and 6 dpi (0.525 and 0.465 mm/day, respectively), and the highest growth was noticed after 9 and 12 dpi (0.563 and 0.475 mm/day, respectively), as compared with other treatments (p < 0.01). The fungus could not grow at 35°C after 3 and 6 dpi but could grow in the range from 15 to 30°C (Supplementary Figure 3B).
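The growth rates above are reported in mm/day without an explicit formula; a common convention, sketched here as an assumption, is radial extension per day measured from the colony and inoculum-plug diameters:

```python
# Assumed formula (not stated in the text): radial growth rate
# = (colony diameter - inoculum plug diameter) / 2 / days elapsed.

def radial_growth_rate_mm_per_day(colony_diam_mm, plug_diam_mm, days):
    return (colony_diam_mm - plug_diam_mm) / 2 / days

# Illustrative numbers only: a 17-mm colony from a 5-mm plug after 3 days
print(radial_growth_rate_mm_per_day(17, 5, 3))  # 2.0, the SNA rate reported at 3 dpi
```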
Additionally, the combination of specific media and temperature supporting maximum fungal growth was used to evaluate the effect of different pH levels. For this, the isolate of A. thaumasia was cultured on SNA media at 30°C. The results showed that the radial growth rate at pH 6-9 (8.575 and 8.475 mm/day) was significantly faster than at the other tested pH values after 3 days. The growth recorded under different pH values ranked pH 9 > pH 6 > pH 7 = pH 8 > pH 10 > pH 5 > pH 4. Similarly, a culture of T. cylindrosporum grown on PDA medium at 20°C showed the highest growth rate at pH 6 (1.55-5.15 mm/day), followed by pH 9 and 10, and minimal growth was recorded at pH 4 (0.525-1.975 mm/day) (Supplementary Figure 3C). In vitro Evaluation of Fungal Filtrates of the Tested Isolates on Nematode Mortality The fungal filtrates (FF) of the selected isolates were found to be effective against both M. incognita J2s and C. elegans L3s compared with the control in PDB medium. FF of T. cylindrosporum Tc_RK and A. thaumasia At_RK caused 87.3 ± 6.02 and 57.7 ± 3.5% mortality of M. incognita J2s, respectively. Likewise, FF of T. cylindrosporum Tc_RK and A. thaumasia At_RK caused 64 ± 3.6 and 53.7 ± 2.3% mortality in C. elegans L3s (Supplementary Table 4). Both M. incognita and C. elegans worms exhibited normal behavior in the PDB control without any mortality. In vitro Evaluation of Direct Fungal Parasitism Against M. incognita and C. elegans Fungal parasitism against M. incognita J2s and C. elegans L3s was evaluated on water agar plates under in vitro conditions. The tested fungal isolates were found to be effective against both M. incognita and C. elegans after 3 days compared with the control on water agar plates (Supplementary Table 4). Results showed that A. thaumasia At_RK formed different types of traps to hunt the nematodes, viz., 2D adhesive networks, 3D adhesive networks, and non-constricting rings (Figure 2). It caused approximately 82 ± 3.6% parasitism of M. incognita J2s.
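Since the control worms above showed no mortality, the reported filtrate mortalities are effectively control-corrected already. As a minimal sketch, Abbott's formula, the standard correction for such assays (the text does not state one was applied), makes this explicit:

```python
# Abbott's control-corrected mortality. With the 0% control mortality
# reported in the study, the corrected value equals the observed one.

def abbott_corrected(treated_pct, control_pct):
    return (treated_pct - control_pct) / (100 - control_pct) * 100

print(abbott_corrected(87.3, 0.0))   # unchanged when controls show no deaths
print(abbott_corrected(87.3, 10.0))  # ~85.9: how nonzero control mortality would rescale it
```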
During parasitization, the fungal mycelium penetrated the nematode cuticle, developed inside the body, consumed the body contents, and then ruptured the cuticle and grew out of the body (Figure 3). The fungus mostly penetrated the nematode body along the lateral lines in both nematodes (Figures 3, 4). Similarly, the percentage of C. elegans parasitized by A. thaumasia At_RK was approximately 73 ± 4.5%. It captured C. elegans, penetrated the cuticle, grew inside the body, and then ruptured the cuticle (Figure 4). It was noticed that the fungus colonized approximately five to six M. incognita J2s but only one to two C. elegans L3s. Direct parasitism of T. cylindrosporum Tc_RK against both nematodes was not as clear as that of A. thaumasia. The capture and colonization of M. incognita by T. cylindrosporum was 65.2 ± 3.1% after 3 dpi, and that of C. elegans was 57.7 ± 3.6% (Supplementary Table 4). The attachment of fungal spores to the M. incognita cuticle was observed as the first step of the parasitism process (Figure 5). Fungal parasitization and spore attachment on C. elegans were significantly less than on M. incognita, but consumption of nematode contents was detected quite well, as shown in Figure 6. In vivo Evaluation of Selected Fungi Against M. incognita Fungal spore suspension at 1 × 10⁶ spores/ml was inoculated in the vicinity of tomato roots during transplantation. The selected fungal isolates significantly increased plant growth parameters, viz., plant length, fresh weight, and dry weight, when compared with the control plants treated with nematodes and sterile water only. The maximum plant length (68.2 cm) and weight (14.92 g) were observed in the T. cylindrosporum Tc_RK-treated samples. Also, plants treated with A. thaumasia At_RK showed an average length of 66.8 cm and an average weight of 13.4 g (Table 1). Interestingly, A.
thaumasia At_RK-treated plants showed significantly higher dry weight (3.22 g) compared with the nematode-treated plants that received no fungal suspension. Furthermore, the effect of the selected fungi on nematode disease burden was determined in tomato plants inoculated with M. incognita. Results showed that application of the fungal suspensions caused a significant reduction in nematode infection compared with control. Average galling was 29 ± 16.2 in A. thaumasia-treated and 43.4 ± 12.8 in T. cylindrosporum-treated plants, compared with approximately 142 ± 5.4 in control, a decrease in galling intensity of 80 and 69%, respectively. Corroborating this, the number of egg masses was approximately 18.6 ± 6.6 and 7.2 ± 3.8 in A. thaumasia- and T. cylindrosporum-treated plants, respectively, compared with 50.6 ± 4.5 in control plants. Likewise, the number of eggs per egg mass was 325.2 ± 31.1 and 221.2 ± 24.4 in A. thaumasia- and T. cylindrosporum-treated plants, respectively, compared with 583.4 ± 40.7 in control plants. As a consequence, reductions of approximately 63 and 86% in egg masses and 44 and 62% in eggs per egg mass were obtained in A. thaumasia- and T. cylindrosporum-treated plants, respectively. Ultimately, declines of approximately 80 and 95% in nematode MF were observed in A. thaumasia- and T. cylindrosporum-treated plants, respectively (Table 1). DISCUSSION RKNs, Meloidogyne spp., are among the most damaging endoparasites, with a wide range of hosts, resulting in huge losses in various crops worldwide. Climatic fluctuations have resulted in the evolution of new races of nematodes that overcome previous sources of resistance and cause disease. To date, it has been very difficult to recommend a promising nematode management tool that is effective, environmentally safe, economical, and harmless to non-targets.
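The reduction percentages reported in the greenhouse results above can be cross-checked from the reported means. How MF is computed is not spelled out here; taking it as total eggs (egg masses × eggs per egg mass) over the 800 inoculated J2s is an assumption, but it reproduces the stated 80% and 95% declines:

```python
# Cross-checking the reported reduction percentages from the means above.

def pct_reduction(treated, control):
    return (1 - treated / control) * 100

control = {"galls": 142.0, "egg_masses": 50.6, "eggs_per_mass": 583.4}
at_rk   = {"galls": 29.0,  "egg_masses": 18.6, "eggs_per_mass": 325.2}  # A. thaumasia
tc_rk   = {"galls": 43.4,  "egg_masses": 7.2,  "eggs_per_mass": 221.2}  # T. cylindrosporum

def mf(d, inoculated_j2s=800):
    """Assumed MF: total eggs recovered per J2 inoculated."""
    return d["egg_masses"] * d["eggs_per_mass"] / inoculated_j2s

for name, d in [("A. thaumasia At_RK", at_rk), ("T. cylindrosporum Tc_RK", tc_rk)]:
    print(name,
          round(pct_reduction(d["galls"], control["galls"])),  # 80 and 69
          round(pct_reduction(mf(d), mf(control))))            # 80 and 95
```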
In this regard, biocontrol agents have been preferred due to their potential against target parasitic nematodes. NTFs hold great potential as biocontrol agents, as they capture nematodes by producing trapping devices from their vegetative mycelia and also produce various metabolites as nematicidal weapons with antagonistic activity against infective juveniles. To understand their importance in integrated pest management programs and to enrich the options of fungi to be used as biocontrol agents, the present study was undertaken to characterize nematophagous fungal isolates of Arthrobotrys and Tolypocladium, which were isolated using C. elegans and M. incognita as bait. These two isolates were used for species identification and evaluation against C. elegans and M. incognita. For morphological characterization, the structure, nature, and color of the fungal colonies, along with the measurement of taxonomic features such as the size and shape of conidia, conidiophores, and phialides, as well as the presence of chlamydospores, were considered. In the case of the Arthrobotrys isolate, the conidia were inverted pear-shaped, 1-3 septate, and 24.58-60 × 10.15-22.88 µm in size. Furthermore, the trapping device is not a constricting ring but an adhesive network, which matches the description of A. thaumasia (Wang X. et al., 2017). Thus, both the morphological characteristics and the nematophagous behavior of our isolate were in conformity with the previous description of A. thaumasia, and it was designated A. thaumasia At_RK. It is important to reiterate that this isolate was obtained from dead M. incognita, unlike the other geographical strains reported. This is the first report of the presence of A. thaumasia in India.
[Table residue removed: GC-MS volatiles of the two fungi (e.g., methyl-hexadecanol, octadecenal, ethenyloxy-octadecane, ethyl-3-methyl-benzene, decane, undecane, methylene-1-indene, dodecane, bis-(dimethyl-ethyl)-phenol, hexadecanol) and non-volatile metabolites (e.g., cyclo(L-Pro-L-Leu)), with retention times, relative abundances, and previously reported bioactivities; running footer: Frontiers in Microbiology | www.frontiersin.org.]
In the case of the Tolypocladium isolate, conidia were hyaline, smooth-walled, short cylindrical, straight or slightly curved, with both ends obtusely rounded, one-celled, 2-4.3 × 1.3-1.7 µm, adhering to the phialide tips in slimy heads. Phialides were 4.5-8.5 × 2-3.2 µm in size and consisted of an inflated ellipsoidal to cylindrical base tapering abruptly to a thin neck, which often gives a bent appearance. All these features corresponded with the description of T. cylindrosporum (Bissett, 1983; Samson and Soares, 1984), and hence, we designated our isolate as T. cylindrosporum Tc_RK, which is also the first report from India. It is also important to emphasize that this isolate was obtained from dead nematodes, unlike the isolate used for comparison, which was from insects (Samson and Soares, 1984).
Morphological identification of the selected fungi was further supported by molecular characterization using two molecular markers. In the case of ITS, the sequence of A. thaumasia At_RK showed high similarity to the already reported strain of A. thaumasia available in GenBank. Likewise, the sequence of T. cylindrosporum Tc_RK also showed maximum identity to the already reported strain of T. cylindrosporum. These findings confirmed the identity and presence of A. thaumasia and T. cylindrosporum in Indian rhizospheric soils. Analysis of molecular variation and maximum composite likelihood analysis using the ITS and β-tubulin markers revealed a considerable degree of differentiation between geographical isolates. The results also demonstrate that the ITS and β-tubulin markers are useful for phylogenetic analysis and classification of Arthrobotrys and Tolypocladium species. Li et al. also studied phylogenies of NTFs deduced from sequence analyses of 5.8S rDNA, 28S rDNA, and β-tubulin genes, redefined the systematic classification of nematophagous fungi, and amended the generic analysis of NTFs based on the types of trapping devices. Furthermore, these isolates were studied under different media, incubation temperatures, and pH levels to analyze growth rates and sporulation characteristics. SNA media at pH 9 and 30°C showed the best growth rate, whereas RBA media supported high sporulation in the case of A. thaumasia At_RK. The results reported herein are in line with those of Wang F.H. et al., who obtained the highest growth at an optimal temperature of 30°C for the same fungus. In contrast, Fernandez et al. showed that A. oligospora exhibited the best growth rate at 20°C, while another fungus, Duddingtonia flagrans, grew at 10°C and formed trapping nets more slowly at this temperature when induced by nematodes. The Indian isolate of A.
thaumasia At_RK, which grew optimally at 30°C, could be advantageous as a biological control agent in subtropical environments that remain near 30°C for longer periods. Additionally, the results of the current study match the findings of Wang F.H. et al., who reported optimum growth of A. thaumasia on media at pH 9 and 10. Contrastingly, there are no previous reports on the impact of different media, incubation temperatures, and pH levels on T. cylindrosporum growth and sporulation; the present study showed the best growth rate of T. cylindrosporum Tc_RK on PDA media at pH 6 and 20°C, whereas SNA medium supported a high sporulation level. These optimized conditions for maximum fungal sporulation and growth could be highly useful for large-scale production of these fungi in the future. Most importantly, the utility of both fungi was primarily demonstrated by their ability to parasitize M. incognita and C. elegans under in vitro conditions. The selected fungal isolates were found to be effective against both assessed nematodes after 3 days compared with control. Our results revealed that A. thaumasia showed significantly higher parasitism compared with T. cylindrosporum and water agar control plates (p < 0.01). Direct parasitism of the Indian isolate of A. thaumasia At_RK against M. incognita was comparatively higher (82%) than the efficacy obtained with the Korean isolate of A. thaumasia Nema-1 (55%). Finer details, such as the different types of traps and penetration in the region of the lateral lines during A. thaumasia parasitism of M. incognita J2s, were previously lacking and have been documented for the first time in the present study. The culture filtrate of T. cylindrosporum Tc_RK provided 87.3% mortality of M. incognita compared with control. So far, to the best of our knowledge, this is the first report of the ability of T. cylindrosporum to parasitize M. incognita.
Interestingly, the results revealed that mortality of C. elegans caused by these two local fungal isolates was comparatively lower, although A. thaumasia At_RK was isolated from dead C. elegans. The lower mortality of C. elegans may be due to its fast movement, which could have allowed it to escape immobilization and paralysis, and/or the body secretions of M. incognita could have attracted the trapping fungi relatively more. During the in vivo evaluation of the isolated fungi against M. incognita-infected tomato, there was a significant increase in plant growth parameters compared with control plants infected with nematodes only. The tested isolates did not promote growth parameters compared with healthy control, but they enhanced plant growth in nematode-infected plants, a sign of the protection provided against the nematodes. Further, application of the fungal filtrate of A. thaumasia At_RK caused a significant reduction in nematode disease burden per plant compared with control. Similar observations were recorded by Park et al. using the culture filtrate of A. thaumasia on M. incognita. Likewise, application of the fungal suspension of T. cylindrosporum Tc_RK in the present study caused a significant reduction in the number of egg masses and eggs per egg mass. This ultimately led to a decline in the nematode MF of up to 94.5%. This is the first report on the effect of T. cylindrosporum on nematode fecundity. The successful nematode mortality brought about by both the tested nematophagous fungi led us to analyze their metabolite profiles, particularly the volatile and non-volatile chemical compounds, using GC-MS and UPLC-QToF-ESI-MS to identify the metabolites responsible for the nematicidal activity. The results showed that A. thaumasia At_RK secreted both volatile and non-volatile compounds that could be responsible for its nematicidal activity.
Two volatile compounds, 2-methyl-1-pentanethiol and trimethyl-heptadien-4-one, were identified at higher concentrations in this isolate. The activity of these compounds as repellents, odorants, insecticides, and nematicides has already been reported by Huang et al. and Mravkov et al. Similarly, the compounds dodecadienal, undecane, and nerolic acid observed in the present metabolite profiles have already been reported as nematicidal, antibacterial, and antifungal compounds. Correspondingly, A. thaumasia At_RK secreted non-volatile compounds such as paganin, talathermophilin, and dactylarin, found in other nematophagous fungi, A. entomopa, Talaromyces thermophilus, and Dactylaria lutea (Degenkolb and Vilcinskas, 2016; Wang X. et al., 2017). Interestingly, none of these compounds were detected in an analysis of 100 isolates belonging to the three species A. oligospora, A. thaumasia, and A. musiformis during their interaction with C. elegans. Thus, this is the first general metabolite profiling for A. thaumasia isolated from dead M. incognita, and hence, it could be promising for commercial exploitation in the future. Likewise, our study with the Indian isolate of T. cylindrosporum Tc_RK showed secretion of the volatile compounds methyl-hexadecanol, hexadecanol, decane, dodecane, and bis-(dimethyl-ethyl)-phenol, which have been reported to have antagonistic activity against nematodes. Other chemical compounds with nematicidal activities, such as ethyl-3-methyl-benzene and undecane, were also observed in the Indian isolate of T. cylindrosporum Tc_RK. These compounds were described to have nematicidal activities in the culture filtrate of Trichoderma harzianum and leaf extracts of Azadirachta indica (Rady, 2018). On the other hand, T.
cylindrosporum Tc_RK also secreted non-volatile compounds with activity against nematodes, for instance, tolypocladenols, tolypyridones A and B, and pyridoxatin, which were reported as metabolites from an endolichenic isolate of T. cylindrosporum, Acremonium spp., and Trichoderma hamatum. In addition, the tested isolate in the present study also secreted compounds such as terpendole E, 4-chloro-2-phenylphenol, destruxin A, and acetamido-6-anthraquinone, which were reported to have nematicidal activity (Ohri and Pannu, 2010) and were not previously known to be present in T. cylindrosporum. It is also important to mention here that T. cylindrosporum Tc_RK, isolated from dead C. elegans, could be promising for nematode management due to its secretion of various novel metabolites with nematicidal properties. CONCLUSION Despite several reports on the efficacy of nematophagous fungi as biocontrol agents, the present investigation established an in-depth study of the Indian isolates of A. thaumasia and T. cylindrosporum for RKN management. Both fungi are reported for the first time from India. Furthermore, this is the first report showing the potential of T. cylindrosporum for RKN management. Besides, this is also a report establishing the presence of nematicidal compounds in both fungi using metabolite profiling. In view of the potential demonstrated for both the selected fungi against M. incognita, they can be further explored for commercial product development. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Mutation of RORγt reveals a role for Th17 cells in both injury and recovery from renal ischemia-reperfusion injury. To investigate Th17 cells in the setting of acute kidney injury (AKI), the master regulator of Th17 cell differentiation, RORγt, was mutated in Lewis rats using CRISPR/Cas9 technology. In response to 40 min of bilateral renal I/R, Rorc-/- rats were resistant to injury relative to wild-type Rorc+/+ rats. This protection was associated with inhibition of IL17 expression and reduced infiltration of CD4+ cells, CD8+ cells, B cells, and macrophages. To evaluate the effect of Th17 cells on repair, ischemia was increased to 50 min in Rorc-/- rats. This maneuver equalized the initial level of injury in Rorc-/- and Rorc+/+ rats 1 to 2 days post I/R based on serum creatinine values. However, Rorc-/- rats, but not Rorc+/+ rats, failed to successfully recover renal function and had high mortality by 4 days post I/R. Kidney tubules from Rorc+/+ rats showed evidence of repair by day 4 post I/R, while Rorc-/- rats showed persistent necrosis and elevated cell proliferation. Adoptive transfer of CD4+ cells from the spleen of Rorc+/+ rats or supplementation of exogenous rIL17 by osmotic mini-pump improved renal function and survival of Rorc-/- rats following 50 min of I/R. This was associated with a relative decrease in the number of M1-type macrophages and a relative increase in the percentage of T-regulatory cells. Taken together, these data suggest that Th17 cells have both a deleterious and a beneficial role in kidney injury and recovery, contributing to early post-ischemic injury and inflammation but possibly also being critical in the resolution of inflammation during kidney repair.
import { GroupContainer, FormInputContainer, FormInputLabel } from "./form-input.styles";

type InputProps = {
  name: string,
  label: string,
  type: string,
  value: string,
  onChange: (event: React.ChangeEvent<HTMLInputElement>) => void;
}

const FormInput = ({ onChange, label, name, type, value }: InputProps) => (
  <GroupContainer>
    {/* Pass `value` so the input is controlled and stays in sync with the state
        that also drives the label's "shrink" class below */}
    <FormInputContainer onChange={onChange} name={name} required type={type} value={value} />
    {label ? (
      <FormInputLabel className={value.length ? "shrink" : ""}>
        {label}
      </FormInputLabel>
    ) : null}
  </GroupContainer>
);

export default FormInput;
#ifndef __AP_TIMERPROCESS_H__
#define __AP_TIMERPROCESS_H__

#include "PeriodicProcess.h"
#include "../Arduino_Mega_ISR_Registry/Arduino_Mega_ISR_Registry.h"

// default to 1kHz timer interrupt
#define TIMERPROCESS_PER_DEFAULT (256-62) // 1kHz

#define AP_TIMERPROCESS_MAX_PROCS 4

class AP_TimerProcess : public AP_PeriodicProcess
{
public:
    AP_TimerProcess(uint8_t period = TIMERPROCESS_PER_DEFAULT);
    void init(Arduino_Mega_ISR_Registry *isr_reg);
    void register_process(ap_procedure proc);
    void set_failsafe(ap_procedure proc);
    void suspend_timer(void);
    void resume_timer(void);
    static void run(void);
protected:
    static uint8_t _period;
    static ap_procedure _proc[AP_TIMERPROCESS_MAX_PROCS];
    static ap_procedure _failsafe;
    static uint8_t _pidx;
    static bool _in_timer_call;
    static bool _suspended;
};

#endif // __AP_TIMERPROCESS_H__
from flask_restful.reqparse import RequestParser, Argument

# Parser for Query
query_parser = RequestParser()
query_parser.add_argument(Argument(
    name='query',
    required=True,
    type=str,
    help="This argument is always required in order to get a response from the bot."
))
query_parser.add_argument(Argument(
    name='Authorization',
    required=False,
    type=str,
    location='headers',
    help="Authorization, currently only Waldur API token is supported."
))

# Parser for Teach
teach_parser = RequestParser()
teach_parser.add_argument(Argument(
    name='statement',
    required=True,
    type=str,
    help="Statement that is a possible response to 'previous_statement'."
))
teach_parser.add_argument(Argument(
    name='previous_statement',
    required=True,
    type=str,
    help="Statement to which 'statement' is a possible response."
))

# Parser for Authenticate
auth_parser = RequestParser()
auth_parser.add_argument(Argument(
    name='token',
    required=True,
    type=str,
    help="Waldur API token"
))
# app/users/models.py
from django.db import models


class Guest(models.Model):
    username = models.CharField(max_length=30, default="user1")
    password = models.CharField(max_length=30, default="iksarman")
    first_name = models.CharField(max_length=30, default="Joseph")
    last_name = models.CharField(max_length=30, default="Dulapp")
    email = models.CharField(max_length=30, default="Nebrasa")
    phone_number = models.CharField(max_length=35, default=" ")
    address = models.CharField(max_length=60, default='8009 39th AVE NE')
    city = models.CharField(max_length=30, default='Seattle')
    state = models.CharField(max_length=30, default='Washington')
    country = models.CharField(max_length=30, default='United States')
    zip = models.IntegerField(default=98115)


class Host(models.Model):
    username = models.CharField(max_length=30, default="firstTab.txt")
    password = models.CharField(max_length=30, default="<PASSWORD>")
    first_name = models.CharField(max_length=30, default="Joseph")
    last_name = models.CharField(max_length=30, default="Dulapp")
    email = models.CharField(max_length=30, default="Nebrasa")
    phone_number = models.CharField(max_length=35, default=" ")
    address = models.CharField(max_length=60, default='8009 39th AVE NE')
    city = models.CharField(max_length=30, default='Seattle')
    state = models.CharField(max_length=30, default='Washington')
    country = models.CharField(max_length=30, default='United States')
    zip = models.IntegerField(default=98115)
Designing a shared freight service intelligence platform for transport stakeholders using mobile telematics

Internet of Things (IoT) technology transforms freight transport operations by adopting novel data-driven services and enables information sharing among actors involved in global transport chains. Mobile telematics represents emerging IoT technologies for global forwarding increasingly applied to full loads conveyed by freight transport assets (FTAs) (e.g., ISO containers) facilitating intelligent services. In this light, telematics-enabled FTAs support freight transport operations utilized by individual stakeholders in three overarching service dimensions: transport management, fleet management, and risk management. This topic is, however, understudied by information systems (IS) research and service science. For this reason, we establish a design science research project, conceptualize a shared Freight Service Intelligence Platform (FSIP), and introduce freight service intelligence as an interdisciplinary research field. To this aim, we first review related literature, interview 14 transport stakeholders, and theorize six meta-requirements. Second, we propose five design principles that indicate how the meta-requirements may be associated. Third, we develop a web-based prototype application to instantiate the proposed design principles comprising performance analytics, anomaly detection, risk assessment including prediction, data exchange, communication, and IS integration. Subsequently, we evaluate the application with six transport stakeholders and logistics software vendors. Finally, we conclude with a discussion on the implications of an emerging topic addressed by this paper.

Introduction

Freight transportation is a key process for logistics services and represents a vital element in the intertwined economic growth of societies around the globe.
Today, environmental awareness, transport risks, a shortage of truck drivers, high operating costs, and evolving legal requirements for the compliant movement of materials and finished goods constitute significant challenges for global supply chains. For this reason, freight transport operations require new technological approaches to achieve efficient management of intermodal transport processes. Information flows in supply and transport chains have consequently been a central aspect of data-driven transport operations in recent years. Likewise, the field of collaborative electronic business in logistics has increasingly leveraged "business-to-business interactions facilitated by the internet" (Johnson and Whang 2002). This has led to the emergence of cloud computing and the Internet of Things (IoT), revealing new approaches for secure information sharing among the actors involved in logistics flows, interoperable logistics systems, and smart logistics based on the use of multi-agents and blockchain technologies. Against this backdrop, Giannopoulos has advocated to "increase the intelligence of freight transport operations and make it available to all players" based on integrated technologies associated with information systems (IS). Although IoT technologies are widely used in logistics systems to enable real-time tracking, quality management, and control of supply chain operations, the application of mobile telematics and its service capabilities in the forwarding domain is still a nascent field. To be more precise, loading units (e.g., cargo or products on pallets) require physical, standardized freight transport assets (FTAs) (e.g., ISO containers, swap bodies, intermodal trailers) across the road, rail, and sea transport modes to facilitate freight services.
Herein, freight service intelligence is positioned within the concepts of smart cargo, autonomous freight transportation (Sternberg and Andersson 2014), and intelligent goods services (Jevinger and Olsson 2021) at the edge of an integrated platform for sharing information from freight operations among transport stakeholders. Thus, for the purpose of this paper, we understand freight service intelligence as the capabilities of mobile telematics applied as an IoT enabler to FTAs, based on the concept of smart connected products (Porter and Heppelmann 2014). In this context, telematics-enabled FTAs represent a boundary object in intelligent freight service systems and facilitate information processing and autonomous decision-making associated with intelligent resources (e.g., tagged goods). Looking at freight operations more closely, however, reveals the need to integrate information into collaborative transport management, yielding improved visibility and accuracy of decision-making for operations, as suggested by Okdinawati et al. Even though IoT technologies used in freight transportation are promising, their application is largely limited to end-to-end monitoring, particularly of drivers on the road, and decision-making; investigations from a stakeholder perspective are scarce. This is surprising, since related IoT services that build on the data obtained from FTAs are used to describe the current conditional state of the loaded cargo, enabling advancements in the collaboration among multiple transport stakeholders via a cloud-based platform, as investigated by Gnimpieba et al. This fundamental approach is likewise supported by current European legal initiatives that strive for the establishment of a cloud-based platform among the stakeholders to exchange electronic freight transport information (eFTI) in the digital transport ecosystem.
Following this train of thought, IoT combined with cloud computing contributes to freight transport operations through value co-creation, as examined in the shipping industry. In essence, interactions of transport stakeholders via product-service platforms based on IoT technology result in value co-creation that draws on the service-dominant logic (Balaji and Roy 2017), providing new research ground for telematics-enabled FTAs. This idea builds on emerging concepts discussed as "smart service platforms" that require design knowledge to support interactions among different stakeholder groups for mutual benefit. Considering the different stakeholders and tasks in freight transport operations in a fragmented transport market that makes data sharing and exchange difficult, the concept of a shared Freight Service Intelligence Platform (FSIP) represents a foundation for IoT services in three operational dimensions: transport management (e.g., handling of transport orders), fleet management (e.g., use of the physical freight equipment), and risk management (e.g., probability of service issues such as delays). This situation addresses a real-world problem space requiring design knowledge to explore innovative solutions that contribute to design science. To cater for these issues, we consider a stakeholder-oriented approach to exploring the design of a FSIP based on telematics-enabled FTAs appropriate to lead us to a novel solution, grounded in the underlying need to share information among the actors participating in freight operations. Furthermore, this idea sheds light on an emerging topic in the sphere of smart and connected logistics, providing benefits to (a) explore freight service intelligence capabilities, (b) uncover transport stakeholder requirements to manage shared information associated with freight transport operations, and (c) specify the interactions of stakeholders yielding new forms of value co-creation.
Currently, there is no guidance for designing a software platform that would address the multifaceted requirements and aspects of shared information for transport users based on freight service intelligence (Saoud and Bellabdaoui 2021). For instance, a shared (smart) platform connected with telematics-enabled FTAs could assist joint performance monitoring, identify critical situations for further investigation, derive operational measures for improvements in the form of a business intelligence dashboard, and support automated decision-making for individual stakeholders. For this purpose, we investigate the following research question.

Mobile telematics for freight operations

Transport objects (e.g., loading units, ocean containers, and road trucks) are presently equipped with radio frequency identification (RFID), sensor-tracking devices, and telematics units enabling IoT services to optimize transport operations. As a result, the objects gain intelligent characteristics encompassing identification, localization, communication, sensing, or logical functions and enable innovative IoT services in supply chain management. In the context of freight forwarding, IoT technologies are increasingly applied in the form of telematics technologies and operate as a gateway to process information from transport assets (e.g., containers providing a GPS-based position for localization) and the conditional state of the goods loaded (e.g., temperature), enabling decentralized control of transport operations. Telematics particularly concerns driver monitoring and vehicle efficiency in trucks using integrated systems. Beyond road trucks, the information-processing capabilities gained through mobile telematics contribute to an intelligent system and allow FTAs to make decisions, which often refers to "tagged goods" discussed in the context of decentralized freight intelligence (Sternberg and Andersson 2014).
However, in this paper, we focus on telematics-enabled intelligence of freight services, understood as moving full freight loads (e.g., a full container load) comprising loading units (e.g., pallets) associated with standardized FTAs, in conditions appropriate to their sensitivity and provided by transport operators (e.g., carrier, forwarder). Accordingly, full loads forwarded by transport assets do not require tagged goods (e.g., RFID) to facilitate freight intelligence, since telematics technologies represent intelligent resources capable of processing information and making decisions. Telematics combines telecommunications and information technology, facilitates the collection, processing, and supply of real-time, actionable sensor information, and functions as an integrated IoT enabler with capabilities to automate freight operations of FTAs in remote positions without power supply (e.g., Schulte 2013; Sternberg and Andersson 2014). To this extent, scholars have investigated telematics applied in the transport industry to support fleet operations through data insights from trucks addressing cost optimization (e.g., monitoring fuel consumption), driver workflows (e.g., communicating driver instructions), and compliance aspects (e.g., identifying risky driver behavior) (Mikulski 2010). Likewise, the application of telematics for FTAs reveals IoT service capabilities based on transport monitoring, tracking, and automated notifications in case of deviations in conditions (e.g., temperature), particularly for the food industry. The real-time information and automation of telematics-enabled FTAs allow transport stakeholders to achieve visibility and optimization of freight operations based on the gained data, yielding data-driven business models in logistics. An example is an automated billing service offered to customers once the FTA equipped with telematics has arrived in a geofenced area.
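The geofence-triggered billing service described above can be illustrated with a minimal sketch. All names, coordinates, and the 2 km default radius are illustrative assumptions rather than any telematics vendor's actual API; the haversine formula computes the great-circle distance between the GPS fix reported by the device and the geofence center.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes (WGS84, spherical approximation)."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def arrival_event(position, fence_center, fence_radius_km=2.0):
    """Return a billing-trigger event if the FTA's reported position lies
    inside the destination geofence, otherwise None."""
    distance = haversine_km(*position, *fence_center)
    if distance <= fence_radius_km:
        return {"event": "fta_arrived", "distance_km": round(distance, 3)}
    return None
```

A transport management system could evaluate `arrival_event` on each incoming position report and, on the first non-None result, raise the invoice automatically.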
At the same time, telematics technologies have shown capabilities to detect freight integrity and damage to shipments, supporting the management of risks for shippers, consignees, transport operators, and insurance companies involved in freight operations (Salant and Gershinsky 2019; Chaba 2021). Having said this, capturing and understanding the same information along the transport chain is pivotal to identifying root causes and taking proper measures for optimizing processes. Although telematics is predominantly applied to achieve greater visibility of FTAs in freight transport systems, the emerging capabilities of telematics in the transport market indicate new applications for advanced fleet management that go beyond vehicle-integrated telematics systems (e.g., trucks) (Huk and Kurowski 2021). For instance, the telematics technology provider Mecomo offers solar-based telematics hardware for intermodal fleet operations to be mounted on FTAs, which interacts with cloud-based software to integrate the data for connecting and automating freight processes. Integrated data is therefore collected and shared for the purpose of monitoring, controlling, and tracking, yielding more efficient decision-making and transport logistics performance managed by platform users. As a consequence, telematics-enabled FTAs that communicate with their environment and make partially autonomous decisions indicate data-driven intelligence of transport assets to facilitate shared freight services at the intersection of smart products, IoT, and cloud computing, as demonstrated in academia.

Stakeholder-oriented and shared freight service intelligence

"Telematics is the tool that visualize the actual course of transportation units" (Huk and Kurowski 2021). Looking at the freight forwarding industry, this technology provides a variety of functions, primarily with increasing value applied to road freight operations for advanced fleet management.
For instance, commercial manufacturers offer vehicle-integrated telematics systems delivering services to make truck and vehicle status visible (e.g., speed), support fleet operations (e.g., predictive maintenance), enable transport order execution, and optimize consumption to reduce emissions (Mikulski 2010; Osinska and Zalewski 2020). Furthermore, the emergence of telematics capabilities for road freight has led to fleet management systems that "() consists in data collecting, processing, transmitting, and analyzing within three subsystems: a data acquisition subsystem, a data processing subsystem, and a subsystem for displaying contents to users" (p. 60). Considering the sensitivity of the freight or goods being delivered, intermodal transportation units that operate remotely without power supply, such as trailers, swap bodies, and containers, are increasingly equipped with telematics technologies to achieve augmented visibility and optimized operations of the freight equipment in use (Hajdul and Kawa 2015). Telematics-enabled FTAs therefore represent boundary objects that can be tracked, traced, and monitored via the internet following the concept of IoT (Schoenberger 2002), with capabilities to sense, analyze data, and execute specific tasks through information sharing and synchronized decisions, and thus follow the concept of 'smart service systems'. Since the telematics hardware remains permanently installed in transport units, the advantage is the continuous availability of information and communication technology (ICT) acting as a gateway that collects and transmits data in association with wireless sensor network nodes (e.g., RFID tags attached to loading units such as pallets). Based on this "mobile" characteristic, the application of telematics and its sensor capabilities has been investigated for intermodal freight processes of containers in the maritime and rail industry (e.g., Mahlknecht and Madani 2007).
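The quoted three-subsystem view of a fleet management system (acquisition, processing, display) can be sketched as a minimal pipeline. The function names, the filter criterion, and the dashboard format are illustrative assumptions, not drawn from any commercial system.

```python
def acquire(raw_reports):
    """Data acquisition subsystem: keep only reports carrying a valid GPS fix
    (an illustrative plausibility filter)."""
    return [r for r in raw_reports if r.get("lat") is not None and r.get("lon") is not None]

def process(reports):
    """Data processing subsystem: derive a per-asset summary
    (here simply the most recent temperature reading)."""
    summary = {}
    for r in reports:
        summary[r["asset_id"]] = {"last_temp_c": r.get("temperature_c")}
    return summary

def display(summary):
    """Display subsystem: render one plain-text dashboard line per asset."""
    return [f"{asset}: last temperature {s['last_temp_c']} °C"
            for asset, s in sorted(summary.items())]
```

In a real system each stage would of course be a distributed component; the sketch only shows how the three responsibilities separate cleanly.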
That is, a battery- or solar-powered telematics device with built-in sensors gathers data in its direct environment, grouped according to monitoring purpose into position, temperature, humidity, acceleration, light, shock, and tamper evidence (e.g., door status). Given the emerging data-driven service opportunities of FTAs, telematics represents a promising technology for physical transport assets in combination with IoT resources and Big Data, since the data collected by the sensors and communicated in real time helps to increase transparency (e.g., location of the transport asset), ensure freight integrity (e.g., real-time event notifications in case of security issues or temperature deviations), and optimize fleet equipment management for transport operators (e.g., prediction of maintenance services). The spectrum of digital service capabilities facilitated by telematics thus contributes to collaborative freight operations by sharing information using cloud infrastructure (Saoud and Bellabdaoui 2021) and fosters the development of multimodal intelligent transport systems. Typically, freight services are provided by forwarders, carriers, and logistics service providers to customers. This group of transport operators offers transport services focusing on the economic use of transport equipment (e.g., transport costs, high load utilization) to survive in a heterogeneous and competitive transport market. Speaking of IoT-enabled services in freight ecosystems, a variety of services from telematics-enabled FTAs can be offered to additional stakeholders participating in transport operations, namely shippers, consignees, and insurance companies. Naturally, shippers have an interest in understanding the tradelane performance of FTAs to maintain service quality based on the full-load transport orders transmitted to transport operators.
Likewise, consignees benefit from services to estimate the time of arrival (ETA) and support communication in case issues arise. Addressing the sensitivity of loaded freight in vulnerable transport chains, insurance companies seek to understand freight transport risks affecting transport service quality and, consequently, the cost of goods. Since all stakeholders operate along the same line of freight transport activities, we infer that telematics-enabled FTAs intermediate data-driven freight services in three overarching dimensions for the actors involved: transport management, addressing the handling of full-load orders; fleet management, to achieve equipment efficiency; and risk management, encompassing the prevention of critical impacts to the freight, including order performance, tamper-proof deployment of FTAs, and the handling of claims among the stakeholders. For this reason, freight service intelligence invites further exploration of shared information services, which encompasses different definitions and concepts addressed by scholars, including intelligent cargo, smart goods, smart freight, intelligent goods, and intelligent packaging. The definitions have emerged over the years from an IoT perspective, yielding varying concepts. For instance, the European Commission describes intelligent cargo as implying that "() goods become self-, context- and location-aware as well as connected to a wide range of information services" (European Commission 2008, p. 8). Thus, FTAs connected via the internet support freight intelligence that builds on the features of intelligent products proposed by McFarlane et al. Based on the characteristics of McFarlane et al., Lumsden and Stefansson (2007, p. 7) and Meyer et al.
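The ETA service mentioned for consignees can be illustrated with a deliberately naive sketch: remaining route distance divided by a rolling average speed from recent telematics reports, padded by a safety buffer. The function name, the buffer factor, and the input shape are illustrative assumptions; production ETA models would also account for rest periods, traffic, and mode changes.

```python
def estimate_eta_hours(remaining_km, recent_speeds_kmh, buffer_factor=1.1):
    """Naive ETA: remaining distance / rolling average speed, times a buffer.

    remaining_km      -- route distance still to cover
    recent_speeds_kmh -- speeds from the latest telematics reports
    buffer_factor     -- illustrative safety margin (1.1 = +10 %)
    """
    valid = [s for s in recent_speeds_kmh if s > 0]
    if not valid:
        raise ValueError("no positive speed observations available")
    avg_speed = sum(valid) / len(valid)
    return remaining_km / avg_speed * buffer_factor
```

For example, 400 km remaining at a steady 80 km/h yields 5 hours before the buffer is applied.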
further describe the capabilities of intelligent products as possessing a unique identity, communicating with the environment, storing data about itself, deploying a language to display its features, production requirements, etc., and participating in or making decisions relevant to its own routing. Sternberg and Andersson explain that goods that process information and make decisions are per se viewed as intelligent if tagged with RFID or barcodes facilitating storage and identification, but not on an item level (e.g., box, parcel). This approach likewise applies to physical FTAs with information-processing capabilities enabled by ICT systems (e.g., sensor-tracking technologies) and associated with "tagged goods" for decentralized freight intelligence systems (Sternberg and Andersson 2014). From that perspective, Jevinger and Olsson suggest five types of service capabilities enabling intelligence that can correspondingly be applied to transport assets equipped with telematics to realize freight service intelligence: metadata information, condition monitoring, position monitoring, shipment integrity, and system autonomy. In Table 1, we present freight service intelligence capabilities derived from intelligent goods services, provide corresponding exemplary services of telematics-enabled FTAs, assign the overarching service category for freight transport operations, and state the group of transport stakeholders applying the services. The overlap of service intelligence assigned to different stakeholder groups indicates shared value from the data insights gained from virtualized FTAs equipped with IoT. More precisely, telematics-enabled FTAs become context-aware as they can sense, communicate, act, interact, and exchange data, information, and knowledge.
Therefore, cloud computing is applied to support the virtualization of transport chains, which has been demonstrated especially for food supply chains due to the natural sensitivity of the goods, based on smart connected objects operating in a dynamic environment. In essence, transport stakeholders involved in freight transportation benefit from shared information for collaborative decision-making, yielding advanced operations enabled by IoT services. Focusing on the different activities and responsibilities of freight transport tasks in a complex environment, we suggest a stakeholder-oriented approach that allows us to understand stakeholders' requirements and information needs. This helps to identify uniform design knowledge for a shared platform based on virtual telematics-enabled FTAs supporting freight operations collaboratively, as intended by this paper.

Design science research approach

To address our research goal and enable the development of a novel software platform supporting transport stakeholders through shared freight service intelligence, we apply a Design Science Research (DSR) method. DSR provides a structured method for the development of artifacts, from the identification of a problem to implementation and application. For this reason, we use the approach of Sonnenberg and vom Brocke, who frame the phases Identify Problems, Design, Construct, and Use and suggest an evaluation between each phase. Following this approach, we completed the steps to justify our research problem, derive meta-requirements and design principles, and arrive at the ex-ante evaluation (i.e., first cycle) of a developed FSIP based on the use of mobile telematics in the freight forwarding domain (Fig. 1, colored gray).
From the application of the method, we aim at providing a relevant solution for the problem identified in the introduction section by applying a scientific approach, deriving generalized design implications for the IS research discipline according to Gregor and Jones, and presenting a design foundation for construction in subsequent iterations. Implementation in the real world consequently facilitates ex-post evaluations.

Data collection

Based on the problem definition presented in the introduction section, we collected data from the scientific literature and interviews with professionals from forwarding practice in Germany. Since the design search process requires flexibility in uncovering the needs and preferences of users working in the freight transportation industry, we decided to conduct a semi-structured interview study (Eisenhardt 1989). To this end, we selected four different freight service stakeholder groups from the side of shippers, consignees, transport operators, and insurance companies based on theoretical relevance and on our scientific network and direct access to potential practitioners. We decided to interview at least two professionals from each stakeholder group to ensure that our results represent generalizable insights. From the identified requirements, we theorized meta-requirements, derived design principles for meeting those meta-requirements, and developed a web application that implements the proposed design principles as an instantiated artifact. Subsequently, we evaluated the developed web application in the course of an additional interview study.
Since the prototype is evaluated ex-ante and does not yet allow experimental evaluation (evaluation 2), the interviews conducted with professionals from multiple organizations represent general stakeholders involved in the freight transportation process related to the problem presented in the introduction section. For the literature review, we queried the term freight AND digital AND (transport* OR supply) AND data AND (middleware OR cloud OR platform) in seven bibliographic databases: SpringerLink, ScienceDirect, Wiley, Emerald Insight, AIS, Web of Science, and EBSCOhost. We carefully sorted the results of our literature search by their relevance to our topic and initially identified seven relevant articles. We employed forward and backward searches, leading to the inclusion of further articles focused on freight transportation in international trade. Finally, we analyzed 10 papers using a concept matrix (Webster and Watson 2002) to identify requirements for the FSIP from the scientific literature with relevance to our research study. In addition, we conducted 14 interviews (interview evaluation 1) to gather data from practice and six additional interviews (interview evaluation 2) to evaluate and refine the proposed design principles and their implementation. We recruited freight transport professionals engaged in the full-load freight transport industry (e.g., freight conveyed in full truck or full container loads) with a focused interest in IoT, tracking, and sensor technologies for trucks, trailers, swap bodies, ocean containers, or similar freight transport units. Moreover, given our access, we interviewed experienced logistics software vendors, including software developers and designers, to receive their opinion on the technical feasibility of the designed FSIP.
The interviewees are from the German transport market to ensure consistent data collection subject to the same legal frameworks of operations (e.g., freight loading). All interviews were semi-structured and conducted by at least one author of this paper with sensitivity to the field of digital logistics and freight forwarding, following pre-defined questionnaire guidelines. The questionnaire guideline for the interviews during the first evaluation was divided into four sections: the interviewee's background, IoT technologies and services for freight transportation, usage of telematics enabling freight service intelligence, and the prototype of a FSIP. Appendix 1 lists the questions we developed per section of the interview guideline. During the second evaluation of the ex-ante phase, the questionnaire guideline focused on receiving feedback on the proposed design principles and the implemented prototype. Additionally, we asked further questions in all interviews and left room for the interviewees' thoughts where needed. All interviews were conducted one-on-one, transcribed in German, and translated into English. The average work experience of the professionals is 14 years, and the interviews lasted 43 min (evaluation 1) and 44 min (evaluation 2) on average. Table 2 provides an overview of the interviewed professionals, including the organization's stakeholder role and size, the expert's position and work experience, and the interview duration during the evaluations of the DSR process (cf. Fig. 1). Further details cannot be presented due to the confidentiality agreed with the professionals upon the interviews.

Data analysis

To analyze the data from the transcribed audio records, we performed a qualitative content analysis (Mayring 2014) using the software MAXQDA (Release 2020.0.0) and elicited requirements. The software is offered by the company VERBI and applied for computer-aided qualitative data and text analysis of interviews.
It allows codes to be assigned systematically to segments of text in unstructured data. The software-assisted analysis consisted of reviewing, coding, categorizing, and interpreting the data. For the reviewing and coding process, at least one author and one scientific assistant independently analyzed every sentence in the transcripts line by line to identify key requirements comprising platform and freight service characteristics (open coding). Content related to freight service intelligence was identified, and we labeled common themes in the data (text segments) that correspond with the topic of our study (Bhattacherjee 2012). Afterward, we coded relevant themes or ideas in the data with relevance to our research interest and grouped them into categories. Subsequently, we divided the analyzed content into smaller fragments and aggregated these into more abstract, conceptual categories, using descriptive codes to label platform aspects of freight service intelligence. To this end, our focus lies on the transport stakeholders' description of the shared platform and IoT-enabled freight service intelligence based on freight units equipped with mobile telematics. We consequently consolidated our findings into a matrix table according to the identified coding dimensions by comparing the results and ensuring inter-coder reliability. When interpreting the results, the authors and assistants operated independently to derive the concepts and categories, which were reviewed and discussed until a consensus was reached. Finally, codes were consolidated into concepts for the evaluation of the design principles and the enhancement of the prototype. This process likewise included grouping basic sets of concepts (sub-categories of codes) together.
Deriving requirements from scientific literature and transport experts

Based on the problem identified in the introduction, we deduce requirements and discuss how we derived meta-requirements and design principles relevant to the development of a shared FSIP applied by transport stakeholders. To explore the requirements for a FSIP, we analyzed data collected from scientific literature (L) and expert interviews (E) with practitioners. Overall, we specified 17 requirements and aggregated them into six meta-requirements that aim to support shared freight transport operations of telematics-enabled FTAs (see Fig. 2 below). Based on a concept matrix, we mapped the analyzed articles to the derived requirements for a FSIP. The literature reveals that a FSIP must enable the integration of data from various IS (L1) to facilitate automation and the exchange of synchronized transport data (e.g., transport order details obtained from a transport management system, TMS). The same applies to electronic and shared document management (L2), given the vast amount of transport-related physical paperwork handled during operations among the stakeholders. Transport assets equipped with mobile telematics are required to provide information on shipment integrity (L3) for transport stakeholders to ensure compliant conditional states (e.g., temperature control of goods moved in cold chains). To achieve transparency about transport operations, a FSIP must provide a record of transport process transactions (e.g., a digital log of incidents) and present the transport progress (L4), which is important to give the participating transport users a comprehensive information basis for shared insights into actual freight operations. This feature is supported by real-time visibility of GPS-based geo-positions of the objects (L5).
The desired platform must further support visualization and data analytics via key performance indicators (KPIs) (L6) to enable performance analysis of FTAs. Since telematics-enabled FTAs allow the generation of event notifications, i.e., any deviation from a predefined value detected by the integrated sensors triggers a server message communicated to transport stakeholders, the users benefit from event-based communication (L7). From the interviews, we derived that intelligent transport management is assisted by integrated freight load matching opportunities, specifically on the road, enabled by artificial intelligence (AI) (e.g., automated assignment of shipments received from freight exchanges) (E1) to increase transport asset utilization and achieve competitive freight rates with transport operators for shippers: "If the mobile telematics unit detects that the trailer has been waiting at a ramp for at least 20 min, a trained system should set the asset status to 'unloaded'. Based on the status, the trailer should automatically be assigned to another transport order available from our planning system." (Shipper A) Moreover, both shippers and transport operators show a strong interest in the detection of operational peculiarities based on business routines and individual transport parameters (E2). This allows the recognition of critical impacts based on irregular activities (e.g., stationary time and route deviations of FTAs) in combination with process automation (e.g., booking of orders in the order planning system): "(…) I think that based on the configured transport settings including time windows, transshipment points, and freight monitoring parameters, transport assets must monitor activities themselves and provide suggestions for improvements automatically according to the data retrieved."
(Transport operator C) In this way, a FSIP utilizes data integrated from external data sources (e.g., weather information systems) to analyze the actual situation and facilitate predictive operations of FTA activities for tradelanes (E3): "I suggest including weather forecasts. (…) If I am traveling from Poland to Spain by train for 5 days, heavy snowfall may lead to interruptions since freight wagons are completely frozen and freight operation is not possible. Indeed, it is important to obtain predicted information on tradelanes in the platform that affects my end customer business." (Consignee B) More specifically, tradelanes represent a shipper-specific, contracted agreement with transport operators for recurring shipments to be forwarded within the same transport boundaries, i.e., same shipper and consignee. Furthermore, the freight equipment in use within transport networks is supported by real-time visibility (E4) that facilitates the presentation of the fleet status of assets and the conditions of the goods loaded according to customer requirements: "From a service perspective, I would equip all swap bodies used in the freight networks with solar-based telematics units to capture asset data for managing our fleet equipment efficiently and provide our customers advanced real-time visibility services after booking a transport." (Transport operator A) In addition, transport stakeholders require event-based notification services (e.g., automatic notification once the FTA has arrived at its destination) and the provision of a uniform event record (E5) to analyze transport and fleet activities collaboratively: "The gained data from assets should be provided in a modularized output format allowing users to navigate throughout their individual reports using the same data. Thereby, every stakeholder is supposed to read and understand the events generated from the transport assets in the same way."
(Insurance provider A) From the events detected through the telematics units attached to FTAs, the desired platform must enable an event-based evaluation of transport operators, including automated notifications. The platform consequently supports subsequent processes (e.g., inbound) to achieve interconnected freight operations (e.g., frequent deviations detected at a specific location are automatically communicated to stakeholders) (E6): "If I have a shock sensor and acceleration is frequently measured in excess at a specific location, I might even want to get an email automatically. Indeed, this service comes with rules: If the temperature is above 20 degrees and there is a shock that is greater than X, then it informs our planning system directly, or the inbound operation team is informed that a 100% goods inspection must be carried out as soon as the freight arrives." (Shipper B) These initiatives are accompanied by shared KPIs that are especially important for all interviewees to measure specific performances based on activities of tradelanes, transport orders, and fleet equipment (E7). Overall, our 14 interviewees described 54 different KPIs of interest to them, and this number would likely grow further with additional interviews. Thus, we aggregated the KPIs into eight overarching KPI types for transport stakeholders: availability, utilization, events, conditions, shipment integrity, distance, time, and emissions. The types are neither collectively exhaustive nor mutually exclusive. In Table 3, we list the types of KPIs with an exemplary quote that illustrates the need for each KPI. Since transport management follows standardized processes using multiple IS, generated events (e.g., entry of FTAs into geofenced areas) and data processing (e.g., calculation of ETA) can be utilized to compile values for transaction records.
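The rule-based automation Shipper B describes can be sketched as event specifications paired with notification actions; the field names, thresholds, and the rule itself below are illustrative assumptions, not the platform's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NotificationRule:
    name: str
    condition: Callable[[dict], bool]  # evaluated against one telematics event
    action: str                        # e.g. notify a planning system or team

# Hypothetical rule mirroring Shipper B: temperature above 20 °C combined with
# a shock above a threshold triggers a 100% inbound goods inspection.
SHOCK_LIMIT_G = 3.0
rules = [
    NotificationRule(
        name="inspect-on-warm-shock",
        condition=lambda e: e["temperature_c"] > 20 and e["shock_g"] > SHOCK_LIMIT_G,
        action="flag_full_goods_inspection",
    ),
]

def evaluate(event: dict) -> list[str]:
    """Return the actions triggered by a single telematics event."""
    return [r.action for r in rules if r.condition(event)]

event = {"asset_id": "FTA-4711", "temperature_c": 23.5, "shock_g": 4.2}
print(evaluate(event))  # ['flag_full_goods_inspection']
```

Because each rule is just a condition plus an action, stakeholders could define and share rules flexibly, which is the behavior E6 and the later DP2 call for.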
Furthermore, the data triggers subsequent processes toward freight process automation (e.g., generation of carrier invoices after entering the geofence) (E8): "The benefit of using mobile telematics for freight operations is definitely its real-time capabilities and the connected services, such as the automated invoice generation for carriers once the assets have entered our pre-defined geofence." (Shipper C) The interviewed experts confirmed that a FSIP must support real-time monitoring of transport progress and conditional state (E9) to ensure the quality of transport services and the nature of the goods loaded: "I would like to be informed promptly once the transport status has changed or is foreseen to change in a way that negatively affects my operations or customers' satisfaction." (Shipper D) Since freight documents are broadly exchanged in physical format among transport stakeholders based on operational and customer needs (e.g., delivery note, proof of delivery, waybill), we identified that the envisioned platform solution requires managing transport documents based on blockchain technologies (E10). Since transport operators have argued that data exchange based on documents in electronic format requires a trustworthy environment, blockchain is considered an enabler technology to provide a secure data flow: "Sharing documents such as a waybill in a digital and secured manner is a prime feature for our operations. (…) I suggest incorporating blockchain technology to achieve secure data exchange." (Transport operator D) Based on our analysis, we further consolidated the identified requirements and derived six meta-requirements (MRs). A FSIP should enable the integration of data related to freight coordination and external providers (MR1, based on L1, E1, and E3).
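At its core, the tamper-evident document handling called for in E10 rests on chaining cryptographic hashes; the following minimal sketch illustrates that idea only and is not a full blockchain implementation (document names and contents are hypothetical):

```python
import hashlib
import json

class DocumentLog:
    """Hash-chained log of freight documents: a minimal, blockchain-inspired sketch."""

    def __init__(self):
        self.entries = []

    def append(self, doc_name: str, content: bytes) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "doc": doc_name,
            "content_hash": hashlib.sha256(content).hexdigest(),
            "prev": prev_hash,  # link to the previous entry makes tampering visible
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any modified entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("doc", "content_hash", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = DocumentLog()
log.append("waybill.pdf", b"...waybill bytes...")
log.append("proof_of_delivery.pdf", b"...pod bytes...")
print(log.verify())  # True; altering any stored entry makes verify() return False
```

A real deployment would additionally distribute the log across the stakeholders' nodes and add consensus, which is what the interviewees' blockchain suggestion targets.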
Since sharing freight-related information via a platform to leverage collaborative transport management is key to facilitating transport-specific decision-making (Saoud and Bellabdaoui 2021), the platform should support a uniform user view to read and interpret the aggregated data. This encompasses the exchange of freight information, shipping documents, and records of transactions for freight transport operations (MR2, based on L2, L4, E5, E8, and E10). Due to the automated capabilities facilitated by emerging IoT (tracking) technologies, transport stakeholder processes are assisted by event-based real-time visibility and automation services addressing FTAs and transport operations (MR3, based on L3, L5, E2, E4, E5, E6, E8, and E9). Freight service intelligence consequently supports automation in two ways: driven by events that trigger automated actions (e.g., an FTA entering a geofenced area triggers a booking for inbound freight in the shipper's logistics IS to optimize gate-in processes), and following pre-set routines that grant telematics-enabled FTAs gradual autonomy toward self-optimization (e.g., a detected idling time of an FTA prompts freight dispatchers to use the equipment for transport; overdue maintenance services, such as a preventive accident check, may set the FTA status to "on hold"). Meanwhile, performance and analytical insights into transport assets are particularly relevant for advanced management of fleet objects and transport orders. The FSIP should allow individual definition and visualization of KPIs for FTAs equipped with telematics (MR4, based on L6 and E7). Given the dynamic transport situations in a complex global forwarding environment, the FSIP should facilitate immediate and secure communication among transport users for transport operations based on events generated by transport assets and the shipment status related to freight documents (MR5, based on L7, E6, E8, and E10).
To achieve optimized transport operations and support individual decision-making, the platform should support the prediction of parameters for tradelanes, transport orders, and FTAs based on analytical insights attained from the data aggregated using telematics and integrated IS (e.g., TMS) (MR6, based on E2, E3, and E6). In Fig. 2, we illustrate a detailed overview of the relationships between the requirements from the scientific literature (L1-L7), the transport stakeholder expert interviews (E1-E10), and the derived meta-requirements (MR1-MR6). Furthermore, it likewise visualizes the connections between the meta-requirements and the short form of the design principles derived in the next section.

Deriving design principles for a freight service intelligence platform

After the identification of meta-requirements, we propose five design principles (DPs) and contribute to the specification of design theory (Gregor and Jones 2007). Regarding the connections between meta-requirements and design principles, each meta-requirement may address multiple design principles and each design principle may be addressed by multiple meta-requirements, indicating a many-to-many (m:n) relationship. In addition, we follow the scheme proposed by Gregor et al. and distinctively specify design principles that help us arrive at design knowledge for further development and implementation.

Shared tradelane-specific and data-driven KPIs over the fleet and freight transport assets

Data processed using mobile telematics reflects the performance of FTAs for the overall objective assigned to customer-specific tradelanes and transport orders. The insights gained from the assets and their data support an advanced dashboard approach providing a set of KPIs for data-driven decisions by transportation managers, particularly for logistics services and the management of related transport processes in conjunction with business intelligence (cf.
Silva). Therefore, KPIs provide quantifiable metrics for assets utilized within fleet organizations that allow users to measure the quality of processes and freight services based on flexible definition and visualization (MR4). Since shippers may have a contractual business relationship with transport operators, tradelane-specific KPIs allow close monitoring of carrier performance based on existing agreements for individual FTAs. In return, shippers, consignees, transport operators, and insurance companies benefit from a unified presentation of real-time performance metrics to obtain insights into activities with an impact on transport management (e.g., on-time delivered orders), fleet management (e.g., distance traveled compared to all FTAs), and risk management (e.g., shipment damage or loss) during transportation (MR3). Thus, we define our first design principle as follows: Design Principle 1. To allow transport business managers in an organization (users) to support analysis of freight transport asset performance (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should provide shared tradelane-specific KPIs, i.e., deviations from contractually agreed transport routines, over the fleet and individual freight transport assets.

Anomaly detection and automation based on event specifications and notification rules

Given the increasing digital competition in the transport market due to emerging IoT technologies and digital platforms yielding digital business models by startups, visibility in transport chains has emerged as a commodity service for shippers and consignees.
Against this background, our analysis revealed that transport operators have a strong interest in the detection of irregularities in the equipment used, to observe the performance of their fleet with a focus on operational efficiency and customer satisfaction. The automated detection of anomalies in business routines is therefore of paramount importance at the tactical management level for all stakeholders to understand the status of real-world operations and to identify critical tendencies. Furthermore, we found that this feature enables the automation of freight operations identified in our analysis (e.g., an email sent to pre-defined users once a geofenced area has been entered). Hence, flexible events (e.g., setting a maximum temperature for the goods loaded) are associated with notifications that allow the automation of (subsequent) processes (MR3). Accordingly, transport users that participate in freight operations receive the opportunity to take preventive actions before deterioration. In addition, a shared user interface facilitates communication among all transport users (MR5) and fosters collaborative management of freight handling operations by interpreting the same values. Therefore, we define our second design principle as follows: Design Principle 2. To allow freight transport users in an organization (users) to support efficient freight operation (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should enable anomaly detection and automation of freight operations based on event specifications and notification rules that can be defined in a shared and flexible manner.

Risk assessment and prediction of shipment integrity and freight service quality

Performance analysis for telematics-enabled FTAs based on KPIs was identified as a business need for managers involved in transport and fleet management (MR4).
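Both the routine-based anomaly detection of DP2 and this KPI-driven performance analysis rest on comparing observed values against an asset's historical routine. A minimal sketch with assumed data and a simple z-score threshold (the real platform's detection logic is not specified in the paper):

```python
import statistics

def is_anomalous(history_min, observed_min, z_threshold=3.0):
    """Flag a stationary time that deviates strongly from the asset's routine."""
    mean = statistics.mean(history_min)
    sd = statistics.stdev(history_min)  # sample standard deviation
    if sd == 0:
        return observed_min != mean
    return abs(observed_min - mean) / sd > z_threshold

# Hypothetical routine: an FTA normally waits 22-28 minutes at this ramp.
routine = [22, 25, 28, 24, 26, 23, 27, 25]
print(is_anomalous(routine, observed_min=95))  # True: strong deviation
print(is_anomalous(routine, observed_min=26))  # False: within routine
```

In the platform, a flagged deviation would then feed the notification rules of DP2 rather than merely being printed.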
The data compiled by telematics is based on the individual parameters set by shippers and the scope of shipment integrity monitored using various sensors. This depends on the nature of the goods loaded (e.g., high-value goods, cold chain) and results in a vast amount of data retrieved, stored, and analyzed. In essence, business intelligence provides highly accurate information and the appropriate tools for data analysis and decision-making processes, leading to a competitive advantage in the sector at the nexus of Big Data Analytics and Supply Chain Management (Kache and Seuring 2017). The data sources for risk assessment include telematics sensors, TMS, enterprise resource planning systems, and external databases (e.g., weather information systems, social media, or news channels). Based on these sources, we identified that analytical insights support the risk assessment and prediction of shipment integrity (e.g., compliance with freight transport conditions, unauthorized door opening) and freight service quality (e.g., transport lead time, anticipated ETA accuracy, deliveries made in full, forecasted delivery rate) (MR6). According to our analysis, the notion of a secure FTA during transportation is grounded in industrial road freight security standards from associations such as TAPA EMEA, whose security levels support a risk-scoring concept for FTAs. Therefore, a FSIP should highlight indicators that represent the risk for shipments and freight service performance. For this reason, we propose our third design principle as follows: Design Principle 3. To allow decision-makers focusing on risk management in an organization (users) to achieve secure freight transport operations (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should facilitate the assessment of risks and prediction of shipment integrity and freight service quality.
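One way the risk-scoring concept behind DP3 could work is to combine normalized indicator values into a weighted score; the weights, indicator names, and class boundaries below are hypothetical illustrations, not TAPA security levels:

```python
# Hypothetical weights; a real scheme would follow standards such as TAPA's
# security levels and be calibrated against historical incident data.
RISK_WEIGHTS = {
    "temperature_breach": 0.4,
    "unauthorized_door_opening": 0.35,
    "route_deviation": 0.15,
    "eta_delay": 0.10,
}

def risk_score(indicators: dict) -> float:
    """Weighted risk score in [0, 1] from indicator values normalized to [0, 1]."""
    return round(sum(RISK_WEIGHTS[k] * v for k, v in indicators.items()), 3)

def risk_class(score: float) -> str:
    """Map the numeric score to a traffic-light style class for the dashboard."""
    return "high" if score >= 0.5 else "medium" if score >= 0.2 else "low"

# Example shipment: door opened without authorization, some route deviation
shipment = {"temperature_breach": 0.0, "unauthorized_door_opening": 1.0,
            "route_deviation": 0.5, "eta_delay": 0.2}
s = risk_score(shipment)
print(s, risk_class(s))  # 0.445 medium
```

Such a score would be one of the indicators the FSIP highlights for risk managers alongside the predicted service-quality figures.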
Secure data exchange and communication among participating transport stakeholders

We identified that a seamless freight service across the different modes of transport used entails a substantial commitment and willingness to cooperate among the engaged transport stakeholders (Flodén and Williamsson 2016). To coordinate an efficient operational freight service process, communication was emphasized by the transport stakeholders as enabling unobstructed logistics workflows using ICTs (Ross 2010). Considering the FSIP as a shared front-end for transport stakeholders, operational issues that arise can be immediately addressed and solved through direct interaction facilitated by an integrated communication module. This feature reflects the significant number of transport activities and the different logistics actors involved in the transport processes, and goes along with the exchange of operational information and documents enabled by a platform solution to achieve more efficient transport operations (Dahlberg and Nokkala 2019). Our analysis showed that the forwarding domain extensively processes information related to transport orders, fleet assets, and risks in a paper-based format. Therefore, the electronic exchange of freight handling information, shipping documents, and recorded events including transactions in the platform solution supports a consistent freight service workflow (MR2). Furthermore, according to our analysis, more efficient decision-making is enabled by coordinating shared communication in a secure manner among transport users related to transport operations (MR5). To this end, blockchain technologies are suggested to secure freight processes and communication in a trusted environment, particularly among shippers and transport operators (cf. Lacity and Hoek 2021).
We consequently define the following design principle: Design Principle 4. To allow freight transport users in an organization (users) to support collaborative freight service decisions (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should allow secure data exchange and communication of participating transport stakeholders for freight documents, transaction logs, and events from freight transport assets.

Integration of additional data sources based on a high-level architecture

Based on our analysis, we identified the need to integrate additional data from various sources to achieve more efficient freight transport services addressing freight coordination, the environment, and third parties (MR1). To address this meta-requirement, we propose aggregating data by centering a database within the cloud infrastructure. This facilitates more accurate and precise KPI measurement, risk assessment, and performance prediction for the use of FTAs and the orders handled among the stakeholders. Likewise, enhanced multimodal freight operations have been demonstrated by integrating real-time IoT data in decentralized, peer-to-peer ecosystems that securely share data using blockchain technologies among the participants. Moreover, transport orders from digital platforms have shown the capabilities of integrated and connected transport operations, which emerge especially in the road forwarding sector (e.g., Sucky and Asdecker 2019). For this reason, we suggest a high-level architecture and present related information flows that build on the Industrial Data Spaces (IDS) architecture, promoting the concept of decentralized intelligence in logistics and requiring adopters for operation (Sternberg and Andersson 2014). IDS suggests the application of connectors with embedded algorithms to connect the data sources with the cloud infrastructure for trusted data exchange in a decentralized manner among the platform participants. Following the work of Gallay et al., we suggest additional connectors to merge data from sources based on their data specification and to manage the aggregated data (e.g., a meta-connector) to optimize processes by providing information to stakeholders via a user interface.
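The connector-based integration can be sketched as one connector per data source plus a meta-connector that merges their outputs into a shared record; all class names and payloads below are illustrative assumptions, not the IDS reference implementation:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """One connector per data source, as in an IDS-style architecture (sketch)."""
    @abstractmethod
    def fetch(self) -> dict: ...

class TelematicsConnector(Connector):
    def fetch(self):  # hypothetical FTA sensor payload
        return {"position": (48.137, 11.575), "temperature_c": 4.2}

class TmsConnector(Connector):
    def fetch(self):  # hypothetical transport order data
        return {"order_id": "TO-2023-001", "consignee": "Plant Munich"}

class WeatherConnector(Connector):
    def fetch(self):  # hypothetical third-party data
        return {"forecast": "heavy snowfall"}

class MetaConnector:
    """Merges data from all registered connectors into one shared record."""
    def __init__(self, connectors):
        self.connectors = connectors

    def aggregate(self) -> dict:
        record = {}
        for c in self.connectors:
            record.update(c.fetch())
        return record

meta = MetaConnector([TelematicsConnector(), TmsConnector(), WeatherConnector()])
print(meta.aggregate())
```

Each source keeps its own connector (and thus its own access and trust rules), while the meta-connector provides the uniform, enriched record that the user interface and the analytics of DP1-DP3 consume.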
Our analysis thus suggests data integration from various sources addressing transport assets (e.g., telematics-enabled FTAs), vehicles (e.g., trucks equipped with telematics), transport orders (e.g., freight exchanges, TMS), and third parties (e.g., weather information, news channels). Figure 3 presents the proposed high-level architecture, including the data flows for integrating the data from FTAs, vehicles, transport orders, and third parties. Our suggestion to use real-time IoT sensor data from FTAs merged with other data sources to achieve a uniform and enriched data space for freight operations consequently leads us to the definition of our fifth design principle: Design Principle 5. To allow freight transport users in an organization (users) to support optimization of data connection with data sources (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should allow data integration via connectors embedded with algorithms to merge data sources and manage aggregated data from transport assets, vehicles, transport orders, and third parties.

Summary of design principles

Overall, we propose five DPs to address the identified meta-requirements. In Table 4, we summarize the DPs and provide the associated meta-requirements derived from the literature and expert interviews.

Information architecture of freight service intelligence platform

To virtualize transport operations and freight service intelligence at the boundary of cloud computing and IoT according to the derived design principles, we follow Verdouw et al.
and propose different elements for the IS architecture of a platform-centered freight transport chain: identification, sensing and actuation, data exchange, information integration, and application services.

Table 4 Design principles and associated meta-requirements

DP1 (addresses MR3, MR4): To allow transport business managers in an organization (users) to support analysis of freight transport asset performance (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should provide shared tradelane-specific KPIs, i.e., deviations from contractually agreed transport routines, over the fleet and individual freight transport assets.

DP2 (addresses MR3, MR5): To allow freight transport users in an organization (users) to support efficient freight operation (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should enable anomaly detection and automation of freight operations based on event specifications and notification rules that can be defined in a shared and flexible manner.

DP3 (addresses MR4, MR6): To allow decision-makers focusing on risk management in an organization (users) to achieve secure freight transport operations (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should facilitate the assessment of risks and prediction of shipment integrity and freight service quality.

DP4 (addresses MR2, MR5): To allow freight transport users in an organization (users) to support collaborative freight service decisions (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should allow secure data exchange and communication of participating transport stakeholders for freight documents, transaction logs, and events from freight transport assets.

DP5 (addresses MR1): To allow freight transport users in an organization (users) to support optimization of data connection with data sources (aim) when mobile telematics is applied to full loads in the freight forwarding industry, a Freight Service Intelligence Platform should allow data integration via connectors embedded with algorithms to merge data sources and manage aggregated data from transport assets, vehicles, transport orders, and third parties.

This concept provides a valuable approach to investigating our research topic since telematics-enabled FTAs constitute IoT devices with capabilities to convert transport assets into smart connected FTAs that follow the paradigm of a smart service platform (Porter and Heppelmann 2014) to enable smart services along the end-to-end freight lifecycle processes. Therefore, we combine the theoretical concepts of virtual food supply chains and intelligent goods services (Jevinger and Olsson 2021) and establish an adapted architecture for a novel FSIP as the research ground for the study presented in this paper. To this extent, we follow the definition of intelligent goods proposed by Jevinger et al., who characterize the intelligence of goods as different capability dimensions delivering support to different degrees. One example is the memory storage capability dimension of goods, which facilitates the storage of an identity, additional types of data, or algorithms/decision rules. Herein, Jevinger and Olsson emphasize that intelligence requires more than being able to communicate the identity of tagged goods, allowing intermodal transport assets equipped with ICT to function as an enabler to realize freight service intelligence. In essence, freight service intelligence comprises information processing and autonomous decision-making capabilities based on ICT-enabled physical assets (e.g., FTAs equipped with permanently installed telematics units) loaded with intelligent resources (e.g., RFID-tagged goods) to invoke services and start processes autonomously applied by different stakeholders. In Fig. 4, we propose the theoretical concept of a stakeholder-oriented FSIP.

Fig. 4 Stakeholder-oriented freight service intelligence architecture adopted from Verdouw et al.

The presented concept is based on six layers that describe the derived information system architecture for virtualized freight transport chains facilitated by telematics-enabled FTAs, elaborated in more detail in the following. For the first layer, the underlying freight lifecycle processes aligned to the phases of TMS applications form the basis for information services to support the different activities and tasks. In the next layer, FTAs using telematics facilitate automatic identification by unique identifiers (e.g., asset identification number of swap bodies, license plate of trailers). In addition, sensors are applied to measure different dynamic parameters according to the environmental conditions in which FTAs operate (e.g., temperature, humidity). Thus, in the sensing and actuating layer, RFID transponders and tags are used to track and trace objects on a freight item level and to communicate sensor data from telematics through wireless (sensor) networks (e.g., GPRS and Wi-Fi) to an intermediary back-end system using cloud storage. Physical FTAs, therefore, exchange data with virtual objects in the next layer that are constantly updated.
The cloud-based middleware acts as a data hub, and the exchanged data can be further enriched through the data integration/processing of other IS (e.g., weather information systems, cloud-based freight exchanges, TMS) by the freight service intelligence capabilities derived from intelligent goods services (Jevinger and Olsson 2021). Subsequently, service provisions are determined in the next layer and "(…) differ from basic virtualizations that only show the whereabouts of physical objects to smart virtual objects that proactively take actions". The provided services reflect the tasks and responsibilities of stakeholders bound in three overarching dimensions: transport management, fleet management, and risk management. These dimensions follow a user-oriented approach and correspond with the generic types of application services proposed by Meyer et al.: information handling (e.g., position details among different transport management systems), problem notification (e.g., position outside a geofenced area, temperature too low), and decision-making services (e.g., billing upon arrival according to the estimated time of arrival). Shared information from freight service intelligence is offered in the front-end to users, building on the telematics-based service offerings in the stakeholder application layer for four different groups: (a) shippers, (b) consignees, (c) transport operators, and (d) insurance companies. In summary, the illustrated information architecture provides a novel concept for telematics-enabled FTAs in global forwarding operations. Furthermore, the theoretical concept provides the conceptual ground for designing an innovative FSIP focusing on IoT-enabled services and shared information used by different stakeholders to support freight operations.
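The synchronization between physical FTAs and their constantly updated virtual objects can be sketched as a minimal digital-twin update loop; asset identifiers and sensor fields are hypothetical:

```python
class VirtualFTA:
    """Virtual counterpart of a physical FTA, kept current by telematics messages."""

    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.state = {}

    def apply_update(self, sensor_message: dict):
        # Each incoming telematics message overwrites the affected fields,
        # so the virtual object always mirrors the last known physical state.
        self.state.update(sensor_message)

twin = VirtualFTA("SWAP-0815")
twin.apply_update({"position": (52.52, 13.405), "temperature_c": 3.1})
twin.apply_update({"door_open": False})
print(twin.state)
```

The service-provision layer then operates on these virtual objects rather than on the physical assets directly, which is what allows proactive actions in the architecture above.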
Prototype demonstration

After we derived the design principles and presented the information architecture, we developed a web application artifact called Freight Service Intelligence Platform (FSIP) that serves as a prototype demonstration for the first evaluation cycle with potential users (cf. Fig. 1). With that objective in mind, we subsequently demonstrate the prototype and show the front-end of the platform to potential users from the field of freight forwarding operations and transport software development, as elaborated in the next section. We developed a web-based application accessible via a web browser and implemented DP1-DP5 with the development tool Figma, which uses the latest web technologies. The web application is not connected to other components of the architecture (cf. Fig. 4) and has no access to a real database; thus, it uses dummy data, intertwining the layers "service provision" and "stakeholder application". However, the artifact represents a user interface (UI) illustrating the implementation of the design principles. Moreover, the development of hypotheses and their statistics-based confirmation or rejection is not part of this study and requires further evaluation studies to be conducted in the future. This approach is recommended for DSR projects (Kuechler and Vaishnavi 2012), as demonstrated, for instance, by Sein et al., who conduct multiple studies based on different prototypes. Pursuant to the developed UIs, we discussed the usefulness of the prototype with potential users. The web application presents a landing page, i.e., the first screen the users would see, providing an overview of all apps installed on the platform. An app is a software module that provides specific functionality to the user; apps are divided into three types: Administration Apps, Freight Performance Apps, and Transport Operations and Management Apps.
Administration Apps are necessary, for instance, to maintain master data about users, transport assets, and tradelanes. Freight Performance Apps provide the users with basic KPIs and analytical information about tradelanes and FTAs, i.e., transport asset utilization, event performance, probabilities of ETA accuracy, and the anticipated status of freight integrity including freight risks. Transport Operations and Management Apps encompass the monitoring of FTAs, the definition of events and notification rules, information on the freight risk, the handling of freight documents, records of logs based on transactions, and the management of data connections. When the user selects an app, a corresponding UI is loaded. In Fig. 4, we illustrate the UIs in a web-based front-end for the FSIP. Moreover, we show the elements addressing DP1-DP5 that we have implemented in the web application and describe them in more detail in the following. The first DP refers to the freight performance based on tradelane-specific KPIs in addition to individual transport asset KPIs (DP1). Thus, the UI provides users with metrics on the actual performance related to freight equipment, events, asset utilization, weight per asset, emissions generated, and service quality, i.e., delivery performance, customer claim rate, and time per transport. In addition, based on the data collected from telematics-enabled FTAs, analytical insights enabling prediction are presented, namely the ETA, the projected in-full deliveries, the freight integrity status, and the freight risks associated with integrated environmental information and performance prediction (DP3). To support operational transport decisions in line with DP2, users benefit from a shared Monitoring Cockpit to achieve real-time visibility of the FTA status and the goods loaded.
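How tradelane-specific KPIs such as delivery performance could be aggregated from raw shipment records is illustrated by the following minimal sketch. The `Shipment` record and its field names are assumptions made for illustration, not the FSIP data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Shipment:
    """One completed transport order on a tradelane (hypothetical record)."""
    tradelane: str               # e.g., "Hamburg-Shanghai"
    planned_arrival: datetime
    actual_arrival: datetime
    delivered_in_full: bool


def tradelane_kpis(shipments, tradelane):
    """Aggregate shipment records into tradelane-specific KPIs.

    Returns delivery performance (on-time rate) and in-full rate for
    the selected tradelane, or None if no shipments match.
    """
    lane = [s for s in shipments if s.tradelane == tradelane]
    if not lane:
        return None
    on_time = sum(1 for s in lane if s.actual_arrival <= s.planned_arrival)
    in_full = sum(1 for s in lane if s.delivered_in_full)
    return {
        "shipments": len(lane),
        "on_time_rate": on_time / len(lane),
        "in_full_rate": in_full / len(lane),
    }
```

In the same spirit, asset utilization or emissions per asset would be further aggregations over the collected FTA data.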
The platform allows users to communicate the transport status, events, and transactions (logs), and to exchange data based on freight documents (DP4). Thus, we incorporated a Document Cockpit module in the FSIP that enables the upload of various transport and shipping documents with an assignment to transport orders. Additionally, the platform summarizes the actual status of pending tasks for users to be addressed during operations to achieve transport efficiency. Moreover, users can administer freight documents assigned to transport orders by controlling a list that allows them to preview a selected document and leave comments for other users in case of required document revisions. To this end, this feature provides users with a set of functions to create, edit, approve, attach, report, request, compile, and submit documents. If a transport document was created or revised according to the transport order, a Transport Order Control area offers features enabling the import of transport orders from integrated transport order data systems (e.g., TMS) or, vice versa, the update of the order information if revised. This feature is assisted by testing data connections to ensure data exchange between the FSIP and, for instance, integrated transport order data systems. To facilitate freight communication based on the documents in case of issues or questions that may arise among users, a link to the Chat software module is integrated, providing an overview of pending tasks. Since the Chat is a feature that is permanently available to users, it consolidates communication addressing documents and events from mobile telematics applied to transport assets. Ultimately, specific data from various data sources can be integrated by a Connect Data module (DP5) in the platform core, offering value-adding information to the freight transport operations.
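The document-handling functions named above (create, edit, approve, submit, with reviewer comments for revisions) suggest a small status lifecycle for each freight document. The following sketch uses assumed state names that are not taken from the FSIP specification:

```python
# Allowed status transitions for a freight document (hypothetical states
# derived from the create/edit/approve/submit functions named above).
TRANSITIONS = {
    "draft": {"in_review", "draft"},       # editing keeps it in draft
    "in_review": {"approved", "draft"},    # approve, or send back with comments
    "approved": {"submitted"},
    "submitted": set(),                    # terminal for this sketch
}


class FreightDocument:
    """A document assigned to a transport order, moving through review."""

    def __init__(self, order_id: str, name: str):
        self.order_id = order_id   # transport order the document belongs to
        self.name = name
        self.status = "draft"
        self.comments = []         # reviewer comments requesting revisions

    def transition(self, new_status: str, comment: str = "") -> bool:
        """Move the document along the lifecycle; refuse invalid jumps."""
        if new_status not in TRANSITIONS[self.status]:
            return False
        if comment:
            self.comments.append(comment)
        self.status = new_status
        return True
```

Modeling the lifecycle explicitly is one way the platform could guarantee that only approved documents reach other stakeholders.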
This yields the integration of specific vehicle data (i.e., integrated truck telematics), transport order data components (i.e., TMS, forwarding software, cloud-based freight exchange platforms), and third-party data (i.e., weather information, social media systems, customs systems) to support the freight-related services based on data exchange, transport load coordination, and prediction. In essence, we have implemented DP1-DP5 addressing the derived meta-requirements and provided the respective UIs in our web-based prototype application illustrated in Fig. 5.

Prototype evaluation

After completing the design and prototype, we aimed at the iterative evaluation. For this reason, we conducted an ex-ante evaluation as the first evaluation cycle. The purpose of this evaluation was to obtain formative feedback on the prototype and measure the completeness of the DPs. To reach this goal, we demonstrated the prototype to potential users from the fields of global forwarding operations and transport software development. The participants were then asked whether the design offered possibilities for improvement. Correspondingly, our first evaluation cycle focused on receiving support, criticism, and ideas about our proposed DPs and the web-based prototype.

Shared tradelane-specific and data-driven KPIs over the fleet and freight transport assets (DP1). From the feedback of the participants in our evaluation study, we received confirmation of the appropriateness and usefulness of the demonstrated KPIs shared among the users. Shippers and consignees emphasized that tradelane-specific KPIs support the performance analysis of distribution networks and assist the measurement of the service quality received from contracted transport operators according to the assigned tradelanes.
"It makes much sense in my opinion to measure KPIs related to the agreement with freight forwarders for specific transport destinations since we will be able to understand the performance together with the shipper by reading the same indicators." (Consignee A) Likewise, transport operators explained that KPIs are most relevant to monitoring and controlling their fleet equipment enabling the measurement of fleet performance based on the assets and events that occur. Specifically, the provided utilization of freight transport assets supports an indication for fleet operators on the use of assets and the corresponding economic impact. Overall, asset-based metrics relate to shipments, weight, and emissions generated: "I need my performance metrics to manage our fleet equipment efficiently and decide if I need to change anything. In this way, shared KPIs with customers can be helpful to compete in the market." (Transport Operator) Designing a shared freight service intelligence platform Anomaly detection and automation based on event specifications and notification rules (DP2). The participants confirmed the importance of anomaly detection of FTAs based on events and automated notifications. Therefore, a metrics system to measure the performance of transport orders carried out (e.g., lead time, ontime deliveries), FTA activities (e.g., in motion, stationary), sensor values (e.g., an opened/closed FTA door), and scorings addressing service level, maintenance, and risks is useful for the stakeholders to obtain the real-time status and facilitate immediate actions based on detected irregularities compared to historical data recorded. Herein, we found that our presented scoring approach to provide a metric for measuring event-based performance is a properly addressed area for all participants to make freight operations status visible to the users provided by the Monitoring Cockpit. 
Data-based insights consequently address issues in advance to prioritize transport and fleet management decisions. Moreover, we identified that the collaborative specification of events and notification rules, e.g., email messages upon entering geofences, in a flexible manner is a significant feature for the users: "In my opinion, event management is the most significant proposition of the platform since I am able to determine deviations of transport operations collaboratively based on individual event configurations." (Shipper A)

Risk assessment and prediction of shipment integrity and freight service quality (DP3). In the first evaluation cycle, the participants supported the detection of risks and the prediction of transport performance based on business intelligence tools enabling data analytics. We identified that freight analytics presents an opportunity for economical and qualitative decisions on tradelane services, shipment integrity, and fleet operations with high relevance for managers. For instance, one shipper explained the benefit of the ETA for customer service due to its accuracy in combination with real-world political incidents surfaced via an integrated link (e.g., a news ticker indicating a strike at the port of unloading): "If a container on an ocean vessel is of high interest, I need to know if the port of arrival will be closed due to strikes that may arise, which has an impact on the ETA accuracy. Therefore, the linked information presented in this system is particularly important to inform my customers in advance." (Shipper A) Interestingly, an insurance provider explained that the estimated freight risk presented does not by itself support their claim processing.
Rather, historical data of the entire freight lifecycle, including the sensor values and a log record encompassing all transactions, provide a greater benefit for deciding on an insurance case: "When a consignee declines acceptance of a container because the cold chain was interrupted due to high temperatures, it is of interest for us to identify the time and place of occurrence () based on a temperature profile. () The presented system allows me to individually compile data reports and obtain the information I need to manage the insurance case efficiently." (Insurance provider C)

Secure data exchange and communication among participating transport stakeholders (DP4). All participants confirmed that the Chat feature and its integration into the Document Cockpit module support the efficient handling of freight documents along with the progress of freight transport activities. The usefulness of this DP lies in the direct coordination of issues, addressed by the summary of pending tasks and the integrated app messenger that facilitates instant communication with personal contacts, comparable to existing smartphone app services, as emphasized by a transport operator: "The document cockpit will in my opinion lead to synergies since the shared and secured form of electronic document handling combined with embedded communication does support an efficient document management process." (Transport operator C) However, we identified that insurance providers exchange an extensive number of paper-based documents, especially with logistics service providers in a sparsely digitalized transport environment. Thus, our Document Cockpit module was confirmed especially as an interactive application for shared electronic freight transport information accompanying freight movement: "Your document cockpit for us as an insurer currently represents a 'clockwork' we have never taken a careful look at yet.
() Since you have the order ID, shipper, and consignee details in the platform, it will help the carrier to manage his work and consequently us." (Insurance provider C)

Integration of additional data sources based on a high-level architecture (DP5). During our evaluation study, it became apparent that data integration is necessary to facilitate shared KPIs, anomaly detection, risk assessment, and prediction. Likewise, data exchange must address transport orders to synchronize the information at the shipment level. We found that the participants support data connections with IS components through the integration of transport data, particularly from transport order data sources and third-party data sources that impact FTA operations, based on uniform data interfaces (e.g., APIs, JSON). Furthermore, freight service users and software vendors support data aggregation from various data sources aiming to achieve digital forwarding toward process automation (e.g., an invoice is created in the TMS once real-time data from the FTA communicate that a geofence area has been entered at the customer site). However, it is important to note that merging IoT tracking technologies with transport assets (e.g., telematics-enabled FTAs) is a prerequisite in our research study, addressed by the Freight Assets software module in the Administration App of our prototype: "The integration of data from other information systems is mandatory to support our transport management solutions. Customers rely on updated information in the TMS that is equipped with Power BI and the presented interface module is in my eyes an advantage to connect systems on demand." (Logistics software vendor A)

Evaluation Summary. Overall, our evaluation study indicates that all participants endorse the design principles and approve our developed prototype.
The potential users rated the UIs of our implemented DPs as suitable support for transport stakeholders to enhance freight service operations based on telematics-enabled FTAs. Nevertheless, we identified specific areas for improvement that do not cause major design issues. These comprise a refined consideration of KPIs from the shippers' side and a more condensed view for users based on individual workflow management over the different modes of transport and order transactions, as proposed by the Freight Log software module.

Novelty and practical contribution

In our DSR project, we designed a digital platform that aims at supporting transport stakeholders, i.e., shippers, consignees, transport operators, and insurance providers, to foster shared freight service operations in the global forwarding industry. In particular, the objective was to explore the design of a platform that enables shared freight service intelligence based on data generated by IoT-enabled freight transport assets (e.g., ISO containers, intermodal trailers, swap bodies equipped with mobile telematics units). To accomplish this, we derived requirements from the scientific literature and interviews with practitioners from the transport logistics industry. Subsequently, we consolidated these requirements into meta-requirements. Based on the input from the knowledge base (rigor) and the application of such a platform in practice (relevance), we derived design principles and thereby answered our RQ1. Afterward, we proposed an information architecture, implemented the design principles in a web-based platform application that addresses RQ2, and conducted an evaluation study of the prototype in the first evaluation cycle of our DSR project. During the evaluation phase, the developed platform was presented in the form of instantiated front-end user interfaces to potential users.
The developed prototype allows users to obtain uniform information based on performance metrics and analytical insights into the trend of the conditional state of loaded freight items, transport assets (e.g., events), and the associated risks (e.g., freight damages). Transport operators in particular are provided with fleet management capabilities to operationalize flexible and customized KPIs related to the transport assets in use. The platform likewise provides shippers and consignees with KPIs that depend on tradelane configurations based on formal agreements between shippers and logistics service providers. Thus, the Freight Performance Apps are considered useful, since the transport stakeholders control the KPIs focusing either on tradelanes, transport orders, or FTAs. Tradelane-specific KPIs reveal a performance understanding of FTAs associated with contractual agreements, and the platform therefore assists the alignment of actual freight operations with existing customer service levels. To this end, the shared metrics allow shippers, consignees, transport operators, and insurance providers to make joint decisions according to the performance of freight operations addressing transport orders, FTA activities, the state of goods, and the risks assessed. Our findings extend existing knowledge specifically toward freight integrity resulting from events detected by the platform based on pre-defined values, which offers new data opportunities for freight transport efficiency beyond economic indicators (cf. ).
For this reason, our study reveals that involving insurance providers in freight operations to judge risks or issues regarding established business routines (e.g., tradelanes composed of recurring shipments) and freight service quality (e.g., the rate of shipment damages or delays) is useful for shippers and transport operators alike, supporting tactical decisions on transport configurations (e.g., high stationary time indicating room for improvement) and yielding advanced freight service operations. Moreover, risk assessment can be associated with additional dimensions, such as the political incidents mentioned by Shipper A, to provide improved indicators for advanced freight handling (e.g., addressing transport delays that arise due to strikes). Even though telematics-enabled FTAs support autonomous decision-making associated with intelligent resources (e.g., tagged goods), freight service intelligence focuses on the provision of relevant information fostering individual processes for stakeholders. Since data exchange and communication remain substantial challenges for transport stakeholders in practice, a shared platform solution ultimately requires mechanisms to provide information in the same data format, understand the status of freight operations, and indicate ways of improving the entire end-to-end workflow. In this way, the Monitoring Cockpit combines tracking and tracing information and connects real-time visibility of FTAs and related transport orders with additional features to control freight equipment, geofences, points of interest, and sensor devices, and to manage reports. Shippers argued that tradelane agreements affect the way transport management is conducted collaboratively with transport operators. For instance, an agreement on "round-trips" for specific tradelanes enables shippers to turn into self-freight dispatchers focusing on optimized asset utilization to achieve economic advantages.
Thus, the implementation of assisting features, i.e., a "load radar" that proposes available full-load offers obtained from integrated freight exchanges, is of benefit to assist collaborative freight transportation together with freight dispatchers on the side of transport operators. Regarding the sensor values, insurance providers explained in the study that they primarily require historical data combined with details of secure transactions based on blockchain-assisted "Smart Contracts" (Vivaldini 2020). Obtaining relevant data for decision-making therefore requires precise user management to ensure a diligent use of the platform among all transport stakeholders. We identified the Document Cockpit as a crucial and innovative feature to reduce the time for data exchange and communication within the group of stakeholders. Shippers, consignees, and logistics service providers benefit from the instantiated "single window" concept (Niculescu and Minea 2016) for freight documents, which addresses the collection of all documents required to proceed with a transport order, the assignment of tasks to stakeholders in case of arising issues, and immediate communication via the integrated link to the Chat module. In addition, insurance providers explained that they would participate in the platform to obtain electronic documents for completing their tasks. The study also identified that the platform allows the integration of technologies, such as blockchain, to further protect the data exchange of freight documents. This aspect might attract other stakeholders (e.g., financial institutions, public authorities) and support current developments toward data exchange standards. Likewise, our developed prototype anticipates the emergence of interconnected data exchange for transport operations, for instance, as represented by the European initiative to establish a standard for electronic Freight Transport Information (eFTI).
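The insurers' need, noted above, for historical sensor data of the entire freight lifecycle can be illustrated with a minimal sketch that locates the first cold-chain breach in a recorded temperature profile. The profile format, field order, and threshold are assumptions for illustration, not the platform's data model:

```python
def first_breach(profile, max_temp_c):
    """Scan a recorded temperature profile and return the first reading
    that broke the cold chain, or None if the chain was never interrupted.

    `profile` is a chronological list of (timestamp, (lat, lon), temperature)
    tuples, as a log of sensor values might be compiled for an insurance
    case to identify the time and place of occurrence.
    """
    for timestamp, position, temp in profile:
        if temp > max_temp_c:
            return {"time": timestamp, "position": position, "temp": temp}
    return None
```

Combined with the transaction log, such a lookup would let an insurer pinpoint not only when a reefer container warmed up, but also which leg of the transport chain was responsible.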
Finally, the Chat module reveals further interesting and feasible aspects that support transport operations and management by addressing issues immediately and combining detected events from FTAs while navigating in an individualized environment. Since all transport stakeholders argue that freight forwarding still faces communication and transparency problems, the incorporation of shared and interactive messaging services may enhance freight transport service decisions and quality.

Theoretical contribution

In the presented research, we designed a specific solution to a relevant problem from the day-to-day business of the logistics industry and thereby contribute to theory in the form of prescription. Prescriptive recommendations are typically characterized by design principles (Kuechler and Vaishnavi 2012) and support the guidance of instantiations. Based on the description of our problem, the meta-requirements are addressed by the design principles, which prescribe how the problem can be solved () and thereby represent a theoretical contribution. This contribution is consequently given by the theoretical relationship presented by each combination of the meta-requirements, the design principles, and the proposed information architecture. While our generated knowledge is described as prescriptive knowledge (Gregor and Hevner 2013), the formulated design principles describe an IT artifact (), and we suggest further investigation of the findings by empirical testing or by action research to achieve generalization of the theory (Lee and Baskerville 2003;). To the best of our knowledge, no other study has investigated the design of a shared software platform for telematics-enabled FTAs facilitating freight service intelligence.
Given the complexity of business relationships, legal frameworks, and technological impediments in the freight transport market, this development is highly relevant since transport stakeholders do not use a single platform to manage freight transport collaboratively. Notwithstanding this issue, the capabilities of freight service intelligence toward automated processes and autonomous decision-making explored in this paper are predominantly grounded in the collection, aggregation, and processing of data from various data sources, yielding the provision of information in a uniform cloud environment that facilitates operational interactions among the users. Our investigation of a domain-specific problem yields six meta-requirements that reveal the aspects to be met by a solution supporting the collaborative analysis and anomaly detection of transport assets, tradelanes, and transport orders, as well as collaborative decision-making to achieve efficient transport operations and management. We call this solution the Freight Service Intelligence Platform (FSIP). Even though the system is proposed as a new solution for advanced freight transport operations, existing technologies (e.g., digital platforms for transport management) and analytical data capabilities (e.g., Big Data Analytics, Business Intelligence) serve as a basis for our platform. We therefore generate prescriptive knowledge by adopting known solutions to solve new problems (Gregor and Hevner 2013) and herein position our work. For this reason, prescriptive knowledge is contributed in the form of design principles that address meta-requirements and a developed artifact instantiation (). After the adoption of the platform by transport stakeholders, we expect descriptive knowledge, for instance, from empirical evaluation.
In summary, we provide valuable insights into the emerging topic of uniform interfaces to facilitate electronic data exchange in a dynamic freight forwarding business domain. We establish the theoretical concept of a stakeholder-oriented FSIP and propose an information system architecture for virtualized freight transport chains facilitated by telematics-enabled FTAs. This leads us to a shared digital platform instantiation that addresses the concept of value co-creation through the engagement of the transport stakeholders. Value co-creation applies in the form of shared KPI measurement, anomaly detection, risk assessment including prediction, data exchange including communication, and the integration of additional data sources. The FSIP solution consequently contributes to service science knowledge and proposes a smart service platform "that builds on a smart product to enable direct interactions between two or more distinct but interdependent groups of users to create mutual value" (). Both the Chat module and the Document Cockpit provide input for future directions of enhanced freight service operations. In essence, our study has shown that IoT services enabled by FTAs equipped with mobile telematics support transport management, fleet management, and risk management using a shared platform front-end for transport stakeholders. However, while decentralized intelligence already supports the autonomous behavior of telematics-enabled FTAs (Sternberg and Andersson 2014), it is not yet mature from a service perspective when applied by transport stakeholders in a shared manner. Beyond focusing on service applications by users, the capabilities of freight service intelligence are to be illuminated by focusing on individual activities along the freight lifecycle in the context of smart service systems (cf. ) based on the operational use of telematics-enabled FTAs as a boundary object following the characteristics of smart connected products (Porter and Heppelmann 2014).
From that perspective, it is yet to be explored whether the proposed system can be transferred to other domains, such as the health industry and smart living, using similar and interoperable IoT technologies and leveraging data-driven service systems engineering in a collaborative manner within decentral data ecosystems (e.g., ).

Limitations and future work

Due to the complexity of freight service intelligence and the different transport stakeholders involved in this study, our findings are subject to some limitations. First, while this study provides an adopted solution in an under-researched business domain, our proposed design and prototype were evaluated only qualitatively. Since the development of a more advanced prototype allowing a more comprehensive evaluation would have incurred excessive costs, it was not feasible within the first evaluation cycle. Second, a limitation is the small number of study participants for the evaluation, owing to the limited access to transport stakeholders. Third, although we formulated meta-requirements and design principles on an abstract level that allows the application of our insights to all modes of transport, a more granular focus on dedicated intermodal freight transports (e.g., container transports performed by road trucks, rail, or ocean vessels) would have revealed additional insights into specific transport areas. This applies especially to ocean-based container transports, which present the largest share of the global freight market and have a tradition of developing sensor and network technologies for tracking and monitoring service offerings enabling digital business (;). However, the market entries of providers such as Mecomo give rise to a focal point on technologies for intermodal freight transportation.
The fourth limitation of our DSR project is that the data were collected from four stakeholder groups in the regional market in Germany, which cannot guarantee a full representation of the global freight forwarding market. The involvement of other stakeholders (e.g., customs authorities) from other regional markets (e.g., Asia, North America) may have revealed other requirements. These limitations should be addressed in future research. As a spark toward further exploration, the proposed design should be implemented and evaluated through the usage of a prototype with data from day-to-day business. This allows scholars to explore new features supporting collaborative transport management beyond the existing relationships in traditional business, for instance, the integration of external data sources with an impact on transport operations (e.g., social media) providing analytical insights indicating delays. From the stakeholder perspective, it remains to be explored how the platform ownership can be organized among the participants to capture the value of (multi-sided) platforms based on the service-dominant logic. Thus, we suggest conducting further research focusing on the governance mechanisms and value co-creation within the boundaries of transport chains, which is pivotal to establishing digital platform ecosystems, for instance, as a "consortium" (). Furthermore, the prescriptive theoretical findings may guide future research in designing freight technologies toward smart services supporting logistics platform strategies for organizations (b). To give an example, manufacturers and insurance companies can assess the tradelane-specific performance of transport carriers and identify the number of sub-contracted carriers. Therefore, it could be interesting to establish indicators about the number of sub-contracted carriers used for a tradelane and provide their compliance status based on events.
The service offerings enabled by the FSIP utilize data generated during freight lifecycle processes and open new opportunities for research on emerging digital shadows as an informational representation of transport operations. In this context, future work is likewise necessary to understand how the presented platform can support value creation through interactions among the stakeholders associated with production system networks, for instance, toward innovative business models ().

Conclusion

In this research paper, we constitute freight service intelligence as an emerging interdisciplinary research field that builds a bridge between IS researchers and researchers from other disciplines, i.e., operations research, data science, and service engineering. We believe that freight service intelligence provides new application domains for IS scholars by combining function-oriented IoT technologies and collaborative freight transportation in an overarching stakeholder approach that relies on data to attain efficient processes embedded in organizations. Moreover, we designed a Freight Service Intelligence Platform (FSIP) for transport stakeholders by establishing a DSR project. By investigating the two research questions, our research aimed to (a) examine what requirements should be considered when designing a software platform for freight service intelligence, and (b) show how these requirements can be addressed to conceptualize such a platform. To this aim, we propose an information architecture for a stakeholder-oriented FSIP and five design principles derived from the meta-requirements obtained from the scientific literature and expert interviews with transport stakeholders. Subsequently, we conducted an evaluation study to verify our results based on a developed platform prototype.
By means of our developed prototype, we make a first step toward contributing to design theory for FSIPs through the design principles associated with meta-requirements indicating specific goals for users. Moreover, our contribution is likewise relevant for logistics software vendors, software enterprises, and technological service providers committed to IoT services in logistics and supply chain management in practice, since to our knowledge no platform exists that addresses our identified requirements. It is hoped that the insights from this paper span the gap between industry practice and academic theory for testing freight service intelligence theories within a logistics and supply chain context. This will further contribute to novel design knowledge on a contemporary issue and, in doing so, augment multidisciplinary discussions on the topic.

In which industry does your organization operate?

B. IoT technologies and services for freight transportation
4 How many years of experience do you have with IoT technologies (e.g., sensor devices, telematics) applied to full load transportation (e.g., full truck load, full container load) in the forwarding industry?
5 Can you imagine a situation in which IoT technologies are used to enable data-driven freight services?
6 Did you experience IoT technologies and digital platforms focusing on the support or enhancement of freight services based on the data generated during transportation?

C. Usage of telematics enabling freight service intelligence
7 In your opinion, what real-time information about an "intelligent" logistics object (e.g., a freight unit equipped with mobile telematics) is of interest in global transport chains to generate concrete benefits?
8 What aspects are of importance to support freight operations using mobile telematics from your stakeholder view?
9 Which decisions and processes could be supported by data-driven freight services from telematics-equipped freight units?

D.
Prototyping a freight service intelligence platform 10 Do you see potential for the use of a digital platform providing freight service intelligence enabled by telematics-equipped freight units? 11 In your opinion, what are the limits or restrictions for implementing a freight service intelligence platform? 12 Are there any other aspects that are important we did not mention yet? Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Somali-focused effort could be a model for police serving immigrant populations. Minneapolis police officer Mike Kirchen hopped off his bike on a recent afternoon and strolled through the Cedar-Riverside neighborhood, home to the largest Somali immigrant population in the United States. An hour into his shift, he had given stickers to curious children, stopped traffic to help a woman cross a busy street, moved loitering teens away from a market and talked with a business owner who wanted to file a police report. Kirchen’s work is one thread in a federally funded community-policing initiative begun in January 2013. In a groundbreaking attempt to strengthen ties with Minneapolis’ Somali community, police are working with elders and young people, probation officers, courts, city and county attorney’s offices, business owners and law enforcement experts. In 2011, the U.S. Justice Department’s Bureau of Justice Assistance awarded a $600,000 grant to the Washington, D.C.-based Police Executive Research Forum (PERF), which in turn chose Minneapolis’ Somali-American community as its subject. Justice officials said they were impressed by Minneapolis’ work so far to build trust with its immigrant population. But they also made it clear they want the project to result in a national model other cities can replicate. “There is lots of research out there that points to the importance of how officers treat people and how that translates in building relationships, but this is one of the first projects that attempts to take those concepts and put them into operation,” said Chuck Wexler, PERF’s executive director. Minneapolis police officer Mike Kirchen interacted with Mohamed Salat, left, and Abdi Ali at the Brian Coyle Community Center. When planning for the project started in 2012, police had already made some inroads in the area. At least two Somali-born officers were patrolling Cedar-Riverside, and a safety center had been opened in Riverside Plaza. 
Crime was already declining, so crime reduction did not need to be the overriding goal. Philosophically, the project plays out at the intersection of “procedural justice,” which is how an officer shows objectivity and respect in interactions with people, and “police legitimacy,” a broader community acceptance of police authority and actions as fair and just, Wexler said. The department made those concepts concrete by adding officers, providing cultural training, printing business cards in Somali with officers’ cell numbers, focusing on young chronic offenders and hosting community events. A couple of years before the project became reality, the community was reeling. Young Somali-Americans were being recruited to fight for terrorist groups in war-torn Somalia. A few community members were charged with funding terrorist activity. And in early 2010, a Somali man was charged with killing three people in a Seward market. Although densely populated, the community hadn’t seen the type of police presence that it is now experiencing, said Russom Solomon, owner of the Red Sea Bar & Restaurant. Solomon, who is active in several West Bank and Cedar-Riverside associations, said the initiative is creating some change, but that further progress will come only if attitudes improve within the Police Department. During the project’s planning stages, meetings were held to hear from community members. At one, someone said many taxi drivers were getting tickets and ending up in court, a move perceived as an attempt to damage their livelihood. Another resident complained that racism must be involved when an officer didn’t immediately respond to a 911 call or follow up on a police report. Those involved in the project set to work to change negative perceptions and realities.
Gail Baez, a senior prosecutor in the Hennepin County attorney’s office, meets with community members monthly to get street-level intelligence on emerging crime issues and offenders causing problems. She and others also help residents provide victim impact statements at sentencings “so they can feel they have a voice in the courtroom,” she said. Carla Nielson, a crime-prevention specialist at the community’s safety center, plays the roles of educator and ambassador for the Police Department. At first, residents were hesitant to drop in, she said. Now, she answers inquiries on topics ranging from curfews to human trafficking. Tackling domestic violence, a key component of the project, has produced immediate results. Incident reports have increased as residents realize that reporting violence doesn’t cast shame on the community, said First Precinct Inspector Medaria Arradondo. Interacting with young people also has been a priority. Kirchen coordinates a program that has given away 500 bicycle helmets and a dozen bikes. He often eats lunch at the Brian Coyle Community Center, a gathering place for Somalis. Ibsa Mussa, 22, who volunteers to run the center’s soccer games, says kids share stories with Kirchen as they eat and get to see an officer doing more than responding to a 911 call. Community activist Abdirizak Bihi said the PERF project has “changed the whole landscape,” especially with young people. Denise O’Donnell, Bureau of Justice Assistance director, said preliminary indications are that the project, which is scheduled to wrap up this fall, is working. “We expect that this work will lead to a national model that cities can implement to build stronger trust … resulting in violence reduction and prevention,” she said.
#pragma once

#include <array>
#include <cstdint>
#include <string>

namespace atechips {

class ROM {
public:
  ROM(std::array<uint8_t, 1024> _buf);
  ROM();
  void setBuffer(std::array<uint8_t, 1024> buff);
  const uint8_t get_byte(const uint16_t offset) const;
  const uint16_t get_word(const uint16_t offset) const;
  const std::string get_hex_word(const uint16_t offset) const;
  const std::string disassemble_word(const uint16_t offset) const;
  size_t size();
  const uint16_t operator[](const uint16_t offset) const;

private:
  std::array<uint8_t, 1024> _buffer;
};

} // namespace atechips
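The get_byte/get_word pair implies that 16-bit words are assembled from adjacent bytes of the 1024-byte buffer. A minimal Python sketch of that assembly, assuming byte-indexed offsets and big-endian order (both are assumptions — neither is visible in the header; big-endian is the CHIP-8 convention the project name hints at):

```python
def get_word(buffer, offset):
    """Assemble a 16-bit word from two consecutive bytes.

    Hypothetical model of ROM::get_word: byte-indexed offset and
    big-endian byte order are assumptions, not taken from the header.
    """
    high = buffer[offset]
    low = buffer[offset + 1]
    return (high << 8) | low


rom = [0x12, 0x34, 0xAB, 0xCD]
print(hex(get_word(rom, 0)))  # 0x1234
print(hex(get_word(rom, 2)))  # 0xabcd
```

Under this reading, `operator[]` and `get_hex_word` would be thin wrappers over the same byte-pair lookup.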
Measurement of β-oxidation capacity of biological samples by respirometry: a review of principles and substrates. Oxidation of fatty acids is a major source of energy in the heart, liver, and skeletal muscle. It can be measured accurately using respirometry in isolated mitochondria, intact cells, and permeabilized cells or tissues. This technique directly measures the rate of oxygen consumption or flux at various respiratory states when appropriate substrates, uncouplers, and inhibitors are used. Acylcarnitines such as palmitoylcarnitine or octanoylcarnitine are the commonly used substrates. The β-oxidation pathway is prone to feedforward inhibition resulting from accumulation of short-chain acyl-CoA and depletion of CoA, but inclusion of malate or carnitine prevents accumulation of these intermediaries and CoA depletion.
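A quantity commonly derived from such flux measurements is the respiratory control ratio (RCR): the ADP-stimulated (state 3) oxygen flux divided by the resting (state 4) flux, an index of mitochondrial coupling. A small sketch of the calculation; the flux values below are hypothetical, not taken from the text:

```python
def respiratory_control_ratio(state3_flux, state4_flux):
    """RCR = state 3 (ADP-stimulated) flux / state 4 (resting) flux.

    Fluxes are in arbitrary but consistent units, e.g.
    pmol O2 / (s * mg protein). Higher RCR = better coupling.
    """
    return state3_flux / state4_flux


# Hypothetical fluxes measured with palmitoylcarnitine + malate:
print(respiratory_control_ratio(120.0, 15.0))  # 8.0
```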
#pragma once

#include "il2cpp-config.h"

#ifndef _MSC_VER
# include <alloca.h>
#else
# include <malloc.h>
#endif

#include <stdint.h>
#include <assert.h>
#include <exception>

// System.Runtime.Remoting.Messaging.AsyncResult
struct AsyncResult_t4124112563;
// System.Object
struct Il2CppObject;
// System.Threading.WaitHandle
struct WaitHandle_t1661568373;
// System.Runtime.Remoting.Messaging.IMessageSink
struct IMessageSink_t2257382795;
// System.Runtime.Remoting.Messaging.IMessageCtrl
struct IMessageCtrl_t2256916835;
// System.Runtime.Remoting.Messaging.IMessage
struct IMessage_t600037848;
// System.Runtime.Remoting.Messaging.MonoMethodMessage
struct MonoMethodMessage_t1666929341;

#include "codegen/il2cpp-codegen.h"
#include "mscorlib_System_Runtime_Remoting_Messaging_MonoMet1666929341.h"

// System.Void System.Runtime.Remoting.Messaging.AsyncResult::.ctor()
extern "C" void AsyncResult__ctor_m4145929563 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Object System.Runtime.Remoting.Messaging.AsyncResult::get_AsyncState()
extern "C" Il2CppObject * AsyncResult_get_AsyncState_m1982026226 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Threading.WaitHandle System.Runtime.Remoting.Messaging.AsyncResult::get_AsyncWaitHandle()
extern "C" WaitHandle_t1661568373 * AsyncResult_get_AsyncWaitHandle_m1919809002 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Boolean System.Runtime.Remoting.Messaging.AsyncResult::get_CompletedSynchronously()
extern "C" bool AsyncResult_get_CompletedSynchronously_m448147035 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Boolean System.Runtime.Remoting.Messaging.AsyncResult::get_IsCompleted()
extern "C" bool AsyncResult_get_IsCompleted_m563284563 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Boolean System.Runtime.Remoting.Messaging.AsyncResult::get_EndInvokeCalled()
extern "C" bool AsyncResult_get_EndInvokeCalled_m130757730 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Void System.Runtime.Remoting.Messaging.AsyncResult::set_EndInvokeCalled(System.Boolean)
extern "C" void AsyncResult_set_EndInvokeCalled_m4140056803 (AsyncResult_t4124112563 * __this, bool ___value0, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Object System.Runtime.Remoting.Messaging.AsyncResult::get_AsyncDelegate()
extern "C" Il2CppObject * AsyncResult_get_AsyncDelegate_m1003284646 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.IMessageSink System.Runtime.Remoting.Messaging.AsyncResult::get_NextSink()
extern "C" Il2CppObject * AsyncResult_get_NextSink_m920306782 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.IMessageCtrl System.Runtime.Remoting.Messaging.AsyncResult::AsyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage,System.Runtime.Remoting.Messaging.IMessageSink)
extern "C" Il2CppObject * AsyncResult_AsyncProcessMessage_m1971781732 (AsyncResult_t4124112563 * __this, Il2CppObject * ___msg0, Il2CppObject * ___replySink1, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.IMessage System.Runtime.Remoting.Messaging.AsyncResult::GetReplyMessage()
extern "C" Il2CppObject * AsyncResult_GetReplyMessage_m1791966425 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Void System.Runtime.Remoting.Messaging.AsyncResult::SetMessageCtrl(System.Runtime.Remoting.Messaging.IMessageCtrl)
extern "C" void AsyncResult_SetMessageCtrl_m1503809360 (AsyncResult_t4124112563 * __this, Il2CppObject * ___mc0, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Void System.Runtime.Remoting.Messaging.AsyncResult::SetCompletedSynchronously(System.Boolean)
extern "C" void AsyncResult_SetCompletedSynchronously_m190268221 (AsyncResult_t4124112563 * __this, bool ___completed0, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.IMessage System.Runtime.Remoting.Messaging.AsyncResult::EndInvoke()
extern "C" Il2CppObject * AsyncResult_EndInvoke_m2269051289 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.IMessage System.Runtime.Remoting.Messaging.AsyncResult::SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage)
extern "C" Il2CppObject * AsyncResult_SyncProcessMessage_m2452418033 (AsyncResult_t4124112563 * __this, Il2CppObject * ___msg0, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Runtime.Remoting.Messaging.MonoMethodMessage System.Runtime.Remoting.Messaging.AsyncResult::get_CallMessage()
extern "C" MonoMethodMessage_t1666929341 * AsyncResult_get_CallMessage_m1178656201 (AsyncResult_t4124112563 * __this, const MethodInfo* method) IL2CPP_METHOD_ATTR;
// System.Void System.Runtime.Remoting.Messaging.AsyncResult::set_CallMessage(System.Runtime.Remoting.Messaging.MonoMethodMessage)
extern "C" void AsyncResult_set_CallMessage_m2645023562 (AsyncResult_t4124112563 * __this, MonoMethodMessage_t1666929341 * ___value0, const MethodInfo* method) IL2CPP_METHOD_ATTR;
SAN FRANCISCO (MarketWatch) — Facebook may file documents for an initial public offering on Wednesday, eyeing a valuation of $75 billion to $100 billion, the Wall Street Journal reported Friday. Morgan Stanley is close to winning the IPO deal, while Goldman Sachs is expected to play a key role, according to the Journal, which cited an unnamed source. Read more on Facebook at WSJ.com. Art and message board at Facebook headquarters in Menlo Park, Calif. Speculation of a Facebook IPO heightened this week after news reports of a halt in trading in the company’s privately-held shares on the secondary market, which was said to be a signal that the firm was about to file papers. Trading on the secondary market was reportedly suspended until Monday, according to news reports from the New York Times and Bloomberg that cited unnamed sources. Facebook, the leading social-networking site in the world with more than 800 million users, is widely expected to file for an IPO this year — a deal that’s expected to become one of the biggest public offerings in history. Currently, Facebook has an implied value of roughly $80 billion based on data from SharesPost, a trading platform for privately-held shares of pre-IPO firms. Read more on Facebook’s private valuation at The Tell blog.
The Arizona Interscholastic Association has seen a decline from about 22,000 football participants five years ago to about 19,900 last year. When Paul Moro got off the hill, where he was king of White Mountains football, and came to the Valley five years ago, he had a grand football vision at Poston Butte. The San Tan community looked like fertile football ground that would keep growing. But after two years, the second of which saw the 13-time state coaching champion at Lakeside Blue Ridge go 0-10, Moro left. Dain Thompson, the trusted Blue Ridge assistant who left the pines with Moro, took over the Poston Butte program, only to see an annual decline in participation numbers. And even as he keeps his three programs afloat, Poston Butte has had three junior varsity games and a freshman game canceled because the opposing schools lacked enough players. "From just five years ago, when Coach Moro and I moved down to Poston together, our program numbers have declined by about 35," Thompson said. "That’s about 10-12 less kids on each team." Poston Butte is part of the Florence Unified School District, which two years ago acquired San Tan Valley from the Coolidge district. That redirected some Poston Butte students there, so Poston Butte's enrollment dropped. With more than 2,100 students, Poston was overcrowded, which was why the Florence district consolidated three Coolidge district schools. San Tan Foothills' enrollment was 300 before then. Now it's at 800. Poston has since dropped to 1,460 students. Chris Knutsen, superintendent of the Florence Unified School District, said this has allowed for room to grow across the district without having to build a new school, which he says is a great thing for the district and taxpayers. But it hasn't helped the football programs at Poston Butte and San Tan Valley. San Tan Valley Athletic Director Rick Romero said there are approximately 65 players in three levels and only a couple who came from Poston. 
"I think participation in football is on the decline," Romero said. It has been for the last five years in Arizona. According to AIA Executive Director David Hines, there were about 22,000 football participants in 2013. Last season, he said, there were about 19,900. Is that reason for concern? "I just think parents are more aware of things now and kids have more options," Hines said. The National Federation of State High School Associations last week released findings that participation in sports rose for a 29th consecutive year, to 7,979,986 participants, according to figures provided by the 51 member state associations (including the District of Columbia). Football remains the sport with the most participants, but, even with 1,035,942 participants nationally, numbers declined for a second consecutive year. In 2017, according to the federation, participation in 11-man football declined by two percent (21,465) from the previous year. There has been more awareness across the nation, from the NFL to college to high school to youth football. Concussions. CTE. Deaths. Lawsuits. Jason Henslin, who played football at Fountain Hills and now leads the program, said there has been a decline in participation over the last decade at his alma mater. "The last couple of years we've been able to bring the numbers up slightly but not to the level we would like," Henslin said. "We would prefer to have enough players to have JV and varsity completely separate, but as of now we have 48 players between the two and it makes practice difficult at times." The main contributors to the decline? Students are afraid of injuries, Henslin hears, believing their chances of getting injured are greater in football than in other sports. "They also have a fear of concussions," Henslin said. "I understand the concern. 
I am concerned, as well, which is why we've done everything we can to remove the head from tackling, limit the amount of full contact in practice, and to also protect our players with the latest and greatest in helmet technologies." But Phoenix Arcadia first-year coach Kerry Taylor, who has already made big strides in resurrecting the 2-0 Titans after an 0-10 2017 season, believes it's more than head and knee injuries that keep players away. "That's the easy target to blame it on," Taylor said of concussions. "I believe the main reason for the decline in numbers has to do with the way society is these days. We live in a world that has a culture of, if you don't get what you want, then quit. "If you're not a starter, quit. If a coach doesn't see it the way the player or parent sees it, then quit. We live in a softer world these days where everyone is sensitive to the truth, and sensitive to reality. The decrease in numbers I don't think is coming from the front end of rosters. I think it's coming more from the back end of rosters. The willingness to fight and work hard for the things you want is at a decrease." After being outscored 436-6 in its first six games, Phoenix Sierra Linda last year canceled its game against Peoria Centennial for safety reasons because it was so undermanned after a slew of injuries. The JV game also got canceled that week. This year, under new coach Nate Gill, and playing in a new region that gives the school a better chance at succeeding, Sierra Linda is experiencing a resurgence. It scored 27 points in a season-opening win over Tucson Rincon University, before falling last week to Phoenix Washington 54-6. Gill said he saw a huge increase in numbers this year and expects it to rise past 100 for all three levels next year. "The boost in participation I believe has a lot to do with the groundwork that myself and Coach (Rico) Tipton did collaboratively on campus in the spring to get our students excited about football again," Gill said. 
"We got the students to believe that with the right staff in place, football could in fact be an outlet and a way to college for our student-athletes. "As we say every day at practice, 'Be all in.' I also feel that being in a region where we can compete has bolstered participation, as well." Sierra Linda this year moved from being in that killer region with Centennial to the 5A Union with five Phoenix Union High School District schools. Coaches and districts have had to become more creative to get kids to come out. Florence doesn't have Pop Warner or American Youth Football (AYF), so, to introduce tackle football, sixth- through eighth-grade football is played during the third quarter of the school year, with a district championship. "Not many districts have this model, as it is expensive," Knutsen said. The White Mountains, where Moro became a coaching legend, is no longer home to the 3A powerhouses in Arizona. Moro's last year at Blue Ridge, in 2013, was the last time a White Mountains team won state. A four-year absence without a White Mountains team holding up a gold ball at the end is the longest drought since 1981. The six 3A East schools -- Blue Ridge, Winslow, Holbrook, Payson, Snowflake, Show Low -- are a combined 5-6 after two weeks this season. "Here in small town America our numbers are still pretty good," Snowflake coach Kay Solomon said. "I've got 85 kids in my high school program. We have seventh and eighth grade teams in the junior high and our Little League football program has to turn kids away most every year. "However, we are certainly feeling the trend when many other small schools are not able to field freshmen and/or JV teams to play against. Another issue that I know keeps kids from playing is the mindset that success in football is measured solely by playing time in games. "If they are not playing, in some cases they are not staying."
def sequential_search(number_list, n):
    found = False
    for i in number_list:
        if i == n:
            found = True
            break
    return found


if __name__ == '__main__':
    my_list = range(1, 1000)
    seq_s = sequential_search(my_list, 1889)
    print(seq_s)
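An equivalent early-return variant drops the flag variable. Note that range(1, 1000) covers 1 through 999, so the lookup for 1889 above yields False:

```python
def sequential_search(number_list, n):
    # Return as soon as a match is found; no flag variable needed.
    for i in number_list:
        if i == n:
            return True
    return False


print(sequential_search(range(1, 1000), 500))   # True
print(sequential_search(range(1, 1000), 1889))  # False: range(1, 1000) stops at 999
```

Either form is O(n) in the worst case; the early return simply makes the control flow more direct.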
The five-episode animated series throws some major side-eye from the sidelines. The inaugural College Football Playoff National Championship game on Monday between Ohio State and Oregon is a hot button topic, and not solely for what's about to unfold on the gridiron. The NFL has seen its fair share of intense controversy this year, and while much of that has to do with the individual actions and personal lives of the players, there is also a lot to be said about the culture of the sport and the attitudes of the people in charge. It’s hard not to watch “Football U” and feel kind of gross. It’s so obvious that Coach Pebow is biased (whether he knows it or not is still up for debate), and yet he is the person speaking the loudest. He is still the guy in charge. And maybe it’s uncomfortable to watch because, well, it’s kind of true? He's that guy who gets away with saying whatever he wants because of who he is. Sound familiar? Yeah, probably. Series creator Nick Cannon was not just inspired by football, but also the long running history of cartoons commenting on the cultural zeitgeist. Sometimes it takes laughing at a cartoon to see how quickly hidden biases can slip by and perpetuate without our consent. “Football U” is an important reminder that the way we treat people -- even star athletes -- matters. Everyone is biased in some way or another, and everyone is capable of unlearning those biases, and changing the way they -- intentionally or unintentionally -- treat other people.
#ifndef CLICK_SYNFLOOD_HH
#define CLICK_SYNFLOOD_HH
#include <click/config.h>
#include <click/batchelement.hh>
#include <click/task.hh>
#include <click/timer.hh>
#include <click/notifier.hh>
CLICK_DECLS

/**
 * =c
 * SYNFlood(SRCIP, DSTPORT, I<keywords>)
 *
 * =s tcp
 *
 * Generates a SYN flood
 *
 * =d
 *
 * Sequentially emits TCP SYN packets with increasing source port and IP.
 * The 5-tuple space will be scanned in a round-robin fashion.
 * The output packets are Ethernet packets with the addresses set to 0.
 *
 * The source port will be increased until it reaches the 0 value.
 * Then, it will start from a new source IP and generate SYNs for all the ports.
 * When the IP space is finished, it will wrap and start from 0.0.0.1
 *
 * Keywords:
 *
 * =item SRCIP
 * Initial source IP
 *
 * =item DSTIP
 * Destination IP
 *
 * =item SPORT
 * Source port. Default is 1000
 *
 * =item DPORT
 * Initial destination port. Default is 80
 *
 * =item STOP
 * Stop the driver when the limit is reached. Default true.
 *
 * =item ACTIVE
 * If false, then do not emit packets until the `active' handler is written.
 * Default is true.
 *
 * =item BURST
 * How many packets to generate per each iteration. Default is 32.
 *
 * =item LIMIT
 * Limit the total number of packets to generate. Default is -1 (no limit).
 * The effective number of packets emitted will be the smallest multiple of BURST after LIMIT.
 *
 * =item LEN
 * The length of the generated packets. Default is 60
 *
 * SYNFlood(10.1.1.1, 172.16.1.1, 1, 80, LEN 1400)
 *   -> EtherRewrite(11:11:11:11:11:11, 22:22:22:22:22:22)
 *   -> ToDPDKDevice(0)
 */
class SYNFlood : public BatchElement {
public:
    SYNFlood() CLICK_COLD;
    ~SYNFlood() CLICK_COLD;

    const char *class_name() const override { return "SYNFlood"; }
    const char *port_count() const override { return PORTS_0_1; }
    // const char *processing() const override { return PUSH; }

    int configure(Vector<String> &, ErrorHandler *) CLICK_COLD;
    int initialize(ErrorHandler *) override CLICK_COLD;
    void add_handlers() override CLICK_COLD;

    bool run_task(Task *) override;
    Packet *get_packet(bool push = true);
    Packet *pull(int) override;
#if HAVE_BATCH
    PacketBatch *pull_batch(int, unsigned) override;
#endif
    void run_timer(Timer *timer) override;

private:
    int _active;
    int _stop;
    int64_t _limit;
    Task _task;
    ActiveNotifier _notifier;
    Timer _timer;
    unsigned _burst;
    struct in_addr _sipaddr;
    struct in_addr _dipaddr;
    uint16_t _sport;
    uint16_t _dport;
    uint16_t _len = 60;

    static String read_handler(Element *, void *) CLICK_COLD;
    static int write_handler(const String &, Element *, void *, ErrorHandler *) CLICK_COLD;
};

CLICK_ENDDECLS
#endif
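The scan order the documentation describes (exhaust the 16-bit source-port space, then advance the source IP, wrapping the IP space back to 0.0.0.1) can be modeled in a few lines of Python. This is an illustrative sketch of the documented behavior only, not the element's actual C++ implementation:

```python
def next_flow(src_ip, src_port):
    """Advance (src_ip, src_port) per the SYNFlood documentation:
    increment the 16-bit source port; when it wraps to 0, move to the
    next 32-bit source IP; when the IP space wraps, restart at 0.0.0.1.
    IPs and ports are plain integers here for simplicity."""
    src_port = (src_port + 1) & 0xFFFF
    if src_port == 0:
        src_ip = (src_ip + 1) & 0xFFFFFFFF
        if src_ip == 0:
            src_ip = 1  # wrap to 0.0.0.1 as documented
    return src_ip, src_port


# Port 0xFFFF wraps to 0 and the source IP advances:
print(next_flow(0x0A010101, 0xFFFF))
```

A real generator would emit one SYN per step (in bursts of BURST) until LIMIT is reached, but the tuple progression above is the core of the round-robin scan.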
/** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. */ #pragma once #include <aws/directconnect/DirectConnect_EXPORTS.h> #include <aws/core/utils/memory/stl/AWSString.h> #include <aws/directconnect/model/DirectConnectGatewayAssociationProposalState.h> #include <aws/directconnect/model/AssociatedGateway.h> #include <aws/core/utils/memory/stl/AWSVector.h> #include <aws/directconnect/model/RouteFilterPrefix.h> #include <utility> namespace Aws { namespace Utils { namespace Json { class JsonValue; class JsonView; } // namespace Json } // namespace Utils namespace DirectConnect { namespace Model { /** * <p>Information about the proposal request to attach a virtual private gateway to * a Direct Connect gateway. </p><p><h3>See Also:</h3> <a * href="http://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DirectConnectGatewayAssociationProposal">AWS * API Reference</a></p> */ class AWS_DIRECTCONNECT_API DirectConnectGatewayAssociationProposal { public: DirectConnectGatewayAssociationProposal(); DirectConnectGatewayAssociationProposal(Aws::Utils::Json::JsonView jsonValue); DirectConnectGatewayAssociationProposal& operator=(Aws::Utils::Json::JsonView jsonValue); Aws::Utils::Json::JsonValue Jsonize() const; /** * <p>The ID of the association proposal.</p> */ inline const Aws::String& GetProposalId() const{ return m_proposalId; } /** * <p>The ID of the association proposal.</p> */ inline bool ProposalIdHasBeenSet() const { return m_proposalIdHasBeenSet; } /** * <p>The ID of the association proposal.</p> */ inline void SetProposalId(const Aws::String& value) { m_proposalIdHasBeenSet = true; m_proposalId = value; } /** * <p>The ID of the association proposal.</p> */ inline void SetProposalId(Aws::String&& value) { m_proposalIdHasBeenSet = true; m_proposalId = std::move(value); } /** * <p>The ID of the association proposal.</p> */ inline void SetProposalId(const char* value) { m_proposalIdHasBeenSet = true; 
m_proposalId.assign(value); } /** * <p>The ID of the association proposal.</p> */ inline DirectConnectGatewayAssociationProposal& WithProposalId(const Aws::String& value) { SetProposalId(value); return *this;} /** * <p>The ID of the association proposal.</p> */ inline DirectConnectGatewayAssociationProposal& WithProposalId(Aws::String&& value) { SetProposalId(std::move(value)); return *this;} /** * <p>The ID of the association proposal.</p> */ inline DirectConnectGatewayAssociationProposal& WithProposalId(const char* value) { SetProposalId(value); return *this;} /** * <p>The ID of the Direct Connect gateway.</p> */ inline const Aws::String& GetDirectConnectGatewayId() const{ return m_directConnectGatewayId; } /** * <p>The ID of the Direct Connect gateway.</p> */ inline bool DirectConnectGatewayIdHasBeenSet() const { return m_directConnectGatewayIdHasBeenSet; } /** * <p>The ID of the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayId(const Aws::String& value) { m_directConnectGatewayIdHasBeenSet = true; m_directConnectGatewayId = value; } /** * <p>The ID of the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayId(Aws::String&& value) { m_directConnectGatewayIdHasBeenSet = true; m_directConnectGatewayId = std::move(value); } /** * <p>The ID of the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayId(const char* value) { m_directConnectGatewayIdHasBeenSet = true; m_directConnectGatewayId.assign(value); } /** * <p>The ID of the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithDirectConnectGatewayId(const Aws::String& value) { SetDirectConnectGatewayId(value); return *this;} /** * <p>The ID of the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithDirectConnectGatewayId(Aws::String&& value) { SetDirectConnectGatewayId(std::move(value)); return *this;} /** * <p>The ID of the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& 
WithDirectConnectGatewayId(const char* value) { SetDirectConnectGatewayId(value); return *this;} /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline const Aws::String& GetDirectConnectGatewayOwnerAccount() const{ return m_directConnectGatewayOwnerAccount; } /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline bool DirectConnectGatewayOwnerAccountHasBeenSet() const { return m_directConnectGatewayOwnerAccountHasBeenSet; } /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayOwnerAccount(const Aws::String& value) { m_directConnectGatewayOwnerAccountHasBeenSet = true; m_directConnectGatewayOwnerAccount = value; } /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayOwnerAccount(Aws::String&& value) { m_directConnectGatewayOwnerAccountHasBeenSet = true; m_directConnectGatewayOwnerAccount = std::move(value); } /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline void SetDirectConnectGatewayOwnerAccount(const char* value) { m_directConnectGatewayOwnerAccountHasBeenSet = true; m_directConnectGatewayOwnerAccount.assign(value); } /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithDirectConnectGatewayOwnerAccount(const Aws::String& value) { SetDirectConnectGatewayOwnerAccount(value); return *this;} /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithDirectConnectGatewayOwnerAccount(Aws::String&& value) { SetDirectConnectGatewayOwnerAccount(std::move(value)); return *this;} /** * <p>The ID of the account that owns the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithDirectConnectGatewayOwnerAccount(const char* value) { SetDirectConnectGatewayOwnerAccount(value); return *this;} /** * <p>The 
state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline const DirectConnectGatewayAssociationProposalState& GetProposalState() const{ return m_proposalState; } /** * <p>The state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline bool ProposalStateHasBeenSet() const { return m_proposalStateHasBeenSet; } /** * <p>The state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. 
The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline void SetProposalState(const DirectConnectGatewayAssociationProposalState& value) { m_proposalStateHasBeenSet = true; m_proposalState = value; } /** * <p>The state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline void SetProposalState(DirectConnectGatewayAssociationProposalState&& value) { m_proposalStateHasBeenSet = true; m_proposalState = std::move(value); } /** * <p>The state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline DirectConnectGatewayAssociationProposal& WithProposalState(const DirectConnectGatewayAssociationProposalState& value) { SetProposalState(value); return *this;} /** * <p>The state of the proposal. The following are possible values:</p> <ul> <li> * <p> <code>accepted</code>: The proposal has been accepted. 
The Direct Connect * gateway association is available to use in this state.</p> </li> <li> <p> * <code>deleted</code>: The proposal has been deleted by the owner that made the * proposal. The Direct Connect gateway association cannot be used in this * state.</p> </li> <li> <p> <code>requested</code>: The proposal has been * requested. The Direct Connect gateway association cannot be used in this * state.</p> </li> </ul> */ inline DirectConnectGatewayAssociationProposal& WithProposalState(DirectConnectGatewayAssociationProposalState&& value) { SetProposalState(std::move(value)); return *this;} /** * <p>Information about the associated gateway.</p> */ inline const AssociatedGateway& GetAssociatedGateway() const{ return m_associatedGateway; } /** * <p>Information about the associated gateway.</p> */ inline bool AssociatedGatewayHasBeenSet() const { return m_associatedGatewayHasBeenSet; } /** * <p>Information about the associated gateway.</p> */ inline void SetAssociatedGateway(const AssociatedGateway& value) { m_associatedGatewayHasBeenSet = true; m_associatedGateway = value; } /** * <p>Information about the associated gateway.</p> */ inline void SetAssociatedGateway(AssociatedGateway&& value) { m_associatedGatewayHasBeenSet = true; m_associatedGateway = std::move(value); } /** * <p>Information about the associated gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithAssociatedGateway(const AssociatedGateway& value) { SetAssociatedGateway(value); return *this;} /** * <p>Information about the associated gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithAssociatedGateway(AssociatedGateway&& value) { SetAssociatedGateway(std::move(value)); return *this;} /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline const Aws::Vector<RouteFilterPrefix>& GetExistingAllowedPrefixesToDirectConnectGateway() const{ return m_existingAllowedPrefixesToDirectConnectGateway; } /** * <p>The existing Amazon VPC 
prefixes advertised to the Direct Connect * gateway.</p> */ inline bool ExistingAllowedPrefixesToDirectConnectGatewayHasBeenSet() const { return m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet; } /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline void SetExistingAllowedPrefixesToDirectConnectGateway(const Aws::Vector<RouteFilterPrefix>& value) { m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_existingAllowedPrefixesToDirectConnectGateway = value; } /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline void SetExistingAllowedPrefixesToDirectConnectGateway(Aws::Vector<RouteFilterPrefix>&& value) { m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_existingAllowedPrefixesToDirectConnectGateway = std::move(value); } /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithExistingAllowedPrefixesToDirectConnectGateway(const Aws::Vector<RouteFilterPrefix>& value) { SetExistingAllowedPrefixesToDirectConnectGateway(value); return *this;} /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithExistingAllowedPrefixesToDirectConnectGateway(Aws::Vector<RouteFilterPrefix>&& value) { SetExistingAllowedPrefixesToDirectConnectGateway(std::move(value)); return *this;} /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline DirectConnectGatewayAssociationProposal& AddExistingAllowedPrefixesToDirectConnectGateway(const RouteFilterPrefix& value) { m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_existingAllowedPrefixesToDirectConnectGateway.push_back(value); return *this; } /** * <p>The existing Amazon VPC prefixes advertised to the Direct Connect * gateway.</p> */ inline 
DirectConnectGatewayAssociationProposal& AddExistingAllowedPrefixesToDirectConnectGateway(RouteFilterPrefix&& value) { m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_existingAllowedPrefixesToDirectConnectGateway.push_back(std::move(value)); return *this; } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline const Aws::Vector<RouteFilterPrefix>& GetRequestedAllowedPrefixesToDirectConnectGateway() const{ return m_requestedAllowedPrefixesToDirectConnectGateway; } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline bool RequestedAllowedPrefixesToDirectConnectGatewayHasBeenSet() const { return m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet; } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline void SetRequestedAllowedPrefixesToDirectConnectGateway(const Aws::Vector<RouteFilterPrefix>& value) { m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_requestedAllowedPrefixesToDirectConnectGateway = value; } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline void SetRequestedAllowedPrefixesToDirectConnectGateway(Aws::Vector<RouteFilterPrefix>&& value) { m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_requestedAllowedPrefixesToDirectConnectGateway = std::move(value); } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithRequestedAllowedPrefixesToDirectConnectGateway(const Aws::Vector<RouteFilterPrefix>& value) { SetRequestedAllowedPrefixesToDirectConnectGateway(value); return *this;} /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& WithRequestedAllowedPrefixesToDirectConnectGateway(Aws::Vector<RouteFilterPrefix>&& value) { SetRequestedAllowedPrefixesToDirectConnectGateway(std::move(value)); 
return *this;} /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& AddRequestedAllowedPrefixesToDirectConnectGateway(const RouteFilterPrefix& value) { m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_requestedAllowedPrefixesToDirectConnectGateway.push_back(value); return *this; } /** * <p>The Amazon VPC prefixes to advertise to the Direct Connect gateway.</p> */ inline DirectConnectGatewayAssociationProposal& AddRequestedAllowedPrefixesToDirectConnectGateway(RouteFilterPrefix&& value) { m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet = true; m_requestedAllowedPrefixesToDirectConnectGateway.push_back(std::move(value)); return *this; } private: Aws::String m_proposalId; bool m_proposalIdHasBeenSet; Aws::String m_directConnectGatewayId; bool m_directConnectGatewayIdHasBeenSet; Aws::String m_directConnectGatewayOwnerAccount; bool m_directConnectGatewayOwnerAccountHasBeenSet; DirectConnectGatewayAssociationProposalState m_proposalState; bool m_proposalStateHasBeenSet; AssociatedGateway m_associatedGateway; bool m_associatedGatewayHasBeenSet; Aws::Vector<RouteFilterPrefix> m_existingAllowedPrefixesToDirectConnectGateway; bool m_existingAllowedPrefixesToDirectConnectGatewayHasBeenSet; Aws::Vector<RouteFilterPrefix> m_requestedAllowedPrefixesToDirectConnectGateway; bool m_requestedAllowedPrefixesToDirectConnectGatewayHasBeenSet; }; } // namespace Model } // namespace DirectConnect } // namespace Aws
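The three proposal states documented repeatedly above (`requested`, `accepted`, `deleted`) imply one simple rule: only an accepted proposal yields a Direct Connect gateway association that is available to use. A minimal sketch of that rule — illustrative Python, not part of the SDK:

```python
# The three proposal states named in the documentation above.
PROPOSAL_STATES = {"requested", "accepted", "deleted"}

def association_usable(proposal_state: str) -> bool:
    """Return True only for 'accepted' -- per the docs, the one state in
    which the Direct Connect gateway association is available to use."""
    if proposal_state not in PROPOSAL_STATES:
        raise ValueError(f"unknown proposal state: {proposal_state}")
    return proposal_state == "accepted"
```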
/// add a notify hook for a particular property. users may need to call particular
/// specialised versions of this.
void Set::addNotifyHook(const std::string &s, NotifyHook *hook) const
{
    Property *prop = fetchProperty(s);
    if (prop) {
        prop->addNotifyHook(hook);
    }
}
/*
 * @Description:
 * @Company: thundersdata
 * @Author: 李洪文
 * @Date: 2020-09-01 09:49:40
 * @LastEditors: liuweis
 * @LastEditTime: 2020-12-27 19:46:25
 */
import '@/api';
import { history } from 'umi';

API.system.ping.request({}, { hideError: true }).catch(() => {
  history.push('/user/login');
});
/*!
    \property Q3SpinBox::buttonSymbols
    \brief the current button symbol mode

    The possible values can be either \c UpDownArrows or \c PlusMinus.
    The default is \c UpDownArrows.

    \sa ButtonSymbols
*/
void Q3SpinBox::setButtonSymbols( ButtonSymbols newSymbols )
{
    if ( buttonSymbols() == newSymbols )
        return;
    switch ( newSymbols ) {
    case UpDownArrows:
        d->controls->setButtonSymbols( Q3SpinWidget::UpDownArrows );
        break;
    case PlusMinus:
        d->controls->setButtonSymbols( Q3SpinWidget::PlusMinus );
        break;
    }
}
""" Classes for modelling the plasma, without any matching """ from prepic.base import BaseClass from prepic.constants import r_e import numpy as np import unyt as u dim = u.dimensions @u.accepts(ωp=1 / dim.time, τL=dim.time) def interaction_regime(ωp, τL): """Outputs the laser-plasma interaction regime. Parameters ---------- ωp: float, 1/time Plasma frequency. τL: float, time Laser pulse duration at FWHM in intensity. """ def magnitude(x): """Get order of magnitude of ``x``. >>> magnitude(100) 2 """ return int(np.log10(x)) ω_mag = magnitude((1 / ωp).to_value("femtosecond")) τ_mag = magnitude(τL.to_value("femtosecond")) if ω_mag == τ_mag: return "LWFA" elif τ_mag > ω_mag: return "SMLWFA/DLA" else: raise NotImplementedError("Unknown interaction regime.") class Plasma(BaseClass): """Class containing plasma parameters. Attributes ---------- npe : float, 1/volume Plasma electron (number) density. ωp : float, 1/time Plasma frequency. lp : float, length Unit of length. tp : float, time Unit of time. λp : float, length Plasma skin depth. kp : float, 1/length Plasma wavenumber. Ewb : float, energy/charge/length Cold, 1D wave-breaking field. laser : :obj:`Laser` Instance containing laser params. γp : float, dimensionless Plasma γ factor. Pc : float, energy/time Critical power for self-focusing. dephasing : float, length Electron dephasing length. depletion : float, length Pump depletion length. Ez_avg : float, energy/charge/length Average accelerating field \ in the direction of electron propagation. R : float, length Radius of the plasma bubble. Lacc : float, length Distance over which laser propagates. N : float, dimensionless Estimated number of electrons in the bunch. Q : float, charge Estimated total electron bunch charge. ΔE : float, energy Maximum energy gained by one electron \ propagating for Lacc \ see Lu et al., 2007 Phys. Rev. ST. Accel. Beams. 
η : float, dimensionless Energy transfer efficiency, defined as \ total bunch energy `N` * `ΔE` / laser energy `ɛL` \ under matching conditions, `η` ~ 1 / (2 * a0). Examples -------- >>> import unyt as u >>> Plasma(n_pe=1e18 / u.cm**3) <Plasma(1e+18 cm**(-3), None, None)> """ def __init__(self, n_pe, laser=None, bubble_radius=None, propagation_distance=None): """Creates plasma with given density. Parameters ---------- n_pe : float, 1/volume Plasma electron (number) density. laser : :obj:`Laser`, optional Instance containing laser params. bubble_radius : float, length, optional Radius of the plasma bubble. propagation_distance : float, length, optional Length of plasma region (defaults to `dephasing`). """ self.npe = n_pe.to("1/cm**3") self.λp = np.sqrt(np.pi / (r_e * self.npe)).to("micrometer") self.kp = (2 * np.pi / self.λp).to("1/micrometer") self.ωp = (u.clight * self.kp).to("1/femtosecond") self.Ewb = (u.me * u.clight * self.ωp / np.abs(u.qe)).to("megavolt/mm") self.lp = (u.clight / self.ωp).to("micrometer") self.tp = (1 / self.ωp).to("femtosecond") if laser: self.laser = laser self.γp = (self.laser.ωL / self.ωp).to("dimensionless") self.Pc = (17 * self.γp ** 2 * u.gigawatt).to("terawatt") self.dephasing = ( 4 / 3 * self.γp ** 2 * np.sqrt(self.laser.a0) / self.kp ).to("mm") self.depletion = (self.γp ** 2 * u.clight * self.laser.τL).to("mm") self.Ez_avg = (self.Ewb * np.sqrt(self.laser.a0) / 2).to("megavolt/mm") if propagation_distance: self.Lacc = propagation_distance.to("mm") else: self.Lacc = self.dephasing self.ΔE = (np.abs(u.qe) * self.Ez_avg * self.Lacc).to("megaelectronvolt") if bubble_radius: self.R = bubble_radius.to("micrometer") self.N = (1 / 30 * (self.kp * self.R) ** 3 / (self.kp * r_e)).to( "dimensionless" ) self.Q = (self.N * np.abs(u.qe)).to("picocoulomb") self.η = (self.N * self.ΔE / self.laser.ɛL).to("dimensionless") else: self.R = None else: self.laser = None self.R = None def __eq__(self, other): return super().__eq__(other) def 
__repr__(self): return f"<{self.__class__.__name__}({self.npe}, {repr(self.laser)}, {self.R})>" def __str__(self): msg = f"Plasma with nₚ={self.npe:.1e}, ωₚ={self.ωp:.3f}, kₚ={self.kp:.3f}, λₚ={self.λp:.1f}, Ewb={self.Ewb:.1f}" if self.laser: n_ratio = (self.npe / self.laser.ncrit).to("dimensionless") msg = ( f"Plasma with nₚ={self.npe:.1e} ({n_ratio.to_value('dimensionless'):.2e} × nc), ωₚ={self.ωp:.3f}, " f"kₚ={self.kp:.3f}, λₚ={self.λp:.1f}, Ewb={self.Ewb:.1f}" ) _ = interaction_regime(ωp=self.ωp, τL=self.laser.τL) # fixme # assert regime == "LWFA", regime msg += ( f"\nPc={self.Pc:.1f}, Ldeph={self.dephasing:.2f}, Ldepl={self.depletion:.2f}, " f"ΔE={self.ΔE:.1f} over Lacc={self.Lacc:.2f}" ) if self.R: msg += ( f"\nN={self.N.to_value('dimensionless'):.1e} electrons, Q={self.Q:.1f}, " f"η={self.η.to_value('dimensionless'):.3f}" ) return msg
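`interaction_regime` above compares the order of magnitude of the plasma period `1/ωp` with that of the pulse duration `τL`, both converted to femtoseconds. Stripped of the `unyt` unit machinery, the decision rule reduces to the following plain-Python sketch (for illustration only; the function and argument names here are not part of `prepic`):

```python
import math

def magnitude(x):
    """Order of magnitude of x, as in the nested helper above."""
    return int(math.log10(x))

def regime(plasma_period_fs, pulse_fs):
    """Unit-free restatement of interaction_regime: equal magnitudes
    mean LWFA; a pulse much longer than the plasma period means
    SMLWFA/DLA; anything else is unhandled."""
    w_mag = magnitude(plasma_period_fs)
    t_mag = magnitude(pulse_fs)
    if w_mag == t_mag:
        return "LWFA"
    elif t_mag > w_mag:
        return "SMLWFA/DLA"
    raise NotImplementedError("Unknown interaction regime.")
```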
1. Field of the Invention

The present invention relates to the construction of a manifold reactor constituting an engine exhaust cleansing apparatus for an automobile and other internal combustion engine-driven vehicles and, more specifically, relates to the construction of such a manifold reactor of a triple construction system comprising an inner shell or baffle, an intermediate shell and an outer shell, whereby the inner shell and the intermediate shell are so designed as to be capable of expanding and contracting freely enough in any direction, in relation to the outer shell, by means of a suspension bolt, specifically selected as the center thereof, that fixes the inner shell and the intermediate shell in place.

2. Description of Prior Art

In general, a manifold reactor is an apparatus provided with a lagging reaction chamber for removing the HC and CO contained in the exhaust gas emitted from an internal combustion engine by means of a high-temperature oxidative reaction that makes use of the heat energy held in the exhaust gas itself. It is usually fitted in place in lieu of an exhaust manifold for the purpose of retaining high temperature, and is thus termed a manifold reactor. Usually a double construction system or a triple construction system is adopted for the purpose of elevating the performance of the said high-temperature oxidative reaction; in addition, suitable heat insulating material is incorporated therein to constitute a heat barrier and a heat shield. In the case of the manifold reactor, the inner body of the shell, coming in contact with a high-temperature gas, is subjected to expansion and contraction by heat.

It is desirable for a manifold reactor of the triple construction system, for instance, that the inner shell and the intermediate shell have a construction capable of expanding and contracting under heat freely enough in any direction relative to the outer shell, so as to be kept free from deformation by thermal strain; however, no construction has thus far proved satisfactory enough to meet this requirement.
import csv

from django.contrib import messages
from django.http import HttpResponse
from django.shortcuts import render


def export_modules_csv(request):
    global Module_CSV_Data, global_context
    response = HttpResponse(content_type='text/csv')
    if not Module_CSV_Data:
        messages.error(request, "No modules to export! Please try again.")
        return render(request, 'index.html', global_context)
    response['Content-Disposition'] = 'attachment; filename="modules.csv"'
    try:
        writer = csv.writer(response)
        writer.writerow(["Department_Name", "Department_ID", "Module_Name", "Module_ID",
                         "Faculty", "Credit_Value", "Module_Lead", "Catalogue_Link", "Description"])
        modules = Module_CSV_Data.values_list(
            "Department_Name", "Department_ID", "Module_Name", "Module_ID",
            "Faculty", "Credit_Value", "Module_Lead", "Catalogue_Link", "Description")
        for module in modules:
            writer.writerow(module)
    except Exception:
        messages.error(request, "No modules to export! Please try again.")
        return render(request, 'index.html', global_context)
    return response
If there's one thing that Portlanders are pretty proud of, it's that they're progressive. Now, I am not going to argue that on many things, such as the environment, gay rights and homeless issues, this is a very progressive city. But in my conversations with people of color, many feel that progressive spirit kind of falls off when it comes to race. It's not that people of color necessarily experience acts of overt racism, but they say they deal daily with a covert, benign form of racism. The kind that comes with a smile, and sometimes, good intentions. It could be the comment about a black woman's hair. Or the person who asks the Asian-American student what country he's from. Or the "compliment" about a dark-skinned person looking exotic. Most of us like to think we revile racism -- when we can recognize it. And it's easy to think one's progressive on issues of race while living in the whitest major city in the United States. Here, you can take tests on a range of prejudices and see whether the way you want to see yourself is who you really are. After you take the test, I'd like to hear your thoughts. But be careful. The results can be unsettling.
PowerMath: a system for the Macintosh

PowerMath is a symbolic algebra system for the Macintosh computer. This paper outlines the design decisions that were made during its development, and explains how the novel Macintosh environment helped and hindered the development of the system. While the interior of PowerMath is fairly conventional, the user interface has many novel features. It is these that make PowerMath not just another micro-computer algebra system.
/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.camel.impl;

import org.junit.Test;

import org.apache.camel.CamelContext;
import org.apache.camel.ContextTestSupport;
import org.apache.camel.StartupListener;
import org.apache.camel.builder.RouteBuilder;

/**
 * @version
 */
public class StartupListenerTest extends ContextTestSupport {

    private MyStartupListener my = new MyStartupListener();

    @Override
    protected CamelContext createCamelContext() throws Exception {
        CamelContext context = super.createCamelContext();
        context.addStartupListener(my);
        return context;
    }

    private static class MyStartupListener implements StartupListener {
        private int invoked;
        private boolean alreadyStarted;

        public void onCamelContextStarted(CamelContext context, boolean alreadyStarted) throws Exception {
            invoked++;
            this.alreadyStarted = alreadyStarted;
            if (alreadyStarted) {
                // the routes should already have been started, as we add the listener afterwards
                assertTrue(context.getRouteStatus("foo").isStarted());
            } else {
                // the routes should not have been started, as they start afterwards
                assertTrue(context.getRouteStatus("foo").isStopped());
            }
        }

        public int getInvoked() {
            return invoked;
        }

        public boolean isAlreadyStarted() {
            return alreadyStarted;
        }
    }

    @Test
    public void testStartupListenerComponent() throws Exception {
        // and now the routes are started
        assertTrue(context.getRouteStatus("foo").isStarted());

        getMockEndpoint("mock:result").expectedMessageCount(1);

        template.sendBody("direct:foo", "Hello World");

        assertMockEndpointsSatisfied();

        assertEquals(1, my.getInvoked());
        assertFalse(my.isAlreadyStarted());
    }

    @Test
    public void testStartupListenerComponentAlreadyStarted() throws Exception {
        // and now the routes are started
        assertTrue(context.getRouteStatus("foo").isStarted());

        MyStartupListener other = new MyStartupListener();
        context.addStartupListener(other);

        getMockEndpoint("mock:result").expectedMessageCount(1);

        template.sendBody("direct:foo", "Hello World");

        assertMockEndpointsSatisfied();

        assertEquals(1, other.getInvoked());
        assertTrue(other.isAlreadyStarted());
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:foo").routeId("foo").to("mock:result");
            }
        };
    }
}
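The two tests above pin down the contract: a listener registered before startup is called with `alreadyStarted == false` once the context starts, while a listener added to an already-running context is called back immediately with `alreadyStarted == true`. That callback contract can be sketched outside Camel (a toy Python model, names invented for illustration — this is not Camel's API):

```python
class MiniContext:
    """Toy model of the startup-listener contract exercised above."""

    def __init__(self):
        self.started = False
        self._listeners = []

    def add_startup_listener(self, listener):
        self._listeners.append(listener)
        if self.started:
            # Late registration: invoke immediately, flagging alreadyStarted.
            listener(self, True)

    def start(self):
        self.started = True
        for listener in self._listeners:
            # Normal path: notified at startup, alreadyStarted is False.
            listener(self, False)
```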
import {Command} from '@oclif/core'
import {NewService} from '../services/new-service'

export default class New extends Command {
  static description = 'Create your project at Dotenv Vault'

  static examples = [
    '<%= config.bin %> <%= command.id %>',
  ]

  static args = [
    {
      name: 'dotenvProject',
      required: false,
      description: 'Set .env.project identifier. Defaults to user prompt.',
      hidden: false,
    },
  ]

  public async run(): Promise<void> {
    const {args} = await this.parse(New)
    const dotenvProject = args.dotenvProject

    new NewService({cmd: this, dotenvProject: dotenvProject}).run()
  }
}
/**
 * setup scene: fluid pours into a bowl
 */
void createScene()
{
    SceneGraph& scene = SceneGraph::getInstance();
    scene.setUpperBound(Vector3f(1.5, 1, 1.5));
    scene.setLowerBound(Vector3f(-0.5, 0, -0.5));

    std::shared_ptr<StaticBoundary<DataType3f>> root = scene.createNewScene<StaticBoundary<DataType3f>>();
    root->loadCube(Vector3f(-0.5, 0, -0.5), Vector3f(1.5, 2, 1.5), 0.02, true);
    root->loadSDF("../../Media/bowl/bowl.sdf", false);

    std::shared_ptr<RigidBody<DataType3f>> rigidbody = std::make_shared<RigidBody<DataType3f>>();
    root->addRigidBody(rigidbody);
    rigidbody->loadShape("../../Media/bowl/bowl.obj");
    rigidbody->setActive(false);

    auto renderModule = std::make_shared<RigidMeshRender>(rigidbody->getTransformationFrame());
    renderModule->setColor(Vector3f(0.8, std::rand() % 1000 / (double)1000, 0.8));
    rigidbody->getSurface()->addVisualModule(renderModule);

    std::shared_ptr<ParticleFluid<DataType3f>> fluid = std::make_shared<ParticleFluid<DataType3f>>();
    root->addParticleSystem(fluid);
    fluid->loadParticles(Vector3f(0.5, 0.2, 0.4), Vector3f(0.7, 1.5, 0.6), 0.005);
    fluid->setMass(100);

    auto ptRender = std::make_shared<PointRenderModule>();
    ptRender->setColor(Vector3f(1, 0, 0));
    ptRender->setColorRange(0, 4);
    fluid->addVisualModule(ptRender);

    fluid->currentVelocity()->connect(&ptRender->m_vecIndex);
}
import { Flex, Box, HStack, Link } from '@chakra-ui/layout' import { Button, useColorModeValue, Image, Text, Skeleton } from '@chakra-ui/react' import { Tr, Td } from '@chakra-ui/table' import { Tag } from '@chakra-ui/tag' import { numberFormatter } from 'utils/helpers' import { bnOrZero } from 'utils/math' import { AprLabel } from './AprLabel' type GenericStakingRowProps = { apy?: string | null tvl?: string | null assetImage?: string assetImageSecondary?: string assetName?: string assetDescription?: string network?: string rewardsImage?: string url?: string urlLabel?: string aprFallbackLabel?: string } export const GenericStakingRow = ({ apy, tvl, assetImage, assetImageSecondary, assetName, assetDescription, rewardsImage, network, url, urlLabel, aprFallbackLabel }: GenericStakingRowProps) => { const bg = useColorModeValue('gray.100', 'gray.750') const renderApy = (size?: string) => { return ( <> {apy === null && aprFallbackLabel ? ( <Link href={url} isExternal> <Tag colorScheme='gray'>{aprFallbackLabel}</Tag> </Link> ) : apy ? ( <> <AprLabel size={size || 'md'} apr={apy ?? '-'} /> </> ) : ( <Text>-</Text> )} </> ) } return ( <Tr _hover={{ bg }}> <Td> <Flex minWidth={{ base: '100px', lg: '250px' }} alignItems='center' flexWrap='nowrap'> <Flex mr={2}> <Image src={assetImage} boxSize={{ base: '30px', lg: '40px' }} boxShadow='right' zIndex={0} mr={assetImageSecondary ? 
-3 : undefined} borderRadius='full' /> {assetImageSecondary && ( <Image src={assetImageSecondary} boxSize={{ base: '30px', lg: '40px' }} borderRadius='full' /> )} </Flex> <Box> <Text fontWeight='bold'>{assetName}</Text> <Text color='gray.500' fontSize='sm' display={{ base: 'none', lg: 'table-cell' }}> {assetDescription} </Text> <Skeleton display={{ base: 'inline-flex', lg: 'none' }} isLoaded={apy !== undefined}> {renderApy('sm')} </Skeleton> </Box> </Flex> </Td> <Td display={{ base: 'none', lg: 'table-cell' }}> <Skeleton isLoaded={apy !== undefined}>{renderApy()}</Skeleton> </Td> <Td display={{ base: 'none', md: 'table-cell' }}> <Skeleton isLoaded={tvl !== undefined}> {tvl === null ? ( <Text>-</Text> ) : ( <Text>${numberFormatter(bnOrZero(tvl ?? null).toNumber(), 2)}</Text> )} </Skeleton> </Td> <Td display={{ base: 'none', md: 'table-cell' }}> <Tag colorScheme='purple' textTransform='capitalize'> {network} </Tag> </Td> <Td display={{ base: 'none', md: 'table-cell' }}> <HStack> {rewardsImage ? <Image src={rewardsImage} boxSize='24px' /> : <Text>-</Text>} </HStack> </Td> <Td display={{ base: 'none', md: 'table-cell' }}>-</Td> <Td display={{ base: 'block', md: 'table-cell' }}> <Button isFullWidth colorScheme='green' as={Link} href={url} isExternal> {urlLabel} </Button> </Td> </Tr> ) }
#include <iostream>
#include "../src/driver.h"

int main(int argc, char *argv[])
{
    std::cout << "Tiger Compiler" << std::endl;
    Tiger::Driver driver;
    for (int i = 1; i < argc; ++i) {
        if (argv[i] == std::string("-p"))
            driver.trace_parsing = true;
        else if (argv[i] == std::string("-s"))
            driver.trace_scanning = true;
        else {
            std::cout << "Opening file: " << argv[i] << std::endl;
            driver.parse(argv[i]);
        }
    }
    return 0;
}
import React, { useState } from 'react'; import { Label } from '../../Typography'; import { PhoneInputFieldProps } from '../FormFields.types'; import { StyledFieldset, StyledInputContainer, StyledLegend, } from '../InputContainer.styled'; import { FieldInput } from './FieldInput'; export type CountryProps = { cca2: string; flag: string; idd: string; name: string; }; export const PhoneInputField = ({ varient, isError = false, name, label, errorMessage, ...props }: PhoneInputFieldProps) => { const [selectedCountry, setSelectedCountry] = useState<CountryProps>({ cca2: 'none', flag: '', idd: '', name: '', }); const [fieldError, setFieldError] = useState(false); return ( <> {varient === 'outlined' ? ( <StyledFieldset style={{ display: 'flex', flexDirection: 'row' }} disableFloat varient={varient} isError={isError || fieldError} > <FieldInput setSelectedCountry={setSelectedCountry} selectedCountry={selectedCountry} varient={varient} isError={isError} fieldError={fieldError} setFieldError={setFieldError} name={name} label={label} errorMessage={errorMessage} {...props} > <StyledLegend isError={isError || fieldError} disableFloat className="legend" > {label} </StyledLegend> <StyledLegend disableFloat className="float-legend" isError={isError || fieldError} children={label} /> </FieldInput> </StyledFieldset> ) : ( <StyledInputContainer disableFloat display="flex" flexDirection="row" justifyContent="center" alignItems="flex-end" varient={varient} isError={isError || fieldError} > <FieldInput setSelectedCountry={setSelectedCountry} selectedCountry={selectedCountry} varient={varient} isError={isError} fieldError={fieldError} setFieldError={setFieldError} name={name} label={label} errorMessage={errorMessage} {...props} > <Label style={{ position: 'absolute', top: '0', left: '1rem' }} htmlFor={name} children={label} textColor={isError || fieldError ? 'error' : undefined} /> </FieldInput> </StyledInputContainer> )} </> ); };
Adaptive detection and classification system for power quality disturbances

This paper describes an intelligent measurement system for Power Quality (PQ) assessment. The computational core is based on Higher Order Statistics (HOS), and the intelligent decision system is based on the Case-Based Reasoning (CBR) paradigm, which can re-configure its parameters according to the conditions of the power network. The power signal is characterized using a sliding-window procedure, calculating the variance, the skewness and the kurtosis over the points inside the window. Those values are fed into the CBR system, and the signal state is returned. If the signal is healthy, the system studies the current HOS values in order to update the normal-state reference of the CBR system. This procedure attains a precision of over 90%.
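The characterization step described in the abstract — sliding a window along the signal and computing the variance, skewness and kurtosis of the points inside it — can be sketched as follows. This is an illustrative NumPy implementation; the window length, step size, and the use of non-excess kurtosis are assumptions, not details taken from the paper:

```python
import numpy as np

def hos_features(signal, window, step):
    """Slide a window over `signal`; for each position return the
    (variance, skewness, kurtosis) of the samples inside it.
    Kurtosis here is the raw (non-excess) fourth standardized moment."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = np.asarray(signal[start:start + window], dtype=float)
        mu = w.mean()
        sigma = w.std()          # population std (ddof=0)
        var = sigma ** 2
        # Guard against a flat window, where the moments are undefined.
        skew = np.mean((w - mu) ** 3) / sigma ** 3 if sigma > 0 else 0.0
        kurt = np.mean((w - mu) ** 4) / sigma ** 4 if sigma > 0 else 0.0
        feats.append((var, skew, kurt))
    return feats
```

For a pure sinusoid the features take known values (variance 0.5, skewness 0, kurtosis 1.5), which is what makes them useful for flagging disturbances that deviate from the healthy waveform.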
Metastable States Observed by Optical Absorption of DX Centers in AlxGa1-xAs:Te With the use of thick n-type AlxGa1-xAs:Te crystals (x = 0.29, 0.37 and 0.46) between 10 K and 300 K, optical absorption of deep levels, the so-called DX centers, has been measured sequentially under (a) thermal equilibrium and (b) non-equilibrium states induced by photoionization of the deep DX center. The absorption that appears at the expense of the deep DX-center absorption has a peak at 0.56 eV when x = 0.46; this band is related to the metastable persistent photoconductivity. The intensity correlation of the two absorptions indicates the presence of another metastable state which is optically inactive. This state is interpreted as a neutral donor state with reference to a negative-U model of DX centers.
Pathways of Care: targeting the early childhood sector for early intervention While many people with a mental illness care for young children, there is a paucity of resources for these families and the professionals working with them. The purpose of this paper is to describe a new online resource, Pathways of Care, specifically designed for parents with a mental illness, early childhood educators, and mental health workers, and to report on a pilot evaluation of the resource. Using a mixed-method design, the effectiveness of the online resource in improving worker confidence and knowledge and effecting family-focused practice change was examined. Pathways of Care aims to promote collaborative practice between agencies, identify relevant agencies, and support workers in talking to parents about mental illness in families. Fifteen workers completed the Family Focus Mental Health Practice Questionnaire pre- and post-viewing the resource, to measure confidence and practice change; semi-structured interviews were then conducted with eight of these same workers to further explore the utility of the resource. Findings tentatively indicate that the resource was effective in increasing worker knowledge and confidence. This study highlights the importance of the development and provision of resources, such as Pathways of Care, to promote collaboration between service providers in the early childhood and mental health sectors working with families with young children.
Lee Hyori and Uhm Jung Hwa have reportedly worked together on a new track! According to insiders, Uhm Jung Hwa has been preparing a new album for a while and is currently taking the last steps to complete the album's production. Among the new songs on the upcoming album, Lee Hyori is reported to have featured on one to support her sunbae Uhm Jung Hwa. However, it was also revealed that Lee Hyori has no plans for broadcast promotions. In the past, Lee Hyori made a cameo appearance in the movie 'Dancing Queen', starring Uhm Jung Hwa and Hwang Jung Min. Many once again look forward to Lee Hyori and Uhm Jung Hwa working together, especially since Lee Hyori has been away from promotions for years. Stay tuned for updates on Uhm Jung Hwa's new track featuring Lee Hyori.
ContraPolice: a libc Extension for Protecting Applications from Heap-Smashing Attacks In today's computer security, buffer overflows are a huge problem, most of the time caused by inexperienced programmers using an inadequate language without fully understanding all the consequences of using it. One such language is C, an imperative programming language developed in the early 1970s by Dennis Ritchie. Unfortunately, at that time hardly anybody was aware of the big problems that insecure buffer handling could cause. This led to a set of insecure library functions in the first versions of the C standard library that eventually got standardized together with the C language and that are still widely used by inexperienced C programmers, and even by their teachers, since that was the way they learned it themselves. And to keep up backward compatibility, even newer versions of the C standard (i.e. C99) still contain these insecure functions. The fundamental problem of buffer overflows is that memory is accidentally overwritten that is interpreted as, e.g., a function pointer or return address inside the program. The cause is that programmers often enough don't really care about input validation, including input length. Programmers still use insecure functions like
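To illustrate the unsafe pattern the paper criticizes, here is a minimal sketch contrasting an unbounded strcpy with a bounded snprintf alternative. The wrapper names are illustrative, not from the paper, and this is only a sketch of the general hazard, not of ContraPolice itself.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy copies until the source's NUL byte, with no idea how large
 * dst is. If src is longer than dst, adjacent memory (heap metadata, return
 * addresses, function pointers) is silently overwritten. */
void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Bounded alternative: snprintf never writes more than dstlen bytes and
 * always NUL-terminates the destination, truncating if necessary. */
void copy_bounded(char *dst, size_t dstlen, const char *src) {
    snprintf(dst, dstlen, "%s", src);
}
```

The bounded version trades silent memory corruption for explicit truncation, which the caller can detect via snprintf's return value if needed.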
/*
 * Copyright Hyperledger Besu Contributors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 */
package org.hyperledger.besu.plugin.data;

import org.hyperledger.besu.datatypes.Address;
import org.hyperledger.besu.datatypes.Quantity;

import org.apache.tuweni.units.bigints.UInt64;

/**
 * A withdrawal is a system-level operation to support validator withdrawals that are pushed from
 * the beacon chain to the EVM.
 */
public interface Withdrawal {

  /**
   * A monotonically increasing index, starting from 0, that increments by 1 per withdrawal to
   * uniquely identify each withdrawal.
   *
   * @return withdrawal index
   */
  UInt64 getIndex();

  /**
   * Validator index of the validator on the consensus layer the withdrawal corresponds to.
   *
   * @return validator index
   */
  UInt64 getValidatorIndex();

  /**
   * Recipient for the ether to be withdrawn to.
   *
   * @return recipient address
   */
  Address getAddress();

  /**
   * Amount of ether to be withdrawn and credited to the recipient address.
   *
   * @return withdrawn ether amount
   */
  Quantity getAmount();
}
Los Angeles Blades (WHL) History Following the 1960-61 season, Spokane Comets owner Mel Smith informed the WHL that he was considering moving his team to either Los Angeles or San Francisco. At the same time, Los Angeles Sports Arena general manager Bill Nicholas revealed that he intended to affiliate with the WHL if he could not gain an NHL franchise. As a result, the WHL inspected both the Sports Arena and the Cow Palace near San Francisco to evaluate their readiness for possible expansion. On April 23, 1961, the WHL approved the transfer of the Victoria Cougars to a Los Angeles-based ownership group headed by James Piggott and Los Angeles Rams owner Dan Reeves. The WHL also approved a conditional expansion franchise for San Francisco on the same day, creating an all-California rivalry that would begin in October 1961. After finishing 25-39-6 in their inaugural season, the Blades improved to 35-32-3 in 1962-63, led by coach Jack Bownass and fleet left wing Willie O'Ree, the NHL's first black player. Los Angeles won its playoff opener over San Francisco, only to lose the next two games and the best-of-three series to the Seals. The Blades' breakout year came in 1963-64, when Alf Pike took over as coach. While Los Angeles finished at .500 (31-31-8), the Blades made it all the way to the WHL finals, where the San Francisco Seals defeated Los Angeles in six games. Pike's biggest impact on the Blades came when he shifted O'Ree - who'd lost the vision in his right eye to a puck during his junior hockey days - from left wing to right. O'Ree went on to become one of the WHL's most exciting players and prolific scorers, improving from 17 goals in 1963-64 to 38 in 1964-65 and scoring 30 or more goals in three consecutive seasons in Los Angeles. But the Blades were unable to match O'Ree's artistry, failing to make the playoffs in their final three seasons in the WHL. On Feb. 
9, 1966, the National Hockey League - sensing a possible merger between the WHL and the American Hockey League - awarded expansion franchises to Los Angeles, Minneapolis, Philadelphia, Pittsburgh, St. Louis and San Francisco for the 1967-68 season. Jack Kent Cooke was awarded the Los Angeles franchise, which would be called the Kings; the Blades played their final game in April 1967. The Blades name was revived twice - once for a short-lived franchise in the Pacific Hockey League in 1978-79 and again for a franchise in Roller Hockey International from 1993-97. The last link to the Los Angeles Blades is the Saskatoon Blades of the major junior Western Hockey League, founded as a feeder team for Los Angeles in 1964; the Saskatoon club wore hand-me-down Los Angeles Blades uniforms into the 1970s.
#include "stdafx.h"
#include "EOSAIMain.h"
#include "Interproc.h"

EOSAI::Main* g_pEOSAIMain = NULL;

using namespace EOSAI;

Main* EOSAI::Main::s_pMain;

Main::Main()
{
    s_pMain = this;
    g_pEOSAIMain = this;
}

Main::~Main()
{
    // free the memory
    g_pEOSAIMain = NULL;
}

void Main::InitializeInterprocessCommunication()
{
    Interproc::Initialize();
}

void Main::ShutdownInterprocessCommunication()
{
    Interproc::Shutdown();
}

CEOSAIRegionManager2* Main::GetAIRegionManager()
{
    return m_AICommonData.GetAIRegionManager();
}
package com.sparrow.oms.config;

import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.baomidou.mybatisplus.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.plugins.PerformanceInterceptor;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

import javax.sql.DataSource;

//@EnableTransactionManagement
@Configuration
@MapperScan({
    "com.sparrow.mapper",
})
public class MybatisPlusConfig {

    /**
     * mybatis-plus performance analysis interceptor<br>
     * Docs: http://mp.baomidou.com<br>
     */
    @Bean
    @Profile({"dev", "test"})
    @ConditionalOnExpression("${mybatisPlus.performanceInterceptorEnabled:false}")
    public PerformanceInterceptor performanceInterceptor() {
        return new PerformanceInterceptor();
    }

    @Bean
    public PaginationInterceptor paginationInterceptor() {
        PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
        // Enable PageHelper-style local pagination support
        paginationInterceptor.setLocalPage(true);
        return paginationInterceptor;
    }

    @Bean
    @ConfigurationProperties("spring.datasource")
    public DataSource dataSource() {
        return DruidDataSourceBuilder
                .create()
                .build();
    }
}
PHENOTYPIC AND GENOTYPIC CHARACTERIZATION OF PSEUDOMONAS SAVASTANOI PV. SAVASTANOI CAUSING OLIVE KNOT DISEASE IN TURKEY. Olive knot disease caused by Pseudomonas savastanoi pv. savastanoi (Psv) is one of the major diseases affecting olive (Olea europaea L.) production in Turkey. The disease incidence rate was found to range between 4 and 80% according to the 2015 and 2018 surveys. A total of 67 isolates were recovered from 7 symptomatic Turkish olive cultivars on a semi-selective medium, PVF-1, and then identified as Psv by biochemical and molecular tests. The isolates produced characteristic gall symptoms on olive plants and were consistently re-isolated. Fatty acid methyl ester (FAME) analysis indicated that the major fatty acid components were oleic acid (18:1), palmitoleic acid (16:1), and palmitic acid (16:0), and also clustered the olive strains into 2 groups. The repetitive element palindromic PCR (rep-PCR) primer, BOXA1R, produced a discriminatory profile, with amplicon sizes ranging from ~200 bp to 2 kb, and categorized the olive strains into 2 separate groups. Pulsed-field gel electrophoresis (PFGE) differentiated the olive Psv isolates into 3 discrete haplotype groups after the genomic DNA was digested with SpeI. This is the first study using PFGE to determine the genetic diversity of the Psv olive population. Introduction The olive tree (Olea europaea L.) is one of the most significant and prominent fruit trees, found in western and central Italy and Spain, southern Morocco and Tunisia, western Turkey, and Greece (Loumou and Giourga, 2003). It serves as a source of edible fruit and oil for millions of people in various parts of Turkey. At present, global olive oil production exceeds 2,500,000 tons. In 2015, the aggregate production by the member states of the International Olive Oil Council (IOC) was 2,964,500 tons, 94% of the global production. 
EU production was 2,322,000 tons, whereas the individual IOC member state production showed Spain to be leading with 1,401,600 tons, followed by Italy with 474,600 tons, and then by Greece with 320,000 tons. Turkey produced 143,000 tons, Tunisia 140,000 tons, Morocco 130,000 tons, Algeria 83,500 tons, and Iran 5,000 tons (International Olive Oil Council, 2015). Olive knot disease is perhaps the earliest plant disorder to be specifically reported in ancient times, and is regarded as one of the most critical diseases affecting olive trees. Olive knot disease is characterised by the production of hyperplastic galls on several plant parts (Nester and Kosuge, 1981). The disease is a serious threat to olive production in the Mediterranean basin, including Turkey, where several climatic conditions, including rain, wind, humidity, and temperature, favour the pathogen. Reference strains (PssI7, PssI21, PssI24, Psn9) obtained from surveys in Antalya province of Turkey in 2000 (Basım and Ersoy, 2000) were also included in this study (Table 2). The Myrtus strain, PssI24, and the Nerium strain, Psn9, were used for clustering. The disease rates within the olive plantations from which the contaminated samples were collected were assessed utilizing the equation defined by Bansal et al. Putative Psv isolates were identified by several biochemical tests, including LOPAT, according to Schaad et al. The tests were carried out using flat, greyish-white, semi-translucent colonies with irregular margins grown on PVF-1 medium. The identity of the Psv isolates was confirmed by GC-FAME analysis and PCR. The Psv isolates were identified directly from bacterial suspensions as well as from purified genomic DNA by PCR utilizing the primers IAALN1/IAALN2 and PSS1/PSS2; PSS3/PSS4 (Basım and Ersoy, 2001). The polymerase chain reaction (PCR) conditions are given in Table 1. Pathogenicity test For putative Psv isolates, pathogenicity tests were carried out by inoculating the stems of one-year-old plants of the Gemlik olive cultivar. 
The bark of the stem was wounded with a sterilized needle dipped in bacterial suspension containing ~10^8 CFU/ml, and the wounds were then covered with Parafilm for 3 days. The inoculated olive plants were kept in a controlled room at 25 ± 2 °C and 80-85% RH and monitored for symptom development according to Surico and Lavermicocca. Olive plants similarly treated with reference strains or sterile dH2O were used as positive and negative controls, respectively. GC-FAME analysis GC-FAME analyses were performed on each Psv isolate according to the manufacturer's specifications to determine the phenotypic characteristics of the isolates. The Psv isolates were grown on tryptic soy broth agar at 27 °C for 24 h. A loopful of each Psv isolate was blended in a glass tube containing 1.2 N NaOH in methanol:H2O. The tubes were briefly vortexed and kept in a boiling water bath for 5 min. The tubes were vortexed firmly once more for 10 s and returned to the boiling water bath for 30 min of heating to complete the reaction. Two ml of methylation solution (325 ml 6.0 N hydrochloric acid, 275 ml methanol) was added to the tubes after they had cooled to room temperature. The samples were briefly vortexed and heated at 80 °C for 10 min. The samples were cooled quickly on ice, and then 1.25 ml of the extraction solution (n-hexane/methyl tert-butyl ether, 1:1 v/v) was added. The samples were gently mixed using a tube rotator for 10 min. The lower aqueous phases of the samples were discarded with a micropipette. Three milliliters of the sample clean-up solution (10.8 g NaOH dissolved in 900 ml H2O) was added to each sample and gently mixed for 5 min. Then ~2/3 of the organic phase of each sample was transferred into a GC vial. These final extracts were analysed with the HP 6980 GC System (Agilent Technologies, CA, USA) and the MIDI system (Microbial ID Inc., USA) with a 25 m × 0.2 mm silica capillary column. 
The 67 Psv isolates were identified and phenotypically characterized based on FAME composition by dendrogram analysis using MIDI software version 6.0. DNA extraction Genomic DNA from the Psv isolates was extracted with a modified CTAB method (Doyle and Doyle, 1990) and dissolved in 50 µl TE buffer. The DNA concentration was adjusted to 100 ng/µl with TE using a Nanodrop (Thermo Fisher Scientific, Waltham, MA, USA), after which the DNA solution was stored at 4 °C. PFGE The Psv isolates were cultured in 5 ml Nutrient Broth at 28 °C with shaking at 140 rpm for 24 h. Cell suspensions were adjusted to an OD of 0.3 (approximately 1 × 10^8 CFU/ml) at 600 nm using a spectrophotometer (Eppendorf, Hamburg, Germany). The bacterial suspensions (1.5 ml) were centrifuged at 14,000 rpm for 2 min, pelleted, and resuspended in 1 ml sterile dH2O. This procedure was repeated twice. After suspension of the cells in 500 µl TE buffer (10 mM Tris-HCl, pH 8.0; 10 mM MgCl2; 25 mM EDTA, pH 8.0), 1 × 10^8 CFU/ml cells were encapsulated in 500 µl 2% (w/v) low-melting-point agarose (Bio-Rad Laboratories, Hercules, CA, USA) and transferred into sterile plug molds (Bio-Rad Laboratories, Hercules, CA, USA). The agarose plugs, hardened at 4 °C for 20 min, were transferred to a 1.5 ml microfuge tube. Agarose plugs with 1 mg/ml Proteinase K in 250 mM EDTA (pH 9.5) and 25% (w/v) N-laurylsarcosine were incubated overnight at 50 °C. The agarose plugs were returned to a sterile tube containing 1. were utilized for the DNA fingerprint matching and dendrogram analyses. Isolation and identification of bacteria The isolates and their locations are given in Figure 1 and Table 2. The pathogens were characterized after isolation from infected olive trees, thereby establishing their presence across the various districts of Antalya province in Turkey. The incidence of the olive knot disease reported for each district by Bansal et al. 
are: 80% in Serik, 20% in Döşemealtı-Kırkgöz, 17% in Aksu-Topallı, 10% in Kaş-Dalyan, and 4% in Antalya-Center. Psv grew well and produced distinctive levan-negative colonies on selective PVF-1, KB, and NSA media. The colonies grown on KB medium were flat, 1-3 mm in diameter, greyish to white in colour, with irregular margins. The bacteria produced a fluorescent pigment on both KB and PVF-1 media under UV light. The colonies grown on PVF-1 were greyish white, slightly raised, smooth, and relatively small (2-3 mm). The colonies grown on NSA medium were grey or pale yellow, slightly raised or flat, 3-5 mm in diameter. LOPAT tests indicated negative levan, oxidase, potato soft rot, and arginine dihydrolase reactions, but all isolates showed a positive hypersensitive reaction in tobacco plants. In all, 67 isolates were recovered from diseased olive trees in Turkey and identified by PCR. The numbers of groups and strains in each group are presented in Table 2. The pathogenicity tests on one-year-old Gemlik olive seedlings produced characteristic knot symptoms of variable sizes. The healthy olive seedling treated with sterile dH2O as a control did not show any symptoms (Fig. 2a). The pathogen was reliably re-isolated from the knots in repeated tests, establishing Psv as the causal agent of the knot or gall disease in olive plants and thereby satisfying Koch's postulates (Fig. 2b and c). The relationships among the Psv strains were established based on FAME analysis. All 67 isolates recovered from the olive plant hosts in Antalya province and the reference strains were analysed. The results show percent similarities of 78.50-100% to the MIS library. The major fatty acids were palmitic acid (16:0), palmitoleic acid (16:1), and oleic acid (18:1) (Table 3). The FAME cluster analysis based on the Euclidean distance categorized the Turkish olive strains into 2 groups (Fig. 3). Each clade is unique, with its members showing a close relationship to each other. 
Rep-PCR The primer BOXA1R, which was used to amplify repetitive DNA sequences of Psv, produced various genomic fingerprints. The results obtained with the BOXA1R primer showed variability among the Psv strains, producing different polymorphic patterns with amplification fragments ranging from ~250 to 2250 bp (Fig. 4). BOX-PCR cluster analysis categorized all the Turkish olive strains into 2 groups (Fig. 5). PFGE All 67 isolates and the reference strains were evaluated by PFGE. The results showed various discrete DNA patterns of Psv after digestion of the total genome with SpeI. The restriction digestion of the Psv genome yielded fragments ranging from 9 to 1000 kb (Fig. 6). AseI and XbaI did not effectively digest the genomic DNA in this study. The Turkish olive strains were separated into 3 discrete PFGE groups (Fig. 7). Based on the cluster analysis, there was nearly identical linkage between haplotype group I and the reference strain. The percentage similarity among the haplotypes as shown by the cluster was 42-100% (Fig. 7). Based on the PFGE cluster analysis, haplotype group I consisted of 38 haplotypes and reference strain NCPPB639, haplotype group II contained 21 haplotypes, haplotype group III contained 11 haplotypes, and the Myrtus isolate was placed into haplotype group IV, as shown in Figure 7 and Table 2. Discussion Of the isolates collected from diseased olive plants during the 2015-2018 survey, 67 were found to be heterogenic and divided into 3 haplotype groups by PFGE. From the survey results, the highest percentage of the disease was recorded in the Serik district, and the most infected (80%) or susceptible cultivar from this district was the Gemlik cultivar. The least infected orchard recorded was from the Antalya-Center district. 
When all the collected samples across the various districts were compared, the Gemlik cultivar showed the highest percentage of infection. Some commercial olive cultivars have been found to be considerably tolerant to olive knot disease. The high rate of infection in the Gemlik cultivar could be due to a high susceptibility to this pathogen. It could also be attributed to the exposure of these varieties to the powerful winds common in the coastal regions of Antalya-Center and Kaş-Dalyan. These strong winds create wounds on the plants, which allow entry for the pathogen. Additionally, the fungus Cycloconium oleaginum, the causal agent of olive leaf spot disease that results in defoliation, creates wounds that can serve as entry points for Psv. The different groups of Psv strains defined by PFGE occurred in the same districts because of the large olive production, as well as the different olive cultivars grown, in these areas. There was no association between the genetic diversity of the Turkish Psv population and the olive cultivar of isolation. Our findings support the results of Scortichini et al. On the other hand, Moretti et al. and Krid et al. reported an association with geography and olive cultivar. The morphological and biochemical tests indicated that the 3 media used in this study produced colonies typical of Psv. Considerable heterogeneity in the colonies established by Psv strains isolated from different olive cultivars in various localities has been reported in Italy (Surico and Marchi, 2003). LOPAT showed the isolates to be levan negative, and all isolates produced a positive hypersensitive reaction in tobacco plants. The LOPAT profile and other tests described by Lelliott and Stead were used to separate typical Psv strains from P. syringae subsp. syringae. Janse observed that some Psv strains isolated from various host plants had almost similar physiological and biochemical characteristics. 
All isolates were confirmed to be pathogenic on one-year-old olive plants, but not on oleander. Our result is in concordance with the findings reported by Perez-Martinez et al. Table 3 indicates the relative (percent) total fatty acid methyl esters found by FAME analysis of the Psv isolates, which were compared against the standard MIDI library. The FAME cluster analysis categorized all the strains isolated from olive plants collected from the various districts of Antalya province into 2 groups based on Euclidean metrics. The FAME analysis provides useful information with regard to the phenotypic identification of the pathogen, but it is not an effective technique for discriminating between strains. The diversity study was carried out using rep-PCR with the BOXA1R primer, which was found to be informative and discriminatory for all the Psv isolates tested in this study and produced polymorphic patterns corresponding to different amplification fragment sizes (Fig. 4). All the strains isolated from olive plants formed 2 groups based on BOX-PCR. Although BOX-PCR was effective at differentiating Psv isolates, its discriminating ability was lower than that of PFGE, as observed in this study. Interestingly, it could be seen from the cluster tree that strains from different districts of a province or geographical location showed similar genetic homogeneity and were grouped together in one clade by the BOX-PCR cluster analysis (Fig. 5). The genetic variability among the Psv strains was further confirmed by PFGE after restriction digestion with SpeI, a rare-cutting enzyme. Three different haplotype groups were produced based on the DNA fragment patterns generated, as compared to the FAME and BOX-PCR analyses, which each categorized the isolates from olive plant hosts into 2 groups. The results of this study suggest that the FAME and BOX-PCR strategies were not as capable as PFGE of separating Psv strains. 
PFGE utilizing the rare-cutting endonuclease SpeI gave the most comprehensive results. The effective discriminatory ability and reproducibility of PFGE make it one of the most widely utilized methods for comparative fingerprinting of most bacterial species. Based on PFGE, most Psv strains were placed in haplotype group I with reference strain NCPPB639. The other strains were separated into 2 haplotype groups, resulting in a total of 3 different haplotype groups in the olive Psv population. This heterogeneity among the Psv population can be explained by horizontal exchange of plasmids and chromosomal genes, as seen in Xanthomonas and Pseudomonas pathovars (Sundin, 2007). Based on PFGE, the isolate Psn9 from Nerium oleander was placed in haplotype group II, along with several olive strains. However, the same strain was placed in a different haplotype group by rep-PCR analysis. Comparable results were found by Moretti et al.: P. s. pv. nerii and P. s. pv. fraxini were clearly separated from the Psv population by rep-PCR. Based on MLST, isolates of the P. s. pv. nerii and P. s. pv. fraxini pathovars clearly share the same genetic background as Psv, and may have adapted to oleander and Fraxinus, respectively, after Psv emerged as an olive plant pathogen. Conclusion Our results show that PFGE using SpeI has an effective discriminatory capability for genotypic analysis of the Psv population. The present study provides important outputs for a better comprehension of the genotypic and phenotypic structure of the Psv population in Turkey. The results provide a great opportunity for tracking strain shifts in the Psv population in the future, and for olive breeding programs aimed at the development of olive cultivars resistant to the different Psv strains.
# Test stub: starting `application1` fails, while every other application
# starts normally. `application1`, `fail`, and `real_start_application` are
# assumed to be defined in the surrounding test module.
def fake_start(application):
    if application.name == application1.name:
        return fail(Exception('First start failure.'))
    else:
        return real_start_application(application)
import typing
import sys

sys.setrecursionlimit(1 << 20)


def solve(
    s: str,
    t: str,
) -> typing.NoReturn:
    def dfs(
        i: int,
        j: int,
        d: int,
    ) -> bool:
        if j == len(t):
            return True
        if i < 0 or len(s) <= i:
            return False
        if s[i] != t[j]:
            return False
        ok = dfs(i - 1, j + 1, 0)
        if d == 0:
            return ok
        ok |= dfs(i + 1, j + 1, 1)
        return ok

    for i in range(len(s)):
        ok = dfs(i, 0, 1)
        if ok:
            print('YES')
            break
    else:
        print('NO')


def main() -> typing.NoReturn:
    q = int(input())
    for _ in range(q):
        s = input()
        t = input()
        solve(s, t)


main()
Localisation of focal liver lesions to specific hepatic segments--comparison of multiphase spiral CT and MR imaging. The purpose of this study was to evaluate the ability of multiphase spiral CT and MR imaging to localise focal liver lesions to specific hepatic segments. The authors studied prospectively 26 focal liver lesions in 26 patients who had undergone spiral CT and MRI before surgery. Multiphase spiral CT included non-contrast scans, a hepatic arterial-dominant phase, a portal venous-dominant phase, and an equilibrium phase. MRI was performed in all cases. The following sequences were performed: SE and TSE T1- and T2-weighted images, STIR, and a dynamic T1-weighted FFE study after i.v. administration of gadolinium (Gd-DTPA). The CT and MR scans were prospectively and independently reviewed by three radiologists for visualisation of the hepatic and portal veins and segmental localisation of hepatic lesions. The authors used the right and left main portal veins, along with the transverse fissure, hepatic veins, and gallbladder fossa, as landmarks for tumour localisation to specific hepatic segments. The primary segmental locations of the lesions were correctly determined with CT in 22 of 26 focal liver lesions (85%) and with MR imaging in 24 of 26 lesions (92%). The full extent of the lesions was correctly described with spiral CT in 19 of 26 focal lesions and with MR in 21 of 26 tumours. MRI and CT were helpful preoperative tools for determining the segmental location of focal liver lesions and for planning the surgical approach.
Interaction of APOE4 alleles and PET tau imaging in former contact sport athletes Highlights Cortical PET tau was compared between APOE4 carriers and non-carriers. APOE4 carriers had higher cortical PET tau compared to non-carriers. APOE4 is a risk factor for tau accumulation in former contact sport athletes. aggression, depression, memory and cognitive impairments, as well as heightened suicidality (McKee et al.; Omalu et al.). Although the pathological changes of CTE were originally described in boxers (Martland, 1928; Critchley, 1949; Millspaugh, 1937), confirmed CTE cases come from a variety of contact sports including American football, hockey, wrestling, and soccer, as well as from military personnel and non-sport-related concussions. Two recent studies found strong dose-response relationships between the number of years playing contact sports and CTE neuropathology. The clinical and pathological presentations of CTE overlap with those of Alzheimer's disease (AD) and frontotemporal lobar degeneration, but CTE pathology has its own distinct features. The pathognomonic lesion of CTE, as defined by a National Institute of Neurological Disease and Stroke (NINDS)/National Institute of Biomedical Imaging and Bioengineering (NIBIB) meeting, consists of irregular hyperphosphorylated tau deposits in neurons and astroglia, preferentially at the depths of the sulci in the superficial cortical layers and around blood vessels. Amyloid and TAR DNA-binding protein 43 inclusions are also reported in some studies (McKee et al.; Omalu et al.; Gavett et al.; Corsellis et al.; Corsellis and Brierley, 1959). There are currently no antemortem biomarkers for the tau pathology of CTE, and the diagnosis is made based on post-mortem neuropathological examination of brain tissue. Phosphorylated tau, the pathological substrate of CTE, is similar to that observed in Alzheimer's disease but has its own distinct features. 
The use of positron emission tomography (PET) imaging with AV-1451 (T807; Flortaucipir, AVID Radiopharmaceuticals), a tau-specific tracer, allows the detection of abnormal aggregates of phosphorylated tau protein in vivo in AD. Its use in AD has been widely examined, and tracer retention correlates with post-mortem neurofibrillary tangles (NFTs) containing tau in the form of paired helical filaments. In addition, binding was higher in AD patients than in patients with mild cognitive impairment or healthy controls, and tracer binding was associated with worsening cognitive function. PET imaging with the AV-1451 tau-specific tracer shows promise as a potential in vivo biomarker of CTE pathology; however, its ability to reliably detect CTE lesions is unclear and requires more investigation. One study reported mildly elevated PET tau binding in two out of nine amyloid-negative patients at risk for CTE, with the distribution pattern consistent with CTE pathology stages III-IV. This result suggests PET tau might not be sensitive to CTE lesions in early disease stages. Earlier case reports of this tracer in formerly concussed athletes presented former National Football League (NFL) players with a history of multiple concussions. The first case was a 71-year-old with memory impairments and a clinical profile similar to AD. The amyloid PET scan was negative, providing no evidence of AD pathology. The PET tau tracer AV-1451 showed predominantly subcortical signal, with the highest signal coming from the basal ganglia and substantia nigra. Tracer retention in the basal ganglia and substantia nigra has previously been pathologically confirmed to be off-target binding, but a more recent study described the basal ganglia binding as correlated with age-related iron accumulation in that region. The second case of AV-1451 tracer binding was in a 39-year-old athlete with progressive neuropsychiatric issues, specifically emotional lability and irritability.
The amyloid scan was negative, largely ruling out AD pathology, and the PET AV-1451 tau scan showed a higher tracer signal in the cortex. Other signal increases were noted in the midbrain, globus pallidus, and the hippocampus, with the midbrain and globus pallidus being pathologically confirmed off-target binding sites. Another study examined the use of the same PET tau tracer in veterans with blast neurotrauma, and found increased tracer signal in the frontal, occipital, and cerebellar brain regions. Finally, a more recent cohort study using the AV-1451 PET tau tracer found increased bilateral superior frontal, bilateral medial temporal and left parietal standardized uptake value ratios (SUVRs) in 26 former National Football League players compared to 31 controls. Tau SUVRs in these regions correlated with total years of tackle football amongst the former player cohort. Even though the exact incidence of CTE amongst athletes is unclear, not all individuals with exposure to contact sports and repetitive head impacts develop CTE. Genetics might play a role in increasing CTE susceptibility. There is growing evidence that some genetic polymorphisms increase the risk of neurodegenerative diseases. Allelic variants of the apolipoprotein E (APOE) gene have been implicated in a number of neurodegenerative diseases. Two missense polymorphisms in APOE underlie the three molecular isoforms: APOE epsilon 2 (ε2), APOE epsilon 3 (ε3), and APOE epsilon 4 (ε4). APOE4 has been shown to increase the risk of AD. The exact mechanism by which APOE4 influences AD risk is not yet understood; however, increasing evidence points to the amyloid hypothesis, whereby APOE4 directly and indirectly influences amyloid beta metabolism. The relationship between APOE alleles and tau pathology is less clear. Some authors propose an interaction between amyloid and tau proteins in the brain, where amyloid fibrils increase tau phosphorylation and aggregation.
Therefore, APOE4 may have an indirect effect on tau accumulation through amyloid. However, some in vitro and animal studies have demonstrated a direct effect of APOE on tau pathogenesis. In the context of traumatic brain injuries (TBIs), APOE4 is associated with poor clinical outcomes in patients with TBIs. Additionally, the APOE4 allele has been associated with elevated post-concussion symptoms in military veterans, and with increased phosphorylated tau levels in the brains of a blast-injury mouse model. This provides limited but plausible evidence for an association between APOE and tau pathology in TBI cases. Another polymorphism implicated in neurodegeneration is in the microtubule-associated protein tau (MAPT) gene, which is responsible for the production of tau protein. Mutations in the MAPT gene may lead to abnormal structure and function of tau, and currently almost 60 MAPT mutations are linked to neurodegeneration. There are two main MAPT haplotypes, H1 and H2. The H1 haplotype is associated with an increased risk of developing the 4-repeat tauopathies progressive supranuclear palsy (PSP) and corticobasal degeneration (CBD). Previous research highlighted that the H1 haplotype is significantly overrepresented in pathologically confirmed CBD and PSP populations, compared to controls. The literature examining MAPT haplotypes in relation to head impacts and CTE is limited; however, one study found a slight increase in the frequency of the MAPT H1/H1 genotype in men with contact sports exposure and confirmed CTE pathology, compared to men with contact sports exposure without CTE pathology and to clinical controls. This study examines the effect of the APOE4 allele and the MAPT H1H1 diplotype on SUVRs of the PET tau-specific AV-1451 tracer in former professional contact sport athletes at risk for CTE. We hypothesize that carriers of the APOE4 allele and/or the H1H1 diplotype will have higher PET AV-1451 signal.
Participants
Thirty-eight athletes engaged in sports with a high risk of concussions were included as part of this ongoing study. Recruitment was completed through the Canadian Football League (CFL) Alumni Association and the Toronto Western Hospital (Toronto, Canada) concussion clinic. Inclusion criteria were being under 85 years old, fluent in English, and a former professional or semi-professional sport athlete at high risk of concussions. Exclusion criteria included a diagnosis of a neurological or psychotic disorder prior to the concussions, systemic illnesses affecting the brain, or lesions seen on magnetic resonance imaging (MRI). Due to the invasiveness of the procedure, only nine of 38 participants agreed to undergo a lumbar puncture so their cerebrospinal fluid (CSF) could be tested for AD biomarkers. For participants with no CSF available, structural MRI scans and PET tau imaging were examined by a cognitive neurologist (MCT) for evidence of an AD pattern. All participants underwent comprehensive neuropsychological and neurological assessments, neuroimaging and blood collection during the same consecutive two-day visit. The study was approved by the Research Ethics Board of the University Health Network and written consent was obtained from all participants. Concussion exposure was determined based on the player's recall of injury using the concussion definition provided by the Concussion in Sport Group, as detailed in their most recent consensus statement on concussion in sport. In addition, all players underwent a semi-structured interview to verify the information and to jog their memory for any events they may not have recalled.
Biofluid collection and genetics
Lumbar puncture for CSF collection was performed following the AD Neuroimaging Initiative (ADNI) protocol. After CSF collection into polypropylene tubes, a sandwich ELISA method was used to measure Aβ42, phosphorylated tau (p-tau) and total tau (t-tau) levels according to the manufacturer's instructions.
AD pathology was considered present if p-tau > 68 pg/ml and the Aβ42 to t-tau index was < 0.8. Blood was collected from all participants and genomic DNA was extracted from whole blood using a Qiagen kit. The APOE genotypes and MAPT haplotypes were determined as previously described.
Neuroimaging
PET tau imaging with 5 mCi of AV-1451 tracer was performed. Thirty-six participants were scanned using a Biograph HiRez XVI PET/CT scanner (Siemens Molecular Imaging, Knoxville, TN, USA), while 2 participants were scanned using a 3D High Resolution Research Tomograph (HRRT) (CPS/Siemens, Knoxville, TN, USA) PET scanner. Following a 45-minute uptake time, static PET images (45-120 min) were acquired for a duration of 75 min. T1 structural MRI images were acquired using a 3T GE Signa scanner with an 8-channel head coil and the following scan parameters: TE = 5 ms, TR = 12 ms, flip angle = 45°, 128 axial slices, slice thickness = 1.5 mm, 256 × 256 matrix, FOV = 24 × 24 cm. The region of interest (ROI) analysis was completed on the PET data using in-house ROMI software, with the ROI delineation method as previously described. The PET images were corrected for head motion and partial volume effect. For a single ROI of the cortical grey matter (excluding cerebellum), SUVRs were calculated from the PET data between 50 and 80 min and, in a subset of the participants, from the data between 80 and 100 min post injection. The cerebellar grey matter was used as the reference region.
Neuropsychological testing
The following tests, with known sensitivity to TBIs and neurodegeneration, were used for this study: trail making test (TMT) parts A and B, Rey auditory verbal learning test (RAVLT) and Rey visual design learning test (RVDLT), symbol digit modalities test (SDMT) [Smith, 1982], and digit span backward and forward. Personality was assessed using the personality assessment inventory (PAI). The scores were standardized based on published norms [Smith, 1982; Wechsler, 1997; Heaton, 1992].
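The CSF positivity rule and the SUVR quantification described above can be sketched in a few lines. The thresholds (p-tau > 68 pg/ml, Aβ42/t-tau index < 0.8) and the cerebellar reference-region normalization come from the text; the function and variable names are illustrative, not the study's actual pipeline.

```python
def ad_csf_positive(p_tau: float, abeta42_ttau_index: float) -> bool:
    """AD pathology considered present if p-tau > 68 pg/ml AND the
    Abeta42 to t-tau index is < 0.8 (both thresholds from the text)."""
    return p_tau > 68.0 and abeta42_ttau_index < 0.8

def suvr(roi_mean_uptake: float, cerebellar_gm_mean_uptake: float) -> float:
    """Standardized uptake value ratio: a target ROI's mean tracer uptake
    normalized to the cerebellar grey matter reference region."""
    return roi_mean_uptake / cerebellar_gm_mean_uptake

# Example: a cortical ROI with mean uptake 1.30 against a cerebellar
# reference of 1.00 gives SUVR 1.3, within the 0.95-1.57 range reported.
print(suvr(1.30, 1.00))            # 1.3
print(ad_csf_positive(50.0, 0.9))  # False (AD-negative, like this cohort)
```

Note that SUVR is a ratio, so the choice of reference region matters; the Discussion below returns to whether the cerebellum is an appropriate reference in concussed cohorts.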
Higher scores on the TMT A & B, RAVLT, RVDLT, SDMT, and digit span forward & backward assessments indicate better cognitive functioning, while higher scores on the PAI depression and aggression assessments indicate higher levels of impairment. A cut-off threshold of 1.5 standard deviations below the mean was used to signify impaired functioning on the TMT A & B, RAVLT, RVDLT, SDMT, and digit span forward & backward assessments. A cut-off threshold of 1.5 standard deviations above the mean was used to signify impaired functioning on the PAI aggression and depression assessments.
Statistical analysis
Statistical analysis was completed using IBM SPSS Statistics version 24 (IBM Corp., Armonk, NY, USA). All between-group demographic and neuropsychological testing comparisons were completed using an independent samples t-test, with the type-of-scanner comparison completed using Fisher's exact test. The number of concussions was not normally distributed; therefore, all between-group comparisons of concussion number were completed using the Mann-Whitney U test. Due to the small sample size, participants were grouped based on APOE4 carrier and non-carrier status. Regarding the MAPT gene, carriers of the H1H2 and H2H2 diplotypes were grouped together and compared to carriers of the H1H1 diplotype. The difference in mean cortical grey matter PET AV-1451 SUVRs between carriers and non-carriers of specific alleles and diplotypes was determined using one-way ANCOVA, controlled for age. Neuropsychological assessment scores between APOE4 carriers and non-carriers were also compared using an independent samples t-test. The cortical PET tau SUVR values in the study population lay on a continuum, ranging from 0.95 to 1.57.
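A hedged sketch of the one-way ANCOVA described above (group effect on cortical SUVR, controlling for age): the classic ANCOVA adjustment subtracts the pooled within-group age slope, times the group difference in mean age, from the raw group difference in mean SUVR. The data below are synthetic and noise-free purely to make the arithmetic transparent; they are not the study's data.

```python
def mean(xs):
    return sum(xs) / len(xs)

def ancova_group_effect(groups):
    """groups: dict name -> list of (age, suvr) pairs for two groups;
    returns the age-adjusted difference between the groups' mean SUVRs."""
    # pooled within-group regression slope of SUVR on age (the covariate)
    num = den = 0.0
    for pairs in groups.values():
        a_bar = mean([a for a, _ in pairs])
        s_bar = mean([s for _, s in pairs])
        num += sum((a - a_bar) * (s - s_bar) for a, s in pairs)
        den += sum((a - a_bar) ** 2 for a, _ in pairs)
    b_age = num / den
    g1, g2 = groups.values()
    d_suvr = mean([s for _, s in g2]) - mean([s for _, s in g1])
    d_age = mean([a for a, _ in g2]) - mean([a for a, _ in g1])
    return d_suvr - b_age * d_age

# synthetic cohort built as SUVR = 1.0 + 0.002*age + 0.08*carrier (no noise)
non_carriers = [(50, 1.10), (60, 1.12), (70, 1.14)]
carriers     = [(55, 1.19), (65, 1.21), (75, 1.23)]
effect = ancova_group_effect({"non_carrier": non_carriers, "e4_carrier": carriers})
print(round(effect, 3))  # recovers the built-in carrier effect of 0.08
```

Because the carrier group here is five years older on average, the raw SUVR difference (0.09) overstates the effect; removing the age contribution recovers the true 0.08, which is exactly what "controlled for age" buys in the study's design.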
To compare the frequency of the high-risk APOE4 allele between the lowest and highest groups based on cortical PET tau SUVR values, participants were divided into tertiles based on mean cortical PET AV-1451 SUVR values, and the middle group was dropped from the analysis, leaving the high and low groups to be compared. We then completed a hypothesis-driven comparison of APOE4 frequency between the high and low cortical PET tau groups using Fisher's exact test, expecting a higher frequency of APOE4 carriers in the high cortical grey matter PET tau group. Bonferroni correction was used to account for comparisons in mean cortical grey matter SUVR values between genotypes, and both adjusted and non-adjusted p-values are reported with a significance level set at p < 0.05. For neuropsychological assessment score comparisons between genotypes, Bonferroni-adjusted p-values with a significance level set at p < 0.05 were reported only if any unadjusted p-values were significant at p < 0.05. The number of self-reported concussions for the whole cohort (N = 38) ranged from 0 to 60 (6.16 ± 9.61). For those who had self-reported concussions (data presented for N = 35 because 2 participants did not recall any concussions and 1 participant did not remember the date of the last concussion), the number of years since the last reported concussion ranged from 0.5 to 61 years (20.90 ± 16.27). The 2 participants with no reported concussions were included in the study because each had ≥10 years of play in contact sports and was very likely exposed to subconcussive blows. All nine of the 38 participants who had cerebrospinal fluid (CSF) collected were AD negative. The remaining 29 participants were examined for the presence of an AD-like pattern on MRI, i.e. medial temporal atrophy and/or precuneus/posterior cingulate atrophy, and on PET AV-1451 SUVR for increased tracer uptake specifically in the middle temporal lobe and posterior cortical regions including the parietal lobe, and no such pattern was seen.
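The group construction and test described above can be sketched as follows: rank participants by cortical SUVR, keep the bottom and top tertiles (13 each for N = 38, with the middle group dropped), then run a one-sided Fisher's exact test on APOE4 carrier counts. The tertile-size formula and all counts below are illustrative assumptions, not the study's data; the Fisher test is implemented directly from the hypergeometric tail.

```python
from math import comb

def tertile_extremes(suvrs):
    """Return (low, high) participant index lists after dropping the
    middle tertile; k = 13 when len(suvrs) == 38, as in the study."""
    order = sorted(range(len(suvrs)), key=lambda i: suvrs[i])
    k = (len(suvrs) + 2) // 3
    return order[:k], order[-k:]

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    P(X >= a) under the hypergeometric distribution with fixed margins."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def p(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    return sum(p(x) for x in range(a, min(row1, col1) + 1))

# rows = high/low PET tau group, cols = APOE4 carrier / non-carrier
low, high = tertile_extremes([1.0, 1.5, 1.2, 1.4, 1.1, 1.3])
print(low, high)                              # [0, 4] [3, 1]
print(round(fisher_one_sided(3, 2, 1, 4), 4)) # 0.2619
```

The one-sided form matches the directional hypothesis stated above (more APOE4 carriers expected in the high-SUVR group); a two-sided test would roughly double the p-value and is the more conservative default when no direction is pre-specified.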
Although it cannot be ruled out entirely, AD pathology is unlikely in this cohort. The APOE genotype distribution of the entire cohort was as follows: 2 individuals with APOE2/APOE4, 5 individuals with APOE3/APOE2, 20 individuals homozygous for APOE3, 10 individuals with APOE3/APOE4, and 1 individual homozygous for the APOE4 allele. The MAPT diplotype distribution of the entire cohort was as follows: 21 individuals with H1H1, 14 individuals with H1H2 and 3 individuals with the H2H2 diplotype.
Neuropsychological assessment results of the participant cohort
Overall, the participant cohort of this study was quite high functioning, with only a few individuals with impaired scores on neuropsychological testing. The distribution of performance on the neuropsychological assessments was as follows: 1/38 participants had impaired performance on the TMT A & B assessments; 1/37 participants had impaired performance on the RAVLT, SDMT oral score, and digit span forward assessments; 7/37 participants had impaired performance on the RVDLT assessment; 2/38 participants had impaired performance on the SDMT written score; and 5/38 participants had impaired performance on the PAI depression and aggression assessments. Finally, no participants had impaired performance on the digit span backward assessment. The impaired scores for each neuropsychological assessment between the APOE and MAPT genotype groups are presented in Tables 1 and 2. The impaired scores for each neuropsychological assessment for the groups divided into tertiles based on cortical grey matter PET tau SUVR values are presented in Tables 4 and 5.
Comparison between 50-80 and 80-100 min post-tracer injection time
All PET SUVR values reported were computed using the 50-80 min post-tracer injection time. A subset of the participants (N = 24) had results available for the 80-100 min post-tracer injection time, allowing for direct comparison between the time intervals.
The cortical grey matter PET SUVRs were not significantly different between the 2 time intervals for these 24 participants (p > 0.4).
The relationship between APOE4 and cortical grey matter PET tau
The APOE4 carrier and non-carrier groups did not differ in demographics (Table 1). No difference in demographics was found between the MAPT H1H1 and H1H2/H2H2 diplotype groups (Table 2). One-way ANCOVA controlled for age showed a significant difference in cortical grey matter PET AV-1451 SUVR values between the APOE4 carrier and non-carrier groups (p = 0.010); however, no significant difference in SUVRs was found between MAPT diplotypes (p = 0.895). After implementing Bonferroni correction to control for multiple comparisons, the relationship between the APOE4 allele and cortical SUVRs remained significant (p = 0.020) (Table 3).
The relationship between APOE and MAPT genotypes and neuropsychological assessments
The neuropsychological assessment results for the APOE4 carrier/non-carrier groups are summarized in Table 1. The neuropsychological assessment results for the MAPT H2 carrier and non-carrier groups are summarized in Table 2. The independent Student's t-test showed no significant differences in the scores on the TMT A & B, RAVLT, RVDLT, SDMT, digit span forward & backward, and PAI depression and aggression assessments (all unadjusted p > 0.06) between the APOE4 carriers and non-carriers. No significant differences in the neuropsychological assessment scores (all unadjusted p > 0.15) were found between the MAPT H2 carrier and non-carrier groups. Independent Student's t-test, Fisher's exact test & Mann-Whitney U comparison; unadjusted significance level set at p < 0.05 (2-sided). The number of participants with impaired scores is presented underneath the mean scores for each neuropsychological assessment in each group. a Data are not included for 1 participant because he did not recall any concussions.
b Data are not included for 2 participants because 1 did not recall any concussions and 1 could not recollect the date of the last concussion. c One participant's score is missing due to refusal to undergo the full neuropsychological testing; a reduced battery was administered instead. A. Vasilevskaya, et al. NeuroImage: Clinical 26 102212
3.6. Genotype counts between high and low cortical PET tau groups
To compare the frequency of APOE4 carriers and non-carriers according to cortical PET tau, we divided the entire cohort (N = 38) into three equal groups based on PET AV-1451 SUVR values and dropped the middle group, leaving the low (N = 13; ≤1.278 SUVR) and high (N = 13; ≥1.384 SUVR) PET tau groups for comparison. The demographics of the high and low PET tau groups did not differ (Table 4). The independent Student's t-test showed no significant differences in the scores on the TMT A & B, RAVLT, RVDLT, SDMT, digit span forward & backward, and PAI depression and aggression assessments following Bonferroni correction between the high and low cortical PET tau groups. Fisher's exact test showed a significantly higher frequency of APOE4 allele carriers in the high cortical grey matter PET SUVR group (p = 0.048; one-sided) (Fig. 1). The demographics, neuropsychological assessment scores, and genotype counts for the middle tertile that was dropped from the statistical analysis are presented in Table 5.
Discussion
To our knowledge, this is the first study to examine the relationship between APOE, MAPT and cortical tau burden as seen with PET AV-1451 imaging in a cohort of former professional and semi-professional sport athletes with multiple concussions or sub-concussive hits at risk of delayed neurodegeneration, specifically CTE. The results of this study showed a significant association between the presence of an APOE4 allele and higher cortical grey matter PET AV-1451 SUVR, currently believed to be a marker of tau burden in AD.
As well, APOE4 carriers were more frequent in the high cortical PET tau group compared to the low cortical tau group. No association was found between MAPT H1H1 carrier status and cortical grey matter PET AV-1451 SUVR. The exact direct or indirect mechanism that implicates APOE in tau burden is still unclear. APOE is present in the cytoplasm of nerve cells, where it may interact with other molecules in an isoform-dependent manner. Tau is a microtubule-associated protein implicated in axonal transport, and previous findings show a decreased affinity of APOE4 towards the microtubule-binding domain of the tau protein. This makes tau more vulnerable to being hyperphosphorylated, and therefore unable to bind microtubules, leading to its aggregation and consequently to pathology. Furthermore, APOE4 showed increased binding to Aβ, which is implicated in increased senile plaque formation in AD. Autopsy studies showed greater staining for senile plaques in the brains of APOE4 homozygotes than APOE3 homozygotes. In the most recent literature, tau and amyloid have been proposed to work together synergistically to amplify each other's abnormal aggregation and the subsequent tau-associated cognitive decline, specifically in the context of AD. In the current study, all 9 participants who had CSF analysis were negative for AD biomarkers. The remaining 29 participants showed no typical AD atrophy on MRI or tracer signal retention on PET, so there is no obvious evidence to suspect that the results of our study are due to an underlying AD pathology. Our cohort is one of former contact sport athletes at risk for neurodegeneration, especially CTE, and the pathophysiology behind CTE is mainly defined by abnormal aggregates of hyperphosphorylated tau. The exact pathophysiology behind the toxic function of tau aggregates remains unclear.
However, it is hypothesized that abnormal aggregates of hyperphosphorylated tau disrupt normal cellular transport within the axons, leading to synapse loss and ultimately neuronal death, resulting in disrupted neural circuits and eventual cognitive decline. Previous studies highlight a close relationship between tau pathology, neuronal loss and disease severity in AD and other tauopathies. The lack of association between tau burden and MAPT H1H1 may not be unexpected given that the prevalence of this diplotype is elevated in PSP and CBD, wherein the underlying tau pathology is a 4-repeat isoform tauopathy with straight filaments, whereas CTE is similar to AD with a mixture of both 3- and 4-repeat tau and paired helical filaments, and so very different. [Table 3: Difference in mean cortical grey matter SUVRs based on genotype (mean ± standard deviation); one-way ANCOVA, controlled for age; Bonferroni-adjusted p-values significant at p < 0.05.] The role of APOE4 in concussion remains unclear. Most previous studies examining the potential effect of APOE4 included TBIs of various severity in diverse populations, making between-study comparisons difficult.
An association between APOE4 alleles and concussion has been reported in college athletes, and there is evidence for an increased risk of bleeding following TBI in APOE4 carriers, which may prolong recovery. A prospective study in college athletes did not, however, find an association between APOE4 and the risk of a first concussion. Amongst army veterans, APOE4 allele carriers showed poorer performance on memory tasks following TBI compared to non-carriers, but no difference in executive function. A meta-analysis showed an association between APOE4 and an increased risk of poor outcome 6 months post TBI. However, another study using the same 6-month post-TBI follow-up duration found no relationship between APOE4 and patient prognosis. Specific to athletes, the presence of APOE4 has been associated with increased symptom reporting following a sport-related concussion, and boxers who were APOE4 carriers showed worse neurological outcomes. There does not appear to be an increased risk of suffering a concussion in APOE4 carriers. The results of our study are similar to previous research, in that we found no significant association between APOE4 and concussion history or performance on neuropsychological assessments; however, we did find that APOE4 carriers had higher cortical grey matter PET tau SUVRs. There are a number of limitations to the current study. First, the small sample size and lack of a replication cohort limit the statistical power. As well, the total years of play for all athletes were not collected, missing an opportunity to examine the effect of total years of play on neuroimaging and fluid biomarkers. Next, the participant cohort is highly varied with regard to age, concussion number, and performance on neuropsychological tests. There is also no matched healthy control group. The presence of a reliable control group with no history of contact sports would have provided a PET tau SUVR cut-off that could be used to divide participants into groups with normal and elevated tau burden.
Furthermore, making a comparison between the high and low PET tau groups by dropping the middle third of the cohort decreased the total number of participants significantly, reducing the power for that specific analysis. Another limitation is the solely neuropathological nature of the CTE diagnosis, leaving us unable to tell whether any of the participants have underlying CTE-related changes. The results of this study are thus generalizable to former professional and semi-professional sport athletes at high risk of concussions with no evidence of active neurodegenerative changes. A further limitation of the current study is the lack of information with regard to the race of the included participants. Previous studies reported differences in APOE allele frequencies between populations [Seet et al.; Tang et al.], and APOE4 was found to be a determinant of AD risk in whites. Earlier studies reported that African Americans and Hispanics have an increased frequency of AD regardless of their APOE genotype; however, the most recent literature showed that APOE4 has a weak association with AD incidence amongst African Americans and Hispanics, in comparison to white populations [Tang et al.]. With regard to MAPT, the H2 haplotype was reported to be almost exclusively Caucasian in origin. Finally, the use of cerebellar grey matter as a PET reference region has been widely studied and established for use in AD, but not in concussion. Cerebellar atrophy has been reported within a concussed cohort, and therefore the cerebellum might not be the ideal reference region in TBI cases. One study examining the AV-1451 tracer in veterans with blast neurotrauma used a different reference region (i.e. the isthmus of the cingulate) for its PET tau analysis, rather than the usual cerebellar reference region used in the athletes' PET studies described above. Further research is warranted in this area.
Overall, our results suggest a relationship between APOE4 and tau burden as measured by AV-1451 in the brains of athletes at risk for delayed neurodegeneration and CTE. A marked feature of CTE pathology is the abnormal aggregation of phosphorylated tau protein within the cortex in the form of NFTs. The increased tracer signal in the cortex of APOE4 carriers could signify a neurodegenerative process, and PET tau may be a biomarker for this process, but more research is needed to establish this.
Authors' roles
A.V. acquired the data, analysed the data, interpreted the data and drafted the manuscript for intellectual content. F.T. and C.B. analysed and interpreted the data. A.T., S.A.N., M.K., and R.G. had major roles in data acquisition. C.S. analysed and interpreted the data, and revised the manuscript for intellectual content. M.G. and D.M. analysed and interpreted the data. R.W. and D.M. interpreted the data and revised the manuscript for intellectual content. R.B. and B.C. acquired and interpreted the data; R.B. also revised the manuscript for intellectual content. K.D.D., P.R., S.H. and E.G. interpreted the data and revised the manuscript for intellectual content. C.T. had a major role in acquisition of data, interpreted the data and revised the manuscript for intellectual content. M.C.T. had a major role in acquisition of data, interpreted the data, and drafted and revised the manuscript for intellectual content.
Declaration of Competing Interest
The authors report no conflicts of interest.
//////////////////////////////////////////////////////////////////////////////
// This software module is developed by SciDM (Scientific Data Management) in 1998-2015
//
// This program is free software; you can redistribute, reuse,
// or modify it with no restriction, under the terms of the MIT License.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
//
// For any questions please contact <NAME> at <EMAIL>
//////////////////////////////////////////////////////////////////////////////

#ifndef __sequtil_h__
#define __sequtil_h__

#include <ctype.h>
#include <ostream>
#include <platform.h>
#include <tracer.h>
#include <bitops.h>
#include <myassert.h>
#include "align_batch.h"

// definitions related to binary sequence encoding
#define NNNUM 4
#define AANUM 24
#define BITS_PER_BASE 2
#define BITS_PER_BASE_SHIFT 1
#define BASES_PER_BYTE (BITS_PER_BYTE/BITS_PER_BASE)
#define BASES_PER_BYTE_SHIFT 2 // how much the length of a sequence in bases should be shifted right to get the number of bytes the packed sequence will take

void n_revert_seq (char* dest, const char* source, unsigned len, unsigned beg = 0);
bool n_ascii2binary (char* dest, unsigned destlen, const char* source, unsigned start, unsigned len);
bool n_binary2ascii (char* dest, unsigned destlen, const char* source, unsigned start, unsigned len);
void n_countnucs (unsigned& a, unsigned& g, unsigned& c, unsigned& t, const char* src, unsigned start = 0, unsigned len = -1);
std::ostream& output_seq (std::ostream& o, const char* buffer, unsigned buflen, unsigned chars_in_line, bool counts, bool decades, unsigned first_off);
std::ostream& fasta_output_seq (std::ostream& o, const char* buffer, unsigned buflen, unsigned chars_per_line);
std::ostream& gb_output_seq (std::ostream& o, const char* buffer, unsigned buflen);

unsigned swap_batches (BATCH* batches, unsigned batch_no);
// returns total batch length
unsigned len_batches (const BATCH* batches, unsigned batch_no);
unsigned count_gaps (const BATCH* batches, unsigned batch_no);
unsigned count_matches (const char* xeq, const char* yseq, const BATCH* batches, unsigned batch_no);

bool a_ascii2binary (char* dest, unsigned destlen, const char* source, unsigned start, unsigned len);
bool a_binary2ascii (char* dest, unsigned destlen, const char* source, unsigned start, unsigned len);

// get single base
inline unsigned get_base (const char *seq, unsigned pos)
{
    return (seq[pos >> 2] >> ((pos & 3) << 1)) & 3;
}

// put single base
inline void put_base (char *seq, unsigned pos, unsigned base)
{
    // assert (!(base & ~0x3));
    seq [pos >> 2] &= ~(0x3 << ((pos & 3) << 1)); // zero the two bits
    seq [pos >> 2] |= (base << ((pos & 3) << 1)); // set the two bits
}

// get k-tuple, 'short' version. Extraction of tuples not longer than 13 bp
// in the k-tuple, the earlier bits appear as less significant
inline DWORD get_ktup (const char *seq, unsigned pos, DWORD kt_mask)
{
    return (GET32_U (seq + (pos >> 2)) >> ((pos & 3) << 1)) & kt_mask;
}

// get k-tuple, 'long' version. Extraction of tuples not longer than 29 bp
// in the k-tuple, the earlier bits appear as less significant
inline QWORD get_lktup (const char *seq, unsigned pos, QWORD lkt_mask)
{
    return (GET64_U (seq + (pos >> 2)) >> ((pos & 3) << 1)) & lkt_mask;
}

extern BYTE LTUPLE_SHIFTS [];
extern const BYTE* const TUPLE_SHIFTS;

// This could be more efficient if templatized and computed at compile time.
// However, this would mean multiplication (bloating) of the same code for all possible k-tuples
inline BYTE tuple_shift (BYTE len)
{
    // return (((sizeof (DWORD) << BASES_PER_BYTE_SHIFT) - len) << BITS_PER_BASE_SHIFT);
    return TUPLE_SHIFTS [len];
}

inline BYTE ltuple_shift (BYTE len)
{
    // return (((sizeof (QWORD) << BASES_PER_BYTE_SHIFT) - len) << BITS_PER_BASE_SHIFT);
    return LTUPLE_SHIFTS [len];
}

extern QWORD LTUPLE_MASKS [];
extern DWORD TUPLE_MASKS [];

// compute mask for a tuple of given length, 'long' version - tuples below 32 bases
inline QWORD ltuple_mask (BYTE len)
{
    // return (~((QWORD) 0)) >> (sizeof (QWORD)*BITS_PER_BYTE - len * BITS_PER_BASE);
    return LTUPLE_MASKS [len];
}

// compute mask for a tuple of given length, 'short' version
inline DWORD tuple_mask (BYTE len)
{
    // return (~((QWORD) 0)) >> (sizeof (QWORD)*BITS_PER_BYTE - len * BITS_PER_BASE);
    return TUPLE_MASKS [len];
}

inline BYTE swapbases_hard (BYTE in)
{
    BYTE out = 0;
    out |= (in & 0x03) << 6;
    out |= (in & 0x0C) << 2;
    out |= (in & 0x30) >> 2;
    out |= (in & 0xC0) >> 6;
    return out;
}

static const unsigned SWAP_TABLE_SIZE = 0x100;
static BYTE SWAP_TABLE [SWAP_TABLE_SIZE];

static bool make_swap_table ()
{
    for (unsigned a = 0; a != SWAP_TABLE_SIZE; ++a)
    {
        SWAP_TABLE [a] = swapbases_hard (a);
    }
    return true;
}
static bool table_made = make_swap_table ();

inline BYTE swapbases (BYTE in)
{
    return SWAP_TABLE [in];
}

#ifdef SCIDM_LITTLE_ENDIAN
#define GET16BASES(ptr) ( (((DWORD) swapbases (((const BYTE*) ptr) [0])) << 24) | \
                          (((DWORD) swapbases (((const BYTE*) ptr) [1])) << 16) | \
                          (((DWORD) swapbases (((const BYTE*) ptr) [2])) << 8 ) | \
                          (((DWORD) swapbases (((const BYTE*) ptr) [3])) ) )
#define GET32BASES(ptr) ( (((QWORD) swapbases (((const BYTE*) ptr) [0])) << 56) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [1])) << 48) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [2])) << 40) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [3])) << 32) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [4])) << 24) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [5])) << 16) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [6])) << 8 ) | \
                          (((QWORD) swapbases (((const BYTE*) ptr) [7])) ) )
#else
#define GET16BASES(ptr) ( (((DWORD) ((const BYTE*) ptr) [3]) << 24) | \
                          (((DWORD) ((const BYTE*) ptr) [2]) << 16) | \
                          (((DWORD) ((const BYTE*) ptr) [1]) << 8 ) | \
                          (((DWORD) ((const BYTE*) ptr) [0]) ) )
#define GET32BASES(ptr) ( (((QWORD) ((const BYTE*) ptr) [7]) << 56) | \
                          (((QWORD) ((const BYTE*) ptr) [6]) << 48) | \
                          (((QWORD) ((const BYTE*) ptr) [5]) << 40) | \
                          (((QWORD) ((const BYTE*) ptr) [4]) << 32) | \
                          (((QWORD) ((const BYTE*) ptr) [3]) << 24) | \
                          (((QWORD) ((const BYTE*) ptr) [2]) << 16) | \
                          (((QWORD) ((const BYTE*) ptr) [1]) << 8 ) | \
                          (((QWORD) ((const BYTE*) ptr) [0]) ) )
#endif

// get tuple, 'short' version. Extraction of tuples not longer than 13 bp
// in the tuple, the earlier bits appear as more significant
inline DWORD get_tup (const char *seq, unsigned pos, BYTE len)
{
    return ((GET16BASES (seq + (pos >> 2)) << ((pos & 3) << 1)) >> tuple_shift (len)) & tuple_mask (len);
}

// get tuple, 'long' version. Extraction of tuples not longer than 29 bp
// in the tuple, the earlier bits appear as more significant
inline QWORD get_ltup (const char *seq, unsigned pos, BYTE len)
{
    /*
    QWORD t = GET32BASES (seq + (pos >> 2)); // t is bitpair-inverse
    BYTE sh = ((pos & 3) << 1);
    t <<= sh;
    BYTE tsh = ltuple_shift (len);
    t >>= tsh;
    QWORD mask = ltuple_mask (len);
    t &= mask;
    return t;
    */
    return ((GET32BASES (seq + (pos >> 2)) << ((pos & 3) << 1)) >> ltuple_shift (len)) & ltuple_mask (len);
}

// get 12 bases, upper bits undefined
inline unsigned get_12_bases (const char *seq, int pos) // DO NOT CHANGE SECOND ARG TO unsigned type! This will break the nsimscan code!
{
    // HACK / IMPROPER!
this code uses features of negative number shifting/ return GET32_U (seq + (pos >> 2)) >> ((pos & 3) << 1); } // special case of bit counting algorithm inline unsigned count_12 (unsigned r) { r += r >> 2; //16 x 2-bit vector addition r &= 0x333333; r += r >> 4; // 8 x 4-bit vector addition r += r >> 8; // 4 x 4-bit vector addition r += r >> 16; // 2 x 4-bit vector addition return (r & 0xf); } extern const char aa2number []; extern const char number2aa []; extern const char number2base []; inline char base2char (unsigned b) { if (b > 4) b = 0; return number2base [b]; } inline unsigned char2base (char c) { switch (tolower(c)) { case 'r': case 'w': case 'g': return 1; case 'y': case 'c': return 2; case 'k': case 'u': case 't': return 3; default: return 0; } } inline unsigned char2aa (char c) { if (c >= 'a' && c <= 'z') c -= ('a' - 'A'); if (c <'A' || c > 'Z') return 23; return aa2number [c - 'A']; } inline char aa2char (unsigned aanum) { if (aanum < 0 || aanum > 23) return 'X'; return number2aa [aanum]; } inline QWORD get_tuple (const char* seq, unsigned off, unsigned len, DWORD* mask) { register QWORD rv = 0; register unsigned char c; if (mask) *mask = 0UL; seq += off; myassert (len <= 32); while (len) { if (mask) *mask <<= 1; c = (unsigned char) char2base (*seq); if (c == 0xff) { if (mask) *mask |= 1; c = 0; } rv <<= 2; rv |= c; seq ++; len --; } return rv; } inline void tuple2ascii (QWORD tuple, unsigned len, char* dest) // omitted destlen !!! 
{ dest [len] = 0; while (len) { dest [len - 1] = base2char (tuple & 3); tuple >>= 2; len --; } } // helper class for printing out tuples struct TUPLE { public: QWORD tuple_; BYTE l_; TUPLE (QWORD t, BYTE l) : tuple_ (t), l_ (l) { } TUPLE () : l_ (0) { } }; inline std::ostream& operator << (std::ostream& ostr, const TUPLE& tuple) { char buf [33]; tuple2ascii (tuple.tuple_, tuple.l_, buf); ostr << buf; return ostr; } inline Logger& operator << (Logger& logger, const TUPLE& tuple) { if (logger.enabled ()) logger.o_ << tuple; return logger; } #endif // __sequtil_h__
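The 2-bit packing scheme above (four bases per byte, with the base at position pos occupying bit pair (pos & 3) of its byte) can be illustrated outside of C++. This is a minimal Python sketch of the `get_base`/`put_base` logic, not part of the original header; base values follow the header's `char2base` convention (a→0, g→1, c→2, t→3):

```python
# Minimal sketch of the 2-bit base packing used by get_base / put_base above.
# Four bases per byte; the base at position pos occupies bits (pos % 4) * 2.
def put_base(seq: bytearray, pos: int, base: int) -> None:
    shift = (pos & 3) << 1
    seq[pos >> 2] &= ~(0x3 << shift) & 0xFF  # zero the two bits
    seq[pos >> 2] |= (base & 0x3) << shift   # set the two bits

def get_base(seq: bytes, pos: int) -> int:
    return (seq[pos >> 2] >> ((pos & 3) << 1)) & 3

# pack the bases 0, 1, 2, 3 ("agct" in the header's encoding) into one byte
buf = bytearray(1)
for i, b in enumerate([0, 1, 2, 3]):
    put_base(buf, i, b)
assert [get_base(buf, i) for i in range(4)] == [0, 1, 2, 3]
```

The round trip confirms that later positions land in higher bits of the byte, which is why the little-endian `GET16BASES`/`GET32BASES` macros must bit-pair-reverse each byte with `swapbases` before assembling a tuple whose earlier bases are more significant.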
def should_thrash_store(self):
    if not self.store_thrash:
        return False
    return self.rng.randrange(0, 101) < self.store_thrash_probability
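The percent-based check in `should_thrash_store` can be exercised in isolation. The class below is a hypothetical stand-in for whatever object normally carries `rng`, `store_thrash`, and `store_thrash_probability`; only the method body comes from the snippet above:

```python
import random

class StoreThrasher:
    # Hypothetical host class for the snippet above; the field names follow it.
    def __init__(self, store_thrash, store_thrash_probability, seed=0):
        self.store_thrash = store_thrash
        self.store_thrash_probability = store_thrash_probability
        self.rng = random.Random(seed)

    def should_thrash_store(self):
        if not self.store_thrash:
            return False
        # randrange(0, 101) yields 0..100 inclusive, so the probability is a percentage
        return self.rng.randrange(0, 101) < self.store_thrash_probability

# thrashing disabled: never fires; probability above the max draw (100): always fires
assert not any(StoreThrasher(False, 100).should_thrash_store() for _ in range(50))
assert all(StoreThrasher(True, 101).should_thrash_store() for _ in range(50))
```

Note the edge case: because `randrange(0, 101)` can return 100, a probability of exactly 100 is not quite guaranteed to fire on every call.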
Non-Invasive Respiratory Impedance Enhances Cerebral Perfusion in Healthy Adults

Optimization of cerebral blood flow (CBF) is the cornerstone of clinical management in a number of neurologic diseases, most notably ischemic stroke. Intrathoracic pressure influences cardiac output and has the potential to impact CBF. Here, we aim to quantify cerebral hemodynamic changes in response to increased respiratory impedance (RI) using a non-invasive respiratory device. We measured cerebral perfusion under varying levels of RI (6 cm H2O, 9 cm H2O, and 12 cm H2O) in 20 healthy volunteers. Simultaneous measurements of microvascular CBF and middle cerebral artery mean flow velocity (MFV), respectively, were performed with optical diffuse correlation spectroscopy and transcranial Doppler ultrasound. At a high level of RI, MFV increased by 6.4% compared to baseline (p = 0.004), but changes in cortical CBF were non-significant. In a multivariable linear regression model accounting for end-tidal CO2, RI was associated with increases in both MFV (coefficient: 0.49, p < 0.001) and cortical CBF (coefficient: 0.13, p < 0.001), although the magnitude of the effect was small. Manipulating intrathoracic pressure via non-invasive RI was well tolerated and produced a small but measurable increase in cerebral perfusion in healthy individuals. Future studies in acute ischemic stroke patients with impaired cerebral autoregulation are warranted in order to assess whether RI is feasible as a novel non-invasive therapy for stroke.

overcome the impedance. This device has typically been used for respiratory muscle training, but the effect on cerebral perfusion in humans is not well studied. The limited data that exist have focused on patients with orthostatic hypotension, where RI has been shown to increase CBF velocity and reduce subjective symptoms. Transcranial Doppler provides an important measure of cerebral hemodynamics, capturing blood flow velocity through proximal intracranial vessels.
This non-invasive, continuous measure provides a valuable surrogate to global CBF. Diffuse correlation spectroscopy (DCS) is a relatively new optical technique that permits real-time, continuous, non-invasive bedside monitoring of tissue-level CBF using near-infrared light. DCS holds great promise for monitoring cerebral hemodynamics and has been validated against other measures of CBF such as ASL-MRI, Xenon CT, TCD, phase-encoded velocity mapping MRI, and fluorescent microspheres. This instrumentation has also been recently employed to quantify changes in CBF associated with position change after stroke. For this study, we use both DCS and TCD to measure changes in cerebral hemodynamics that occur during RI in healthy adults.

MATERIALS AND METHODS

Study Population

Twenty healthy adult volunteers were enrolled in this study at the Hospital of the University of Pennsylvania between August 2015 and October 2015. Subjects were eligible for the study if they were over 18 years of age but were excluded if any of the following were present: history of stroke or transient ischemic attack, known cerebrovascular disease, history of congestive heart failure, history of COPD, prior neurosurgical procedure, history of brain tumor, or active pregnancy. Subjects with well-controlled vascular risk factors, such as hypertension and hyperlipidemia, were permitted to participate in the study. The protocol was approved by the University of Pennsylvania Institutional Review Board (Protocol Number 822204). Written informed consent was signed by each participant prior to enrollment.

CBF Monitoring

Diffuse correlation spectroscopy provides a transcranial measurement of relative CBF. Briefly, the temporal fluctuations of near-infrared light scattered by moving red blood cells in tissue are detected. These fluctuations are quantified by the light intensity temporal autocorrelation function. Its decay rate is related to changes in CBF.
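The text does not spell out the DCS formalism; for reference (our addition, not from the paper), the measured intensity autocorrelation is conventionally related to the normalized electric-field autocorrelation $g_1(\tau)$ via the Siegert relation:

```latex
g_2(\tau) \;=\; \frac{\langle I(t)\, I(t+\tau) \rangle}{\langle I(t) \rangle^{2}}
        \;=\; 1 + \beta\, \lvert g_1(\tau) \rvert^{2},
```

where $\beta$ is a constant determined by the collection optics. Faster decay of $g_1(\tau)$ corresponds to faster red-blood-cell motion, and hence to higher relative CBF.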
Our instrument employed a long-coherence-length laser operating at 785 nm and four single-photon counting avalanche photodiode detectors for each hemisphere (i.e., a total of two lasers and eight detectors). Optical fibers were used to couple sources and detectors to the head via 2 cm × 5 cm rubber optical probes that were placed bilaterally at the temporal margin of the forehead, superior to the frontal sinuses; this configuration enabled measurement of tissues supplied by the anterior middle cerebral artery (MCA). The distance between the source and detector fibers was 2.5 cm, permitting average light penetration through the cortical surface (~1.25 cm). An elastic headband was placed over the optical probes to maintain secure contact during the course of the protocol (Figure 1). Data were collected from both hemispheres at a sampling frequency of 1 Hz. Mean CBF was calculated for each segment of the protocol after discarding 10 s preceding and 20 s following each RI transition. This protocol helped to negate spurious motion-induced signal fluctuations and enabled stabilization after each transition.

Blood Flow Velocity Monitoring

Mean flow velocity (MFV) within the MCA was assessed in all subjects using a Compumedics DWL TCD System (Compumedics Ltd., Singen, Germany). Probes were secured using a DiaMon® adjustable headframe (Figure 1). MCA trunks were insonated bilaterally via transtemporal windows at a depth of 40-65 mm. Emphasis was placed on obtaining reliable signal from one MCA, as there was no expectation of asymmetry in this population of healthy volunteers. MFV waveforms were sampled at a rate of 25 Hz, time-synchronized, and recorded on a computer with DCS-measured CBF. Average values were calculated for each segment of the protocol, i.e., after discarding 10 s preceding and 20 s following each RI transition. This protocol helped to negate spurious motion-induced signal fluctuations and enabled stabilization after each transition.
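The segment-averaging rule described above (discard 10 s preceding and 20 s following each RI transition, then average) can be sketched as follows. The 1 Hz sampling comes from the DCS description; the array layout, function name, and toy trace are illustrative, not from the study's pipeline:

```python
import numpy as np

def segment_mean(signal_1hz, seg_start_s, seg_end_s, guard_pre_s=10, guard_post_s=20):
    """Mean of a 1 Hz signal over [seg_start_s, seg_end_s), dropping the guard
    band after the transition into the segment (20 s) and before the transition
    out of it (10 s), per the paper's averaging rule."""
    lo = seg_start_s + guard_post_s   # skip 20 s following the transition in
    hi = seg_end_s - guard_pre_s      # skip 10 s preceding the transition out
    return float(np.mean(signal_1hz[lo:hi]))

# toy 1 Hz trace: 60 s at a baseline value of 50.0, then 60 s at 55.0
trace = np.r_[np.full(60, 50.0), np.full(60, 55.0)]
assert segment_mean(trace, 0, 60) == 50.0
assert segment_mean(trace, 60, 120) == 55.0
```

Trimming both ends of every segment keeps transient, motion-contaminated samples around each transition out of the per-segment means.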
Cardiopulmonary Monitoring

A finger photoplethysmograph (Finapres Medical Systems, Arnhem, Netherlands) was placed on the right wrist and third digit of the right hand and provided continuous measurement of mean arterial pressure (MAP), systolic blood pressure, heart rate (HR), and CO. A transcutaneous pulse-oximeter was placed on the second digit of the right hand for continuous measurement of oxygen saturation. The RI device was coupled with a sensor that provided continuous measurements of both end-tidal CO2 and respiratory rate (Figure 1). All cardiopulmonary waveforms were digitized, time-synchronized, and recorded on a computer with DCS-measured CBF at a sampling frequency of 25 Hz. Average values were calculated for each segment of the protocol after discarding 10 s preceding and 20 s following each RI transition, in order to negate spurious motion-induced signal fluctuations and enable stabilization after each transition.

RI Protocol

The Philips Inspiratory Muscle Trainer (IMT; Philips Respironics) was utilized to non-invasively augment RI. The device has a one-way, spring-loaded valve, which provides an adjustable resistance during inspiration only. No resistance is imposed during expiration. When the device is in place, inspiratory effort must increase in order to generate sufficient negative intrathoracic pressure to overcome the selected resistance. Three discrete levels of resistance were tested (6 cm H2O, 9 cm H2O, and 12 cm H2O). Every subject was exposed to all three levels of resistance in a prespecified random order. Each subject was positioned in a hospital bed, with the head-of-bed at 45°. Baseline hemodynamic data were collected for 5 min, during which the subject was breathing through a respiratory mouthpiece providing no resistance.
The IMT was then mounted to the back of the respiratory mouthpiece for a 3-min RI segment, during which the subject was instructed to breathe naturally through the IMT and maintain a relatively stable respiratory rate, in order to avoid fluctuations in end-tidal CO2, if possible. If fluctuation in respiratory rate or end-tidal CO2 occurred, the subject was reminded to breathe comfortably at a normal rate, but more intensive attempts to coach breathing were avoided. After 3 min of RI, the IMT was removed for 3 min. This 6-min cycle was repeated for each level of resistance. Subjects were blinded to the level of resistance. At the completion of the study protocol, subjects were asked if they experienced shortness of breath, chest pain, fatigue, or lightheadedness. A single neurologist at the Hospital of the University of Pennsylvania was familiar with the protocol and present for its entirety. The protocol was carried out in the CBF lab within the Hospital of the University of Pennsylvania.

Statistical Analyses

All data processing was performed while blinded to the level of RI. Mean cerebral perfusion (using both CBF and MFV) values for each RI segment were compared to the preceding 3 min of normal breathing. Pairwise comparisons were completed using Wilcoxon signed-rank tests. The Kruskal-Wallis test and Cuzick's non-parametric test of trend were used to compare perfusion measures across all RI segments. Additionally, mixed-effects linear regression was employed, using maximum likelihood to model changes in DCS and TCD across levels of RI. Models incorporated a random slope, and the covariance was modeled as unstructured. This approach was used in a prior study of DCS-CBF and head-of-bed manipulation. DCS and TCD were dependent variables. Level of RI was the independent variable and was considered to be an interval variable. The subject variable was included in the model to assess possible individual variability.
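The mixed-effects model described above can be approximated with off-the-shelf tools. The following statsmodels sketch uses synthetic data and illustrative names (`mfv`, `ri_level`, `etco2`, `subject`) rather than the study's actual dataset; it shows a random slope for RI level per subject, fitted by maximum likelihood, as the text specifies:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic repeated-measures data: 20 subjects, four RI levels each
rng = np.random.default_rng(0)
rows = []
for subject in range(20):
    subj_slope = rng.normal(0.5, 0.1)      # per-subject response to RI (made up)
    for ri_level in [0, 6, 9, 12]:         # cm H2O; 0 = no impedance
        etco2 = rng.normal(40, 2)          # synthetic end-tidal CO2, mmHg
        mfv = 60 + subj_slope * ri_level + 0.2 * (etco2 - 40) + rng.normal(0, 1)
        rows.append(dict(subject=subject, ri_level=ri_level, etco2=etco2, mfv=mfv))
df = pd.DataFrame(rows)

# fixed effects for RI level and end-tidal CO2; random slope on RI per subject
model = smf.mixedlm("mfv ~ ri_level + etco2", df,
                    groups=df["subject"], re_formula="~ri_level")
fit = model.fit(reml=False)  # maximum likelihood, as described in the text
print(fit.params["ri_level"])
```

The fixed-effect coefficient on `ri_level` is the analogue of the paper's reported RI coefficients (0.49 for MFV, 0.13 for CBF); here it recovers the synthetic slope of roughly 0.5.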
End-tidal CO2 was included in the model in order to account for possible device-related effects, which may influence perfusion independent of the proposed mechanism. Specifically, subjects were encouraged to breathe at a normal rate, yet if the device caused them to hyperventilate, MFV and CBF would be expected to decrease. Blood pressure and CO were not included in the model because they are expected to be on the causal pathway, rather than confounders. In a secondary analysis, the mixed-effects regression was repeated without the inclusion of end-tidal CO2, and the level of RI was considered to be categorical rather than interval. The sample size was derived from prior TCD data in human subjects, from which we estimated a 10% mean increase in MFV associated with RI (SD 10%). Setting power to 0.80 and significance to 0.05, 16 healthy controls would be sufficient to demonstrate the effect of the intervention. No previous literature exists to provide expectations of the mean increase in DCS-derived CBF associated with RI. However, considering the relative similarity between perfusion changes measured with TCD and DCS, we posit that the sample size calculations used for TCD measures will be applicable for CBF measured with DCS.

RESULTS

The study enrolled 20 consecutive healthy volunteers. The average age was 39 years (SD: 11 years). Fifty-five percent of volunteers were male, and 70% were Caucasian. Vascular risk factors were uncommon in the cohort: 10% had hypertension, 15% hyperlipidemia, 10% asthma, and no subjects had diabetes or coronary artery disease. No subjects were taking beta-blockers. Also, 5% were taking nodal-acting calcium channel blockers, and 10% were taking inhaled bronchodilators. There were no adverse events associated with the IMT device. Specifically, no subjects reported shortness of breath, chest pain, fatigue, or lightheadedness.
No subjects elected to terminate the protocol before completion, and there were no documented episodes of hypoxia, hypoventilation, or hyperventilation. Figure 2 provides an example of the raw time series data acquired from the study of one subject, where there is augmentation of both cortical CBF and MCA flow velocity during each level of RI, most notable with the highest level of RI. Figure 3 depicts the relationship between blood flow and RI averaged across the cohort. There was a 6.4% increase in TCD-measured MFV at the maximum level of resistance (12 cm H2O) (p = 0.004), but no significant change from baseline was noted at the low and medium levels of RI. Pairwise testing similarly compared DCS-measured CBF at each level of RI (in comparison to baseline), and while no significant differences were identified, point estimates suggest a subtle dose-response relationship between CBF and RI. Table 1 depicts all hemodynamic data, including MAP, HR, CO, and end-tidal CO2 across the range of RI. Similar to the change in MFV, an increase in MAP was noted with the highest level of RI. Importantly, when averaged over the cohort, there were no significant changes in end-tidal CO2. When all levels of RI were compared by the Kruskal-Wallis test, both TCD and DCS were significantly different across levels (p < 0.001 for both measures). Cuzick's non-parametric test of trend confirmed that this difference across levels of RI was ordered for both TCD and DCS (p < 0.001 for both measures). To better quantify the trend, mixed-effects linear regression was employed, incorporating end-tidal CO2 and individual variability in the model. This approach demonstrated that level of RI was associated with both TCD-measured MFV (coefficient: 0.49, p < 0.001) and DCS-measured microvascular CBF (coefficient: 0.13, p < 0.001).
The larger coefficient for MFV suggests that RI had a greater effect on flow through the intracranial trunk vessels as compared to cortical tissue perfusion, although the magnitude of effect was small overall. Post-estimation predicted probabilities suggest that RI of 12 cm H2O, relative to no RI, was associated with a 5.8% increase in TCD MFV and a 1.2% increase in CBF. Values predicted by the linear model were very similar to measured values at 6 cm H2O, 9 cm H2O, and 12 cm H2O. Removal of end-tidal CO2 did not influence the model for TCD or DCS. In the mixed-effects models, a Wald test identified significant variability between individuals for both TCD (p < 0.001) and DCS (p < 0.001). In a secondary analysis, the level of RI was considered to be categorical rather than interval, but the model was not significantly affected.

DISCUSSION

Non-invasive RI proved to be well tolerated and holds promise as a bedside intervention to augment cerebral perfusion. In this study of healthy volunteers, RI resulted in a small but significant increase in MFV measured by TCD. Brain tissue flow measured by DCS did not show a significant change as compared to baseline, but there appears to be a linear relationship between the level of RI and the resultant increase in both MCA trunk and tissue-level flow, independent of end-tidal CO2, although the magnitude of change was small. Changes in end-tidal CO2 are proportional to changes in CBF through CO2's potent effect on vasomotor control. Because RI influences the respiratory cycle, care was taken to quantify respiratory rate and end-tidal CO2 to ensure that the measured hemodynamic effects were a result of RI rather than a surrogate effect. While the effect of RI on TCD has been previously studied, the current study is the first to utilize measures of tissue-level cortical CBF and compare the perfusion response across three levels of RI.
In the comparison of individual medians, a significant increase from baseline flow was only noted by TCD at the highest level of resistance. This may indicate a threshold of effect, but more likely it is due to limited power to detect small differences. Similarly, DCS was unable to identify a significant change in tissue flow. The point estimates indicate a dose-response relationship, although, again, the absolute differences were small, and the study was not powered to detect them. The mixed-effects regression model for both TCD and DCS has greater power because it incorporates repeated measures for each subject. This model demonstrated a statistically significant increase in both DCS and TCD as RI increased, although the absolute difference was low. The discrepancy between TCD and DCS may reflect the fact that the two technologies are monitoring different components of the vascular system. For example, the increase in MAP at the highest level of RI mirrored changes in MCA velocity. Cerebrovascular autoregulation, which is largely imposed at the arteriolar level, may dampen the effect at the tissue level, resulting in a non-significant increase in DCS-measured CBF. Patient populations with impaired autoregulation may demonstrate greater effects of RI. While few human data are available for comparison, a study of patients with orthostatic hypotension demonstrated a 10% increase in MFV with 7 cm H2O resistance, while a study of normovolemic healthy volunteers found that RI reduced symptoms generated by orthostatic maneuvers but yielded no objective effect on MFV. Mechanistically, RI decreases intrathoracic pressure, which in turn increases venous return to the heart and ventricular preload. Depending on volume status and cardiac function, the Frank-Starling law indicates that an increase in preload results in an increase in cardiac stroke volume, as previously observed, which in turn increases vital organ perfusion, including brain perfusion.
This physiologic mechanism is more impactful in the context of hypovolemia, and the modest effect measured in the current study may be a consequence of the subjects' volume status. Additionally, as noted above, cerebral autoregulation may dampen the augmentation of tissue perfusion with RI. So, although we observed a small magnitude of effect in healthy volunteers, other populations may experience greater increases in CBF. For example, ischemic stroke impairs cerebrovascular autoregulation, and many patients with acute stroke present to the hospital in a hypovolemic state, which is associated with worse outcome. As such, RI may achieve more significant increases in cerebral perfusion in patients with acute ischemic stroke, which will be an important area of future study. Based on the proposed mechanism of increased cardiac preload, it may be surprising that RI did not increase CO, as was previously reported. It is worth noting that the Finapres continuous hemodynamic monitor does not directly measure cardiac stroke volume. Rather, it calculates stroke volume and CO based on the contour of the blood pressure waveform, so technical limitations should temper confidence in this measurement. Still, prior studies of RI that have measured an increase in CO have used a similar approach. The discrepancy between these findings may relate to the underlying volume status of the study subjects, but it would also be reasonable to consider more direct measures of stroke volume and CO in future studies. An additional mechanism by which RI may increase cerebral perfusion is through manipulation of intracranial pressure (ICP). In animal models, decreases in intrathoracic pressure (ITP) have been shown to result in reduced ICP. This may occur because of the increased venous return through the jugular veins or spinal venous system.
The potential relationship between ITP and ICP has not been studied in spontaneously breathing humans, and because the effect of interest in the current study was cerebral perfusion, an emphasis was placed on perfusion monitoring rather than a mechanistic evaluation; potential mechanisms could be explored in future studies. Adjusting for end-tidal CO2 in the regression analysis did not diminish the effect of RI. Still, end-tidal CO2 should be a key component of future studies, because if a subject were to hyperventilate during RI, the resultant hypocapnia could be expected to influence vasomotor tone and flow, confounding results. Over a longer period of time, which would be required for any kind of clinical utility, there is a theoretical possibility of respiratory fatigue, though this was not observed under the current protocol. RI was well tolerated, without any reported shortness of breath, lightheadedness, fatigue, or chest pain. Continuous pulse-oximetry demonstrated maintenance of appropriate blood oxygen saturation throughout the protocol. Low levels of RI have been shown to nearly double the work of normal physiologic breathing without signs of intolerance, but again, less than 10 min of RI was tested. Prior studies have exposed subjects to periods of RI greater than 10 min, with good tolerability, showcasing the potential of using this intervention over longer time scales than tested in this study. There are several limitations of the current study. Most notably, the detected changes in perfusion were smaller than anticipated, so the sample size was not powered to identify significant changes in perfusion during lower levels of RI. RI was limited to 3 min, so conclusions cannot directly be drawn about the effects of prolonged use. The volume status of each volunteer also serves as a significant confounder, and because volume status was not objectively quantified in this cohort, adjustment was not possible.
Response to RI may be confounded by volume status, cardiac function, or dysrhythmia, which were not objectively assessed in this cohort. In addition, DCS signals cannot be reliably recorded through hair. Thus, tissue perfusion data are restricted to the frontal lobe, leaving other territories and deeper structures unmeasured. While TCD is a commonly used surrogate of CBF, it more specifically measures blood flow velocity through the MCA trunk, so one must assume a constant arterial diameter in order to use TCD as a reliable surrogate. This is a general limitation of TCD, but in the current study, we avoided such assumptions by also using DCS, a direct CBF monitor. A fairly homogeneous population limited generalizability to specific patient populations, but the intent of this study was to demonstrate tolerability and potential for effect. Because the acute stroke population stands to benefit from an intervention that augments CBF, that population will be the subject of future study. In the study of stroke patients, attention should be paid to volume status, duration of RI, cardiac function, dysrhythmia, and infarct location.

CONCLUSION

Manipulating intrathoracic pressure via non-invasive RI was well tolerated and resulted in a small but measurable increase in cerebral perfusion in healthy individuals. While flow changes were not identified at low levels of resistance, there was a linear relationship between the level of resistance and both MCA flow and tissue flow measures. Future study in ischemic stroke patients will assess whether RI has any utility as a novel non-invasive therapy for acute ischemic stroke treatment.

ETHICS STATEMENT

This study was carried out in accordance with the Belmont Report with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the University of Pennsylvania Institutional Review Board (Protocol Number 822204).
AUTHOR CONTRIBUTIONS

CF designed the study, collected the data, interpreted the data, and drafted the manuscript. AP collected the data, analyzed the data, and revised the manuscript. JD designed the study, interpreted the data, and revised the manuscript. AY developed the technology, collected the data, and revised the manuscript. MM and SM analyzed the data, interpreted the data, and revised the manuscript. SK designed the study, interpreted the data, and revised the manuscript.

FUNDING

This work was supported by the National Institutes of Health (grant numbers R01-NS060653, P41-EB015893, and P30-NS045839) and the American Heart Association (grant number 14POST20460161).
Jolla is a Finnish company that launched a smartphone running on an open-source operating system called Sailfish back in November 2013 and it has since introduced a tablet running on the same OS, which we saw at Mobile World Congress. The Jolla Tablet is a crowdfunded project that came out of Indiegogo after smashing its goal and it will launch in Q2 of this year with a refined version of the software, dubbed Sailfish OS 2.0. We got our hands on the Jolla Tablet at the show in Barcelona to see what we make of the new software and whether the tablet does enough to float our boat. The Jolla Tablet is a very slick and sophisticated device with a slim profile and we really liked its design. It has curved edges on either side, which are finished in the same black plastic as the rear of the device, while the top and bottom are straight and finished in white. You're not looking at the slimmest 7.85-inch tablet out there with the Jolla Tablet, but that doesn't really matter because it has a lovely finish that is easy to hold in one hand. It measures 203 x 137 x 8.3mm and hits the scales at 384g, which is nice and light. The IPS display has a 2048 x 1536 resolution, which looked great from what we could tell in the time we had with it, offering bright, vibrant colours, a nice crisp image at 330ppi and decent viewing angles. On the rear is a large Jolla logo that has a glossy finish in comparison to the rest of the area surrounding it. You'll find speakers on one edge and a 5-megapixel rear camera on the other edge, while the front sports a 2-megapixel snapper. We didn't get a chance to test the cameras, but they are pretty average in terms of numbers for this size of tablet so we won't be expecting DSLR-quality shots. The Jolla Tablet's 4450mAh battery is charged via Micro-USB and there is a microSD slot next to the port for expanding the 16GB or 32GB internal storage up to 128GB.
The power button, volume rocker, headphone jack and small Jolla logo all sit on the opposite white edge, while both black edges are feature-free for a simple but effective finish. Despite having a great design though, the most interesting thing about the Jolla Tablet isn't how it looks - it's how it runs. The Sailfish OS is built on the heritage of MeeGo, which was an open-source operating system formerly developed by Nokia among others, and it has been thoroughly refined since its first appearance on the company's 2013 smartphone. Sailfish isn't Android but it is compatible with Android apps and while its full potential wasn't quite realised on the smartphone, the updated software has found its home on the tablet. You still get access to the likes of Facebook and Twitter, but there is also a Jolla Store rather than a Play Store and if you can't find what you're looking for, Jolla says "you can always make it". We also spotted the Amazon appstore though, just in case you weren't feeling that creative. Sailfish OS 2.0 comes with all sorts of neat tricks, such as swiping up from the bottom for the app launcher, or to the side to switch between apps, while another swipe will bring you to a social hub of sorts with all your accounts including Twitter, emails and your calendar events in one place. Swiping from the top allows you to change the mode, each of which will prioritise various things from schedules and inboxes to apps. The model we played with had Work, Party and Outdoor modes, for example, and you can set them to change automatically based on your calendar, or you can manually choose the mode you want. We can see this being a handy feature as it makes going from work to play nice and simple. Multi-tasking is something Jolla has clearly put at the heart of Sailfish OS 2.0 as there is also a "partner screen" that displays the functionality of a certain app, allowing you to control it from this screen, rather than opening the app in full.
Deezer was the lucky app in the hot spot on the demo model, meaning we could change the music without any effort, but it would also work for Netflix, for example, enabling you to play or pause a video quickly without waiting for the entire app to load. The Jolla Tablet has a 1.8GHz quad-core 64-bit Intel processor and 2GB of RAM under the hood, and although it was slick and fast, we couldn't judge it comparatively so we will wait until our full review to do so. We liked how Sailfish OS 2.0 and the tablet itself looked and considering it smashed its Indiegogo goal to raise over $2.25m, we clearly aren't the only ones. It was very simple, but simple is good sometimes and it will be interesting to see how it performs in the real world. The device and software we played with at MWC weren't final so we won't pass judgement just yet but so far, so good. We liked what we saw and we are looking forward to seeing more of it when it arrives in Q2 for $249.
1. Field of the Invention The present invention relates to a process for preparing a metamorphosed alkali metal titanate. 2. Description of the Prior Art Alkali metal titanates are represented by the formula Ma₂O·nTiO₂·mH₂O, wherein Ma is an alkali metal and n and m are each an integer of not more than 10, and are well known as insulating materials having excellent heat resistance and a high refractive index. Recently, many attempts have been made to modify or metamorphose such alkali metal titanates in accordance with diversified industrial needs. Thus, attempts have been made with a view to lowering the insulating property of an alkali metal titanate, in other words making an alkali metal titanate semiconductive or even conductive, without extinction of its heat resistance and of its reinforcing property for composites owing to its shape; or with a view to coloring an alkali metal titanate, which generally has a large refractive index and high whiteness and is difficult to apply to colored materials, black, blue, etc. Heretofore, there has been a well-known process for making an alkali metal titanate electroconductive, for example by heating and metamorphosing a mixture of titanium dioxide and various sodium salts under a hydrogenous atmosphere to form a hydrogenated sodium titanate. In such a process, it was necessary to heat the above-mentioned mixture at a high temperature of approximately 800° to 1200° C. in a hydrogenous atmosphere. Heating at such a high temperature in a hydrogenous atmosphere, however, is extremely dangerous. That is, the known process involved various problems to be solved to ensure safety with regard to manufacturing equipment and process controls. Furthermore, hydrogenated sodium titanates obtained by said process release hydrogen, and as a result lose conductivity, when they are brought into contact with an oxidative atmosphere. Thus, the sodium titanates were restricted in application. 
Under the circumstances, the inventors of the present invention have already proposed processes for preparing metamorphosed alkali metal titanates in the presence of a carbonous compound under a non-oxidative atmosphere (Japanese Patent Laid-open Nos. sho.58-135129 and sho.58-135130). However, it has been noted after further investigation that the carbonous compounds used in said processes and contained in the metamorphosed alkali metal titanates (1) lower the heat resistance characteristic of alkali metal titanates and (2) decrease the bulk density, so that there is difficulty in mixing and dispersing the products with other materials to obtain composites. Thus, it was necessary to separate the carbonous compounds in order to obtain metamorphosed alkali metal titanates having excellent heat resistance. It is an object of the present invention to provide a novel process for preparing metamorphosed alkali metal titanates having excellent heat resistance and electroconductivity, which are free from the disadvantages mentioned above.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

/**
 * Created by buress on 12/2/16.
 */
@Controller
public class DetailController {

    @Autowired
    private CuisineService cuisineService;

    @Autowired
    private OrderService orderService;

    public DetailController() {
        super();
    }

    // Looks up a cuisine by name and renders the detail view.
    @RequestMapping(path = "/cuisine_detail/{name}")
    public String showCuisineDetail(@PathVariable String name, final Cuisine cuisine) {
        // Populate the model attribute with the persisted cuisine's fields.
        cuisine.copy(this.cuisineService.findByName(name));
        return "cuisinedetail";
    }
}
import { MockedResponse } from '@apollo/client/testing'; import { screen, waitFor, within } from '@testing-library/react'; import { createMemoryHistory } from 'history'; import { axe } from 'jest-axe'; import * as React from 'react'; import translations from '../../../common/translation/i18n/fi.json'; import { CollectionFieldsFragment, EventTypeId, } from '../../../generated/graphql'; import { getCollectionDetailsMock } from '../../../test/apollo-mocks/collectionsDetailsMocks'; import { getEventsByIdsMock } from '../../../test/apollo-mocks/eventByIdsMocks'; import { fakeCollection, fakeEvent, fakeLocalizedObject, } from '../../../test/mockDataUtils'; import { renderWithRoute } from '../../../test/testUtils'; import { ROUTES } from '../../app/routes/constants'; import CollectionPageContainer from '../CollectionPageContainer'; const curatedEventId = 'kulke:51381'; const curatedEventName = 'Curated test event'; const collection = fakeCollection({ curatedEvents: [ `https://tapahtumat.test.kuva.hel.ninja/fi/event/${curatedEventId}?places=tprek%3A7254`, ], }) as CollectionFieldsFragment; const path = ROUTES.COLLECTION; const routes = [ROUTES.COLLECTION.replace(':slug', collection.slug)]; const draftRoutes = [ `${ROUTES.COLLECTION.replace(':slug', collection.slug)}?draft=true`, ]; const eventsByIds = [ fakeEvent({ id: curatedEventId, name: fakeLocalizedObject(curatedEventName), }), ]; const getMocks = ( collectionDetails: CollectionFieldsFragment, draft = false ): MockedResponse[] => [ getCollectionDetailsMock({ collectionDetails, variables: { draft, slug: collectionDetails.slug }, }), getEventsByIdsMock({ variables: { ids: [curatedEventId], eventType: [EventTypeId.General, EventTypeId.Course], include: ['location'], pageSize: 10, sort: 'end_time', }, eventsByIds, }), ]; it('component should be accessible', async () => { const mocks = getMocks(collection, false); const { container } = renderWithRoute(<CollectionPageContainer />, { mocks, path, routes, }); await waitFor(() 
=> { expect(screen.getByText(collection.title.fi)).toBeInTheDocument(); }); await waitFor(() => { expect(screen.queryByTestId('loading-spinner')).not.toBeInTheDocument(); }); expect(await axe(container)).toHaveNoViolations(); }); it('should show PreviewBanner if draft version is requested', async () => { const mocks = getMocks(collection, true); renderWithRoute(<CollectionPageContainer />, { mocks, path, routes: draftRoutes, }); await waitFor(() => { expect(screen.getByText(collection.title.fi)).toBeInTheDocument(); }); expect(screen.getByText(translations.commons.preview)).toBeInTheDocument(); }); it("should show 'not found' page if collection doesn't exist", async () => { renderWithRoute(<CollectionPageContainer />, { mocks: [], path, routes, }); await waitFor(() => { expect( screen.getByText(translations.collection.notFound.title) ).toBeInTheDocument(); }); expect( screen.getByText(translations.collection.notFound.linkSearchEvents) ).toBeInTheDocument(); expect( screen.getByText(translations.collection.notFound.text) ).toBeInTheDocument(); }); it('should show error hero if selected language is not supported', async () => { const mocks = getMocks( { ...collection, title: { ...collection.title, fi: '' } }, false ); renderWithRoute(<CollectionPageContainer />, { mocks, path, routes, }); await waitFor(() => { expect( screen.getByText(translations.collection.languageNotSupported.title) ).toBeInTheDocument(); }); expect( screen.getByText( translations.collection.languageNotSupported.linkSearchEvents ) ).toBeInTheDocument(); expect( screen.getByText(translations.collection.languageNotSupported.text) ).toBeInTheDocument(); }); it('should show error hero if collection is expired', async () => { const mocks = getMocks({ ...collection, expired: true }, false); renderWithRoute(<CollectionPageContainer />, { mocks, path, routes, }); await waitFor(() => { expect( screen.getByText(translations.collection.expired.title) ).toBeInTheDocument(); }); expect( 
screen.getByText(translations.collection.expired.linkSearchEvents) ).toBeInTheDocument(); expect( screen.getByText(translations.collection.expired.text) ).toBeInTheDocument(); }); it('should fetch and render curated event and scroll to it', async () => { const mocks = getMocks(collection, false); const history = createMemoryHistory(); history.push({ pathname: routes[0], state: { eventId: curatedEventId } }); const scrollIntoViewMock = jest.fn(); jest.spyOn(document, 'getElementById').mockImplementation(() => { const el = document.createElement('div'); el.scrollIntoView = scrollIntoViewMock; return el; }); renderWithRoute(<CollectionPageContainer />, { mocks, routes, path, history, }); await waitFor(() => { expect(screen.queryByTestId('loading-spinner')).not.toBeInTheDocument(); }); const eventsList = screen.getByTestId('curated-events-list'); await waitFor(() => { expect( within(eventsList).queryByText(curatedEventName) ).toBeInTheDocument(); }); expect(scrollIntoViewMock).toHaveBeenCalledWith({ behavior: 'smooth', block: 'center', }); });
/**
 * @author linhuiw
 * @desc Utility methods
 */
import * as path from 'path';
import * as _ from 'lodash';
import * as inquirer from 'inquirer';
import * as fs from 'fs';
import { PROJECT_CONFIG, KIWI_CONFIG_FILE } from './const';

function lookForFiles(dir: string, fileName: string): string {
  const files = fs.readdirSync(dir);
  for (let file of files) {
    const currName = path.join(dir, file);
    const info = fs.statSync(currName);
    if (info.isDirectory()) {
      if (file === '.git' || file === 'node_modules') {
        continue;
      }
      const result = lookForFiles(currName, fileName);
      if (result) {
        return result;
      }
    } else if (info.isFile() && file === fileName) {
      return currName;
    }
  }
}

/**
 * Get the project configuration
 */
function getProjectConfig() {
  const configFile = path.resolve(process.cwd(), `./${KIWI_CONFIG_FILE}`);
  let obj = PROJECT_CONFIG.defaultConfig;
  if (configFile && fs.existsSync(configFile)) {
    obj = { ...obj, ...JSON.parse(fs.readFileSync(configFile, 'utf8')) };
  }
  return obj;
}

/**
 * Get the root directory of the language resources
 */
function getKiwiDir() {
  const config = getProjectConfig();
  if (config) {
    return config.kiwiDir;
  }
}

/**
 * Get the directory for the given language
 * @param lang
 */
function getLangDir(lang) {
  const langsDir = getKiwiDir();
  return path.resolve(langsDir, lang);
}

/**
 * Depth-first traversal of all string properties in an object, i.e. the messages
 */
function traverse(obj, cb) {
  function traverseInner(obj, cb, path) {
    _.forEach(obj, (val, key) => {
      if (typeof val === 'string') {
        cb(val, [...path, key].join('.'));
      } else if (typeof val === 'object' && val !== null) {
        traverseInner(val, cb, [...path, key]);
      }
    });
  }
  traverseInner(obj, cb, []);
}

/**
 * Get all messages
 */
function getAllMessages(lang: string, filter = (message: string, key: string) => true) {
  const srcLangDir = getLangDir(lang);
  let files = fs.readdirSync(srcLangDir);
  files = files.filter(file => file.endsWith('.ts') && file !== 'index.ts').map(file => path.resolve(srcLangDir, file));
  const allMessages = files.map(file => {
    const { default: messages } = require(file);
    const fileNameWithoutExt = path.basename(file).split('.')[0];
    const 
flattenedMessages = {};
    traverse(messages, (message, path) => {
      const key = fileNameWithoutExt + '.' + path;
      if (filter(message, key)) {
        flattenedMessages[key] = message;
      }
    });
    return flattenedMessages;
  });
  return Object.assign({}, ...allMessages);
}

/**
 * Retry helper
 * @param asyncOperation
 * @param times
 */
function retry(asyncOperation, times = 1) {
  let runTimes = 1;
  const handleReject = e => {
    if (runTimes++ < times) {
      return asyncOperation().catch(handleReject);
    } else {
      throw e;
    }
  };
  return asyncOperation().catch(handleReject);
}

/**
 * Apply a timeout
 * @param promise
 * @param ms
 */
function withTimeout(promise, ms) {
  const timeoutPromise = new Promise((resolve, reject) => {
    setTimeout(() => {
      reject(`Promise timed out after ${ms} ms.`);
    }, ms);
  });
  return Promise.race([promise, timeoutPromise]);
}

/**
 * Translate using Google Translate
 */
function translateText(text, toLang) {
  const CONFIG = getProjectConfig();
  const options = CONFIG.translateOptions;
  const { translate: googleTranslate } = require('google-translate')(CONFIG.googleApiKey, options);
  return withTimeout(
    new Promise((resolve, reject) => {
      googleTranslate(text, 'zh', PROJECT_CONFIG.langMap[toLang], (err, translation) => {
        if (err) {
          reject(err);
        } else {
          resolve(translation.translatedText);
        }
      });
    }),
    5000
  );
}

function findMatchKey(langObj, text) {
  for (const key in langObj) {
    if (langObj[key] === text) {
      return key;
    }
  }
  return null;
}

function findMatchValue(langObj, key) {
  return langObj[key];
}

/**
 * Flatten an object
 * @param obj the original object
 * @param prefix
 */
function flatten(obj, prefix = '') {
  var propName = prefix ? prefix + '.' 
: '',
    ret = {};
  for (var attr in obj) {
    if (_.isArray(obj[attr])) {
      var len = obj[attr].length;
      ret[attr] = obj[attr].join(',');
    } else if (typeof obj[attr] === 'object') {
      _.extend(ret, flatten(obj[attr], propName + attr));
    } else {
      ret[propName + attr] = obj[attr];
    }
  }
  return ret;
}

/**
 * Get the translation source type
 */
async function getTranslateOriginType() {
  const { googleApiKey, baiduApiKey } = getProjectConfig();
  let translateType = ['Google', 'Baidu'];
  if (!googleApiKey) {
    translateType = translateType.filter(item => item !== 'Google');
  }
  if (!baiduApiKey || !baiduApiKey.appId || !baiduApiKey.appKey) {
    translateType = translateType.filter(item => item !== 'Baidu');
  }
  if (translateType.length === 0) {
    console.log('Please configure googleApiKey or baiduApiKey');
    return { pass: false, origin: '' };
  }
  if (translateType.length == 1) {
    return { pass: true, origin: translateType[0] };
  }
  const { origin } = await inquirer.prompt({
    type: 'list',
    name: 'origin',
    message: 'Please select a translation source',
    default: 'Google',
    choices: ['Google', 'Baidu']
  });
  return { pass: true, origin: origin };
}

export {
  getKiwiDir,
  getLangDir,
  traverse,
  retry,
  withTimeout,
  getAllMessages,
  getProjectConfig,
  translateText,
  findMatchKey,
  findMatchValue,
  flatten,
  lookForFiles,
  getTranslateOriginType
};
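The `retry` and `withTimeout` helpers in the module above compose naturally. A minimal standalone sketch (with simplified inline copies of both helpers, so nothing from the module needs to be imported) of guarding a flaky async call:

```typescript
// Simplified copies of the module's retry/withTimeout helpers, inlined so the
// sketch runs standalone; the semantics mirror the functions above.
function retry<T>(asyncOperation: () => Promise<T>, times = 1): Promise<T> {
  let runTimes = 1;
  const handleReject = (e: unknown): Promise<T> => {
    if (runTimes++ < times) {
      return asyncOperation().catch(handleReject); // try again
    }
    throw e; // out of attempts: propagate the last error
  };
  return asyncOperation().catch(handleReject);
}

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeoutPromise = new Promise<T>((_, reject) =>
    setTimeout(() => reject(`Promise timed out after ${ms} ms.`), ms)
  );
  return Promise.race([promise, timeoutPromise]);
}

// A fake translation request that fails twice before succeeding.
let attempts = 0;
const flaky = (): Promise<string> =>
  new Promise((resolve, reject) => {
    attempts += 1;
    if (attempts < 3) {
      reject(new Error('network error'));
    } else {
      resolve('hello');
    }
  });

// Allow up to 3 attempts in total, but give up entirely after 500 ms.
withTimeout(retry(flaky, 3), 500).then(result => {
  console.log(result, attempts); // succeeds on the third attempt
});
```

Note that with `times = 3` the operation runs at most three times in total (one initial call plus two retries), which matches the `runTimes++ < times` guard in the module.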
No one would’ve blamed Ben Cherington if he had bowed out of Saberseminar, the annual meeting of baseball’s analytical minds. When he’d accepted the invitation, months before the conference was scheduled to start, Cherington was the GM of the Red Sox. On Saturday, he was a civilian, newly resigned from the organization he’d worked for since Pedro’s peak. Still, he stood on a Boston University stage, leading with a self-deprecating reference to his present unemployment, dispensing hard-earned advice about drawing conclusions from data, and taking questions from the crowd. In a kinder world, the first question would’ve been a softball about David Ortiz, or a request to explain Alejandro De Aza. Instead, it started with, “So … Hanley Ramirez.” It took several seconds for the resulting laughter to subside. It’s not unusual for high-priced free agents to go from headline to punch line in the span of a single contract, but it doesn’t often happen as quickly as it has for Boston’s top winter targets, Ramirez and Pablo Sandoval. Seeking a quick recovery from a last-place 2014 finish, the Red Sox signed the two stars after a season in which they’d combined for more than six WAR and helped take their teams, the Dodgers and Giants, to the playoffs. So far, the Sox have received minus-2.5 WAR for their $183 million, nine-year investment, and the team is only three wins ahead of last year’s 71-win pace. The Ramirez and Sandoval acquisitions, completed on the same payroll-raising day late last November, didn’t make many lists of the winter’s best bargains, but they also weren’t widely condemned. 
Critics echoed the standard concerns expressed about any long-term deal — the players are past their primes, they won’t age well, they don’t address the real roster deficiencies — along with a louder-than-usual murmur of “they’re question marks in the clubhouse.” But even those who thought the Red Sox’s deals were so big that they were likely to fail shouldn’t sprain a shoulder patting themselves on the back. There’s some amount of money at which anyone would’ve wanted Cherington’s new toys. If not $183 million, then what about $150 million? $100 million? $50 million? In retrospect, almost any amount would have been an overpay, at least for this season. According to Ultimate Zone Rating, Ramirez and Sandoval have been the worst and second-worst defenders in baseball, relative to their positional peers. They’re also having their worst offensive seasons, posting .254/.296/.435 and .259/.307/.392 lines, respectively. Start with bad defense, flavor with below-average offense, and bake for a combined 213 games, and you get two of the first five names on the list of lowest WAR totals. Boston’s problems go well beyond Ramirez and Sandoval, but had the pair played up to their projections, the Sox (and, in all likelihood, Cherington) would be contending for a wild card instead of stuck in last place. Many critics can claim they wouldn’t have signed Sandoval and Ramirez, particularly at list price, but no one can claim to have been bearish enough. My colleague Jonah Keri, whose response to the signings stressed the risk the Sox were assuming, nonetheless conceded that adding two of the winter’s top-rated free agents — one of whom played the other’s ideal position — made Boston “a better team today than it was yesterday.” At the time, that seemed almost inarguable. Nine months later, we know it not to be true. We could blame Sandoval’s decline on his weight, which observers are always eager to do. 
We could blame Ramirez’s offensive slump on his early-May collision with the left-field wall, which came when he was slugging .609. But Hanley’s performance in left is the most stubborn cipher. And if his inexperience in the outfield contributed to that collision, which in turn took down his offense, then it wouldn’t be a stretch to pin the blame for his whole, miserable season on the position switch from shortstop. Or maybe on Cherington, who orchestrated the switch. After his opening laugh line, the Saberseminar inquisitor asked Cherington why the Sox had assumed that putting Hanley in the outfield would work. “We made a bet based on the history of what players look like when they move from a middle-infield position to another position,” Cherington said. “And there’s data that can help us try to make an educated guess on that.” $88 million is a big stake for a “bet” or an “educated guess.” But no signing is a certainty. And while Boston’s data might be better than ours, we can still get a good idea of what the Red Sox were relying on when they concluded that Hanley could handle left field. It’s not just Mookie Betts’s smooth move from second to center: Over the 13 full seasons for which we have UZR data, 332 players — ranging from utility types to Albert Pujols — spent time at both shortstop and left field in the same year. Weighted by innings played at each position, their defense rated 3.2 runs below average per 150 games at shortstop relative to other shortstops, and 4.2 runs above average per 150 games relative to other left fielders. It isn’t uncommon for a below-average part-timer at short to be above average in an outfield corner. Admittedly, there isn’t much precedent for the direct short-to-left transition: In the UZR era, no player has switched from full-time shortstop to full-time left fielder in consecutive seasons. 
If Hanley was asked to be a trailblazer, though, it was only because most shortstops don’t have the bat or the bulk to play left, and because most shortstops’ gloves are too good to hide at a relatively unimportant position. Shortstops who age off the position or are pushed aside by better players tend to slide over to second or third, or even to become successful center fielders, like Billy Hamilton or, before that, Melvin Upton (when he still went by B.J.). Physically, Ramirez belonged at third base, the position he played in 2012. And the third-to-left gambit has worked out well before. In placing their big bet, the Sox were counting on positional scarcity and positional adjustments, two backbones of WAR. It’s basic baseball knowledge that the positions on the left side of the defensive spectrum (first base, left field) are easier than those on the opposite end (shortstop, catcher), which means that more people can play them. Given a choice between an average defensive shortstop and an average defensive left fielder with the same offensive ability, almost any team would take the shortstop. It doesn’t follow that every shortstop could excel in left, but many of them could hang there. Using the performance of past position changers as a guide, WAR attempts to quantify that relationship, giving shortstops a 7.5-run value boost and subtracting the same amount from left fielders. Left field at Fenway is the smallest in baseball, at 103,100 square feet — 2,000 square feet smaller than the next-smallest left field, and almost 6,000 square feet smaller than the MLB average. Unpredictable bounces off the Green Monster and its manual scoreboard make the corner more difficult than it would be with a normal outfield fence, but Boston’s evaluation options were limited. Asking Ramirez to shag flies would’ve been a breach of free-agency etiquette. 
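The positional-adjustment arithmetic that WAR relies on can be sketched with the numbers above (a hypothetical TypeScript illustration, not the Red Sox's actual model; the run values are the per-150-game figures cited earlier):

```typescript
// Toy sketch of WAR-style positional adjustments. Numbers are illustrative,
// taken from the article's UZR discussion; this is not a real projection model.
const POSITIONAL_ADJ: Record<string, number> = { SS: 7.5, LF: -7.5 }; // runs per season

// Total defensive value = fielding runs vs. positional peers + positional adjustment.
function defensiveValue(fieldingRunsVsPeers: number, position: string): number {
  return fieldingRunsVsPeers + POSITIONAL_ADJ[position];
}

// The dual-position sample above: -3.2 runs at SS, +4.2 in LF (per 150 games).
const atShort = defensiveValue(-3.2, 'SS'); // -3.2 + 7.5 = +4.3
const inLeft = defensiveValue(4.2, 'LF');   // +4.2 - 7.5 = -3.3
console.log(atShort, inLeft);
```

By WAR's convention, the same glove that rates below average at shortstop comes out ahead overall there, because the +7.5-run scarcity bonus outweighs the -3.2 fielding mark, while an above-average left fielder gives most of his edge back to the -7.5 adjustment.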
And even if Fenway hadn’t looked like this for much of the winter, a few offseason fungoes wouldn’t have revealed how Ramirez would react in a real game, after months of preparation. As Cherington acknowledged on Saturday, “there was no way to know for sure.” That admission makes Cherington sound smarter than another Red Sox evaluator, who insisted this spring that the relocation carried no risk. Logically, though, there was little reason to expect Ramirez to be even further below the league average at an “easy” position than he was at one of the hardest. And yet, by our best estimates, he has been. According to Inside Edge, Hanley has made 86 percent of possible plays in left this season. Leaguewide — and Red Sox rosterwide, excluding Ramirez — left fielders have made 91 percent of possible plays. That sounds like a small gap, the difference between a high B and a medium A-. But given that most outfield opportunities are cans of corn, a small difference in balls caught can be costly. And the stats are backed up by a gag reel of eye-gouging misplays, highlighted by the following six, which IE classified as “likely” or “almost certain” outs. That’s not even counting the throws. Yesterday, I talked to a former teammate of Ramirez, who told me that the conversion was doomed from the start — that because of Ramirez’s sensitivity and self-doubt, his first mistake was destined to snowball into many more. This is where a crusty columnist would say that the Red Sox trusted too much in their spreadsheets and saw Hanley not as a human, but as a position and a Player ID. That seems like an oversimplification: It’s not as if the Sox queried their database and called it a day. 
They also talked to Ramirez, who sounded super-confident, and undoubtedly consulted scouts and other sources for insights into his psyche. So what didn’t the Sox see? In a 2014 essay entitled “N=1,” Baseball Prospectus author Russell Carleton argued that baseball analysis has focused too often on “Large N” research — studies based on big populations of players — rather than drilling down on the individual. “We’ve come to ignore, or worse, dismiss the thought that players might react to situations in different ways,” Carleton wrote. The “next big thing,” he added, is “understanding the nuances of each player on a case-by-case basis” instead of relying only on “strategies that can be applied across entire organizations or in every situation.” Maybe the takeaway is that the Sox rushed through their homework, or that they discarded the parts they couldn’t square with the stats. But maybe it’s just that the homework was hard. In groups, players are predictable. In isolation, they’re unknowns. In 2004, Bill James — who works in the Red Sox front office and was in the audience at Saberseminar — wrote a defense of uncertainty called “Underestimating the Fog.” He argued that many counterintuitive tenets that were seen as central to sabermetrics at the time — assertions like “clutch hitters don’t exist,” “winning or losing close games is luck,” or “batters don’t get hot or cold” — weren’t actually settled science, and that our failure to find proof that some alleged skills existed wasn’t conclusive proof that they didn’t. James listed nine such maxims, but he might have added a 10th: “It’s easier to play positions on the easier end of the defensive spectrum.” Or, more specifically, “it’s easier to play left field than shortstop.” For most players, this is probably true. In Hanley’s case, though, left field is in the fog. There were plenty of reasons to dislike the Ramirez signing: his salary, his fragility, his age, even his attitude. 
But “because he’ll be one of the worst players in baseball” wasn’t one of them. Cherington deserved his dismissal, but we were all wrong about Ramirez, to different degrees. We just weren’t the ones with the checkbook, or up on that Saberseminar stage. Thanks to Ryan Sullivan for audio assistance and Keith Isley of Inside Edge for research assistance.
<filename>evaluation_metrics/precision_recall_curves.py
# Precision-recall curve for the default SVC classifier (with balanced class weights)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve
from sklearn.svm import SVC

dataset = load_digits()
X, y = dataset.data, dataset.target == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# create a two-feature input vector matching the example plot above
jitter_delta = 0.25
X_twovar_train = X_train[:, [20, 59]] + np.random.rand(X_train.shape[0], 2) - jitter_delta
X_twovar_test = X_test[:, [20, 59]] + np.random.rand(X_test.shape[0], 2) - jitter_delta

# train model with balanced class weights
clf = SVC(kernel='linear', class_weight='balanced').fit(X_twovar_train, y_train)
y_scores = clf.decision_function(X_twovar_test)

precision, recall, thresholds = precision_recall_curve(y_test, y_scores)
# find the point on the curve closest to the zero decision threshold
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]

# plot precision recall curve
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.title("Precision-recall curve: SVC, class_weight = 'balanced'")
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize=12, fillstyle='none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.gca().set_aspect('equal')
plt.show()
print('At zero threshold, precision: {:.2f}, recall: {:.2f}'.format(closest_zero_p, closest_zero_r))
package workflow

import "testing"

func TestEvent_MatchPaths(t *testing.T) {
	// TODO: check whether GitHub webhook change paths have a leading slash or not
	e := Event{
		Paths: []string{"**.go"},
	}
	m, err := e.MatchPaths([]string{"foo/main.go"})
	if err != nil {
		t.Fatalf("error: %s", err)
	}
	if !m {
		t.Errorf("MatchPaths wants true but got false")
	}
}
/* * Copyright © 2021-present Arcade Data Ltd (<EMAIL>) * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.arcadedb.query.sql.executor; import com.arcadedb.TestHelper; import org.junit.jupiter.api.Assertions; import org.junit.jupiter.api.Test; import java.util.Optional; /** * @author <NAME> (<EMAIL>-(at)-orientdb.<EMAIL>) */ public class ExplainStatementExecutionTest extends TestHelper { @Test public void testExplainSelectNoTarget() { ResultSet result = database.query("sql", "explain select 1 as one, 2 as two, 2+3"); Assertions.assertTrue(result.hasNext()); Result next = result.next(); Assertions.assertNotNull(next.getProperty("executionPlan")); Assertions.assertNotNull(next.getProperty("executionPlanAsString")); Optional<ExecutionPlan> plan = result.getExecutionPlan(); Assertions.assertTrue(plan.isPresent()); Assertions.assertTrue(plan.get() instanceof SelectExecutionPlan); result.close(); } }
It is NHL Playoff time and that means that your favorite teams will be providing you memories that will last a lifetime (both good and bad). The New Jersey Devils missed the playoffs for the second time in three seasons, but over the course of their existence they have provided their fan base with some great memories and highlight-reel goals after the regular season has ended. It was hard to narrow it down to just ten, but I was not going to bore all of you with a top-25 list or some cheesy slideshow like some other websites do. Instead, I listed what I think are the ten most memorable goals in New Jersey Devils playoff history. I was at four of these games; how many were you at? If you feel I missed one (or two), or you think I was spot on, be sure to let me know in the comments section below and also let us know which ones you were at: Claude Lemieux at Philadelphia June 11, 1995 (Game 5 of ECF) The game was tied 2-2. The series was also tied 2-2 when Claude Lemieux came down the right wing boards at the Spectrum and blasted a shot past Flyers goalie Ron Hextall with 44.2 seconds left in regulation. I remember where I was watching and who I was with, truly a memorable moment in Devils’ history. Lemieux’s goal ended up as the GWG and two days later New Jersey would eliminate Philadelphia and clinch their first trip to the Stanley Cup Finals against the heavily-favored Detroit Red Wings. Lemieux stuns the city of Brotherly Love: Martin Brodeur vs Montreal April 17, 1997 (Game 1 of ECQF) This was before the trapezoid was behind NHL nets (or as I like to refer to it: the idiot box). One of the reasons he eventually became legendary was Martin Brodeur’s puck-handling ability. With the Canadiens down by two late in the game they pulled their goalie Jocelyn Thibault for an extra attacker and fired the puck into the Devils’ zone. Brodeur retrieved it and quickly fired down the ice towards the empty Montreal net… and right in. 
Even though he had a mask on, you could see Brodeur smiling through his cage. When you’re ahead by 2, you can do this: Patrik Elias at Philadelphia May 26, 2000 (Game 7 of ECF) The Devils were attempting to make a miraculous comeback against the Flyers in the East Finals after falling behind 1-3. Game 7 of this series between the bitter rivals is usually most remembered for Scott Stevens’ thunderous shoulder-check that left Eric Lindros out cold and concussed (again) on the ice. But late in the third period the best offensive player in team history — Patrik Elias — slipped a rebound past Brian Boucher with 2:32 left to give New Jersey a 2-1 lead/eventual win, completing the improbable comeback. Elias makes quick work of Dan McGillis, and then Boucher: Jason Arnott at Dallas June 10, 2000 (Game 6 of SCF) Another game I remember exactly where I was, and Gary Thorne’s call of the winning goal will always stick in my mind: “The New Jersey Devils have won the Stanley Cup! Jason Arnott with the Game-Winning Goal!!” A few nights earlier the Devils & Dallas Stars played an epic 1-0 triple-overtime game that Mike Modano ended in the wee hours of the morning and prevented New Jersey from winning the Stanley Cup on home ice. In Dallas, it took only two overtimes for Jason Arnott to bury a brilliant pass from Elias past Ed Belfour and clinch the second Stanley Cup in franchise history. That one was for Sykora! Arnott’s goal still gives fans chills: http://www.youtube.com/watch?v=6szYBqMq_6k Grant Marshall vs Tampa May 2, 2003 (Game 5 of ECSF) This was by far the longest game I have ever attended, as I was a rookie reporter interning with a living legend. Sure the Devils were up on the Tampa Bay Lightning 3-1 in the series heading into Game 5, but that Lightning team was dangerous, as evidenced by their Stanley Cup championship the following year. Also of all people to score the goal, Grant Marshall was certainly not at the top of your list. 
But as he did his whole career, he battled, found a loose puck and put it past surprise starter John Grahame (thanks, John Tortorella) midway through the third overtime in the second-longest Devils’ playoff game ever. The unlikeliest of heroes clinches the series with his third goal of the series:

Jeff Friesen at Ottawa May 23, 2003 (Game 7 of ECF) Jeff Friesen scored two goals during the first six games of the series against the Ottawa Senators, and both were GWGs. In Game 7, on the road, New Jersey found itself down 2-0 by the midpoint of the game, and the Sens appeared on their way to their first-ever Stanley Cup Finals. But the Devils tied it 2-2 in the third period, and Friesen, who had drawn the ire of coach Pat Burns earlier in the period for mental mistakes, found himself streaking down the middle of the ice alone before burying a pass behind Patrick Lalime on a two-on-one (with Marshall), capping another improbable Devils comeback, on the road, in a Game 7, to punch their ticket to the Finals. How high did you jump after Friesen’s goal?

Mike Rupp vs Anaheim June 9, 2003 (Game 7 of SCF) Out of all of the goals on this list, this one is likely the least dramatic, but it certainly holds a place in the hearts and minds of Devils fans. New Jersey and the Mighty Ducks of Anaheim (yes, that’s what they were called back then) had split the first six games of the series, with each team winning its three home games. Joe Nieuwendyk was unable to play in the series because of a back injury, so little-used and little-known Mike Rupp took his place and had the game of his life in Game 7. Rupp scored the first and third goals in a 3-0 whitewashing of the Ducks to clinch the third championship in team history. This was Brodeur’s THIRD 3-0 win in the Finals, but somehow he didn’t win the Conn Smythe Trophy! Rupp deflects Scott Niedermayer’s shot right through J.S.
Giguere’s five-hole:

Travis Zajac vs Florida April 24, 2012 (Game 6 of ECQF) The Florida Panthers were giving the Devils all they could handle in their first-round series, so much so that the sixth-seeded Devils found themselves on the brink of elimination heading into Game 6 against the Southeast Division champions. Travis Zajac didn’t play much that season due to an off-ice Achilles injury, but he certainly found his game in the playoffs, starting with his Game 6 heroics that gave his team a chance to play in a do-or-die Game 7 in the Sunshine State (see below). Zajac’s goal was the first of four overtime winners that New Jersey scored on their magic carpet ride through the Eastern Conference playoffs. Hey! Everybody look at Ilya Kovalchuk, while I score this goal:

Adam Henrique at Florida April 26, 2012 (Game 7 of ECQF) Already a familiar name in New Jersey, Adam Henrique really appeared on the NHL radar in the 2012 postseason, beginning with his double-overtime goal that propelled the Devils out of the first round for the first time since 2007. Sure, Henrique was a Calder candidate after his breakout rookie season, but this goal (and his next OT winner) cemented his place in NHL history as a prime-time performer. The legend of Henrique’s mystique begins:

Adam Henrique vs New York May 25, 2012 (Game 6 of ECF) Since 1994, fans of the New Jersey Devils have, for some reason, been haunted by Stephane Matteau’s Game 7, double-overtime goal that pushed the New York Rangers into the Stanley Cup Finals for the first time in eons, a Cup they went on to win against the Vancouver Canucks. I said ‘for some reason’ because after 1994 New Jersey won not one, not two, but THREE Stanley Cups, and appeared in another Finals (2001); the Rangers haven’t been back since. After Henrique’s quickie overtime goal in 2012 that sent the underdog Devils to the Finals, it’s possible that those demons of 1994 have finally been put to bed.
Even though New Jersey eventually lost to the Los Angeles Kings, Henrique’s goal finally gave fans a comeback in the never-ending Rangers-Devils rivalry and, until the Rangers reach the Finals again, the upper hand. Henrique! It’s Over!! Dan Rice can be reached via Twitter: @DRdiabloTHW or via Email: drdiablo321@yahoo.com
// XLAutomation.h: interface for the CXLAutomation class.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_XLAUTOMATION_H__E020CE95_7428_4BEF_A24C_48CE9323C450__INCLUDED_)
#define AFX_XLAUTOMATION_H__E020CE95_7428_4BEF_A24C_48CE9323C450__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

class CXLAutomation
{
// Argument-marshalling limits and flags for ExlInvoke.
#define MAX_DISP_ARGS 10
#define DISPARG_NOFREEVARIANT 0x01
#define DISP_FREEARGS 0x02
#define DISP_NOSHOWEXCEPTIONS 0x03

// Excel automation constants (sheet types, chart types, directions).
#define xlWorksheet -4167
#define xl3DPie -4102
#define xlRows 1
#define xlXYScatter -4169
#define xlXYScatterLines 74
#define xlXYScatterSmoothNoMarkers 73
#define xlXYScatterSmooth 72
#define xlXYScatterLinesNoMarkers 75
#define xlColumns 2
#define xlNormal -4143
#define xlUp -4162

public:
	BOOL OpenExcelFile(CString szFileName);
	BOOL InsertPictureToWorksheet(BYTE* pImage, int Column, int Row, double dPicWidth, double dPicHeight);
	BOOL PlaceImageToClipboard(BYTE* pImage);
	BOOL InsertPictureToWorksheet(CString szFileName, int Column, int Row, double dPicWidth, double dPicHeight);
	CString GetCellValueCString(int nColumn, int nRow);
	BOOL SaveAs(CString szFileName, int nFileFormat, CString szPassword, CString szWritePassword, BOOL bReadOnly, BOOL bBackUp);
	BOOL DeleteRow(long nRow);
	BOOL ReleaseExcel();
	BOOL PasteStringToWorksheet(CString pDataBuffer);
	BOOL UpdatePlotRange(int nYColumn);
	BOOL AddArgumentCStringArray(LPOLESTR lpszArgName, WORD wFlags, LPOLESTR *paszStrings, int iCount);
	BOOL SetRangeValueDouble(LPOLESTR lpszRef, double d);
	BOOL CreateXYChart(int nYColumn);
	BOOL SetCellsValueToString(double Column, double Row, CString szStr);
	BOOL AddArgumentOLEString(LPOLESTR lpszArgName, WORD wFlags, LPOLESTR lpsz);
	BOOL AddArgumentCString(LPOLESTR lpszArgName, WORD wFlags, CString szStr);
	BOOL CreateWorkSheet();
	BOOL AddArgumentDouble(LPOLESTR lpszArgName, WORD wFlags, double d);
	BOOL AddArgumentBool(LPOLESTR lpszArgName, WORD wFlags, BOOL b);
	BOOL AddArgumentInt2(LPOLESTR lpszArgName, WORD wFlags, int i);
	BOOL AddArgumentDispatch(LPOLESTR lpszArgName, WORD wFlags, IDispatch *pdisp);
	void AddArgumentCommon(LPOLESTR lpszArgName, WORD wFlags, VARTYPE vt);
	BOOL InitOLE();
	CXLAutomation();
	CXLAutomation(BOOL bVisible);
	virtual ~CXLAutomation();

protected:
	void ShowException(LPOLESTR szMember, HRESULT hr, EXCEPINFO *pexcep, unsigned int uiArgErr);
	void ReleaseDispatch();
	BOOL SetExcelVisible(BOOL bVisible);
	void ReleaseVariant(VARIANTARG *pvarg);
	void ClearAllArgs();
	void ClearVariant(VARIANTARG *pvarg);

	int m_iArgCount;
	int m_iNamedArgCount;
	VARIANTARG m_aVargs[MAX_DISP_ARGS];
	DISPID m_aDispIds[MAX_DISP_ARGS + 1];        // one extra for the member name
	LPOLESTR m_alpszArgNames[MAX_DISP_ARGS + 1]; // used to hold the argnames for GetIDs
	WORD m_awFlags[MAX_DISP_ARGS];

	BOOL ExlInvoke(IDispatch *pdisp, LPOLESTR szMember, VARIANTARG *pvargReturn, WORD wInvokeAction, WORD wFlags);

	IDispatch *m_pdispExcelApp;
	IDispatch *m_pdispWorkbook;
	IDispatch *m_pdispWorksheet;
	IDispatch *m_pdispActiveChart;

	BOOL StartExcel();
};

#endif // !defined(AFX_XLAUTOMATION_H__E020CE95_7428_4BEF_A24C_48CE9323C450__INCLUDED_)
def select(n):
    """Recursively descend the tree, at each level following the child
    that maximizes the UCB score, until a leaf node is reached."""
    if n.children:
        # n.children maps child nodes to edge data; pick the child with
        # the highest UCB value and keep descending.
        return select(max(n.children.keys(), key=ucb))
    else:
        return n
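The fragment above is the selection step of Monte Carlo tree search; the `ucb` function and node structure are not shown. A minimal runnable sketch, assuming a hypothetical `Node` class with `children`, `visits`, `value`, and `parent` fields and a standard UCB1 score:

```python
import math


class Node:
    """Hypothetical tree node; not part of the original fragment."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}  # child Node -> edge payload (e.g. a move)
        self.visits = 0
        self.value = 0.0


def ucb(n, c=1.4):
    # UCB1: mean value plus an exploration bonus based on how often
    # the parent has been visited relative to this child.
    if n.visits == 0:
        return float("inf")
    return n.value / n.visits + c * math.sqrt(
        math.log(n.parent.visits) / n.visits
    )


def select(n):
    # Descend by repeatedly taking the child with the highest UCB score.
    if n.children:
        return select(max(n.children.keys(), key=ucb))
    return n
```

With these definitions, `select(root)` walks from the root to a leaf, preferring rarely visited children whose exploration bonus outweighs a sibling's higher mean value.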
mod external_data_readers; pub use external_data_readers::ExternalDataReaders; use std::{error, fmt, io}; use bytes::Buf; use noodles_bam as bam; use noodles_core::Position; use noodles_sam::{ self as sam, record::{quality_scores::Score, sequence::Base}, AlignmentRecord, }; use super::num::get_itf8; use crate::{ container::ReferenceSequenceId, data_container::{ compression_header::{ data_series_encoding_map::DataSeries, encoding::Encoding, preservation_map::tag_ids_dictionary, }, CompressionHeader, }, huffman::CanonicalHuffmanDecoder, record::{ feature::{self, substitution}, Feature, Flags, NextMateFlags, }, BitReader, Record, }; #[allow(clippy::enum_variant_names)] #[derive(Clone, Debug, Eq, PartialEq)] pub enum ReadRecordError { MissingDataSeriesEncoding(DataSeries), MissingTagEncoding(tag_ids_dictionary::Key), MissingExternalBlock(i32), } impl error::Error for ReadRecordError {} impl fmt::Display for ReadRecordError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::MissingDataSeriesEncoding(data_series) => { write!(f, "missing data series encoding: {:?}", data_series) } Self::MissingTagEncoding(key) => write!(f, "missing tag encoding: {:?}", key), Self::MissingExternalBlock(block_content_id) => { write!(f, "missing external block: {}", block_content_id) } } } } pub struct Reader<'a, CDR, EDR> where CDR: Buf, EDR: Buf, { compression_header: &'a CompressionHeader, core_data_reader: BitReader<CDR>, external_data_readers: ExternalDataReaders<EDR>, reference_sequence_id: ReferenceSequenceId, prev_alignment_start: Option<Position>, } impl<'a, CDR, EDR> Reader<'a, CDR, EDR> where CDR: Buf, EDR: Buf, { pub fn new( compression_header: &'a CompressionHeader, core_data_reader: BitReader<CDR>, external_data_readers: ExternalDataReaders<EDR>, reference_sequence_id: ReferenceSequenceId, initial_alignment_start: Option<Position>, ) -> Self { Self { compression_header, core_data_reader, external_data_readers, reference_sequence_id, 
prev_alignment_start: initial_alignment_start, } } pub fn read_record(&mut self) -> io::Result<Record> { let bam_bit_flags = self.read_bam_bit_flags()?; let cram_bit_flags = self.read_cram_bit_flags()?; let mut record = Record { bam_bit_flags, cram_bit_flags, ..Default::default() }; let read_length = self.read_positional_data(&mut record)?; self.read_read_names(&mut record)?; self.read_mate_data(&mut record, bam_bit_flags, cram_bit_flags)?; record.tags = self.read_tag_data()?; if bam_bit_flags.is_unmapped() { self.read_unmapped_read(&mut record, cram_bit_flags, read_length)?; } else { self.read_mapped_read(&mut record, cram_bit_flags, read_length)?; } self.prev_alignment_start = record.alignment_start(); Ok(record) } fn read_bam_bit_flags(&mut self) -> io::Result<sam::record::Flags> { let encoding = self .compression_header .data_series_encoding_map() .bam_bit_flags_encoding(); decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| u16::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .map(sam::record::Flags::from) } fn read_cram_bit_flags(&mut self) -> io::Result<Flags> { let encoding = self .compression_header .data_series_encoding_map() .cram_bit_flags_encoding(); decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| u8::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .map(Flags::from) } fn read_positional_data(&mut self, record: &mut Record) -> io::Result<usize> { record.reference_sequence_id = match self.reference_sequence_id { ReferenceSequenceId::Some(id) => Some(id), ReferenceSequenceId::None => None, ReferenceSequenceId::Many => self.read_reference_id()?, }; let read_length = self.read_read_length()?; record.read_length = read_length; record.alignment_start = self.read_alignment_start()?; record.read_group = self.read_read_group()?; Ok(read_length) } fn read_reference_id(&mut self) -> io::Result<Option<usize>> { use 
bam::record::reference_sequence_id::UNMAPPED; let encoding = self .compression_header .data_series_encoding_map() .reference_id_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::ReferenceId), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| match n { UNMAPPED => Ok(None), _ => usize::try_from(n) .map(Some) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)), }) } fn read_read_length(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .read_lengths_encoding(); decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_alignment_start(&mut self) -> io::Result<Option<Position>> { let ap_data_series_delta = self .compression_header .preservation_map() .ap_data_series_delta(); let encoding = self .compression_header .data_series_encoding_map() .in_seq_positions_encoding(); let alignment_start_or_delta = decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, )?; let alignment_start = if ap_data_series_delta { let prev_alignment_start = i32::try_from( self.prev_alignment_start .map(usize::from) .unwrap_or_default(), ) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?; prev_alignment_start + alignment_start_or_delta } else { alignment_start_or_delta }; usize::try_from(alignment_start) .map(Position::new) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)) } fn read_read_group(&mut self) -> io::Result<Option<usize>> { // § 10.2 "CRAM positional data" (2021-10-15): "-1 for no group". 
const MISSING: i32 = -1; let encoding = self .compression_header .data_series_encoding_map() .read_groups_encoding(); decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| match n { MISSING => Ok(None), _ => usize::try_from(n) .map(Some) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)), }) } fn read_read_names(&mut self, record: &mut Record) -> io::Result<()> { let preservation_map = self.compression_header.preservation_map(); // Missing read names are generated when resolving mates. if preservation_map.read_names_included() { record.read_name = self.read_read_name()?; } Ok(()) } fn read_read_name(&mut self) -> io::Result<Option<sam::record::ReadName>> { use sam::record::read_name::MISSING; let encoding = self .compression_header .data_series_encoding_map() .read_names_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::ReadNames), ) })?; let buf = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; match &buf[..] 
{ MISSING => Ok(None), _ => sam::record::ReadName::try_from(buf) .map(Some) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)), } } fn read_mate_data( &mut self, record: &mut Record, mut bam_flags: sam::record::Flags, flags: Flags, ) -> io::Result<()> { if flags.is_detached() { let next_mate_bit_flags = self.read_next_mate_bit_flags()?; record.next_mate_bit_flags = next_mate_bit_flags; if next_mate_bit_flags.is_on_negative_strand() { bam_flags |= sam::record::Flags::MATE_REVERSE_COMPLEMENTED; } if next_mate_bit_flags.is_unmapped() { bam_flags |= sam::record::Flags::MATE_UNMAPPED; } record.bam_bit_flags = bam_flags; let preservation_map = self.compression_header.preservation_map(); if !preservation_map.read_names_included() { record.read_name = self.read_read_name()?; } record.next_fragment_reference_sequence_id = self.read_next_fragment_reference_sequence_id()?; record.next_mate_alignment_start = self.read_next_mate_alignment_start()?; record.template_size = self.read_template_size()?; } else if flags.has_mate_downstream() { record.distance_to_next_fragment = self.read_distance_to_next_fragment().map(Some)?; } Ok(()) } fn read_next_mate_bit_flags(&mut self) -> io::Result<NextMateFlags> { let encoding = self .compression_header .data_series_encoding_map() .next_mate_bit_flags_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::NextMateBitFlags), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| u8::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .map(NextMateFlags::from) } fn read_next_fragment_reference_sequence_id(&mut self) -> io::Result<Option<usize>> { use bam::record::reference_sequence_id::UNMAPPED; let encoding = self .compression_header .data_series_encoding_map() .next_fragment_reference_sequence_id_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, 
ReadRecordError::MissingDataSeriesEncoding( DataSeries::NextFragmentReferenceSequenceId, ), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|id| match id { UNMAPPED => Ok(None), _ => usize::try_from(id) .map(Some) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)), }) } fn read_next_mate_alignment_start(&mut self) -> io::Result<Option<Position>> { let encoding = self .compression_header .data_series_encoding_map() .next_mate_alignment_start_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::NextMateAlignmentStart), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .map(Position::new) } fn read_template_size(&mut self) -> io::Result<i32> { self.compression_header .data_series_encoding_map() .template_size_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::TemplateSize), ) }) .and_then(|encoding| { decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) }) } fn read_distance_to_next_fragment(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .distance_to_next_fragment_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::DistanceToNextFragment), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_tag_data(&mut self) -> io::Result<sam::record::Data> { use bam::reader::record::data::field::get_value; use sam::record::data::Field; let tag_line = self.read_tag_line()?; let tag_keys = self .compression_header .preservation_map() 
.tag_ids_dictionary() .get(tag_line) .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "invalid tag line"))?; let tag_encoding_map = self.compression_header.tag_encoding_map(); let mut fields = Vec::with_capacity(tag_keys.len()); for key in tag_keys { let id = key.id(); let encoding = tag_encoding_map.get(&id).ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingTagEncoding(*key), ) })?; let data = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; let mut data_reader = &data[..]; let value = get_value(&mut data_reader, key.ty())?; let field = Field::new(key.tag(), value); fields.push(field); } sam::record::Data::try_from(fields) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)) } fn read_tag_line(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .tag_ids_encoding(); decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_mapped_read( &mut self, record: &mut Record, flags: Flags, read_length: usize, ) -> io::Result<()> { let feature_count = self.read_number_of_read_features()?; let mut prev_position = 0; for _ in 0..feature_count { let feature = self.read_feature(prev_position)?; prev_position = usize::from(feature.position()); record.add_feature(feature); } record.mapping_quality = self.read_mapping_quality()?; if flags.are_quality_scores_stored_as_array() { record.quality_scores.as_mut().reserve(read_length); for _ in 0..read_length { let score = self.read_quality_score()?; record.quality_scores.push(score); } } Ok(()) } fn read_number_of_read_features(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .number_of_read_features_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, 
ReadRecordError::MissingDataSeriesEncoding(DataSeries::NumberOfReadFeatures), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_feature(&mut self, prev_position: usize) -> io::Result<Feature> { use feature::Code; let code = self.read_feature_code()?; let delta = self.read_feature_position()?; let position = Position::try_from(prev_position + delta) .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?; match code { Code::Bases => { let bases = self.read_stretches_of_bases()?; Ok(Feature::Bases(position, bases)) } Code::Scores => { let quality_scores = self.read_stretches_of_quality_scores()?; Ok(Feature::Scores(position, quality_scores)) } Code::ReadBase => { let base = self.read_base()?; let quality_score = self.read_quality_score()?; Ok(Feature::ReadBase(position, base, quality_score)) } Code::Substitution => { let code = self.read_base_substitution_code()?; Ok(Feature::Substitution(position, code)) } Code::Insertion => { let bases = self.read_insertion()?; Ok(Feature::Insertion(position, bases)) } Code::Deletion => { let len = self.read_deletion_length()?; Ok(Feature::Deletion(position, len)) } Code::InsertBase => { let base = self.read_base()?; Ok(Feature::InsertBase(position, base)) } Code::QualityScore => { let score = self.read_quality_score()?; Ok(Feature::QualityScore(position, score)) } Code::ReferenceSkip => { let len = self.read_reference_skip_length()?; Ok(Feature::ReferenceSkip(position, len)) } Code::SoftClip => { let bases = self.read_soft_clip()?; Ok(Feature::SoftClip(position, bases)) } Code::Padding => { let len = self.read_padding()?; Ok(Feature::Padding(position, len)) } Code::HardClip => { let len = self.read_hard_clip()?; Ok(Feature::HardClip(position, len)) } } } fn read_feature_code(&mut self) -> io::Result<feature::Code> { let encoding = self .compression_header .data_series_encoding_map() 
.read_features_codes_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::ReadFeaturesCodes), ) })?; decode_byte( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|id| { feature::Code::try_from(id).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)) }) } fn read_feature_position(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .in_read_positions_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::InReadPositions), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_stretches_of_bases(&mut self) -> io::Result<Vec<Base>> { let encoding = self .compression_header .data_series_encoding_map() .stretches_of_bases_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::StretchesOfBases), ) })?; let raw_bases = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; raw_bases .into_iter() .map(|n| Base::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .collect() } fn read_stretches_of_quality_scores(&mut self) -> io::Result<Vec<Score>> { let encoding = self .compression_header .data_series_encoding_map() .stretches_of_quality_scores_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding( DataSeries::StretchesOfQualityScores, ), ) })?; let scores = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; scores .into_iter() .map(|n| Score::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .collect() } fn read_base(&mut self) -> 
io::Result<Base> { let encoding = self .compression_header .data_series_encoding_map() .bases_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::Bases), ) })?; decode_byte( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| Base::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_quality_score(&mut self) -> io::Result<Score> { let encoding = self .compression_header .data_series_encoding_map() .quality_scores_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::QualityScores), ) })?; decode_byte( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| Score::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_base_substitution_code(&mut self) -> io::Result<substitution::Value> { let encoding = self .compression_header .data_series_encoding_map() .base_substitution_codes_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::BaseSubstitutionCodes), ) })?; decode_byte( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .map(substitution::Value::Code) } fn read_insertion(&mut self) -> io::Result<Vec<Base>> { let encoding = self .compression_header .data_series_encoding_map() .insertion_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::Insertion), ) })?; let raw_bases = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; raw_bases .into_iter() .map(|n| Base::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .collect() } fn read_deletion_length(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() 
.deletion_lengths_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::DeletionLengths), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_reference_skip_length(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .reference_skip_length_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::ReferenceSkipLength), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_soft_clip(&mut self) -> io::Result<Vec<Base>> { let encoding = self .compression_header .data_series_encoding_map() .soft_clip_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::SoftClip), ) })?; let raw_bases = decode_byte_array( encoding, &mut self.core_data_reader, &mut self.external_data_readers, None, )?; raw_bases .into_iter() .map(|n| Base::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) .collect() } fn read_padding(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .padding_encoding() .ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidData, ReadRecordError::MissingDataSeriesEncoding(DataSeries::Padding), ) })?; decode_itf8( encoding, &mut self.core_data_reader, &mut self.external_data_readers, ) .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))) } fn read_hard_clip(&mut self) -> io::Result<usize> { let encoding = self .compression_header .data_series_encoding_map() .hard_clip_encoding() .ok_or_else(|| { io::Error::new( 
                io::ErrorKind::InvalidData,
                ReadRecordError::MissingDataSeriesEncoding(DataSeries::HardClip),
            )
        })?;

        decode_itf8(
            encoding,
            &mut self.core_data_reader,
            &mut self.external_data_readers,
        )
        .and_then(|n| usize::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)))
    }

    fn read_mapping_quality(&mut self) -> io::Result<Option<sam::record::MappingQuality>> {
        use sam::record::mapping_quality::MISSING;

        let encoding = self
            .compression_header
            .data_series_encoding_map()
            .mapping_qualities_encoding()
            .ok_or_else(|| {
                io::Error::new(
                    io::ErrorKind::InvalidData,
                    ReadRecordError::MissingDataSeriesEncoding(DataSeries::MappingQualities),
                )
            })?;

        let n = decode_itf8(
            encoding,
            &mut self.core_data_reader,
            &mut self.external_data_readers,
        )
        .and_then(|n| u8::try_from(n).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e)))?;

        match n {
            MISSING => Ok(None),
            _ => Ok(sam::record::MappingQuality::new(n)),
        }
    }

    fn read_unmapped_read(
        &mut self,
        record: &mut Record,
        flags: Flags,
        read_length: usize,
    ) -> io::Result<()> {
        record.bases.as_mut().reserve(read_length);

        for _ in 0..read_length {
            let base = self.read_base()?;
            record.bases.push(base);
        }

        if flags.are_quality_scores_stored_as_array() {
            record.quality_scores.as_mut().reserve(read_length);

            for _ in 0..read_length {
                let score = self.read_quality_score()?;
                record.quality_scores.push(score);
            }
        }

        Ok(())
    }
}

fn decode_byte<CDR, EDR>(
    encoding: &Encoding,
    core_data_reader: &mut BitReader<CDR>,
    external_data_readers: &mut ExternalDataReaders<EDR>,
) -> io::Result<u8>
where
    CDR: Buf,
    EDR: Buf,
{
    match encoding {
        Encoding::External(block_content_id) => {
            let src = external_data_readers
                .get_mut(block_content_id)
                .ok_or_else(|| {
                    io::Error::new(
                        io::ErrorKind::InvalidData,
                        ReadRecordError::MissingExternalBlock(*block_content_id),
                    )
                })?;

            if !src.has_remaining() {
                return Err(io::Error::from(io::ErrorKind::UnexpectedEof));
            }

            Ok(src.get_u8())
        }
        Encoding::Huffman(alphabet, bit_lens) => {
            if alphabet.len() == 1 {
                Ok(alphabet[0] as u8)
            } else {
                let decoder = CanonicalHuffmanDecoder::new(alphabet, bit_lens);
                decoder.decode(core_data_reader).map(|i| i as u8)
            }
        }
        _ => todo!("decode_byte: {:?}", encoding),
    }
}

fn decode_itf8<CDR, EDR>(
    encoding: &Encoding,
    core_data_reader: &mut BitReader<CDR>,
    external_data_readers: &mut ExternalDataReaders<EDR>,
) -> io::Result<i32>
where
    CDR: Buf,
    EDR: Buf,
{
    match encoding {
        Encoding::External(block_content_id) => {
            let src = external_data_readers
                .get_mut(block_content_id)
                .ok_or_else(|| {
                    io::Error::new(
                        io::ErrorKind::InvalidData,
                        ReadRecordError::MissingExternalBlock(*block_content_id),
                    )
                })?;

            get_itf8(src)
        }
        Encoding::Huffman(alphabet, bit_lens) => {
            if alphabet.len() == 1 {
                Ok(alphabet[0])
            } else {
                let decoder = CanonicalHuffmanDecoder::new(alphabet, bit_lens);
                decoder.decode(core_data_reader)
            }
        }
        Encoding::Beta(offset, len) => core_data_reader.read_u32(*len).map(|i| (i as i32 - offset)),
        _ => todo!("decode_itf8: {:?}", encoding),
    }
}

fn decode_byte_array<CDR, EDR>(
    encoding: &Encoding,
    core_data_reader: &mut BitReader<CDR>,
    external_data_readers: &mut ExternalDataReaders<EDR>,
    buf: Option<Vec<u8>>,
) -> io::Result<Vec<u8>>
where
    CDR: Buf,
    EDR: Buf,
{
    match encoding {
        Encoding::External(block_content_id) => {
            let src = external_data_readers
                .get_mut(block_content_id)
                .ok_or_else(|| {
                    io::Error::new(
                        io::ErrorKind::InvalidData,
                        ReadRecordError::MissingExternalBlock(*block_content_id),
                    )
                })?;

            let mut buf = buf.unwrap();

            if src.remaining() < buf.len() {
                return Err(io::Error::from(io::ErrorKind::UnexpectedEof));
            }

            src.copy_to_slice(&mut buf);

            Ok(buf)
        }
        Encoding::ByteArrayLen(len_encoding, value_encoding) => {
            let len = decode_itf8(len_encoding, core_data_reader, external_data_readers)?;

            let buf = vec![0; len as usize];
            let value = decode_byte_array(
                value_encoding,
                core_data_reader,
                external_data_readers,
                Some(buf),
            )?;

            Ok(value)
        }
        Encoding::ByteArrayStop(stop_byte, block_content_id) => {
            let src = external_data_readers
                .get_mut(block_content_id)
                .ok_or_else(|| {
                    io::Error::new(
                        io::ErrorKind::InvalidData,
                        ReadRecordError::MissingExternalBlock(*block_content_id),
                    )
                })?;

            let len = match src.chunk().iter().position(|&b| b == *stop_byte) {
                Some(i) => i,
                None => {
                    return Err(io::Error::new(
                        io::ErrorKind::InvalidData,
                        "missing byte array stop byte",
                    ))
                }
            };

            let mut buf = vec![0; len];
            src.copy_to_slice(&mut buf);

            // Discard the stop byte.
            src.advance(1);

            Ok(buf)
        }
        _ => todo!("decode_byte_array: {:?}", encoding),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_decode_byte() -> io::Result<()> {
        fn t(encoding: &Encoding, expected: u8) -> io::Result<()> {
            let core_data = [0b10000000];
            let mut core_data_reader = BitReader::new(&core_data[..]);

            let external_data = [0x0d];
            let mut external_data_readers = ExternalDataReaders::new();
            external_data_readers.insert(1, &external_data[..]);

            let actual = decode_byte(encoding, &mut core_data_reader, &mut external_data_readers)?;
            assert_eq!(expected, actual);

            Ok(())
        }

        t(&Encoding::External(1), 0x0d)?;
        t(&Encoding::Huffman(vec![0x4e], vec![0]), 0x4e)?;

        Ok(())
    }

    #[test]
    fn test_decode_itf8() -> io::Result<()> {
        fn t(encoding: &Encoding, expected: i32) -> io::Result<()> {
            let core_data = [0b10000000];
            let mut core_data_reader = BitReader::new(&core_data[..]);

            let external_data = [0x0d];
            let mut external_data_readers = ExternalDataReaders::new();
            external_data_readers.insert(1, &external_data[..]);

            let actual = decode_itf8(encoding, &mut core_data_reader, &mut external_data_readers)?;
            assert_eq!(expected, actual);

            Ok(())
        }

        t(&Encoding::External(1), 13)?;
        t(&Encoding::Huffman(vec![0x4e], vec![0]), 0x4e)?;
        t(&Encoding::Beta(1, 3), 3)?;

        Ok(())
    }

    #[test]
    fn test_decode_byte_array() -> io::Result<()> {
        fn t(external_data: &[u8], encoding: &Encoding, expected: &[u8]) -> io::Result<()> {
            let core_data = [];
            let mut core_data_reader = BitReader::new(&core_data[..]);

            let mut external_data_readers = ExternalDataReaders::new();
            external_data_readers.insert(1, external_data);

            let actual = decode_byte_array(
                encoding,
                &mut core_data_reader,
                &mut external_data_readers,
                None,
            )?;

            assert_eq!(expected, actual);

            Ok(())
        }

        let len_encoding = Encoding::External(1);
        let value_encoding = Encoding::External(1);
        t(
            &[0x04, 0x6e, 0x64, 0x6c, 0x73],
            &Encoding::ByteArrayLen(Box::new(len_encoding), Box::new(value_encoding)),
            b"ndls",
        )?;

        t(
            &[0x6e, 0x64, 0x6c, 0x73, 0x00],
            &Encoding::ByteArrayStop(0x00, 1),
            b"ndls",
        )?;

        assert!(matches!(
            t(&[0x6e, 0x64, 0x6c, 0x73], &Encoding::ByteArrayStop(0x00, 1), b""),
            Err(e) if e.kind() == io::ErrorKind::InvalidData
        ));

        Ok(())
    }
}
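The `Encoding::External` arm of `decode_itf8` above defers to a `get_itf8` helper defined elsewhere in the crate. As a reference for the wire format it parses, here is a hypothetical Python sketch of CRAM's ITF-8 variable-length integer decoding (illustrative only, not the crate's implementation): the number of leading 1-bits in the first byte gives the number of trailing bytes, and the five-byte form uses only the low 4 bits of its final byte.

```python
def get_itf8(data, pos=0):
    """Decode one ITF-8 value; returns (value, new_pos)."""
    b0 = data[pos]
    if b0 < 0x80:  # 0xxxxxxx: 1 byte, 7 bits
        return b0, pos + 1
    if b0 < 0xC0:  # 10xxxxxx: 2 bytes, 14 bits
        return ((b0 & 0x3F) << 8) | data[pos + 1], pos + 2
    if b0 < 0xE0:  # 110xxxxx: 3 bytes, 21 bits
        return ((b0 & 0x1F) << 16) | (data[pos + 1] << 8) | data[pos + 2], pos + 3
    if b0 < 0xF0:  # 1110xxxx: 4 bytes, 28 bits
        value = (
            ((b0 & 0x0F) << 24)
            | (data[pos + 1] << 16)
            | (data[pos + 2] << 8)
            | data[pos + 3]
        )
        return value, pos + 4
    # 1111xxxx: 5 bytes, 32 bits; only the low 4 bits of the last byte count.
    value = (
        ((b0 & 0x0F) << 28)
        | (data[pos + 1] << 20)
        | (data[pos + 2] << 12)
        | (data[pos + 3] << 4)
        | (data[pos + 4] & 0x0F)
    )
    if value >= 1 << 31:  # reinterpret as a signed 32-bit integer
        value -= 1 << 32
    return value, pos + 5
```

This agrees with the `test_decode_itf8` case above, where the single external byte `0x0d` decodes to 13.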
Last week, writer Cheryl Strayed stopped by Los Angeles’ Zocalo Public Square to talk about her book, Wild: From Lost to Found on the Pacific Crest Trail. The tale she tells is unusual in a couple of ways: it’s about an experience she had nearly two decades ago – a far cry from the live-blogging world of today’s adventurous exploits – and she was quite possibly the least prepared person imaginable to hike 1,100 miles through some of North America’s most diverse and torturous terrain. Strayed was 26 years old, her personal life in a dark downward spiral; one day, she needed a shovel, and decided to stop by REI, a slightly unusual but ultimately fateful shopping choice. In the checkout line, she saw a map of the Pacific Crest Trail – a path that stretches from the Mojave Desert to northern Washington – “and something called to me…a few months later I was on the Pacific Crest Trail.” Never mind that she had never been backpacking before. “This book is certainly not a guidebook,” Strayed noted wryly. The story has all of the hallmarks of a well-worn genre of wilderness memoir – broken, confused protagonist seeks grand revelations through a challenging physical experience. In her own words, Strayed dropped everything and bought a one-way ticket to Southern California “in order to save myself.” The premise is an echo of *Walden* or *Into the Wild* (though, spoiler alert: she survives), a projection of personal struggles onto the grand American wilderness. And it raises the question: why? Why do we go into the wilderness for therapy? The wilderness is the most popular backdrop for this type of soul-searching transformation, but I would argue that such journeys are not wilderness-specific. Physical and emotional challenges seem conflated in the “into the wild” model. In fact, for regular climbers, hikers, and backpackers, perhaps the real way to see what you’re made of is to trade the carabiners for cufflinks and work a 9 to 5 job.
In energetic terms, it’s the displacement from our personal equilibrium that generates the most potential energy for transformation, and there’s no reason this must follow a nature-based template. If you’re looking to heal rather than merely probe the outer limits of your disposition, time in the wilderness offers some advantages. The freeing aspect of asceticism forces you to jettison the trappings of the comfortable life. You return to something more essential, and the absence of distractions creates space for reflection, for wandering thoughts and indirect conclusions about what really matters in your life. Of course, spending time in nature doesn’t need to be the ultimate Rorschach test, and Strayed was conscious of not over-selling the life-changing aspect of her experience. “I did feel transformed by the journey, but the transformation was discreet,” said Strayed, “and I still am the same person.” This perspective is a refreshing departure from the wilderness genre; the profound-ization of nature-based experiences is an easy and tempting trap to fall into. Almost every climbing, surfing, and skiing video makes these activities seem like the most profound thing since the Big Bang, a privileged peek into a new corner of the universe. It’s OK if it’s merely fun. Whether it’s a cross-country backpacking trip, a new relationship, or a difficult job, growth is really about challenging ourselves in new ways. Strayed’s transformation was particularly literal, but it doesn’t have to be that way. The wilderness is an appealing, effective avenue for personal growth, but it’s not the only one.
Cindy Lee Garcia, one of the actresses in "Innocence of Muslims," holds a news conference before a hearing at Los Angeles Superior Court in Los Angeles, Thursday, Sept. 20, 2012. (Photo: File photo by Jason Redmond, AP) SAN FRANCISCO — An appeals court has overturned a controversial ruling that required YouTube to take down a video that disparaged Muslims. One of the actresses in the film sued to take it down and won, but an appeals court ruled Monday she didn't have the right to control the film's distribution. When it was released in 2012, the short film titled Innocence of Muslims sparked violence in the Middle East and death threats to the actors. "The appeal teaches a simple lesson — a weak copyright claim cannot justify censorship in the guise of authorship," the court wrote in its ruling. Ninth Circuit chief judge Alex Kozinski had ruled in February that Cindy Lee Garcia, who appeared in the movie, could ask for an injunction against the movie because she said she and the other actors in the movie were duped and that anti-Muslim dialogue was dubbed in over their lines without their knowledge. The actors said they were hired to appear in a movie called Desert Warrior and that the film and script they worked on did not include references to Mohammed or Islam. Google, which owns YouTube, said Garcia had no copyright claim to the film. It also argued that allowing someone with a bit part in a movie to suppress the final product could set a dangerous precedent that could give anyone involved in a production the right to stop its release. A federal appeals court agreed, ruling Monday that YouTube should not have been forced to take the movie down from its site, even though Garcia "was bamboozled when a movie producer transformed her five-second acting performance into part of a blasphemous video proclamation against the prophet Mohammed," the ruling said.
"This is not a blasphemy case, this is not a fraud case, this is a copyright case — an extremely unusual copyright case," said Eugene Volokh, a law professor at UCLA who specializes in intellectual property issues. In a typical movie, the filmmaker has an explicit or implicit agreement with the actors to use their work. In the film in question, Garcia claims that there is no contract because the filmmaker lied to her about the work in which she was performing, said Volokh. The original opinion was a preliminary injunction that said Garcia owned the copyright to her work and could ask for the movie to be taken down from YouTube. Monday's 9th U.S. Circuit Court of Appeals ruling overturns that, saying the order to take the movie down was "unwarranted and incorrect." The 14-minute film was first uploaded to YouTube in 2012. It has also been titled The Real Life of Muhammad and Muhammad Movie Trailer. The movie contains scenes that depict the prophet Mohammed as a womanizer, homosexual, child molester and thug. While not the focus of the case, the court also said that the original ruling "gave short shrift to the First Amendment values at stake." The judges said the injunction "censored and suppressed a politically significant film — based upon a dubious and unprecedented theory of copyright. In so doing, the panel deprived the public of the ability to view firsthand, and judge for themselves, a film at the center of an international uproar." "Although Ms. Garcia has legitimate concerns and grievances, copyright law is not the appropriate remedy for them," said Raza Panjwani, policy counsel at Public Knowledge, a Washington, D.C., public interest group. As of Monday it did not appear that the video had been re-uploaded to YouTube.
The relationship between President Donald Trump and Sen. Mitch McConnell, the majority leader, has disintegrated to the point that they have not spoken to each other in weeks, and McConnell has privately expressed uncertainty that Trump will be able to salvage his administration after a series of summer crises. What was once an uneasy governing alliance has curdled into a feud of mutual resentment and sometimes outright hostility, complicated by the position of McConnell’s wife, Elaine Chao, in Trump’s Cabinet, according to more than a dozen people briefed on their imperiled partnership. Angry phone calls and private badmouthing have devolved into open conflict, with the president threatening to oppose Republican senators who cross him, and McConnell mobilizing to their defense. The rupture between Trump and McConnell comes at a highly perilous moment for Republicans, who face a number of urgent deadlines when they return to Washington next month. Congress must approve new spending measures and raise the statutory limit on government borrowing within weeks of reconvening, and Republicans are hoping to push through an elaborate rewrite of the federal tax code. Yet Trump and McConnell are locked in a political cold war. Neither man would comment for this story. McConnell’s allies warn that the president should be wary of doing anything that could jeopardize the Senate Republican majority.
package com.skytala.eCommerce.domain.product.relations.quantityBreak.command.type;

import org.apache.ofbiz.entity.Delegator;
import org.apache.ofbiz.entity.DelegatorFactory;
import org.apache.ofbiz.entity.GenericEntityException;
import org.apache.ofbiz.entity.GenericValue;

import com.skytala.eCommerce.domain.product.relations.quantityBreak.event.type.QuantityBreakTypeAdded;
import com.skytala.eCommerce.domain.product.relations.quantityBreak.mapper.type.QuantityBreakTypeMapper;
import com.skytala.eCommerce.domain.product.relations.quantityBreak.model.type.QuantityBreakType;
import com.skytala.eCommerce.framework.pubsub.Broker;
import com.skytala.eCommerce.framework.pubsub.Command;
import com.skytala.eCommerce.framework.pubsub.Event;

public class AddQuantityBreakType extends Command {

    private QuantityBreakType elementToBeAdded;

    public AddQuantityBreakType(QuantityBreakType elementToBeAdded) {
        this.elementToBeAdded = elementToBeAdded;
    }

    @Override
    public Event execute() {
        Delegator delegator = DelegatorFactory.getDelegator("default");
        QuantityBreakType addedElement = null;
        boolean success = false;

        try {
            elementToBeAdded.setQuantityBreakTypeId(delegator.getNextSeqId("QuantityBreakType"));
            GenericValue newValue = delegator.makeValue("QuantityBreakType", elementToBeAdded.mapAttributeField());
            addedElement = QuantityBreakTypeMapper.map(delegator.create(newValue));
            success = true;
        } catch (GenericEntityException e) {
            e.printStackTrace();
            addedElement = null;
        }

        Event resultingEvent = new QuantityBreakTypeAdded(addedElement, success);
        Broker.instance().publish(resultingEvent);
        return resultingEvent;
    }
}
def parse_parameters(parameters):
    options = []

    if 'cores' in parameters and 'penv' in parameters:
        options.append('-pe {} {}'.format(parameters['penv'], parameters['cores']))
    elif 'cores' in parameters:
        options.append('-l slots={}'.format(parameters['cores']))

    if 'memory' in parameters:
        options.append('-l h_vmem={}'.format(parameters['memory']))

    if 'queue' in parameters:
        options.append('-q {}'.format(parameters['queue']))

    if 'time' in parameters:
        options.append('-l h_rt={}'.format(convert_time(parameters['time'])))

    if 'working_directory' in parameters:
        options.append('-wd {}'.format(parameters['working_directory']))

    if 'name' in parameters:
        options.append('-N {}'.format(parameters['name']))

    if 'extra' in parameters:
        options.append('{}'.format(parameters['extra']))

    return options
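The chain of if-statements above maps each parameter key to a Grid Engine qsub flag. The same mapping can be sketched as a table-driven reformulation (hypothetical; the `time` branch is omitted here because `convert_time` is defined elsewhere in the module):

```python
def parse_parameters(parameters):
    # Flag templates for the simple one-value options, in the same order
    # as the original function; 'cores' needs special-case handling.
    simple = {
        'memory': '-l h_vmem={}',
        'queue': '-q {}',
        'working_directory': '-wd {}',
        'name': '-N {}',
        'extra': '{}',
    }
    options = []
    if 'cores' in parameters and 'penv' in parameters:
        options.append('-pe {} {}'.format(parameters['penv'], parameters['cores']))
    elif 'cores' in parameters:
        options.append('-l slots={}'.format(parameters['cores']))
    for key, template in simple.items():
        if key in parameters:
            options.append(template.format(parameters[key]))
    return options

parse_parameters({'cores': 4, 'penv': 'smp', 'memory': '8G', 'name': 'align'})
# → ['-pe smp 4', '-l h_vmem=8G', '-N align']
```

The table keeps the option order stable (dicts preserve insertion order in Python 3.7+) while making it easy to add new one-value flags.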
package storagekit

import (
	"context"
	"io"
)

type PutObjectOptions struct {
	ContentType string
}

// Storage provides a simplified interface for uploading files to object storage.
type Storage interface {
	// Endpoint returns the endpoint of the object storage.
	Endpoint() string

	// Bucket returns the bucket name in the object storage.
	Bucket() string

	// PutObject adds an object to the storage bucket.
	PutObject(ctx context.Context, objectName string, reader io.Reader, objectSize int64, opts PutObjectOptions) error
}
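An interface this small is easy to fake for tests. A minimal in-memory stand-in, sketched here in Python for brevity (hypothetical; method names mirror the Go interface above, not any real implementation):

```python
import io


class MemoryStorage:
    """Hypothetical in-memory stand-in mirroring the Storage interface."""

    def __init__(self, bucket):
        self._bucket = bucket
        self._objects = {}

    def endpoint(self):
        return "memory://local"

    def bucket(self):
        return self._bucket

    def put_object(self, object_name, reader, object_size, content_type=None):
        # Read exactly object_size bytes from the reader, as PutObject would.
        self._objects[object_name] = reader.read(object_size)


storage = MemoryStorage("uploads")
storage.put_object("hello.txt", io.BytesIO(b"hello"), 5, content_type="text/plain")
```

Passing `objectSize` explicitly (rather than reading to EOF) matches object-store clients that need the size up front to choose between single and multipart uploads.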
Gendered Intrahousehold Bargaining Power is Associated with Child Nutritional Status in Nepal. BACKGROUND Women's intrahousehold bargaining power is an important determinant of child nutrition in Nepal, but a better understanding is needed on how men's bargaining power is related to child nutrition. OBJECTIVES We examined the relation of women's and men's household bargaining power with child height-for-age z score (HAZ). METHODS We analyzed cross-sectional data from 2012, collected as an impact evaluation baseline of the Suaahara 1 program. A subsample of households with data on women's and men's intrahousehold bargaining power (n = 2170) with children aged 0-59 mo across Nepal was considered for this analysis. Intrahousehold bargaining power consisted of 4 domains: 1) ownership and control of assets, 2) social participation, 3) time allocation to work activities (workload), and 4) household decision-making control. Using multilevel methods, we analyzed associations between HAZ and 1) women's bargaining power, 2) men's bargaining power, and 3) women's and men's bargaining power, adjusted for individual- and household-level confounding factors and clustering. RESULTS Women's ownership and control of assets was positively associated with HAZ when women's and men's domains were modeled together (β: 0.0597, P = 0.026). Men's social participation was positively associated with HAZ in the men's model (β: 0.233, P < 0.001) and the model with women's and men's domains (β: 0.188, P = 0.001). Women's workload was negatively associated with HAZ in the women's model (β: -0.0503, P = 0.014) and in the model with women's and men's domains (β: -0.056, P = 0.008). Household decision making for women (β: -0.0631, P = 0.007) and for men (β: -0.0546, P = 0.017) were negatively associated with HAZ in the gender-specific models. Women's social participation, men's ownership and control of assets, and men's workload were not associated with HAZ.
CONCLUSIONS Women's workload and ownership and control of assets and men's social participation may be important in improving child HAZ in Nepal. Nutrition interventions should address women's intrahousehold bargaining power and promote men's social engagement.
New age constraints for the limit of the British–Irish Ice Sheet on the Isles of Scilly: The southernmost terrestrial extent of the Irish Sea Ice Stream (ISIS), which drained a large proportion of the last British–Irish Ice Sheet, impinged on to the Isles of Scilly during Marine Isotope Stage 2. However, the age of this ice limit has been contested and the interpretation that this occurred during the Last Glacial Maximum (LGM) remains controversial. This study reports new ages using optically stimulated luminescence (OSL) dating of outwash sediments at Battery, Tresco (25.5 ± 1.5 ka), and terrestrial cosmogenic nuclide exposure dating of boulders overlying till on Scilly Rock (25.9 ± 1.6 ka), which confirm that the ISIS reached the Isles of Scilly during the LGM. The ages demonstrate this ice advance on to the northern Isles of Scilly occurred at ~26 ka, around the time of increased ice-rafted debris in the adjacent marine record from the continental margin, which coincided with Heinrich Event 2 at ~24 ka. OSL dating (19.6 ± 1.5 ka) of the post-glacial Hell Bay Gravel at Battery suggests there was then an ~5-ka delay between primary deposition and aeolian reworking of the glacigenic sediment, during a time when the ISIS ice front was oscillating on and around the Llŷn Peninsula, ~390 km to the north. Introduction Providing accurate age constraints for the behaviour of palaeo-ice sheets is important for testing ice-sheet models. BRITICE-CHRONO is a large consortium project that is generating an extensive dataset constraining the retreat of the last British–Irish Ice Sheet (BIIS) from its maximum extent during Marine Isotope Stage (MIS) 2.
Determining accurate ages for the proposed glacial limit on the Isles of Scilly is important for reconstructing ice retreat as this is the southernmost terrestrial record of a possible short-lived advance of the Irish Sea Ice Stream (ISIS) into the central Celtic Sea, previously described as a surge (Scourse and Furze, 2001). Despite some views to the contrary (Veyret, 1984, 1989), there has been a consensus over the position of an ice limit (Fig. 1) on the Isles of Scilly (Barrow, 1906; Mitchell and Orme, 1967; Scourse, 1991) following the first identification of erratic material in the 19th century. The age of this ice limit has, however, been contested and the interpretation that this occurred during MIS 2 remains controversial (cf. McCabe, 2008). Mitchell and Orme and Bowen argued that the glacial deposits were of Wolstonian age (MIS 6) on the basis of lithostratigraphic correlation with coastal sequences elsewhere around the Irish Sea Basin; this was later revised by Bowen to MIS 16. However, Scourse has interpreted the glacigenic sediment-landform suite to be of Late Devensian (MIS 2) age on the basis of geochronological data. Published radiocarbon, thermoluminescence (TL), optically stimulated luminescence (OSL) and terrestrial cosmogenic nuclide (TCN) ages for the most recent advance of ice on to the Isles of Scilly indicate an age within the Last Glacial Maximum (LGM). However, the spread in these ages is large and it is difficult to determine whether any readvance may have been involved. Bayesian analysis of the existing geochronological (¹⁴C, OSL, TCN) data combined with the prior knowledge of ice retreat for the entire ISIS has been used to reduce the uncertainties inherent to the individual techniques, and model ice impingement on to the Isles of Scilly at 23.3–24.0 ka.
The timing of this advance was accompanied by an associated increase in ISIS-sourced ice-rafted detritus (IRD) flux to the adjacent deep ocean and agrees with recent ¹⁴C dating of an arctic bivalve recovered from glacigenic sediments cored close to the shelf break in the Irish sector of the Celtic Sea. The age and extent of the glacial evidence on Scilly have implications for the glaciation of the wider Celtic Sea and for the wider dynamics of the BIIS. While the consensus of an ice limit on Scilly might imply that this marks the maximum extent of glaciation, Scourse et al. documented evidence for grounded ice and, to the south-west, glacimarine conditions (Melville Till and Melville Laminated Clay) into the central Celtic Sea. This has been considered the maximum extent of ice advance across the Celtic shelf, but the recent evidence reported by Praeg et al. places the Scilly ice limit in a new context (Fig. 1); does it represent a lateral ice limit to the ISIS, flowing southwards from the north to the west, or does it represent a terminal limit to a larger ice body on the Irish shelf to the west with driving stresses from the north-west? Certainly, the ice flow data and the distribution of glacigenic sediment units on Scilly are consistent with ice flow from the north-west rather than from the north. Although previous studies have provided ¹⁴C, TL, OSL and TCN ages for the presence of ice on the Isles of Scilly, the reliability of some of the ages is questioned (see Supporting Information, Table S1 for details).
However, the accuracy of these ages was potentially compromised by contamination with younger carbon derived from burrowing bees, rootlet and groundwater sources because the sampled organic sequences were in open coastal sections. The reliability of the existing TL and OSL data from the Isles of Scilly is also questioned as either the data were determined using experimental methods (Wintle, 1981) or the publications lacked sufficient detail about the analyses, such as dose-recovery experiments (Scourse and Rhodes, 2006). Finally, an assessment of radionuclide inheritance cannot be made for the TCN age from Shipman Head as it was determined from a single boulder (see section 'Shipman Head, Bryher'). As a consequence, a recent review of legacy geochronological data relating to the reconstruction of the BIIS by the BRITICE-CHRONO Consortium Project (Small et al., in press) considered the existing chronology for the Isles of Scilly determined from ¹⁴C, TL, OSL and TCN dating not to be reliable for constraining ice retreat. Therefore, the aim of this paper is to report new geochronological (OSL, TCN) data for the ice advance to the Isles of Scilly. Improving the age constraints on ice advance to the Isles of Scilly is important as it will develop our understanding of changes in ice sheet dynamics relative to the IRD flux in the adjacent marine record. Study sites and sample descriptions A significant ice limit across the northern Isles of Scilly is delineated by boulders, glacigenic sedimentary units and associated landform elements (Fig. 2). The lithostratigraphical relationship between the sedimentary units (Fig. 2) has previously been established by Scourse.
At this site, the ISIS advanced over pre-existing marine and contemporaneous proglacial lacustrine sediments in a similar fashion to that proposed by Ó Cofaigh and Evans (2001a,b) and Evans and Ó Cofaigh for Irish Sea Till deposition in south-east Ireland. The Scilly Till also forms the core of a series of major inter-tidal bars in the northern Isles of Scilly (White Island, Bar, Pernagie Bar and possibly also Golden Ball Brow; Fig. 2) interpreted as latero-frontal moraine loops demarcating the onshore flow of lobate ice sheet margins (Scourse, 1991; Scourse and Furze, 2001). Battery and Gunhill, Tresco The coastal section at Battery, Tresco (49°58′N, 6°20′W) (Fig. 2) is described in Scourse. Ice-proximal outwash sands and gravels (Tregarthen Gravel) at this site are similar to outwash deposits found at Bread and Cheese Cove, St Martin's, in association with the stratotype of the Scilly Till. OSL ages have previously been determined for sedimentary units at both sites (Scourse and Rhodes, 2006), but are regarded as preliminary because full details of the samples and techniques are not provided (Table S1). Results from Bread and Cheese Cove suggest that the Scilly Till developed glacitectonic structures after 49 ± 3 ka, producing a till largely derived from sediments deposited during MIS 5. Four deglacial, ice-proximal lenses consisting of well-sorted sands containing rounded to sub-rounded erratic clasts represent the Tregarthen Gravel at Battery, and are interbedded with gelifluctates correlated with the Bread and Cheese Breccia (Fig. 3); this sequence is described and interpreted in Scourse. The gelifluctates sampled for OSL dating represent sediment flows down the slopes of the valley transverse to the palaeocurrent direction. The erratic clasts within the sand lenses form gravel lags at the base, with the palaeocurrent direction inferred to have been from west to east.
Four sedimentary samples (T4BATT01, T4BATT03, T4BATT04 and T4BATT05) were taken from the Tregarthen Gravel at Battery for OSL dating (Fig. 3). Sample T4BATT01 was taken from horizontally laminated medium-to-coarse sand at a depth of 3 m. Sample T4BATT03 was taken at a depth of 1.8 m, from a 0.4-m-thick channel-fill unit composed of planar cross-set, fine-to-medium sand and granular gravel. Sample T4BATT04 was taken from the section at a depth of 2.8 m and is composed of horizontally stratified, fine-to-medium sand. Finally, sample T4BATT05 was the lowest sample taken, at a depth of 2.5 m, from horizontally stratified medium sand and some granular gravel. A fifth OSL sample (T4BATT06) was taken at Battery from a depth of 1 m within a unit of horizontally stratified silt-to-medium sand (Fig. 3) comprising the post-glacial Hell Bay Gravel that caps the Tregarthen Gravel (Fig. 2). The Hell Bay Gravel is interpreted as gelifluctates derived from the glacigenic units (Scilly Till, Tregarthen Gravel) deposited penecontemporaneously with widespread sandloess (Old Man Sandloess) associated with the Scilly Till.
(Figure 1 caption, partial: the dashed line is an inferred limit based on the interpretation that the ISIS reached the shelf edge; the core site (VC-64) on which this interpretation is based is also shown; flow lines of the ISIS adapted from Chiverrell et al.; black crosses mark dated sites in southern Ireland (Ó Cofaigh and Evans, 2007); place names mentioned in the text are shown.)
(Figure 2 caption: Location of the Isles of Scilly in south-west Britain (a). The northern Isles of Scilly with the sites and associated dates discussed in the text, where the inferred maximum ice limit is also shown (b). The lithostratigraphic models for the southern and northern Isles of Scilly (c; Scourse, 1991; Scourse and Furze, 2001). The Scilly Till and Tregarthen Gravel represent primary in situ glacigenic units.)
Scourse has interpreted the Old Man Sandloess as genetically associated with the glacial event. Two TL ages, both of 18.6 ± 3.7 ka, and an OSL age of 20 ± 7 ka have previously been published for the deposition of the Old Man Sandloess. On Gunhill, and northern Tresco in general, there are a number of isolated boulders, and some erratic clasts at the surface (Fig. 3a); two boulders and one erratic clast were sampled for TCN dating. Sample T4TRE01 was taken from a granite boulder proximal to a tor displaying some signs of glacial modification. Sample T4TRE02 was collected from a large tabular granite boulder that has been moved ca. 50 m from its parent tor. The upper surface of this boulder displays no weathering pits or runnels, suggesting that it has either been overturned or had overlying material removed. Given the flat nature of the top surface, it is likely that separation occurred along a pre-existing joint within the granite bedrock. The sampled boulder occurs at the same altitude as the parent tor; thus, it is unlikely that periglacial processes, such as boulder ploughing, could have separated the boulder from its parent tor, overturned it and moved it to its present position. This boulder occurs at an elevation of ~32 m OD, which is within the limit of storm waves on Scilly (see discussion of Scilly Rock below) but is shielded from wave activity by the higher ridge forming the northern coast of Tresco. The site is now fully vegetated, whereas storm-influenced contexts are devoid of soil and vegetation. Given this geomorphological context, it is likely that the only mechanism that could have been responsible for boulder mobilization is ice. Sample T4TRE03 was obtained from a cobble clast sampled from the surface and within the maximum extent of glacial deposits. The cobble was a grey, coarse-grained, non-foliated rock composed of >90% quartz. No sedimentary structures were visible and during crushing the rock fractured through the quartz crystals.
It is thus identified as a quartzite and bears affinity to the Holyhead quartzite and bedrock found in eastern Ireland (Brück and Reeves, 1976). The Isles of Scilly are composed entirely of Variscan granite of the Cornubian batholith; therefore, the clast is interpreted as an erratic, probably from exposures on the east coast of Ireland and on Anglesey in Wales within the trunk of the ISIS. A full description of the lithologies and likely bedrock sources of erratics from the Isles of Scilly is included as Appendix 2 in Scourse. Shipman Head, Bryher A linear collection of free-standing boulders positioned just inside the ice limit on Shipman Head (49°57′N, 6°21′W), Bryher, has been interpreted as a 'boulder moraine'. The limit is further marked by a change in the character of granite tors from heavily eroded forms north of the ice limit to highly ornate castellated or mammilated forms with abundant free-standing core-stones south of the limit (Scourse, 1991). TCN dating has previously generated a ¹⁰Be age of 20.9 ± 2.2 ka, recalculated to 22.1 ± 1.3 ka using a production rate derived at Loch Lomond, for the upper surface of an ~27-m³ granite boulder that is 3 m high, measured from the highly weathered underside to the sampled top surface. This boulder was interpreted to have been mobilized by ice from a nearby tor and subsequently inverted as the underside contains a network of weathering pits and drainage channels. However, these features also imply significant exposure before overturning. Such an extended period is also suggested by exposure ages of 95.3 ± 5.2 and 143.9 ± 8.7 ka from granite tors outside the maximum extent of glaciation. This raises the possibility that the apparent exposure age may be overestimated as it contains a significant contribution of ¹⁰Be from deeply penetrating muons.
To test this hypothesis, a sample was collected from the top (T4SHI01) and bottom (T4SHI02) of the boulder. Scilly Rock Scilly Rock (49°57′N, 6°22′W) is a small granite island (0.25 km by 0.12 km) on the north-west extremity of the Scilly archipelago, situated 1.50 km west of Bryher and 0.75 km north-west of the island of Gweal (Fig. 2). Scilly Rock was not surveyed by Scourse nor, to our knowledge, visited by Mitchell and Orme during their work in the 1960s. However, the latter authors comment that 'The rocks to the west of Bryher and Gweal have been largely washed clean of superficial material. Scilly Rock, to the north-west, has traces of superficial material (head?) in some fissures, and the profile of the Rock suggests that it has been smoothed by ice' (Mitchell and Orme, 1967, p. 78). Following observation from a small boat of a prominent linear ridge of loose boulders, fieldwork was undertaken by the authors during the summer of 2013; this is the first report of the Pleistocene sequence and landforms of the island. Scilly Rock is heavily fissured and rugged, reaching a maximum elevation of ~22 m along the central spine of the island, which trends NE–SW, but the fissures that follow a structural lineation in the granite trend NW–SE and create a series of deep gullies, some of which extend to below sea level (Fig. 4a). One particularly prominent gully divides Scilly Rock almost in two, a deep fissure separating the south-west part of the island from the rest (Fig. 4a). It was not possible to land on this south-western part of the island, so all the observations below relate to the larger north-eastern portion. The linear boulder feature is located along the central spine of the island at its highest elevation. The boulders forming this feature are all granite and vary in size from cobbles to >10 m³.
Many of the boulders rest on the solid granite without any matrix present, but some rest on a diamicton matrix consisting of very poorly sorted silty sand with some clay containing abundant erratic clasts. This diamicton was sampled for micromorphological analyses to determine its depositional context. Although no large erratics were observed, the lithic assemblage of smaller clasts was identical to that found in the Hell Bay Gravel, Scilly Till and Tregarthen Gravel, which are notably rich in Cretaceous flint and in red and grey sandstones. In addition, some of the clasts are clearly faceted and striated. This material is currently being actively eroded by storm waves, which have been observed to break over the summit of the island (Fig. 4b), so only fragments remain in the more protected situations under large boulders. The preliminary field interpretation of the Scilly Rock boulder accumulation was that it was glacial in origin, so a number of the boulders deposited on the potential glacial material (e.g. Fig. 4c) were sampled for TCN analysis (T4SCI01, T4SCI02 and T4SCI03).

OSL dating

All five samples for OSL dating were collected in opaque tubes that were hammered into the sedimentary section to prevent exposure to sunlight during sample collection. External beta dose-rates were determined for OSL dating using inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma atomic emission spectroscopy (ICP-AES), while the external gamma dose-rates were determined using in situ gamma spectrometry (Table 1). The external beta dose-rates were also determined using a Risø GM-25-5 beta counter to assess the accuracy of these measurements; the results were within uncertainties of the beta dose-rates determined using the ICP analyses.
In addition, the external gamma dose-rates were calculated using the chemical concentrations determined from ICP-MS and compared with the gamma dose-rates determined using in situ gamma spectrometry. The external gamma dose-rates for sample T4BATT06 were similar when determined using field gamma spectrometry (1.14 ± 0.07 Gy ka⁻¹) and the ICP-MS results (1.08 ± 0.08 Gy ka⁻¹). Sample T4BATT06 was taken from a thick sedimentary unit >0.3 m away from any boundaries, so the ICP-MS results provided an accurate estimate of the gamma dose-rate that was similar to the dose-rate determined using in situ gamma spectrometry. However, the external gamma dose-rates determined using field gamma spectrometry for the four samples taken from the Tregarthen Gravel were 0.9 Gy ka⁻¹ (T4BATT01), 0.3 Gy ka⁻¹ (T4BATT03), 0.7 Gy ka⁻¹ (T4BATT04) and 0.6 Gy ka⁻¹ (T4BATT05) higher than those determined using the ICP results; this is calculated relative to the central value of the gamma dose-rate and excluding uncertainties, which are typically 10-15% of the gamma dose-rate. The differences for the four samples from the Tregarthen Gravel arise probably because these samples were taken from units thinner than the effective range of gamma rays (~0.3 m), so field gamma spectrometry was required to accurately determine the external gamma dose-rate in situ. Given the challenging nature of the thin sand lenses sampled from the Tregarthen Gravel, four replicate samples were taken for OSL dating from this unit to constrain the depositional event. To isolate coarse-grained quartz for OSL analysis, each sample was first treated with a 10% v/v dilution of 37% HCl and with 20% v/v H₂O₂ to remove carbonates and organics, respectively. Dry sieving isolated the 212-250 μm diameter grains, and density separation using sodium polytungstate provided the 2.62-2.70 g cm⁻³ (quartz-dominated) fractions.
The quartz grains were etched for 1 h in 40% hydrofluoric (HF) acid to remove the outer portion of the grains affected by alpha irradiation and to remove any contaminating grains of feldspar. After etching, the quartz was washed in 10% HCl to remove any fluorides that may have been produced during HF etching and was re-sieved at 212 μm. Grains were mounted into 10 by 10 grids of 300-μm-diameter holes in a 9.8-mm-diameter aluminium single-grain disc for analysis. All luminescence measurements were performed using a Risø TL/OSL DA-15 automated single-grain system equipped with a 90Sr/90Y beta source (). Stimulation was performed using a green laser and detected through a 2.5-mm-thick U-340 filter and convex quartz lens placed in front of the photomultiplier tube. The signal was recorded at 125 °C for a total of 1 s, where the OSL signal was summed over the first 0.1 s of stimulation and the background calculated from the final 0.2 s. Instrument reproducibility of 2.5% () was incorporated into the calculation of the equivalent dose (De) values. The preheat temperature was determined from a dose-recovery preheat plateau test performed on multiple-grain aliquots (5 mm in diameter) of sample T4BATT03. The results suggested that the De values determined were not dependent upon the preheat temperature used, but recuperation was >5% above preheat temperatures of 220 °C. Therefore, a preheat of 220 °C for 10 s and a cutheat of 160 °C were used for the single-aliquot regenerative-dose (SAR) protocol (Murray and Wintle, 2000). Dose-recovery experiments suggested that the SAR protocol was appropriate for OSL dating: T4BATT03 (ratio of 0.94 ± 0.02, overdispersion 5 ± 1%), T4BATT04 (ratio of 0.95 ± 0.04, overdispersion 15 ± 1%) and T4BATT06 (ratio of 0.98 ± 0.03, overdispersion 7 ± 1%). Six screening criteria were applied to the data throughout the analyses; associated uncertainties were included for each test.
Grains were only accepted if the response to the test dose was greater than three sigma above the background, the test-dose uncertainty was <20%, the recycling ratios and OSL-IR depletion ratios were within the range 0.8-1.2, recuperation was <5% of the response from the largest regenerative dose (150 Gy) and the single-grain De values were not part of a population of very low doses identified by the finite mixture model (FMM) as inconsistent with the geological context of the sample (i.e. 1 ka). Only 0-8% of grains giving a De value failed this last criterion. After applying all screening criteria, between 2.6 and 3.9% of the total grains analysed were used to calculate De values for age calculation (Table 1).

TCN dating

Eight samples from three locations inferred to be within the maximum extent of the ISIS were collected for analysis of in situ produced 10Be in quartz (Table 2). Shielding from surrounding topography was measured and corrected for using the CRONUS-Earth online calculator (Table 2; ). The boulder samples were chiselled from upper boulder surfaces and the cobble sample (T4TRE03) was collected as a whole clast. Samples were crushed and washed at the University of Glasgow. Quartz was separated from the 250-500 μm fraction using standard mineral separation techniques and purified by ultrasonication in 2% HF/HNO₃ to remove remaining contaminants and meteoric 10Be. Quartz purity was assessed by measuring the aluminium content using flame atomic absorption spectrometry. Beryllium extraction was carried out at the Cosmogenic Isotope Analysis Facility, Scottish Universities Environmental Research Centre (CIAF, SUERC), using procedures based on Child et al.. The 10Be/9Be ratios were measured by accelerator mass spectrometry (AMS) at SUERC () and 10Be exposure ages were calculated using the CRONUS-Earth online calculator (Table 2; ). See Table S2 for details of the chemistry and AMS data for these analyses.
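The single-grain screening criteria described above can be expressed as a simple accept/reject filter. The sketch below is illustrative only: the `Grain` record, its field names and the example measurement values are hypothetical, while the thresholds follow those stated in the text (3-sigma signal test, <20% test-dose error, 0.8-1.2 ratio windows, <5% recuperation relative to the 150 Gy regenerative dose); the FMM low-dose test is omitted.

```python
# Sketch of the single-grain OSL screening criteria described in the text.
# The Grain record and the example values are hypothetical.
from dataclasses import dataclass

@dataclass
class Grain:
    test_signal: float        # net test-dose signal (counts)
    background_sigma: float   # 1-sigma background (counts)
    test_dose_error: float    # relative uncertainty on test-dose response
    recycling_ratio: float
    ir_depletion_ratio: float
    recuperation: float       # fraction of the 150 Gy regenerative response

def passes_screening(g: Grain) -> bool:
    """Apply the five directly testable criteria from the text."""
    return (g.test_signal > 3 * g.background_sigma
            and g.test_dose_error < 0.20
            and 0.8 <= g.recycling_ratio <= 1.2
            and 0.8 <= g.ir_depletion_ratio <= 1.2
            and g.recuperation < 0.05)

grains = [
    Grain(500, 20, 0.05, 0.95, 1.02, 0.01),  # accepted
    Grain(40, 20, 0.05, 0.95, 1.02, 0.01),   # fails the 3-sigma signal test
    Grain(500, 20, 0.25, 0.95, 1.02, 0.01),  # fails the test-dose error test
    Grain(500, 20, 0.05, 0.70, 1.02, 0.01),  # fails the recycling-ratio test
]
accepted = [g for g in grains if passes_screening(g)]
print(len(accepted))  # 1
```

In practice each criterion is evaluated with its associated uncertainty, as the text notes; this sketch compares central values only.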
Exposure ages presented are based on the time-dependent Lm scaling and assume an erosion rate of 1 mm ka⁻¹. Assuming an erosion rate of 0 mm ka⁻¹ would change our ages by <3% and would not affect any of the conclusions of this study (Table 2). Work from high latitudes in both hemispheres reports standardized production rates that are 5-15% higher than the global production rate used in the CRONUS calculator (). The post-2008 production rates reduce scaling uncertainties and improve agreement with independent chronological techniques (;;;). Two independently calibrated local production rates are available from the British Isles: (i) the Loch Lomond production rate (LLPR) () and (ii) the Glen Roy production rate (GRPR) (Small and Fabel, 2015). These production rates agree within uncertainties (3.92 ± 0.18 and 4.26 ± 0.21 atoms g⁻¹ a⁻¹, respectively). The LLPR is preferred in this study as it is derived from direct age control provided by limiting radiocarbon ages (), rather than from the assumed ages of tephra within a varve chronology () used to determine the GRPR.

Micromorphology

To provide contextual confirmation of the glacigenic interpretation for the boulders sampled for TCN dating on Scilly Rock, a series of monolith samples of the underlying matrix was taken for micromorphological analysis. The samples were impregnated and thin-sectioned to a final thickness of ca. 30 μm following established procedures (see ;Hiemstra, 2013). A transmitted-light petrographic microscope (Leica DM EP) was used for the analysis, at magnifications of up to 40 times, in both plane- and cross-polarized light. Two of the samples were subsequently analysed using micro-X-ray computed tomography (μCT; Nikon Metrology/X-Tek XTH 225). μCT analysis allows the visualization and reconstruction of 3-D phenomena, notably fracture plane geometry and particle long-axis fabric, on the basis of 2-D information (see ).
Results

Shipman Head, Bryher

The boulder samples from Shipman Head on Bryher were collected to assess the reliability of the exposure age obtained by McCarroll et al.. Re-sampling of the top surface of the boulder (T4SHI01) produced an exposure age of 23.8 ± 1.6 ka (Table 2), which agrees with the published TCN age (22.2 ± 1.3 ka). The sample collected from the underside of the boulder (T4SHI02) produced an apparent exposure age of 231.8 ± 14.8 ka. Such a prolonged period of exposure before overturning means that the 10Be inventory measured from the upper surface will include a significant muonic contribution, as muons can produce 10Be at depths >3 m (). A simple model of 10Be concentration with depth, assuming a total period of exposure before overturning equivalent to the apparent exposure age of 231 ka and no prior inheritance, has been determined following Granger and Smith (Fig. 5). While this model is a first-order quantification of the inherited 10Be inventory due to muons, it demonstrates that a significant proportion (~20%) of the measured 10Be inventory from the top surface is due to production by muons during exposure before overturning. This level of inheritance would produce an apparent exposure age that overestimates the true age of exposure by ~5 ka, suggesting that overturning occurred between ca. 19 and 17 ka, based on the ages from the upper surface of the boulder in this study and in McCarroll et al.. While understanding of muon interaction cross-sections has continued to improve (cf. ), the non-trivial level of inheritance due to such a prolonged period of prior exposure will produce a spurious apparent exposure age for the boulder's top surface regardless of the depth-production model used.

Table 1. Concentrations of K, Rb, U and Th determined for OSL dating using ICP-MS and ICP-AES analysis, presented to the appropriate decimal places according to the associated detection limits. The beta dose-rates were calculated using the conversion factors of Guérin et al. and the beta dose-rate attenuation factors of Guérin et al.. Gamma dose-rates were measured in situ using a portable gamma spectrometer.
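A first-order estimate of this kind of muon-driven inheritance can be sketched with a two-exponential depth-production profile (spallation plus muons), in the spirit of the Granger and Smith-style model referenced above. All parameter values below are illustrative assumptions, not the values used in the study: the surface production rate, the muogenic fraction and the attenuation lengths vary between calibrations.

```python
import math

# Illustrative two-exponential 10Be depth-production model: spallation
# attenuates quickly with depth, muogenic production much more slowly,
# so a clast buried ~3 m deep for ~231 ka still accumulates a
# non-trivial inventory. All parameter values are assumptions.
LAM = math.log(2) / 1.387e6      # 10Be decay constant (1/a)
RHO = 2.6                        # rock density (g cm^-3)
P_SURF = 4.0                     # assumed total surface production (atoms g^-1 a^-1)
F_MU = 0.02                      # assumed muogenic fraction at the surface
L_SP, L_MU = 160.0, 1500.0       # assumed attenuation lengths (g cm^-2)

def production(depth_cm: float) -> float:
    """Spallation + muon production rate at depth (atoms g^-1 a^-1)."""
    x = RHO * depth_cm
    return (P_SURF * (1 - F_MU) * math.exp(-x / L_SP)
            + P_SURF * F_MU * math.exp(-x / L_MU))

def inventory(depth_cm: float, t_years: float) -> float:
    """10Be accumulated at a constant depth over t years, with decay."""
    return production(depth_cm) * (1 - math.exp(-LAM * t_years)) / LAM

n_inherited = inventory(300.0, 231_000.0)        # atoms g^-1 at 3 m depth
apparent_extra_ka = n_inherited / P_SURF / 1e3   # ka of apparent extra surface exposure
print(n_inherited, apparent_extra_ka)  # ~1.7e4 atoms g^-1, ~4 ka with these assumptions
```

With these placeholder parameters the inherited inventory comes out on the order of 10⁴ atoms g⁻¹, equivalent to a few ka of full surface exposure, broadly consistent with the ~20 000 atoms g⁻¹ (~5 ka) quoted in the Discussion; the exact figures depend on the assumed muogenic production parameters.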
Water contents of 17 ± 5% were applied and are expressed as a percentage of the mass of dry sediment. The water contents were estimated from the field and saturated water contents and the environmental history of each sample. Cosmic dose-rates were determined in accordance with Prescott and Hutton (1994).

Exposure ages in Table 2 were calculated using (), Lm scaling, assuming a density of 2.6 g cm⁻³; analytical uncertainties are given in parentheses.

Scilly Rock

Diamicton

Microscopically, the diamicton is characterized as matrix supported, albeit with a variable grain density. It shows an abundance of randomly orientated, elongated, irregularly shaped pores, but there are also sets of planar fractures that display regular, symmetrical geometric patterns (subhorizontal and steeply inclined planes), as corroborated by μCT analysis (Fig. 6a). Pebbles in the diamicton are granitic, often subangular with irregular outlines, showing evidence of in situ weathering (exfoliation, rind formation and biotite alteration). Silt and sand grains in the diamicton are highly variable in terms of shape and roundness, and display strong preferred long-axis orientation (micro-fabric) in places. Locally, lineaments and associated turbate structures can be observed (Fig. 6b), both of which can be taken as evidence of simple shear deformation (see Hiemstra and Rijsdijk, 2003). μCT analysis was also used to corroborate the suggested micro-fabric signal. Two samples, each consisting of several thousands of grains, show that there are subhorizontal modes in the micro-fabric (Fig. 6c), although the calculated eigenvalues are only moderately strong. It is reasonable to assume that the signals reflect some form of flow or strain and are related either to depositional or to deformational processes.
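The eigenvalue summary of a grain long-axis fabric of the kind discussed above is conventionally obtained from the normalized orientation tensor: each trend/plunge pair is converted to a unit vector and the eigenvalues of the averaged outer-product matrix are computed (they sum to 1, and a high first eigenvalue indicates a strong preferred orientation). The sketch below follows that standard method; the example directions are hypothetical, not the measured Scilly Rock data.

```python
import math
import numpy as np

# Orientation-tensor (eigenvalue) summary of an axis fabric.
# Eigenvalues lam1 >= lam2 >= lam3 sum to 1; lam1 near 1 means a
# tight cluster, lam1 near 1/3 means an isotropic fabric.
def fabric_eigenvalues(trends_deg, plunges_deg):
    vecs = []
    for t, p in zip(trends_deg, plunges_deg):
        t, p = math.radians(t), math.radians(p)
        vecs.append((math.cos(p) * math.cos(t),
                     math.cos(p) * math.sin(t),
                     math.sin(p)))
    v = np.array(vecs)
    tensor = v.T @ v / len(v)        # 3x3 normalized orientation tensor
    eig = np.linalg.eigvalsh(tensor)  # returned in ascending order
    return eig[::-1]                  # descending: lam1, lam2, lam3

# A tightly clustered, subhorizontal set of long axes (hypothetical);
# antipodal trends (e.g. 85 vs 265) reinforce the same axis fabric.
trends = [80, 85, 250, 95, 100, 272, 88, 84]
plunges = [2, 5, 10, 8, 3, 6, 1, 4]
lam1, lam2, lam3 = fabric_eigenvalues(trends, plunges)
print(lam1)  # close to 1 for this tight cluster
```

A diffuse natural fabric such as the one described in the text (first eigenvalue λ1 of 0.43) sits much closer to the isotropic end of this scale than the tight synthetic cluster used here.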
The predominantly silty matrix is heterogeneous: there are zones enriched in clay around large pebbles (possibly related to granite weathering) and other parts that are distinctly sandier. In some places, a network of silty to sandy 'tracks' delineates diamictic aggregates, which is probably the reason for the fragmented nature of the diamicton. The tracks look flushed or have a fluidized appearance, and they seem to be associated with the irregularly shaped pores and fractures described above. There is also evidence of clay illuviation in pores, which together with the tracks strongly suggests water circulation in the diamicton, most probably post-depositionally. In cross-polarized light, the diamicton shows patterns of unidirectional birefringence (plasmic fabrics; see Hiemstra, 2013), which reflect narrow zones of preferentially aligned clay particles (Fig. 6d) and are generally taken as evidence of strain within the sediment, probably in response to syn-depositional simple shear deformation. The orientation of the plasmic fabrics often conforms to the preferred micro-fabrics observed. Overall, the micro-scale characteristics of the Scilly Rock diamicton strongly suggest that post-depositional processes have played a major role in the formation of the characteristics of this sediment (see also Hiemstra and Carr, 2015). Firstly, there is evidence of in situ weathering of granite clasts (including the alteration of biotite minerals to clays). Secondly, there is ample evidence of water movement, probably occurring post-depositionally rather than syn-depositionally, within the diamicton, based on the irregular nature of pore spaces, the localized effects of winnowing and fluidization, and traces of clay illuviation.

Figure 6. (a) Example of a micro-X-ray tomography image of fracture patterns observed in the Scilly Rock diamicton. While the overall appearance of the fracture network is rather chaotic, locally (for example in the white frame indicated) a regular geometric pattern of subhorizontal and steeply inclined planes is visible.
(b) Example of turbate microstructure (plane light, width of view 4 mm), defined as a circular arrangement of fine elongated grains around a coarse core grain. It suggests that the coarse grain rotated in response to simple shear, thereby reorientating the fine-textured material in its direct vicinity. (c) Example of a lower-hemisphere equal-area stereoplot (with Kamb contouring heat maps) representing silt and sand grain long-axis fabric in the Scilly Rock diamicton. Although the calculated first eigenvalue λ1 is only 0.43, there are convincing subhorizontal to low-oblique signals visible. (d) Example of moderately to well-developed unistrial to masepic plasmic fabrics (preferred clay mineral orientations shown by white arrows) (cross-polarized light; width of view 4 mm). Such unidirectional plasmic fabrics are normally attributed to simple shear deformation.

There is also strong microscopic evidence of 'primary' strain, which is localized but consistent in terms of overall character. The geometric planar fracture patterns and the identified types of fabrics are compatible, in terms of general orientations, with a simple, syn-depositional shearing regime. The micro-fabric modes, the unidirectional plasmic fabrics and their close association with turbates would be consistent with a subglacial shearing environment (see van der Meer, 1993; van der Meer and Menzies, 2011; and references therein). This suggests that the diamicton analysed represents a basal till or a subglacial traction till (cf. b) that has been post-depositionally modified.

Age constraints

The three TCN samples overlying the diamicton from Scilly Rock yielded apparent exposure ages of 26.7 ± 1.6, 44.8 ± 2.5 and 25.0 ± 1.5 ka (Table 2).
Given the age correspondence (within their analytical uncertainties) of two of the samples, the oldest age (T4SCI02) appears to be an outlier and is attributed to nuclide inheritance. Samples T4SCI01 and T4SCI03 have exposure ages that agree within their analytical uncertainties (Table 2). While their geomorphological context does not allow their previous orientation to be inferred, their ages imply mobilization around the time of the LGM. Moreover, the boulders from which these samples were taken directly overlie erratic-bearing diamicton (see 'Diamicton' above), which shows that there is probably a direct relationship between the boulders and the glacial deposits. If this is the case, and considering the potential for inheritance (see 'Shipman Head, Bryher' above), then the remaining two Scilly Rock exposure ages constrain the timing of the LGM on the Isles of Scilly to an arithmetic mean age of 25.9 ± 1.6 ka, with a range of 28.3-23.5 ka.

Tresco

Previous OSL studies of glacial sediments from the Isles of Scilly (Scourse and Rhodes, 2006) have reported issues with feldspar contamination in the density-separated quartz fractions. Single-grain OSL measurements of the quartz fraction separated in this study show that ~5% of the grains gave De values, but ~40% of these grains failed the OSL-IR depletion ratio test, which addresses issues of feldspar contamination. Typical decay curves and a dose-response curve measured for a single grain of quartz from sample T4BATT03 are shown in Fig. 7. The De values for single grains of quartz that passed the OSL-IR depletion ratio test (and the other criteria described above) from all five samples gave overdispersion values ranging from 37 to 43% (Fig. 8). The single-grain De values determined for the samples in this study are included in Tables S3-S7.
The central age model (CAM) was used to determine OSL ages for these samples (Table 1), as the symmetrical distribution of De values did not suggest that the grains were heterogeneously bleached before burial (Fig. 8).

Figure 7. Examples of typical decay curves (a) and a dose-response curve (b) for a single grain of quartz from sample T4BATT03. The test dose used in this study was 9.5 Gy.

Discussion

The TCN concentration measured from the underside of the Shipman Head boulder and the existing TCN exposure ages from outside the inferred ice limit () indicate that the granite tors of the Isles of Scilly have a long exposure history, increasing the likelihood of nuclide inheritance in the TCN samples. This is highlighted by the ages presented for samples T4SCI02 and T4TRE01, which pre-date the LGM. The long exposure history highlights the potential for significant muonic contributions to 10Be inventories measured in large overturned boulders, demonstrated by the exposure ages obtained from the upper surface of the Shipman Head boulder (see 'Shipman Head, Bryher' in the Results). These ages include roughly 20 000 10Be atoms g⁻¹, equivalent to ~5 ka of full exposure, acquired before overturning. An age of 19-17 ka for deposition of the boulder at Shipman Head suggests that glaciation is an unlikely agent of boulder mobilization at this time, as the ISIS is known to have retreated ~500 km north of the Isles of Scilly to the Isle of Man by ~17 ka (). Consequently, an alternative mechanism must be responsible for overturning this boulder, such as periglacial activity or the highly energetic storm waves known to influence the Isles of Scilly (e.g. Fig. 4b), and the usage of the term 'boulder moraine' to describe the Shipman Head feature should be discontinued.
Although the potential for nuclide inheritance suggests that caution needs to be applied when interpreting exposure ages, there is supporting geomorphological and sedimentary evidence that can be used to draw some inferences on when the ISIS impinged on the northern Isles of Scilly. The inference that ice was responsible for mobilization of the boulder sampled for T4TRE02 (see 'Sample sites', 'Scilly Rock') is not refuted by the apparent exposure age of 30.4 ± 1.8 ka obtained from its top surface. Although this age pre-dates the LGM, the boulder appears to be overturned, given the absence of weathering features on its top surface, and it is likely to contain a significant muonic contribution (see 'Shipman Head, Bryher' in the Results), thus overestimating the time elapsed since boulder mobilization by an indeterminable amount. The age therefore represents a maximum limit on the timing of glaciation of the Isles of Scilly. The youngest exposure age from Tresco (T4TRE03) was obtained from an erratic quartzite clast sampled from the surface within the maximum extent of the Hell Bay Gravel. Considering that the clast was exposed at the present-day surface, it is likely that it was covered by overlying material to some degree in the past. This would act to attenuate the incoming cosmic radiation, reducing the production rate of 10Be within the sample and resulting in an apparent exposure age that underestimates the true age of deposition. Although the depth and duration of cover cannot be quantified, the extremely low relief of the sample site precludes significant erosion of material, implying that any pre-existing cover was probably thin. As a result, the exposure age is not corrected for any post-depositional shielding and the resulting age is interpreted as a minimum. The Isles of Scilly are composed entirely of Variscan granite, so the quartzite erratic was most likely deposited by the ISIS when it impinged upon the northern Isles of Scilly.
The exposure age of 25.3 ± 1.5 ka for sample T4TRE03 is interpreted as a minimum limit on the timing of glaciation of the Isles of Scilly. The boulders from which samples T4SCI01 and T4SCI03 were collected both directly overlie glacigenic sediments (see 'Sample sites'). While the potential for nuclide inheritance cannot be discounted, the good agreement of these ages and their sedimentological association suggest that their exposure ages represent boulder mobilization by ice. Additionally, these ages are bracketed by the maximum and minimum limiting age control provided by the ages from Tresco, which adds further chronological constraint to the last glaciation of the Isles of Scilly. As a result, the TCN data suggest that the ISIS extended to the Isles of Scilly during a time interval after 30.4 ± 1.8 ka and before 25.3 ± 1.5 ka. This agrees with the exposure ages from Scilly Rock (T4SCI01 and T4SCI03), which suggest that ice impinged on the Isles of Scilly at 28.3-23.5 ka, with an arithmetic mean age of 25.9 ± 1.6 ka. The new OSL ages for the deposition of the Tregarthen Gravel associated with the Scilly Till at Battery (Fig. 3) also suggest that ice was impinging on the Isles of Scilly during MIS 2 (Table 1). Although the reliability of the preliminary ages reported by Scourse and Rhodes for the Tregarthen Gravel at Battery is difficult to assess owing to the lack of published information on the analyses, the ages of 25.1 ± 2.2 and 22.7 ± 0.9 ka are consistent with the new OSL ages in this study. The OSL ages of samples taken from the Tregarthen Gravel in this study are not in simple stratigraphic order (Fig. 3). This is unlikely to have been caused by inaccurate environmental dose-rates, as these have been independently assessed using multiple methods in this study (see 'OSL dating').
Inaccurate estimation of the water content throughout burial is also unlikely, as samples were taken from within 1 m of each other in the same stratigraphic section, with identical overlying and underlying sedimentary units of breccia (Fig. 3c). The OSL ages are consistent with each other within ±2σ, and this probably reflects the reproducibility of OSL dating of replicate samples from a single depositional event. An approach to determining an OSL age for the ice advance to the Isles of Scilly indicated by the Tregarthen Gravel is therefore to calculate the weighted mean and standard error of the four OSL ages (25.5 ± 1.5 ka), where the standard error was calculated using equations 21 and 22 of Aitken and Alldred. The weighted mean of the OSL ages for the Tregarthen Gravel at Battery (25.5 ± 1.5 ka) agrees with the TCN exposure ages from boulders on Scilly Rock (25.9 ± 1.6 ka), and provides strong evidence that sediments were deposited by ice during MIS 2 (Fig. 2). The OSL and TCN ages for ice advance to the northern Isles of Scilly suggest that it occurred around the time of the maximum position at 25.4-24.0 cal ka BP reported for the south coast of Ireland from the youngest radiocarbon ages for reworked shell fragments within subglacial Irish Sea diamicton (Ó Cofaigh and Evans, 2007). OSL ages for proglacial outwash in southern Ireland at Whiting Bay (24.4 ± 1.8 ka; 24.2 ± 2.3 ka) and Ballycroneen (23.8 ± 2.1 ka; 21.6 ± 2.1 ka) (Ó ) then suggest rapid retreat of the ISIS from its maximum position in the Celtic Sea to south-eastern Ireland at 23.7-22.9 ka (Fig. 9; ), a retreat more rapid than the subsequent retreat of the ISIS northwards to the Isle of Man (). While the ISIS was rapidly retreating from the Isles of Scilly to the coasts of south-east Ireland and south-west Wales, ice masses in Ireland (e.g. Ballantyne and Stone, 2015) and Wales (e.g. ) are reported to have rapidly thinned.
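The inverse-variance weighted mean used above can be sketched as follows. The four measured Tregarthen Gravel ages are in Table 1 and are not reproduced here; the ages and uncertainties below are placeholders chosen only to illustrate the scale of the calculation, and the simple standard error shown omits the systematic-uncertainty terms that a full Aitken and Alldred treatment would include.

```python
import math

# Inverse-variance weighted mean and its standard error for a set of
# replicate ages. Placeholder values only; the measured ages are in Table 1.
def weighted_mean(ages, sigmas):
    """Return (weighted mean, standard error) with weights 1/sigma^2."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
    stderr = math.sqrt(1.0 / sum(w))  # random (statistical) component only
    return mean, stderr

ages = [26.1, 24.8, 25.9, 25.2]   # ka (hypothetical replicate ages)
sigmas = [2.0, 1.9, 2.1, 2.0]     # 1-sigma uncertainties (hypothetical)
m, se = weighted_mean(ages, sigmas)
print(round(m, 1), round(se, 1))  # 25.5 1.0
```

The larger quoted uncertainty on the reported result (±1.5 ka) reflects the additional systematic dose-rate terms that are shared between replicate samples and therefore do not shrink when the ages are averaged.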
The terrestrial signature for an ice advance to the northern Isles of Scilly constrained by the OSL (25.5 ± 1.5 ka) and TCN (25.9 ± 1.6 ka) ages suggests that the maximum extent of the ISIS in the Celtic Sea coincided with the global LGM (Fig. 9; ). Ice impingement on the Isles of Scilly ended around the time of Heinrich Event 2 (H2) ~24 ka, as the ISIS ice front was in south-east Ireland at 23.7-22.9 ka (Fig. 9; ). This is shown in the IRD record from the marine core OMEX-2K at Goban Spur by an increase in IRD flux from the ISIS at ~24 ka (Fig. 9; a;). The marine record at Goban Spur documents a decrease in salinity immediately following H2, which may be linked to ISIS deglaciation (). Rapid recession of the ISIS followed the ice advance, during the marked warming of Greenland Interstadial 2 (GI-2; ~23 ka) (Fig. 9). It is likely that a combination of factors contributed towards the rapid retreat of the ISIS from its maximum extent in the Celtic Sea to south-eastern Ireland; these include glacio-eustatic forcing linked to H2, ocean and atmospheric warming (a;), a megatidal regime (b), and destabilization and potential overextension of the ISIS after the short-lived advance, which would have sustained a wider and more active calving margin (). OSL dating of the Hell Bay Gravel at Battery suggests that this site was ice-free at 19.6 ± 1.5 ka.

Figure 9. Plotted alongside the OSL and TCN ages determined in this study are the δ18O records and Greenland interstadials as identified from the Greenland ice cores presented in Rasmussen et al., plotted using the Greenland Ice Core Chronology GICC05 (b2k); summer insolation (Berger and Loutre, 1991); records of dolomitic carbon (DC) and total IRD flux from the OMEX-2K marine core, with a radiocarbon age model tuned to GISP2 (); and the dust records obtained from the NGRIP () and Vostok () ice cores. The grey shading marks Heinrich Events H1 and H2 (after ).
This is consistent with the Bayesian model for the retreat of the ISIS, which suggests the ice front had retreated ~400 km northwards to Anglesey in Wales by ~20 ka (). Comparing the OSL age for the Hell Bay Gravel with the TL (18.6 ± 3.7 ka; Wintle, 1981) and OSL (20 ± 7 ka; ) ages for the Old Man Sandloess supports the suggestion by () that both units represent post-glacial deposition on the Isles of Scilly at a similar time. However, the original ages from the Old Man Sandloess were determined using experimental methods, so these samples need to be re-visited using modern OSL dating protocols (e.g. the SAR protocol; Murray and Wintle, 2000) to increase confidence in this comparison. After deposition of the Scilly Till and Tregarthen Gravel in the northern Isles of Scilly (Fig. 2), the OSL age of the Hell Bay Gravel (19.6 ± 1.5 ka) suggests there was a delay of ~5 ka between primary glacigenic deposition and aeolian reworking of this material. This occurred at around the same time as the overturning of the boulder at Shipman Head (see 'Shipman Head, Bryher' in the Results). At present, there are no stratigraphic or geochronological data to inform the character of landscape evolution on the Isles of Scilly during the ~5-ka lacuna between primary glacigenic deposition and aeolian reworking. At ~20 ka, the ISIS ice front is known to have been oscillating on and around the Llŷn Peninsula (Thomas and Chiverrell, 2007;), at a similar time to the deposition of the Old Man Sandloess. There is evidence on the Isles of Scilly for a phase of intense solifluction and ploughing-block emplacement following the primary deposition of the Old Man Sandloess ~20 ka. This is represented by the Bread and Cheese Breccia north of the ice limit and the upper Porthloo Breccia south of the ice limit (Fig. 2). The OSL age for the Old Man Sandloess provides a maximum age for this final phase of solifluction.
It is also worth noting that the revised age of ~19-17 ka for the Shipman Head boulder coincides with the period immediately following the deposition of the Old Man Sandloess. This raises the possibility that the boulder accumulation on Shipman Head is a remanié collection of soliflucted or ploughing blocks derived from the adjacent tor. Any matrix associated with the boulder accumulation has subsequently been removed by wave action in this exposed context (Fig. 4b), similar to the partial removal of the diamicton matrix associated with the boulders on Scilly Rock. The phase of solifluction that followed aeolian reworking of the glacigenic sediments may also have occurred during the period ~20 ka when the ISIS ice front was oscillating on and around the Llŷn Peninsula (Thomas and Chiverrell, 2007;), but it could also be much younger, possibly even Lateglacial in age.

Conclusions

The new ages reported in this study, in combination with previous work, provide strong evidence that ice advanced to the Isles of Scilly during MIS 2. The OSL age of 25.5 ± 1.5 ka for the deposition of ice-marginal outwash sediments at Battery, the limiting TCN exposure ages of 30.4 ± 1.8 and 25.3 ± 1.5 ka from northern Tresco, and the mean TCN exposure age of 25.9 ± 1.6 ka from boulders directly overlying till on Scilly Rock suggest that ice was impinging on the northern Isles of Scilly earlier than was previously estimated by Chiverrell et al.. This implies that ice impingement on the Isles of Scilly ended around the time of increased IRD flux in the marine record at ~24 ka associated with H2. This supports the suggestions of previous studies that ice advance and retreat on the Isles of Scilly were related to H2 and were followed by recession of the ISIS during the warming of GI-2 at ~23 ka.
After the ISIS had receded from the Isles of Scilly, there was a delay of ~5 ka between the primary deposition and aeolian reworking of glacigenic sediment, according to the OSL age of 19.6 ± 1.5 ka for the Hell Bay Gravel. At present, there is a lack of evidence for the environmental history of the Isles of Scilly after the advance of ice and before the phase of aeolian deposition, during a time when the ISIS ice front is known to have been oscillating on and around the Llŷn Peninsula, Wales. This phase of aeolian activity was then followed by a phase of active solifluction.

Acknowledgements. This paper was supported by a Natural Environment Research Council consortium grant (BRITICE-CHRONO NE/J008672/1). H. Wynne is thanked for etching the quartz grains for OSL dating. A. Palmer and S. Carr are also acknowledged for preparing the thin sections and running the tomograph analyses, respectively. Thanks to the Tresco Estate for allowing us access to the Battery and Gunhill sites and facilitating sampling there, to Dave Mawer and Julie Love of the IOS Wildlife Trust for facilitating access to Shipman Head and Scilly Rock, and for supplying the photograph (Fig. 4b). We would like to thank Jeremy Phillips of the St Mary's Boatmen's Association for logistical support.

Table S1. Details of the ages determined from previous studies on the Isles of Scilly.
Table S2. Chemistry and AMS data for samples from the Isles of Scilly.
Table S3. De values from OSL dating of sample T4BATT01 from the Isles of Scilly.
Table S4. De values from OSL dating of sample T4BATT03 from the Isles of Scilly.
Table S5. De values from OSL dating of sample T4BATT04 from the Isles of Scilly.
Table S6. De values from OSL dating of sample T4BATT05 from the Isles of Scilly.
Table S7. De values from OSL dating of sample T4BATT06 from the Isles of Scilly.
WASHINGTON – As lawmakers in Washington debate the future of health care, the Administrator of the Centers for Medicare and Medicaid Services, Seema Verma, is trying to improve services under the laws still in place. Verma was the seventh person President Trump nominated to his administration. She knew moving from the private to the public sector would be a challenge, yet she wanted to help her country – even if that meant commuting each week between Indiana and DC to oversee the multibillion-dollar programs utilized by 130 million Americans. "I saw some of the things that were going on in health care and I realized that our country was going in the wrong direction on health care," Verma told CBN News. One of the first things she wanted to tackle in her role was patient confusion. "You don't have the information that you need in terms of how much are things going to cost," Verma explained. "You don't know about the quality and you don't have your medical record, so I think there's a lot of confusion." Verma wants to give patients the necessary information to make the best decisions about their health care through an initiative called the Blue Button 2.0, or MyHealthEData. "The federal government spent some 36 billion dollars on investing in doctors and hospitals having electronic medical records and I think that's exciting, but what happened in that is all of the information is siloed," said Verma. "It just sort of stayed in your doctor's office. Before it was a filing cabinet and right now it's an electronic silo right inside the computer." Verma told CBN News how a personal emergency brought her face to face with this issue. "I get on the phone with the paramedics and they say your husband's not breathing - is he on any medications? What's his health care background?" recalled Verma. "It was a very difficult moment, there's so much going on, the panic of not being with my kids and wondering what's going on with him," Verma recounted.
"For the medical professionals that were treating him, they didn't have the information that they needed to be able to diagnose him." While her husband recovered, they still had to jump through hoops to get his information. "When I left the hospital, they gave me essentially a CD-ROM and they said here's his health care information," said Verma. In today's tech world, however, many computers won't even read CD-ROMs anymore. "I think the big issue is a lot of the information wasn't even on what they gave me so there's all of this information about him that's sort of trapped at the hospital," explained Verma. Verma says this lack of access is especially difficult for patients who move or see multiple health care providers. "We're working on an initiative to make it very clear to providers that that data and that information belongs to the patient, it is theirs. And we want to make sure when they leave the providers they have that information," said Verma. Under Verma, the Centers for Medicare and Medicaid Services, or CMS, is requiring providers to share this information with patients or face penalties. "You're going to be able to understand your health a lot better and what you need to do to improve your health, but you also have the opportunity to give that to your doctor so your doctor is not repeating tests, there aren't safety issues, there are no drug interactions – so that's exciting," continued Verma. Verma believes that will not only save time and money but could lead researchers to find breakthroughs. "It's really going to give rise to the type of innovation that I think we've seen in the American health care system but I think it's going to bring it to a much higher level," she predicted. And she says so far the response from patients using the program has been positive. "I think people are excited about it.
You know we hear stories all the time, I remember talking to our staff or people saying, 'ya know I'm going to a new doctor and the doctor asked for all of my health care information,' and she said, 'I don't have time to go around asking every doctor for information,' and with this type of tool she should be able to aggregate all of that information," said Verma. Verma tells CBN News they have more than a thousand app developers working to make this data more user-friendly for patients. "It's things like apps that are organizing your medications, there are apps that allow you to take that information and put it into your doctor's electronic health record," she explained. "I think the exciting thing here is the possibilities are endless to see what American innovators are going to be able to do with this data." And she believes this program will continue no matter which party controls the White House. "I think this is something that we have heard from both sides of the aisle about how important this is and it's important on so many different dimensions," said Verma. Although health care negotiations are moving slowly on Capitol Hill, Verma is trying to make the best of her current authority. "From my perspective, I'm going to focus on what I can control," she explained. "We always stand ready to work with Congress if they want to make changes and to provide them with support as they consider changes, but in the meantime, I don't want to stand still and I want to do what I can to make sure that health care is working for every American." That means working toward the constant goal of lowering health care costs and making sure Medicare and Medicaid programs are sustainable.
"We're trying to modernize the program, strengthen the program, do things for the program that are going to empower our beneficiaries and make sure they have the information that they need, whether it's price transparency, quality transparency, or making sure that they have their medical record," said Verma. And she says for her, working in the Trump administration is exciting. "For me, it's exciting to be here. We are with an administration and a president that isn't afraid of disrupting the status quo on behalf of the American people," she said. "That's what I like about the administration, it's okay to be disruptive. The status quo isn't working for so many different Americans and so the idea here is to be bold and to make changes that are going to have a lasting impact to improve health care for our country." Verma believes the changes happening at CMS can have an impact felt throughout the entire health care system.
Environmental and Physiological Barriers to Child Growth and Development. Aggregated analyses of child growth in low- and middle-income countries (LMICs) reveal a remarkably consistent picture of serious growth failure compared to the WHO reference growth curves. Impoverished diets with low dietary diversity are a key driver of poor growth, but there are important additional environmental factors that limit the uptake and utilization of nutrients. This paper considers such factors. A large proportion of the rapid growth deterioration in later infancy can be ascribed to infections and to wider nonspecific effects of living in an unhygienic environment, including the ingestion of toxins such as aflatoxin. Despite never revealing themselves as clinical syndromes, the great majority of children in rural low-income settings of Africa and Asia are antibody-positive to numerous pathogens (CMV, EBV, HepB, Helicobacter pylori, and many more) by 24 months; these infections must take their toll. Additionally, there is a syndrome widely termed environmental enteric disease that combines gut leakage with a chronic inflammation, leading to nutrient losses and cytokine-mediated growth retardation. Systemic inflammation also inhibits nutrient uptake and utilization. Elimination of these environmental barriers will be key to achieving optimal child growth and development in LMICs.
#include "pch_script.h"
#include "UIGameTutorial.h"
#include "UIWindow.h"
#include "UIStatic.h"
#include "UIXmlInit.h"
#include "object_broker.h"
#include "../../xrEngine/xr_input.h"
#include "../xr_level_controller.h"
#include "../../xrServerEntities/script_engine.h"
#include "../ai_space.h"
#include "../../xrEngine/xr_ioconsole.h"
#include "../UIGameCustom.h"
#include "UIActorMenu.h"
#include "UIPdaWnd.h"

extern ENGINE_API BOOL bShowPauseString;

void CallFunction(shared_str const& func)
{
    luabind::functor<void> functor_to_call;
    bool functor_exists = ai().script_engine().functor(func.c_str(), functor_to_call);
    THROW3(functor_exists, "Cannot find script function described in tutorial item ", func.c_str());
    if (functor_to_call.is_valid())
        functor_to_call();
}

void CallFunctions(xr_vector<shared_str>& v)
{
    xr_vector<shared_str>::const_iterator it = v.begin();
    for (; it != v.end(); ++it)
        CallFunction(*it);
}

void CUISequenceItem::Load(CUIXml* xml, int idx)
{
    XML_NODE* _stored_root = xml->GetLocalRoot();
    xml->SetLocalRoot(xml->NavigateToNode("item", idx));

    int disabled_cnt = xml->GetNodesNum(xml->GetLocalRoot(), "disabled_key");
    for (int i = 0; i < disabled_cnt; ++i)
    {
        LPCSTR str = xml->Read("disabled_key", i, NULL);
        m_disabled_actions.push_back(action_name_to_id(str));
    }

    int j;
    int f_num = xml->GetNodesNum(xml->GetLocalRoot(), "function_on_start");
    m_start_lua_functions.resize(f_num);
    for (j = 0; j < f_num; ++j)
        m_start_lua_functions[j] = xml->Read(xml->GetLocalRoot(), "function_on_start", j, NULL);

    f_num = xml->GetNodesNum(xml->GetLocalRoot(), "function_on_stop");
    m_stop_lua_functions.resize(f_num);
    for (j = 0; j < f_num; ++j)
        m_stop_lua_functions[j] = xml->Read(xml->GetLocalRoot(), "function_on_stop", j, NULL);

    m_check_lua_function = xml->Read(xml->GetLocalRoot(), "function_check_start", 0, NULL);
    m_onframe_lua_function = xml->Read(xml->GetLocalRoot(), "function_on_frame", 0, NULL);

    xml->SetLocalRoot(_stored_root);
}

bool CUISequenceItem::AllowKey(int dik)
{
    xr_vector<int>::iterator it = std::find(m_disabled_actions.begin(), m_disabled_actions.end(), get_binded_action(dik));
    if (it == m_disabled_actions.end())
        return true;
    else
        return false;
}

void CUISequenceItem::Update()
{
    if (m_onframe_functor.is_valid())
        m_onframe_functor(current_factor());
}

void CUISequenceItem::Start()
{
    CallFunctions(m_start_lua_functions);
    if (m_onframe_lua_function.size())
    {
        bool functor_exists = ai().script_engine().functor(m_onframe_lua_function.c_str(), m_onframe_functor);
        THROW3(functor_exists, "Cannot find script function described in tutorial item ", m_onframe_lua_function.c_str());
    }
}

bool CUISequenceItem::Stop(bool bForce)
{
    CallFunctions(m_stop_lua_functions);
    return true;
}

CUISequencer::CUISequencer()
{
    m_flags.zero();
}

void CUISequencer::Start(LPCSTR tutor_name)
{
    VERIFY(m_sequencer_items.size() == 0);
    Device.seqFrame.Add(this, REG_PRIORITY_LOW - 10000);
    m_UIWindow = xr_new<CUIWindow>();

    CUIXml uiXml;
    uiXml.Load(CONFIG_PATH, UI_PATH, "game_tutorials.xml");

    int items_count = uiXml.GetNodesNum(tutor_name, 0, "item");
    VERIFY(items_count > 0);
    uiXml.SetLocalRoot(uiXml.NavigateToNode(tutor_name, 0));

    m_flags.set(etsPlayEachItem, !!uiXml.ReadInt("play_each_item", 0, 0));
    m_flags.set(etsPersistent, !!uiXml.Read("persistent", 0, 0));
    m_flags.set(etsOverMainMenu, !!uiXml.Read("over_main_menu", 0, 0));
    int render_prio = uiXml.ReadInt("render_prio", 0, -2);

    CUIXmlInit xml_init;
    if (UI().is_widescreen() && uiXml.NavigateToNode("global_wnd_16", 0))
    {
        xml_init.AssignColor("tut_gray", color_rgba(255, 255, 255, 255));
        xml_init.InitWindow(uiXml, "global_wnd_16", 0, m_UIWindow);
    }
    else
    {
        xml_init.AssignColor("tut_gray", color_rgba(100, 100, 100, 255));
        xml_init.InitWindow(uiXml, "global_wnd", 0, m_UIWindow);
    }

    XML_NODE* bk = uiXml.GetLocalRoot();
    uiXml.SetLocalRoot(uiXml.NavigateToNode("global_wnd", 0));
    {
        LPCSTR str = uiXml.Read("pause_state", 0, "ignore");
        m_flags.set(etsNeedPauseOn, 0 == _stricmp(str, "on"));
        m_flags.set(etsNeedPauseOff, 0 == _stricmp(str, "off"));
    }

    LPCSTR snd_name = uiXml.Read("sound", 0, "");
    if (snd_name && snd_name[0])
    {
        m_global_sound.create(snd_name, st_Effect, sg_Undefined);
        VERIFY(m_global_sound._handle() || strstr(Core.Params, "-nosound"));
    }
    m_start_lua_function = uiXml.Read("function_on_start", 0, "");
    m_stop_lua_function = uiXml.Read("function_on_stop", 0, "");
    uiXml.SetLocalRoot(bk);

    for (int i = 0; i < items_count; ++i)
    {
        LPCSTR _tp = uiXml.ReadAttrib("item", i, "type", "");
        bool bVideo = 0 == _stricmp(_tp, "video");
        CUISequenceItem* pItem = 0;
        if (bVideo)
            pItem = xr_new<CUISequenceVideoItem>(this);
        else
            pItem = xr_new<CUISequenceSimpleItem>(this);
        m_sequencer_items.push_back(pItem);
        pItem->Load(&uiXml, i);
    }

    Device.seqRender.Add(this, render_prio /*-2*/);

    CUISequenceItem* pCurrItem = GetNextItem();
    R_ASSERT3(pCurrItem, "no item(s) to start", tutor_name);
    pCurrItem->Start();

    m_pStoredInputReceiver = pInput->CurrentIR();
    IR_Capture();
    m_flags.set(etsActive, TRUE);
    m_flags.set(etsStoredPauseState, Device.Paused());

    if (m_flags.test(etsNeedPauseOn) && !m_flags.test(etsStoredPauseState))
    {
        Device.Pause(TRUE, TRUE, TRUE, "tutorial_start");
        bShowPauseString = FALSE;
    }
    if (m_flags.test(etsNeedPauseOff) && m_flags.test(etsStoredPauseState))
        Device.Pause(FALSE, TRUE, FALSE, "tutorial_start");

    if (m_global_sound._handle())
        m_global_sound.play(NULL, sm_2D);

    if (m_start_lua_function.size())
        CallFunction(m_start_lua_function);
}

CUISequenceItem* CUISequencer::GetNextItem()
{
    CUISequenceItem* result = NULL;
    while (m_sequencer_items.size())
    {
        luabind::functor<bool> functor_to_call;
        result = m_sequencer_items.front();
        shared_str const f = result->m_check_lua_function;
        if (f.size() == 0)
            break;

        bool functor_exists = ai().script_engine().functor(f.c_str(), functor_to_call);
        THROW3(functor_exists, "Cannot find script function described in tutorial item ", f.c_str());

        bool call_result = true;
        if (functor_to_call.is_valid())
            call_result = functor_to_call();

        if (!call_result)
        {
            m_sequencer_items.pop_front();
            result = NULL;
        }
        else
        {
            break;
        }
    }
    return result;
}

extern CUISequencer* g_tutorial;
extern CUISequencer* g_tutorial2;

void CUISequencer::Destroy()
{
    if (m_stop_lua_function.size())
        CallFunction(m_stop_lua_function);

    m_global_sound.stop();
    Device.seqFrame.Remove(this);
    Device.seqRender.Remove(this);
    delete_data(m_sequencer_items);
    delete_data(m_UIWindow);
    IR_Release();
    m_flags.set(etsActive, FALSE);
    m_pStoredInputReceiver = NULL;

    if (!m_on_destroy_event.empty())
        m_on_destroy_event();

    if (g_tutorial == this)
        g_tutorial = NULL;
    if (g_tutorial2 == this)
        g_tutorial2 = NULL;
}

void CUISequencer::Stop()
{
    if (m_sequencer_items.size())
    {
        if (m_flags.test(etsPlayEachItem))
        {
            Next();
            return;
        }
        else
        {
            CUISequenceItem* pCurrItem = m_sequencer_items.front();
            pCurrItem->Stop(true);
        }
    }

    if (m_flags.test(etsNeedPauseOn) && !m_flags.test(etsStoredPauseState))
        Device.Pause(FALSE, TRUE, TRUE, "tutorial_stop");
    if (m_flags.test(etsNeedPauseOff) && m_flags.test(etsStoredPauseState))
        Device.Pause(TRUE, TRUE, FALSE, "tutorial_stop");

    Destroy();
}

void CUISequencer::OnFrame()
{
    if (!Device.b_is_Active)
        return;
    if (!IsActive())
        return;

    if (!m_sequencer_items.size())
    {
        Stop();
        return;
    }
    else
    {
        CUISequenceItem* pCurrItem = m_sequencer_items.front();
        if (!pCurrItem->IsPlaying())
            Next();
    }

    if (!m_sequencer_items.size())
    {
        Stop();
        return;
    }

    VERIFY(m_sequencer_items.front());
    m_sequencer_items.front()->Update();
    m_UIWindow->Update();
}

void CUISequencer::OnRender()
{
    if (m_UIWindow->IsShown())
        m_UIWindow->Draw();
    VERIFY(m_sequencer_items.size());
    m_sequencer_items.front()->OnRender();
}

void CUISequencer::Next()
{
    CUISequenceItem* pCurrItem = m_sequencer_items.front();
    bool can_stop = pCurrItem->Stop();
    if (!can_stop)
        return;

    m_sequencer_items.pop_front();
    delete_data(pCurrItem);

    if (m_sequencer_items.size())
    {
        pCurrItem = GetNextItem();
        if (pCurrItem)
            pCurrItem->Start();
    }
}

bool CUISequencer::GrabInput()
{
    if (m_sequencer_items.size())
        return m_sequencer_items.front()->GrabInput();
    else
        return false;
}

void CUISequencer::IR_OnMousePress(int btn)
{
    if (m_sequencer_items.size())
        m_sequencer_items.front()->OnMousePress(btn);
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMousePress(btn);
}

void CUISequencer::IR_OnMouseRelease(int btn)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMouseRelease(btn);
}

void CUISequencer::IR_OnMouseHold(int btn)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMouseHold(btn);
}

void CUISequencer::IR_OnMouseMove(int x, int y)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMouseMove(x, y);
}

void CUISequencer::IR_OnMouseStop(int x, int y)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMouseStop(x, y);
}

void CUISequencer::IR_OnKeyboardRelease(int dik)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnKeyboardRelease(dik);
}

void CUISequencer::IR_OnKeyboardHold(int dik)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnKeyboardHold(dik);
}

void CUISequencer::IR_OnMouseWheel(int direction)
{
    if (!GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnMouseWheel(direction);
}

void CUISequencer::IR_OnKeyboardPress(int dik)
{
    if (m_sequencer_items.size())
        m_sequencer_items.front()->OnKeyboardPress(dik);

    bool b = true;
    if (m_sequencer_items.size())
        b &= m_sequencer_items.front()->AllowKey(dik);

    bool binded = is_binded(kQUIT, dik);
    if (b && binded)
    {
        Stop();
        return;
    }

    if (binded && CurrentGameUI())
    {
        if (CurrentGameUI()->ActorMenu().IsShown())
        {
            CurrentGameUI()->HideActorMenu();
            return;
        }
        if (CurrentGameUI()->PdaMenu().IsShown())
        {
            CurrentGameUI()->HidePdaMenu();
            return;
        }
        Console->Execute("main_menu");
        return;
    }

    if (b && !GrabInput() && m_pStoredInputReceiver)
        m_pStoredInputReceiver->IR_OnKeyboardPress(dik);
}

void CUISequencer::IR_OnActivate()
{
    if (!pInput)
        return;

    for (int i = 0; i < CInput::COUNT_KB_BUTTONS; i++)
    {
        if (IR_GetKeyState(i))
        {
            EGameActions action = get_binded_action(i);
            switch (action)
            {
            case kFWD:
            case kBACK:
            case kL_STRAFE:
            case kR_STRAFE:
            case kLEFT:
            case kRIGHT:
            case kUP:
            case kDOWN:
            case kCROUCH:
            case kACCEL:
            case kL_LOOKOUT:
            case kR_LOOKOUT:
            case kWPN_FIRE:
                IR_OnKeyboardPress(i);
                break;
            }
        }
    }
}
A Comprehensive Analysis of Common and Rare Variants to Identify Adiposity Loci in Hispanic Americans: The IRAS Family Study (IRASFS) Obesity is a growing epidemic affecting 35% of adults in the United States. Previous genome-wide association studies (GWAS) have identified numerous loci associated with obesity. However, the majority of studies have been completed in Caucasians focusing on total body measures of adiposity. Here we report the results from genome-wide and exome chip association studies focusing on total body measures of adiposity including body mass index (BMI), percent body fat (PBF) and measures of fat deposition including waist circumference (WAIST), waist-hip ratio (WHR), subcutaneous adipose tissue (SAT), and visceral adipose tissue (VAT) in Hispanic Americans (nmax = 1263) from the Insulin Resistance Atherosclerosis Family Study (IRASFS). Five SNPs from two novel loci attained genome-wide significance (P<5.00x10-8) in IRASFS. A missense SNP in the isocitrate dehydrogenase 1 gene (IDH1) was associated with WAIST (rs34218846, MAF = 6.8%, PDOM = 1.62x10-8). This protein is postulated to play an important role in fat and cholesterol biosynthesis, as demonstrated in cell and knock-out animal models. Four correlated intronic SNPs in the Zinc finger, GRF-type containing 1 gene (ZGRF1; SNP rs1471880, MAF = 48.1%, PDOM = 1.00x10-8) were strongly associated with WHR. The exact biological function of ZGRF1 and its connection with adiposity remain unclear. SNPs with p-values less than 5.00x10-6 from IRASFS were selected for replication. Meta-analysis was computed across seven independent Hispanic-American cohorts (nmax = 4156), and the strongest signal was rs1471880 (PDOM = 8.38x10-6) in ZGRF1 with WAIST. In conclusion, a genome-wide and exome chip association study was conducted that identified two novel loci (IDH1 and ZGRF1) associated with adiposity.
While replication efforts were inconclusive, when taken together with the known biology, IDH1 and ZGRF1 warrant further evaluation.

Introduction

Obesity is a global health problem closely associated with an increased risk for multiple metabolic diseases. Body mass index (BMI) has been widely used in studies to estimate total body adiposity. However, BMI is derived from total body weight, which possesses inter-individual variability attributed to muscle mass, i.e. BMI is not a direct measure of fat deposition, which is closely linked to health outcomes. Waist-hip ratio (WHR) and waist circumference (WAIST) have been well recognized as complementary approaches to estimate fat deposition. However, they are often skewed by age and skeletal structure. In addition to anthropometric measures, computed tomography (CT) has been recognized as the gold standard for measuring regional fat deposition. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) can be estimated by CT scans, with both being strong risk factors for metabolic disturbances. Alternatively, dual-energy X-ray absorptiometry (DEXA) can provide a direct measurement of total body fat volume by partitioning total body mass into bone, lean, and fat soft tissue components. Genome-wide association studies (GWAS) have been successful in identifying obesity-related loci, with more than 100 loci identified to date. However, over 80% of GWAS variants fall outside protein-coding regions, which impairs causal inference. In addition, associated variants possess small effect sizes, providing limited information for disease risk prediction. More recent evidence suggests that low-frequency and rare variants (minor allele frequency (MAF) <5%) also play a role in susceptibility to disease. In addition, although the overall risk of obesity is much higher in Hispanic populations compared to non-Hispanic whites, i.e.
40.4% versus 34.3%, respectively, studies of the genetic contributors have been few in number and limited in scope in the Hispanic population. Until now, VIVA LA FAMILIA was the only cohort with published genome-wide significant obesity signals specific to the Hispanic population. In this study, we hypothesized that genetic factors are responsible for the increased prevalence of obesity in the Hispanic population. By combining more refined adiposity measures and genotypic information from GWAS and exome chip, we are able to conduct a comprehensive scan of the genome with the potential to identify ethnicity-specific causal variants.

Ethics Statement

Participants included in this study were recruited from clinical centers in San Antonio, TX and San Luis Valley, CO. The Institutional Review Board of each clinical (UT Health Science Center San Antonio Review Board and Colorado Multiple Institutional Review Board, respectively) and analysis (Wake Forest School of Medicine) site approved the study protocol, and all participants provided their written informed consent.

Study Participants

Study design and recruitment for the Insulin Resistance Atherosclerosis Family Study (IRASFS) have been described. Briefly, the IRASFS was designed to identify the genetic and environmental basis of insulin resistance and adiposity. Hispanic Americans included in this report (n = 1417 individuals, 90 pedigrees) were recruited from clinical centers in San Antonio, TX and San Luis Valley, CO. While a diagnosis of diabetes was not required for participation, about 12.7% of genotyped individuals had diabetes. A detailed description of the phenotypes can be found in supplemental materials (S1 Text).

Genotyping and Quality Control

GWAS genotyping was supported through the Genetics Underlying Diabetes in Hispanics (GUARDIAN) Consortium.
Genotyping was attempted for 1039 Hispanic Americans plus 13 quality control (QC) duplicates using the Illumina OmniExpress Array (Illumina Inc.; San Diego, CA, USA; n = 730,525 markers), with an additional 14 external controls included to verify reproducibility across genotyping runs. Exome chip genotyping was carried out on the Illumina HumanExome Array v1.0 (n = 560) and v1.1 (n = 864) in the Center for Genomics and Personalized Medicine Research at Wake Forest School of Medicine, Winston-Salem, NC, USA. A detailed description of the quality control procedures can be found in supplemental materials (S1 Text). Overall, 687,094 polymorphic autosomal SNPs from the OmniExpress and 81,599 SNPs from the exome chip were analyzed in 1034 and 1263 individuals, respectively. Among them, 18,289 SNPs overlapped between the two platforms. Genotype concordance was over 99.9%.

Phenotypes

Anthropometric measures of adiposity were obtained using standard methods, including height, weight, waist circumference (minimum between 10th rib and iliac crest), and hip circumference (maximum circumference at the buttocks). BMI was calculated as weight in kilograms divided by height in meters squared. A CT scan was performed to estimate visceral and subcutaneous fat area (cm2). This procedure consisted of a single scout of the abdomen followed by a 10-mm-thick axial image at the L4-L5 disc space using a standard protocol. CT images were read centrally at the University of Colorado Health Sciences Center. VAT and SAT were computed as previously described. Percent body fat (PBF) was measured using DEXA at a 5-year follow-up exam; thus a reduced sample size was available compared to other measures. A whole-body DEXA scan uses the differential attenuation of two low-dose x-ray beams to partition total body mass into bone, lean, and fat soft tissue components based on established mass-attenuation constants for bone mineral and lipid.
Percent body fat (PBF) was calculated as total fat mass divided by measured weight x 100.

Statistical Analysis for GWAS and Exome Chip

Phenotypes were transformed to best approximate the distributional assumptions of conditional normality and homogeneity of variance. Specifically, BMI, WAIST, and WHR were natural log transformed, SAT and VAT values were square-root transformed, and PBF required no transformation. Admixture estimates were calculated using maximum likelihood estimation of individual ancestries using ADMIXTURE. Specifically, the largest set of uncorrelated markers (r2 < 0.1) for K populations yielding the lowest cross-validation (CV) error was used for unsupervised calculation of ancestral proportions. Representative ancestral populations from HapMap (CEU, YRI, CHN, and MEX) were included in the analysis. For GWAS, 117,347 LD-pruned SNPs for K = 5 populations (CV error = 0.48) were used. For exome chip, 10,566 uncorrelated SNPs for K = 5 populations (CV error = 0.52) were used. Three admixture estimates explained the largest amount of variation within the data and were highly correlated (r2 > 0.93) across platforms. Tests of association between individual variants and quantitative traits were computed using the Wald test from the variance component model implemented in Sequential Oligogenic Linkage Analysis Routines (SOLAR). Genetic models of association were calculated adjusting for age, gender, recruitment center, and admixture estimates. The primary inference was the additive genetic model. A lack of fit to the additive model was also tested using the orthogonal contrast (-1, 2, -1). If the lack of fit was significant (P<0.05), the "best" p-value was taken as the minimum of the dominant, additive, and recessive models. Overall, the results were modestly inflated, with inflation factors ranging from 1.04 to 1.08. QQ-plots of the six adiposity traits are shown in S1-S6 Figs.
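The transformation and covariate-adjustment steps described above can be sketched as follows. This is a simplified illustration: ordinary least squares stands in for SOLAR's variance-component model (which additionally accounts for family relatedness), and all variable names are hypothetical.

```python
import numpy as np

def transform_phenotypes(bmi, waist, whr, sat, vat, pbf):
    """Apply the trait-specific transformations used in IRASFS:
    natural log for BMI/WAIST/WHR, square root for SAT/VAT,
    and PBF left untransformed."""
    return {
        "BMI": np.log(np.asarray(bmi, dtype=float)),
        "WAIST": np.log(np.asarray(waist, dtype=float)),
        "WHR": np.log(np.asarray(whr, dtype=float)),
        "SAT": np.sqrt(np.asarray(sat, dtype=float)),
        "VAT": np.sqrt(np.asarray(vat, dtype=float)),
        "PBF": np.asarray(pbf, dtype=float),
    }

def adjusted_association(y, genotype, covariates):
    """Regress a transformed trait on SNP dosage (0/1/2 minor alleles)
    plus covariates (age, gender, center, admixture estimates).
    Returns the SNP coefficient from OLS -- a stand-in for the Wald
    test in SOLAR's variance-component model."""
    X = np.column_stack([np.ones_like(y), genotype, covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on the SNP dosage term
```

In noiseless simulated data, `adjusted_association` recovers the simulated SNP effect exactly, which is a useful sanity check before applying a model like this to real traits.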
For robust estimation purposes, the additive and recessive genetic models were not computed if there were fewer than 10 and 20 individuals homozygous for the minor allele, respectively (similar to a minimal MAF of 1% and 2%). Conditional analysis was performed by adding the SNP with the strongest statistical significance to the model as a covariate.

Power analysis

Power was computed using QUANTO (http://hydra.usc.edu/GxE). Simulations suggest that for these pedigrees the effective sample size equivalent to unrelated individuals for a quantitative trait is 92%. Thus, power calculations were based on a sample size of 951 for GWAS and 1162 for exome chip. The statistical power of our study to detect SNP-trait associations was computed assuming a type 1 error rate of α = 5.0x10-8. Overall, the OmniExpress had power of 0.70, 0.80, and 0.90 to detect SNP-trait associations that explain 3.7%, 4.1% and 4.7% of the trait variation, respectively. Similarly, the exome chip had power of 0.70, 0.80, and 0.90 to detect SNP-trait associations that explain 3.0%, 4.1% and 4.7% of the trait variation, respectively.

De novo Genotyping in IRASFS and IRAS

In an effort to directly replicate the top association signals observed from exome chip and to search for potential causal SNPs at the IDH1 and ZGRF1 loci, a total of 76 SNPs were genotyped using the Sequenom MassARRAY Genotyping System (Sequenom, San Diego, CA, USA). Among these, 51 SNPs from the exome chip were chosen for genotyping in IRAS (n = 184) for replication (P<5.0x10-5). Another 25 SNPs (including 13 missense SNPs) within the IDH1 and ZGRF1 loci that were not covered by GWAS or exome chip were chosen for genotyping in IRASFS. Overall, genotyping efficiency was greater than 95%. To evaluate genotyping accuracy, 12 and 72 blind duplicate samples were included in IRAS and IRASFS, respectively. For all SNPs, genotyping was 99% concordant.
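The power figures quoted in the Power analysis paragraph above (computed in QUANTO) can be approximated in a few lines; this sketch is not the QUANTO algorithm but treats the 1-df Wald test as a two-sided Z test under a normal approximation, which reproduces the reported values to within a few percentage points.

```python
from math import sqrt
from statistics import NormalDist

def qt_power(n, r2, alpha=5e-8):
    """Approximate power to detect a SNP explaining a fraction r2 of
    quantitative-trait variance in n (effectively unrelated) individuals,
    treating the 1-df Wald test as a two-sided Z test."""
    nd = NormalDist()
    ncp = sqrt(n * r2 / (1.0 - r2))          # noncentrality, in Z units
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)   # genome-wide two-sided cutoff
    # power = P(reject on either tail)
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
```

For example, `qt_power(951, 0.041)` lands close to the reported 0.80 for the GWAS sample, and power increases monotonically with both `n` and `r2`.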
PedCheck was computed for the IRASFS genotype data, resulting in the zeroing of 24 genotypes due to Mendelian inconsistencies. Association analysis in IRASFS was computed using SOLAR as described. Analysis of data from IRAS was computed using QSNPGWA (https://www.phs.wakehealth.edu/public/home.cfm). Overall, 38 of the 51 SNPs were polymorphic, and all SNP genotypes conformed to Hardy-Weinberg expectation (P>0.05). A total of 71 GWAS SNPs (P<5.00x10-6) from the six adiposity phenotypes were selected for replication in the six cohorts of the GUARDIAN consortium. Meta-analysis of BMI, WAIST, and WHR was computed using the fixed effect model implemented in METAL (www.sph.umich.edu/csg/abecasis/metal/) as well as a random effect model in Metasoft (http://genetics.cs.ucla.edu/meta/). For PBF, only IRASFS, BetaGene, MACAD, and HTN were included. For SAT and VAT, as they were not available in replication cohorts, a weighted meta-analysis of the p-values and sample sizes using surrogate phenotypes was performed. For example, BMI was used as the surrogate for PBF in IRAS, TRIPOD, and HTN-IR; BMI for SAT in all six replication cohorts; and WAIST for VAT in all six replication cohorts.

Evaluation of previously identified signals

A total of 127 independent signals (r2 < 0.8) associated with adiposity and adiposity-related traits with genome-wide significance from previously published studies were evaluated. A complete list of phenotypes used for the query can be found in supplemental material (S1 Text). Proxy SNPs (r2 > 0.8) for each of the 127 tag SNPs were also identified using SNAP Proxy Search under the 1000 Genomes Pilot 1 SNP data set with a distance limit of 500 kb. Association analysis was computed for all proxy SNPs with the six adiposity traits in IRASFS. Imputation of targeted variants not present on the OmniExpress Array was performed using IMPUTE2.
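The sample-size-weighted meta-analysis of p-values described above for the surrogate-phenotype traits follows the general scheme of Stouffer's method as used in METAL; a minimal sketch with hypothetical inputs (per-study two-sided p-values, effect directions, and sample sizes) might look like:

```python
from math import sqrt, copysign
from statistics import NormalDist

def stouffer_meta(pvalues, effects, sample_sizes):
    """Sample-size-weighted p-value meta-analysis (Stouffer/METAL-style).
    Each study's two-sided p is converted to a Z score signed by its
    direction of effect, weighted by sqrt(N), then recombined into a
    single Z and two-sided p."""
    nd = NormalDist()
    num, den = 0.0, 0.0
    for p, eff, n in zip(pvalues, effects, sample_sizes):
        z = copysign(nd.inv_cdf(1.0 - p / 2.0), eff)  # signed per-study Z
        w = sqrt(n)                                   # sample-size weight
        num += w * z
        den += w * w
    z_meta = num / sqrt(den)
    p_meta = 2.0 * (1.0 - nd.cdf(abs(z_meta)))
    return z_meta, p_meta
```

Two concordant, equally sized studies strengthen the combined Z by a factor of sqrt(2), while studies with opposite directions of effect (as seen for the smaller TRIPOD and NIDDM-Athero cohorts) partially cancel, which is why a consistent direction of effect across cohorts matters for replication.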
All IRASFS samples genotyped on the OmniExpress Array (n = 1034) were imputed together using the 1000 Genomes Integrated Reference Panel (March 2012). In addition, 67 SNPs with associations to BMI and obesity among the 127 SNPs were selected for risk score analysis. The risk score was generated from the number of risk alleles across the 67 SNPs. Association of the risk score with the six obesity phenotypes was tested using SOLAR, adjusting for age, gender, center, and admixture estimates.

Results

Characteristics of the study samples are shown in Table 1. Across all studies there was a higher proportion of females. On average, individuals were overweight, with a mean BMI greater than 28 kg/m2. The IRASFS exome chip analysis included an additional 229 samples (n = 1263) compared to the GWAS (n = 1034), of which 161 were individuals with T2D. This resulted in modestly increased means for adiposity-related traits. In IRASFS, 687,094 polymorphic autosomal SNPs from the OmniExpress and 81,599 SNPs from the exome chip were analyzed in 1034 and 1263 individuals, respectively. A summary of the association results is shown in Fig 1 and Table 2. In total, five SNPs from two loci reached genome-wide significance (P<5.00x10-8). Among these were four highly correlated SNPs (rs13144672, rs7696816, rs1471880, rs12054518; r2>0.96) associated with WHR in the zinc finger, GRF-type containing 1 gene (ZGRF1). SNP rs1471880 (MAF = 48.1%), an intronic variant, showed the strongest signal of association under a dominant genetic model (WHR, P DOM = 1.00x10-8) and explained 2.7% of the variance in WHR. On average, minor allele carriers had 2.3% lower WHR (0.84±0.082 compared to 0.86±0.085 in non-carriers). The second genome-wide significant signal was rs34218846 (MAF = 6.8%) with WAIST (P DOM = 1.62x10-8). This SNP explains 2.1% of the phenotypic variance and marks a valine-to-isoleucine change (V178I) in the isocitrate dehydrogenase 1 gene (IDH1) on chromosome 2.
De novo genotyping of additional, putatively functional SNPs at these loci in the IRASFS cohort did not identify additional statistically significant variants (S1 Table). Replication of signals from the IRASFS GWAS (n = 71 SNPs with P<5.00x10-6) was attempted through meta-analysis with six additional Hispanic-American cohorts. Overall, no SNP attained genome-wide significance after meta-analysis (Table 3 and S2 and S3 Tables). The most significant signal remained rs1471880 (P DOM = 8.38x10-6) at the ZGRF1 locus, associated with WAIST; this was also the strongest signal identified by GWAS (WHR, P DOM = 1.00x10-8; WAIST, P DOM = 6.47x10-7). Among the replication cohorts, similar allele frequencies and a consistent direction of effect were observed in the five larger cohorts, while the two smaller cohorts, TRIPOD (n = 125) and NIDDM-Athero (n = 179), had an opposite direction of effect (S7 Fig). For IDH1, the top SNP rs34218846 was identified from the exome chip and was not available for in silico replication in the additional cohorts. Analysis of two GWAS proxy SNPs near IDH1, rs6435435 (r2 = 0.91 with rs34218846; P DOM = 1.73x10-6 for BMI) and rs6734788 (r2 = 0.37 with rs34218846; P ADD = 7.33x10-7 for WAIST), resulted in decreased significance (rs6435435 P DOM = 0.11 with BMI and rs6734788 P ADD = 7.98x10-4 with WAIST) with inconsistent directions of effect. De novo genotyping of variants at the IDH1 locus in IRAS (n = 187) revealed five nominally associated SNPs (P<0.05), of which two, rs12105636 (BMI P ADD = 0.046) and rs16840781 (BMI P DOM = 0.030), were significant with a consistent direction of effect. However, the top IDH1 missense SNP (rs34218846) was not significant (WAIST P ADD = 0.45) and had an opposite direction of effect (S4 Table). SNPs rs12105636 and rs16840781 had nominal association signals in the IRASFS GWAS (WAIST P DOM = 3.94x10-3 and P DOM = 2.33x10-3, respectively) and were poorly correlated with rs34218846 (r2 = 0.34).
In addition to the search for novel adiposity variants, 127 independent signals (r2<0.8) associated with adiposity and adiposity-related traits at genome-wide significance in previously published studies were evaluated in IRASFS. Among these, 116 SNPs were directly genotyped or successfully imputed in IRASFS (S5 Table). Overall, 71 SNPs showed nominal association (P<0.05) with a consistent direction of effect for at least one of the six adiposity traits: 23 SNPs for BMI, 17 for WAIST, 13 for WHR, 31 for SAT, 21 for VAT, and 13 for PBF. A two-sided nonparametric sign test was computed for p-value thresholds of 0.10, 0.05, 0.01, and 4.31x10-4 (a Bonferroni correction for 116 variants), and the results are summarized in S6 Table. In brief, significantly higher replication signal concordance was observed for SAT and VAT (P<0.05). However, no replication signal survived Bonferroni correction. The strongest signal observed was rs2820464, located intergenically between the lysophospholipase-like 1 gene (LYPLAL1) and the solute carrier family 30, member 10 gene (SLC30A10), associated with SAT (P ADD = 7.06x10-4). This variant was identified in a European cohort for an association with WHR (P = 7.00x10-9).

Table 3. Fixed-effect meta-analysis results (P<2.0x10-3) for significant signals of association (P<5.00x10-6) from IRASFS.

Risk score analysis of the 67 previously identified obesity SNPs showed the strongest signal for SAT (P = 5.9x10-4). BMI, WAIST, and PBF were nominally associated, with p-values of 2.2x10-3, 7.7x10-3, and 3.2x10-3, respectively. Not surprisingly, VAT (P = 0.22) and WHR (P = 0.83) were not associated with the risk score, as they are measures of adiposity deposition rather than total fat volume.

Discussion

Here we present a combined study of genome-wide and exome chip arrays to investigate the genetic determinants of adiposity measures in the Hispanic-American population.
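The two-sided sign test used above for direction-of-effect concordance is an exact binomial test against chance concordance (probability 0.5). A minimal sketch; applying it to the overall 71-of-116 count is only an illustration, since the paper applies the test per p-value threshold:

```python
from math import comb

def sign_test_p(n_concordant, n_total):
    """Two-sided exact sign (binomial) test of the null that the
    direction of effect matches by chance (probability 0.5)."""
    k = max(n_concordant, n_total - n_concordant)
    # double the upper tail of Binomial(n_total, 0.5), capped at 1
    tail = sum(comb(n_total, i) for i in range(k, n_total + 1)) / 2 ** n_total
    return min(1.0, 2 * tail)

p = sign_test_p(71, 116)  # ≈ 0.02: concordance beyond chance at the 0.05 level
```

With exactly half the signals concordant (58 of 116) the test returns 1.0, as expected under the null.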
The complementary approach of using GWAS and exome chip arrays enabled broader coverage of both common and rare functional variants, increasing the chance of identifying causal mutations. Obesity-related traits evaluated included anthropometric (WAIST, WHR, and BMI), CT (SAT and VAT), and DEXA (PBF) measures. The CT and DEXA scans provided more accurate estimates of regional and total adiposity, respectively. We evaluated associations among Hispanic Americans from IRASFS (n max = 1263) using GWAS and exome chip analysis, with replication in six independent Hispanic cohorts (n max = 4155). Association studies revealed ZGRF1 and IDH1 as two possible novel adiposity-related loci: ZGRF1 was associated with waist-hip ratio (P DOM = 1.00x10-8) and IDH1 with waist circumference (P DOM = 1.62x10-8). Overall, three intronic variants and one missense SNP in ZGRF1 reached genome-wide significance for WHR (Table 2). The missense mutation (rs7696816) marks an asparagine-to-serine amino acid change predicted to be benign by PolyPhen. The specific function of this gene remains unclear; the overall expression of ZGRF1 in the human body is relatively low, with the exception of brain and testis. Direct replication of the ZGRF1 signals was performed across six cohorts, and the strongest signal from meta-analysis was rs1471880 (P DOM = 8.38x10-6; Table 3). A consistent direction of effect was observed across the five larger cohorts (n max = 3645); however, the statistical significance decreased (S7 Fig). Examination of this region in the GIANT (Genetic Investigation of Anthropometric Traits) Consortium for BMI and class 1 obesity (BMI>30) failed to reveal significant signals of association at the ZGRF1 locus (P>0.01; S8 Fig). Interestingly, previous studies have identified ALPK1 (rs4833407), 100kb proximal to ZGRF1, as associated with obesity in European populations.
However, the two SNPs in ALPK1 and ZGRF1 were poorly correlated in both CEU and IRASFS Hispanic Americans (r2 = 0.005 and 0.013, respectively). In IRASFS, most association signals centered around the ZGRF1 locus, with a few in NEUROG2 and very weak signals in ALPK1 (Fig 2). NEUROG2 is a proneural protein (a neurogenin) that has been shown to control cortical neuron migration through regulation of the small GTP-binding protein Rnd2; no direct link with adiposity has been established. Conditional analysis of this region with rs1471880 as a covariate abolished all association signals in ZGRF1 as well as the signals in nearby NEUROG2, without changes in ALPK1 (Fig 2).

Fig 2 legend: Association analyses were computed with adjustment for age, gender, recruitment center, and admixture estimates, with SNP rs1471880 as an additional covariate in panel B. The recombination rates are indicated on the right-hand Y axis based on HapMap. The color of each SNP annotates its correlation (r2) with the index SNP and was taken from the 1000 Genomes AMR population. A circle denotes intronic and intergenic SNPs, a triangle denotes a missense SNP, and a square denotes a SNP in the untranslated region (UTR).

IDH1 encodes the cytosolic NADP+-dependent isocitrate dehydrogenase (IDPc), which has been proposed as a key enzyme for supplying cytosolic NADPH. The most significant association signal observed was SNP rs34218846 (MAF = 0.068; P DOM = 1.62x10-8), encoding a missense mutation from valine to isoleucine in exon 6 and predicted as "probably damaging" by PolyPhen. This mutation is located at the subunit dimerization interface, suggesting a potential regulatory role in gene function (S9 Fig). Previous genetic studies have suggested a strong correlation between IDH1 mutations and cancer. A biological link between IDH1 and adiposity has been postulated using cell models.
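The conditional analysis used above — re-testing nearby SNPs with the top SNP added as a covariate — can be illustrated with a toy least-squares example in which a neighboring SNP's marginal signal is entirely explained by LD with the causal SNP (all data below are made up for illustration):

```python
def ols(X, y):
    """Least-squares coefficients via the normal equations,
    solved by Gauss-Jordan elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    M = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
         + [sum(X[i][a] * y[i] for i in range(n))] for a in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(p):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[a][p] / M[a][a] for a in range(p)]

# toy data: trait driven entirely by snp_a; snp_b is a correlated neighbor
snp_a = [0, 1, 2, 0, 1, 2, 0, 1]
snp_b = [0, 1, 2, 0, 1, 2, 1, 0]
trait = [2 * g for g in snp_a]

# marginal test of snp_b: picks up the signal through LD (slope ~1.6)
b_marg = ols([[1, b] for b in snp_b], trait)[1]

# conditional test with snp_a (the top SNP) as covariate: slope collapses to ~0
b_cond = ols([[1, b, a] for b, a in zip(snp_b, snp_a)], trait)[1]
```

This is the pattern reported above: once rs1471880 (or rs34218846 at IDH1) enters the model, correlated neighbors lose all evidence of association.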
Specifically, stable transfection of IDH1 cDNA positively correlated with adipogenesis of 3T3-L1 cells, whereas decreasing IDPc expression with an antisense IDPc vector retarded 3T3-L1 adipogenesis. A more recent study reported that knockdown of IDPc expression by RNA interference (RNAi) inhibited adipocyte differentiation and lipogenesis in 3T3-L1 preadipocytes. In addition, in diet-induced obese mice transduced with IDPc short-hairpin RNA, a loss of body weight and a reduction in triglyceride levels were observed. Evaluation of serum triglyceride levels in IRASFS revealed that carriers of the rs34218846 T allele (the adiposity-protective allele) had a 20 mg/dL decrease in triglyceride levels compared to non-carriers (P DOM = 7.79x10-3). Taken together, IDH1 appears to play an important role in fat metabolism. SNP rs34218846 was not directly genotyped among the replication cohorts. Therefore, two proxy SNPs, rs6435435 (P DOM = 1.73x10-6 for BMI; r2 = 0.91 with rs34218846) and rs6734788 (P ADD = 7.33x10-7 for WAIST; r2 = 0.37 with rs34218846), were selected for meta-analysis. However, these proxies failed to replicate (rs6435435 P DOM = 6.74x10-5 for WAIST and rs6734788 P ADD = 9.02x10-5 for WHR). A similar lack of association was observed in IRAS (n = 184) with direct genotyping of rs34218846 (P = 0.45; S4 Table), which could be attributed to reduced power given the small sample size. To search for additional putatively causal variants in IDH1, we conducted de novo genotyping in IRASFS, which revealed an intronic SNP (rs59684347) showing stronger evidence of association (P ADD = 7.42x10-9; WAIST) (Fig 3). However, rs34218846 and rs59684347 were highly correlated (r2 = 1.00), and all evidence of association in the region was abolished after inclusion of rs34218846 as a covariate in the analysis (Fig 3). Overall, IDH1 represents a promising locus with evidence of association to adiposity-related traits, especially waist circumference.
Notably, larger cohorts from European-derived populations in the GIANT Consortium have identified BMI-associated signals in CRYGD (rs10932241), which is 100kb proximal to IDH1. However, there was no signal of association at the IDH1 locus in GIANT, and rs10932241 was poorly correlated with rs34218846 (r2 = 0.057) and only marginally associated with BMI in IRASFS (p-value = 0.057; S10 Fig), despite a minor allele frequency similar to that observed in European populations (MAF = 5.31%). In summary, although the results are encouraging, this study has several limitations. As in most studies of minority populations, sample size limited the power, especially for rare variants assessed on the exome chip. In addition, the utility of the Illumina HumanExome Array in Hispanic Americans is not optimal, as only 81,559 out of 242,901 SNPs on the array were polymorphic, likely attributable to a design based on findings in Caucasians and African Americans. The application of the Illumina OmniExpress BeadChip raises similar concerns: the SNPs on the chip may not tag the LD structure as well in Hispanic Americans. Another issue is the lack of replication signals: all signals fell below the significance threshold after meta-analysis. There are several possible reasons. First, the replication cohorts were limited to directly genotyped GWAS variants, and we were unable to replicate signals from the exome chip across all cohorts. Second, some replication cohorts did not have CT and DEXA measures, necessitating the use of surrogate phenotypes. Third, while all cohorts were of Hispanic ancestry, different ascertainment criteria were used. For example, BetaGene recruited participants at high risk of gestational diabetes, while HTN-IR recruited participants at high risk of hypertension. This differs from IRASFS, which is a population-based study recruited on the basis of large family size. Additionally, the sample sizes for IRAS, TRIPOD, and NIDDM-Athero were relatively small.
This may explain why some of the more significant associations, e.g. rs1471880, demonstrated an opposite direction of effect in TRIPOD (n = 125) and NIDDM-Athero (n = 179) (S7 Fig). Another concern is that the large effect sizes of IDH1 (2.1%) and ZGRF1 (2.7%) observed in this study contrast with the weak signals at these loci in previous European population studies (S8 and S10 Figs). One explanation is the potential for ethnic-specific variants, or the signals may be the result of gene-environment effects. It is also possible that the signals observed are not causal and were detected due to long-range LD with other loci. To date, VIVA LA FAMILIA has been the only cohort with published genome-wide significant obesity-related signals specific to the Hispanic population. Further evaluation of the obesity-related loci from VIVA LA FAMILIA in IRASFS revealed nominal association for rs2823615 (P DOM = 7.86x10-3 with SAT), an intronic SNP in the Family with Sequence Similarity 222 Member A gene (FAM222A). This SNP has been shown to be associated with increased respiratory quotient in VIVA LA FAMILIA and increased SAT in IRASFS.

Fig 3 legend: Association analyses were computed with adjustments for age, gender, recruiting center, and admixture estimates, with SNP rs34218846 as an additional covariate in panel B. The recombination rates are indicated on the right-hand Y axis based on HapMap. The color of each SNP annotates its correlation (r2) with the index SNP and was taken from the 1000 Genomes AMR population. A circle denotes intronic and intergenic SNPs, a triangle denotes a missense SNP, and a square denotes a SNP in the untranslated region (UTR).

In summary, we performed a combined study of genome-wide and exome chip arrays in the IRASFS Hispanic-American population. Six obesity-related traits were analyzed for association. ZGRF1 and IDH1 attained genome-wide significance in IRASFS, and replication of significant signals was evaluated in six additional Hispanic cohorts (n max = 4155).
Meta-analysis yielded decreased levels of significance (ZGRF1 rs1471880, P DOM = 8.38x10-6; IDH1 rs6435435, P DOM = 6.74x10-5). These results highlight the importance of GWAS and exome chip research in minority populations, where an increased prevalence of adiposity-related diseases may be associated with a genetic architecture that differs from that of European-derived populations.
MPC-Based Virtual Synchronous Generator for LVRT Capability Enhancement of DFIG-Based Wind Farm with Battery Energy Storage System

More and more large-scale wind farms are being interfaced into the power grid to achieve sustainable development and environmental protection, and the penetration of wind power generation in the power system is increasing. To improve the frequency support capability of wind turbines (WTs) and weaken the adverse effects of the phase-locked loop, virtual synchronous generator (VSG) control of WTs is in the ascendant. However, a VSG-controlled doubly-fed induction generator (DFIG) based wind farm does not have sufficient capability to ride through grid faults such as voltage drops. In this paper, a model predictive control-based VSG control for the DFIG is proposed, which can limit the current surge and accelerate the decay of the transient flux component under symmetrical voltage faults, and suppress the transient and negative-sequence rotor current components under asymmetrical voltage faults. At the same time, a method for calculating the power reference values of the DFIGs and the battery energy storage system under voltage faults is proposed, which can reduce the decline in wind farm output active power while providing the required reactive current to the power grid. Finally, simulation and experimental results verify that the proposed control strategy is effective in enhancing the low-voltage ride-through capability of DFIG-based wind farms.
/**
 * Returns all existing LDAP groups configured in Vault.
 * @param token
 * @return LDAP group names configured in Vault
 *
 * Sample output
 * { "keys": ["ldapgroup1","ldapgroup2"] }
 */
@ApiOperation(value = "${LDAPAuthControllerV2.listLdapGroups.value}", notes = "${LDAPAuthControllerV2.listLdapGroups.notes}", hidden = true)
@GetMapping(value = "/v2/auth/ldap/groups", produces = "application/json")
public ResponseEntity<String> listLdapGroups(@RequestHeader(value = "vault-token", required = false) String token) {
	return ldapAuthService.listLdapGroups(token);
}
import torch
from torch import nn


class Involution(nn.Module):
    """
    Implementation of the Involution operator proposed in [1].

    Parameters
    ----------
    in_channels : int
        Number of channels in the input tensor.

    kernel_size : int
        Size of the involution kernel.

    stride : int
        Stride for the sliding blocks.

    reduction : int, optional, default=4
        Reduction ratio to control the intermediate channel dimension.

    group_channels : int, optional, default=16
        Number of channels in a group. Each group shares the same involution kernel.

    References
    ----------
    1. "`Involution: Inverting the Inherence of Convolution for Visual Recognition. \
    <https://arxiv.org/abs/2103.06255>`_" <NAME>, et al. CVPR 2021.
    """
    def __init__(
        self,
        in_channels: int,
        kernel_size: int,
        stride: int,
        reduction: int = 4,
        group_channels: int = 16
    ) -> None:
        super(Involution, self).__init__()

        out_channels = in_channels // reduction
        padding = (kernel_size - 1) // 2

        self.in_channels = in_channels
        self.stride = stride
        self.group_channels = group_channels
        self.groups = in_channels // group_channels
        self.kernel_size = kernel_size

        self.pool = nn.AvgPool2d(kernel_size=stride) if stride > 1 else nn.Identity()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            nn.Conv2d(out_channels, (kernel_size ** 2) * self.groups, kernel_size=1, stride=1)
        )
        self.unfold = nn.Unfold(kernel_size, dilation=1, padding=padding, stride=stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Parameters
        ----------
        x : torch.Tensor (batch_size, in_channels, height, width)
            Input tensor.

        Returns
        -------
        out : torch.Tensor (batch_size, in_channels, H = height / stride, W = width / stride)
            Output of the involution layer.
        """
        height, width = x.size(2), x.size(3)
        assert height % self.stride == 0 and width % self.stride == 0

        # ----- generate kernel -----
        kernel = self.conv(self.pool(x))  # (batch_size, G * K * K, H = height / stride, W = width / stride)
        batch_size, _, h, w = kernel.shape
        kernel = kernel.view(batch_size, self.groups, self.kernel_size ** 2, h, w)
        kernel = kernel.unsqueeze(2)  # (batch_size, G, 1, K * K, H, W)

        # ----- involution -----
        unfolded_x = self.unfold(x)  # (batch_size, in_channels * K * K, H * W)
        unfolded_x = unfolded_x.view(batch_size, self.groups, self.group_channels, self.kernel_size ** 2, h, w)

        out = (kernel * unfolded_x).sum(dim=3)  # (batch_size, G, group_channels, H, W)
        out = out.view(batch_size, self.in_channels, h, w)
        return out
// This file is part of the tetris-table project. Copyright (c) <NAME>.

#pragma once

#include <stddef.h>
#include <stdint.h>

#include <atomic>

// abstract the transport layer (eg, Serial vs...)
class SenderInterface
{
public:
    virtual ~SenderInterface() = default;

    // @returns: whether the PushBuffer methods are usable
    virtual bool CanSend() = 0;
    virtual void PushBuffer(const uint8_t* buffer, size_t bufferLength) = 0;
    void PushBuffer(const void* buffer, size_t bufferLength)
    {
        PushBuffer((uint8_t*)buffer, bufferLength);
    }

    virtual bool CanReceive() = 0;
    // @returns: number of readable bytes
    virtual size_t Available() = 0;
    // @param buffer: destination buffer
    // @param bufferLength: destination buffer capacity
    // @returns: read byte count
    virtual size_t ReceiveBuffer(uint8_t* buffer, size_t bufferLength) = 0;

    // used by the Serial sender to signal end-of-frame acknowledgment by the arduino
    std::atomic<bool> LastSegmentAck{true};
};
<reponame>dram/metasfresh package de.metas.shipper.gateway.derkurier.misc; import org.apache.commons.lang.StringUtils; import org.compiere.util.Env; import com.google.common.annotations.VisibleForTesting; import de.metas.document.DocumentSequenceInfo; import de.metas.document.IDocumentSequenceDAO; import de.metas.document.sequence.IDocumentNoBuilder; import de.metas.document.sequence.IDocumentNoBuilderFactory; import de.metas.util.Check; import de.metas.util.Services; import lombok.AccessLevel; import lombok.Getter; import lombok.NonNull; /* * #%L * de.metas.shipper.gateway.derkurier * %% * Copyright (C) 2018 metas GmbH * %% * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as * published by the Free Software Foundation, either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public * License along with this program. If not, see * <http://www.gnu.org/licenses/gpl-2.0.html>. 
 * #L%
 */
public class ParcelNumberGenerator
{
	public static final int NO_AD_SEQUENCE_ID_FOR_TESTING = -99;

	private final IDocumentNoBuilder documentNoBuilder;

	@VisibleForTesting
	@Getter(value = AccessLevel.PACKAGE)
	private final int adSequenceId;

	/** for unit testing only */
	@VisibleForTesting
	ParcelNumberGenerator()
	{
		this(NO_AD_SEQUENCE_ID_FOR_TESTING);
	}

	public ParcelNumberGenerator(final int parcelNumberAdSequenceId)
	{
		this.adSequenceId = parcelNumberAdSequenceId;

		final DocumentSequenceInfo documentSeqInfo = Services.get(IDocumentSequenceDAO.class)
				.retriveDocumentSequenceInfo(parcelNumberAdSequenceId);

		this.documentNoBuilder = Services.get(IDocumentNoBuilderFactory.class)
				.createDocumentNoBuilder()
				.setClientId(Env.getClientId())
				.setDocumentSequenceInfo(documentSeqInfo)
				.setFailOnError(true);
	}

	public String getNextParcelNumber()
	{
		final String parcelNumberWithoutCheckDigit = documentNoBuilder.build();
		return computeAndAppendCheckDigit(parcelNumberWithoutCheckDigit);
	}

	@VisibleForTesting
	String computeAndAppendCheckDigit(@NonNull final String parcelNumberWithoutCheckDigit)
	{
		// See #3991;
		Check.assumeNotEmpty(parcelNumberWithoutCheckDigit, "Parcel Number is empty");
		Check.assume(StringUtils.isNumeric(parcelNumberWithoutCheckDigit),
				"Parcel Number must only contain digits but it is: " + parcelNumberWithoutCheckDigit);

		final int checkDigit = computeCheckDigit(parcelNumberWithoutCheckDigit);
		return parcelNumberWithoutCheckDigit + checkDigit;
	}

	private int computeCheckDigit(@NonNull final String parcelNumberWithoutCheckDigit)
	{
		int sumOdd = 0;
		int sumEven = 0;
		for (int i = 0; i < parcelNumberWithoutCheckDigit.length(); i++)
		{
			// digits at even string indices are the "odd" (1st, 3rd, ...) positions
			if (i % 2 == 0)
			{
				sumOdd += Integer.parseInt(Character.toString(parcelNumberWithoutCheckDigit.charAt(i)));
			}
			else
			{
				sumEven += Integer.parseInt(Character.toString(parcelNumberWithoutCheckDigit.charAt(i)));
			}
		}
		int result = 3 * sumOdd + sumEven;
		result = (10 - result % 10) % 10;
		return result;
	}
}
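For reference, a quick Python mirror of the check-digit scheme implemented above: digits at even string indices (the 1st, 3rd, ... positions) are weighted 3, the rest weighted 1, and the check digit completes the weighted sum to the next multiple of 10. This is an illustrative sketch for verifying the algorithm by hand, not part of the codebase:

```python
def append_check_digit(parcel_number: str) -> str:
    """Mirror of ParcelNumberGenerator.computeAndAppendCheckDigit."""
    assert parcel_number.isdigit(), "parcel number must only contain digits"
    sum_odd = sum(int(c) for c in parcel_number[0::2])   # even indices: weight 3
    sum_even = sum(int(c) for c in parcel_number[1::2])  # odd indices: weight 1
    check = (10 - (3 * sum_odd + sum_even) % 10) % 10
    return parcel_number + str(check)

print(append_check_digit("12345"))  # -> "123457" (3*(1+3+5) + (2+4) = 33, check digit 7)
```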
<filename>asn1c_defs/X2N_E-RABs-Admitted-Item.c /* * Generated by asn1c-0.9.29 n1 (http://lionet.info/asn1c) * From ASN.1 module "X2AP-PDU-Contents" * found in "../../asn_defs/asn1/x2ap-15-04.asn" * `asn1c -fcompound-names -fno-include-deps -findirect-choice -gen-PER -no-gen-OER` */ #include "X2N_E-RABs-Admitted-Item.h" #include "X2N_GTPtunnelEndpoint.h" #include "X2N_ProtocolExtensionContainer.h" static asn_TYPE_member_t asn_MBR_X2N_E_RABs_Admitted_Item_1[] = { { ATF_NOFLAGS, 0, offsetof(struct X2N_E_RABs_Admitted_Item, e_RAB_ID), (ASN_TAG_CLASS_CONTEXT | (0 << 2)), -1, /* IMPLICIT tag at current level */ &asn_DEF_X2N_E_RAB_ID, 0, { 0, 0, 0 }, 0, 0, /* No default value */ "e-RAB-ID" }, { ATF_POINTER, 3, offsetof(struct X2N_E_RABs_Admitted_Item, uL_GTP_TunnelEndpoint), (ASN_TAG_CLASS_CONTEXT | (1 << 2)), -1, /* IMPLICIT tag at current level */ &asn_DEF_X2N_GTPtunnelEndpoint, 0, { 0, 0, 0 }, 0, 0, /* No default value */ "uL-GTP-TunnelEndpoint" }, { ATF_POINTER, 2, offsetof(struct X2N_E_RABs_Admitted_Item, dL_GTP_TunnelEndpoint), (ASN_TAG_CLASS_CONTEXT | (2 << 2)), -1, /* IMPLICIT tag at current level */ &asn_DEF_X2N_GTPtunnelEndpoint, 0, { 0, 0, 0 }, 0, 0, /* No default value */ "dL-GTP-TunnelEndpoint" }, { ATF_POINTER, 1, offsetof(struct X2N_E_RABs_Admitted_Item, iE_Extensions), (ASN_TAG_CLASS_CONTEXT | (3 << 2)), -1, /* IMPLICIT tag at current level */ &asn_DEF_X2N_ProtocolExtensionContainer_8231P5, 0, { 0, 0, 0 }, 0, 0, /* No default value */ "iE-Extensions" }, }; static const int asn_MAP_X2N_E_RABs_Admitted_Item_oms_1[] = { 1, 2, 3 }; static const ber_tlv_tag_t asn_DEF_X2N_E_RABs_Admitted_Item_tags_1[] = { (ASN_TAG_CLASS_UNIVERSAL | (16 << 2)) }; static const asn_TYPE_tag2member_t asn_MAP_X2N_E_RABs_Admitted_Item_tag2el_1[] = { { (ASN_TAG_CLASS_CONTEXT | (0 << 2)), 0, 0, 0 }, /* e-RAB-ID */ { (ASN_TAG_CLASS_CONTEXT | (1 << 2)), 1, 0, 0 }, /* uL-GTP-TunnelEndpoint */ { (ASN_TAG_CLASS_CONTEXT | (2 << 2)), 2, 0, 0 }, /* dL-GTP-TunnelEndpoint */ { 
(ASN_TAG_CLASS_CONTEXT | (3 << 2)), 3, 0, 0 } /* iE-Extensions */ }; static asn_SEQUENCE_specifics_t asn_SPC_X2N_E_RABs_Admitted_Item_specs_1 = { sizeof(struct X2N_E_RABs_Admitted_Item), offsetof(struct X2N_E_RABs_Admitted_Item, _asn_ctx), asn_MAP_X2N_E_RABs_Admitted_Item_tag2el_1, 4, /* Count of tags in the map */ asn_MAP_X2N_E_RABs_Admitted_Item_oms_1, /* Optional members */ 3, 0, /* Root/Additions */ 4, /* First extension addition */ }; asn_TYPE_descriptor_t asn_DEF_X2N_E_RABs_Admitted_Item = { "E-RABs-Admitted-Item", "E-RABs-Admitted-Item", &asn_OP_SEQUENCE, asn_DEF_X2N_E_RABs_Admitted_Item_tags_1, sizeof(asn_DEF_X2N_E_RABs_Admitted_Item_tags_1) /sizeof(asn_DEF_X2N_E_RABs_Admitted_Item_tags_1[0]), /* 1 */ asn_DEF_X2N_E_RABs_Admitted_Item_tags_1, /* Same as above */ sizeof(asn_DEF_X2N_E_RABs_Admitted_Item_tags_1) /sizeof(asn_DEF_X2N_E_RABs_Admitted_Item_tags_1[0]), /* 1 */ { 0, 0, SEQUENCE_constraint }, asn_MBR_X2N_E_RABs_Admitted_Item_1, 4, /* Elements count */ &asn_SPC_X2N_E_RABs_Admitted_Item_specs_1 /* Additional specs */ };
package com.imwyw.spannotation;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * @author wangyuanwei
 * @title: SpAnnotationApplication
 * @projectName springboot-demo
 * @description: Description
 * @date 2021/4/26 23:13
 */
@SpringBootApplication
public class SpAnnotationApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpAnnotationApplication.class, args);
    }
}
import logging from django.conf import settings from django.http import HttpRequest, HttpResponse from django.utils.module_loading import import_string from django.views.generic import TemplateView logger = logging.getLogger(__name__) class SamlIDPErrorView(TemplateView): """ Default error view used when a 'known' error occurs in the saml2 authentication views. Subclass this to use your own template and styling for the error page (only set template_name on your subclass), or to do entirely customized error handling (override the handle_error method). settings.SAML_IDP_ERROR_VIEW_CLASS should point to your customized subclass. """ template_name = 'djangosaml2idp/error.html' @classmethod def handle_error(cls, request: HttpRequest, exception: Exception, status_code: int = 500, **kwargs) -> HttpResponse: """ Default behaviour: log the exception as error-level, and render an error page with the desired status_code on the response. """ logger.error(kwargs, exc_info=exception) # Render an http response and return it response = cls.as_view()(request, exception=exception, **kwargs) response.status_code = status_code return response def get_context_data(self, **kwargs) -> dict: """ Add some exception-related variables to the context for usage in the template. """ context = super().get_context_data(**kwargs) exception = kwargs.get("exception") context.update({ "exception": exception, "exception_type": exception.__class__.__name__ if exception else None, "exception_msg": exception.message if exception and hasattr(exception, 'message') else str(exception) if exception else None, "extra_message": kwargs.get("extra_message"), }) return context error_cbv = import_string(getattr(settings, 'SAML_IDP_ERROR_VIEW_CLASS', 'djangosaml2idp.error_views.SamlIDPErrorView'))
// Copyright 2017-2022 @polkadot/api-derive authors & contributors // SPDX-License-Identifier: Apache-2.0 import type { Observable } from 'rxjs'; import type { Compact } from '@polkadot/types'; import type { BlockNumber } from '@polkadot/types/interfaces'; import type { DeriveApi } from '../types'; import { map } from 'rxjs'; import { memo } from '../util'; // re-export these - since these needs to be resolvable from api-derive, i.e. without this // we would emit code with ../<somewhere>/src embedded in the *.d.ts files export type { BlockNumber } from '@polkadot/types/interfaces'; export function unwrapBlockNumber <T extends { number: Compact<BlockNumber> }> (fn: (api: DeriveApi) => Observable<T>): (instanceId: string, api: DeriveApi) => () => Observable<BlockNumber> { return (instanceId: string, api: DeriveApi) => memo(instanceId, () => fn(api).pipe( map((r) => r.number.unwrap()) )); }
/* * Copyright (C) 2021 Huawei Device Co., Ltd. * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "classic_config.h" #include <vector> #include "bt_def.h" #include "classic_defs.h" #include "log.h" namespace bluetooth { ClassicConfig &ClassicConfig::GetInstance() { static ClassicConfig instance; return instance; } ClassicConfig::ClassicConfig() : config_(AdapterDeviceConfig::GetInstance()) {} ClassicConfig::~ClassicConfig() {} bool ClassicConfig::LoadConfigFile() const { /// Load Device Config File. 
bool ret = config_->Load();
    if (!ret) {
        LOG_ERROR("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return ret;
}

bool ClassicConfig::Save() const
{
    bool ret = config_->Save();
    if (!ret) {
        LOG_ERROR("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return ret;
}

std::string ClassicConfig::GetLocalName() const
{
    std::string name = "";
    if (!config_->GetValue(SECTION_HOST, PROPERTY_DEVICE_NAME, name)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return name;
}

bool ClassicConfig::SetLocalName(const std::string &name) const
{
    if (!config_->SetValue(SECTION_HOST, PROPERTY_DEVICE_NAME, name)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetLocalAddress(const std::string &addr) const
{
    if (!config_->SetValue(SECTION_HOST, PROPERTY_DEVICE_ADDR, addr)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

int ClassicConfig::GetLocalDeviceClass() const
{
    int cod = 0;
    if (!config_->GetValue(SECTION_HOST, PROPERTY_CLASS_OF_DEVICE, cod)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return cod;
}

bool ClassicConfig::SetLocalDeviceClass(int cod) const
{
    if (!config_->SetValue(SECTION_HOST, PROPERTY_CLASS_OF_DEVICE, cod)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

int ClassicConfig::GetIoCapability() const
{
    int io = 0;
    if (!config_->GetValue(SECTION_HOST, PROPERTY_IO_CAPABILITY, io)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return io;
}

int ClassicConfig::GetDiscoverableTimeout() const
{
    int time = 0;
    if (!config_->GetValue(SECTION_HOST, PROPERTY_DISCOVERABLE_TIMEOUT, time)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return time;
}

bool ClassicConfig::SetDiscoverableTimeout(int time) const
{
    if (!config_->SetValue(SECTION_HOST, PROPERTY_DISCOVERABLE_TIMEOUT, time)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

std::string ClassicConfig::GetLocalPasskey() const
{
    std::string passkey = "";
    if (!config_->GetValue(SECTION_HOST, PROPERTY_LOCAL_PASSKEY, passkey)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return passkey;
}

int ClassicConfig::GetSecurityMode() const
{
    int securityMode = 0;
    if (!config_->GetValue(SECTION_HOST, PROPERTY_SECURITY_MODE, securityMode)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return securityMode;
}

std::vector<std::string> ClassicConfig::GetPairedAddrList() const
{
    std::vector<std::string> pairedList;
    if (!config_->GetSubSections(SECTION_BREDR_PAIRED_LIST, pairedList)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return pairedList;
}

std::string ClassicConfig::GetRemoteName(const std::string &subSection) const
{
    std::string name = "";
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_DEVICE_NAME, name)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return name;
}

std::string ClassicConfig::GetRemoteAlias(const std::string &subSection) const
{
    std::string name = "";
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_ALIAS_NAME, name)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return name;
}

std::string ClassicConfig::GetRemoteLinkkey(const std::string &subSection) const
{
    std::string key = "";
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_LINK_KEY, key)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return key;
}

int ClassicConfig::GetRemoteDeviceType(const std::string &subSection) const
{
    int type = 0;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_DEVICE_TYPE, type)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return type;
}

int ClassicConfig::GetRemoteLinkkeyType(const std::string &subSection) const
{
    int type = 0;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_LINK_KEY_TYPE, type)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return type;
}

int ClassicConfig::GetRemoteDeviceClass(const std::string &subSection) const
{
    int cod = 0;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_CLASS_OF_DEVICE, cod)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return cod;
}

int ClassicConfig::GetRemoteDeviceIoCapability(const std::string &subSection) const
{
    int io = 0;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_IO_CAPABILITY, io)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return io;
}

bool ClassicConfig::SetRemoteName(const std::string &subSection, const std::string &name) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_DEVICE_NAME, name)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteAlias(const std::string &subSection, const std::string &name) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_ALIAS_NAME, name)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteDeviceType(const std::string &subSection, int type) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_DEVICE_TYPE, type)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteLinkkey(const std::string &subSection, const std::string &linkKey) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_LINK_KEY, linkKey)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteLinkkeyType(const std::string &subSection, int type) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_LINK_KEY_TYPE, type)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteDeviceClass(const std::string &subSection, int cod) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_CLASS_OF_DEVICE, cod)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteDeviceIoCapability(const std::string &subSection, int io) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_IO_CAPABILITY, io)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::GetRemoteDevicePairFlag(const std::string &subSection) const
{
    bool flag = false;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_PAIR_FLAG, flag)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return flag;
}

bool ClassicConfig::GetRemoteDeviceBondFromLocal(const std::string &subSection) const
{
    bool flag = false;
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_BOND_FROM_LOCAL, flag)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return flag;
}

bool ClassicConfig::SetRemoteDevicePairFlag(const std::string &subSection, const bool flag) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_PAIR_FLAG, flag)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteDeviceBondFromLocal(const std::string &subSection, const bool flag) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_BOND_FROM_LOCAL, flag)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::RemovePairedDevice(const std::string &subSection) const
{
    if (!config_->RemoveSection(SECTION_BREDR_PAIRED_LIST, subSection)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

bool ClassicConfig::SetRemoteUuids(const std::string &subSection, const std::string &uuids) const
{
    if (!config_->SetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_REMOTE_UUIDS, uuids)) {
        LOG_WARN("[ClassicConfig]::%{public}s failed!", __func__);
        return false;
    }
    return true;
}

std::string ClassicConfig::GetRemoteUuids(const std::string &subSection) const
{
    std::string uuids = "";
    if (!config_->GetValue(SECTION_BREDR_PAIRED_LIST, subSection, PROPERTY_REMOTE_UUIDS, uuids)) {
        LOG_INFO("[ClassicConfig]::%{public}s failed!", __func__);
    }
    return uuids;
}
}  // namespace bluetooth
//
//     Generated by classdumpios 1.0.1 (64 bit) (iOS port by DreamDevLost)(Debug version compiled Jun 10 2020 10:03:13).
//
//     Copyright (C) 1997-2019 <NAME>.
//

#import <objc/NSObject.h>

__attribute__((visibility("hidden")))
@interface CFUserNotificationHelper : NSObject
{
}

+ (void)CFUserNotificationTearDownAccessoryNotification:(struct __CFUserNotification **)arg1;    // IMP=0x0000000100085544
+ (struct __CFUserNotification *)CFUserNotificationPostAccessoryNotification:(struct __CFString *)arg1 forMsg:(struct __CFString *)arg2 forDefaultButton:(struct __CFString *)arg3 withAlternateButton:(struct __CFString *)arg4 forNotification:(struct __CFUserNotification **)arg5 withCallback:(CDUnknownFunctionPointerType)arg6 andTimeout:(double)arg7 andRecordID:(struct __CFString *)arg8;    // IMP=0x0000000100085378

@end
Identification of Golovinomyces artemisiae Causing Powdery Mildew, Changes in Chlorophyll Fluorescence Parameters, and Antioxidant Levels in Artemisia selengensis

Artemisia selengensis Turcz. is a valuable edible and medicinal vegetable crop widely cultivated in Northeast China. Powdery mildew (PM) disease occurs during field and greenhouse cultivation, resulting in production losses and quality deterioration. The pathogen in A. selengensis was identified as Golovinomyces artemisiae using optical and scanning electron microscopic observations, morphological identification, and molecular biological analyses. Parameters of chlorophyll fluorescence (ChlF) and antioxidant system responses, as well as callose and lignin contents, were analyzed in A. selengensis after inoculation with G. artemisiae. PM-infected leaves showed significantly lower values of electron transport rate (ETR), non-photochemical quenching (NPQ), photochemical quenching (qP), and actual photochemical efficiency [Y(II)], but higher values of non-regulated energy dissipation yield [Y(NO)], suggesting that the maximal photosystem II quantum yield (Fv/Fm) value and its images could be used to monitor the degree of PM on infected A. selengensis. In addition, malondialdehyde (MDA), superoxide anion (O2−), callose, and lignin contents and peroxidase (POD) activity increased, while superoxide dismutase (SOD) activity, catalase (CAT) activity, and ascorbic acid (AsA) content decreased significantly in infected leaves compared to mock-inoculated leaves, indicating that lignin and protective enzymes are key indicators for assessing PM resistance in A. selengensis. These results suggest that PM caused by G. artemisiae disrupted the photosynthetic capacity and induced an imbalance of the antioxidant system in A. selengensis. The findings are of significance for designing a feasible approach to effectively prevent and control PM disease in A. selengensis as well as in other vegetable crops.
INTRODUCTION

Artemisia selengensis Turcz. is a perennial plant belonging to the genus Artemisia of the Asteraceae family (). Due to its high nutritional and medicinal value, A. selengensis has been valued both as a vegetable and as a herbal medicine in Northeast China for thousands of years (;). However, the leaves, the main edible parts of the plant, are extremely vulnerable to powdery mildew (PM) disease when the plant is cultivated in the field and/or in greenhouses, especially in low-airflow, high-relative-humidity environments in summer and autumn. This has a negative economic impact on the plant's production and the overall agricultural industry. Even though PM symptoms can be easily recognized, determining the species assignment is challenging. Morphological characteristics and observation of the pathogen are crucial for identifying the pathogen at the species level and for PM prevention. For example, Blumeria graminis (DC.) Speer is unique in how it forms conidia compared to other species of Erysiphales. Previous studies revealed that the main PM pathogens parasitizing Asteraceae are Golovinomyces cichoracearum, Golovinomyces chrysanthemi, and Golovinomyces artemisiae (Matsuda and Takamatsu, 2003;;). G. artemisiae was described in Europe with Artemisia vulgaris as the type host, and a detailed description has been published by Braun. G. artemisiae on Artemisia annua has also been reported and identified in Korea using a combination of morphological and internal transcribed spacer (ITS) methods (). However, the species of pathogen causing PM in A. selengensis remains unclear, and the phenotypic and physiological changes of A. selengensis plants induced by PM are rarely reported in Northeast China (). When plants are infected with PM, photosynthesis is reduced through a lower supply of light energy because the leaf surface is covered by mycelium (). On the other hand, CO2 influx is inhibited due to stomatal closure (Duniway, 1982;).
Previous studies have demonstrated that Erysiphe alphitoides reduces foliage photosynthetic activity in pedunculate oak (Quercus robur) (). Modern chlorophyll fluorescence (ChlF) technology allows the rapid and nondestructive detection of photosynthetic activity (). Maximal photosystem II quantum yield (Fv/Fm) has been used to diagnose several diseases, including coffee (Coffea arabica L.) infected by Hemileia vastatrix and cedar (Cedrus deodara) infected by Pestalotiopsis spp. (;Honorato ). Meanwhile, the parameter Fv/Fm could distinguish resistant and susceptible lettuce (Lactuca sativa L.) lines against Bremia lactucae (). In terms of Fv/Fm and the effective quantum yield of PSII [Y(II)], leaves infected by Bipolaris sorokiniana were also dramatically impaired in the most susceptible wheat (Triticum aestivum L.) cultivar compared to a less susceptible cultivar (). Reductions in the values of Fv/Fm, Y(II), the quantum yield of non-regulated energy dissipation [Y(NO)], and the photochemical quenching (qP) coefficient are noticeable on necrotic vein tissues induced by Colletotrichum truncatum in contrast to the surrounding leaf tissue in soybean (Glycine max L.) (). Non-photochemical quenching (NPQ) processes increase in Podosphaera xanthii-infected melon leaves, which constitutes a major mechanism for the avoidance of photodamage (). Furthermore, different fungi have been shown to variably inhibit photosynthetic electron transfer reactions, which are a source of reactive oxygen species (ROS) (Duniway, 1982;;). Lignin and callose activate the host defense system, giving the host plant time to initiate subsequent defense responses such as ROS burst and regulation of antioxidant enzyme activity (;). Callose accumulated in Arabidopsis (Arabidopsis thaliana L.) infected with PM, which enhanced host resistance (). Meanwhile, lignin content increased in wheat against PM to prevent pathogen infection and spread by promoting cell wall suberization ().
Moreover, increasing lignin content can significantly improve peroxidase (POD) activity (). In response to Glomerella cingulata attack, POD activity was maintained at a higher level while superoxide dismutase (SOD) and catalase (CAT) were inhibited, reducing ROS scavenging capacity in a susceptible apple (Malus pumila) cultivar compared with a resistant cultivar. Excess ROS causes serious damage to plant proteins and membrane systems. The scavenging of O2− depends on high activities of SOD, POD, and CAT enzymes for rice (Oryza sativa L.) to resist Magnaporthe oryzae infection (;). Malondialdehyde (MDA), which has long been used as a marker of lipid peroxidation under stress, increases twofold in wheat seedlings infected by Fusarium pseudograminearum (). Ascorbic acid (AsA), the most abundant antioxidant in plants, can directly mitigate the damaging effects of ROS or act indirectly as a substrate for the ascorbate peroxidase enzyme (). AsA deficiency has been found to positively modulate the plant's biotic defense cascades, leading to a better disease resistance response in Arabidopsis to Pseudomonas syringae (). In this scenario, the antioxidant systems play an increasingly important role in the complex process of defense mechanisms against PM in A. selengensis. Nevertheless, detailed studies of these indicators as markers of regulatory mechanisms in A. selengensis infected by PM are lacking. In this study, G. artemisiae was characterized using light microscopic and scanning electron microscopic (SEM) observations to investigate the responses of A. selengensis to PM. ITS and 28S ribosomal DNA (rDNA) regions were sequenced to support the identification of the pathogen. We further determined physiological and biochemical indicators such as ChlF, lignin, callose, and antioxidant enzymes in A. selengensis leaves infected by G. artemisiae. This pilot study provides basic knowledge and information for improving PM resistance of A.
selengensis and also for other plant species.

Plant Materials and Powdery Mildew Isolation

Artemisia selengensis Turcz. was cultivated in the farm field of Northeast Agricultural University, China (45°43′55″ N, 126°43′21″ E). Leaves of A. selengensis with typical PM colonies were sampled in September 2021 and used for isolating the pathogen and inoculating young seedlings. Seedlings were prepared by sowing seeds for pot culture in a greenhouse. Briefly, 10 seeds of A. selengensis were sown in PVC pots with sterile substrate soil, for a total of 10 pots, in early August. After the seedlings reached 15 cm in height (nearly 40 days of cultivation), pathogen inoculation was performed. The individual isolate, obtained from the field leaves, was purified by single-colony inoculation on healthy seedlings for five consecutive generations (;;). Controlled growth conditions in the greenhouse were set at 20/18 °C (day/night) and 12 h of light (125 µmol m−2 s−1).

Morphological Characterization of Golovinomyces artemisiae

Chasmothecia and conidia were removed from G. artemisiae-infected leaves with a dissecting needle, mounted in water, and observed under an optical microscope (Carl Zeiss Model Axioskop 40). Taxonomic characters were examined and recorded, including chasmothecial appendages, number of asci and ascospores, and lengths and widths of conidia and conidiophore foot cells. Fifty or more measurements were made for individual characters from each sample and compared with the species descriptions by Choi et al.

Scanning Electron Microscope Observation of Golovinomyces artemisiae

Leaves infected with G. artemisiae were cut into small squares about 5 mm in length around the veins, immediately put in a vial containing 2.5% glutaraldehyde for fixation, and then rinsed with 2 ml of 0.1 mol l−1 phosphate buffer (pH 6.8) three times, 10 min each time. The leaves were gradually dehydrated using 2 ml each of 50, 70, and 90% ethanol solutions for 15 min, respectively.
Leaves were transferred to a pure tert-butanol solution and left to stand for 20 min, then washed with an equal-volume mixture of anhydrous ethanol and tert-butanol once and with pure tert-butanol twice, with submergence for 15 min each time. Finally, the samples were put in a freezer at −20 °C for 30 min and transferred into an ES-2030 (HITACHI) freeze dryer for 4 h. Afterward, ice crystals were evaporated, and the dried samples were sputter-coated with a gold film in an ion coater and then observed and imaged by SEM (Hitachi SU-8010, Tokyo, Japan).

Molecular Identification and Phylogenetic Analyses of Golovinomyces artemisiae

Total genomic DNA was isolated from 100 mg of PM (conidia and mycelia) using the cetyltrimethylammonium bromide (CTAB) method (). The ITS and 28S rDNA sequences were amplified using the ITS1/ITS4 (ITS1: 5′-TCCGTAGGTGAACCTGCGG-3′, ITS4: 5′-TCCTCCGCTTATTGATATGC-3′) and PM3/TW14 (PM3: 5′-GKGCTYTMCGCGTAGT-3′, TW14: 5′-GCTATCCTGAGGGAAACTTC-3′) primer pairs, respectively (;). The reaction procedure was 94 °C for 10 min; 32 cycles of 94 °C for 30 s, 57 °C for 30 s, and 72 °C for 90 s; 72 °C for 5 min; and termination at 4 °C. The PCR product was purified, ligated into the pEASY-Blunt Zero vector, and transformed into Escherichia coli, and a positive clone was sequenced. The sequences were uploaded to the National Center for Biotechnology Information (NCBI) database and used as queries in BLAST searches to identify the most similar sequences available in GenBank. These sequences were collected and aligned with ClustalW for constructing the phylogenetic tree (). The maximum likelihood (ML) method was used to generate phylogenetic trees based on tandem sequences of the ITS and 28S rDNA genes using MEGA version 7.0 (). Bootstrap analysis was performed with 1,000 replications.

Pathogenicity Assays of Golovinomyces artemisiae

Pathogenicity was verified by inoculating 10 healthy seedlings with the purified PM pathogen described above.
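As a side note to the molecular-identification protocol above (not part of the original workflow), the listed ITS primer sequences can be sanity-checked programmatically. The sketch below computes length, GC content, and a rough Wallace-rule melting temperature, Tm ≈ 2·(A+T) + 4·(G+C) °C; this rule is only a crude estimate for short oligos, and degenerate IUPAC bases such as the K/Y/M in PM3 are deliberately rejected rather than handled.

```python
def wallace_tm(seq: str) -> int:
    """Rough melting temperature by the Wallace rule: 2*(A+T) + 4*(G+C).

    Only a crude estimate, valid for short, non-degenerate oligos.
    """
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    if at + gc != len(seq):
        raise ValueError("degenerate or invalid bases present")
    return 2 * at + 4 * gc

# Primer sequences exactly as given in the protocol above.
PRIMERS = {
    "ITS1": "TCCGTAGGTGAACCTGCGG",
    "ITS4": "TCCTCCGCTTATTGATATGC",
}

for name, seq in PRIMERS.items():
    gc_pct = 100 * (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: {len(seq)} nt, GC {gc_pct:.1f}%, Tm ~{wallace_tm(seq)} degC")
```

Both estimates land at or above the 57 °C annealing temperature used in the cycling profile, which is the kind of quick consistency check this sketch is intended for.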
Different paint brushes were used to dust conidia from one PM patch onto the leaves of other A. selengensis plants (). Mock-inoculated (CK) leaves (i.e., no conidia were attached to the leaf surface) were used as controls to monitor and minimize potential contamination. Leaf symptoms were recorded every 1-2 days. Diseased leaves were collected for microscopic examination to observe the morphological characteristics of the inoculated pathogens. After 14 days, G. artemisiae-inoculated (GI) and CK leaves were used to measure ChlF and were collected and immediately stored at −80 °C for the determination of antioxidant-related indexes.

Leaf Chlorophyll Fluorescence

Chlorophyll fluorescence parameters of GI and CK were measured using the Imaging-PAM (MAXI) system (Walz, Germany). The fluorescence value (Ft) of the selected sample in the area of interest (AOI) was set within the range of 0.1-0.2, the saturation pulse frequency was set to one pulse per 20 s with an intensity of 4,000 µmol m−2 s−1, and the actinic light intensity was set to 86 µmol m−2 s−1 (). The plant samples were dark-adapted for 20 min; minimum fluorescence (Fo) and maximum fluorescence (Fm) were obtained using the measuring light and saturating pulsed light, respectively. The values and images of NPQ, actual photochemical efficiency [Y(II)], non-regulated energy dissipation yield [Y(NO)], qP, and electron transport rate (ETR) were then obtained under actinic light. Fv/Fm was calculated as Fv/Fm = (Fm − Fo)/Fm (Maxwell and Johnson, 2000).

Determination of Callose, Lignin, and Antioxidant-Related Indexes

For the assay of antioxidant-related indexes, 0.5 g of fresh leaves was homogenized in 2 ml of 50 mM phosphate extraction buffer in an ice-cold mortar. The mixture was centrifuged at 12,000 × g for 15 min at 4 °C to collect the supernatant, which was used to determine the contents of superoxide anion (O2−) and callose and the activities of CAT, POD, and SOD.
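The ChlF parameters named in the section above follow the standard pulse-amplitude-modulation definitions (Maxwell and Johnson, 2000). A minimal sketch of the arithmetic, using invented fluorescence readings purely for illustration:

```python
def fv_fm(fo: float, fm: float) -> float:
    """Maximal PSII quantum yield after dark adaptation: (Fm - Fo) / Fm."""
    return (fm - fo) / fm

def y_ii(fs: float, fm_prime: float) -> float:
    """Actual photochemical efficiency Y(II) = (Fm' - Fs) / Fm'."""
    return (fm_prime - fs) / fm_prime

def npq(fm: float, fm_prime: float) -> float:
    """Non-photochemical quenching NPQ = (Fm - Fm') / Fm'."""
    return (fm - fm_prime) / fm_prime

def etr(y2: float, par: float, absorptance: float = 0.84, f_psii: float = 0.5) -> float:
    """Apparent electron transport rate: Y(II) * PAR * leaf absorptance * PSII fraction."""
    return y2 * par * absorptance * f_psii

# Illustrative (invented) readings, not measured data:
fo, fm = 0.12, 0.60    # dark-adapted minimum / maximum fluorescence
fs, fm_p = 0.25, 0.45  # steady-state and light-adapted maximum fluorescence
print(fv_fm(fo, fm))                  # ~0.8, the healthy range quoted in the text
print(etr(y_ii(fs, fm_p), par=86.0))  # at the actinic intensity used above
```

The default absorptance (0.84) and PSII fraction (0.5) are the conventional assumptions for apparent ETR; instrument software may use different factors.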
Callose contents were measured following the method of Köhle et al. A total of 0.2 ml of the supernatant was put into a 1.5-ml centrifuge tube; 0.4 ml of aniline blue (0.1%), 0.21 ml of HCl (1 mol l−1), and 0.59 ml of glycine/NaOH buffer (1 mol l−1, pH 9.5) were added in turn and reacted at 50 °C for 20 min. The mixture was cooled to room temperature, and the fluorescence intensity was measured with a fluorescence spectrophotometer. The excitation wavelength was 400 nm, the emission wavelength 500 nm, and the slit width 5 nm. Peroxidase was determined spectrophotometrically by monitoring the formation of tetraguaiacol from guaiacol (extinction coefficient at 470 nm) in the presence of hydrogen peroxide (H2O2) (). The reaction mixture consisted of 2.9 ml of 50 mM PBS (pH 7.0), 1 ml of 0.3 mM guaiacol, 1 ml of 0.1 mM hydrogen peroxide, and 0.1 ml of supernatant. Catalase was estimated by the rate of H2O2 decomposition at 240 nm (Havir and McHale, 1989). The reaction mixture contained 0.2 ml of supernatant, 1.5 ml of PBS (pH 7.8), 1 ml of distilled water, and 0.3 ml of 100 mM H2O2. The absorbance was recorded every 1 min for a total of 4 min. Superoxide anion content was determined from the oxidation of hydroxylamine (). A total of 0.1 ml of supernatant was incubated at 25 °C for 20 min with a mixture of 0.9 ml of 65 mM phosphate buffer (pH 7.8) and 0.1 ml of 10 mM hydroxylammonium chloride; 0.2 ml of 17 mM sulfanilamide and 0.2 ml of 7 mM α-naphthylamine were then added, and the mixture was incubated again at 25 °C for 20 min. An equal volume of chloroform was added, the mixture was centrifuged at 10,000 × g for 3 min, and absorbance was read at 530 nm. Lignin content was determined according to the method of Morrison. A total of 0.5 g of fresh leaves was ground to a homogenate with 95% ethanol in a mortar, and the precipitate was collected after centrifugation at 4,500 rpm for 10 min.
The pellet was washed three times with an equal volume of a 1:1 mixture of 95% ethanol and n-hexane, and the precipitate was collected and dried. The dried product was dissolved in 0.5 ml of 25% glacial acetic acid and placed in a water bath at 70 °C for 30 min. Thereafter, 0.9 ml of 2 mol/l NaOH was added to terminate the reaction, and 5 ml of glacial acetic acid and 0.1 ml of 7.5 mol/l hydroxylamine hydrochloride were added to the mixture. After mixing and centrifugation at 4,500 rpm for 5 min, 0.1 ml of the supernatant was aspirated and diluted with 3.0 ml of glacial acetic acid. Absorbance was measured at 280 nm using a spectrophotometer. Ascorbic acid content was measured following the method of Kampfenkel et al. About 0.1 g of leaf sample was extracted with 0.5 ml of 6% trichloroacetic acid (TCA) and centrifuged at 12,000 × g for 10 min at 4 °C. This assay is based on the reduction of ferric ion (Fe3+) to ferrous ion (Fe2+) by AsA in acid solution, followed by formation of a red chelate between Fe2+ and 2,2′-dipyridyl. Samples were finally read for absorbance at 525 nm using a spectrophotometer. Malondialdehyde content was determined using the thiobarbituric acid method (Heath and Packer, 1968). The supernatant (1 ml in volume) was mixed with 1 ml of thiobarbituric acid (0.6%) and maintained in a boiling water bath for 15 min. After cooling, the mixture was centrifuged at 4,000 × g for 10 min, and the absorbance of the supernatant was determined at 450, 532, and 600 nm.

Statistics and Analysis

All data were analyzed using Student's t-test with SPSS version 10.0 software (SPSS Incorporation, Chicago, IL, United States). Figures were plotted using GraphPad Prism version 9.00 (GraphPad Company, San Diego, CA, United States).

Symptom of Powdery Mildew and Morphological Observation

Leaves were the major parts of A. selengensis infected by PM (Figure 1D).
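The group comparisons in this work use an unpaired Student's t-test, run in SPSS in the original study. For readers without SPSS, the same statistic can be computed directly; this pure-Python sketch uses the equal-variance (pooled) form with invented replicate values, not the measured data:

```python
from math import sqrt

def student_t(a, b):
    """Unpaired two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 denominator), then the pooled estimate.
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / sqrt(pooled * (1 / na + 1 / nb))

# Invented example: three biological replicates per group, as in the study design.
ck = [101.2, 98.7, 100.1]   # mock-inoculated (hypothetical units)
gi = [121.5, 118.9, 123.4]  # G. artemisiae-inoculated
print(student_t(gi, ck))    # positive: GI mean exceeds CK mean
```

For a p-value, the statistic is compared against a t distribution with na + nb − 2 degrees of freedom; scipy.stats.ttest_ind returns both in one call if SciPy is available.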
Whitish colonies with abundant spores were observed on both the adaxial and abaxial surfaces of the infected leaves (Figures 1A-E). Gradually, these infected leaves turned yellow and dark brown, with spherical chasmothecia formed on the surfaces (Figures 1F,G).

Molecular Phylogenetic Identification of Golovinomyces artemisiae

The determined ITS and 28S rDNA regions of this pathogen, 594 and 860 bp in length, were submitted to GenBank (ITS: MZ366322; 28S rDNA: MW989746). The phylogenetic tree constructed by the ML method showed that this pathogen and G. artemisiae belong to the same branch (95% bootstrap support), confirming the identification at the molecular level (Figure 4).

Pathogenicity Identification of Golovinomyces artemisiae

The mock-inoculated (CK) leaves remained free of symptoms during the entire period of the experiment in the greenhouse (Supplementary Figure 1A). After 8-10 days, GI leaves showed typical symptoms consistent with the diseased leaves in the field (Supplementary Figure 1B). The experiment was repeated several times, all with the same results. ITS and 28S rDNA sequences of conidia from the infected leaves further validated the identity of the purified G. artemisiae.

Leaf Chlorophyll Fluorescence Performances

Chlorophyll fluorescence data indicated that Fv/Fm in CK was significantly greater than in GI. The images of ChlF parameters showed the emergence of local necrosis in GI; at the same time, photochemical activity was inhibited and photodamage occurred (Figure 5A). The value of Fv/Fm for CK was between 0.80 and 0.81, while the value for GI was below 0.80 (Figure 5B). In terms of parameters related to light energy absorption and electron transfer, the values of qP, Y(II), and ETR in CK were 11.4, 10.0, and 8.8% higher than those in GI, respectively (Figures 5C,D,G). Clearly, the occurrence of PM inhibited the photosynthetic capacity of A. selengensis.
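The percentage comparisons used throughout these results ("X% higher in CK than in GI") and the fold-changes quoted later follow two different conventions, which a tiny helper makes explicit. The numeric values below are placeholders chosen to reproduce the phrasing, not the measured data:

```python
def pct_higher(ref: float, other: float) -> float:
    """By how many percent `ref` exceeds `other`, relative to `other`."""
    return 100.0 * (ref - other) / other

def fold_change(treated: float, control: float) -> float:
    """Ratio convention: '1.2-fold higher' means treated / control == 1.2."""
    return treated / control

# Placeholder illustration: if CK qP were 0.557 and GI qP 0.500,
# CK would be 11.4% higher than GI, matching how the text phrases it.
print(round(pct_higher(0.557, 0.500), 1))
print(fold_change(2.8, 1.0))  # a 2.8-fold increase, as reported for O2- content
```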
ChlF parameters associated with light energy dissipation, NPQ and Y(NO), showed opposite trends in the two comparison groups. The value of Y(NO) was 4.8% higher in GI than in CK, while NPQ in CK was 53.5% higher than in GI (Figures 5E,F).

Callose, Lignin, and Antioxidant System

The contents of callose and lignin were significantly increased, by 28.0 and 36.9%, respectively, in GI compared to CK (Figures 6A,B). MDA content in GI was higher (1.2-fold) than that in CK (Figure 6C); meanwhile, O2− content in GI was significantly higher (2.8-fold) (Figure 6D). In terms of antioxidant enzyme activity, G. artemisiae infection resulted in reductions of 65.9 and 12.6% in CAT and SOD activities in GI, respectively, compared with CK (Figures 6G,H). The content of AsA, a non-enzymatic antioxidant, in GI was 84.8% of that in CK, a significant decrease (Figure 6E). POD activity was 143.9% higher in GI relative to CK (Figure 6F).

DISCUSSION

Powdery mildew is one of the most frequently occurring fungal diseases in plants around the world. Considerable efforts and investments have been put into controlling the disease via the application of proper fungicides and/or the breeding of plant varieties tolerant or resistant to the disease. PM appears to be highly diverse, and the biology of its pathogens is very complex. A holistic approach combining studies of morphology with analyses of ITS and 28S rDNA regions can accurately identify the causal fungi at the species level (). To the best of our knowledge, the G. artemisiae cluster comprises sequences obtained from PM hosts of the genera Artemisia, Chrysanthemum, and Nipponanthemum (). In this study, we observed typical symptoms of PM on A. selengensis (Figure 1). These symptoms were identical to those previously reported on A. annua in Korea (). However, due to the specific geographical and climatic environment of Northeast China, the physiological race(s) of G. artemisiae infecting A.
selengensis appear to be quite different from those in other regions. Life cycles of PM pathogens can involve both a sexual state (teleomorph) and an asexual state (anamorph), or either can be lacking. For example, chasmothecia of Erysiphe berberidis DC. were observed in Europe but were unknown in western Washington. In this study, chasmothecia were observed, the conidiophores were shorter, and the infection period was longer than reported in Korea (). Meanwhile, ITS sequence analysis revealed obvious base substitutions (;). Based on the morphological identification and molecular phylogenetic analysis, this study suggests that the pathogen causing PM on A. selengensis in both the field and glasshouse in Northeast China is G. artemisiae.

[Figure caption: Values are means ± SE of three biological replicates. Significant differences were calculated using the unpaired Student's t-test (**P ≤ 0.01).]

As the most basic and important indicators of disease, comprehensive analysis of antioxidant system and photosynthesis indicators is crucial to reveal the phenotypic and physiological changes of A. selengensis infected with PM. As one of the most important physiological processes in plants, photosynthesis is inhibited by diseases and other stresses (). The Fv/Fm parameter is a sensitive indicator of photosynthetic performance, with optimal values close to 0.8 for most plant species (Krause and Weis, 1991). The Fv/Fm values obtained in GI were less than 0.8, indicating damage to the photosynthetic apparatus due to G. artemisiae infection (Figure 5B). Moreover, ETR was inhibited by PM in GI, leading to a further reduction in the degree of openness of the PSII reaction centers (Figure 5G). qP decreased in GI, consistent with the decreasing trend in leaves of Brassica juncea with a mosaic virus infection ().
The accumulation of reactive intermediates is prevented by increasing the NPQ level in bean (Phaseolus vulgaris), which harmlessly dissipates excess light energy absorbed by the light-harvesting complex (;). Therefore, the progressively increased Y(NO) values and decreased NPQ values indicated photooxidative damage in GI (Figures 5E,F). It can be further inferred from the Y(II) values that PM decreased the energy used for photochemical reactions in GI (Figure 5B), highlighting the reduction of the photosynthetic rate in A. selengensis following G. artemisiae infection. Fluorescence imaging has detected PM infection in wheat leaves 2-3 days before visual symptoms became apparent (). In this study, ChlF imaging showed that the PM-infected parts of GI leaves differed from the surrounding area. The health status of A. selengensis can be determined by monitoring the change in the Fv/Fm value. Collectively, ChlF is essential for detecting PM epidemics and for examining plant health in a timely manner without causing damage. Plants respond to pathogen invasion by activating a series of defense responses. The deposition of callose after Colletotrichum gloeosporioides inoculation of Stylosanthes guianensis was associated with cultivar resistance (). Our results suggest that the damage caused by G. artemisiae may be mitigated by the increased callose content in GI (Figure 6A). The increase in lignin content enhanced the activity of POD, consistent with the results in Arabidopsis (). The synergistic effect of increased lignin content and enhanced POD activity strengthened the resistance of A. selengensis to PM (Figures 6B,F). However, in different mustard (B. juncea L.) cultivars infected by Erysiphe polygoni DC., the lignin content in the pre-infection stage was higher than that in the diseased stage (Rathod and Chatrabhuji, 2010).
Although numerous studies have shown that POD activity is positively correlated with plant disease resistance, POD activity in susceptible cultivars is higher than that in resistant cultivars of pumpkin (Cucurbita pepo L.). Thus, the markedly increased POD activity acted essentially in the decomposition of H2O2 in GI (Figure 6F). These results show that the changes in the relevant indexes after disease onset differ greatly among plant species.

Reactive oxygen species (ROS) production is one of the earliest cellular responses following successful pathogen recognition (;). O2− or H2O2 is generated in the apoplast of Arabidopsis infected by P. syringae (). In this study, the O2− content increased by about threefold in GI compared to CK, indicating serious damage to A. selengensis caused by G. artemisiae infection (Figure 6D). As a toxic byproduct of ROS-driven lipid peroxidation, MDA significantly increased in GI, consistent with observations in roots of brittle leaf disease-affected date palm (Phoenix dactylifera L.) (). Increased SOD activity has been pinpointed as the key ROS-scavenging response to Erwinia amylovora in pear (Pyrus communis L.) (). Likewise, higher CAT activity leads to lower H2O2 accumulation in rice infected with M. oryzae (). Our results showed that the antioxidant capacity was limited by the significantly decreased CAT and SOD activities in GI (Figures 6G,H). AsA accumulation triggers a defense-system response in cacao (Theobroma cacao) tissues infected by Moniliophthora perniciosa (). Moreover, suppression of AsA synthesis affects photosynthetic electron transport in tomato infected with P. syringae (). In this study, the decreased AsA content weakened disease resistance and photosynthesis in GI (Figure 6E). A previous study showed that inhibition of photosynthetic electron transport inevitably led to the formation of O2− in wheat invaded by pathogens (Yang and Luo, 2021).
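The group comparisons above (e.g., the roughly threefold O2− increase in GI versus CK, judged by an unpaired Student's t-test at P ≤ 0.01 with three biological replicates per group) can be sketched numerically. The replicate values below are hypothetical, chosen only to mirror the reported fold change:

```go
package main

import (
	"fmt"
	"math"
)

// meanVar returns the sample mean and the (n-1)-denominator sample variance.
func meanVar(x []float64) (mean, variance float64) {
	for _, v := range x {
		mean += v
	}
	mean /= float64(len(x))
	for _, v := range x {
		variance += (v - mean) * (v - mean)
	}
	variance /= float64(len(x)) - 1
	return
}

// tStatistic returns the unpaired (pooled-variance) Student's t statistic,
// the test used for the CK-vs-GI comparisons above.
func tStatistic(a, b []float64) float64 {
	ma, va := meanVar(a)
	mb, vb := meanVar(b)
	na, nb := float64(len(a)), float64(len(b))
	pooled := ((na-1)*va + (nb-1)*vb) / (na + nb - 2)
	return (ma - mb) / math.Sqrt(pooled*(1/na+1/nb))
}

func main() {
	// Hypothetical O2- contents (three replicates per group, arbitrary units);
	// GI is roughly threefold CK, as reported in the text.
	ck := []float64{10.1, 9.8, 10.3}
	gi := []float64{30.2, 29.5, 31.1}
	t := math.Abs(tStatistic(gi, ck))
	// Two-tailed critical value of t for df = 4 at alpha = 0.01 is 4.604.
	fmt.Printf("|t| = %.2f, significant at P <= 0.01: %v\n", t, t > 4.604)
}
```

With n = 3 per group the degrees of freedom are 4, which is why statistical power is low and only large fold changes reach the **P ≤ 0.01 threshold.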
In that study, the levels of antioxidant systems and antioxidants then increased further (Yang and Luo, 2021). Combined with the decreased ETR and the significantly increased O2− in GI, we speculate that photosynthesis is affected by the fungus earlier than the antioxidant system.

In conclusion, the pathogen producing typical PM symptoms on A. selengensis leaves was purified. The conidia, conidiophores, and hyphae of the pathogen were observed under the light microscope and SEM. Based on the combined ITS and 28S rDNA sequence data, the PM pathogen of A. selengensis was identified as G. artemisiae. G. artemisiae infection damaged photosynthesis in A. selengensis: ETR, NPQ, qP, and Y(II) significantly decreased while Y(NO) increased in infected leaves, reflecting severe photodamage. The Fv/Fm value can be used as an indicator to monitor the health status of A. selengensis. In addition, the significant increases in MDA and O2− contents in the infected leaves reflected severe stress. SOD and CAT activities and AsA content decreased significantly in GI, indicating an imbalanced antioxidant system and a reduced defense-response capacity, whereas POD activity and lignin content increased significantly in GI and are considered key indicators of defense against G. artemisiae. These results may help in designing PM control approaches for integrated disease control in A. selengensis and similar plants.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/genbank/ (ITS: MZ366322, 28S rDNA: MW989746).

AUTHOR CONTRIBUTIONS

ZG and XS performed the experiment and data analysis and drafted the manuscript. LD, LX, and LQ helped in data collection for the experiment. FX contributed to data interpretation and manuscript writing. DQ and YC designed and supervised the experiment.
All authors agreed to submit the manuscript for publication.
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collectors

import "github.com/prometheus/client_golang/prometheus"

// ProcessCollectorOpts defines the behavior of a process metrics collector
// created with NewProcessCollector.
type ProcessCollectorOpts struct {
	// PidFn returns the PID of the process the collector collects metrics
	// for. It is called upon each collection. By default, the PID of the
	// current process is used, as determined on construction time by
	// calling os.Getpid().
	PidFn func() (int, error)

	// If non-empty, each of the collected metrics is prefixed by the
	// provided string and an underscore ("_").
	Namespace string

	// If true, any error encountered during collection is reported as an
	// invalid metric (see NewInvalidMetric). Otherwise, errors are ignored
	// and the collected metrics will be incomplete. (Possibly, no metrics
	// will be collected at all.) While that's usually not desired, it is
	// appropriate for the common "mix-in" of process metrics, where process
	// metrics are nice to have, but failing to collect them should not
	// disrupt the collection of the remaining metrics.
	ReportErrors bool
}

// NewProcessCollector returns a collector which exports the current state of
// process metrics including CPU, memory and file descriptor usage as well as
// the process start time. The detailed behavior is defined by the provided
// ProcessCollectorOpts. The zero value of ProcessCollectorOpts creates a
// collector for the current process with an empty namespace string and no error
// reporting.
//
// The collector only works on operating systems with a Linux-style proc
// filesystem and on Microsoft Windows. On other operating systems, it will not
// collect any metrics.
func NewProcessCollector(opts ProcessCollectorOpts) prometheus.Collector {
	//nolint:staticcheck // Ignore SA1019 until v2.
	return prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{
		PidFn:        opts.PidFn,
		Namespace:    opts.Namespace,
		ReportErrors: opts.ReportErrors,
	})
}
Modulation of Hypoxia-Inducible Factor 1 in Age-Related Diseases. Hypoxia-inducible factor 1 (HIF1) activity is important in anoxia/hypoxia events in aging and age-related diseases. HIF1 transcribes genes that play an important role in angiogenesis, vascular remodeling, glucose metabolism, cell proliferation and survival, as well as erythropoiesis and iron homeostasis, all of which are important in tumor growth and in the damage that occurs after heart attacks and stroke. Hepatocellular carcinoma (HCC) development in animal models is inhibited by phenyl-tert-butyl nitrone (PBN), whose mechanism of action is unknown. We observed that PBN inhibits both IGF1- and hypoxia-mediated expression of HIF1, as well as the HIF1 downstream gene iNOS, in several cancer cell models. Cancer cell killing is increased in the presence of PBN when the cells are given a hypoxia/anoxia challenge. This implies that PBN inhibits the expression of HIF1 downstream genes important in glycolysis and cell survival. Extending our efforts to stroke, where PBN has shown a significant effect in preventing brain injury, we explored the possibility that specific brain regions other than the cerebral cortex may be highly vulnerable to stroke. Pertinent to the striatum, exposure of dopaminergic PC12 cells to anoxia/hypoxia caused an increase in HIF1, whereas IGF1 caused a decrease under normoxia as well as anoxia/hypoxia.
package gnu.bytecode;

public class TryState {
    Label end_label;
    Label end_try;
    Variable exception;
    ExitableBlock exitCases;
    Variable finally_ret_addr;
    Label finally_subr;
    TryState previous;
    Variable[] savedStack;
    Type[] savedTypes;
    Variable saved_result;
    Label start_try;
    boolean tryClauseDone;
    ClassType try_type;

    public TryState(CodeAttr code) {
        // Push this try state onto the code attribute's stack of
        // enclosing try blocks and mark where the try body starts.
        this.previous = code.try_stack;
        this.start_try = code.getLabel();
    }

    /** Walk outward from innerTry until reaching outerTry or a try block
     *  whose finally clause still needs to run. */
    static TryState outerHandler(TryState innerTry, TryState outerTry) {
        while (innerTry != outerTry
               && (innerTry.finally_subr == null || innerTry.tryClauseDone)) {
            innerTry = innerTry.previous;
        }
        return innerTry;
    }
}
“It’s scum of the Earth stuff.” On last night’s Late Show with Stephen Colbert, John Stamos, Bob Saget, and Dave Coulier got together to preview their new Full House spin-off, a gritty cop procedural called Full House Nights. Of course, Full House Nights isn’t real, but the actual Full House reboot, Fuller House, debuted on Netflix just last week. Like Full House, Full House Nights features three adult men working together to solve problems. Only this time, it’s at night, the kids have gone to bed, and the three guys are detectives out to fix the rampant crime of San Francisco. In the sketch, Colbert even guest stars as a Russian goon named Demetri. Maybe, like Fuller House, Full House Nights will also be renewed. What do you think? Have you started watching Fuller House on Netflix? Would you watch a full season of Full House Nights?
package com.cybozu.kintone.client.module.record;

import static org.junit.Assert.*;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map.Entry;

import org.junit.Before;
import org.junit.Test;

import com.cybozu.kintone.client.TestConstantsSample;
import com.cybozu.kintone.client.authentication.Auth;
import com.cybozu.kintone.client.connection.Connection;
import com.cybozu.kintone.client.exception.KintoneAPIException;
import com.cybozu.kintone.client.model.app.form.FieldType;
import com.cybozu.kintone.client.model.file.FileModel;
import com.cybozu.kintone.client.model.member.Member;
import com.cybozu.kintone.client.model.record.AddRecordResponse;
import com.cybozu.kintone.client.model.record.GetRecordResponse;
import com.cybozu.kintone.client.model.record.GetRecordsResponse;
import com.cybozu.kintone.client.model.record.SubTableValueItem;
import com.cybozu.kintone.client.model.record.UpdateRecordResponse;
import com.cybozu.kintone.client.model.record.field.FieldValue;
import com.cybozu.kintone.client.module.file.File;

public class updateRecordByIDTest {
    private static Integer APP_ID;
    private static String API_TOKEN = "xxx";
    private static String NO_ADD_PERMISSION_API_TOKEN = "xxx";
    private static String ADD_NO_VIEW_API_TOKEN = "xxx";
    private static String GUEST_SPACE_API_TOKEN = "xxx";
    private static String PROHIBIT_DUPLICATE_API_TOKEN = "xxx";
    private static String REQUIRED_FIELD_API_TOKEN = "xxx";
    private static Member testman1 = new Member("xxx", "xxx");
    private static Member testman2 = new Member("xxx", "xxx");
    private static Member testgroup1 = new Member("xxx", "xxx");
    private static Member testgroup2 = new Member("xxx", "xxx");
    private static Member testorg1 = new Member("xxx", "xxx");
    private static Member testorg2 = new Member("xxx", "xxx");
    private Record passwordAuthRecordManagerment;
    private Record guestAuthRecordManagerment;
    private Record tokenRecordManagerment;
    private Record noAddPermissionTokenReocrdManagerment;
    private Record addNoViewTokenRecordManagerment;
    private Record prohibitDuplicateTokenRecordManagerment;
    private Record requiredFieldTokenRecordManagerment;
    private Record tokenGuestRecordManagerment;
    private Record certRecordManagerment;
    private Record certGuestRecordManagerment;
    private Integer uniqueKey = 1;

    @Before
    public void setup() throws KintoneAPIException {
        Auth passwordAuth = new Auth();
        passwordAuth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        Connection passwordAuthConnection = new Connection(TestConstantsSample.DOMAIN, passwordAuth);
        //passwordAuthConnection.setProxy(TestConstants.PROXY_HOST, TestConstants.PROXY_PORT);
        this.passwordAuthRecordManagerment = new Record(passwordAuthConnection);

        Auth guestAuth = new Auth();
        guestAuth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        Connection gusetConnection = new Connection(TestConstantsSample.DOMAIN, guestAuth, TestConstantsSample.GUEST_SPACE_ID);
        this.guestAuthRecordManagerment = new Record(gusetConnection);

        Auth tokenAuth = new Auth();
        tokenAuth.setApiToken(API_TOKEN);
        Connection tokenConnection = new Connection(TestConstantsSample.DOMAIN, tokenAuth);
        this.tokenRecordManagerment = new Record(tokenConnection);

        Auth tokenAuth3 = new Auth();
        tokenAuth3.setApiToken(PROHIBIT_DUPLICATE_API_TOKEN);
        Connection tokenConnection3 = new Connection(TestConstantsSample.DOMAIN, tokenAuth3);
        this.prohibitDuplicateTokenRecordManagerment = new Record(tokenConnection3);

        Auth tokenAuth4 = new Auth();
        tokenAuth4.setApiToken(REQUIRED_FIELD_API_TOKEN);
        Connection tokenConnection4 = new Connection(TestConstantsSample.DOMAIN, tokenAuth4);
        this.requiredFieldTokenRecordManagerment = new Record(tokenConnection4);

        Auth tokenAuth5 = new Auth();
        tokenAuth5.setApiToken(NO_ADD_PERMISSION_API_TOKEN);
        Connection tokenConnection5 = new Connection(TestConstantsSample.DOMAIN, tokenAuth5);
        this.noAddPermissionTokenReocrdManagerment = new Record(tokenConnection5);

        Auth tokenAuth6 = new Auth();
        tokenAuth6.setApiToken(ADD_NO_VIEW_API_TOKEN);
        Connection tokenConnection6 = new Connection(TestConstantsSample.DOMAIN, tokenAuth6);
        this.addNoViewTokenRecordManagerment = new Record(tokenConnection6);

        Auth tokenGuestAuth = new Auth();
        tokenGuestAuth.setApiToken(GUEST_SPACE_API_TOKEN);
        Connection tokenGuestConnection = new Connection(TestConstantsSample.DOMAIN, tokenGuestAuth, TestConstantsSample.GUEST_SPACE_ID);
        this.tokenGuestRecordManagerment = new Record(tokenGuestConnection);

        Auth certAuth = new Auth();
        certAuth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        certAuth.setClientCertByPath(TestConstantsSample.CLIENT_CERT_PATH, TestConstantsSample.CLIENT_CERT_PASSWORD);
        Connection certConnection = new Connection(TestConstantsSample.SECURE_DOMAIN, certAuth);
        this.certRecordManagerment = new Record(certConnection);

        Auth certGuestAuth = new Auth();
        certGuestAuth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        certGuestAuth.setClientCertByPath(TestConstantsSample.CLIENT_CERT_PATH, TestConstantsSample.CLIENT_CERT_PASSWORD);
        Connection CertGuestConnection = new Connection(TestConstantsSample.SECURE_DOMAIN, certGuestAuth, TestConstantsSample.GUEST_SPACE_ID);
        this.certGuestRecordManagerment = new Record(CertGuestConnection);

        // get maximum "数値" field value in all records and set it uniqueKey.
        String query = "order by 数値 desc";
        ArrayList<String> fields = new ArrayList<String>();
        fields.add("数値");
        GetRecordsResponse response = this.passwordAuthRecordManagerment.getRecords(APP_ID, query, fields, true);
        ArrayList<HashMap<String, FieldValue>> resultRecords = response.getRecords();
        this.uniqueKey += Integer.parseInt((String) resultRecords.get(0).get("数値").getValue());
    }

    public HashMap<String, FieldValue> createTestRecord() {
        HashMap<String, FieldValue> testRecord = new HashMap<String, FieldValue>();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text");
        testRecord = addField(testRecord, "数値", FieldType.NUMBER, this.uniqueKey);
        this.uniqueKey += 1;
        testRecord = addField(testRecord, "文字列__複数行", FieldType.MULTI_LINE_TEXT, "test multi text");
        testRecord = addField(testRecord, "リッチエディター", FieldType.RICH_TEXT, "<div>test rich text<br /></div>");
        ArrayList<String> selectedItemList = new ArrayList<String>();
        selectedItemList.add("sample1");
        selectedItemList.add("sample2");
        testRecord = addField(testRecord, "チェックボックス", FieldType.CHECK_BOX, selectedItemList);
        testRecord = addField(testRecord, "ラジオボタン", FieldType.RADIO_BUTTON, "sample2");
        testRecord = addField(testRecord, "ドロップダウン", FieldType.DROP_DOWN, "sample3");
        testRecord = addField(testRecord, "複数選択", FieldType.MULTI_SELECT, selectedItemList);
        testRecord = addField(testRecord, "リンク", FieldType.LINK, "http://cybozu.co.jp/");
        testRecord = addField(testRecord, "日付", FieldType.DATE, "2018-01-01");
        testRecord = addField(testRecord, "時刻", FieldType.TIME, "12:34");
        testRecord = addField(testRecord, "日時", FieldType.DATETIME, "2018-01-02T02:30:00Z");
        ArrayList<Member> userList = new ArrayList<Member>();
        userList.add(testman1);
        userList.add(testman2);
        addField(testRecord, "ユーザー選択", FieldType.USER_SELECT, userList);
        ArrayList<Member> groupList = new ArrayList<Member>();
        groupList.add(testgroup1);
        groupList.add(testgroup2);
        addField(testRecord, "グループ選択", FieldType.GROUP_SELECT, groupList);
        ArrayList<Member> orgList = new ArrayList<Member>();
        orgList.add(testorg1);
        orgList.add(testorg2);
        addField(testRecord, "組織選択", FieldType.ORGANIZATION_SELECT, orgList);
        return testRecord;
    }

    public HashMap<String, FieldValue> addField(HashMap<String, FieldValue> record, String code, FieldType type, Object value) {
        FieldValue newField = new FieldValue();
        newField.setType(type);
        newField.setValue(value);
        record.put(code, newField);
        return record;
    }

    @Test
    public void testUpdateRecordById() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDWithAttachment() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        Auth auth = new Auth();
        auth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        Connection connection = new Connection(TestConstantsSample.DOMAIN, auth);
        File attachmet = new File(connection);
        FileModel file = attachmet.upload("src/test/resources/record/ValidRecordValue.txt");
        ArrayList<FileModel> al = new ArrayList<>();
        al.add(file);
        testRecord = addField(testRecord, "添付ファイル", FieldType.FILE, al);
        UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.passwordAuthRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.FILE == record.get(entry.getKey()).getType()) {
                ArrayList<FileModel> alf = (ArrayList<FileModel>) record.get(entry.getKey()).getValue();
                assertEquals(1, alf.size());
            }
        }
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDWithAttachmentToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        Auth auth = new Auth();
        auth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        Connection connection = new Connection(TestConstantsSample.DOMAIN, auth);
        File attachmet = new File(connection);
        FileModel file = attachmet.upload("src/test/resources/record/ValidRecordValue.txt");
        ArrayList<FileModel> al = new ArrayList<>();
        al.add(file);
        testRecord = addField(testRecord, "添付ファイル", FieldType.FILE, al);
        UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.tokenRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.FILE == record.get(entry.getKey()).getType()) {
                ArrayList<FileModel> alf = (ArrayList<FileModel>) record.get(entry.getKey()).getValue();
                assertEquals(1, alf.size());
            }
        }
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDWithAttachmentCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        Auth certauth = new Auth();
        certauth.setPasswordAuth(TestConstantsSample.USERNAME, TestConstantsSample.PASSWORD);
        certauth.setClientCertByPath(TestConstantsSample.CLIENT_CERT_PATH, TestConstantsSample.CLIENT_CERT_PASSWORD);
        Connection connection = new Connection(TestConstantsSample.SECURE_DOMAIN, certauth);
        File attachmet = new File(connection);
        FileModel file = attachmet.upload("src/test/resources/record/ValidRecordValue.txt");
        ArrayList<FileModel> al = new ArrayList<>();
        al.add(file);
        testRecord = addField(testRecord, "添付ファイル", FieldType.FILE, al);
        UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.certRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.FILE == record.get(entry.getKey()).getType()) {
                ArrayList<FileModel> alf = (ArrayList<FileModel>) record.get(entry.getKey()).getValue();
                assertEquals(1, alf.size());
            }
        }
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDDataWithTable() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        ArrayList<SubTableValueItem> subTable = new ArrayList<SubTableValueItem>();
        SubTableValueItem tablelist1 = new SubTableValueItem();
        HashMap<String, FieldValue> tableitemvalue = new HashMap<>();
        tableitemvalue = addField(tableitemvalue, "文字列__1行_テーブル", FieldType.SINGLE_LINE_TEXT, "文字列__1行inテーブル");
        ArrayList<Member> userList = new ArrayList<Member>();
        userList.add(new Member("cyuan", "cyuan"));
        tableitemvalue = addField(tableitemvalue, "ユーザー選択_テーブル", FieldType.USER_SELECT, userList);
        tableitemvalue = addField(tableitemvalue, "ドロップダウン_テーブル", FieldType.DROP_DOWN, "sample1");
        tablelist1.setID(1);
        tablelist1.setValue(tableitemvalue);
        subTable.add(tablelist1);
        // Main Test processing
        testRecord = addField(testRecord, "サブテーブル", FieldType.SUBTABLE, subTable);
        UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.passwordAuthRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.SUBTABLE == record.get(entry.getKey()).getType()) {
                ArrayList<SubTableValueItem> al = (ArrayList<SubTableValueItem>) record.get(entry.getKey()).getValue();
                assertEquals(1, al.size());
            }
        }
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDDataWithTableToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        ArrayList<SubTableValueItem> subTable = new ArrayList<SubTableValueItem>();
        SubTableValueItem tablelist1 = new SubTableValueItem();
        HashMap<String, FieldValue> tableitemvalue = new HashMap<>();
        tableitemvalue = addField(tableitemvalue, "文字列__1行_テーブル", FieldType.SINGLE_LINE_TEXT, "文字列__1行inテーブル");
        ArrayList<Member> userList = new ArrayList<Member>();
        userList.add(new Member("cyuan", "cyuan"));
        tableitemvalue = addField(tableitemvalue, "ユーザー選択_テーブル", FieldType.USER_SELECT, userList);
        tableitemvalue = addField(tableitemvalue, "ドロップダウン_テーブル", FieldType.DROP_DOWN, "sample1");
        tablelist1.setID(1);
        tablelist1.setValue(tableitemvalue);
        subTable.add(tablelist1);
        // Main Test processing
        testRecord = addField(testRecord, "サブテーブル", FieldType.SUBTABLE, subTable);
        UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.tokenRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.SUBTABLE == record.get(entry.getKey()).getType()) {
                ArrayList<SubTableValueItem> al = (ArrayList<SubTableValueItem>) record.get(entry.getKey()).getValue();
                assertEquals(1, al.size());
            }
        }
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testUpdateRecordByIDDataWithTableCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        ArrayList<SubTableValueItem> subTable = new ArrayList<SubTableValueItem>();
        SubTableValueItem tablelist1 = new SubTableValueItem();
        HashMap<String, FieldValue> tableitemvalue = new HashMap<>();
        tableitemvalue = addField(tableitemvalue, "文字列__1行_テーブル", FieldType.SINGLE_LINE_TEXT, "文字列__1行inテーブル");
        ArrayList<Member> userList = new ArrayList<Member>();
        userList.add(new Member("cyuan", "cyuan"));
        tableitemvalue = addField(tableitemvalue, "ユーザー選択_テーブル", FieldType.USER_SELECT, userList);
        tableitemvalue = addField(tableitemvalue, "ドロップダウン_テーブル", FieldType.DROP_DOWN, "sample1");
        tablelist1.setID(1);
        tablelist1.setValue(tableitemvalue);
        subTable.add(tablelist1);
        // Main Test processing
        testRecord = addField(testRecord, "サブテーブル", FieldType.SUBTABLE, subTable);
        UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
        GetRecordResponse rp = this.certRecordManagerment.getRecord(APP_ID, id);
        HashMap<String, FieldValue> record = rp.getRecord();
        for (Entry<String, FieldValue> entry : testRecord.entrySet()) {
            assertEquals(entry.getValue().getType(), record.get(entry.getKey()).getType());
            if (FieldType.SUBTABLE == record.get(entry.getKey()).getType()) {
                ArrayList<SubTableValueItem> al = (ArrayList<SubTableValueItem>) record.get(entry.getKey()).getValue();
                assertEquals(1, al.size());
            }
        }
    }

    @Test
    public void tesUpdateRecordByIDInGuest() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = new HashMap<>();
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest 文字列__1行");
        AddRecordResponse addResponse = this.guestAuthRecordManagerment.addRecord(360, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        // Main Test processing
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest_文字列__1行__更新");
        UpdateRecordResponse response = this.guestAuthRecordManagerment.updateRecordByID(360, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void tesUpdateRecordByIDInGuestToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = new HashMap<>();
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest 文字列__1行");
        AddRecordResponse addResponse = this.tokenGuestRecordManagerment.addRecord(360, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        // Main Test processing
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest_文字列__1行__更新");
        UpdateRecordResponse response = this.tokenGuestRecordManagerment.updateRecordByID(360, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void tesUpdateRecordByIDInGuestCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = new HashMap<>();
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest 文字列__1行");
        AddRecordResponse addResponse = this.certGuestRecordManagerment.addRecord(360, testRecord);
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        // Main Test processing
        testRecord = addField(testRecord, "text", FieldType.SINGLE_LINE_TEXT, "guest_文字列__1行__更新");
        UpdateRecordResponse response = this.certGuestRecordManagerment.updateRecordByID(360, id, testRecord, revision);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdWithoutRevision() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdWithoutRevisionToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdWithoutRevisionCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdRevisionNegativeOne() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -1);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdRevisionNegativeOneToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -1);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test
    public void testUpdateRecordByIdRevisionNegativeOneCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        Integer revision = addResponse.getRevision();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -1);
        assertEquals((Integer) (revision + 1), response.getRevision());
    }

    @Test(expected = KintoneAPIException.class)
    public void testUpdateRecordByIdRevisionShouldFailLessThanNegativeOne() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -2);
    }

    @Test(expected = KintoneAPIException.class)
    public void testUpdateRecordByIdRevisionShouldFailLessThanNegativeOneToken() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -2);
    }

    @Test(expected = KintoneAPIException.class)
    public void testUpdateRecordByIdRevisionShouldFailLessThanNegativeOneCert() throws KintoneAPIException {
        // Preprocessing
        HashMap<String, FieldValue> testRecord = createTestRecord();
        AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord);
        // Main Test processing
        Integer id = addResponse.getID();
        testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after");
        this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, -2);
    }

    @Test(expected = KintoneAPIException.class)
    public void testUpdateRecordByIdShouldFailRevisionUnexisted
throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 100000); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailRevisionUnexistedToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 100000); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailRevisionUnexistedCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 100000); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailRevisionZero() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text 
after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 0); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailRevisionZeroToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 0); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailRevisionZeroCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, 0); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIDUnexisted() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(10000, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIDUnexistedToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(10000, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void 
testUpdateRecordByIdShouldFailAppIDUnexistedCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(10000, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIDNegativeNumber() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(-1, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIDNegativeNumberToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(-1, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIDNegativeNumberCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(-1, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIdZero() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(0, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIdZeroToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = 
createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(0, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailAppIdZeroCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(0, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdUnexisted() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, 100000, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdUnexistedToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, 100000, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdUnexistedCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, 100000, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdNegativeNumber() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); 
this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, -1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdNegativeNumberToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, -1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdNegativeNumberCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, -1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdZero() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, 0, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdZeroToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, 0, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailIdZeroCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, 0, testRecord, null); } @Test public void 
testUpdateRecordByIdInvalidFieldWillSkip() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "不在在的字段", FieldType.SINGLE_LINE_TEXT, "test single text after"); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test public void testUpdateRecordByIdInvalidFieldWillSkipToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "不在在的字段", FieldType.SINGLE_LINE_TEXT, "test single text after"); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test public void testUpdateRecordByIdInvalidFieldWillSkipCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = createTestRecord(); testRecord = addField(testRecord, "不在在的字段", FieldType.SINGLE_LINE_TEXT, "test single text after"); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, null); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test public void testUpdateRecordByIdWithoutRecord() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord 
= createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); UpdateRecordResponse response = this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, null, revision); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test public void testUpdateRecordByIdWithoutRecordToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); UpdateRecordResponse response = this.tokenRecordManagerment.updateRecordByID(APP_ID, id, null, revision); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test public void testUpdateRecordByIdWithoutRecordCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); UpdateRecordResponse response = this.certRecordManagerment.updateRecordByID(APP_ID, id, null, revision); assertEquals((Integer) (revision + 1), response.getRevision()); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInputStringToNumberField() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); 
AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInputStringToNumberFieldToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInputStringToNumberFieldCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailFieldProhibitDuplicate() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.passwordAuthRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void 
testUpdateRecordByIdShouldFailFieldProhibitDuplicateToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.prohibitDuplicateTokenRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailFieldProhibitDuplicateCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.certRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInvalidValueOverMaximum() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数值", FieldType.NUMBER, 11); this.passwordAuthRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInvalidValueOverMaximumToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数值", FieldType.NUMBER, 11); this.prohibitDuplicateTokenRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailInvalidValueOverMaximumCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数值", FieldType.NUMBER, 11); this.certRecordManagerment.updateRecordByID(1636, 2, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWhenDoNotSetRequiredField() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, 111); 
this.passwordAuthRecordManagerment.updateRecordByID(1640, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWhenDoNotSetRequiredFieldToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, 111); this.requiredFieldTokenRecordManagerment.updateRecordByID(1640, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWhenDoNotSetRequiredFieldCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数値", FieldType.NUMBER, 111); this.certRecordManagerment.updateRecordByID(1640, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdChangeCreatorEtc() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "作成日時", FieldType.CREATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "作成者", FieldType.CREATOR, new Member("cyuan", "cyuan")); testRecord = addField(testRecord, "更新日時", FieldType.UPDATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "更新者", FieldType.MODIFIER, new Member("cyuan", "cyuan")); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdChangeCreatorEtcToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); 
Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "作成日時", FieldType.CREATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "作成者", FieldType.CREATOR, new Member("cyuan", "cyuan")); testRecord = addField(testRecord, "更新日時", FieldType.UPDATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "更新者", FieldType.MODIFIER, new Member("cyuan", "cyuan")); this.tokenRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdChangeCreatorEtcCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "作成日時", FieldType.CREATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "作成者", FieldType.CREATOR, new Member("cyuan", "cyuan")); testRecord = addField(testRecord, "更新日時", FieldType.UPDATED_TIME, "2018-08-28T08:07:00Z"); testRecord = addField(testRecord, "更新者", FieldType.MODIFIER, new Member("cyuan", "cyuan")); this.certRecordManagerment.updateRecordByID(APP_ID, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWheDoNotHavepermissionOfApp() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.passwordAuthRecordManagerment.updateRecordByID(1632, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWheDoNotHavepermissionOfAppToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); 
this.noAddPermissionTokenReocrdManagerment.updateRecordByID(1632, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldFailWheDoNotHavepermissionOfAppCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.certRecordManagerment.updateRecordByID(1632, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldSuccessWheDoNotHavepermissionOfRecord() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.passwordAuthRecordManagerment.updateRecordByID(1634, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldSuccessWheDoNotHavepermissionOfRecordToken() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.addNoViewTokenRecordManagerment.updateRecordByID(1634, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldSuccessWheDoNotHavepermissionOfRecordCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test"); this.certRecordManagerment.updateRecordByID(1634, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdShouldSuccessWheDoNotHavepermissionOfField() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数值", FieldType.NUMBER, 123); this.passwordAuthRecordManagerment.updateRecordByID(1635, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void 
testUpdateRecordByIdShouldSuccessWheDoNotHavepermissionOfFieldCert() throws KintoneAPIException { HashMap<String, FieldValue> testRecord = new HashMap<>(); testRecord = addField(testRecord, "数值", FieldType.NUMBER, 123); this.certRecordManagerment.updateRecordByID(1635, 1, testRecord, null); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdWithoutRecordId() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(APP_ID, null, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdWithoutRecordIdToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(APP_ID, null, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdWithoutRecordIdCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(APP_ID, null, testRecord, revision); } @Test(expected = KintoneAPIException.class) 
public void testUpdateRecordByIdWithoutApp() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.passwordAuthRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.passwordAuthRecordManagerment.updateRecordByID(null, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdWithoutAppToken() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.tokenRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.tokenRecordManagerment.updateRecordByID(null, id, testRecord, revision); } @Test(expected = KintoneAPIException.class) public void testUpdateRecordByIdWithoutAppCert() throws KintoneAPIException { // Preprocessing HashMap<String, FieldValue> testRecord = createTestRecord(); AddRecordResponse addResponse = this.certRecordManagerment.addRecord(APP_ID, testRecord); // Main Test processing Integer id = addResponse.getID(); Integer revision = addResponse.getRevision(); testRecord = addField(testRecord, "文字列__1行", FieldType.SINGLE_LINE_TEXT, "test single text after"); this.certRecordManagerment.updateRecordByID(null, id, testRecord, revision); } }