package com.google.android.gms.internal.ads;
import android.content.Context;
import android.os.Bundle;
import android.view.View;
import android.view.ViewGroup;
import android.view.ViewParent;
import java.util.ArrayList;
import java.util.concurrent.Callable;
public final class zzcre implements zzcva<zzcrd> {

    /* renamed from: a */
    private final zzbbl f27267a;
    /* renamed from: b */
    private final Context f27268b;
    /* renamed from: c */
    private final zzcxv f27269c;
    /* renamed from: d */
    private final View f27270d;

    public zzcre(zzbbl zzbbl, Context context, zzcxv zzcxv, ViewGroup viewGroup) {
        this.f27267a = zzbbl;
        this.f27268b = context;
        this.f27269c = zzcxv;
        this.f27270d = viewGroup;
    }

    /* renamed from: a */
    public final zzbbh<zzcrd> mo28586a() {
        // If the server-side flag for the "Ad Key" signal is off, fail fast.
        if (!((Boolean) zzyt.m31536e().mo29599a(zzacu.f24203ya)).booleanValue()) {
            return zzbar.m26375a((Throwable) new Exception("Ad Key signal disabled."));
        }
        // Otherwise compute the signal asynchronously; the callable invokes mo31251b().
        return this.f27267a.mo30326a((Callable<T>) new C9408el<T>(this));
    }

    /* access modifiers changed from: 0000 */
    /* renamed from: b */
    public final /* synthetic */ zzcrd mo31251b() throws Exception {
        Context context = this.f27268b;
        zzyd zzyd = this.f27269c.f27601e;
        ArrayList arrayList = new ArrayList();

        // Walk up the view hierarchy from the ad view, recording for each
        // ancestor its class name and this child's index within it.
        View view = this.f27270d;
        while (view != null) {
            ViewParent parent = view.getParent();
            if (parent == null) {
                break;
            }
            int i = -1;
            if (parent instanceof ViewGroup) {
                i = ((ViewGroup) parent).indexOfChild(view);
            }
            Bundle bundle = new Bundle();
            bundle.putString("type", parent.getClass().getName());
            bundle.putInt("index_of_child", i);
            arrayList.add(bundle);
            // Stop once the parent is no longer a View (e.g., the root ViewParent).
            if (!(parent instanceof View)) {
                break;
            }
            view = (View) parent;
        }
        return new zzcrd(context, zzyd, arrayList);
    }
}
|
Performance of cat's eye modulating retro-reflectors for free-space optical communications Modulating retro-reflectors (MRR) couple passive optical retro-reflectors with electro-optic modulators to allow free-space optical communication with a laser and pointing/acquisition/tracking system required on only one end of the link. In operation, a conventional free-space optical communications terminal, the interrogator, is used on one end of the link to illuminate the MRR on the other end of the link with a cw beam. The MRR imposes a modulation on the interrogating beam and passively retro-reflects it back to the interrogator. These systems are attractive for asymmetric communication links in which one end of the link cannot afford the weight, power or expense of a conventional free-space optical communication terminal. Recently, MRR using multiple quantum well (MQW) modulators have been demonstrated using a large-area MQW placed in front of the aperture of a corner-cube. For the MQW MRR, the maximum modulation rate can range into the gigahertz, limited only by the RC time constant of the device. This limitation, however, is a serious one. The optical aperture of an MRR cannot be too small or the amount of light retro-reflected will be insufficient to close the link. For typical corner-cube MQW MRR devices the modulator has a diameter between 0.5-1 cm and maximum modulation rates less than 10 Mbps. In this paper we describe a new kind of MQW MRR that uses a cat's eye retro-reflector with the MQW in the focal plane of the cat's eye. This system decouples the size of the modulator from the size of the optical aperture and allows much higher data rates. A 10 Mbps free-space link over a range of 1 km is demonstrated. In addition, a laboratory demonstration of a 70 Mbps MQW focal-plane device is described. |
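A back-of-the-envelope sketch of the RC limit described in the abstract above (our illustration, not from the paper): for a reverse-biased MQW modulator, capacitance grows with device area $A$, so the electrical 3 dB bandwidth falls as the modulator gets larger:

\[ f_{3\,\mathrm{dB}} \approx \frac{1}{2\pi R C}, \qquad C = \frac{\varepsilon A}{d} \;\Rightarrow\; f_{3\,\mathrm{dB}} \propto \frac{1}{A}. \]

Placing a small MQW at the focal plane of a cat's eye keeps $A$ (and hence $C$) small while the collecting aperture stays large, which is why the focal-plane design supports much higher data rates than a modulator covering the full corner-cube aperture.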
Prevalence and associated risk factors for Giardia and Cryptosporidium infections among children of northwest Mexico: a cross-sectional study Background G. intestinalis and Cryptosporidium spp. are responsible for gastrointestinal infections worldwide. Contaminated food, feces and drinking water, together with predictors such as poverty and cultural and behavioral factors, have been implicated in their transmission. Published studies about these infections are limited in Mexico. Cananea, Sonora is located in northwest Mexico and is one of the regions with the lowest marginalization index in Sonora state. However, its rate of gastrointestinal infections increased from 48.7/1000 in 2003 to 77.9/1000 in 2010 in the general population. It has been estimated that the prevalence of giardiasis ranges from 20 to 30% in the Sonoran childhood population. However, the prevalences of giardiasis and cryptosporidiosis are unknown in Cananea, Sonora, and they are likely contributing to its gastrointestinal infection rates. Methods A total of 173 children (average age 8.8±2.8 years) participated in this cross-sectional study. Anthropometric measurements and stool analysis were performed. Socioeconomic, cultural and symptomatology information was collected. The association between risk factors and intestinal parasitic infections was analyzed by multivariate analysis using the STATA/SE package at a significance level of p≤0.05. Results More than half of the children (n=103, 60%) had intestinal parasitic infections. Cryptosporidium spp. showed the highest prevalence (n=47, 27%), followed by G. intestinalis (n=40, 23%). Children with giardiasis and cryptosporidiosis had lower H/A and BMI/A Z scores than children who were free of these infections. Children with giardiasis were at higher risk (OR=4.0; 95% CI=1.11-13.02; p=0.030) of reporting abdominal pain, and children who drank tap water were at higher risk (OR=5.0; 95% CI=1.41-17.20; p=0.012) of cryptosporidiosis. Conclusions This was the first epidemiological study conducted in children in the region of Cananea, Sonora in northwest Mexico. The findings revealed a high prevalence of cryptosporidiosis and giardiasis, and their interactions with multiple risk factors were investigated. This study suggests that giardiasis and cryptosporidiosis may play an important role as causative factors of gastrointestinal disease in the study region. Regional authorities should analyze water intended for human consumption for Cryptosporidium spp. and G. intestinalis. Background It has been estimated that approximately two hundred million people are affected yearly by giardiasis in Africa, Asia and Latin America. In addition, the overall prevalence of giardiasis has been estimated at 2 to 5% in industrialized countries; meanwhile, the prevalence of Giardia intestinalis (G. intestinalis) in developing countries has been estimated at 15 to 20% in children younger than 10 years of age. Another protozoan, Cryptosporidium spp., is found worldwide with a prevalence ranging from 4 to 31% in some developing countries. C. parvum and G. intestinalis have been recognized as important causes of gastrointestinal infections and diarrhea, and they can be transmitted from person to person via the fecal-oral route or indirectly through food and drinking water contaminated with human or animal feces. 
In addition to these transmission routes, factors contributing to the spread of these parasitic infections include socioeconomic status, age, household crowding, education, animal ownership and lack of access to clean water and proper sanitation. In Mexico, the prevalence of giardiasis ranges from 3% up to 50% in different regions, but the epidemiological trends of cryptosporidiosis remain unknown at the national level. In 2006, sporadic published information highlighted that 41% of 100 infants hospitalized in a public health institution located in Mexico City had Cryptosporidium spp. In 2013, Olivas-Enriquez et al. detected C. parvum in 19 of 38 (50%) household water samples and in 12 of 13 study communities. In 2010, it was reported that 5.1% of 5459 infants from 1 month to 5 years old had Cryptosporidium parvum in Guadalajara, Mexico. Meanwhile, it was estimated that 14% (372,742 people) of the 2,662,480 inhabitants of the state of Sonora (northwest Mexico) lived in rural communities. Sonora is the second largest Mexican state by territorial area and consists of 72 municipalities, one of which is Cananea. Some years ago a health report pointed out that Cananea had the second highest rate of gastrointestinal infections (increasing from 48.7/1000 in 2003 to 77.9/1000 in 2010) at the state level. Based on this, giardiasis and cryptosporidiosis may be associated with the high rate of gastrointestinal infections reported in that region. Because these infections remain unstudied in the region of Cananea, we investigated the prevalence of giardiasis and cryptosporidiosis and their association with socioeconomic factors and behavioral habits in children from that region. This information will be useful to the regional health authorities for preventing and controlling these infections. Study site and population This was a cross-sectional study conducted from February 2013 to September 2014 in the region of Cananea, Sonora (northwest Mexico). Cananea has a population of 32,936 inhabitants; it is located at 1654 m above sea level and is bordered to the north by the United States and by other Sonoran municipalities such as Naco to the northwest, Arizpe to the south, and Imuris and Santa Cruz to the west (Fig. 1: Region of Cananea, Sonora, northwest Mexico). Cananea has a humid warm climate with an average annual temperature of 15.3°C, average temperatures of 18°C and 14°C during spring and autumn, respectively, and annual rainfall of 545 mm. For this study, three kindergartens and two public primary schools in the region of Cananea were selected because they were located in low-income areas and had high rates of gastrointestinal disease. A total of 366 preschoolers and 491 school children (total n = 857) who were officially enrolled in the selected schools (school year 2013-2014) were invited to participate. At the same time, plastic containers were distributed to collect stool samples (three per child). The study protocol was explained to the school authorities and parents. Ethical considerations Informed consent that explained the purposes, benefits, and risks of the study was provided to each parent or guardian of the participating children prior to starting the study. Two hundred and three children wished to participate in this study. One hundred and seventy-three (20%) children fulfilled the required study criteria. The parents of 30 (3.7%) children did not sign the informed consent. The remaining children (n = 654, 76.3%) did not participate for various reasons. 
Both participating and nonparticipating children lived in the same living conditions around the selected kindergartens and schools. The ethics committee of the Centro de Investigación en Alimentación y Desarrollo approved this study. Children with intestinal parasitic infections received proper treatment from an experienced physician. Anthropometric measurements Standing height was measured using a stadiometer (Holtain Ltd., Dyfed, UK) with 2.05 ± 0.001 m capacity, and weight was measured to the nearest 10 g using a digital electronic scale (AND FV-150 KA1, A&D Co. Ltd., Toshima-ku, Tokyo, Japan) according to standardized recommendations. Ages were validated from reliable official school records. The weight-for-age (W/A), height-for-age (H/A), and body mass index-for-age (BMI/A) Z scores were calculated using WHO Anthro for personal computers, version 1.0.4, 2009: Software for assessing the growth and development of the world's children. Undernutrition risk was defined as −2 to < −1 Z scores and moderate and severe undernutrition as below −2 Z scores, considering the median reference values of H/A (stunting), W/A, and BMI/A. Fecal sample collection, processing, and analysis In the Faust technique, each fecal sample was poured into a round-bottom tube (100 by 13 mm) to within 20 mm of the rim. Then, 3 mL of distilled water was added, and the fecal material and water were mixed. The suspension was centrifuged for 10 min at 2500 rpm (700 g). All centrifugations were performed without mechanical braking. The supernatant was decanted, and the last drop was drained onto a clean section of paper towel. This washing procedure was repeated 3 times. Then, 3.5 mL of aqueous ZnSO4 solution (1.180 specific gravity) was added to within 50.8 mm of the rim of the tube. The packed sediment was re-suspended using applicator sticks until no coarse particles remained. This suspension was centrifuged for 5 min at 2500 rpm (700 g) and transferred without agitation to a rack that held it upright; the suspension was then allowed to stand for 20 min. With a wire loop of 5 mm in diameter bent at a right angle to the stem, two loops of the surface film were transferred to a drop of iodine solution (Weigert's solution) on a glass slide (76.2 by 50.8 mm) for wet-mount examination. Then, 10× and 40× objectives were used for identification of cysts of G. intestinalis. The specific gravity of the zinc sulfate solution was checked every 7 days throughout the study using a calibrated hydrometer with a specific gravity range of 1.00 to 1.20. In addition, 1 g of fecal sample was smeared on clean slides and stained with the cold acid-fast Kinyoun stain to identify Cryptosporidium spp. oocysts by light microscopy at a magnification of 100×. Collection of information from children and their families Particular and socioeconomic information was collected with a structured and locally adapted questionnaire. The interviews were administered face-to-face with parents in the children's households. A well-trained technician and the person responsible for the study conducted the interviews to lessen potential bias. Several scoring indices were constructed to describe the population. The civil status of the parents was assessed from marital status and was assigned 1 for married (taken as synonymous with stability) or 0 for risk or low stability. 
Socioeconomic status was assessed from employment status and parental education, assigning 1 for employed or 0 for unemployed, and 1 for complete or 0 for incomplete secondary school. Household conditions were assessed by the type of material used for walls (brick, adobe, block or cardboard), roofs (cement or aluminum plus wood) and floors (earth, cement or mosaic), which were categorized based on local costs of materials and the presence or absence of window nets. Sanitation facilities and hygiene indices were assessed as the use of a flushing toilet or a latrine; drinking water was assessed as treated water or tap water. We inquired about proper hygiene and hand-washing before eating, after restroom use, or after touching pets present in the household, and these were assigned 1 if "always" or 0 if "not always", and food washing likewise 1 if "always" or 0 if "not always". Crowding was estimated using the number of people per room and was categorized as fewer than three or more than three people per room in agreement with the WHO 2006 guidelines. Family income (including economic support from different sources) was estimated as the number of minimum daily wages ($67.3 pesos or $4.86 USD at the prevailing exchange rate of $13.84 pesos per dollar at the study time), obtained by dividing the daily family income by the current local minimum daily wage. The health status of the children was investigated based on the child's visible symptomatology or symptoms present in the last 7 days as reported by the mothers at the time of interview. Values of 0 and 1 were assigned for the absence and presence, respectively, of abdominal pain and headache. Finally, the absence or presence of domestic animals in the households was investigated. Statistical analysis An exploratory analysis of the database was conducted. Age and anthropometric indicators of the study children were expressed as mean values with standard deviations. The prevalence of infection by a parasite species was expressed as the percentage of children with pathogenic or commensal protozoa, or with each identified species or genus of protozoa, present in any of the provided fecal samples. Analysis of covariance (ANCOVA) was used to examine differences between mean values while taking into account the influence of uncontrolled independent variables, and the Wilcoxon rank-sum test was used to test differences between medians. Proportions were compared using the chi-square test with the corresponding odds ratios (OR), 95% confidence intervals and two-sided p values. The association between risk factors and intestinal parasitic infections was analyzed using both univariate analysis by simple logistic regression and multiple logistic regression analysis. All biologically plausible variables with OR > 1 and p ≤ 0.2 in univariate analysis were selected for multiple logistic regression analysis using stepwise forward selection with an acceptance criterion of p ≤ 0.05 and adjusted ORs. The resulting preliminary model was evaluated for interaction (p ≤ 0.1) and collinearity (correlation coefficient > 0.7) to generate the final adjusted model. In all constructed models, the dependent variable was intestinal parasitic infection and the risk factor was the hypothesized independent variable. Variables judged to be possible confounding factors, such as sex and Z scores for the anthropometric indices, were used in the multiple logistic regression analysis. Regression diagnostics to identify outliers and influential data points were also conducted. 
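As an illustration of this two-stage model-building strategy, here is a minimal sketch in Python (the authors used STATA; this is not their code, and all variable and column names are hypothetical):

```python
# Hypothetical sketch of the screening strategy described above:
# univariate logistic regression keeps factors with OR > 1 and p <= 0.2,
# then a multivariable model adjusts for sex, age and anthropometric Z scores.
import numpy as np
import statsmodels.api as sm

def univariate_screen(df, outcome, candidates, p_enter=0.2):
    """df: pandas DataFrame with a 0/1 outcome and coded risk factors."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        odds_ratio = np.exp(fit.params[var])
        if odds_ratio > 1 and fit.pvalues[var] <= p_enter:
            kept.append(var)
    return kept

def adjusted_model(df, outcome, factors, adjusters):
    """Multivariable logistic model; returns adjusted ORs with 95% CIs."""
    X = sm.add_constant(df[factors + adjusters])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    return np.exp(fit.params), np.exp(fit.conf_int()), fit.pvalues

# Example usage (hypothetical column names):
# kept = univariate_screen(data, "giardiasis",
#                          ["tap_water", "animals", "crowding"])
# ors, cis, pvals = adjusted_model(data, "giardiasis", kept,
#                                  ["sex", "age", "haz", "waz", "bmiz"])
```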
All data were analyzed using the statistical software STATA/SE version 12.0 (StataCorp. 2011. Stata Statistical Software: Release 12. College Station, TX: StataCorp LP). Prevalence of Giardia and Cryptosporidium A total of 488 fecal samples were collected and transported in ice boxes to the parasitology laboratory of the Centro de Investigación en Alimentación y Desarrollo and were stored between 5°C and 7°C for 24-72 h until analysis. Eighty-seven percent (n = 150), 8% (n = 15) and 5% (n = 8) of the participating children gave 3, 2 and 1 sample(s) per child, respectively. The overall prevalence of pathogenic and commensal protozoa found in the participating children (n = 173) is shown in Table 1. More than half (n = 103) of the children had protozoan infections (60%), and 29% (n = 50) had infections with two or more protozoan genera. No difference was found in the prevalence of protozoan infections between females and males (p = 0.290; data not shown). Cryptosporidium spp. showed the highest prevalence (n = 47, 27%), followed by G. intestinalis (n = 40, 23%). In addition, E. histolytica/dispar/moshkovskii were also detected, albeit at a lower prevalence. However, the species of the E. histolytica/dispar/moshkovskii complex were not identified. On the other hand, E. nana had the highest prevalence (n = 57, 33%) of all pathogenic and commensal protozoa detected in this study. I. bütschlii had a lower prevalence (n = 1, 0.6%). Finally, the helminth H. nana was found in only 4 children (2%) (not shown in Table 1). E. histolytica is well recognized as a pathogenic amoeba, but E. dispar, E. nana, E. coli and I. bütschlii are considered nonpathogenic amoebae. However, the pathogenicity of E. moshkovskii remains unknown. Infection and nutritional status At baseline, the average age of the study children (n = 173) was 8.8 (±2.8) years. Fifty-one percent (n = 88) and 49% (n = 85) were female and male, respectively. No difference was found between the proportions of female and male participants (p = 0.747). Our study children consisted of 26 preschool and 147 primary school children with average ages of 3.6 (±0.98) and 9.7 (±1.93) years, respectively. In relation to the anthropometric measurements, proportions below −2 SD were determined for W/A, H/A (stunting) and BMI/A. No difference was found in age, weight, height, or W/A Z scores between children with and without giardiasis or cryptosporidiosis (Table 2). However, children with giardiasis or cryptosporidiosis had significantly lower H/A and BMI/A Z scores than uninfected children (p = 0.001 and p = 0.028, and p = 0.002 and p = 0.030, respectively) (Table 2). Distribution of risk factors of the families of participating children One hundred and seventy-three mothers and 45 fathers of the children responded to the questionnaires. Most of the participating children came from homes with unmarried mothers and fathers (Table 3). At the time of the interview, most mothers were housewives and most fathers were employed. Most parents had not completed their secondary education, and more than half of the families were living on 1 or fewer minimum wages. Twenty-four percent of the families of the participating children were living in crowded conditions, and 90% were living in households made of materials that were appropriate for the weather conditions of the study area. Also, most of the children's families drank water directly from the tap (59%). In addition, 43% of the families had domestic animals (Table 3). Factors associated with G. 
intestinalis and Cryptosporidium spp. G. intestinalis and Cryptosporidium spp. were separately analyzed as dependent variables with the risk factors as independent variables. Univariate analysis revealed that drinking water type, presence of domestic animals in the household, symptomatology and seasonality of sample collection for giardiasis, and drinking water type and presence of domestic animals in the household for cryptosporidiosis, fulfilled the acceptance criterion for examination in the multiple regression analysis (Table 4). The civil status, education and economic activity of the parents; family income; crowding and household conditions did not meet the criteria (data not shown). Nutritional status, sex and age are recognized to be factors that influence the prevalence of intestinal parasitic infections. They were used as adjustment variables in the multiple logistic regression analysis (stepwise), and preliminary models for giardiasis and cryptosporidiosis were produced. Interaction (p > 0.1) and collinearity (correlation coefficient < 0.7) were not found for those models, and the final models were defined (Table 5). Giardia-infected children were at higher risk of presenting abdominal pain (OR = 4.0, 95% CI = 1.11-13.02, p = 0.030), adjusted by sex, age, and W/A, H/A, and BMI/A Z scores. On the other hand, children drinking tap water were at higher risk (OR = 5.0, 95% CI = 1.41-17.20, p = 0.012) of cryptosporidiosis with the same adjustment variables (Table 5). Discussion This study concerned the prevalence of G. intestinalis and Cryptosporidium spp. and the risk factors associated with their presence. The study was performed from February 2013 to September 2014 in children from two public elementary schools and three public kindergartens located in the region of Cananea in northwestern Mexico. More than half of the children (n = 103, 60%) had infections by pathogenic and commensal protozoa, and one-third had intestinal polyparasitism (n = 50, 29%). In 2010, Sánchez and Miramontes also found that nearly half (42.2%) of 2055 children from 3 to 19 years of age in 14 rural communities in San Luis Potosí had intestinal parasites. In addition, no difference was found in the prevalence of these infections between boys and girls or school and preschool children in this study. Some years ago, another local study found no difference (p = 0.989) in the prevalence of these infections between male (n = 157) and female (n = 155) school children in different rural and suburban areas of southern Sonora. These children are probably exposed to the same risk factors, irrespective of their sex. In addition, a recent systematic epidemiological review (n = 103 studies) in Iran revealed that the similar prevalences of these infections between Iranian preschool and school children (38.2 and 43.4%, respectively) are probably influenced by factors such as sanitation, hygiene, awareness of people, seasonal variations and health education. On the other hand, G. intestinalis (n = 40, 23%) remains an important protozoan causing infection in school children in northwest Mexico. Conversely, Sánchez and Miramontes found a low prevalence of giardiasis (approximately 5%) in 2126 children from 3 to 19 years of age, which is probably associated with the massive antiparasitic campaigns in their study communities. In this study, the prevalence of cryptosporidiosis (n = 47, 27%) was as high as that of giardiasis (23%). 
Therefore, this study revealed that G. intestinalis is not the only predominant protozoan affecting children in Cananea in northwest Mexico; it is accompanied by Cryptosporidium spp. Meanwhile, the helminth H. nana and the protozoa E. histolytica/dispar/moshkovskii were found at low prevalences (2 and 2.3%, respectively). These prevalence levels were lower than those published in 2015 for these organisms (16 and 10%, respectively) in school children. The low prevalence of H. nana is probably associated with the national albendazole campaign administered twice a year in the study site, whereas the low prevalence of E. histolytica/dispar/moshkovskii is probably a result of the poor efficacy of the Faust technique for detecting these organisms. Sánchez and Miramontes observed the same finding (approximately 1.2% for E. histolytica) using the Faust technique. It should be noted that these prevalence data may be underestimates, because proper molecular assays are more effective and sensitive than microscopic methods for the detection of parasitic infections and differentiation of species. However, their high cost is still a limiting factor. On the other hand, a high prevalence of commensal protozoa was found in our study (E. nana, 33% and E. coli, 17%). These protozoa are indicators of poor sanitary conditions, and they are a public health concern because they use the same transmission routes as pathogenic organisms. With respect to nutritional status, the study children had a prevalence of undernutrition according to H/A (stunting) and W/A Z scores that was lower than, and similar to, respectively, the prevalence published by the national survey in 2012 (3.3% vs. 13.6% and 1.5% vs. 1.6%). However, it should be emphasized that our study children are not representative of the entire population of the Sonora state. On the other hand, children with giardiasis and cryptosporidiosis had lower H/A and BMI/A Z scores than children free of these infections (Table 2), which is probably a result of the parasites interfering with intestinal absorption, leading to malnutrition. Durán et al. found, in 3388 children, that the average BMI of Giardia-infected children was significantly lower than that of children free of this infection (17.86 ± 0.22 kg/m2 versus 19.49 ± 0.097 kg/m2). Similarly, a study in Brazil in 2007 found a lower H/A Z score in children with giardiasis than in Giardia-free children. These authors suggested that giardiasis can reduce the weight of affected children, and that chronic infection may contribute to the height deficits observed in their study communities. In our study, giardiasis or cryptosporidiosis was associated with stunting, and different studies have shown that even asymptomatic giardiasis and cryptosporidiosis can be associated with growth shortfalls. However, it should be emphasized that factors such as re-infection, chronic infection and chronically poor dietary intake can also predispose children to stunting. Furthermore, the potable water system is of great concern for the transmission of intestinal protozoan infections in the study site. Drinking water directly from the tap was a risk factor for cryptosporidiosis (OR = 4.42) in the participating children. The agency responsible for water management at the local level adds 5.0 mg of chlorine per liter of water to a concentration reservoir before the water is distributed to the water distribution network of the Cananea region. However, Cryptosporidium spp. 
is resistant to conventional disinfectants, and active infections have been reported in mice inoculated with 60,000 oocysts that had been exposed for 90 min to 80 mg of chlorine per liter of water. On a worldwide scale, there is great concern among authorities responsible for providing safe drinking water for human consumption because an increasing number of waterborne outbreaks of this infection have been reported, particularly in the USA and UK. Innovative technologies to improve detection, monitoring and surveillance of Cryptosporidium appeared after 1996, when the American federal government designated these organisms as drinking water contaminants and prompted the USEPA to create and implement drinking water regulations. In Mexico, by contrast, drinking water regulations include standards that do not require Cryptosporidium spp. analysis. No association was found between giardiasis or cryptosporidiosis and the age of the children, the civil status and education of the parents, family income, crowding, household conditions, domestic animals at home, or seasonality (data not shown). On the other hand, symptomatology was associated with giardiasis, but not with cryptosporidiosis. This probably reflects differences in children's susceptibility to giardiasis or cryptosporidiosis in this study. Finally, this was the first epidemiological study in Cananea focusing on children. Our study has limitations to be considered while interpreting the results. We conducted three stool examinations to detect G. intestinalis and Cryptosporidium spp., but 8% (n = 14) and 5% (n = 9) of the children gave two and one sample(s), respectively. Because optimal laboratory diagnosis of parasites in stool samples requires the examination of at least three specimens collected over different days, some degree of underestimation could be present. Previous studies have suggested that one stool sample has a sensitivity of between 50 and 70%, but three serial samples have a sensitivity of up to 90%. Conclusions about the causality of associations between different factors and the detected protozoa could not be drawn because this is a cross-sectional study design. Most socioeconomic characteristics of the participating children were not associated with giardiasis or cryptosporidiosis in this study, which is probably related to the small sample size. On the other hand, although this study had a small sample size, it was sufficiently powered to identify significant associations between giardiasis and cryptosporidiosis and several variables, even after adjusting for the presence of other factors in the analysis. Conclusion This study provided data on the prevalence of, and important data regarding risk factors for, giardiasis and cryptosporidiosis. These infections are a public health concern in the studied children of the region of Cananea, Sonora, northwest Mexico. The data suggested that giardiasis and cryptosporidiosis may contribute to height deficits in our study children. On the other hand, because domestic tap water is a risk indicator of infection, monitoring of bacteria in drinking water, as dictated by Mexican law, should be accompanied by analysis for G. intestinalis and Cryptosporidium to assess the quality standards of drinking water in the study site. It was also suggested that giardiasis and cryptosporidiosis may be partly responsible for the rate of gastrointestinal diseases present in the population of that region. 
Based on these findings, actions must be taken by the regional health authorities to reduce and prevent gastrointestinal infections as well as to monitor the quality of the drinking water in the study site. |
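A quick arithmetic check of the serial-sampling sensitivity figures cited in the limitations above (our illustration, assuming each examination independently detects an existing infection with per-sample sensitivity $s$):

\[ P(\text{detected in } n \text{ samples}) = 1 - (1 - s)^n, \qquad 1 - (1 - 0.6)^3 \approx 0.94, \]

so a per-sample sensitivity of about 60% is consistent with the quoted 50-70% for a single specimen rising to roughly 90% or more with three serial specimens.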
Trauma: Contemporary Directions in Theory, Practice, and Research, by Shoshana Ringel and Jerrold R. Brandell (Eds.) This book helps trainees understand how to collect, organize, and integrate an abundance of information; how to form clinical inferences; and how to plan, implement, and evaluate their interventions. It will make it easier for trainees to begin to think integratively about their clients and how to select interventions. It fills a gap in teaching students the conceptualizing skills needed to understand and treat clients. As such, it is a must-read for assignment in clinical practice courses in all social work programs. |
#include <iostream>
#include <cstring>
using namespace std;

// "Encrypt" by rotating the string one position to the left:
// the first character (chu) moves to the end.
void encrypt(char *s) {
    int len = strlen(s);
    if (len == 0) return;  // guard against an empty string
    char chu = s[0];
    for (int i = 0; i < len - 1; i++) {
        s[i] = s[i + 1];
    }
    s[len - 1] = chu;
}

// "Decrypt" by rotating one position to the right:
// the last character (mo) moves back to the front.
void decrypt(char *s) {
    int len = strlen(s);
    if (len == 0) return;  // guard against an empty string
    char mo = s[len - 1];
    for (int i = len - 1; i >= 1; i--) {
        s[i] = s[i - 1];
    }
    s[0] = mo;
}

int main() {
    char st[1000];
    cin >> st;             // read one whitespace-delimited word
    encrypt(st);
    cout << st << endl;    // rotated left by one character
    decrypt(st);
    cout << st;            // restored to the original input
    return 0;
}
|
#include "Posicao.h"
Posicao::Posicao()
{
}
Posicao::~Posicao()
{
}
void Posicao::mover()
{
velocidade += aceleracao;
espaco += velocidade;
}
void Posicao::setEspaco(Vetor_R2 esp)
{
espaco = esp;
}
Vetor_R2& Posicao::getEspaco()
{
return espaco;
}
void Posicao::setVelocidade(Vetor_R2 vel)
{
velocidade = vel;
}
Vetor_R2& Posicao::getVelocidade()
{
return velocidade;
}
void Posicao::setAceleracao(Vetor_R2 ace)
{
aceleracao = ace;
}
Vetor_R2& Posicao::getAceleracao()
{
return aceleracao;
}
void Posicao::setPlatAtual(int plat)
{
platAtual = plat;
}
int Posicao::getPlatAtual()
{
return platAtual;
}
|
Role of Dipeptidyl Peptidase 4 Inhibitors in Antidiabetic Treatment In recent years, important changes have occurred in the field of diabetes treatment. The focus of the treatment of diabetic patients has shifted from control of blood glucose itself to the overall management of risk factors, with blood glucose goals adjusted for the individual. In addition, regulators now require new antidiabetic drugs to be tested for cardiovascular safety. The newest classes of drugs have been shown to reduce major adverse cardiovascular events, including sodium-glucose cotransporter 2 (SGLT2) inhibitors and some glucagon-like peptide 1 (GLP1) receptor agonists. As such, they have a prominent place in hyperglycemia treatment algorithms. In recent years, the role of DPP4 inhibitors (DPP4i) has been modified. DPP4i have a favorable safety profile and an anti-inflammatory profile, do not cause hypoglycemia or weight gain, and do not require dose escalation. In addition, they can also be used in some types of chronic kidney disease patients and in elderly patients with diabetes. Overall, DPP4i, as a class of safe oral hypoglycemic agents, have a role in the management of diabetic patients, and there is extensive experience in their use. Introduction The dipeptidyl peptidase 4 (DPP4) enzyme is a type II transmembrane glycoprotein, expressed ubiquitously in many tissues, including immune cells, kidney, liver, pancreas and fat cells, and present as a soluble form in the circulation. Dipeptidyl peptidase 4 is a serine protease that can cleave and inactivate the incretin hormones glucagon-like peptide 1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), as well as neuropeptides and chemokines. In addition, DPP4 has been shown to have a direct pro-inflammatory role in lymphocytes, macrophages, and smooth muscle cells. Dipeptidyl peptidase 4 plays a major role in glucose and insulin metabolism, but its functions are not yet fully understood. On one hand, DPP4 degrades incretins such as GLP-1 and GIP, ultimately leading to reduced insulin secretion and abnormal visceral adipose tissue metabolism; on the other hand, DPP4 regulates postprandial glucose through degradation of GLP-1. Because of its ability to prevent the inactivation of GLP-1, DPP4 inhibition (DPP4i) was explored as a target for the treatment and management of type 2 diabetes mellitus (T2DM) in the 1990s. T2DM is the most common type of diabetes and is associated with low-grade chronic inflammation induced by excessive visceral adipose tissue. This inflammatory status results in dysregulation of homeostatic glucose regulation and peripheral insulin sensitivity. Dipeptidyl peptidase 4 activity correlates with the onset and severity of obesity and diabetes. Plasma DPP4 activity is elevated in diseases including T2DM, obesity, chronic diabetic kidney disease, cardiovascular diseases and atherosclerosis. Recently, the circulating levels of endogenous soluble DPP4 were found to be dissociated from the extent of systemic inflammation, glucose intolerance and white adipose tissue inflammation; therefore, the search for DPP4 inhibitors is a viable approach. Sitagliptin may be a potential therapeutic agent for the treatment of cardiovascular diseases via suppression of p38/NF-κB signaling activation. This review discusses the role of DPP4i in the treatment of diabetes, highlighting their benefits and risks. 
The article focuses primarily on the effects of the approved DPP4i: sitagliptin, vildagliptin, saxagliptin, alogliptin, and linagliptin. Mechanisms of Effect of DPP4i Diabetes mellitus (DM) is a worldwide health problem and a major cause of blindness, chronic kidney disease (CKD), stroke, lower extremity amputations, coronary heart disease and heart failure (HF). T2DM has changed from a chronic disease of the elderly, as traditionally conceived, to a chronic disease of the middle-aged and even of children and adolescents. Excess body fat along with age constitute the two most important risk factors for the premature development of T2DM. Early-onset T2DM, relative to late-onset disease, is associated with a more rapid deterioration of β-cell function, emphasizing the importance of early diagnosis and treatment initiation. Obesity-related mechanisms that are potentially linked to the severity of the disease include adipocyte lipid spillover, ectopic fat accumulation and tissue inflammation. Therapies aiming to decrease body weight are consequently a valuable strategy to delay the onset and decrease the risk of T2DM, as well as to manage established disease. In the past few decades, drug therapy for T2DM has developed greatly and involves several new strategies, including more patient-friendly ways to use drugs, such as improving weight loss. However, animal studies have demonstrated that a key barrier to the development of anti-obesity drugs is the inability to predict human cardiovascular safety. At tolerable doses, such drugs rarely achieve 10% weight loss. Although the clinical success of these agents has laid the foundation for a new era of anti-obesity drugs, there is considerable debate as to how GLP1/GIP regulates metabolism and whether its receptor agonists or antagonists should be the drugs of choice for treating obesity and T2DM. At present, DPP4 inhibitors are widely used for the treatment of T2DM. The basis for this approach lies in the finding that DPP4 has a key role in determining the clearance of the incretin hormone GLP1. GLP1 is an intestinal peptide known to have a role in glucose homeostasis via actions that include potentiation of glucose-induced insulin secretion and suppression of glucagon secretion. A dipeptidyl peptidase 4 inhibitor (DPP4i) itself has no hypoglycemic activity; instead, its anti-hyperglycemic effect is achieved primarily by altering the levels of endogenous substrates. Once the catalytic activity of DPP4 is inhibited, the levels of these substrates change. To date, GLP1 has been considered to play the major role in the therapeutic effect of DPP4i. GLP1 has been shown to be a physiological DPP4 substrate. In vivo, endogenous levels of the intact, biologically active peptide increase with DPP4 inhibition and are associated with improved glucose homeostasis. Some studies found that a GLP1 receptor antagonist blocked GLP1 signaling and the hypoglycemic effect of DPP4i decreased, confirming the role of GLP1 in the mechanism of action of DPP4i. These studies also indicate that GLP1 is not the only regulatory factor: even in the absence of GLP1 receptor activation, the hypoglycemic activity of DPP4i is still significant. Another physiological substrate of DPP4 is glucose-dependent insulinotropic polypeptide (GIP), also an incretin, and the level of GIP increases with inhibition of DPP4 activity. 
Similar to GLP-1, GIP enhances insulin secretion in pancreatic beta cells in a glucose-dependent manner but appears to act in a different way on glucagon secretion. The response to GIP is also impaired in T2DM patients. In the past, the possible role of GIP in the treatment of T2DM was largely ignored, because early studies showed that GIP's ability to stimulate insulin secretion is severely impaired in this disease. However, further studies to explore this problem in T2DM patients could not be carried out due to the lack of appropriate GIP receptor antagonists. Recent studies have shown that GIP can improve glycemic control in patients with T2DM and have revived work on the development of novel antagonists. These studies have led to a re-evaluation of the role of GIP in the anti-hyperglycemic effect of DPP4i. In addition, GLP1's ability to inhibit glucagon secretion is weakened when blood glucose levels drop below normal fasting levels, while GIP enhances the glucagon response at hypoglycemic levels. Thus, during insulin-induced hypoglycemia, glucagon secretion is increased due to GIP. Therefore, the increase in intact GIP levels observed after inhibition of DPP4 may help maintain the counter-regulatory glucagon response when glucose levels fall toward hypoglycemia. Thus, GIP's role in improving glucagon counter-regulation may further contribute to reducing the risk of hypoglycemia associated with DPP4i. Recent studies found direct or indirect roles of soluble DPP4 in the brain, stomach, liver, kidney, adipose tissue, pancreas (with islets), cardiovascular system and muscle through GLP1/GIP signaling (Figure 1). However, whether other DPP4 substrates also contribute to the therapeutic effect of DPP4i remains to be determined. In vitro, many peptide hormones and chemokines are susceptible to DPP4 cleavage when incubated with DPP4 at high concentrations. However, there is not much evidence that they are altered in vivo by DPP4i, and there have been no adverse reactions or safety issues caused by off-target effects of DPP4i on other endogenous substrates. 
DPP4 Inhibitors When DPP4 was identified as a therapeutic target, the search began for compounds suitable for clinical use, leading to the progressive development of DPP4 inhibitors such as sitagliptin and saxagliptin. Currently, several structures designed for target-specific interaction with DPP-4 are known and officially approved by the United States Food and Drug Administration (sitagliptin, saxagliptin, alogliptin, and linagliptin), while vildagliptin is authorized in Europe (Table 1). Sitagliptin. Sitagliptin was the first DPP4i to receive marketing approval. The apparent terminal elimination half-life of sitagliptin is 12.4 h and renal clearance is 350 mL/min. In healthy adult volunteers, sitagliptin is rapidly absorbed orally after a single 100 mg dose and reaches peak plasma concentration 1-4 h after the dose. The pharmacokinetic characteristics of sitagliptin in T2DM are generally similar to those in healthy volunteers. Earlier results showed that sitagliptin is mainly eliminated by renal excretion, with renal clearance accounting for about 70% of the plasma clearance of sitagliptin in healthy volunteers. The absolute bioavailability of sitagliptin is 87%, and oral absorption is not affected by food, so the drug can be taken with or without food. Vildagliptin. Vildagliptin was the second DPP4i, approved in Europe. About 57% of circulating vildagliptin is metabolized in a cytochrome-independent manner, largely by hydrolysis to an inactive molecule (LAY151). The remaining 18% circulates as active drug. Therefore, compared with sitagliptin, it has a shorter half-life (~2 h) and is administered in a twice-daily regimen. This metabolism is the main route of elimination of the parent drug; however, LAY151 is cleared by the kidney, and exposure increases in patients with impaired renal function. Saxagliptin. Saxagliptin is an effective anti-diabetes drug that prolongs the inhibition of the DPP4 enzyme and is metabolized via cytochrome P450 3A4/3A5. Saxagliptin is used at a dose of 2.5 mg; poor membrane permeability and water solubility, together with a short half-life (4-6 h), may mean it needs to be dosed more than once daily. The parent molecule of saxagliptin is cleared by metabolism in the liver and its metabolites are cleared by the kidneys. However, the effect of liver damage on drug exposure is small, meaning that the therapeutic dose does not need to be changed; on the other hand, consistent with other renally eliminated DPP4i, some dose reduction is recommended when renal function declines. Linagliptin. Linagliptin is the latest to come to market, approved for glycemic management of type 2 diabetes. The metabolism of linagliptin is minimal, and its half-life is long (the effective half-life is about 12 h; terminal decay is greater than or equal to 100 h). Compared with other DPP4i, however, the kidney plays a very small role in its elimination: less than 6% of the drug is cleared by the kidney, and most is excreted into bile and then eliminated in the feces. Therefore, linagliptin is not affected by changes in renal function and its dose is not adjusted according to renal function. 
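To relate the half-lives quoted above to the dosing intervals they support, an illustrative first-order elimination calculation (ours, not from the review):

\[ \frac{C(\tau)}{C_0} = e^{-k\tau}, \qquad k = \frac{\ln 2}{t_{1/2}}. \]

For sitagliptin ($t_{1/2} \approx 12.4$ h), a once-daily interval $\tau = 24$ h leaves about $2^{-24/12.4} \approx 26\%$ of the peak concentration at trough, whereas for vildagliptin ($t_{1/2} \approx 2$ h) essentially nothing would remain after 24 h ($2^{-12} \approx 0.02\%$), consistent with its twice-daily regimen.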
Although it is eliminated by the biliary pathway, hepatic impairment causes no clinically significant change in drug exposure, and no dose adjustment is needed. In the latest randomized trial in patients with T2D and high cardiovascular (CV) risk, linagliptin showed non-inferiority compared with placebo in the risk of major CV events over a median follow-up of 2.2 years. Linagliptin is well tolerated in Asian T2DM patients, with a low risk of adverse events. Alogliptin. Like sitagliptin, alogliptin undergoes no significant metabolism and has a half-life of about 12.4-21.4 h. Because of this long half-life, alogliptin is generally prescribed once a day. Alogliptin is cleared primarily by the kidney through glomerular filtration and active secretion mechanisms; therefore, dose reduction is recommended in patients with reduced renal function. Alogliptin alone or in combination with metformin, pioglitazone, glyburide, or insulin significantly improved glycemic control in adult or elderly T2DM patients compared with placebo. Alogliptin will primarily be used to avoid hypoglycemic events in patients with congestive heart failure, kidney failure and liver disease, as well as in the elderly. Alogliptin protects against cyclophosphamide-induced lung toxicity by reducing oxidation, inflammation and fibrosis, making it a promising pharmacological treatment for reducing lung toxicity. Benefits of DPP4i As discussed above, the efficacy of DPP4i in inhibiting the catalytic activity of DPP4 is clearly related to their efficacy as antidiabetic agents. The DPP4i marketed in the United States have also been evaluated for cardiovascular safety in large cardiovascular outcome trials (CVOTs), and none of them increases the risk of major adverse cardiovascular events. DPP4i reduce long-term cardiovascular risk after percutaneous coronary intervention in patients with diabetes via the insulin-like growth factor-1 axis. However, the CVOTs showed no cardiovascular benefit, and saxagliptin increased heart failure hospitalization; vildagliptin is not marketed in the USA, so no CVOT data are available for it (Table 2). Previously discussed reports suggest that DPP4i may be more effective in Asian populations than in white populations. It has been suggested that this difference may be related to pathological differences in T2DM between the two groups (a lean phenotype with impaired beta cell function in Asian patients versus an obesity and insulin resistance phenotype in white patients). All DPP4i benefit from being highly orally available and well-tolerated anti-hyperglycemic medications. They are also easy to use, requiring no dose titration, and can be taken at any time of day without regard to mealtimes. Sitagliptin, alogliptin, and linagliptin interact noncovalently with residues in the catalytic pocket, then dissociate unchanged as the parent inhibitor molecule and can interact freely with the enzyme again; this, plus their inherently long half-lives, results in sustained DPP4 inhibition compatible with a once-daily dosing regimen. Saxagliptin, by contrast, binds covalently via its cyanopyrrolidine fragment, prolonging the inhibitor's interaction with the enzyme until hydrolysis releases the major metabolite 5-hydroxysaxagliptin. 
Accordingly, in short-term studies (1-9 days in duration), head-to-head comparisons in patients with T2DM have shown that, when used at their therapeutic doses, sitagliptin (100 mg once daily) and saxagliptin (5 mg once daily) achieve the same maximal and trough levels of DPP4 inhibition and are associated with similar enhancements of intact incretin hormone concentrations. It follows, therefore, that if the extent and duration of DPP4 inhibition are similar, the improvement in glycemic control should also be similar. Indeed, in several studies making direct comparisons between DPP4 inhibitors, glucose excursions and HbA1c levels were reduced to similar extents (Table 3). These inhibitors achieve similar degrees of DPP4 inhibition to the shorter-acting DPP4i already discussed and, in line with this finding, head-to-head comparisons have also shown them to be non-inferior with respect to HbA1c control. Table 3. The effect of DPP4 inhibitors on HbA1c. Their mechanism of action, involving both insulinotropic and glucagon-suppressing effects, means that they combine well with other anti-diabetic agents to give additional HbA1c-lowering efficacy. The potency of DPP4i in A1C reduction is moderate. In this regard, the individual DPP4i have a low propensity for drug-drug interactions, meaning that they can be used with other medications without the need for dose adjustment. Similarly, doses of other agents used together with DPP4i do not generally require adjustment; however, reduction of concomitant sulfonylurea or insulin doses is recommended to minimize the hypoglycemic risk associated with sulfonylureas and insulin. The discovery that metformin also stimulates GLP1 secretion further explains the particular efficacy of metformin in combination with DPP4i. In a retrospective study in Korea, metformin plus DPP4i was found to be effective in reducing HbA1c below 7%, with a low incidence of hypoglycemia. In a tacrolimus-induced SD rat model and nephrotoxicity test, DPP4i and sodium-glucose cotransporter 2 inhibitors (SGLT2i) reduced blood glucose and HbA1C levels, increased plasma insulin levels and islet size, improved renal function, and reduced interstitial fibrosis and pro-fibrotic cytokines, providing a theoretical basis for combining SGLT2i and DPP4i in the treatment of tacrolimus-induced DM and nephrotoxicity. The dual effect of DPP4i on α and β cells means that they also combine well with the islet-independent action of SGLT2i. In addition, when DPP4i are combined with an insulin secretagogue or insulin itself, DPP4i provide a complementary effect through their inhibition of glucagon secretion and reduction of hepatic glucose production, which also means that a beneficial effect on glucose control can be achieved even with reduced β-cell function. In Chinese trials in T2DM, the addition of linagliptin to insulin improved glycemic control and was well tolerated, with no increased risk of hypoglycemia or weight gain. As described earlier, the effect of DPP4i on glucose homeostasis is not direct but is mediated through the action of the substrates they protect, especially GLP1. Therefore, considering that the activity of DPP4 is already completely inhibited when a DPP4i is used at its therapeutic dose, any increase in exposure to the drug will not have a further hypoglycemic effect (since the enzyme cannot be inhibited by more than 100%). 
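The "cannot exceed 100%" point can be made concrete with a simple occupancy model (a sketch under standard competitive-inhibition assumptions, not a formula from the review):

\[ \text{fractional inhibition}(C) = \frac{C}{C + \mathrm{IC}_{50}}, \]

which saturates at 1 as the drug concentration $C$ grows: once $C \gg \mathrm{IC}_{50}$, further increases in exposure barely change enzyme inhibition, so they add no further glucose-lowering effect.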
Combined with the fact that the actions of GLP1 and GIP are themselves glucose dependent, the inherent risk of hypoglycemia with DPP4i treatment is particularly low. Therefore, DPP4i are particularly suitable for use in elderly, frail and/or vulnerable patients, whose long-term type 2 diabetes and its complications often result in multidrug therapy. In addition, patients with liver and kidney damage may have contraindications to other antidiabetic drugs. Studies have shown that DPP4i are safe and effective in elderly patients with T2DM (65 or 70 years and above), and CVOTs including elderly patients with comorbidities (≥75 years) indicate that DPP4i are safe and effective in this population, showing a glycemic effect similar to that in younger participants. In addition, DPP4i have been shown to be effective and well tolerated in patients with renal dysfunction (including end-stage renal disease patients on dialysis) and diabetic kidney disease (DKD). Since sitagliptin, alogliptin, and saxagliptin are cleared through the kidney, it is recommended that their doses be reduced in patients with reduced renal function, while linagliptin is cleared through the biliary tract and requires no dose adjustment. Dose adjustment for renally eliminated DPP4i is based on pharmacokinetics rather than safety concerns. Therefore, any increase in dose will not cause hypoglycemia or other mechanism-based adverse reactions, because the enzyme is already maximally inhibited, and compound-specific adverse reactions are unlikely; earlier dosing studies found no adverse events at 4-32 times the therapeutic dose. Anti-Inflammatory Effects of DPP4i DPP4 plays an important role in the maturation and activation of T cells and in immune responses, independently of its catalytic activity. DPP4, released from hepatocytes and adipose tissue or exogenously administered, promotes inflammatory responses in multiple tissues, often associated with the development of insulin resistance. In vivo, sitagliptin can protect renal artery endothelial function in spontaneously hypertensive rats via GLP-1 signaling. DPP4 inhibitors play a direct anti-atherosclerotic role by improving endothelial dysfunction, inhibiting inflammation and oxidative stress, and improving plaque stability. Our previous studies have demonstrated that sitagliptin may protect the diabetic fatty liver by reducing ROS production and NF-κB signaling pathway activation. Recent studies have found anti-inflammatory effects of DPP4 inhibitors in non-diabetic and diabetic models, which are summarized in Table 4. Although DPP4i have shown many anti-inflammatory effects in diabetes complications and other diseases, this effect has not been shown in clinical trials. Therefore, more human population studies are needed to verify the anti-inflammatory effects of DPP4i in liver, lung, heart, kidney, and nerve. Table 4. The anti-inflammatory effects of DPP4 inhibitors (experimental model: mechanism of the effect).
- Linagliptin, sepsis mouse: suppressed expression of IL-1β and intercellular adhesion molecule 1 via an NF-κB-dependent pathway
- Acetic acid-induced colitis rats: activated the AMPK-SIRT1-PGC-1α pathway and suppressed the JAK2/STAT3 signaling pathway
- LPS-induced U937 cells: inhibited inflammation around the TLR-4-mediated pathway. 
Acute kidney injury in rats Decreased inflammatory cytokines and ROS Early T2DM Not altered plasma nitrate levels Experimental autoimmune myocarditis mice Suppressed oxidative stress in EAM hearts Trinitrobenzene sulfonic acid-evoked colitis in rats Curbed inflammation through the suppression of colonic IL-6, TNF-, and upregulation of IL-10 Anti-glomerular basement membrane antibody induced in nephritis rats Improved resolution of glomerular injury and healing in non-diabetic renal disease OSI-906-induced hepatic steatosis Improved hepatic steatosis via an insulin-signaling-independent pathway Diabetic injured kidney Inhibited the CRP/CD32b/NF-kB-driven renal inflammation and fibrosis Oxidized LDL-induced THP-1 macrophage foam cell formation Decreased the expression of CD36 and LOX-1 and increased the expression of the cholesterol transporter ABCG1 HFD and streptozotocin (STZ) induced diabetic rats: liver fibrosis with T2DM Improved insulin sensitivity and lipid profile and reduced inflammatory mediators, and collagen depositions Atherosclerosis and T2D mice Improved glucose tolerance and reduced hepatic inflammation but had no effect on plaque burden or atherosclerotic inflammation Hyperglycemic mice with stroke Exerted a neuroprotective effect through activation of the Akt/mTOR pathway along with anti-apoptotic and anti-inflammatory mechanisms Mouse bone marrow macrophages Increased M2 macrophage polarization by inhibiting DPP-4 expression and activity 2.5. Adverse Effects FAERS data mining helped examine adverse events associated with DPP-4 inhibitors, all of which were disproportionately associated with four types of adverse events: "gastrointestinal nonspecific inflammation and dysfunction", "allergy", "severe skin adverse reactions", and "noninfectious diarrhea". As for the analysis of the level of preferred terms specified, DPP4i was associated with higher reports of gastrointestinal, pancreatic, malignancies, infections, musculoskeletal disorders, systemic diseases, allergies, and cutaneous adverse reactions. Conclusions Dipeptidyl peptidase 4 plays a key role in the regulation of glucose metabolism. The DPP4 inhibitors are effective and safe hypoglycemic therapy for type 2 diabetes that are effective orally and associated with a low risk of hypoglycemia, weight gain, or other adverse events on a solid basis. There is also a large body of clinical and experimental data suggesting that improved islet function is the key mechanism behind the inhibitory hypoglycemic effect of DPP4i, both of which are associated with increased insulin secretion and glucagon secretion inhibition. Additionally, DPP4i s ability to reduce the risk of hypoglycemia and to work in complementarity with other antidiabetic drugs makes them widely used second-line drugs and means they are particularly useful for other drugs or contraindications that may not be preferred. Co-action of DPP4i with other drugs such as metformin, SGLT2i, and pioglitazone can provide additional glycemic efficacy without increasing the burden of pills. However, unlike GLP1 receptor agonists and SGLT2i, DPP4i was insignificant in terms of cardiovascular benefit and reduced risk of major adverse cardiovascular events. In summary, DPP4i is safe and effective in the majority of T2DM patients, and we hope that DPP4i can help patients achieve glycemic goals in an overall favorable therapeutic setting. In addition, with the deepening of some DPP4i related research, more excellent efficacy may be developed. 
Conflicts of Interest: The authors declare no conflict of interest. |
Physical Properties of Collisionless Pitch Angle Scattering at X-Points and those Effects on Particle Confinement of Field-Reversed Configuration The single-particle dynamics in the peripheral regions of a Field-Reversed Configuration (FRC) is investigated numerically in order to find the physical and statistical properties of collisionless pitch angle scattering at the X-points. Ensemble averages of the positions of a large number of ions clarify that, owing to the statistical properties of the scattering, a density concentration of higher-energy ions forms near the X-points. An estimation of the electrostatic potential along a magnetic line of force is also carried out. Although the calculation of self-consistent electric fields remains to be done, it is found that a potential peak forms in the vicinity of the X-points.
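As background, the single-particle dynamics referred to above is governed by the Lorentz force law (a standard model statement, not a formula taken from this paper):
\[
m \frac{d\mathbf{v}}{dt} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right),
\qquad
\frac{d\mathbf{x}}{dt} = \mathbf{v},
\]
and collisionless pitch angle scattering arises where the magnetic field becomes weak near the X-points, so that the magnetic moment is no longer an adiabatic invariant.
|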
/*
* Datadog API for Go
*
* Please see the included LICENSE file for licensing information.
*
* Copyright 2016 by authors and contributors.
*/
package datadog
import (
"fmt"
"net/url"
"time"
)
func (client *Client) doSnapshotRequest(values url.Values) (string, error) {
out := struct {
SnapshotURL string `json:"snapshot_url,omitempty"`
}{}
if err := client.doJsonRequest("GET", "/v1/graph/snapshot?"+values.Encode(), nil, &out); err != nil {
return "", err
}
return out.SnapshotURL, nil
}
// Snapshot creates an image from a graph and returns the URL of the image.
func (client *Client) Snapshot(query string, start, end time.Time, eventQuery string) (string, error) {
options := map[string]string{"metric_query": query, "event_query": eventQuery}
return client.SnapshotGeneric(options, start, end)
}
// SnapshotGeneric creates an image from a graph, using a map[string]string of
// options to build the url.Values instead of pre-defined parameters.
func (client *Client) SnapshotGeneric(options map[string]string, start, end time.Time) (string, error) {
v := url.Values{}
v.Add("start", fmt.Sprintf("%d", start.Unix()))
v.Add("end", fmt.Sprintf("%d", end.Unix()))
for opt, val := range options {
v.Add(opt, val)
}
return client.doSnapshotRequest(v)
}
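A minimal usage sketch of this API (the import path follows the upstream zorkian/go-datadog-api layout that this fork mirrors; the keys and metric query are placeholders):
package main
import (
	"fmt"
	"time"
	datadog "github.com/zorkian/go-datadog-api"
)
func main() {
	// NewClient takes a Datadog API key and an application key.
	client := datadog.NewClient("YOUR_API_KEY", "YOUR_APP_KEY")
	end := time.Now()
	start := end.Add(-1 * time.Hour)
	// Render the last hour of a metric query to an image and print its URL.
	url, err := client.Snapshot("system.cpu.idle{*}", start, end, "")
	if err != nil {
		fmt.Println("snapshot failed:", err)
		return
	}
	fmt.Println("snapshot image:", url)
}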
|
package jetbrains.mps.samples.SwingBuilder.constraints;
/*Generated by MPS */
import jetbrains.mps.smodel.runtime.base.BaseConstraintsDescriptor;
import java.util.Map;
import org.jetbrains.mps.openapi.language.SReferenceLink;
import jetbrains.mps.smodel.runtime.ReferenceConstraintsDescriptor;
import jetbrains.mps.smodel.runtime.base.BaseReferenceConstraintsDescriptor;
import org.jetbrains.annotations.Nullable;
import jetbrains.mps.smodel.runtime.ReferenceScopeProvider;
import jetbrains.mps.smodel.runtime.base.BaseScopeProvider;
import org.jetbrains.mps.openapi.model.SNodeReference;
import jetbrains.mps.smodel.SNodePointer;
import jetbrains.mps.scope.Scope;
import jetbrains.mps.smodel.runtime.ReferenceConstraintsContext;
import jetbrains.mps.scope.ListScope;
import jetbrains.mps.internal.collections.runtime.ListSequence;
import jetbrains.mps.lang.smodel.generator.smodelAdapter.SModelOperations;
import jetbrains.mps.lang.smodel.generator.smodelAdapter.SNodeOperations;
import java.util.HashMap;
import org.jetbrains.mps.openapi.language.SConcept;
import jetbrains.mps.smodel.adapter.structure.MetaAdapterFactory;
public class ElementReference_Constraints extends BaseConstraintsDescriptor {
public ElementReference_Constraints() {
super(CONCEPTS.ElementReference$XB);
}
@Override
protected Map<SReferenceLink, ReferenceConstraintsDescriptor> getSpecifiedReferences() {
BaseReferenceConstraintsDescriptor d0 = new BaseReferenceConstraintsDescriptor(LINKS.element$rQ9F, this, true, false) {
@Nullable
@Override
public ReferenceScopeProvider getScopeProvider() {
return new BaseScopeProvider() {
@Override
public SNodeReference getSearchScopeValidatorNode() {
return new SNodePointer("r:7a1c88cb-66d9-4726-9b4a-d5dc6c544de7(jetbrains.mps.samples.SwingBuilder.constraints)", "6836281137582847989");
}
@Override
public Scope createScope(final ReferenceConstraintsContext _context) {
return ListScope.forNamedElements(ListSequence.fromList(SModelOperations.roots(SNodeOperations.getModel(_context.getContextNode()), CONCEPTS.Filter$pE)).union(ListSequence.fromList(SModelOperations.roots(SNodeOperations.getModel(_context.getContextNode()), CONCEPTS.Map$hZ))));
}
};
}
};
Map<SReferenceLink, ReferenceConstraintsDescriptor> references = new HashMap<SReferenceLink, ReferenceConstraintsDescriptor>();
references.put(d0.getReference(), d0);
return references;
}
private static final class CONCEPTS {
/*package*/ static final SConcept ElementReference$XB = MetaAdapterFactory.getConcept(0xb4dbff0c8c314a79L, 0xa45a98e5fd0530e7L, 0xd0f6999e83a1e8aL, "jetbrains.mps.samples.SwingBuilder.structure.ElementReference");
/*package*/ static final SConcept Filter$pE = MetaAdapterFactory.getConcept(0xb4dbff0c8c314a79L, 0xa45a98e5fd0530e7L, 0xd0f6999e83a1c61L, "jetbrains.mps.samples.SwingBuilder.structure.Filter");
/*package*/ static final SConcept Map$hZ = MetaAdapterFactory.getConcept(0xb4dbff0c8c314a79L, 0xa45a98e5fd0530e7L, 0xd0f6999e83a1d95L, "jetbrains.mps.samples.SwingBuilder.structure.Map");
}
private static final class LINKS {
/*package*/ static final SReferenceLink element$rQ9F = MetaAdapterFactory.getReferenceLink(0xb4dbff0c8c314a79L, 0xa45a98e5fd0530e7L, 0xd0f6999e83a1e8aL, 0xd0f6999e83a1e8bL, "element");
}
}
|
SCARLETT Johansson isn’t pretending to know any more about parenting than anyone else. But there is one thing she is sure of.
WHILE many celebrities new to motherhood make it sound effortless, as though they slipped into a vastly altered lifestyle with barely a glitch, Scarlett Johansson, who gave birth to Rose Dorothy last September, is a little more grounded.
“I’m such a newbie at this,” she admitted to News.com.au.
“I always really hate it when actors or people in the spotlight make giant grandiose statements about parenthood because it’s so, so personal,” she smiles.
Johansson, 30, is raising her daughter with second hubby, French advertising executive, Romain Dauriac, between their homes in Paris and New York City.
Although she doesn’t wax lyrical about the joys or challenges of motherhood, she does acknowledge that her career and most certainly her life has forever changed.
“I have a greater responsibility now so that will definitely affect my schedule.
“Before (Rose), I just made the choices that were affecting me but now I’m responsible for somebody else. Eventually I will have to work less and it might even make me more discerning,” she says.
With Mother’s Day approaching, what life lessons does she hope to pass on to baby Rose from her own mother?
This afternoon at the Walt Disney studios in Burbank, Johansson is doing promotional duty conducting interviews for the eagerly awaited sequel Avengers: Age of Ultron, in which she reprises her role as Natasha Romanoff/Black Widow.
She looks remarkably trim in skinny black jeans, a snug-fitting patterned blue sweater and red heels. Her hair is cropped and fashionably swept to one side.
“People loved the first Avengers. It’s fun to watch; it’s a good movie,” she says.
As a new wife and mother, this is a busy time in Johansson’s life and she appears relaxed and happy. She married Dauriac on October 1, 2014, in Montana.
The couple met in 2012 and he popped the question the following year. She was formerly married to Ryan Reynolds from 2008 to 2010 (he is currently married to Blake Lively). In between marriages, Johansson enjoyed a brief fling with Sean Penn, as well a relationship with an advertising executive, Nate Naylor. |
package com.miguelsanchez.components;
/**
 * Simple data holder describing a single rotor: its name, its wiring, and
 * the flags that control how it is displayed and configured.
 */
public class Rotor {
private String name;
private String wiring;
private boolean isSpecialName;
private boolean isNumbers;
private boolean isActiveRotor;
private boolean isConfigureLater;
private boolean isConfigurationAsLetters;
public Rotor (String name, String wiring, boolean isSpecialName, boolean isNumbers, boolean isActiveRotor, boolean isConfigureLater, boolean isConfigurationAsLetters) {
this.name = name;
this.wiring = wiring;
this.isSpecialName = isSpecialName;
this.isNumbers = isNumbers;
this.isActiveRotor = isActiveRotor;
this.isConfigureLater = isConfigureLater;
this.isConfigurationAsLetters = isConfigurationAsLetters;
}
public Rotor () {
this.name = "";
this.wiring = "";
this.isSpecialName = false;
this.isNumbers = false;
this.isActiveRotor = false;
this.isConfigureLater = false;
this.isConfigurationAsLetters = false;
}
public String getName () {
return name;
}
public String getWiring () {
return wiring;
}
public boolean isSpecialName () {
return isSpecialName;
}
public boolean isNumbers () {
return isNumbers;
}
public boolean isActiveRotor () {
return isActiveRotor;
}
public boolean isConfigureLater () {
return isConfigureLater;
}
public boolean isConfigurationAsLetters () {
return isConfigurationAsLetters;
}
public void setConfigureLater (boolean isConfigureLater) {
this.isConfigureLater = isConfigureLater;
}
}
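A minimal usage sketch (the demo class name is hypothetical and Rotor is assumed to be on the classpath; the wiring string is the historical Enigma rotor I wiring, used here only as sample data):
public class RotorDemo {
    public static void main(String[] args) {
        // Fully specified rotor: name, wiring, then the five flags.
        Rotor rotorI = new Rotor("I", "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
                false, false, true, false, true);
        System.out.println(rotorI.getName() + " -> " + rotorI.getWiring());
        // Blank rotor whose configuration is deferred.
        Rotor placeholder = new Rotor();
        placeholder.setConfigureLater(true);
        System.out.println("configure later: " + placeholder.isConfigureLater());
    }
}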
|
UFC light heavyweight champion Jon Jones could have kept us waiting. He didn't, but he could have.
Jones was recently in Baltimore making the media rounds to promote his upcoming April 26 fight against Glover Teixeira, and he had had a long morning. Jones, his manager and members of the UFC's PR team had travelled to multiple radio and television stations before arriving at a downtown hotel to hold court with print and Internet media. In order for Jones to hit the morning news shows he had been on the run for several hours. An ice storm had extended his day, leaving Jones no time for lunch.
When Jones arrived at the table of media members, his UFC light heavyweight title belt serving as the centerpiece, his lunch was waiting. Jones took a few bites of his sandwich and opened the get-together for questions.
One member of the group, who had covered boxing in the past, but never mixed martial arts, seemed somewhat stunned by the lack of chest pounding and finger-pointing from Jones and Teixeira. It seemed as if he was expecting some type of over the top bombast from the two fighters, but what he got were two fighters that were nothing but respectful of each other. Jones explained why that was, "I'm not here to be somebody I'm not. This is our job, and I'm here to do the art the way that I do it, and I think there's a lot of traditional martial artists out here that respect that we're not being idiots and embarrassing martial arts. We're carrying ourselves the way that martial artists carry themselves. Something about being a martial artist, in my mind, it's respect."
Another reason for the absence of any over the top antics, is that Jones and Teixeira are really good at what they do, and they are both aware of that fact, as Jones said, "I respect what he's done to people, and I know he respects what I've done to people."
Inevitably the conversation eventually turned to Jones' last fight, a five round battle with Alexander Gustafsson. That fight ended with Jones' hand being raised in victory, but before that happened, Jones took a beating at the hands of his Swedish opponent. The damage that he absorbed sent Jones to the hospital causing him to miss the post-fight press conference. He was not alone at the hospital, Gustafsson made the trip as well.
Jones spoke of that fight, his record setting sixth UFC light heavyweight title defense saying, "I honestly believe, as an artist, as a creative artist, that it's not always in our control the way our performances are. I believe there's an energy that comes over an artist that watches over them. Like being in the zone, you can't always control being in the zone or even getting into the zone. You can be a person that trains every day, eats healthy, (be) so dedicated and just not get in the zone the right way, and that zone is something that's kind of out of my control."
That zone eluded Jones on the night he stepped into the Octagon against Gustafsson in Toronto, something his family said they noticed as he made his way from the Air Canada Centre dressing room and walked toward the cage, "My family said, 'Jon, when you walk out the Octagon we get this feeling that something cool is about to happen. We feel like a power', and they were saying 'we just didn't feel it from you. You just came out bland, flat, dry, you had nothing.' I felt like I had nothing. I went out there and I performed very dry, very going through the motions, and that's how I felt the entire time."
Despite the fact that he came out flat, despite the fact that he felt he was going through the motions, Jones still came out of the fight with a win, and along with that win, a lesson. "I had an opponent that was out there, young, same age as me, same height as me, extremely ambitious, trying to win the fight, training his hardest, and he still came up short," Jones said. "What that lets me know is that even when I'm not at my best I still have the heart to find a way. I kicked it up into that championship gear that all champions do have and found a way."
Jones does not expect that same fate to befall him when he fights at UFC 172 in Baltimore, a city he referred to as his adopted hometown, "What I'm hoping is that with this Glover fight I'll be back to my usual self, totally in the zone and have a dominating victory like I used to."
Jones will carry a record of 19-1 into his fight with Teixeira. Of those fights, seven have been for a UFC title, and only two of those have gone the distance. The only two other fights that he heard the final horn in were his first two fights in the UFC. All told Jones has finished 15 fights, nine by knockout and six by submission.
He may have one loss on his record, but Jones said he still considers himself undefeated since the loss came due to a disqualification for throwing illegal 12-to-6 elbows. In discussing that loss to Matt Hamill, UFC president Dana White once said it was due to "a moronic referee who had no idea what he was doing."
One of the more interesting things said by Jones were his comments on being a creative artist. That's not something we often consider when it comes to mixed martial arts, but in Jones' mind, we should, "There's standard moves, and then there's just being creative, creating a move that's never been seen, doing funky things that people just don't do, and that's where you get the art. Seeing what your opponent is going to do, just being able to see things before they happen, that's the art."
As to how you get there, Jones said that's a combination of two things, "The creative side, and the work ethic side, they match, and that's where masterpieces are created. That's when those fights are created where you didn't even get hit, and it seemed like those things came easy to you, and I felt like I had that my whole career except for that last fight. So, what I'm excited to see is my zone coming back."
Oh, and that sandwich, an hour after the conversation had begun, it sat on the table in front of Jones - forgotten. |
So far, the problems of the generation and evaluation of etiologic hypotheses have been of too little concern to epidemiologists. Epidemiologic research usually deals with two fundamental etiologic questions: the first is 'why' an epidemiological phenomenon occurs; the second is 'how', and this question relates to the mediating mechanism. After having defined the nature of a valuable working hypothesis, we identify several ways by which hypotheses might emerge, and discuss for each of them the corresponding problems of evaluation. It is advocated that 'biological' (clinical) induction is a promising way of gaining insight into the etiology of disease. 'Statistical' induction, on the other hand, may be a useful though precarious way of generating hypotheses, mainly because of evaluation problems. As to deductive thinking, sometimes based on the process of analogy, its use is confined to rather well-developed fields of knowledge. Furthermore, deductive thinking may be plagued by logical errors if the biological model of the studied disease is inadequate. Finally, Popperian deduction is also shown to be subject to logical flaws if the causal model of relationships is poorly grounded in the biology of disease; specifically, the point is made that the refutation or confirmation of hypotheses is not straightforward and may be prevented in complex biological situations. We conclude that epidemiologic analysis may be furthered by contacts with clinicians, who would be of invaluable help in the formulation and testing of hypotheses, by study designs devised to reject specific models of disease, and by rooting putative causes of disease beyond simple risk markers. |
Jack Stobbs
Club career
Stobbs joined the Academy at Sheffield Wednesday at the age of eight, and signed his first professional contract with the club in March 2014. He made his senior team debut on 26 April, coming on for Joe Mattock 60 minutes into a 3–1 defeat to Bolton Wanderers at Hillsborough, in what was the final home game of the 2013–14 season. However he suffered ankle ligament damage in a friendly at Matlock Town in the 2014–15 pre-season, which left him having to regain his fitness in order to try and force his way into manager Stuart Gray's first team plans. He had to wait until the last day of the 2015–16 season to make his second appearance for the "Owls", when he came on as a 79th-minute substitute for Atdhe Nuhiu in a 2–1 loss at Wolverhampton Wanderers on 7 May 2016. He signed a new one-year contract in June 2017 after captaining the U23 team to the Professional Development League 2 North title and National Championship in 2016–17.
On 17 August 2017, Stobbs joined newly relegated EFL League Two club Port Vale on loan for the 2017–18 season; manager Michael Brown said that "[chief scout] Darren Wrack has worked very hard on it and he is a good, exciting, young player". However, he struggled to even appear on the first-team bench, and speaking in October, new manager Neil Aspin blamed league rules that prevented him from naming more than five loanees in a matchday squad. He was recalled by Wednesday on 2 January 2018. New Wednesday manager Jos Luhukay put him into the first team and gave him a new two-and-a-half year contract. He featured in one EFL Cup game in the 2018–19 season.
On 20 August 2019, he joined Scottish Premiership side Livingston on loan until 1 January.
Style of play
Speaking in August 2014, Sheffield Wednesday manager Stuart Gray said that Stobbs is "one of those who runs at defenders in one-versus-one situations and he's got a great habit of putting the ball between the posts for someone to score". Stobbs himself stated that "I've got a bit of pace and I like to take people on, I like to get to the byline, get crosses into the box and, where I can, I try to get a few goals". |
#include <cstring>
#include <iostream>
#include "Event.hpp"
Event::Event(std::ifstream& in)
{
size_t len;
in.read((char*) &len, sizeof(len));
m_name = new char[len];
in.read(m_name, len);
in.read((char*) &m_date, sizeof(m_date));
in.read((char*) &m_totalSeats, sizeof(m_totalSeats));
in.read((char*) &m_takenSeats, sizeof(m_takenSeats));
in.read((char*) &m_price, sizeof(m_price));
}
Event::Event(const char* name, time_t date, size_t seats, double price)
: m_name(new char[strlen(name) + 1])
, m_date(date)
, m_totalSeats(seats)
, m_takenSeats(0)
, m_price(price)
{
strcpy(m_name, name);
}
Event::Event(const Event& other)
{
copy(other);
}
Event& Event::operator=(const Event& other)
{
if (this != &other) {
clear();
copy(other);
}
return *this;
}
Event::~Event()
{
clear();
}
void Event::copy(const Event& other)
{
m_name = new char[strlen(other.m_name) + 1];
strcpy(m_name, other.m_name);
m_date = other.m_date;
m_totalSeats = other.m_totalSeats;
m_takenSeats = other.m_takenSeats;
m_price = other.m_price;
}
void Event::clear()
{
delete[] m_name;
m_name = nullptr;
}
bool Event::setName(const char* newName)
{
if (newName) {
char* newEvName = new (std::nothrow) char[strlen(newName) + 1];
if (!newEvName)
return false;
// Copy the new name before swapping buffers (missing in the original).
strcpy(newEvName, newName);
delete[] m_name;
m_name = newEvName;
return true;
}
return false;
}
void Event::setDate(time_t newDate)
{
if (newDate > time(nullptr)) {
m_date = newDate;
}
}
void Event::setTotalSeats(size_t newTotalSeats)
{
if (newTotalSeats >= m_takenSeats) {
m_totalSeats = newTotalSeats;
}
}
void Event::setPrice(double newPrice)
{
if (newPrice > 0) {
m_price = newPrice;
}
}
void Event::serialize(std::ofstream& out) const
{
size_t len = strlen(m_name) + 1;
out.write((const char*) &len, sizeof(len));
out.write((const char*) m_name, len);
out.write((const char*) &m_date, sizeof(m_date));
out.write((const char*) &m_totalSeats, sizeof(m_totalSeats));
out.write((const char*) &m_takenSeats, sizeof(m_takenSeats));
out.write((const char*) &m_price, sizeof(m_price));
}
std::ostream& operator<<(std::ostream& out, const Event& obj)
{
out << obj.getName() << " for $" << obj.getPrice()
<< ". There are " << obj.getFreeSeats() << " seats left. "
<< "The event will be held on " << obj.getDateString();
return out;
}
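A minimal round-trip sketch (assumptions: Event.hpp declares the accessors used by operator<<, such as getName, getPrice, getFreeSeats and getDateString; the file name is arbitrary):
#include <ctime>
#include <fstream>
#include <iostream>
#include "Event.hpp"
int main()
{
    // An event one day from now with 200 seats at $25.
    Event concert("Spring Concert", std::time(nullptr) + 24 * 60 * 60, 200, 25.0);
    {
        std::ofstream out("event.bin", std::ios::binary);
        concert.serialize(out);
    } // close the stream before reading the file back
    std::ifstream in("event.bin", std::ios::binary);
    Event restored(in);
    std::cout << restored << std::endl;
    return 0;
}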
|
package com.lovecws.mumu.mmsns.controller.admin.photo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import javax.servlet.http.HttpServletRequest;
/**
* @author babymm
* @version 1.0-SNAPSHOT
* @Description: user settings / photo gallery management
* @date 2017-12-15 14:00:
*/
@Controller
@RequestMapping("/admin")
public class MMSnsAdminPhotoController {
@Autowired
private HttpServletRequest request;
@RequestMapping(value = {"/{individuation}/photo/comment"}, method = RequestMethod.GET)
public String photoComment(@PathVariable String individuation) {
request.setAttribute("adminModular", "photoComment");
return "/admin/photo/comments";
}
@RequestMapping(value = {"/{individuation}/photo/album"}, method = RequestMethod.GET)
public String photoAlbum(@PathVariable String individuation) {
request.setAttribute("adminModular", "photoAlbum");
return "/admin/photo/album";
}
@RequestMapping(value = {"/{individuation}/photo/album/{operation}"}, method = RequestMethod.GET)
public String photoAlbumCreate(@PathVariable String individuation, @PathVariable String operation) {
String photoAlbumTitle = null;
if ("view".equals(operation)) {
photoAlbumTitle = "相册详情";
} else if ("edit".equals(operation)) {
photoAlbumTitle = "编辑相册";
} else if ("create".equals(operation)) {
photoAlbumTitle = "创建相册";
}
request.setAttribute("adminModular", "photoAlbum");
request.setAttribute("photoAlbumOperation", operation);
request.setAttribute("photoAlbumTitle", photoAlbumTitle);
return "/admin/photo/album_create";
}
}
|
Jonah Hill may have appeared on Tuesday night’s episode of The Tonight Show Starring Jimmy Fallon to promote his upcoming film War Dogs, but his true shining moment came when he started talking about losing weight after the movie had wrapped — a story that ended with him accidentally e-mailing his food log to Drake.
Hill told the host that — thanks to some advice from Channing Tatum — he had hired a nutritionist who asked the actor to keep track of what he was eating every day and send him a list of it. However, when Hill didn’t receive a response one night, he realized the recipient of the email had not been who he intended. |
import React from "react";
import { colors, fonts} from "src/ts/config";
import { getSVG } from "src/assets/svg";
type UserLinkProps = {
/**
* Method to be called upon closing the lightbox
*/
onClick(): void;
/**
* The text to display as the link
*/
text: string;
}
/**
* Component responsible for rendering a link to a user description
*/
export class UserLink extends React.PureComponent<UserLinkProps> {
/**
* Main render method
*/
public render(): React.ReactNode {
return(
<button
className="profile-link"
onClick={this.props.onClick}
>
<i className="user-icon">{getSVG("user2")}</i>
{this.props.text}
<style jsx>{`
/** Button to producers profile */
.profile-link {
/** Positioning the icon and button text horizontally */
display: flex;
flex-direction: row;
/** Colors and fonts */
background-color: transparent;
font-weight: bold;
font-family: ${ fonts.text };
/** Size and border */
border: none;
border-radius: 5px;
padding: 10px;
/** Setup effects when hover */
transition: background-color 0.1s linear;
cursor: pointer;
}
.profile-link:hover {
background-color: rgba(219,208,239,0.5);
}
/** User icon placed in button */
.profile-link i {
height: 17px;
width: 17px;
color: ${ colors.primary };
/** Some space between icon and button text */
margin-right: 5px;
}
`}</style>
</button>
);
}
}
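A minimal usage sketch from a hypothetical parent component (the import path mirrors this file's location in the repository; the click handler is a stand-in):
import React from "react";
import { UserLink } from "src/ts/components/elements/UserLink/UserLink";
export const ProducerTeaser: React.FC = () => (
    <UserLink
        text="View producer profile"
        onClick={() => { /* open the profile lightbox here */ }}
    />
);
|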
Paratrooper
A paratrooper is a military parachutist—someone trained to parachute into an operation, and usually functioning as part of an airborne force. Military parachutists (troops) and parachutes were first used on a large scale during World War II for troop distribution and transportation. Paratroopers are often used in surprise attacks, to seize strategic objectives such as airfields or bridges.
Overview
Paratroopers jump out of airplanes and use parachutes to land safely on the ground. This is one of the three types of "forced entry" strategic techniques for entering a theater of war; the other two being by land and by water. Their tactical advantage of entering the battlefield from the air is that they can attack areas not directly accessible by other transport. The ability of air assault to enter the battlefield from any location allows paratroopers to evade emplaced fortifications that guard from attack from a specific direction. The possible use of paratroopers also forces defenders to spread out to protect other areas which would otherwise be safe. Another common use for paratroopers is to establish an airhead for landing other units, as at the Battle of Crete.
This doctrine was first practically applied to warfare by the Italians and the Soviets. The first operational military parachute jump was logged in the night of August 9/10 1918 by Italian assault troops, when Lt. Alessandro Tandura dropped behind Austro-Hungarian lines near Vittorio Veneto on a reconnaissance and sabotage mission, followed on later nights by Lts. Ferruccio Nicoloso and Pier Arrigo Barnaba.
The first extensive use of paratroopers (Fallschirmjäger) was by the Germans during World War II. Later in the conflict paratroopers were used extensively by the Allied Forces. Cargo aircraft of the period (for example the German Ju 52 and the American C-47) being small, they rarely, if ever, jumped in groups much larger than 20 from one aircraft. In English, this load of paratroopers is called a "stick", while any load of soldiers gathered for air movement is known as a "chalk". The terms come from the common use of white chalk on the sides of aircraft and vehicles to mark and update numbers of personnel and equipment being emplaned.
In World War II, paratroopers most often used parachutes of a circular design. These parachutes could be steered to a small degree by pulling on the risers (four straps connecting the paratrooper's harness to the connectors) and suspension lines which attach to the parachute canopy itself. German paratroopers, whose harnesses had only a single riser attached at the back, could not manipulate their parachutes in such a manner. Today, paratroopers still use round parachutes, or round parachutes modified so as to be more fully controlled with toggles. The parachutes are usually deployed by a static line. Mobility of the parachutes is often deliberately limited to prevent scattering of the troops when a large number parachute together.
Some military exhibition units and special forces units use "ram-air" parachutes, which offer a high degree of maneuverability and are deployed manually (without a static line) from the desired altitude. Some use High-altitude military parachuting, also deploying manually.
Paratrooper forces around the world
Many countries have one or several paratrooper units, usually associated to the national Army or Air Force, but in some cases to the Navy.
Australia
Airborne forces raised by Australia have included a small number of conventional and special forces units. During the Second World War the Australian Army formed the 1st Parachute Battalion; however, it did not see action. In the post-war period Australia's parachute capability was primarily maintained by special forces units. In the 1970s and 1980s a parachute infantry capability was revived, while a Parachute Battalion Group based on the 3rd Battalion, Royal Australian Regiment (3 RAR) was established in 1983. However, following a reorganisation 3 RAR relinquished the parachute role in 2011, and this capability is now maintained by units of Special Operations Command.
France
Constant "Marin" Duclos was the first French soldier to execute a parachute jump on November 17, 1915. He performed 23 test and exhibition parachute drops without problems to publicise the system and overcome the prejudice aviators had for such life-saving equipment.
In 1935, Captain Geille of the French Air Force created the Avignon-Pujaut Paratroopers Schools after he trained in Moscow at the Soviet Airborne Academy. From this, the French military created two combat units called Groupes d’Infanterie de l’Air.
Following the Battle of France, General Charles de Gaulle formed the 1re Compagnie d’Infanterie de l’Air in September 1940 from members of the Free French forces who had escaped to Britain. It was transformed into the Compagnie de Chasseurs Parachutistes in October 1941. By June 1942, these units were fighting in Crete and Cyrenaica alongside the British 1st SAS Regiment. As part of the SAS Brigade, two independent French SAS units were also created in addition to the other French Airborne units. They operated until 1945.
In May 1943, the 1er Régiment de Chasseurs Parachutistes was created from the 601e Groupe d'Infanterie de l'Air in Morocco and the 3e and 4e Bataillons d'Infanterie de l'Air (BIA) in England in the Special Air Service. The 2e and 3e Régiments de Chasseurs Parachutistes followed in July 1944.
During the Invasion of Normandy, French airborne forces fought in Brittany (Operation Dingson, Operation Samwest). The first Allied soldier to land in France was Free French SAS Captain Pierre Marienne, who jumped into Brittany (Plumelec, Morbihan) on June 5 with 17 Free French paratroopers. The first Allied soldier killed in the liberation of France was Free French SAS Corporal Emile Bouétard of the 4e Bataillon d'Infanterie de l'Air, also at Plumelec in Brittany, at 0:40 on June 6. Captain Pierre Marienne was killed on July 12 in Plumelec. French SAS paratroopers also fought in the Loire Valley in September 1944, in Belgium in January 1945, and in the Netherlands in April 1945. The 1er Régiment Parachutiste de Choc carried out operations in Provence.
After World War II, the post-war French military of the Fourth Republic created several new airborne units. Among them were the Bataillon de Parachutistes Coloniaux (BPC) based in Vannes-Meucon, the Metropolitan Paratroopers, and the Colonial Paratroopers and Bataillons Étrangers de Parachutistes (French Foreign Legion), which coexisted until 1954. During the First Indochina War, a Bataillon Parachutiste Viet Nam was created (BPVN) in southeast Asia. In total, 150 different airborne operations took place in Indochina between 1945 and 1954. These included five major combat missions against the Viet Minh strongholds and areas of concentration.
When the French left Vietnam in 1954, all airborne battalions were upgraded to regiments over the next two years. Only the French Air Force's Commandos de l'Air (Air Force) were excluded. In 1956, the 2e Régiment de Parachutiste Coloniaux took part in the Suez Crisis.
Next, the French Army regrouped all its Army Airborne regiments into two parachute divisions in 1956. The 10th parachute division (10e Division Parachutiste, 10e DP) came under the command of General Jacques Massu and General Henri Sauvagnac took over the 25th Parachute Division (25e Division Parachutiste, 25e DP). Again the Commandos de l'Air were kept under command of the Air Force.
By the late 1950s, in Algeria, the FLN had launched its War of Independence. French paratroopers were used as counter-insurgency units by the French Army. This was the first time in airborne operations that troops used helicopters for air assault and fire support.
But in the aftermath of the Algiers putsch, the 10e and 25e Parachute divisions were disbanded and their regiments merged into the Light Intervention Division (Division Légère d'Intervention). This division became the 11th Parachute Division (11e Division Parachutiste, 11e DP) in 1971.
In the aftermath of the Cold War, the French Army reorganised and the 11e DP become the 11th Parachute Brigade in 1999.
Nazi Germany (1935–45)
Nazi Germany's Luftwaffe Fallschirmjäger units made the first airborne invasion when invading Denmark on April 9, 1940 as part of Operation Weserübung. In the early morning hours they attacked and took control of the Masnedø fort and Aalborg Airport. The Masnedø fort was positioned such that it guarded the Storstrøm Bridge between the islands of Falster and Masnedø, on the main road from the south to Copenhagen. Aalborg Airport played a key role as a refuelling station for the Luftwaffe in the further invasion of Norway. In the same assault the bridges around Aalborg were taken. Fallschirmjäger were also used in the Low Countries against the Netherlands, although their use against The Hague was unsuccessful. Their most famous drop was the 1941 Battle of Crete, though they suffered heavy casualties.
Hence later in the war, the 7th Air Division's Fallschirmjäger assets were re-organised and used as the core of a new series of elite Luftwaffe Infantry divisions, numbered in a series beginning with the 1st Fallschirmjäger Division. These formations were organised and equipped as motorised infantry divisions, and often played a "fire brigade" role on the western front. Their constituents were often encountered on the battlefield as ad hoc battle groups (Kampfgruppen) detached from a division or organised from miscellaneous available assets. In accord with standard German practice, these were called by their commander's name, such as Group Erdmann in France and the Ramcke Parachute Brigade in North Africa.
After mid-1944, Fallschirmjäger were no longer trained as paratroops owing to the realities of the strategic situation, but retained the Fallschirmjäger honorific. Near the end of the war, the series of new Fallschirmjäger divisions extended to over a dozen, with a concomitant reduction in quality in the higher-numbered units of the series. Among these divisions was the 9th Fallschirmjäger Division, which was the final parachute division to be raised by Germany during World War II. The Russian army destroyed the division during the Battle of Berlin in April 1945. The Fallschirmjäger were issued specialist weapons such as the FG 42 and specially designed helmets.
Federal Republic of Germany
In the modern German Bundeswehr, the Fallschirmjägertruppe continue to form the core of special operations units. The division has two brigade equivalents and several independent companies and battalions. All told, about 10,000 troops served in that division in 2010, most of them support or logistics personnel. The Fallschirmjägertruppe currently uses the Wiesel Armoured Weapons Carrier (AWC), a light air-transportable armoured fighting vehicle, more specifically a lightly armoured weapons carrier. It is quite similar to historical scouting tankettes in size, form and function, and is the only true modern tankette in use in Western Europe.
Operations
In 1982 the Italian Brigade Folgore landed in Beirut with the Multinational Force in Lebanon.
In 1991, a parachutist tactical group was deployed to Kurdistan. Its mission was to provide humanitarian aid. From July 1992, the Brigade supplied personnel to Operation Vespri Siciliani. The Folgore participated in Operation Restore Hope in Somalia from 3 December 1992 to September 1993. Parts of the Brigade have been employed many times in the Balkans (IFOR/SFOR in Bosnia and KFOR in Kosovo), with the MNF in Albania and INTERFET in East Timor. The Folgore participated in Operation Babylon in Iraq from August 2005 to September 2005, and in Afghanistan until December 2014.
In August 2007, the Folgore took part in United Nations Interim Force in Lebanon, under aegis of the United Nations (Resolution 1701), as a result of the war between Israel and Hezbollah of summer 2006.
Peru
During the Ecuadorian–Peruvian War, the Peruvian army had also established its own paratrooper unit and used it to great effect by seizing the Ecuadorian port city of Puerto Bolívar, on July 27, 1941, marking the first time in the Americas that airborne troops were used in combat.
Poland
The 1st (Polish) Independent Parachute Brigade was a parachute brigade under the command of Major General Stanisław Sosabowski, created in Scotland in September 1941 during the Second World War, with the exclusive mission of dropping into occupied Poland in order to help liberate the country. The British government, however, pressured the Poles into allowing the unit to be used in the Western theatre of war. Operation Market Garden eventually saw the unit sent into action in support of the British 1st Airborne Division at the Battle of Arnhem in 1944. The Poles were initially landed by glider from 18 September, whilst, due to bad weather over England, the parachute section of the Brigade was held up, and jumped on 21 September at Driel on the south bank of the Rhine. The Poles suffered significant casualties during the next few days of fighting, but still were able, by their presence, to cause around 2,500 German troops to be diverted to deal with them for fear of them supporting the remnants of 1st Airborne trapped over the lower Rhine in Oosterbeek.
The Brigade was originally trained close to RAF Ringway and later in Upper Largo in Scotland. It was finally based in Lincolnshire, close to RAF Spitalgate (Grantham) where it continued training until its eventual departure for Europe after D-Day.
The Brigade was formed by the Polish High Command in exile with the aim of it being used to support the Polish resistance during the nationwide uprising, a plan that encountered opposition from the British, who argued they would not be able to support it properly. The pressure of the British government eventually caused the Poles to give in and agree to let the Brigade be used on the Western Front. On 6 June 1944 the unit, originally the only Polish unit directly subordinate to the Polish government in exile and thus independent of the British command, was transferred into the same command structure as all other Polish Forces in the West. It was slotted to take part in several operations after the invasion of Normandy, but all of them were cancelled. On 27 July, aware of the imminent Warsaw Uprising, the Polish government in exile asked the British government for air support, including dropping the Brigade in the vicinity of Warsaw. This request was refused on the grounds of "operational considerations" and the "difficulties" in coordinating with the Soviet forces. Eventually, the Brigade entered combat when it was dropped during Operation Market Garden in September 1944.
During the operation, the Brigade's anti-tank battery went into Arnhem on the third day of the battle (19 September), supporting the British paratroopers at Oosterbeek. This left Sosabowski without any anti-tank capability. The light artillery battery was left behind in England due to a shortage of gliders. Owing to bad weather and a shortage of transport planes, the drop into Driel was delayed by two days, to 21 September. The British units which were supposed to cover the landing zone were in a bad situation and out of radio contact with the main Allied forces. Finally, the 2nd Battalion, and elements of the 3rd Battalion, with support troops from the Brigade's Medical Company, Engineer Company and HQ Company, were dropped under German fire east of Driel. They overran Driel, after it was realised that the Heveadorp ferry had been destroyed. In Driel, the Polish paratroopers set up a defensive "hedgehog" position, from which over the next two nights further attempts were made to cross the Rhine.
The following day, the Poles were able to produce some makeshift boats and attempt a crossing. With great difficulty and under German fire from the heights of Westerbouwing on the north bank of the river, the 8th Parachute Company and, later, additional troops from 3rd Battalion, managed to cross the Rhine in two attempts. In total, about 200 Polish paratroopers made it across in two days, and were able to cover the subsequent withdrawal of the remnants of the British 1st Airborne Division.
On 26 September 1944, the Brigade (now including the 1st Battalion and elements of the 3rd Battalion, who were parachuted near to Grave on 23 September) was ordered to march towards Nijmegen. The Brigade had lost 25% of its fighting strength, amounting to 590 casualties.
In 1945, the Brigade was attached to the Polish 1st Armoured Division and undertook occupation duties in Northern Germany until it was disbanded on 30 June 1947. The majority of its soldiers chose to stay in exile rather than hazard returning to the new Communist Poland.
Portugal
The first Portuguese paratroopers were part of a small commando unit, organized in Australia during World War II, with the objective of being dropped in the rear of the Japanese troops occupying Portuguese Timor.
However, the first regular parachute unit was only created in 1955, by the Portuguese Air Force, as the Parachute Caçadores Battalion. This unit adopted the green beret, which has since become the principal emblem of the Portuguese paratroopers. The Battalion was expanded to a Regiment, and additional parachute battalions were created in the Portuguese overseas territories of Angola, Mozambique and Guinea. These units were actively engaged in the Portuguese Colonial War, from 1961 to 1975, being involved in both airborne and air assault operations. In addition to the regular units of paratroopers, the Parachute Special Groups, composed of African irregular troops who wore a maroon beret, were also created in Mozambique.
With the end of the Colonial War, the Portuguese parachute troops were reorganized as the Paratroopers Corps, with the Light Parachute Brigade as its operational unit. In 1993, the Paratroopers Corps was transferred from the Portuguese Air Force to the Portuguese Army and become the Airborne Troops Command, with the Independent Airborne Brigade as its operational unit.
The reorganization of the Portuguese Army in 2006 caused the extinction of the Airborne Troops Command. The Independent Airborne Brigade was transformed into the present Rapid Reaction Brigade, which now includes not only parachute troops but also special operations and commando troops.
Russia
Russian Airborne Troops were first formed in the Soviet Union during the mid-1930s and arguably were the first regular paratrooper units in the world. They were massively expanded during World War II, forming ten Airborne Corps plus numerous Independent Airborne Brigades, with most or all achieving Guards status. The 9th Guards Army was eventually formed with three Guards Rifle Corps (37,38,39) of Airborne divisions. One of the new units was the 100th Airborne Division.
At the end of the war they were reconstituted as Guards Rifle Divisions. They were later rebuilt during the Cold War, eventually forming seven Airborne Divisions, an Independent Airborne regiment and sixteen Air Assault Brigades. These divisions were formed into their own VDV commands (Vozdushno-Desantnye Voyska) to give the Soviets a rapid strike force to spearhead strategic military operations.
Following the collapse of the Soviet Union, there has been a reduction in airborne divisions. Three VDV divisions have been disbanded, as well as one brigade and a brigade-sized training centre. Nevertheless, Russian Airborne Troops are still the largest in the world.
VDV troops participated in the rapid deployment of Russian forces in and around Pristina airport during the Kosovo War. They were also deployed in Chechnya as an active bridgehead for other forces to follow.
Spain
In Spain, the three branches of the Armed Forces have paratrooper units, the biggest in number being the Army's Paratrooper Brigade in Paracuellos de Jarama BRIPAC. All members of the special forces in the Navy (Fuerza de Guerra Naval Especial), the Army and the Air Force must be certified as paratrooper and pass the HALO-HAHO examinations each year.
British Army
The Parachute Regiment has its origins in the elite force of Commandos set up by the British Army at the request of Winston Churchill, the Prime Minister, during the initial phase of the Second World War. Churchill had been an enthusiast of the concept of airborne warfare since the First World War, when he had proposed the creation of a force that might assault the German flanks deep behind the trenches of the static Western Front. In 1940 and in the aftermath of the Dunkirk evacuation and the Fall of France, Churchill's interest was caught again by the idea of taking the fight back to Europe – the airborne was now a means to be able to storm a series of water obstacles... everywhere from the Channel to the Mediterranean and in the East.
Enthusiasts within the British armed forces were inspired in the creation of airborne forces (including the Parachute Regiment, Air Landing Regiment, and the Glider Pilot Regiment) by the example of the German Luftwaffe's Fallschirmjäger, which had a major role in the invasions of Norway, and the Low Countries, particularly the attack on Fort Eben-Emael in Belgium, and a pivotal, but costly role in the invasion of Crete. From the perspective of others, however, the proposed airborne units had a key weakness: they required exactly the same resources as the new strategic bomber capability, another high priority, and would also compete with the badly stretched strategic air lift capability, essential to Churchill's strategy in the Far East. It took the continued reintervention of Churchill to ensure that sufficient aircraft were devoted to the airborne project to make it viable.
Britain's first airborne assault took place on February 10, 1941 when, what was then known as II Special Air Service (some 37 men of 500 trained in No. 2 Commando plus three Italian interpreters), parachuted into Italy to blow up an aqueduct in a daring raid named Operation Colossus. After the Battle of Crete, it was agreed that Britain would need many more paratroopers for similar operations. No 2 Commando were tasked with specialising in airborne assault and became the nucleus of the Parachute Regiment, becoming the 1st Battalion. The larger scale drops in Sicily by the 1st Airborne Division in 1943 met with mixed success, and some commanders concluded the airborne experiment was a failure. Once again, it took the reintervention of senior British political leaders, looking ahead to the potential needs of D-Day, to continue the growth in British airborne resources.
Extensive successful drops were made during the Normandy landings by the 6th Airborne Division (see Operation Tonga), under the command of Major-General Richard Nelson Gale, but Operation Market Garden against Arnhem with the 1st Airborne Division under Roy Urquhart were less successful, and proved, in the famous phrase, to be A Bridge too far and the 1st Airborne was virtually destroyed. Later large scale drops, such as those on the Rhine under Operation Varsity and involving the British 6th and the US 17th, were successful, but less ambitious in their intent to seize ground. After the war, there was fierce debate within the cash-strapped British armed forces as to the value of airborne forces. Many noted the unique contribution they had made within the campaign. Others pointed to the extreme costs involved and the need for strict prioritisation. During the debate, the contribution of British airborne forces in the Far Eastern theatres was perhaps underplayed, to the long term detriment of the argument.
Royal Air Force
Several parachute squadrons of the Royal Air Force Regiment were formed in World War II in order to secure airfields for the RAF – this capability is currently operated by II Squadron. |
Exact controllability of scalar conservation laws with strict convex flux We consider the scalar conservation law with strictly convex flux in one space dimension. In this paper we study the exact controllability of the entropy solution by using initial or boundary data as controls. Some partial results have been obtained in earlier works. Here we investigate the precise conditions under which the exact controllability problem admits a solution. The basic ingredients in the proof of these results are the Lax-Oleinik explicit formula and finer properties of the characteristic curves.
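For reference, the setting and the Lax-Oleinik formula invoked above can be written as follows (standard notation, stated here as background rather than quoted from the paper):
\[
u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x), \quad x \in \mathbb{R},\ t > 0,
\]
with $f$ strictly convex. Writing $f^{*}$ for the Legendre transform of $f$ and $U_0(y) = \int_0^{y} u_0(s)\,ds$, the entropy solution is given by
\[
u(x,t) = (f')^{-1}\!\left(\frac{x - y(x,t)}{t}\right), \qquad
y(x,t) \in \operatorname*{argmin}_{y \in \mathbb{R}} \left[ U_0(y) + t\, f^{*}\!\left(\frac{x - y}{t}\right) \right].
\]
|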
//=============================================================================
/**
* @file Naming_Test.cpp
*
* $Id$
*
* This is a test to illustrate the Naming Services. The test
* does binds, rebinds, finds, and unbinds on name bindings using
* the local naming context.
*
*
* @author <NAME> <<EMAIL>> and <NAME> <<EMAIL>>
*/
//=============================================================================
#include "test_config.h"
#include "randomize.h"
#include "ace/Lib_Find.h"
#include "ace/SString.h"
#include "ace/Naming_Context.h"
#include "ace/Profile_Timer.h"
#include "ace/OS_NS_stdio.h"
#include "ace/OS_NS_string.h"
#include "ace/OS_NS_unistd.h"
static char name[BUFSIZ];
static char value[BUFSIZ];
static char type[BUFSIZ];
void
initialize_array (int * array, int size)
{
for (int n = 0; n < size; ++n)
array[n] = n;
}
static void
print_time (ACE_Profile_Timer &timer,
const char *test)
{
ACE_Profile_Timer::ACE_Elapsed_Time et;
timer.stop ();
timer.elapsed_time (et);
ACE_DEBUG ((LM_DEBUG, ACE_TEXT (" ***** %C ***** \n"), test));
ACE_DEBUG ((LM_DEBUG,
ACE_TEXT ("real time = %f secs, user time = %f secs, system time = %f secs\n"),
et.real_time, et.user_time, et.system_time));
ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("time per call = %f usecs\n"),
(et.real_time / double (ACE_NS_MAX_ENTRIES)) * 1000000));
}
static void
test_bind (ACE_Naming_Context &ns_context)
{
int array [ACE_NS_MAX_ENTRIES];
initialize_array (array, sizeof (array) / sizeof (array[0]));
randomize (array, sizeof (array) / sizeof (array[0]));
// do the binds
for (size_t i = 0; i < ACE_NS_MAX_ENTRIES; i++)
{
ACE_OS::sprintf (name, "%s%d", "name", array[i]);
ACE_NS_WString w_name (name);
ACE_OS::sprintf (value, "%s%d", "value", array[i]);
ACE_NS_WString w_value (value);
ACE_OS::sprintf (type, "%s%d", "type", array [i]);
int bind_result = ns_context.bind (w_name, w_value, type);
ACE_TEST_ASSERT (bind_result != -1);
}
}
static void
test_find_failure (ACE_Naming_Context &ns_context)
{
ACE_OS::sprintf (name, "%s", "foo-bar");
ACE_NS_WString w_name (name);
ACE_NS_WString w_value;
char *l_type = 0;
// Do the finds.
for (size_t i = 0; i < ACE_NS_MAX_ENTRIES; i++)
{
int resolve = ns_context.resolve (w_name, w_value, l_type);
ACE_TEST_ASSERT (resolve == -1);
}
}
static void
test_rebind (ACE_Naming_Context &ns_context)
{
int array [ACE_NS_MAX_ENTRIES];
initialize_array (array, sizeof (array) / sizeof (array[0]));
randomize (array, sizeof (array) / sizeof (array[0]));
// do the rebinds
for (size_t i = 0; i < ACE_NS_MAX_ENTRIES; i++)
{
ACE_OS::sprintf (name, "%s%d", "name", array[i]);
ACE_NS_WString w_name (name);
ACE_OS::sprintf (value, "%s%d", "value", -array[i]);
ACE_NS_WString w_value (value);
ACE_OS::sprintf (type, "%s%d", "type", -array[i]);
int rebind = ns_context.rebind (w_name, w_value, type);
ACE_TEST_ASSERT (rebind != -1);
}
}
static void
test_unbind (ACE_Naming_Context &ns_context)
{
int array [ACE_NS_MAX_ENTRIES];
initialize_array (array, sizeof (array) / sizeof (array[0]));
randomize (array, sizeof (array) / sizeof (array[0]));
// do the unbinds
for (size_t i = 0; i < ACE_NS_MAX_ENTRIES; i++)
{
ACE_OS::sprintf (name, "%s%d", "name", array[i]);
ACE_NS_WString w_name (name);
int unbind = ns_context.unbind (w_name);
ACE_TEST_ASSERT (unbind != -1);
}
}
// Look up the entries; `sign' selects whether the values written by
// test_bind (1) or test_rebind (-1) are expected, and `result' is the
// expected return code from resolve().
static void
test_find (ACE_Naming_Context &ns_context, int sign, int result)
{
char temp_val[BUFSIZ];
char temp_type[BUFSIZ];
int array [ACE_NS_MAX_ENTRIES];
initialize_array (array, sizeof (array) / sizeof (array[0]));
randomize (array, sizeof (array) / sizeof (array[0]));
// do the finds
for (size_t i = 0; i < ACE_NS_MAX_ENTRIES; i++)
{
if (sign == 1)
{
ACE_OS::sprintf (temp_val, "%s%d", "value", array[i]);
ACE_OS::sprintf (temp_type, "%s%d", "type", array[i]);
}
else
{
ACE_OS::sprintf (temp_val, "%s%d", "value", -array[i]);
ACE_OS::sprintf (temp_type, "%s%d", "type", -array[i]);
}
ACE_OS::sprintf (name, "%s%d", "name", array[i]);
ACE_NS_WString w_name (name);
ACE_NS_WString w_value;
char *type_out = 0;
ACE_NS_WString val (temp_val);
int const resolve_result = ns_context.resolve (w_name, w_value, type_out);
if (resolve_result != result)
ACE_ERROR ((LM_ERROR,
ACE_TEXT ("Error, resolve result not equal to resutlt (%d != %d)\n"),
resolve_result, result));
char *l_value = w_value.char_rep ();
if (l_value)
{
ACE_TEST_ASSERT (w_value == val);
if (ns_context.name_options ()->debug ())
{
if (type_out)
ACE_DEBUG ((LM_DEBUG,
ACE_TEXT ("Name: %C\tValue: %C\tType: %C\n"),
name, l_value, type_out));
else
ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("Name: %C\tValue: %C\n"),
name, l_value));
}
if (type_out)
{
ACE_TEST_ASSERT (ACE_OS::strcmp (type_out, temp_type) == 0);
delete[] type_out;
}
}
delete[] l_value;
}
}
int
run_main (int argc, ACE_TCHAR *argv[])
{
ACE_START_TEST (ACE_TEXT ("Naming_Test"));
ACE_TCHAR temp_file [BUFSIZ];
ACE_Naming_Context *ns_context = 0;
ACE_NEW_RETURN (ns_context, ACE_Naming_Context, -1);
ACE_Name_Options *name_options = ns_context->name_options ();
name_options->parse_args (argc, argv);
/*
** NOTE! This is an experimental value and is not magic in any way. It
** works for me, on one system. It's needed because in the particular
** case here where the underlying mmap will allocate a small area and
** then try to grow it, it always moves it to a new location, which
** totally screws things up. I once tried forcing the realloc to do
** MAP_FIXED but that's not a good solution since it may overwrite other
** mapped areas of memory, like the heap, or the C library, and get very
** unexpected results. (<NAME>, 24-August-2007)
*/
# if defined (ACE_LINUX) && defined (__x86_64__)
name_options->base_address ((char*)0x3c00000000);
#endif
bool unicode = false;
#if (defined (ACE_WIN32) && defined (ACE_USES_WCHAR))
unicode = true;
#endif /* ACE_WIN32 && ACE_USES_WCHAR */
if (unicode && name_options->use_registry () == 1)
{
name_options->namespace_dir (ACE_TEXT ("Software\\ACE\\Name Service"));
name_options->database (ACE_TEXT ("Version 1"));
}
else
{
const ACE_TCHAR* pname = ACE::basename (name_options->process_name (),
ACE_DIRECTORY_SEPARATOR_CHAR);
// Allow the user to determine where the context file will be
// located just in case the current directory is not suitable for
// locking. We don't just set namespace_dir () on name_options
// because that is not sufficient to work around locking problems
// for Tru64 when the current directory is NFS mounted from a
// system that does not properly support locking.
ACE_TCHAR temp_dir [MAXPATHLEN];
if (ACE::get_temp_dir (temp_dir, MAXPATHLEN) == -1)
{
ACE_ERROR_RETURN ((LM_ERROR,
ACE_TEXT ("Temporary path too long, ")
ACE_TEXT ("defaulting to current directory\n")),
-1);
}
else
{
ACE_OS::chdir (temp_dir);
}
// Set the database name using the pid. mktemp isn't always available.
ACE_OS::snprintf(temp_file, BUFSIZ,
#if !defined (ACE_WIN32) && defined (ACE_USES_WCHAR)
ACE_TEXT ("%ls%d"),
#else
ACE_TEXT ("%s%d"),
#endif
pname,
(int)(ACE_OS::getpid ()));
name_options->database (temp_file);
}
if (ns_context->open (ACE_Naming_Context::PROC_LOCAL, 1) == -1)
{
ACE_ERROR_RETURN ((LM_ERROR,
ACE_TEXT ("ns_context->open (PROC_LOCAL) %p\n"),
ACE_TEXT ("failed")),
-1);
}
ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("time to test %d iterations using %s\n"),
ACE_NS_MAX_ENTRIES, name_options->use_registry () ?
ACE_TEXT ("Registry") : ACE_TEXT ("ACE")));
ACE_Profile_Timer timer;
timer.start ();
// Add some bindings to the database
test_bind (*ns_context);
print_time (timer, "Binds");
timer.start ();
// Should find the entries
test_find (*ns_context, 1, 0);
print_time (timer, "Successful Finds");
timer.start ();
// Rebind with negative values
test_rebind (*ns_context);
print_time (timer, "Rebinds");
timer.start ();
// Should find the entries
test_find (*ns_context, -1, 0);
print_time (timer, "Successful Finds");
timer.start ();
// Should not find the entries
test_find_failure (*ns_context);
print_time (timer, "UnSuccessful Finds");
timer.start ();
// Remove all bindings from database
test_unbind (*ns_context);
print_time (timer, "Unbinds");
ACE_OS::sprintf (temp_file, ACE_TEXT ("%s%s%s"),
name_options->namespace_dir (),
ACE_DIRECTORY_SEPARATOR_STR,
name_options->database ());
delete ns_context;
// Remove any existing files. No need to check return value here
// since we don't care if the file doesn't exist.
ACE_OS::unlink (temp_file);
ACE_END_TEST;
return 0;
}
|
A City of Words: José F. A. Oliver's Istanbul Poems José F. A. Oliver's experimental urban aesthetics requires a rethinking of the iconic figure of the flâneur. In Oliver's Istanbul poems, the flâneur is disembodied, giving way to a peripatetic consciousness that engages the experiential heterogeneity of the encounter with the city on multiple levels (emotional, visual, cognitive, rhythmic, auditory, verbal) and versifies this engagement as a city of words. Through this poetic practice, the readers experience the city in their imagination, with vicarious trepidation and excitement. There is a democratic ethos underlying this non-instrumental creative mode, as it is not only the city that comes into being but also the poet and the reader.
<reponame>MarcPartensky/Pygame-Geometry
from .materialform import MaterialForm
from .abstract import Point,Form,Segment,Vector
import math
class MaterialFormHandler:
def __init__(self,material_forms):
"""Create material form collider object."""
self.forms=material_forms
self.time=0.1
def update(self,t):
"""Update the forms and deal with its collisions."""
self.updateForms(t)
self.dealCollisions()
def updateForms(self,t):
"""Update the forms."""
for form in self.forms:
form.update(t)
def dealCollisions(self):
"""Deal with all collisions."""
l=len(self.forms)
for i in range(l):
for j in range(i):
f1=self.forms[i]
f2=self.forms[j]
self.collide(f1,f2)
def rotate(self,angle=math.pi/2,point=Point(0,0)):
"""Rotate the forms using an angle and a point."""
for form in self.forms:
form.rotate(angle,point)
def collide(self,object1,object2):
"""Deal with the collisions of two objects 'object1' and 'object2'."""
        #I've got no clue how to do such a thing
        #I just know that I need the motions of the forms, the coordinates of their points and their masses.
ap1=object1.points
bp1=[Point.createFromVector(p1.getNextPosition(self.time)) for p1 in ap1]
ls1=[Segment(a1.abstract,b1) for (a1,b1) in zip(ap1,bp1)]
ap2=object2.points
bp2=[Point.createFromVector(p2.getNextPosition(self.time)) for p2 in ap2]
ls2=[Segment(a2.abstract,b2) for (a2,b2) in zip(ap2,bp2)]
points=[]
for s1 in ls1:
for s2 in ls2:
print(s1,s2)
point=s1.crossSegment(s2)
if point:
print(point)
points.append(point)
return points
def show(self,surface):
"""Show the material forms on the surface."""
for form in self.forms:
form.show(surface)
    def affectFriction(self):
        """Affect all entities with friction for all dimensions."""
        # Note: this expects self.entities and self.factor, which are not set
        # by __init__ and must be provided elsewhere.
        for entity in self.entities:
            entity.velocity=[self.factor*v for v in entity.velocity]
def affectCollisions(self):
"""Affect all entities with collisions between themselves."""
l=len(self.entities)
for y in range(l):
for x in range(y):
self.affectCollision(self.entities[y],self.entities[x])
def affectCollision(self,entity1,entity2):
x1,y1=entity1.position
x2,y2=entity2.position
r1=entity1.radius
r2=entity2.radius
        if math.sqrt((x1-x2)**2+(y1-y2)**2)<r1+r2:
self.affectVelocity(entity1,entity2)
def affectVelocity(self,entity1,entity2):
x1,y1=entity1.position
x2,y2=entity2.position
vx1,vy1=entity1.velocity
vx2,vy2=entity2.velocity
m1=entity1.mass
m2=entity2.mass
if x2!=x1:
            angle=-math.atan((y2-y1)/(x2-x1))
ux1,uy1=self.rotate2(entity1.velocity,angle)
ux2,uy2=self.rotate2(entity2.velocity,angle)
v1=[self.affectOneVelocity(ux1,ux2,m1,m2),uy1]
v2=[self.affectOneVelocity(ux2,ux1,m1,m2),uy2]
entity1.velocity=self.rotate2(v1,-angle)
entity2.velocity=self.rotate2(v2,-angle)
def affectOneVelocity(self,v1,v2,m1,m2):
return (m1-m2)/(m1+m2)*v1+(2*m2)/(m1+m2)*v2
def rotate2(self,velocity,angle):
vx,vy=velocity
        nvx=vx*math.cos(angle)-vy*math.sin(angle)
        nvy=vx*math.sin(angle)+vy*math.cos(angle)
return [nvx,nvy]
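# Note: affectOneVelocity implements the standard 1-D elastic collision
# formula v1' = ((m1 - m2)*v1 + 2*m2*v2) / (m1 + m2), applied along the line
# of centers after rotate2 aligns it with the x axis. For equal masses the
# velocities are simply exchanged, e.g. affectOneVelocity(2, -1, 1, 1) == -1.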
if __name__=="__main__":
from .surface import Surface
surface=Surface(name="Material Form Handler")
ps1=[Point(0,0),Point(0,1),Point(1,1),Point(1,0)]
f1=Form(ps1)
f1=MaterialForm.createFromForm(f1)
f1.velocity=Vector(1,1)
ps2=[Point(0,0),Point(0,2),Point(2,2),Point(2,0)]
f2=Form(ps2)
f2=MaterialForm.createFromForm(f2)
forms=[f1,f2]
handler=MaterialFormHandler(forms)
while surface.open:
surface.check()
surface.control()
surface.clear()
surface.show()
handler.update(1)
handler.rotate(0.1)
handler.show(surface)
surface.flip()
|
package com.listener;
import com.event.MqttEvent;
import com.handler.MqttHandler;
import io.netty.buffer.ByteBuf;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
/**
* @Author: chihaojie
* @Date: 2020/1/1 14:05
* @Version 1.0
* @Note
*/
public class MqttMessageListener implements MqttHandler {
private final BlockingQueue<MqttEvent> events;
public MqttMessageListener() {
events = new ArrayBlockingQueue<>(100);
}
@Override
public void onMessage(String topic, ByteBuf message) {
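        // Note: add() throws IllegalStateException once the queue (capacity 100)
        // is full; offer() could be used instead if dropping messages is acceptable.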
System.out.println("MQTT message [{}], topic [{}]"+ message.toString(StandardCharsets.UTF_8)+topic);
events.add(new MqttEvent(topic, message.toString(StandardCharsets.UTF_8)));
}
}
|
Effects of Esomeprazole on Acid Output in Patients With Zollinger-Ellison Syndrome or Idiopathic Gastric Acid Hypersecretion OBJECTIVES: To evaluate the efficacy and safety of oral esomeprazole in the control of gastric acid hypersecretion in patients with hypersecretory states. METHODS: In this 12-month, open-label, multicenter study, acid output (AO) was evaluated at baseline, day 10, and months 3, 6, and 12. The starting dose of esomeprazole was 40 mg or 80 mg twice daily. On day 10, patients with controlled AO were maintained on the same dose, while those with uncontrolled AO had their doses increased (maximum dose 240 mg/day) until control was attained. Esophagogastroduodenoscopy (EGD) was performed at baseline and at 6 and 12 months. Safety and tolerability were assessed throughout the study by EGD, gastric analysis, and adverse events. RESULTS: Twenty-one patients (19 with Zollinger-Ellison syndrome, 2 with idiopathic gastric acid hypersecretion) completed the study. Of the 20 patients with controlled AO at day 10, 18 (90%) had sustained AO control for the rest of the study. At 12 months, AO was controlled in 14 of 16 patients receiving esomeprazole 40 mg twice daily, in all 4 patients receiving esomeprazole 80 mg twice daily, and in the 1 patient receiving esomeprazole 80 mg 3 times daily. At 6 and 12 months, no patient had endoscopic evidence of mucosal disease. Esomeprazole was well tolerated; 1 patient had a serious adverse event (hypomagnesemia) attributed to treatment that resolved with magnesium supplementation during continued treatment. CONCLUSION: Esomeprazole in appropriately titrated doses controls AO over 12 months in patients with hypersecretory states and is well tolerated.
package com.ning.jcbm.lzma;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import com.ning.jcbm.ByteArrayOutputStream;
import com.ning.jcbm.DriverBase;
/**
* Driver that uses original conversion done by LZMA author.
* Codec is not supported any more (AFAIK).
*/
public class LzmaDriver extends DriverBase
{
static final int DEFAULT_ALGORITHM = 2;
// what would be useful defaults? Should probably depend on input size?
static final int MAX_DICTIONARY_SIZE = (1 << 21); // default; 2 megs
static final int DEFAULT_MATCH_FINDER = 1;
static final int DEFAULT_FAST_BYTES = 128;
static final int Lc = 3;
static final int Lp = 0;
static final int Pb = 2;
public LzmaDriver() {
super("LZMA");
}
@Override
protected int compressBlock(byte[] uncompressed, byte[] compressBuffer) throws IOException
{
ByteArrayInputStream inStream = new ByteArrayInputStream(uncompressed);
ByteArrayOutputStream outStream = new ByteArrayOutputStream(compressBuffer);
        boolean eos = true; // write an end-of-stream marker, since the size field below is written as -1 (unknown)
SevenZip.Compression.LZMA.Encoder encoder = new SevenZip.Compression.LZMA.Encoder();
if (!encoder.SetAlgorithm(DEFAULT_ALGORITHM)) throw new IllegalArgumentException();
// Let's actually try to determine somewhat optimal size; starting with 4k
int dictSize = (1 << 12);
int uncompLen = uncompressed.length;
while (dictSize < uncompLen) {
dictSize += dictSize;
if (dictSize >= MAX_DICTIONARY_SIZE) {
break;
}
}
if (!encoder.SetDictionarySize(dictSize)) throw new IllegalArgumentException();
if (!encoder.SetNumFastBytes(DEFAULT_FAST_BYTES)) throw new IllegalArgumentException();
if (!encoder.SetMatchFinder(DEFAULT_MATCH_FINDER)) throw new IllegalArgumentException();
if (!encoder.SetLcLpPb(Lc, Lp, Pb)) throw new IllegalArgumentException();
encoder.SetEndMarkerMode(eos);
encoder.WriteCoderProperties(outStream);
for (int i = 0; i < 8; i++) { // just write -1
outStream.write(0xFF);
}
encoder.Code(inStream, outStream, -1, -1, null);
return outStream.length();
}
@Override
protected int uncompressBlock(byte[] compressed, byte[] uncompressBuffer) throws IOException
{
ByteArrayInputStream inStream = new ByteArrayInputStream(compressed);
int propertiesSize = 5;
byte[] properties = new byte[propertiesSize];
if (inStream.read(properties, 0, propertiesSize) != propertiesSize) {
throw new IOException("input .lzma content is too short");
}
SevenZip.Compression.LZMA.Decoder decoder = new SevenZip.Compression.LZMA.Decoder();
if (!decoder.SetDecoderProperties(properties)) {
throw new IOException("Incorrect stream properties");
}
long outSize = 0;
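        // Note: the 8 length bytes are read here in big-endian order, whereas the
        // standard .lzma header stores the size little-endian. This only works
        // because compressBlock() writes all 0xFF bytes (-1 = "size unknown"),
        // which reads the same in either byte order.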
for (int i = 0; i < 8; i++) {
int v = inStream.read();
if (v < 0) {
throw new IOException("Can't read stream size");
}
outSize = (outSize << 8) + (v & 0xFF);
}
ByteArrayOutputStream outStream = new ByteArrayOutputStream(uncompressBuffer);
if (!decoder.Code(inStream, outStream, outSize)) {
throw new IOException("Error in data stream");
}
return outStream.length();
}
/* Streaming not natively supported (or rather, not in the way we could use it);
* could fake by using block mode, but let's not yet bother.
*/
@Override
protected void compressToStream(byte[] uncompressed, OutputStream out) throws IOException {
throw new UnsupportedOperationException();
}
@Override
protected int uncompressFromStream(InputStream in, byte[] buffer) throws IOException {
throw new UnsupportedOperationException();
}
}
|
"""Tests for inline-env-var rule."""
from ansiblelint.rules import RulesCollection
from ansiblelint.rules.inline_env_var import EnvVarsInCommandRule
from ansiblelint.testing import RunFromText
SUCCESS_PLAY_TASKS = """
- hosts: localhost
tasks:
- name: actual use of environment
shell: echo $HELLO
environment:
HELLO: hello
- name: use some key-value pairs
command: chdir=/tmp creates=/tmp/bobbins warn=no touch bobbins
- name: commands can have flags
command: abc --xyz=def blah
- name: commands can have equals in them
command: echo "==========="
- name: commands with cmd
command:
cmd:
echo "-------"
- name: command with stdin (ansible > 2.4)
command: /bin/cat
args:
stdin: "Hello, world!"
- name: use argv to send the command as a list
command:
argv:
- /bin/echo
- Hello
- World
- name: another use of argv
command:
args:
argv:
- echo
- testing
- name: environment variable with shell
shell: HELLO=hello echo $HELLO
- name: command with stdin_add_newline (ansible > 2.8)
command: /bin/cat
args:
stdin: "Hello, world!"
stdin_add_newline: false
- name: command with strip_empty_ends (ansible > 2.8)
command: echo
args:
strip_empty_ends: false
"""
FAIL_PLAY_TASKS = """
- hosts: localhost
tasks:
- name: environment variable with command
command: HELLO=hello echo $HELLO
- name: typo some stuff
command: cerates=/tmp/blah warn=no touch /tmp/blah
"""
def test_success() -> None:
"""Positive test for inline-env-var."""
collection = RulesCollection()
collection.register(EnvVarsInCommandRule())
runner = RunFromText(collection)
results = runner.run_playbook(SUCCESS_PLAY_TASKS)
assert len(results) == 0
def test_fail() -> None:
"""Negative test for inline-env-var."""
collection = RulesCollection()
collection.register(EnvVarsInCommandRule())
runner = RunFromText(collection)
results = runner.run_playbook(FAIL_PLAY_TASKS)
assert len(results) == 2
|
<reponame>status-im/wakuconnect-chat-sdk
/* eslint-disable */
import Long from 'long'
import _m0 from 'protobufjs/minimal'
import { ChatMessage } from './chat_message'
import { EmojiReaction } from './emoji_reaction'
export const protobufPackage = 'communities.v1'
export interface MembershipUpdateEvent {
/** Lamport timestamp of the event */
clock: number
/** List of public keys of objects of the action */
members: string[]
/** Name of the chat for the CHAT_CREATED/NAME_CHANGED event types */
name: string
/** The type of the event */
type: MembershipUpdateEvent_EventType
}
export enum MembershipUpdateEvent_EventType {
UNKNOWN = 0,
CHAT_CREATED = 1,
NAME_CHANGED = 2,
MEMBERS_ADDED = 3,
MEMBER_JOINED = 4,
MEMBER_REMOVED = 5,
ADMINS_ADDED = 6,
ADMIN_REMOVED = 7,
UNRECOGNIZED = -1,
}
export function membershipUpdateEvent_EventTypeFromJSON(
object: any
): MembershipUpdateEvent_EventType {
switch (object) {
case 0:
case 'UNKNOWN':
return MembershipUpdateEvent_EventType.UNKNOWN
case 1:
case 'CHAT_CREATED':
return MembershipUpdateEvent_EventType.CHAT_CREATED
case 2:
case 'NAME_CHANGED':
return MembershipUpdateEvent_EventType.NAME_CHANGED
case 3:
case 'MEMBERS_ADDED':
return MembershipUpdateEvent_EventType.MEMBERS_ADDED
case 4:
case 'MEMBER_JOINED':
return MembershipUpdateEvent_EventType.MEMBER_JOINED
case 5:
case 'MEMBER_REMOVED':
return MembershipUpdateEvent_EventType.MEMBER_REMOVED
case 6:
case 'ADMINS_ADDED':
return MembershipUpdateEvent_EventType.ADMINS_ADDED
case 7:
case 'ADMIN_REMOVED':
return MembershipUpdateEvent_EventType.ADMIN_REMOVED
case -1:
case 'UNRECOGNIZED':
default:
return MembershipUpdateEvent_EventType.UNRECOGNIZED
}
}
export function membershipUpdateEvent_EventTypeToJSON(
object: MembershipUpdateEvent_EventType
): string {
switch (object) {
case MembershipUpdateEvent_EventType.UNKNOWN:
return 'UNKNOWN'
case MembershipUpdateEvent_EventType.CHAT_CREATED:
return 'CHAT_CREATED'
case MembershipUpdateEvent_EventType.NAME_CHANGED:
return 'NAME_CHANGED'
case MembershipUpdateEvent_EventType.MEMBERS_ADDED:
return 'MEMBERS_ADDED'
case MembershipUpdateEvent_EventType.MEMBER_JOINED:
return 'MEMBER_JOINED'
case MembershipUpdateEvent_EventType.MEMBER_REMOVED:
return 'MEMBER_REMOVED'
case MembershipUpdateEvent_EventType.ADMINS_ADDED:
return 'ADMINS_ADDED'
case MembershipUpdateEvent_EventType.ADMIN_REMOVED:
return 'ADMIN_REMOVED'
default:
return 'UNKNOWN'
}
}
/**
* MembershipUpdateMessage is a message used to propagate information
* about group membership changes.
* For more information, see https://github.com/status-im/specs/blob/master/status-group-chats-spec.md.
*/
export interface MembershipUpdateMessage {
/** The chat id of the private group chat */
chatId: string
/**
* A list of events for this group chat, first x bytes are the signature, then is a
* protobuf encoded MembershipUpdateEvent
*/
events: Uint8Array[]
message: ChatMessage | undefined
emojiReaction: EmojiReaction | undefined
}
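// Example (sketch, field values illustrative only): round-tripping a message
// through the generated codec, which uses protobufjs' Writer/Reader:
//
//   const bytes = MembershipUpdateMessage.encode({
//     chatId: 'chat-1',
//     events: [],
//     message: undefined,
//     emojiReaction: undefined,
//   }).finish()
//   const decoded = MembershipUpdateMessage.decode(bytes)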
const baseMembershipUpdateEvent: object = {
clock: 0,
members: '',
name: '',
type: 0,
}
export const MembershipUpdateEvent = {
encode(
message: MembershipUpdateEvent,
writer: _m0.Writer = _m0.Writer.create()
): _m0.Writer {
if (message.clock !== 0) {
writer.uint32(8).uint64(message.clock)
}
for (const v of message.members) {
writer.uint32(18).string(v!)
}
if (message.name !== '') {
writer.uint32(26).string(message.name)
}
if (message.type !== 0) {
writer.uint32(32).int32(message.type)
}
return writer
},
decode(
input: _m0.Reader | Uint8Array,
length?: number
): MembershipUpdateEvent {
const reader = input instanceof _m0.Reader ? input : new _m0.Reader(input)
let end = length === undefined ? reader.len : reader.pos + length
const message = { ...baseMembershipUpdateEvent } as MembershipUpdateEvent
message.members = []
while (reader.pos < end) {
const tag = reader.uint32()
switch (tag >>> 3) {
case 1:
message.clock = longToNumber(reader.uint64() as Long)
break
case 2:
message.members.push(reader.string())
break
case 3:
message.name = reader.string()
break
case 4:
message.type = reader.int32() as any
break
default:
reader.skipType(tag & 7)
break
}
}
return message
},
fromJSON(object: any): MembershipUpdateEvent {
const message = { ...baseMembershipUpdateEvent } as MembershipUpdateEvent
message.members = []
if (object.clock !== undefined && object.clock !== null) {
message.clock = Number(object.clock)
} else {
message.clock = 0
}
if (object.members !== undefined && object.members !== null) {
for (const e of object.members) {
message.members.push(String(e))
}
}
if (object.name !== undefined && object.name !== null) {
message.name = String(object.name)
} else {
message.name = ''
}
if (object.type !== undefined && object.type !== null) {
message.type = membershipUpdateEvent_EventTypeFromJSON(object.type)
} else {
message.type = 0
}
return message
},
toJSON(message: MembershipUpdateEvent): unknown {
const obj: any = {}
message.clock !== undefined && (obj.clock = message.clock)
if (message.members) {
obj.members = message.members.map(e => e)
} else {
obj.members = []
}
message.name !== undefined && (obj.name = message.name)
message.type !== undefined &&
(obj.type = membershipUpdateEvent_EventTypeToJSON(message.type))
return obj
},
fromPartial(
object: DeepPartial<MembershipUpdateEvent>
): MembershipUpdateEvent {
const message = { ...baseMembershipUpdateEvent } as MembershipUpdateEvent
message.members = []
if (object.clock !== undefined && object.clock !== null) {
message.clock = object.clock
} else {
message.clock = 0
}
if (object.members !== undefined && object.members !== null) {
for (const e of object.members) {
message.members.push(e)
}
}
if (object.name !== undefined && object.name !== null) {
message.name = object.name
} else {
message.name = ''
}
if (object.type !== undefined && object.type !== null) {
message.type = object.type
} else {
message.type = 0
}
return message
},
}
const baseMembershipUpdateMessage: object = { chatId: '' }
export const MembershipUpdateMessage = {
encode(
message: MembershipUpdateMessage,
writer: _m0.Writer = _m0.Writer.create()
): _m0.Writer {
if (message.chatId !== '') {
writer.uint32(10).string(message.chatId)
}
for (const v of message.events) {
writer.uint32(18).bytes(v!)
}
if (message.message !== undefined) {
ChatMessage.encode(message.message, writer.uint32(26).fork()).ldelim()
}
if (message.emojiReaction !== undefined) {
EmojiReaction.encode(
message.emojiReaction,
writer.uint32(34).fork()
).ldelim()
}
return writer
},
decode(
input: _m0.Reader | Uint8Array,
length?: number
): MembershipUpdateMessage {
const reader = input instanceof _m0.Reader ? input : new _m0.Reader(input)
let end = length === undefined ? reader.len : reader.pos + length
const message = {
...baseMembershipUpdateMessage,
} as MembershipUpdateMessage
message.events = []
while (reader.pos < end) {
const tag = reader.uint32()
switch (tag >>> 3) {
case 1:
message.chatId = reader.string()
break
case 2:
message.events.push(reader.bytes())
break
case 3:
message.message = ChatMessage.decode(reader, reader.uint32())
break
case 4:
message.emojiReaction = EmojiReaction.decode(reader, reader.uint32())
break
default:
reader.skipType(tag & 7)
break
}
}
return message
},
fromJSON(object: any): MembershipUpdateMessage {
const message = {
...baseMembershipUpdateMessage,
} as MembershipUpdateMessage
message.events = []
if (object.chatId !== undefined && object.chatId !== null) {
message.chatId = String(object.chatId)
} else {
message.chatId = ''
}
if (object.events !== undefined && object.events !== null) {
for (const e of object.events) {
message.events.push(bytesFromBase64(e))
}
}
if (object.message !== undefined && object.message !== null) {
message.message = ChatMessage.fromJSON(object.message)
} else {
message.message = undefined
}
if (object.emojiReaction !== undefined && object.emojiReaction !== null) {
message.emojiReaction = EmojiReaction.fromJSON(object.emojiReaction)
} else {
message.emojiReaction = undefined
}
return message
},
toJSON(message: MembershipUpdateMessage): unknown {
const obj: any = {}
message.chatId !== undefined && (obj.chatId = message.chatId)
if (message.events) {
obj.events = message.events.map(e =>
base64FromBytes(e !== undefined ? e : new Uint8Array())
)
} else {
obj.events = []
}
message.message !== undefined &&
(obj.message = message.message
? ChatMessage.toJSON(message.message)
: undefined)
message.emojiReaction !== undefined &&
(obj.emojiReaction = message.emojiReaction
? EmojiReaction.toJSON(message.emojiReaction)
: undefined)
return obj
},
fromPartial(
object: DeepPartial<MembershipUpdateMessage>
): MembershipUpdateMessage {
const message = {
...baseMembershipUpdateMessage,
} as MembershipUpdateMessage
message.events = []
if (object.chatId !== undefined && object.chatId !== null) {
message.chatId = object.chatId
} else {
message.chatId = ''
}
if (object.events !== undefined && object.events !== null) {
for (const e of object.events) {
message.events.push(e)
}
}
if (object.message !== undefined && object.message !== null) {
message.message = ChatMessage.fromPartial(object.message)
} else {
message.message = undefined
}
if (object.emojiReaction !== undefined && object.emojiReaction !== null) {
message.emojiReaction = EmojiReaction.fromPartial(object.emojiReaction)
} else {
message.emojiReaction = undefined
}
return message
},
}
declare var self: any | undefined
declare var window: any | undefined
declare var global: any | undefined
var globalThis: any = (() => {
if (typeof globalThis !== 'undefined') return globalThis
if (typeof self !== 'undefined') return self
if (typeof window !== 'undefined') return window
if (typeof global !== 'undefined') return global
throw 'Unable to locate global object'
})()
const atob: (b64: string) => string =
globalThis.atob ||
(b64 => globalThis.Buffer.from(b64, 'base64').toString('binary'))
function bytesFromBase64(b64: string): Uint8Array {
const bin = atob(b64)
const arr = new Uint8Array(bin.length)
for (let i = 0; i < bin.length; ++i) {
arr[i] = bin.charCodeAt(i)
}
return arr
}
const btoa: (bin: string) => string =
globalThis.btoa ||
(bin => globalThis.Buffer.from(bin, 'binary').toString('base64'))
function base64FromBytes(arr: Uint8Array): string {
const bin: string[] = []
for (const byte of arr) {
bin.push(String.fromCharCode(byte))
}
return btoa(bin.join(''))
}
type Builtin =
| Date
| Function
| Uint8Array
| string
| number
| boolean
| undefined
export type DeepPartial<T> = T extends Builtin
? T
: T extends Array<infer U>
? Array<DeepPartial<U>>
: T extends ReadonlyArray<infer U>
? ReadonlyArray<DeepPartial<U>>
: T extends {}
? { [K in keyof T]?: DeepPartial<T[K]> }
: Partial<T>
function longToNumber(long: Long): number {
if (long.gt(Number.MAX_SAFE_INTEGER)) {
throw new globalThis.Error('Value is larger than Number.MAX_SAFE_INTEGER')
}
return long.toNumber()
}
if (_m0.util.Long !== Long) {
_m0.util.Long = Long as any
_m0.configure()
}
|
<filename>testsuite/EXP_4/test283.c<gh_stars>10-100
/*
CF3
Copyright (c) 2015 ishiura-lab.
Released under the MIT license.
https://github.com/ishiura-compiler/CF3/MIT-LICENSE.md
*/
#include<stdio.h>
#include<stdint.h>
#include<stdlib.h>
#include"test1.h"
int8_t x1 = 0;
static volatile int64_t x12 = -1LL;
uint16_t x19 = 3U;
int32_t t3 = -1082386;
int32_t x24 = INT32_MIN;
int32_t x36 = -1;
int16_t x44 = INT16_MIN;
int32_t x45 = -1;
uint64_t x50 = UINT64_MAX;
int32_t t13 = 1390066;
static int64_t x63 = INT64_MAX;
volatile int32_t t14 = -13119266;
int8_t x68 = INT8_MIN;
volatile int64_t x72 = 217411LL;
uint64_t t17 = 1396472125352LLU;
uint16_t x79 = 32U;
uint16_t x82 = UINT16_MAX;
static int16_t x83 = INT16_MIN;
uint64_t x88 = 1573991017LLU;
uint64_t t20 = 1543521366094LLU;
static int16_t x96 = INT16_MAX;
static volatile int32_t t22 = 1;
int8_t x109 = -47;
int8_t x112 = INT8_MIN;
static uint16_t x119 = 5797U;
int64_t x120 = 35873991504LL;
uint8_t x123 = 2U;
static int64_t t27 = INT64_MIN;
int16_t x129 = INT16_MIN;
uint64_t x134 = 256283188766795727LLU;
int64_t x136 = INT64_MIN;
static int64_t x139 = -9466LL;
int8_t x150 = 60;
uint8_t x163 = 111U;
static int32_t t36 = -82251736;
volatile int64_t t43 = -187203915114315041LL;
uint32_t x193 = 751U;
static int64_t x197 = INT64_MIN;
int64_t x217 = INT64_MAX;
int32_t x218 = INT32_MIN;
volatile int64_t x226 = INT64_MAX;
int64_t x229 = INT64_MIN;
uint16_t x234 = 624U;
static volatile int16_t x239 = -1;
uint32_t t56 = 7056U;
volatile int64_t x248 = 548LL;
int32_t t58 = -43157;
uint64_t x255 = UINT64_MAX;
static uint64_t x265 = 54138608879051629LLU;
int32_t x266 = INT32_MAX;
static int8_t x275 = -1;
int32_t x280 = 364;
volatile int32_t t63 = 3;
volatile int32_t t64 = INT32_MIN;
int8_t x292 = -2;
static volatile uint64_t t66 = 57720575LLU;
volatile uint32_t x295 = 3U;
volatile int64_t t68 = INT64_MIN;
uint64_t t70 = 2957981834LLU;
static uint32_t x313 = UINT32_MAX;
int16_t x316 = -12;
uint16_t x320 = 6U;
int64_t x323 = INT64_MIN;
uint32_t x325 = 14335U;
volatile uint32_t t75 = 31U;
uint64_t x330 = UINT64_MAX;
volatile uint16_t x332 = UINT16_MAX;
int8_t x336 = -4;
uint32_t x365 = 5949762U;
volatile int64_t x366 = INT64_MIN;
static volatile int64_t t84 = INT64_MIN;
volatile int32_t x386 = -12998173;
static volatile uint64_t x395 = 6664101409822510LLU;
volatile int32_t t87 = -3;
int8_t x413 = INT8_MIN;
int32_t x414 = INT32_MIN;
int8_t x416 = INT8_MIN;
int32_t x417 = -2989180;
static volatile int32_t t92 = 8040265;
uint32_t x422 = 234897344U;
int32_t x433 = INT32_MIN;
int16_t x437 = -1;
int8_t x444 = INT8_MIN;
volatile uint16_t x447 = 2U;
int32_t t99 = 5;
int8_t x450 = INT8_MIN;
int64_t x454 = INT64_MIN;
uint8_t x457 = UINT8_MAX;
int32_t x459 = -1;
int16_t x460 = 59;
int64_t x464 = -1LL;
int32_t x465 = -1;
static volatile int8_t x468 = -1;
int16_t x475 = 290;
volatile int8_t x476 = -1;
uint64_t t105 = UINT64_MAX;
volatile int8_t x482 = INT8_MAX;
int16_t x484 = -1;
int64_t x486 = INT64_MIN;
static int8_t x492 = -2;
uint8_t x497 = 14U;
volatile int64_t x499 = 20LL;
volatile int32_t t111 = -2310804;
static int8_t x502 = INT8_MAX;
static int32_t x513 = INT32_MIN;
uint8_t x520 = UINT8_MAX;
int64_t x521 = -7126LL;
uint16_t x525 = 10U;
int8_t x536 = -2;
int32_t t118 = -15681224;
volatile int8_t x546 = 14;
int64_t x548 = 8894477LL;
int8_t x555 = -1;
static int32_t t124 = -239203;
int16_t x561 = INT16_MIN;
int8_t x573 = INT8_MIN;
volatile int32_t t127 = -470309518;
static volatile int32_t t129 = 1371;
int32_t x586 = -58;
static int32_t x588 = INT32_MIN;
volatile int64_t x589 = INT64_MAX;
static uint8_t x598 = 1U;
uint32_t x603 = 145779358U;
static int16_t x623 = 11782;
int8_t x627 = -1;
volatile int8_t x629 = INT8_MIN;
int32_t x630 = INT32_MIN;
int32_t t139 = -81427;
int64_t x636 = -1LL;
volatile int32_t t140 = 0;
int32_t x641 = -23154;
int64_t t143 = 61484440562793578LL;
volatile int8_t x651 = -1;
volatile int64_t x653 = INT64_MIN;
static volatile int64_t t145 = INT64_MIN;
int32_t x659 = -1;
static int16_t x660 = 490;
uint64_t t146 = UINT64_MAX;
static int32_t x665 = -1;
int64_t x668 = INT64_MAX;
volatile int8_t x672 = -38;
volatile int8_t x673 = -55;
uint32_t x679 = 16143U;
static volatile uint32_t t150 = 21U;
uint16_t x683 = 1247U;
uint8_t x685 = 112U;
int8_t x687 = -1;
volatile int8_t x698 = INT8_MIN;
static int8_t x700 = INT8_MAX;
uint16_t x702 = 247U;
int16_t x708 = -34;
volatile int32_t t156 = INT32_MIN;
int32_t t158 = -76;
int64_t x717 = -3482499739965406404LL;
volatile uint32_t t160 = 0U;
volatile int64_t x730 = INT64_MIN;
volatile uint64_t x731 = 13885058LLU;
volatile int32_t x733 = INT32_MIN;
int32_t x734 = INT32_MAX;
static uint32_t x741 = 36691U;
int64_t x746 = -4529334511652656358LL;
volatile int32_t x747 = INT32_MIN;
int16_t x748 = INT16_MIN;
uint8_t x751 = 1U;
volatile uint32_t x753 = 199U;
int64_t t168 = INT64_MIN;
int8_t x766 = 10;
int8_t x769 = -1;
static uint8_t x771 = UINT8_MAX;
int64_t x781 = -1LL;
int8_t x796 = -4;
volatile int64_t x798 = INT64_MIN;
uint8_t x799 = 7U;
int16_t x804 = INT16_MIN;
volatile int32_t t179 = 654;
int64_t x807 = INT64_MIN;
uint64_t x811 = UINT64_MAX;
volatile int32_t x818 = INT32_MAX;
volatile uint32_t x819 = 1U;
uint32_t x821 = 1084429U;
int16_t x824 = 53;
static int16_t x843 = INT16_MIN;
uint8_t x849 = 56U;
int32_t x854 = INT32_MIN;
int64_t x863 = INT64_MIN;
int16_t x878 = 404;
static int64_t t195 = INT64_MIN;
uint64_t x888 = 2218055070607183881LLU;
int16_t x890 = 1;
volatile int32_t t199 = 5;
void f0(void) {
volatile uint8_t x2 = 13U;
uint64_t x3 = 4056653876099925151LLU;
int32_t x4 = 976397;
int32_t t0 = -104;
t0 = (x1*(x2!=(x3-x4)));
if (t0 != 0) { NG(); } else { ; }
}
void f1(void) {
int64_t x9 = -1LL;
int8_t x10 = -1;
int8_t x11 = 1;
int64_t t1 = 122469562538032LL;
t1 = (x9*(x10!=(x11-x12)));
if (t1 != -1LL) { NG(); } else { ; }
}
void f2(void) {
int32_t x13 = -1;
volatile uint16_t x14 = UINT16_MAX;
int16_t x15 = -1;
int8_t x16 = INT8_MAX;
int32_t t2 = -1293735;
t2 = (x13*(x14!=(x15-x16)));
if (t2 != -1) { NG(); } else { ; }
}
void f3(void) {
int16_t x17 = -1;
static uint64_t x18 = UINT64_MAX;
static uint8_t x20 = 0U;
t3 = (x17*(x18!=(x19-x20)));
if (t3 != -1) { NG(); } else { ; }
}
void f4(void) {
static int16_t x21 = INT16_MIN;
volatile int32_t x22 = INT32_MAX;
int64_t x23 = -9357LL;
static int32_t t4 = 5696755;
t4 = (x21*(x22!=(x23-x24)));
if (t4 != -32768) { NG(); } else { ; }
}
void f5(void) {
int8_t x25 = -4;
volatile int32_t x26 = 57836921;
static int32_t x27 = INT32_MIN;
static int8_t x28 = -1;
volatile int32_t t5 = 89292279;
t5 = (x25*(x26!=(x27-x28)));
if (t5 != -4) { NG(); } else { ; }
}
void f6(void) {
static uint8_t x29 = UINT8_MAX;
int16_t x30 = 22;
uint64_t x31 = 5987501475LLU;
int64_t x32 = -56472LL;
int32_t t6 = -23935207;
t6 = (x29*(x30!=(x31-x32)));
if (t6 != 255) { NG(); } else { ; }
}
void f7(void) {
int8_t x33 = 15;
volatile int32_t x34 = INT32_MIN;
static volatile int32_t x35 = -1;
int32_t t7 = -2669;
t7 = (x33*(x34!=(x35-x36)));
if (t7 != 15) { NG(); } else { ; }
}
void f8(void) {
static int32_t x37 = -211096;
int32_t x38 = INT32_MIN;
volatile int64_t x39 = INT64_MIN;
int32_t x40 = INT32_MIN;
static int32_t t8 = 8;
t8 = (x37*(x38!=(x39-x40)));
if (t8 != -211096) { NG(); } else { ; }
}
void f9(void) {
int32_t x41 = 29251;
static volatile int64_t x42 = INT64_MAX;
int64_t x43 = 384442370224LL;
int32_t t9 = -2;
t9 = (x41*(x42!=(x43-x44)));
if (t9 != 29251) { NG(); } else { ; }
}
void f10(void) {
int8_t x46 = -1;
volatile int64_t x47 = 481827LL;
int8_t x48 = INT8_MIN;
volatile int32_t t10 = 467;
t10 = (x45*(x46!=(x47-x48)));
if (t10 != -1) { NG(); } else { ; }
}
void f11(void) {
uint16_t x49 = 57U;
static uint32_t x51 = 26U;
uint16_t x52 = 19483U;
static int32_t t11 = 139469506;
t11 = (x49*(x50!=(x51-x52)));
if (t11 != 57) { NG(); } else { ; }
}
void f12(void) {
int32_t x53 = -1;
volatile uint8_t x54 = 0U;
int8_t x55 = -3;
static uint16_t x56 = 0U;
int32_t t12 = 210120438;
t12 = (x53*(x54!=(x55-x56)));
if (t12 != -1) { NG(); } else { ; }
}
void f13(void) {
volatile int8_t x57 = INT8_MIN;
volatile int32_t x58 = INT32_MAX;
static int16_t x59 = INT16_MAX;
uint16_t x60 = 1U;
t13 = (x57*(x58!=(x59-x60)));
if (t13 != -128) { NG(); } else { ; }
}
void f14(void) {
int8_t x61 = -37;
static int64_t x62 = INT64_MIN;
uint16_t x64 = 11U;
t14 = (x61*(x62!=(x63-x64)));
if (t14 != -37) { NG(); } else { ; }
}
void f15(void) {
uint32_t x65 = UINT32_MAX;
volatile int64_t x66 = INT64_MIN;
int16_t x67 = -2;
volatile uint32_t t15 = UINT32_MAX;
t15 = (x65*(x66!=(x67-x68)));
if (t15 != UINT32_MAX) { NG(); } else { ; }
}
void f16(void) {
int64_t x69 = -1LL;
int64_t x70 = INT64_MIN;
int8_t x71 = -1;
volatile int64_t t16 = 0LL;
t16 = (x69*(x70!=(x71-x72)));
if (t16 != -1LL) { NG(); } else { ; }
}
void f17(void) {
uint64_t x73 = 7493935107LLU;
static uint64_t x74 = UINT64_MAX;
volatile uint16_t x75 = UINT16_MAX;
uint64_t x76 = UINT64_MAX;
t17 = (x73*(x74!=(x75-x76)));
if (t17 != 7493935107LLU) { NG(); } else { ; }
}
void f18(void) {
static int32_t x77 = INT32_MAX;
uint32_t x78 = 3U;
int16_t x80 = INT16_MIN;
int32_t t18 = INT32_MAX;
t18 = (x77*(x78!=(x79-x80)));
if (t18 != INT32_MAX) { NG(); } else { ; }
}
void f19(void) {
int64_t x81 = INT64_MIN;
uint64_t x84 = UINT64_MAX;
volatile int64_t t19 = INT64_MIN;
t19 = (x81*(x82!=(x83-x84)));
if (t19 != INT64_MIN) { NG(); } else { ; }
}
void f20(void) {
uint64_t x85 = 162693048944967LLU;
int32_t x86 = -1;
int16_t x87 = INT16_MAX;
t20 = (x85*(x86!=(x87-x88)));
if (t20 != 162693048944967LLU) { NG(); } else { ; }
}
void f21(void) {
int64_t x89 = INT64_MAX;
uint16_t x90 = 6748U;
static uint32_t x91 = 134678U;
int32_t x92 = INT32_MIN;
int64_t t21 = INT64_MAX;
t21 = (x89*(x90!=(x91-x92)));
if (t21 != INT64_MAX) { NG(); } else { ; }
}
void f22(void) {
volatile uint8_t x93 = 32U;
int8_t x94 = -1;
uint8_t x95 = 18U;
t22 = (x93*(x94!=(x95-x96)));
if (t22 != 32) { NG(); } else { ; }
}
void f23(void) {
static int64_t x105 = INT64_MIN;
uint64_t x106 = UINT64_MAX;
uint8_t x107 = 23U;
int8_t x108 = INT8_MIN;
volatile int64_t t23 = INT64_MIN;
t23 = (x105*(x106!=(x107-x108)));
if (t23 != INT64_MIN) { NG(); } else { ; }
}
void f24(void) {
uint8_t x110 = UINT8_MAX;
volatile int16_t x111 = INT16_MIN;
static int32_t t24 = -11335;
t24 = (x109*(x110!=(x111-x112)));
if (t24 != -47) { NG(); } else { ; }
}
void f25(void) {
volatile int64_t x113 = -17752LL;
int32_t x114 = -781114959;
int16_t x115 = -208;
static int64_t x116 = -1LL;
static int64_t t25 = -1646208738200LL;
t25 = (x113*(x114!=(x115-x116)));
if (t25 != -17752LL) { NG(); } else { ; }
}
void f26(void) {
volatile int16_t x117 = INT16_MIN;
uint16_t x118 = 24268U;
volatile int32_t t26 = -3413;
t26 = (x117*(x118!=(x119-x120)));
if (t26 != -32768) { NG(); } else { ; }
}
void f27(void) {
static int64_t x121 = INT64_MIN;
uint32_t x122 = UINT32_MAX;
int64_t x124 = 7180LL;
t27 = (x121*(x122!=(x123-x124)));
if (t27 != INT64_MIN) { NG(); } else { ; }
}
void f28(void) {
static int32_t x125 = INT32_MIN;
int64_t x126 = -157LL;
volatile int64_t x127 = -50116822608543967LL;
int8_t x128 = INT8_MIN;
static int32_t t28 = INT32_MIN;
t28 = (x125*(x126!=(x127-x128)));
if (t28 != INT32_MIN) { NG(); } else { ; }
}
void f29(void) {
int8_t x130 = INT8_MIN;
static int16_t x131 = -1;
static uint16_t x132 = 1U;
volatile int32_t t29 = 25;
t29 = (x129*(x130!=(x131-x132)));
if (t29 != -32768) { NG(); } else { ; }
}
void f30(void) {
int32_t x133 = INT32_MAX;
uint64_t x135 = 21656LLU;
int32_t t30 = INT32_MAX;
t30 = (x133*(x134!=(x135-x136)));
if (t30 != INT32_MAX) { NG(); } else { ; }
}
void f31(void) {
uint16_t x137 = 66U;
volatile uint16_t x138 = 61U;
int8_t x140 = -10;
int32_t t31 = 210403727;
t31 = (x137*(x138!=(x139-x140)));
if (t31 != 66) { NG(); } else { ; }
}
void f32(void) {
int8_t x145 = -23;
volatile uint64_t x146 = UINT64_MAX;
int64_t x147 = -2044054109954998LL;
volatile int8_t x148 = INT8_MAX;
static volatile int32_t t32 = 6849761;
t32 = (x145*(x146!=(x147-x148)));
if (t32 != -23) { NG(); } else { ; }
}
void f33(void) {
static int16_t x149 = -43;
static volatile int16_t x151 = INT16_MAX;
int8_t x152 = -1;
static int32_t t33 = -2;
t33 = (x149*(x150!=(x151-x152)));
if (t33 != -43) { NG(); } else { ; }
}
void f34(void) {
volatile int64_t x153 = INT64_MIN;
uint32_t x154 = 11008251U;
int64_t x155 = INT64_MIN;
volatile int16_t x156 = -9;
volatile int64_t t34 = INT64_MIN;
t34 = (x153*(x154!=(x155-x156)));
if (t34 != INT64_MIN) { NG(); } else { ; }
}
void f35(void) {
int8_t x157 = INT8_MIN;
int64_t x158 = -844282092831LL;
int64_t x159 = INT64_MIN;
static int8_t x160 = -1;
int32_t t35 = -527667412;
t35 = (x157*(x158!=(x159-x160)));
if (t35 != -128) { NG(); } else { ; }
}
void f36(void) {
uint16_t x161 = 137U;
int8_t x162 = INT8_MIN;
int16_t x164 = -1;
t36 = (x161*(x162!=(x163-x164)));
if (t36 != 137) { NG(); } else { ; }
}
void f37(void) {
uint8_t x165 = 2U;
uint32_t x166 = 38U;
uint64_t x167 = UINT64_MAX;
uint32_t x168 = 120177U;
static int32_t t37 = -8138;
t37 = (x165*(x166!=(x167-x168)));
if (t37 != 2) { NG(); } else { ; }
}
void f38(void) {
int64_t x169 = 1816734LL;
uint64_t x170 = 262536088LLU;
uint64_t x171 = 54733899LLU;
static int16_t x172 = INT16_MAX;
volatile int64_t t38 = -101474463024LL;
t38 = (x169*(x170!=(x171-x172)));
if (t38 != 1816734LL) { NG(); } else { ; }
}
void f39(void) {
uint8_t x173 = UINT8_MAX;
static uint32_t x174 = UINT32_MAX;
int8_t x175 = INT8_MAX;
static int32_t x176 = -90;
int32_t t39 = -5701638;
t39 = (x173*(x174!=(x175-x176)));
if (t39 != 255) { NG(); } else { ; }
}
void f40(void) {
volatile uint8_t x177 = 3U;
int64_t x178 = -370LL;
volatile uint64_t x179 = UINT64_MAX;
uint8_t x180 = UINT8_MAX;
int32_t t40 = 19590;
t40 = (x177*(x178!=(x179-x180)));
if (t40 != 3) { NG(); } else { ; }
}
void f41(void) {
uint64_t x181 = 307311LLU;
int8_t x182 = INT8_MIN;
int8_t x183 = 0;
uint8_t x184 = 57U;
static volatile uint64_t t41 = 148284926170LLU;
t41 = (x181*(x182!=(x183-x184)));
if (t41 != 307311LLU) { NG(); } else { ; }
}
void f42(void) {
int64_t x185 = 4259521LL;
volatile int32_t x186 = -1;
int8_t x187 = INT8_MAX;
int64_t x188 = -103622280930LL;
volatile int64_t t42 = -61LL;
t42 = (x185*(x186!=(x187-x188)));
if (t42 != 4259521LL) { NG(); } else { ; }
}
void f43(void) {
int64_t x189 = -1LL;
uint32_t x190 = 401040205U;
int64_t x191 = -1340034391366558296LL;
static int16_t x192 = INT16_MIN;
t43 = (x189*(x190!=(x191-x192)));
if (t43 != -1LL) { NG(); } else { ; }
}
void f44(void) {
static uint64_t x194 = 35589081654259LLU;
uint32_t x195 = 104U;
int32_t x196 = -1;
volatile uint32_t t44 = 24013U;
t44 = (x193*(x194!=(x195-x196)));
if (t44 != 751U) { NG(); } else { ; }
}
void f45(void) {
uint16_t x198 = 39U;
int64_t x199 = 282100LL;
static int8_t x200 = INT8_MIN;
volatile int64_t t45 = INT64_MIN;
t45 = (x197*(x198!=(x199-x200)));
if (t45 != INT64_MIN) { NG(); } else { ; }
}
void f46(void) {
uint32_t x201 = 116U;
uint32_t x202 = 63848671U;
static int8_t x203 = INT8_MIN;
uint8_t x204 = 9U;
uint32_t t46 = 649U;
t46 = (x201*(x202!=(x203-x204)));
if (t46 != 116U) { NG(); } else { ; }
}
void f47(void) {
int64_t x205 = INT64_MIN;
int32_t x206 = -743395;
static int8_t x207 = -1;
uint32_t x208 = 1863851U;
static int64_t t47 = INT64_MIN;
t47 = (x205*(x206!=(x207-x208)));
if (t47 != INT64_MIN) { NG(); } else { ; }
}
void f48(void) {
uint32_t x209 = 138U;
volatile int16_t x210 = -1;
volatile uint32_t x211 = UINT32_MAX;
volatile int32_t x212 = -25649499;
volatile uint32_t t48 = 253173823U;
t48 = (x209*(x210!=(x211-x212)));
if (t48 != 138U) { NG(); } else { ; }
}
void f49(void) {
int8_t x213 = 23;
int64_t x214 = -8184787215575LL;
static uint16_t x215 = 96U;
int8_t x216 = INT8_MAX;
volatile int32_t t49 = -2519;
t49 = (x213*(x214!=(x215-x216)));
if (t49 != 23) { NG(); } else { ; }
}
void f50(void) {
volatile int32_t x219 = INT32_MIN;
int8_t x220 = INT8_MIN;
static int64_t t50 = INT64_MAX;
t50 = (x217*(x218!=(x219-x220)));
if (t50 != INT64_MAX) { NG(); } else { ; }
}
void f51(void) {
int16_t x221 = -7082;
int8_t x222 = INT8_MAX;
int32_t x223 = -6;
int32_t x224 = INT32_MIN;
static volatile int32_t t51 = 76;
t51 = (x221*(x222!=(x223-x224)));
if (t51 != -7082) { NG(); } else { ; }
}
void f52(void) {
static volatile int8_t x225 = 5;
int16_t x227 = -384;
uint8_t x228 = 0U;
static int32_t t52 = 1;
t52 = (x225*(x226!=(x227-x228)));
if (t52 != 5) { NG(); } else { ; }
}
void f53(void) {
int16_t x230 = -41;
static volatile uint32_t x231 = UINT32_MAX;
int16_t x232 = -2800;
int64_t t53 = INT64_MIN;
t53 = (x229*(x230!=(x231-x232)));
if (t53 != INT64_MIN) { NG(); } else { ; }
}
void f54(void) {
volatile int64_t x233 = -1LL;
volatile int8_t x235 = INT8_MAX;
volatile int64_t x236 = -1LL;
static volatile int64_t t54 = -54511883818184LL;
t54 = (x233*(x234!=(x235-x236)));
if (t54 != -1LL) { NG(); } else { ; }
}
void f55(void) {
uint64_t x237 = UINT64_MAX;
uint32_t x238 = 29271401U;
static int32_t x240 = INT32_MIN;
uint64_t t55 = UINT64_MAX;
t55 = (x237*(x238!=(x239-x240)));
if (t55 != UINT64_MAX) { NG(); } else { ; }
}
void f56(void) {
volatile uint32_t x241 = 0U;
volatile int16_t x242 = -1;
int64_t x243 = INT64_MIN;
int8_t x244 = -1;
t56 = (x241*(x242!=(x243-x244)));
if (t56 != 0U) { NG(); } else { ; }
}
void f57(void) {
int32_t x245 = INT32_MAX;
int32_t x246 = INT32_MAX;
uint32_t x247 = 2056U;
int32_t t57 = INT32_MAX;
t57 = (x245*(x246!=(x247-x248)));
if (t57 != INT32_MAX) { NG(); } else { ; }
}
void f58(void) {
static uint8_t x249 = UINT8_MAX;
int8_t x250 = -1;
int64_t x251 = INT64_MIN;
uint64_t x252 = 87412488575085388LLU;
t58 = (x249*(x250!=(x251-x252)));
if (t58 != 255) { NG(); } else { ; }
}
void f59(void) {
int16_t x253 = INT16_MIN;
uint32_t x254 = UINT32_MAX;
volatile int32_t x256 = 19;
int32_t t59 = 113326;
t59 = (x253*(x254!=(x255-x256)));
if (t59 != -32768) { NG(); } else { ; }
}
void f60(void) {
uint64_t x267 = 242613838239244LLU;
static int64_t x268 = -1LL;
uint64_t t60 = 15LLU;
t60 = (x265*(x266!=(x267-x268)));
if (t60 != 54138608879051629LLU) { NG(); } else { ; }
}
void f61(void) {
uint64_t x269 = UINT64_MAX;
static volatile int16_t x270 = -1;
static int16_t x271 = -1;
int16_t x272 = INT16_MIN;
volatile uint64_t t61 = UINT64_MAX;
t61 = (x269*(x270!=(x271-x272)));
if (t61 != UINT64_MAX) { NG(); } else { ; }
}
void f62(void) {
int64_t x273 = -1LL;
int64_t x274 = INT64_MIN;
int64_t x276 = -38643014466052LL;
int64_t t62 = 4367863948273696723LL;
t62 = (x273*(x274!=(x275-x276)));
if (t62 != -1LL) { NG(); } else { ; }
}
void f63(void) {
uint16_t x277 = 347U;
static int64_t x278 = INT64_MAX;
int16_t x279 = INT16_MIN;
t63 = (x277*(x278!=(x279-x280)));
if (t63 != 347) { NG(); } else { ; }
}
void f64(void) {
static int32_t x281 = INT32_MIN;
uint8_t x282 = UINT8_MAX;
static int8_t x283 = INT8_MIN;
uint16_t x284 = 0U;
t64 = (x281*(x282!=(x283-x284)));
if (t64 != INT32_MIN) { NG(); } else { ; }
}
void f65(void) {
int16_t x285 = 68;
int16_t x286 = -625;
static volatile int64_t x287 = INT64_MIN;
int8_t x288 = INT8_MIN;
static int32_t t65 = 562040;
t65 = (x285*(x286!=(x287-x288)));
if (t65 != 68) { NG(); } else { ; }
}
void f66(void) {
uint64_t x289 = 2704399570433695800LLU;
uint32_t x290 = UINT32_MAX;
int32_t x291 = INT32_MIN;
t66 = (x289*(x290!=(x291-x292)));
if (t66 != 2704399570433695800LLU) { NG(); } else { ; }
}
void f67(void) {
volatile uint32_t x293 = UINT32_MAX;
static uint8_t x294 = 9U;
int16_t x296 = INT16_MIN;
uint32_t t67 = UINT32_MAX;
t67 = (x293*(x294!=(x295-x296)));
if (t67 != UINT32_MAX) { NG(); } else { ; }
}
void f68(void) {
int64_t x297 = INT64_MIN;
uint16_t x298 = 29U;
volatile int8_t x299 = 1;
volatile int16_t x300 = INT16_MIN;
t68 = (x297*(x298!=(x299-x300)));
if (t68 != INT64_MIN) { NG(); } else { ; }
}
void f69(void) {
int32_t x301 = INT32_MIN;
uint16_t x302 = 144U;
int32_t x303 = INT32_MAX;
static volatile int32_t x304 = 5116;
int32_t t69 = INT32_MIN;
t69 = (x301*(x302!=(x303-x304)));
if (t69 != INT32_MIN) { NG(); } else { ; }
}
void f70(void) {
volatile uint64_t x305 = 26515281LLU;
uint64_t x306 = UINT64_MAX;
int32_t x307 = 6972318;
uint32_t x308 = 0U;
t70 = (x305*(x306!=(x307-x308)));
if (t70 != 26515281LLU) { NG(); } else { ; }
}
void f71(void) {
int64_t x309 = -1LL;
uint64_t x310 = 446345482860399883LLU;
uint32_t x311 = UINT32_MAX;
uint64_t x312 = 9147590556423LLU;
static int64_t t71 = -297587LL;
t71 = (x309*(x310!=(x311-x312)));
if (t71 != -1LL) { NG(); } else { ; }
}
void f72(void) {
volatile uint32_t x314 = UINT32_MAX;
uint32_t x315 = 5407U;
uint32_t t72 = UINT32_MAX;
t72 = (x313*(x314!=(x315-x316)));
if (t72 != UINT32_MAX) { NG(); } else { ; }
}
void f73(void) {
int64_t x317 = INT64_MIN;
int64_t x318 = INT64_MIN;
static int64_t x319 = -7330505560350444LL;
int64_t t73 = INT64_MIN;
t73 = (x317*(x318!=(x319-x320)));
if (t73 != INT64_MIN) { NG(); } else { ; }
}
void f74(void) {
static uint8_t x321 = UINT8_MAX;
int32_t x322 = INT32_MIN;
int32_t x324 = -3890911;
int32_t t74 = 5479;
t74 = (x321*(x322!=(x323-x324)));
if (t74 != 255) { NG(); } else { ; }
}
void f75(void) {
int32_t x326 = -45556;
static uint32_t x327 = 2U;
int64_t x328 = -1LL;
t75 = (x325*(x326!=(x327-x328)));
if (t75 != 14335U) { NG(); } else { ; }
}
void f76(void) {
static int8_t x329 = -1;
int16_t x331 = -1;
static int32_t t76 = -5;
t76 = (x329*(x330!=(x331-x332)));
if (t76 != -1) { NG(); } else { ; }
}
void f77(void) {
static uint16_t x333 = 4U;
int64_t x334 = -6391450977747LL;
static uint8_t x335 = UINT8_MAX;
static int32_t t77 = 3237398;
t77 = (x333*(x334!=(x335-x336)));
if (t77 != 4) { NG(); } else { ; }
}
void f78(void) {
static volatile int16_t x337 = -1;
uint8_t x338 = 52U;
volatile int16_t x339 = INT16_MIN;
volatile uint64_t x340 = 64827LLU;
volatile int32_t t78 = -7766777;
t78 = (x337*(x338!=(x339-x340)));
if (t78 != -1) { NG(); } else { ; }
}
void f79(void) {
uint16_t x345 = 37U;
static int32_t x346 = -1;
int8_t x347 = INT8_MIN;
int16_t x348 = -3108;
volatile int32_t t79 = 35;
t79 = (x345*(x346!=(x347-x348)));
if (t79 != 37) { NG(); } else { ; }
}
void f80(void) {
uint64_t x349 = 199780572990590985LLU;
static volatile uint32_t x350 = UINT32_MAX;
int8_t x351 = -1;
volatile int16_t x352 = -1;
volatile uint64_t t80 = 1903073150LLU;
t80 = (x349*(x350!=(x351-x352)));
if (t80 != 199780572990590985LLU) { NG(); } else { ; }
}
void f81(void) {
uint64_t x361 = 2190084717024340LLU;
uint16_t x362 = 199U;
static int16_t x363 = INT16_MIN;
uint64_t x364 = UINT64_MAX;
volatile uint64_t t81 = 2LLU;
t81 = (x361*(x362!=(x363-x364)));
if (t81 != 2190084717024340LLU) { NG(); } else { ; }
}
void f82(void) {
static uint8_t x367 = UINT8_MAX;
int16_t x368 = 2;
volatile uint32_t t82 = 4839365U;
t82 = (x365*(x366!=(x367-x368)));
if (t82 != 5949762U) { NG(); } else { ; }
}
void f83(void) {
static uint32_t x369 = 8886245U;
volatile int16_t x370 = INT16_MIN;
int32_t x371 = INT32_MIN;
static uint32_t x372 = 0U;
volatile uint32_t t83 = 94501U;
t83 = (x369*(x370!=(x371-x372)));
if (t83 != 8886245U) { NG(); } else { ; }
}
void f84(void) {
static int64_t x373 = INT64_MIN;
int8_t x374 = INT8_MAX;
int32_t x375 = INT32_MAX;
uint32_t x376 = 111U;
t84 = (x373*(x374!=(x375-x376)));
if (t84 != INT64_MIN) { NG(); } else { ; }
}
void f85(void) {
static int64_t x381 = 1014006061LL;
int16_t x382 = -1;
int8_t x383 = INT8_MIN;
int8_t x384 = INT8_MIN;
int64_t t85 = -889201875LL;
t85 = (x381*(x382!=(x383-x384)));
if (t85 != 1014006061LL) { NG(); } else { ; }
}
void f86(void) {
int8_t x385 = INT8_MAX;
int8_t x387 = -2;
int8_t x388 = INT8_MIN;
volatile int32_t t86 = 3;
t86 = (x385*(x386!=(x387-x388)));
if (t86 != 127) { NG(); } else { ; }
}
void f87(void) {
int32_t x393 = 80184;
int8_t x394 = -1;
int32_t x396 = 23591534;
t87 = (x393*(x394!=(x395-x396)));
if (t87 != 80184) { NG(); } else { ; }
}
void f88(void) {
static int16_t x397 = INT16_MIN;
volatile uint16_t x398 = 1U;
uint32_t x399 = 33052587U;
static uint64_t x400 = 11221397182LLU;
static int32_t t88 = 5751;
t88 = (x397*(x398!=(x399-x400)));
if (t88 != -32768) { NG(); } else { ; }
}
void f89(void) {
static int16_t x405 = 6;
int32_t x406 = 233882;
int8_t x407 = 0;
int8_t x408 = INT8_MIN;
int32_t t89 = 58;
t89 = (x405*(x406!=(x407-x408)));
if (t89 != 6) { NG(); } else { ; }
}
void f90(void) {
int64_t x409 = -571663985LL;
int16_t x410 = INT16_MAX;
int64_t x411 = 29822331260LL;
int16_t x412 = INT16_MAX;
volatile int64_t t90 = -4181669LL;
t90 = (x409*(x410!=(x411-x412)));
if (t90 != -571663985LL) { NG(); } else { ; }
}
void f91(void) {
uint64_t x415 = UINT64_MAX;
int32_t t91 = 7955;
t91 = (x413*(x414!=(x415-x416)));
if (t91 != -128) { NG(); } else { ; }
}
void f92(void) {
uint16_t x418 = 413U;
uint16_t x419 = 1679U;
uint64_t x420 = 8979508LLU;
t92 = (x417*(x418!=(x419-x420)));
if (t92 != -2989180) { NG(); } else { ; }
}
void f93(void) {
int32_t x421 = -1;
uint32_t x423 = 244U;
int32_t x424 = 841;
volatile int32_t t93 = -3244374;
t93 = (x421*(x422!=(x423-x424)));
if (t93 != -1) { NG(); } else { ; }
}
void f94(void) {
int32_t x425 = -1;
uint32_t x426 = UINT32_MAX;
int64_t x427 = -1LL;
uint64_t x428 = UINT64_MAX;
static int32_t t94 = 3281549;
t94 = (x425*(x426!=(x427-x428)));
if (t94 != -1) { NG(); } else { ; }
}
void f95(void) {
int16_t x429 = INT16_MIN;
static int16_t x430 = -1;
static uint32_t x431 = 12397701U;
int8_t x432 = INT8_MIN;
static volatile int32_t t95 = 50903816;
t95 = (x429*(x430!=(x431-x432)));
if (t95 != -32768) { NG(); } else { ; }
}
void f96(void) {
static int64_t x434 = INT64_MIN;
static uint16_t x435 = 145U;
int8_t x436 = 5;
static int32_t t96 = INT32_MIN;
t96 = (x433*(x434!=(x435-x436)));
if (t96 != INT32_MIN) { NG(); } else { ; }
}
void f97(void) {
uint32_t x438 = UINT32_MAX;
static uint32_t x439 = UINT32_MAX;
volatile uint8_t x440 = 57U;
volatile int32_t t97 = -956841761;
t97 = (x437*(x438!=(x439-x440)));
if (t97 != -1) { NG(); } else { ; }
}
void f98(void) {
int16_t x441 = -1;
uint16_t x442 = 4107U;
static int16_t x443 = 12;
volatile int32_t t98 = -239221;
t98 = (x441*(x442!=(x443-x444)));
if (t98 != -1) { NG(); } else { ; }
}
void f99(void) {
int8_t x445 = -1;
int8_t x446 = -1;
int16_t x448 = -1;
t99 = (x445*(x446!=(x447-x448)));
if (t99 != -1) { NG(); } else { ; }
}
void f100(void) {
int64_t x449 = INT64_MIN;
int64_t x451 = 7496026558747439LL;
volatile int8_t x452 = 7;
static volatile int64_t t100 = INT64_MIN;
t100 = (x449*(x450!=(x451-x452)));
if (t100 != INT64_MIN) { NG(); } else { ; }
}
void f101(void) {
int8_t x453 = -1;
uint16_t x455 = UINT16_MAX;
uint64_t x456 = UINT64_MAX;
volatile int32_t t101 = 51678;
t101 = (x453*(x454!=(x455-x456)));
if (t101 != -1) { NG(); } else { ; }
}
void f102(void) {
static int32_t x458 = INT32_MIN;
int32_t t102 = 395342963;
t102 = (x457*(x458!=(x459-x460)));
if (t102 != 255) { NG(); } else { ; }
}
void f103(void) {
int32_t x461 = -11;
static uint8_t x462 = 17U;
uint8_t x463 = 24U;
volatile int32_t t103 = 14;
t103 = (x461*(x462!=(x463-x464)));
if (t103 != -11) { NG(); } else { ; }
}
void f104(void) {
volatile int8_t x466 = -1;
static int8_t x467 = 45;
static int32_t t104 = 5180329;
t104 = (x465*(x466!=(x467-x468)));
if (t104 != -1) { NG(); } else { ; }
}
void f105(void) {
uint64_t x473 = UINT64_MAX;
int64_t x474 = INT64_MIN;
t105 = (x473*(x474!=(x475-x476)));
if (t105 != UINT64_MAX) { NG(); } else { ; }
}
void f106(void) {
int32_t x477 = INT32_MIN;
int32_t x478 = INT32_MIN;
int32_t x479 = -1;
int8_t x480 = INT8_MAX;
int32_t t106 = INT32_MIN;
t106 = (x477*(x478!=(x479-x480)));
if (t106 != INT32_MIN) { NG(); } else { ; }
}
void f107(void) {
uint64_t x481 = 12762974LLU;
int16_t x483 = INT16_MIN;
uint64_t t107 = 63099LLU;
t107 = (x481*(x482!=(x483-x484)));
if (t107 != 12762974LLU) { NG(); } else { ; }
}
void f108(void) {
int32_t x485 = -455013352;
int16_t x487 = 4949;
static int8_t x488 = -3;
volatile int32_t t108 = -1047740;
t108 = (x485*(x486!=(x487-x488)));
if (t108 != -455013352) { NG(); } else { ; }
}
void f109(void) {
uint8_t x489 = UINT8_MAX;
int64_t x490 = 4689092352285LL;
volatile int8_t x491 = 1;
volatile int32_t t109 = 0;
t109 = (x489*(x490!=(x491-x492)));
if (t109 != 255) { NG(); } else { ; }
}
void f110(void) {
uint32_t x493 = UINT32_MAX;
int8_t x494 = INT8_MAX;
int16_t x495 = INT16_MIN;
static int16_t x496 = -1;
uint32_t t110 = UINT32_MAX;
t110 = (x493*(x494!=(x495-x496)));
if (t110 != UINT32_MAX) { NG(); } else { ; }
}
void f111(void) {
volatile uint32_t x498 = 323137U;
int64_t x500 = INT64_MAX;
t111 = (x497*(x498!=(x499-x500)));
if (t111 != 14) { NG(); } else { ; }
}
void f112(void) {
uint8_t x501 = UINT8_MAX;
int64_t x503 = -1LL;
volatile uint32_t x504 = 8U;
int32_t t112 = -451;
t112 = (x501*(x502!=(x503-x504)));
if (t112 != 255) { NG(); } else { ; }
}
void f113(void) {
int8_t x509 = INT8_MAX;
int32_t x510 = INT32_MIN;
int32_t x511 = INT32_MIN;
static int32_t x512 = INT32_MIN;
volatile int32_t t113 = 2144944;
t113 = (x509*(x510!=(x511-x512)));
if (t113 != 127) { NG(); } else { ; }
}
void f114(void) {
static uint32_t x514 = UINT32_MAX;
volatile int64_t x515 = 32563105553283218LL;
int32_t x516 = 7370028;
int32_t t114 = INT32_MIN;
t114 = (x513*(x514!=(x515-x516)));
if (t114 != INT32_MIN) { NG(); } else { ; }
}
void f115(void) {
int16_t x517 = INT16_MIN;
int64_t x518 = -188LL;
static int16_t x519 = 2;
volatile int32_t t115 = 91936;
t115 = (x517*(x518!=(x519-x520)));
if (t115 != -32768) { NG(); } else { ; }
}
void f116(void) {
static int16_t x522 = -1;
static int8_t x523 = -1;
int64_t x524 = INT64_MIN;
static int64_t t116 = 3369141LL;
t116 = (x521*(x522!=(x523-x524)));
if (t116 != -7126LL) { NG(); } else { ; }
}
void f117(void) {
static uint32_t x526 = 736U;
uint64_t x527 = 0LLU;
static volatile uint32_t x528 = 70749853U;
static volatile int32_t t117 = 1;
t117 = (x525*(x526!=(x527-x528)));
if (t117 != 10) { NG(); } else { ; }
}
void f118(void) {
volatile int16_t x533 = INT16_MAX;
int64_t x534 = INT64_MIN;
uint32_t x535 = 28U;
t118 = (x533*(x534!=(x535-x536)));
if (t118 != 32767) { NG(); } else { ; }
}
void f119(void) {
uint16_t x537 = 1U;
int8_t x538 = 0;
uint16_t x539 = 290U;
int32_t x540 = -1;
int32_t t119 = 0;
t119 = (x537*(x538!=(x539-x540)));
if (t119 != 1) { NG(); } else { ; }
}
void f120(void) {
static volatile int64_t x541 = INT64_MIN;
uint32_t x542 = UINT32_MAX;
int8_t x543 = INT8_MAX;
int8_t x544 = -1;
int64_t t120 = INT64_MIN;
t120 = (x541*(x542!=(x543-x544)));
if (t120 != INT64_MIN) { NG(); } else { ; }
}
void f121(void) {
static int32_t x545 = 9035287;
uint32_t x547 = 42U;
int32_t t121 = 195366621;
t121 = (x545*(x546!=(x547-x548)));
if (t121 != 9035287) { NG(); } else { ; }
}
void f122(void) {
int64_t x549 = -132269LL;
int8_t x550 = INT8_MAX;
uint8_t x551 = 12U;
uint8_t x552 = 0U;
int64_t t122 = -1568081341960LL;
t122 = (x549*(x550!=(x551-x552)));
if (t122 != -132269LL) { NG(); } else { ; }
}
void f123(void) {
int8_t x553 = -1;
uint64_t x554 = UINT64_MAX;
int16_t x556 = -1;
int32_t t123 = -1053987362;
t123 = (x553*(x554!=(x555-x556)));
if (t123 != -1) { NG(); } else { ; }
}
void f124(void) {
uint8_t x557 = 56U;
uint8_t x558 = UINT8_MAX;
static volatile uint8_t x559 = UINT8_MAX;
int8_t x560 = -1;
t124 = (x557*(x558!=(x559-x560)));
if (t124 != 56) { NG(); } else { ; }
}
void f125(void) {
volatile uint16_t x562 = 222U;
static volatile uint8_t x563 = UINT8_MAX;
static uint32_t x564 = 251U;
int32_t t125 = 198600;
t125 = (x561*(x562!=(x563-x564)));
if (t125 != -32768) { NG(); } else { ; }
}
void f126(void) {
uint64_t x565 = UINT64_MAX;
int16_t x566 = -125;
int16_t x567 = INT16_MIN;
int8_t x568 = -1;
uint64_t t126 = UINT64_MAX;
t126 = (x565*(x566!=(x567-x568)));
if (t126 != UINT64_MAX) { NG(); } else { ; }
}
void f127(void) {
static int64_t x574 = INT64_MIN;
uint8_t x575 = UINT8_MAX;
static uint32_t x576 = 348331U;
t127 = (x573*(x574!=(x575-x576)));
if (t127 != -128) { NG(); } else { ; }
}
void f128(void) {
int32_t x577 = -1;
volatile int32_t x578 = -179;
int16_t x579 = INT16_MAX;
int8_t x580 = -8;
int32_t t128 = -78072;
t128 = (x577*(x578!=(x579-x580)));
if (t128 != -1) { NG(); } else { ; }
}
void f129(void) {
volatile int16_t x581 = INT16_MAX;
uint32_t x582 = 108692U;
int32_t x583 = INT32_MAX;
static volatile uint16_t x584 = 1867U;
t129 = (x581*(x582!=(x583-x584)));
if (t129 != 32767) { NG(); } else { ; }
}
void f130(void) {
uint64_t x585 = 22810054111599LLU;
int32_t x587 = INT32_MIN;
uint64_t t130 = 1707365272739LLU;
t130 = (x585*(x586!=(x587-x588)));
if (t130 != 22810054111599LLU) { NG(); } else { ; }
}
void f131(void) {
int8_t x590 = INT8_MIN;
uint64_t x591 = 4319678280060LLU;
uint16_t x592 = UINT16_MAX;
volatile int64_t t131 = INT64_MAX;
t131 = (x589*(x590!=(x591-x592)));
if (t131 != INT64_MAX) { NG(); } else { ; }
}
void f132(void) {
int64_t x597 = 52515294291231132LL;
int32_t x599 = INT32_MIN;
int16_t x600 = INT16_MIN;
static int64_t t132 = 50860924LL;
t132 = (x597*(x598!=(x599-x600)));
if (t132 != 52515294291231132LL) { NG(); } else { ; }
}
void f133(void) {
volatile int64_t x601 = INT64_MIN;
int16_t x602 = INT16_MIN;
int32_t x604 = -112;
int64_t t133 = INT64_MIN;
t133 = (x601*(x602!=(x603-x604)));
if (t133 != INT64_MIN) { NG(); } else { ; }
}
void f134(void) {
volatile uint16_t x605 = 1650U;
int32_t x606 = -1;
uint8_t x607 = 12U;
int64_t x608 = -420283836474LL;
int32_t t134 = -252;
t134 = (x605*(x606!=(x607-x608)));
if (t134 != 1650) { NG(); } else { ; }
}
void f135(void) {
int32_t x609 = INT32_MIN;
int64_t x610 = 4182436LL;
volatile int16_t x611 = -1;
int32_t x612 = INT32_MAX;
volatile int32_t t135 = INT32_MIN;
t135 = (x609*(x610!=(x611-x612)));
if (t135 != INT32_MIN) { NG(); } else { ; }
}
void f136(void) {
static int8_t x617 = 1;
int8_t x618 = 49;
int16_t x619 = -1;
uint8_t x620 = UINT8_MAX;
volatile int32_t t136 = 1;
t136 = (x617*(x618!=(x619-x620)));
if (t136 != 1) { NG(); } else { ; }
}
void f137(void) {
int32_t x621 = 1;
volatile int32_t x622 = INT32_MAX;
int64_t x624 = -1LL;
volatile int32_t t137 = -15115;
t137 = (x621*(x622!=(x623-x624)));
if (t137 != 1) { NG(); } else { ; }
}
void f138(void) {
uint8_t x625 = 1U;
int16_t x626 = INT16_MAX;
static uint32_t x628 = UINT32_MAX;
int32_t t138 = -6778;
t138 = (x625*(x626!=(x627-x628)));
if (t138 != 1) { NG(); } else { ; }
}
void f139(void) {
static volatile int8_t x631 = INT8_MIN;
uint8_t x632 = 2U;
t139 = (x629*(x630!=(x631-x632)));
if (t139 != -128) { NG(); } else { ; }
}
void f140(void) {
int32_t x633 = 11949;
uint16_t x634 = UINT16_MAX;
int32_t x635 = -240608;
t140 = (x633*(x634!=(x635-x636)));
if (t140 != 11949) { NG(); } else { ; }
}
void f141(void) {
volatile uint8_t x637 = 8U;
uint16_t x638 = 0U;
int16_t x639 = -17;
int32_t x640 = INT32_MIN;
int32_t t141 = -29713;
t141 = (x637*(x638!=(x639-x640)));
if (t141 != 8) { NG(); } else { ; }
}
void f142(void) {
uint64_t x642 = 3507133474340179100LLU;
int64_t x643 = INT64_MIN;
volatile int16_t x644 = -1;
static volatile int32_t t142 = 3106;
t142 = (x641*(x642!=(x643-x644)));
if (t142 != -23154) { NG(); } else { ; }
}
void f143(void) {
int64_t x645 = 3663336LL;
uint32_t x646 = 181U;
int8_t x647 = INT8_MIN;
int16_t x648 = -9;
t143 = (x645*(x646!=(x647-x648)));
if (t143 != 3663336LL) { NG(); } else { ; }
}
void f144(void) {
int64_t x649 = INT64_MIN;
int16_t x650 = -1;
uint64_t x652 = 4218952010LLU;
int64_t t144 = INT64_MIN;
t144 = (x649*(x650!=(x651-x652)));
if (t144 != INT64_MIN) { NG(); } else { ; }
}
void f145(void) {
int16_t x654 = 1;
int64_t x655 = -1LL;
uint16_t x656 = UINT16_MAX;
t145 = (x653*(x654!=(x655-x656)));
if (t145 != INT64_MIN) { NG(); } else { ; }
}
void f146(void) {
static uint64_t x657 = UINT64_MAX;
static uint64_t x658 = 5295027LLU;
t146 = (x657*(x658!=(x659-x660)));
if (t146 != UINT64_MAX) { NG(); } else { ; }
}
void f147(void) {
int32_t x666 = -1885385;
uint64_t x667 = 671056861498737541LLU;
volatile int32_t t147 = 1327;
t147 = (x665*(x666!=(x667-x668)));
if (t147 != -1) { NG(); } else { ; }
}
void f148(void) {
int32_t x669 = INT32_MAX;
int32_t x670 = 647456;
uint16_t x671 = 1U;
volatile int32_t t148 = INT32_MAX;
t148 = (x669*(x670!=(x671-x672)));
if (t148 != INT32_MAX) { NG(); } else { ; }
}
void f149(void) {
static uint32_t x674 = 495974885U;
volatile uint16_t x675 = UINT16_MAX;
volatile int16_t x676 = -56;
int32_t t149 = 39050;
t149 = (x673*(x674!=(x675-x676)));
if (t149 != -55) { NG(); } else { ; }
}
void f150(void) {
uint32_t x677 = 24427274U;
static uint32_t x678 = UINT32_MAX;
static int8_t x680 = INT8_MIN;
t150 = (x677*(x678!=(x679-x680)));
if (t150 != 24427274U) { NG(); } else { ; }
}
void f151(void) {
static int32_t x681 = INT32_MAX;
static volatile int64_t x682 = -1329852081466970838LL;
int16_t x684 = INT16_MAX;
int32_t t151 = INT32_MAX;
t151 = (x681*(x682!=(x683-x684)));
if (t151 != INT32_MAX) { NG(); } else { ; }
}
void f152(void) {
int8_t x686 = INT8_MIN;
static int64_t x688 = -20362557122594339LL;
int32_t t152 = 1;
t152 = (x685*(x686!=(x687-x688)));
if (t152 != 112) { NG(); } else { ; }
}
void f153(void) {
volatile int8_t x693 = INT8_MIN;
uint64_t x694 = UINT64_MAX;
int64_t x695 = -14042636353LL;
int64_t x696 = INT64_MIN;
volatile int32_t t153 = 110;
t153 = (x693*(x694!=(x695-x696)));
if (t153 != -128) { NG(); } else { ; }
}
void f154(void) {
int8_t x697 = -1;
static volatile int32_t x699 = INT32_MAX;
int32_t t154 = -503440;
t154 = (x697*(x698!=(x699-x700)));
if (t154 != -1) { NG(); } else { ; }
}
void f155(void) {
static uint16_t x701 = 2U;
int16_t x703 = -4043;
volatile uint8_t x704 = 1U;
volatile int32_t t155 = -62;
t155 = (x701*(x702!=(x703-x704)));
if (t155 != 2) { NG(); } else { ; }
}
void f156(void) {
int32_t x705 = INT32_MIN;
int32_t x706 = INT32_MIN;
int32_t x707 = -1;
t156 = (x705*(x706!=(x707-x708)));
if (t156 != INT32_MIN) { NG(); } else { ; }
}
void f157(void) {
int32_t x709 = 237531778;
uint64_t x710 = 10014554812LLU;
int8_t x711 = INT8_MIN;
static int32_t x712 = INT32_MIN;
int32_t t157 = -3981;
t157 = (x709*(x710!=(x711-x712)));
if (t157 != 237531778) { NG(); } else { ; }
}
void f158(void) {
int8_t x713 = INT8_MAX;
volatile int8_t x714 = -1;
static volatile uint8_t x715 = UINT8_MAX;
int8_t x716 = INT8_MAX;
t158 = (x713*(x714!=(x715-x716)));
if (t158 != 127) { NG(); } else { ; }
}
void f159(void) {
static volatile int64_t x718 = 639885341928896LL;
int32_t x719 = INT32_MIN;
uint32_t x720 = 5753126U;
volatile int64_t t159 = -408649334LL;
t159 = (x717*(x718!=(x719-x720)));
if (t159 != -3482499739965406404LL) { NG(); } else { ; }
}
void f160(void) {
volatile uint32_t x721 = 7172U;
uint16_t x722 = 0U;
volatile int64_t x723 = 1LL;
uint8_t x724 = UINT8_MAX;
t160 = (x721*(x722!=(x723-x724)));
if (t160 != 7172U) { NG(); } else { ; }
}
void f161(void) {
static volatile uint64_t x725 = UINT64_MAX;
uint16_t x726 = UINT16_MAX;
volatile int16_t x727 = INT16_MAX;
int16_t x728 = INT16_MIN;
volatile uint64_t t161 = 18453LLU;
t161 = (x725*(x726!=(x727-x728)));
if (t161 != 0LLU) { NG(); } else { ; }
}
void f162(void) {
static int16_t x729 = 635;
uint32_t x732 = 56U;
static int32_t t162 = 1;
t162 = (x729*(x730!=(x731-x732)));
if (t162 != 635) { NG(); } else { ; }
}
void f163(void) {
volatile int8_t x735 = -1;
int32_t x736 = INT32_MIN;
volatile int32_t t163 = 1346111;
t163 = (x733*(x734!=(x735-x736)));
if (t163 != 0) { NG(); } else { ; }
}
void f164(void) {
int16_t x742 = INT16_MIN;
int64_t x743 = INT64_MIN;
int8_t x744 = INT8_MIN;
static uint32_t t164 = 3238U;
t164 = (x741*(x742!=(x743-x744)));
if (t164 != 36691U) { NG(); } else { ; }
}
void f165(void) {
volatile int64_t x745 = INT64_MIN;
int64_t t165 = INT64_MIN;
t165 = (x745*(x746!=(x747-x748)));
if (t165 != INT64_MIN) { NG(); } else { ; }
}
void f166(void) {
int64_t x749 = INT64_MIN;
static int64_t x750 = -913847887281LL;
int32_t x752 = INT32_MAX;
int64_t t166 = INT64_MIN;
t166 = (x749*(x750!=(x751-x752)));
if (t166 != INT64_MIN) { NG(); } else { ; }
}
void f167(void) {
static int32_t x754 = 3;
int64_t x755 = -1LL;
int16_t x756 = 0;
uint32_t t167 = 264963288U;
t167 = (x753*(x754!=(x755-x756)));
if (t167 != 199U) { NG(); } else { ; }
}
void f168(void) {
int64_t x757 = INT64_MIN;
volatile int64_t x758 = INT64_MIN;
uint64_t x759 = 0LLU;
int8_t x760 = 0;
t168 = (x757*(x758!=(x759-x760)));
if (t168 != INT64_MIN) { NG(); } else { ; }
}
void f169(void) {
int32_t x761 = INT32_MIN;
static int64_t x762 = 9582486619646560LL;
static uint64_t x763 = 11846370LLU;
uint32_t x764 = 205U;
static volatile int32_t t169 = INT32_MIN;
t169 = (x761*(x762!=(x763-x764)));
if (t169 != INT32_MIN) { NG(); } else { ; }
}
void f170(void) {
int32_t x765 = 406;
volatile int64_t x767 = INT64_MAX;
uint64_t x768 = UINT64_MAX;
volatile int32_t t170 = -7643;
t170 = (x765*(x766!=(x767-x768)));
if (t170 != 406) { NG(); } else { ; }
}
void f171(void) {
int64_t x770 = INT64_MAX;
volatile int16_t x772 = 1933;
volatile int32_t t171 = -454265642;
t171 = (x769*(x770!=(x771-x772)));
if (t171 != -1) { NG(); } else { ; }
}
void f172(void) {
volatile int32_t x773 = -1;
uint64_t x774 = UINT64_MAX;
int8_t x775 = -1;
static uint64_t x776 = 6904039010655738026LLU;
int32_t t172 = 5771886;
t172 = (x773*(x774!=(x775-x776)));
if (t172 != -1) { NG(); } else { ; }
}
void f173(void) {
int32_t x777 = -1;
int16_t x778 = -1;
int16_t x779 = -5448;
static uint64_t x780 = 376536LLU;
int32_t t173 = -52472;
t173 = (x777*(x778!=(x779-x780)));
if (t173 != -1) { NG(); } else { ; }
}
void f174(void) {
int8_t x782 = -1;
static int32_t x783 = -4737236;
static uint64_t x784 = 1071100246971LLU;
volatile int64_t t174 = -449LL;
t174 = (x781*(x782!=(x783-x784)));
if (t174 != -1LL) { NG(); } else { ; }
}
void f175(void) {
uint64_t x785 = 543593306996LLU;
int8_t x786 = INT8_MIN;
uint8_t x787 = UINT8_MAX;
int64_t x788 = INT64_MAX;
uint64_t t175 = 7280339736453641LLU;
t175 = (x785*(x786!=(x787-x788)));
if (t175 != 543593306996LLU) { NG(); } else { ; }
}
void f176(void) {
int8_t x789 = INT8_MAX;
int16_t x790 = INT16_MIN;
uint32_t x791 = 1749462754U;
int8_t x792 = -1;
volatile int32_t t176 = -1;
t176 = (x789*(x790!=(x791-x792)));
if (t176 != 127) { NG(); } else { ; }
}
void f177(void) {
uint64_t x793 = 17724999740LLU;
volatile uint16_t x794 = UINT16_MAX;
uint16_t x795 = 21758U;
volatile uint64_t t177 = 52496938LLU;
t177 = (x793*(x794!=(x795-x796)));
if (t177 != 17724999740LLU) { NG(); } else { ; }
}
void f178(void) {
int16_t x797 = -1;
int64_t x800 = -1LL;
volatile int32_t t178 = -45410531;
t178 = (x797*(x798!=(x799-x800)));
if (t178 != -1) { NG(); } else { ; }
}
void f179(void) {
int16_t x801 = INT16_MIN;
volatile int8_t x802 = INT8_MAX;
int8_t x803 = INT8_MIN;
t179 = (x801*(x802!=(x803-x804)));
if (t179 != -32768) { NG(); } else { ; }
}
void f180(void) {
static volatile uint64_t x805 = 171966LLU;
uint32_t x806 = 7587U;
int16_t x808 = INT16_MIN;
uint64_t t180 = 13635554LLU;
t180 = (x805*(x806!=(x807-x808)));
if (t180 != 171966LLU) { NG(); } else { ; }
}
void f181(void) {
static int16_t x809 = INT16_MIN;
static int8_t x810 = INT8_MIN;
int64_t x812 = -48179069275245993LL;
volatile int32_t t181 = 126076666;
t181 = (x809*(x810!=(x811-x812)));
if (t181 != -32768) { NG(); } else { ; }
}
void f182(void) {
static int64_t x817 = INT64_MIN;
static int64_t x820 = 544LL;
int64_t t182 = INT64_MIN;
t182 = (x817*(x818!=(x819-x820)));
if (t182 != INT64_MIN) { NG(); } else { ; }
}
void f183(void) {
uint16_t x822 = 4247U;
int8_t x823 = 7;
volatile uint32_t t183 = 1U;
t183 = (x821*(x822!=(x823-x824)));
if (t183 != 1084429U) { NG(); } else { ; }
}
void f184(void) {
static uint32_t x825 = UINT32_MAX;
uint32_t x826 = 78742499U;
volatile int32_t x827 = INT32_MIN;
int32_t x828 = -317988;
uint32_t t184 = UINT32_MAX;
t184 = (x825*(x826!=(x827-x828)));
if (t184 != UINT32_MAX) { NG(); } else { ; }
}
void f185(void) {
uint32_t x833 = 47947U;
static int8_t x834 = INT8_MAX;
static uint8_t x835 = 101U;
int8_t x836 = INT8_MIN;
static uint32_t t185 = 22914595U;
t185 = (x833*(x834!=(x835-x836)));
if (t185 != 47947U) { NG(); } else { ; }
}
void f186(void) {
static int32_t x841 = -2290;
static uint16_t x842 = 0U;
int8_t x844 = INT8_MIN;
int32_t t186 = -33905702;
t186 = (x841*(x842!=(x843-x844)));
if (t186 != -2290) { NG(); } else { ; }
}
void f187(void) {
static int32_t x845 = INT32_MAX;
uint8_t x846 = UINT8_MAX;
int16_t x847 = INT16_MAX;
int16_t x848 = INT16_MIN;
int32_t t187 = INT32_MAX;
t187 = (x845*(x846!=(x847-x848)));
if (t187 != INT32_MAX) { NG(); } else { ; }
}
void f188(void) {
int16_t x850 = -1;
static volatile uint16_t x851 = UINT16_MAX;
static int8_t x852 = INT8_MIN;
volatile int32_t t188 = 8048362;
t188 = (x849*(x850!=(x851-x852)));
if (t188 != 56) { NG(); } else { ; }
}
void f189(void) {
int32_t x853 = INT32_MIN;
uint64_t x855 = 276511258697546684LLU;
volatile uint64_t x856 = UINT64_MAX;
int32_t t189 = INT32_MIN;
t189 = (x853*(x854!=(x855-x856)));
if (t189 != INT32_MIN) { NG(); } else { ; }
}
void f190(void) {
uint16_t x857 = UINT16_MAX;
static uint64_t x858 = 738146813539646209LLU;
uint16_t x859 = 5133U;
static volatile uint32_t x860 = UINT32_MAX;
volatile int32_t t190 = -6;
t190 = (x857*(x858!=(x859-x860)));
if (t190 != 65535) { NG(); } else { ; }
}
void f191(void) {
static uint8_t x861 = UINT8_MAX;
volatile int64_t x862 = INT64_MIN;
int64_t x864 = -1LL;
volatile int32_t t191 = 4084149;
t191 = (x861*(x862!=(x863-x864)));
if (t191 != 255) { NG(); } else { ; }
}
void f192(void) {
int16_t x865 = 1;
int8_t x866 = -1;
uint16_t x867 = UINT16_MAX;
volatile int16_t x868 = 34;
volatile int32_t t192 = 0;
t192 = (x865*(x866!=(x867-x868)));
if (t192 != 1) { NG(); } else { ; }
}
void f193(void) {
volatile int8_t x869 = INT8_MAX;
volatile int16_t x870 = 0;
int64_t x871 = INT64_MIN;
int8_t x872 = INT8_MIN;
static volatile int32_t t193 = -2120;
t193 = (x869*(x870!=(x871-x872)));
if (t193 != 127) { NG(); } else { ; }
}
void f194(void) {
int32_t x873 = 23274045;
static volatile uint32_t x874 = 0U;
int16_t x875 = INT16_MIN;
uint16_t x876 = UINT16_MAX;
volatile int32_t t194 = -3357377;
t194 = (x873*(x874!=(x875-x876)));
if (t194 != 23274045) { NG(); } else { ; }
}
void f195(void) {
static int64_t x877 = INT64_MIN;
uint32_t x879 = 160694582U;
int16_t x880 = -1;
t195 = (x877*(x878!=(x879-x880)));
if (t195 != INT64_MIN) { NG(); } else { ; }
}
void f196(void) {
static uint16_t x881 = 17000U;
int16_t x882 = INT16_MIN;
int16_t x883 = 19;
volatile int32_t x884 = -39;
static volatile int32_t t196 = 2;
t196 = (x881*(x882!=(x883-x884)));
if (t196 != 17000) { NG(); } else { ; }
}
void f197(void) {
volatile int32_t x885 = -1;
uint16_t x886 = UINT16_MAX;
volatile int8_t x887 = INT8_MIN;
int32_t t197 = -5514023;
t197 = (x885*(x886!=(x887-x888)));
if (t197 != -1) { NG(); } else { ; }
}
void f198(void) {
int32_t x889 = INT32_MIN;
int16_t x891 = -1;
int8_t x892 = INT8_MAX;
volatile int32_t t198 = INT32_MIN;
t198 = (x889*(x890!=(x891-x892)));
if (t198 != INT32_MIN) { NG(); } else { ; }
}
void f199(void) {
int16_t x893 = -1;
int8_t x894 = INT8_MIN;
static uint64_t x895 = 9256852464293612LLU;
uint16_t x896 = 6U;
t199 = (x893*(x894!=(x895-x896)));
if (t199 != -1) { NG(); } else { ; }
}
int main(void) {
f0();
f1();
f2();
f3();
f4();
f5();
f6();
f7();
f8();
f9();
f10();
f11();
f12();
f13();
f14();
f15();
f16();
f17();
f18();
f19();
f20();
f21();
f22();
f23();
f24();
f25();
f26();
f27();
f28();
f29();
f30();
f31();
f32();
f33();
f34();
f35();
f36();
f37();
f38();
f39();
f40();
f41();
f42();
f43();
f44();
f45();
f46();
f47();
f48();
f49();
f50();
f51();
f52();
f53();
f54();
f55();
f56();
f57();
f58();
f59();
f60();
f61();
f62();
f63();
f64();
f65();
f66();
f67();
f68();
f69();
f70();
f71();
f72();
f73();
f74();
f75();
f76();
f77();
f78();
f79();
f80();
f81();
f82();
f83();
f84();
f85();
f86();
f87();
f88();
f89();
f90();
f91();
f92();
f93();
f94();
f95();
f96();
f97();
f98();
f99();
f100();
f101();
f102();
f103();
f104();
f105();
f106();
f107();
f108();
f109();
f110();
f111();
f112();
f113();
f114();
f115();
f116();
f117();
f118();
f119();
f120();
f121();
f122();
f123();
f124();
f125();
f126();
f127();
f128();
f129();
f130();
f131();
f132();
f133();
f134();
f135();
f136();
f137();
f138();
f139();
f140();
f141();
f142();
f143();
f144();
f145();
f146();
f147();
f148();
f149();
f150();
f151();
f152();
f153();
f154();
f155();
f156();
f157();
f158();
f159();
f160();
f161();
f162();
f163();
f164();
f165();
f166();
f167();
f168();
f169();
f170();
f171();
f172();
f173();
f174();
f175();
f176();
f177();
f178();
f179();
f180();
f181();
f182();
f183();
f184();
f185();
f186();
f187();
f188();
f189();
f190();
f191();
f192();
f193();
f194();
f195();
f196();
f197();
f198();
f199();
return 0;
}
|
SAN DIEGO (KUSI) – After a tip came into KUSI that ofo bikes were being stacked up and sold as scrap metal for $3 a piece, we went to check it out.
We got this exclusive video of dozens upon dozens of the banana-yellow bikes stacked up at a local recycling center.
The recycling center, just east of Downtown San Diego, purchased the irreparable bikes as scrap metal.
Neighbors of the recycling center said they’d rather see the yellow bikes piled up there than strewn across the streets of San Diego or, even worse, in the ocean.
The Chinese bike-sharing startup has already laid off employees across the board in North America, in marketing, communications, and engineering, and it’s dramatically pulled back its US operations in just the past month.
ofo was one of the first dockless bike-share companies to set up in the US, challenging existing docked programs like Ford GoBike in San Francisco and Citi Bike in New York. Those docked bike-share programs were largely operated by a company named Motivate, which sold to Lyft earlier this month for a reported $250 million.
That deal was followed by Uber buying the dockless bike-sharing startup Jump Bikes in April for a reported $200 million.
ofo believed the dockless model, in addition to being more convenient for customers, would help it expand quickly and affordably.
In January, Taylor, ofo’s then-head of North America, told Forbes that a typical docked bike program spent $80,000 to $100,000 to set up each dock, and $1,500 to $2,000 per bike.
By comparison, he said, ofo spent “a couple hundred bucks” per bike. “We provide a better bike and it’s more cost effective,” he said.
ofo has struggled with theft and vandalism in San Diego, particularly by the homeless, according to police. The seeming disposability of the bikes, paired with their use by homeless people in the area, prompted hundreds of public complaints about the bikes being left strewn across sidewalks or tossed into the ocean. |
def create_dummy_files():
    """Build one dummy-module string per backend combination.

    Relies on two helpers defined elsewhere in the same script: read_init(),
    which maps each backend combination to the objects guarded behind it, and
    create_dummy_object(), which renders placeholder code for a single object.
    """
    backend_specific_objects = read_init()
    dummy_files = {}
    for backend, objects in backend_specific_objects.items():
        # e.g. "torch_and_tf" -> '["torch", "tf"]'
        backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]"
        dummy_file = "# This file is autogenerated by the command `make fix-copies`, do not edit.\n"
        dummy_file += "from ..file_utils import requires_backends\n\n"
        dummy_file += "\n".join([create_dummy_object(o, backend_name) for o in objects])
        dummy_files[backend] = dummy_file
    return dummy_files
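Neither helper is shown in this snippet. As a rough, hypothetical sketch of what create_dummy_object might emit for one guarded name (the real script distinguishes classes, functions, and constants more carefully):

def create_dummy_object(name, backend_name):
    # Hypothetical sketch only; the actual helper in the source script is richer.
    if name.isupper():
        # Constants are stubbed with a harmless placeholder value.
        return f"{name} = None\n"
    # Everything else is stubbed as a class whose constructor refuses to run
    # unless the required backends are installed.
    return (
        f"class {name}:\n"
        f"    def __init__(self, *args, **kwargs):\n"
        f"        requires_backends(self, {backend_name})\n"
    )
|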
Landmarks and outlines of the Russian tax system-2050 In the current situation, where new progressive technologies are transforming social relations, the topic of this article becomes especially relevant. The reason is that an insight into global trends for the next three decades, and their extrapolation to individual socio-economic systems, is important for the effective functioning and development of such systems. However, such futurological issues in the subject areas have not been sufficiently studied or developed by researchers so far. This also applies directly to the tax sphere, where practitioners have established the tasks of developing and implementing a digital platform and creating a fiscal ecosystem. The object of this research is the Russian tax system. The purpose is to develop landmarks and outlines for the development of such a system for the period up to 2050. The research is based on systemic and integrative approaches, as well as the methods of analysis and synthesis. The research provided the following solutions: 1) contemporary economic theories and concepts related to the changing global technological landscape were organized into a system; 2) some trends in the development of society in 2025-2050 were analyzed in the context of advancements in technology (the report of Huawei and the study by K. Kelly); 3) a brief description was given of the current state of the Russian tax system and its prospects of development; 4) based on the analysis and synthesis of information on the theories, global trends and the current situation in the taxation system, a matrix of system transformations was developed; it reflects the genesis of the smart society and one of its components, the tax system; 5) some promising development directions of taxation theory and practice were suggested, in particular, the rationale and methodology of the tax ecosystem as a highly complex system. |
# doc/en/example/py2py3/conftest.py
import sys
import pytest

py3 = sys.version_info[0] >= 3

class DummyCollector(pytest.collect.File):
    def collect(self):
        return []

def pytest_pycollect_makemodule(path, parent):
    bn = path.basename
    # Skip py3-only modules under Python 2 and py2-only modules under Python 3
    # by collecting them with a dummy collector that yields no tests.
    if ("py3" in bn and not py3) or ("py2" in bn and py3):
        return DummyCollector(path, parent=parent)
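A minimal, hypothetical layout this conftest would act on; the module names are assumptions chosen only to match the naming convention the hook checks for:

# Hypothetical sibling test modules:
#   test_hello_py2.py  (collected only when running under Python 2)
#   test_hello_py3.py  (collected only when running under Python 3)
#
# Contents of the hypothetical test_hello_py3.py:
def test_str_is_unicode():
    assert isinstance("x", str)  # holds on Python 3; the file is skipped on Python 2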
|
Eradication therapy for Helicobacter pylori infection based on the antimicrobial susceptibility test in children: A single-center study over 12 years Helicobacter pylori (H. pylori) infection causes chronic gastritis, duodenal and, to a lesser extent, gastric ulcers, and gastric cancer. Most H. pylori infections are acquired in childhood, and effective treatment of childhood infection is very important. Esophagogastroduodenoscopy (EGD) is useful for endoscopic diagnosis, mucosal tissue biopsy, and culture examination for H. pylori in children and adults. In this paper, we report results of susceptibility tests and eradication rates in H. pylori-positive children who underwent EGD over a 12-year period. |
The suitability assessment of regional construction land is one of the important prerequisites for spatial arrangement in regional planning, as well as an important foundation for the reasonable utilization of regional land resources. With the support of GIS, and by using regional comprehensive strength and spatial accessibility analysis together with eco-environmental sensitivity analysis, this paper quantitatively analyzed the development potential, and its ecological limitations, of the central and southern parts of Hebei Province. In addition, based on cost-benefit analysis, a potential-limitation model was developed, and three land suitability scenarios under different development concepts were captured through an interaction matrix. The results indicated that both the comprehensive strength and the development potential of the study area showed a primacy distribution pattern and presented an obvious pole-axis spatial pattern. Areas with higher eco-environmental sensitivity were mainly distributed in the western regions, while those with lower eco-environmental sensitivity were in the eastern regions. The regional economic development concept had important effects on the regional ecological security pattern and urban growth. The principles and methods newly developed for land suitability assessment in this paper can not only scientifically realize the spatial gridding of regional development potential and capture future land development trends and spatial distribution, but also provide a scientific basis and effective tools for urban and regional planning to realize regional 'smart growth' and 'smart conservation'. |
INTRODUCTION Findings from several epidemiological studies have revealed that major depression is associated with an increased risk of developing cardiovascular diseases (CVD) and of presenting complications and new events in subjects with already-established CVD. The pathophysiological mechanisms responsible for this increased cardiovascular risk in major depression remain unclear. DEVELOPMENT The aim of this work is to review the literature on the possible pathophysiological mechanisms involved in the relation between major depression and CVD, with special emphasis on studies dealing with cardiovascular autonomic dysfunction and heart rate variability. Likewise, recent hypotheses concerning the neural mechanisms underlying autonomic dysfunction in subjects with major depression are also discussed. CONCLUSIONS The evidence currently available allows us to hypothesise that there are anomalies in the functioning of the central autonomic neural network in subjects with major depression, and more specifically in the hippocampus, prefrontal cortex and brain stem nuclei. Such abnormalities, in association with lower central levels of serotonin, give rise to a predominance of sympathetic flow and a loss of cardiac vagal tone. The resulting cardiovascular autonomic dysfunction could be the main cause of the increased cardiovascular risk observed in major depression. In the future, studying the autonomic nervous system may be a useful tool in the development of new therapeutic strategies aimed at reducing cardiovascular morbidity and mortality in subjects with depression. |
Anetoderma: a case report and review of the literature. Anetoderma is a rare benign dermatosis caused by a loss of mid-dermal elastic tissue resulting in well-circumscribed areas of pouchlike herniations of flaccid skin. Anetoderma is classically categorized as either primary (idiopathic) or secondary (following an inflammatory dermatosis in the same location). We report a case of primary anetoderma (PA) occurring in a human immunodeficiency virus 1 (HIV-1)-infected man. We review the clinical presentation, possible etiologies, associated conditions, and limited treatment options of this disease. |
Writing in Salon, David Denvir notes that instead of attacking cities as scary, crime-ridden places like the conservatives of yesteryear, the modern Republican party is mostly just pretending they don't exist. Nowhere is this more evident than in the GOP presidential primaries, where, with the exception of one particularly stupid Gingrichian comment about putting inner city kids to work as janitors, cities and urban issues have scarcely even been mentioned.
The first two primaries are held in rural, low-population states, which serves to skew the campaign rhetoric away from the urban. But Denvir notes that the candidates' blind eye to urbanity is rooted in a deeper, more complex problem:
The specter of the black ghetto still scripts urban dwellers as villains (often as thieves robbing the citizen either directly, or as in this Rick Santorum comment, indirectly: “I don’t want to make black people’s lives better by giving them other people’s money”). But unlike the era of Ronald Reagan’s welfare queen, today cities are more ignored than attacked. And this goes well beyond Iowa. “The core of the Republican constituency in metropolitan America are the growing, racially and economically exclusive ‘outer suburbs’ whose privileged status Republicans seek to protect at all costs,” says former mayor of Albuquerque David Rusk, now a consultant. He cited New Jersey Gov. Chris Christie as an exemplar of the trend.
Yet crime rates have fallen, almost across the board, in American cities, so they're no longer that scary. Rich young Americans are moving into cities, lending them a veneer of cool. But they largely remain Democratic strongholds. Perhaps because of this, the GOP has evidently determined that engaging urban issues at all is a waste of time.
Which is weird. Cities, of course, are the primary engines of the nation's economy. And Americans may not be flocking to cities as quickly as the rest of the world, but the trend still looks to be heading towards urbanization. So it's screwy indeed that one of our two political parties can get away with scarcely acknowledging such a vast swath of the population–and the place that most Americans will likely soon live, especially as ever-rising gas prices make long commutes less tenable.
Obviously, our politics–especially our outsized, epically-proportioned presidential campaign politics–panders to a mythic and nostalgic semi-reality that voters prefer to real reality. But we really should be talking about cities, about smart growth, and about sustainable development. We need to be talking about how to make our cities better places to live, how to address poverty in the inner city, and how to keep tomorrow's denser, more connected communities healthy and happy. Not to mention that the era of the McMansion is ending, and catering exclusively to the exurbs is an unsustainable strategy. Ignoring cities at this point isn't just odd; it will soon become bad politics. |
“Brick by Boring Brick” by the band Paramore describes the adventures of a girl in a fantasy land. The wonder quickly turns into a nightmare and the friendly creatures turn against her. What is the meaning of this video? The answer is concealed in the symbolism of the video and alludes to a disturbing practice: mind control.
I’ve often been asked if the symbolism described in my previous articles is found in music videos outside of the R&B genre I usually analyze. The answer is sadly ‘yes’, and Paramore’s Brick by Boring Brick is a stunning example. This pop-punk band, described as “emo without being whiny or bratty”, primarily appeals to kids and teenagers.
They have obtained worldwide success and numerous awards for their singles crushcrushcrush and Decode. The band has been featured in numerous movies (Twilight) and video games. The band’s newest album, named Brand New Eyes, introduces fans to symbolism they are probably not familiar with. Looking at the promotional material, readers of this site will probably recognize signs and symbols used by other pop stars as well. To make it simple: Paramore seems to have been influenced by the Illuminati. Brick by Boring Brick steers away from the band’s usual high school themes to tackle a subject most teenagers are totally oblivious to: mind control and, more precisely, Monarch Programming.
That Darned One Eye Symbol
As seen with Lady Gaga, Rihanna and other artists using mind control symbolism in their videos, Paramore has adopted the “One-Eye” symbol in their promotional pictures:
Please don’t tell me it is a coincidence.
Monarch Programming
As discussed in previous articles, Monarch Programming is a mind-control technique used mostly on children to make them dissociate from reality.
“One of the primary reasons that the Monarch mind-control programming was named Monarch programming was because of the Monarch butterfly. The Monarch butterfly learns where it was born (its roots) and it passes this knowledge via genetics on to its offspring (from generation to generation). This was one of the key animals that tipped scientists off, that knowledge can be passed genetically. (…) The primary important factor for the trauma-based mind-control is the ability to disassociate. It was discovered that this ability is passed genetically from generation to generation. American Indian tribes (who had traumatic ritual dances and who would wait motionless for hours when hunting), children of Fakirs in India (who would sleep on a bed of nails or walk on hot coals), children of Yogis (those skilled in Yoga, who would have total control over their body while in a trance), Tibetan Buddhists, children of Vodoun, Bizango and other groups have a good ability to disassociate. The children of multigenerational abuse are also good at dissociation. The Illuminati families and European occultists went to India and Tibet to study occultism and eastern philosophy. These Europeans learned yoga, tantric yoga, meditations, and trances and other methods to disassociate. These skills are passed on to their children via genetics. A test is run when the children are about 18 months old to determine if they can dissociate enough to be selected for programming or not.” -The Illuminati Formula Used to Create an Undetectable Total Mind Controlled Slave
During sexual abuse, electroshock therapy and all kinds of sadistic tortures, mind control slaves are encouraged to dissociate from reality and to go to “a happy place”. The use of fairy tale imagery is used to reinforce programming and to create an alternate reality. The victim’s brain, in self-preservation mode, creates a new persona (an “alter”) as a defense mechanism to the abuse. The blurring of the lines between reality and fantasy makes the slave totally oblivious to his/her true state.
Paramore’s latest album is called “Brand New Eyes”, which has obvious mind control/Illuminati connotations. The cover features a pinned butterfly with its wings separated from its body … symbolic indeed.
Brick by Boring Brick
Paramore’s song is, at face value, about a girl escaping her problems and acting childish only to realize that it makes things worse. Behind this first degree meaning, lies a second layer of interpretation: the song describes, in chilling detail, the reality of a mind-control slave. The video manages to assemble all of the symbolism usually associated with Monarch Programming in about three minutes, leaving no doubt concerning this secondary meaning of the song.
Right from the start of the video, the subject matter of the song is made very clear. The setting is totally unreal and synthetically created. A little girl, apparently a child version of the singer Hayley, runs towards a strange world, bearing monarch butterfly wings on her back, symbolizing that she is a Monarch slave. She almost reluctantly enters a symbolic gateway, representing the start of her dissociative state. The door violently shuts behind her, hinting to viewers that this wonderland has been forcibly induced on the child. The lyrics of the first verse describe the reality of the slave.
Well she lives in the fairy tale
Somewhere too far for us to find
Forgotten the taste and smell
Of a world that she’s left behind
It’s all about the exposure the lens I told her
The angles are all wrong now
She’s ripping wings off of butterflies
The girl lives in a “fairy tale”, which is her dissociative mind state. It is “too far for us to find” due to the fact that this world can only be found in the confines of her consciousness. The slave has been removed from her family and the real world to live in a confined environment. She has “forgotten the taste and smell” of the “real world” she has left behind. She lives in a prison for kids, a human rat laboratory and she is constantly manipulated by her handlers. All of her senses are subject to constant pressure and pain and her perception of reality is completely distorted: “The angles are all wrong now“. She is a Monarch slave and is thus “ripping wings off of butterflies“.
Keep your feet on the ground
When your head’s in the clouds
The dissociative state experienced by Monarch slaves is often described as a sensation of weightlessness. While her feet are on the ground, her consciousness is in an alternate reality or “in the clouds“.
The girl in the video walks around this strange world filled with fairy tale characters which are reminiscent of those found in Alice in Wonderland or the Wizard of Oz, the stories most commonly associated with mind control. The blurriness of the scenes and the presence of mushrooms in the background refer to the use of hallucinogenic drugs during Monarch Programming.
The girl enters a castle, representing her inner consciousness. Mirrors, reflections and the girl’s multiplication symbolize the girl’s fragmented/compartmentalized mind state.
The girl stands still while an independent, alternate personality, looking back at her through the mirror brushes her hair. Mirrors and castles are triggers that are often used in Monarch Programming.
“The premise of trauma-based mind control (a version of which was known as the MK Ultra program) is to compartmentalize the brain, and then use techniques to access the different sections of the brain while the subject is hypnotized. Entire systems can be embedded into a person’s mind, each with its own theme, access codes and trigger words. Some of the most common and popular symbolisms and themes in use are Alice in Wonderland, Peter Pan and The Wizard of Oz, mirrors, porcelain/harlequin masks, the phoenix/phoenix rising, rainbows, butterflies, owls, keys, carousels, puppets/marionettes and dolls,willow trees, tornadoes, spirals/helixes, castles, rings, hallways and doors, elevators and stairs.”
-Source
The second verse of the song describes a disturbing reality of Monarch slaves.
So one day he found her crying
Coiled up on the dirty ground
Her prince finally came to save her
And the rest you can figure out
But it was a trick
And the clock struck 12
This is the picture painted by this verse: the slave’s handler enters her “cell”, where she is coiled up and deeply traumatized. The floor is dirty. It has been documented that victims of mind control are forced to live in rooms littered with feces (I can’t make this stuff up). Her “prince”, who is, in fact, her handler, comes in to “save her from her pain”. Handlers are often portrayed as the slave’s savior, who will guide them through traumatic events. The line “And the rest you can figure out” alludes to the worst: the “prince” came to rape her. It was a trick, he was not a prince, only a sadistic handler furthering the girl’s trauma with sexual abuse. During those repeated assaults, the slaves are forced to dissociate from reality. The lyrics of the song’s bridge aptly define this concept.
Well you built up a world of magic
Because your real life is tragic
Yeah you built up a world of magic
She has built, brick by brick, a wall in her consciousness that dissociates her from reality. She escapes into a world of magic due to the extreme trauma she has to live through on a daily basis.
The Awakening
Probably because the girl’s curiosity concerning her own mind has led her too far, the world of wonders quickly becomes nightmarish. Creepy puppets make their way out of the mirrors. The characters of her fairytale world suddenly become terrifying. An evil-looking character, dressed as a thief holding an ax, approaches her. Is she being reprimanded by her handlers for “not following the script” of her programming? The girl is understandably freaked out and runs away. The lyrics explain this difference between reality and fiction.
If it’s not real
You can’t hold it in your hand
You can’t feel it with your heart
And I won’t believe it
But if it’s true
You can see it with your eyes
Or even in the dark
And that’s where I want to be, yeah
The girl runs out of the castle and falls into the grave dug by … Paramore? That is not really cool of them. Hayley gets up, throws the girl’s doll into the grave and they start burying her. At face value, this can be interpreted as the burial of the “young irresponsible girl” living in a fairy tale. On a second level, this can be seen as the burial of the innocence of a child after experiencing traumatic events.
If you have keen eyes, you can notice a white rabbit inside the hole. Is it the white rabbit of Alice in Wonderland? As Morpheus says in the Matrix:
“You take the red pill – you stay in Wonderland and I show you how deep the rabbit hole goes. “
Whatever the meaning one attributes to the burial, the message of the video is not too sympathetic to the girl’s quest for self-knowledge and emancipation. Seems like they’re saying “This is what you get for trying to know your real self”.
In Conclusion
After viewing the video a couple of times, I asked myself if the song was actually a denunciation of mind-control practices … maybe it was trying to inform and warn people on the subject. So I visited some Paramore-related sites and forums to see if the song had sparked discussions concerning its deeper meaning. I quickly came back to the reality of things: Young people listen to this music and they have absolutely NO IDEA what’s going on. About 97% of educated adults are totally unaware of the existence of mind control (let alone its symbolism), so to expect high schoolers to know about this is totally absurd. Here are some actual comments from fans about this song: “I luv the Badabada part!”, “Hayley looks great in blonde!” or “I don’t like the burying part!”.
So with that in mind, I keep asking myself: Why do we use symbolism and triggers associated with mind control in videos aimed at young people? They are totally oblivious to the reality of Monarch programming, so why do we expose them to it? After I realized that the group has adopted some of the Illuminati symbolism discussed in previous articles, the answer became very clear: They are part of the System, with a capital “S”. This System hypnotically conditions people to accept mind control as part of their daily lives, and the trend is becoming increasingly apparent. I can already hear the naysayers saying “nay” to everything and finding ways to rationalize everything that has been discussed here. Maybe they should ponder these words:
“Even as he dances to the tune of the elite managers of human behavior, the modern man scoffs with a great derision at the idea of the existence and operation of a technology of mass mind control emanating from media and government. Modern man is much too smart to believe anything as superstitious as that! Modern man is the ideal hypnotic subject: puffed up on the idea that he is the crown of creation, he vehemently denies the power of the hypnotist’s control over him as his head bobs up and down on a string.”
-Michael A. Hoffman II, Secret Societies and Psychological Warfare |
// howls/dinner-reservation/internal/client/service.go
package client

import (
	"context"
	"log"

	"github.com/TheBigBadWolfClub/go-lab/howls/dinner-reservation/internal"
	"github.com/TheBigBadWolfClub/go-lab/howls/dinner-reservation/internal/middlewares"
	"github.com/TheBigBadWolfClub/go-lab/howls/dinner-reservation/internal/table"
)

// Client is the entity used in domain and business use cases.
type Client struct {
	ID      int64
	Name    string
	Size    int
	CheckIn string
	TableID int64
}

// Service lists the use cases supported by the client entity.
type Service interface {
	Create(context.Context, Client) (int64, error)
	List(context.Context) ([]Client, error)
	CheckIn(context.Context, string, int) error
	CheckOut(context.Context, string) error
	FilterByCheckIn(context.Context) ([]Client, error)
}

type service struct {
	store        Repository
	tableService table.Service
}

// NewService creates a client Service that orchestrates use cases related to the client entity.
func NewService(store Repository, tableService table.Service) *service {
	return &service{
		store:        store,
		tableService: tableService,
	}
}

// Create saves a client to permanent storage.
func (s *service) Create(ctx context.Context, client Client) (int64, error) {
	return s.store.Save(ctx, client)
}

// List returns all clients that will attend the party.
func (s *service) List(ctx context.Context) ([]Client, error) {
	return s.store.FetchAll(ctx)
}

// CheckIn registers an arriving client if their table has enough available seats.
func (s *service) CheckIn(ctx context.Context, name string, size int) error {
	findClient, err := s.store.Get(ctx, name)
	if err != nil {
		return err
	}
	findClient.Size = size
	availableSeats, err := s.tableService.AvailableSeats(ctx, findClient.TableID)
	if err != nil {
		return err
	}
	if availableSeats < findClient.Size {
		log.Printf("%s::%s::%s", ctx.Value(middlewares.RIDKey), "checkIn client", internal.ErrNoAvailableSeat)
		return internal.ErrNoAvailableSeat
	}
	return s.store.UpdateCheckIn(ctx, findClient)
}

// CheckOut records a client leaving the party.
func (s *service) CheckOut(ctx context.Context, name string) error {
	return s.store.Delete(ctx, name)
}

// FilterByCheckIn lists all clients currently checked in to the party.
func (s *service) FilterByCheckIn(ctx context.Context) ([]Client, error) {
	return s.store.FetchCheckedIn(ctx)
}
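A minimal wiring sketch for this service. It stays within the names visible in this file; the concrete Repository and table.Service implementations are assumed to come from elsewhere in the project, and the guest data is made up:

// Hypothetical example, not part of the package's confirmed API surface.
func exampleWiring(ctx context.Context, store Repository, tables table.Service) error {
	svc := NewService(store, tables)
	// Reserve a spot for a party of two at table 1 (made-up data).
	id, err := svc.Create(ctx, Client{Name: "Red Riding Hood", Size: 2, TableID: 1})
	if err != nil {
		return err
	}
	log.Printf("created client %d", id)
	// Later, when the guests arrive:
	return svc.CheckIn(ctx, "Red Riding Hood", 2)
}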
|
export interface IFormValues {
  /* field name and field value for all fields */
  [key: string]: any;
}

export interface IErrors {
  /* field name mapped to the list of validation error messages for that field */
  [key: string]: string[];
}
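A small usage sketch built on these two interfaces; the required-field rule and the field names are assumptions made for illustration:

// Hypothetical validator, not part of this module.
const validateRequired = (values: IFormValues, fields: string[]): IErrors => {
  const errors: IErrors = {};
  for (const field of fields) {
    if (values[field] === undefined || values[field] === "") {
      errors[field] = [`${field} is required`];
    }
  }
  return errors;
};

// validateRequired({ name: "" }, ["name", "email"])
// => { name: ["name is required"], email: ["email is required"] }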
|
Wednesday offered a behind-the-scenes look at Trump National Golf Club in Bedminster, N.J., as the course prepares to host the 72nd U.S. Women's Open.
Several golfers, including 2016 champion Brittany Lang, were on hand, along with officials from the USGA and Trump National (including Eric and Donald Jr., sons of President Donald Trump) for Preview Day.
Much of the talk Wednesday avoided politics, but it's hard to deny Trump National's place on the political landscape, with the president using the club at times as his weekend escape from the White House.
Check out the photos at the top of this story for a rare look inside the property line at Trump National in Bedminster. |
Model-based evaluation of nitrogen removal in a tannery wastewater treatment plant. Computer modelling has been used over the last 15 years as a powerful tool for understanding the behaviour of activated sludge wastewater treatment systems. However, computer models are mainly applied to domestic wastewater treatment plants (WWTPs). Application of these types of models to industrial wastewater treatment plants requires a different model structure and an accurate estimation of the kinetics and stoichiometry of the model parameters, which may be different from the ones used for domestic wastewater. Most of these parameters are strongly dependent on the wastewater composition. In this study a modified version of the activated sludge model No. 1 (ASM 1) was used to describe a tannery WWTP. Several biological tests and complementary physical-chemical analyses were performed to characterise the wastewater and sludge composition in the context of activated sludge modelling. The proposed model was calibrated under steady-state conditions and validated under dynamic flow conditions. The model was successfully used to obtain insight into the existing plant performance, possible extensions and options for process optimisation. The model illustrated the potential capacity of the plant to achieve full denitrification and to handle a higher hydraulic load. Moreover, the use of a mathematical model as an effective tool in decision making was demonstrated. |
Mortgage rates plummeted to their lowest levels in three years this week.
Weak first-quarter economic growth, persistent global economic worries and last week’s anemic jobs report all contributed to pushing down bond yields. Because mortgage rates tend to follow the yield on the 10-year Treasury, home loan rates retreated.
According to the latest data released Thursday by the Federal Home Loan Mortgage Corp., the 30-year fixed-rate average sank to a low not seen since May 2013, falling to 3.57 percent with an average 0.5 point. (Points are fees paid to a lender equal to 1 percent of the loan amount.) It was 3.61 percent a week ago and 3.85 percent a year ago. The 30-year fixed rate has dropped 44 basis points since the first of the year. (A basis point is 0.01 percentage point.)
[Why prequalified doesn’t always mean you’ll get that mortgage]
The 15-year fixed-rate average tumbled to 2.81 percent with an average 0.5 point. It was 2.86 percent a week ago and 3.07 percent a year ago.
The five-year adjustable rate average fell to 2.78 percent with an average 0.5 point. It was 2.8 percent a week ago and 2.89 percent a year ago.
“Disappointing April employment data once again kept a lid on Treasury yields, which have struggled to stay above 1.8 percent since late March,” Sean Becketti, Freddie Mac chief economist, said in a statement. “Prospective homebuyers will continue to take advantage of a falling rate environment that has seen mortgage rates drop in 14 of the previous 19 weeks.”
[Home buyers don’t seem to be using new tool to shop for mortgages]
Economists who had predicted home loan rates would steadily rise in 2016 are now revising their expectations but cautioning to expect continued volatility with rates bouncing up and down.
“The average 30-year rate will likely remain under 4 percent throughout the spring and summer and into the early fall,” said Jonathan Smoke, chief economist at realtor.com. “The average forecast sees the 30-year conforming rate ending the year at 4.21 percent, which would be 12 basis points higher than we ended 2015.”
Nearly three-quarters of the experts surveyed by Bankrate.com, which puts out a weekly mortgage rate trend index, said that rates will remain relatively unchanged this week. Less than 10 percent believe they will rise.
Meanwhile, mortgage applications were flat this week, according to the latest data from the Mortgage Bankers Association.
The market composite index — a measure of total loan application volume — ticked up 0.4 percent from the previous week. The refinance index inched up 0.5 percent, while the purchase index nudged up 0.4 percent.
The refinance share of mortgage activity accounted for 52.8 percent of all applications. |
// +build linux freebsd openbsd netbsd

package dnsclientconf

import (
	"net"

	"github.com/lohchab/dns-client-conf/dhclient"
	"github.com/lohchab/dns-client-conf/resolvconf"
)

type dNSConfig struct {
	resolvConfigPath, dhclientConfigPath, dhclientConfigPathBackup string
	iface                                                          *net.Interface
}

// NewDNSConfigurator builds a DNSConfigurator bound to the package-level
// default paths and interface name. Note that a failed interface lookup is
// silently ignored and leaves iface nil.
func NewDNSConfigurator() DNSConfigurator {
	iface, _ := net.InterfaceByName(InterfaceName)
	return &dNSConfig{ResolvConfigPath, DhclientConfigPath, DhclientConfigPathBackup, iface}
}

// GetNameServers returns the name servers currently listed in resolv.conf.
func (dnsconf *dNSConfig) GetNameServers() (addrs []net.IP, err error) {
	return resolvconf.GetNameServers(dnsconf.resolvConfigPath)
}

// AddNameServers pins static name servers in the dhclient config, then reloads.
func (dnsconf *dNSConfig) AddNameServers(addrs []net.IP) (err error) {
	err = dhclient.AddNameServers(addrs, dnsconf.dhclientConfigPath, dnsconf.dhclientConfigPathBackup)
	if err != nil {
		return err
	}
	return dnsconf.ReloadNameServers()
}

// DHCPNameServers removes the static entries so name servers are taken from DHCP again.
func (dnsconf *dNSConfig) DHCPNameServers() (err error) {
	err = dhclient.RemoveNameServers(dnsconf.dhclientConfigPath, dnsconf.dhclientConfigPathBackup)
	if err != nil {
		return err
	}
	return dnsconf.ReloadNameServers()
}

// SetInterface overrides the network interface the configurator targets.
func (dnsconf *dNSConfig) SetInterface(iface *net.Interface) {
	dnsconf.iface = iface
}
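A brief usage sketch, assuming the DNSConfigurator interface (defined elsewhere in this package) exposes the methods implemented above; the resolver addresses are placeholders:

// Hypothetical usage, not part of the package itself.
func exampleConfigure() error {
	conf := NewDNSConfigurator()
	current, err := conf.GetNameServers()
	if err != nil {
		return err
	}
	if len(current) == 0 {
		// Pin two well-known public resolvers and reload dhclient.
		return conf.AddNameServers([]net.IP{
			net.ParseIP("1.1.1.1"),
			net.ParseIP("8.8.8.8"),
		})
	}
	return nil
}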
|
Experimental study of the heat transport processes in dusty plasma fluid. The results are given of an experimental investigation of heat transport processes in fluid dusty structures in rf-discharge plasmas under different conditions: for a discharge in argon, and for a discharge in air under the action of an electron beam. The analysis of steady-state and unsteady-state heat transfer is used to obtain the coefficients of thermal conductivity and thermal diffusivity under the assumption that the observed heat transport is associated with thermal conduction in the dusty component of the plasma. The temperature dependence of these coefficients is obtained, which agrees qualitatively with the results of numerical simulation for simple monatomic liquids. |
It's the time of year when darkness comes early and people begin to sum up how this year has gone and next year will unfold. It's also the time of year that predictions about developments in the technology industry over the next 12 months are in fashion. I've published cloud computing predictions over the past several years, and they are always among the most popular pieces I write.
Looking back on my predictions, I'm struck not so much by any specific prediction or even the general accuracy (or inaccuracy) of the predictions as a whole. What really comes into focus is how the very topic of cloud computing has been transformed.
Four or five years ago, cloud computing was very much a controversial and unproven concept. I became a strong advocate of it after writing Virtualization for Dummies and being exposed to Amazon Web Services in its early days. I concluded that the benefits of cloud computing would result in it becoming the default IT platform in the near future.
I'm pleased to say that my expectation has indeed come to pass. It's obvious that cloud computing is becoming the rule, with noncloud application deployments very much the exception. Skeptics continue to posit shortcomings regarding cloud computing, but the scope of their argument continues to shrink.
Today, the arguments against cloud computing are limited. "Some applications require architectures that aren't well-suited for cloud environments," critics say, suggesting that cloud environments aren't universally perfect for every application. This shouldn't really be a surprise.
Other critics cite some mix of security and compliance, although the sound level of this issue is far lower in the past. In his 2014 predictions, Forrester's James Staten says, "If you're resisting the cloud because of security concerns, you're running out of excuses" and notes that cloud security has pretty much proven itself.
I've always taken a different perspective: The alarm raised about cloud security was just air cover for IT personnel who didn't want to change their established practices. Furthermore, the concern about security would disappear not because cloud providers suddenly "proved" they were secure enough but because recalcitrant IT personnel read the writing on the wall and realized they had to embrace cloud computing or face the prospect of a far larger change - unemployment.
With the triumph of cloud computing underway, what will 2014 hold for developments in the field? This is going to be an especially exciting year for cloud computing. The reason is simple: In 2014, the realization that the cloud can spawn entirely new types of applications will come into general awareness. Expect to see many articles next year proclaiming the wondrousness of one cloud application or another and how cloud computing made such an application possible when it never could have existed before. Count on it.
With that in mind, here's my set of 2014 cloud computing predictions. As in the past, I present the list broken into two sections: Five end-user predictions and five vendor/cloud provider prognostications. I do this because too many predictions focus on the vendor side of things. From my perspective, the effect of cloud computing on users is just as important and worthy of attention.
A couple of years ago, Mark Andreessen proclaimed that " software is eating the world." Next year, this will become profoundly obvious. Simply put, every product or service is getting wrapped in IT, and all these applications will find their home in the cloud.
To provide one personal example, my doctor suggested at my annual physical that I begin tracking my blood pressure. The solution to sending him the results? Instead of tracking the numbers in a spreadsheet and forwarding it via email, he recommended a blood pressure monitor that connects to a smartphone app, automatically sends the results to his firm and communicates to an application that downloads the numbers into my medical record.
The net effect of the ongoing shift to IT-wrapped products and services is that global IT spend will increase significantly as IT shifts from back-office support to frontline value delivery. The scale of IT will outstrip on-premises capacity and result in massive adoption of cloud computing.
If applications are becoming more central to business offerings, then those who create the applications become more important. The analyst firm RedMonk refers to this trend as "the developer as kingmaker," since developers are now crucial in business offering design and implementation. There's an enormous upwell of change in development practices, driven by the ongoing shift to open source and the adoption of agile and continuous delivery processes. This improves the productivity and creativity of developers, and it leads developers to release more interesting and important applications.
It's no secret that developers drove much of the early growth of cloud computing, frustrated by the poor responsiveness of central IT and attracted by the immediate availability of resources from cloud providers. That early adoption cemented developer expectations that access to cloud resources should be easy and quick, which will result in much more cloud adoption by increasingly important developers.
It will be interesting to watch mainstream companies address the new importance of developers. Many have traditionally downplayed the importance of IT and treated it as a cost center to be squeezed - or, via outsourcing, eliminated entirely. When these companies start to ramp up their app development efforts, they will confront an expensive resource pool with plenty of job options. They may choose to outsource these applications to specialized agencies and integrators whose high prices can be paid without upsetting the general pay structure within the company.
Gartner made a big splash last year when it forecast that CMOs will control more than 50 percent of IT spending by 2017. One cannot know, of course, how this forecast will turn out, but the two predictions above clearly reflect a similar perspective: When it comes to applications, end-users are increasingly in the driver's seat. The question is whether those business-oriented applications will be deployed on-premises or in the cloud.
Look for many stories in 2014 about companies launching new Internet-enabled business offerings and recognize that 99 percent of them rely on the cloud for back-end processing.
For the past several years, IT organizations have acknowledged that cloud computing provides undoubted benefits, but have maintained that security and privacy concerns necessitate that an internal cloud be implemented before "real" applications can be deployed. Many of these private cloud initiatives have been extended processes, though, bogged down by budgeting, lengthy vendor assessments, employee skill building and, yes, internal politics.
While this kind of delay could be accepted as part of the growing pains of shifting to a new platform, 2014 will force companies to really assess the progress of their private cloud efforts. Application workload deployment decisions are being made every day; every decision that puts the application in the public cloud means one more application that will never be deployed internally. Forget the "transfer your cloud applications to a production-quality internal cloud" rhetoric often spouted by vendors. The reality is that, once deployed, applications find their permanent home.
This means IT organizations planning private cloud environments have a short runway to deliver something. Otherwise, the cloud adoption decision will be made in a de facto fashion. Moreover, the measure of that private cloud will be how well it matches up to the convenience and functionality of public providers. A "cloud" that makes IT operators' jobs more convenient but does nothing for cloud users will end up a ghost town, bypassed by developers and business units on their journey to agility and business responsiveness.
Just as private cloud computing will face some hard truths in 2014, so, too, will the vision of hybrid cloud computing as a single homogenous technology spanning internal data centers and external cloud environments hosted by the internal technology provider. The reality is that every enterprise will use multiple cloud environments delivered with heterogeneous platforms. The crucial need will be to create or obtain capabilities to manage the different cloud environments with a consistent management framework - i.e., cloud brokerage.
The need for cloud brokerage extends beyond the technical, by the way. While a "single pane of glass" using consistent tools and governance across a variety of cloud environments is crucial, managing utilization and cost in those environments is just as important, and that will become increasingly evident in 2014.
Financial brokerage capability will come into focus next year because companies will have rolled out significant applications and find that resource use and overall costs are unpredictable, given the "pay per use" model of public cloud providers. Having tools that track resource use and make recommendations to optimize utilization and overall cost will join technical brokerage as a key cloud computing requirement.
By the way, cloud brokerage is a way for IT to deliver value to end users. Once IT recognizes it's in the business of infrastructure management and not asset ownership, it will recognize the immense value it can deliver to assist developers and business users make the best use of public cloud computing environments.
One of the most striking things about Amazon is how rapidly it is evolving its service and how often it delivers new functionality. Sometimes the cloud service provider (CSP) industry resembles one of those movies in which one character speeds through a scene while all the other actors move at an agonizingly slow pace.
It's traditional that once a vendor gets a new product or service established, its pace of innovation drops as it confronts the need to help its customers adopt its initial innovative offering. This phenomenon even has a storied name, Crossing the Chasm, from the eponymous book, which refers to how vendors have to become more like their existing competitors in order to achieve mainstream success.
As AWS crosses the $4 billion revenue mark, it doesn't seem to be decelerating its innovation progress. Far from it. At its recent re:invent 2013 conference, AWS announced five major new offerings and pointed out that it would deliver more than 250 new offerings or service improvements in 2013.
Nothing indicates that 2014 will be any different; expect many new AWS services and service offerings. In a recent set of posts on AWS hardware and software infrastructure, I note that, during its first years, AWS created a global, highly scaled infrastructure that reliably delivers foundation capabilities in computing, storage and networking.
Today, AWS can leverage those building blocks to create higher-level functionality targeted at emerging needs of its customers. For example, the just-announced AWS Kinesis event processing service uses EC2, Elastic Load Balancer, DynamoDB, and IAM as ingredients, along with Kinesis service-specific code, combined as part of a new recipe.
Amazon achieves its innovation because it approaches cloud computing as a software discipline, not an extension of hosting, as most of its competitors do. Next year will see further evidence that the software approach delivers customer value much more quickly than the hardware approach.
In a way, AWS has had a free ride to this point. Most of its competition has come from the hosting world, and, as noted, is unable to take a software approach to the domain. The inevitable result: AWS has improved, and grown, much more rapidly than other CSPs.
That unopposed free run will end in 2014. Both Google and Microsoft have AWS in their crosshairs and are rolling out serious competitive offerings, designed for an all-out battle royale. Both have, finally, recognized that their initial cloud offerings were inadequate. (Both, in my mind, seemed like offerings that customers should find superior to AWS, and both companies appeared baffled that the advantages of the offerings, though clear to them, went unappreciated by potential customers.) With Version 2.0, both companies deliver directly competitive cloud offerings.
Microsoft has an obvious opportunity here. It has an enormous installed base and a huge developer community. Its offering integrates directly with existing development tools and makes it easy to host an application in Azure. Its greatest challenge may not be in technology, but in redirecting the inertia of its existing business and partner base. I've heard rumblings about touchy Microsoft resellers wondering what their role in the Microsoft future will be. The inevitable temptation for the company will be to water down its Azure initiative to placate existing partners. That would be crippling, but it's understandable why the dynamics of existing relationships might prevent (or hamper) Microsoft's Azure progress. Nevertheless, Microsoft has plainly come to understand that AWS represents a mortal threat and has wheeled to go up against it.
Google is in a different position. It has no installed base threatened by AWS. Nevertheless, it has decided to come right after Amazon, using its deep pockets and outstanding technical resources as weapons. This very interesting post, describing numerous advantages Google Compute Engine has vis-à-vis AWS, makes it clear that Google is directly aiming at technical shortcomings in the AWS offerings. In a way, this is a refreshing change from the feeble attempts of other erstwhile competitors, who insisted on trumpeting "advantages" over AWS that nobody really cares about.
The Google offering is, perhaps, the more intriguing of the two. Over the past decade, Google has been far more innovative than Microsoft; that alone implies that it might be the most creative opponent AWS faces over the next year.
In any case, for AWS the CSP market will no longer be like shooting fish in a barrel, and 2014 will present the beginning of a multi-year tussle for dominance among these three.
Nearly everyone has heard of the "network effect," which refers to the added value to a group of users when one more user joins. It's sometimes summed up as, "If there's just one fax machine, it's pretty much useless": unless many people have fax machines you can send faxes to or receive faxes from, owning a fax machine doesn't provide much value. (It's a funny turn of events that we're pretty much back to the early state of affairs with fax machines - hardly anyone has one and, yes, the remaining ones aren't worth much).
With respect to technology platforms, there's a symbiotic relationship between the network effect of the number of users and the richness of the platform functionality. This often isn't based on - or not solely on - the capability of the platform itself but rather on the complementary third-party services or products. More users make a platform more attractive for third-party offerings, which in turn makes the platform more attractive for users deciding which platform to adopt.
Today, the richness of the CSP ecosystems is completely lopsided. Not only does AWS provide a far richer services platform than its competitors, it has by far the larger number of complementary services provided by third parties. In 2014, as more applications get deployed to public cloud providers, the importance of the ecosystem will come into focus.
The richness of a platform's ecosystem directly affects how quickly applications can be created and delivered. Cloud platforms that have a paltry ecosystem are fated to suffer, even if their foundation services, such as virtual machines and network capability, are better than Amazon's.
Clearly, Microsoft should be able to roll out a rich ecosystem, since a key to Windows' success is its ecosystem; much of it, presumably, should port to Azure fairly easily. We'll see how Google progresses on this next year. By the end of next year, the discussion will move on from who has the best VMs to who best enables applications, with a recognition that the richness of a platform's ecosystem is crucial.
VMware has been in a funny position with respect to cloud computing. Its undoubted platform advantages inside the corporate data center haven't been matched by a concomitant public cloud success. For whatever reason - or, perhaps, for a number of reasons - VMware's public CSP partners haven't been able to generate large adoption for the VMware flavor of cloud computing.
VMware is now taking another run at this, with an approach explicitly designed to extend and integrate on-premises environments into a VMware-directed hybrid cloud offering. Certainly, this approach holds a lot of promise. The capability to seamlessly transfer a workload from internal to external environments could solve a lot of headaches for IT organizations.
This approach, dubbed vCHS, can provide benefits beyond simple technology consistency, in that it would enable IT organizations to focus on one set of personnel skills, thereby reducing costs and complexity.
Next year will be important for VMware and its vCHS offering. As noted above, companies are making decisions right now that will set their course for the future. If VMware hopes to play as important a role in public cloud computing as it does in internal data center environments, it needs to be part of those decisions. There's not a lot of time left to gain a spot on short lists. You can be sure that VMware recognizes how important 2014 will be to its future and that it's planning an aggressive campaign to maintain its market-leading position.
Amazon has had a clear field to this point. Most of its competition has, in effect, competed on the wrong front, or at least chosen to try and differentiate on offering aspects about which most adopters are apathetic. One key difference between AWS and most of its competition is cost. While much of Amazon's competition has aligned its pricing with existing hosting models, requiring significant commitments in terms of both amount of resource and duration of contract, Amazon makes it easy to get started for a few dollars, with no commitment at all.
In effect, this has meant that Amazon is competing with itself - and, to its credit, it has reduced prices since it first launched AWS. That field of one is going to expand this year with the arrival of Microsoft and Google. The result will be a ferocious price war, with all three companies repeatedly dropping costs to maintain (Amazon) or attain (Microsoft and Google) market share. Not only is this a battle for market dominance, it reflects the nature of cloud computing: A capital-intensive industry in which maintaining high utilization is critical.
For other cloud providers, witnessing this competitive melee won't just be a jolly spectator sport. Every cloud provider is going to be confronted - on a daily and ongoing basis - with three deep-pocketed competitors one-upping each other every time they drop their prices. Inevitably, other CSPs will suffer collateral damage as potential customers bring the list prices of the big three into contract negotiations and expect them to match what they are offering. For those without low cost of capital and their own deep pockets, next year will be the beginning of a long, slow descent into a financial morass, solved only by industry consolidation or shuttering their offerings.
The airline industry is instructive in this regard. As with cloud computing, airlines are a capital-intensive business; airplanes cost a lot, while seats are sold on a low-cost, low-commitment basis. The key to the airline industry is yield management, which is its version of infrastructure utilization. The past few years have witnessed multiple airline bankruptcies and merger-mania. Next year the cloud computing market will look a lot like the airline industry - great for customers, but perilous for providers.
Well, there you have it - my 2014 cloud computing predictions. In a sense, what has happened in the industry to this point has been the prologue for the main cloud computing story. Next year represents the beginning of the main story. In 2014, we'll see cloud computing become the dominant platform for IT from now on. There will be many successes as users learn to take advantage of the new capabilities cloud computing offers, along with challenges to many in the industry - both users and vendors - who struggle to make a successful transition to the platform of the future.
Bernard Golden is senior director of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden. |
Predicting Unconsciousness from a Pediatric Brain Injury Threshold

The objective of this study was to utilize tissue deformation thresholds associated with acute axonal injury in the immature brain to predict the duration of unconsciousness. Ten anesthetized 3- to 5-day-old piglets were subjected to nonimpact axial rotations (110–260 rad/s) producing graded injury, with periods of unconsciousness from 0 to 80 min. Coronal sections of the perfusion-fixed brain were immunostained with neurofilament antibody (NF-68) and examined microscopically to identify regions of swollen axons and terminal retraction balls. Each experiment was simulated with a finite element computational model of the piglet brain and the recorded head velocity traces to estimate the local tissue deformation (strain), the strain rate and their product. Using thresholds associated with 50, 80 and 90% probability of axonal injury, white matter regions experiencing suprathreshold responses were determined and expressed as a fraction of the total white matter volume. These volume fractions were then correlated with the duration of unconsciousness, assuming a linear relationship. The thresholds for 80 and 90% probability of predicting injury were found to correlate better with injury severity than those for 50%, and the product of strain and strain rate was the best predictor of injury severity (p = 0.02). Predictive capacity of the linear relationship was confirmed with additional (n = 13) animal experiments. We conclude that the suprathreshold injured volume can provide a satisfactory prediction of injury severity in the immature brain. |
// memecity.engine.ui/UI/Overlay/OverlayBarItem.cpp
#include "OverlayBarItem.h"
void memecity::engine::ui::overlay::OverlayBarItem::render()
{
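// draw the full-width background bar first, then the filled portion scaled by value / max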
multimedia_manager->render_rect(x, y, width, 10, true, background);
multimedia_manager->render_rect(x, y, width/max*value, 10, true, forground);
}
|
/**
* The type Training sample.
*
* @since 4.3.3
*/
public class TrainingSample implements Comparator<TrainingSample>, Comparable<TrainingSample> {
/**
* The Base file path.
*/
// raw inputs
public String baseFilePath;
/**
* The Version a file path.
*/
public String versionAFilePath;
/**
* The Version b file path.
*/
public String versionBFilePath;
/**
* The Manual merged file path.
*/
public String manualMergedFilePath;
/**
* The Git merged file path.
*/
public String gitMergedFilePath;
/**
* The Starting line of conflict block.
*/
public Integer startingLineOfConflictBlock;
/**
* The Entity id.
*/
public String entityID;
/**
* The Base content str.
*/
public String baseContentStr;
/**
* The Base content array.
*/
public List<String> baseContentArray;
/**
* The Version a content str.
*/
public String versionAContentStr;
/**
* The Version a content array.
*/
public List<String> versionAContentArray;
/**
* The Version b content str.
*/
public String versionBContentStr;
/**
* The Version b content array.
*/
public List<String> versionBContentArray;
/**
* The Manual merged str.
*/
public String manualMergedStr;
/**
* The Manual merged array.
*/
public List<String> manualMergedArray;
/**
* The Version a operation type.
*/
public OperationType versionAOperationType = OperationType.NONE;
/**
* The Version b operation type.
*/
public OperationType versionBOperationType = OperationType.NONE;
// input features
// intra-revision metrics: [0: base; 1: mergeTo; 2: mergeFrom]
/**
* The Intra revision syntactic features.
*/
public int[][] intraRevisionSyntacticFeatures = new int[3][11];
// inter-revision metrics: [0: base; 1: mergeTo; 2: mergeFrom]
/**
* The Inter revision diff features.
*/
public double[][] interRevisionDiffFeatures = new double[3][12];
/**
* The Remark.
*/
public String remark = ConflictResolvingEngine.REMARK;
/**
* The Oracle merge solution.
*/
// output features
public MergeSolutionType oracleMergeSolution = MergeSolutionType.OTHERS;
/**
* The Predicted merge solution.
*/
public MergeSolutionType predictedMergeSolution = MergeSolutionType.OTHERS;
/**
* Sets raw metrics.
*
* @param conflictEntity the conflict entity
*/
public void setRawMetrics(ThreeWayConflictEntity conflictEntity) {
if (conflictEntity.ownerBlock != null && conflictEntity.ownerBlock.ownerFileSet != null) {
baseFilePath = conflictEntity.ownerBlock.ownerFileSet.getPath(ThreeWayModel.BASE);
versionAFilePath = conflictEntity.ownerBlock.ownerFileSet.getPath(ThreeWayModel.OURS);
versionBFilePath = conflictEntity.ownerBlock.ownerFileSet.getPath(ThreeWayModel.THEIRS);
manualMergedFilePath = conflictEntity.ownerBlock.ownerFileSet.getPath(ThreeWayModel.MANUAL);
gitMergedFilePath = conflictEntity.ownerBlock.ownerFileSet.getPath(ThreeWayModel.MANUAL);
startingLineOfConflictBlock = conflictEntity.ownerBlock.startingLine;
}
entityID = conflictEntity.entityID;
baseContentStr = conflictEntity.getContentStr(ThreeWayModel.BASE);
this.baseContentArray = conflictEntity.getContent(ThreeWayModel.BASE);
versionAContentStr = conflictEntity.getContentStr(ThreeWayModel.OURS);
this.versionAContentArray = conflictEntity.getContent(ThreeWayModel.OURS);
versionBContentStr = conflictEntity.getContentStr(ThreeWayModel.THEIRS);
this.versionBContentArray = conflictEntity.getContent(ThreeWayModel.THEIRS);
this.manualMergedArray = conflictEntity.getContent(ThreeWayModel.MANUAL);
manualMergedStr = conflictEntity.getContentStr(ThreeWayModel.MANUAL);
versionAOperationType = conflictEntity.getOperationType(ThreeWayModel.OURS);
versionBOperationType = conflictEntity.getOperationType(ThreeWayModel.THEIRS);
}
@Override
public int compareTo(TrainingSample o) {
// TODO Auto-generated method stub
return 0;
}
@Override
public int compare(TrainingSample o1, TrainingSample o2) {
// TODO Auto-generated method stub
return 0;
}
/**
* Extract intra revision syntactic features.
*
* @param synFeatures the syn features
* @param i the
*/
public void extractIntraRevisionSyntacticFeatures(SyntacticFeatureSet synFeatures, int i) {
// intra-revision metrics: [0: base; 1: mergeTo; 2: mergeFrom]
int[] result = intraRevisionSyntacticFeatures[i];
result[0] = synFeatures.isOnlyContainComments() ? 1 : 0;
result[1] = synFeatures.commentNumber;
result[2] = synFeatures.codeStatementNumber;
result[3] = synFeatures.newlyDefinedVariables.size();
result[4] = synFeatures.newlyInitiatedObjects.size();
result[5] = synFeatures.usedPreviousVariables.size();
result[6] = synFeatures.forStmtNumber;
result[7] = synFeatures.whileStmtNumber;
result[8] = synFeatures.foreachStmtNumber;
result[9] = synFeatures.ifOrElseBranchNumber;
result[10] = synFeatures.getMethodCallees().size();
}
/**
* Extract inter revision syntactic features.
*
* @param featureId the feature id
* @param sourceFeatures the source features
* @param targetFeatures the target features
*/
public void extractInterRevisionSyntacticFeatures(
int featureId, SyntacticFeatureSet sourceFeatures, SyntacticFeatureSet targetFeatures) {
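// For each feature category below (used variables, assigned variables,
// newly instantiated objects, method callees): count elements shared with
// the source revision and elements newly added in the target, then store
// the overlap ratio together with the added/deleted counts.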
int sharedVariables = 0;
int newlyAdded = 0;
int newlyDeleted = 0;
for (String var : targetFeatures.usedPreviousVariables) {
if (sourceFeatures.usedPreviousVariables.contains(var)) {
sharedVariables++;
} else {
newlyAdded++;
}
}
if (sourceFeatures.usedPreviousVariables.size() == 0) {
this.interRevisionDiffFeatures[featureId][0] = 0;
} else {
this.interRevisionDiffFeatures[featureId][0] =
sharedVariables * 1.0 / sourceFeatures.usedPreviousVariables.size();
}
this.interRevisionDiffFeatures[featureId][1] = newlyAdded;
this.interRevisionDiffFeatures[featureId][2] = newlyDeleted;
// assigned variables: overlap with the source revision, plus newly added/deleted counts
sharedVariables = 0;
newlyAdded = 0;
newlyDeleted = 0;
for (String var : targetFeatures.assignedVariables) {
if (sourceFeatures.assignedVariables.contains(var)) {
sharedVariables++;
} else {
newlyAdded++;
}
}
if (sourceFeatures.assignedVariables.size() == 0) {
this.interRevisionDiffFeatures[featureId][3] = 0;
} else {
this.interRevisionDiffFeatures[featureId][3] =
sharedVariables * 1.0 / sourceFeatures.assignedVariables.size();
}
this.interRevisionDiffFeatures[featureId][4] = newlyAdded;
this.interRevisionDiffFeatures[featureId][5] = newlyDeleted;
// newly instantiated objects: overlap with the source revision, plus newly added/deleted counts
sharedVariables = 0;
newlyAdded = 0;
newlyDeleted = 0;
for (String var : targetFeatures.newlyInitiatedObjects) {
if (sourceFeatures.newlyInitiatedObjects.contains(var)) {
sharedVariables++;
} else {
newlyAdded++;
}
}
if (sourceFeatures.newlyInitiatedObjects.size() == 0) {
this.interRevisionDiffFeatures[featureId][6] =
sharedVariables * 1.0 / sourceFeatures.newlyInitiatedObjects.size();
} else {
this.interRevisionDiffFeatures[featureId][6] =
sharedVariables * 1.0 / sourceFeatures.newlyInitiatedObjects.size();
}
this.interRevisionDiffFeatures[featureId][7] = newlyAdded;
this.interRevisionDiffFeatures[featureId][8] = newlyDeleted;
// invoked methods (callees): overlap with the source revision, plus newly added/deleted counts
sharedVariables = 0;
newlyAdded = 0;
newlyDeleted = 0;
for (String var : targetFeatures.methodCallees) {
if (sourceFeatures.methodCallees.contains(var)) {
sharedVariables++;
} else {
newlyAdded++;
}
}
if (sourceFeatures.methodCallees.size() == 0) {
this.interRevisionDiffFeatures[featureId][9] = 0;
} else {
this.interRevisionDiffFeatures[featureId][9] = sharedVariables * 1.0 / sourceFeatures.methodCallees.size();
}
this.interRevisionDiffFeatures[featureId][10] = newlyAdded;
this.interRevisionDiffFeatures[featureId][11] = newlyDeleted;
}
} |
Seismic behaviour of brickwork chimneys in buildings

The construction of chimneys of solid bricks in buildings with sloped roofs was commonplace in Bulgaria for almost a century. The collapse of a chimney during an earthquake could potentially lead to damages greatly exceeding the loss of the chimney itself, e.g. partial damage to the roof tiling and leaks, as well as material damage, injury or loss of life due to debris fall. A FEM model was created, in which the storeys of the building are represented in a generalised way, while the chimney is modelled explicitly as a cantilever supported at roof level. The internal forces in chimneys with heights ranging from 0.5 m to 2.0 m, belonging to buildings with heights ranging from two to seven storeys, were computed. Acceleration records from real earthquakes acting at the base of the building with varying peak ground acceleration and predominant period were used for input loading. The maximum tensile stresses at the bed joints were computed and compared to the typical tensile strength of the mortars used for chimney construction, to assess the possibility of collapse. A simple, low-tech method for upgrading of existing chimneys by applying a coat of cement-based plaster with embedded fiberglass mesh is proposed.

Introduction

The widespread construction of chimneys made of solid brick units and lime or lime-cement mortar in buildings with sloped roofs began after Bulgaria became an independent country in 1878, and continued until industrialized construction became commonplace. During the 2012 earthquake with magnitude M = 5.8 and epicentre in the vicinity of the city of Pernik, a large number of masonry chimneys in the city collapsed (figure 1). Debris of masonry chimneys were also observed on the pavements in the capital city, Sofia, although virtually no structural damage occurred there. Such behaviour could potentially lead to damages greatly exceeding the loss of the chimney itself, e.g. partial damage to the roof tiling and leaks, as well as material damage, injury or loss of life due to debris falling down from the roof. The problem is further aggravated by the location of buildings with masonry chimneys within a city. Given their relatively old age, they are concentrated in or close to the city centres, where streets are generally narrower and the number of pedestrians and parked vehicles larger, thus increasing the probability of secondary damage. Research on the seismic behaviour of chimneys is focused mostly on industrial ones, which pose a greater threat, are structurally more challenging and possess some heritage value; this is rarely the case for chimneys of residential or office buildings. No past research on the seismic response of 'small' chimneys is known to the author. The objective of this study is to clarify their behaviour when subjected to ground shaking, and to assess the likelihood of failure for different levels of ground motion and mortar tensile strength.

Figure 1. (a) a house with collapsed chimneys; (b) a block with heavy damage to chimneys.

Analysis model

This study only considers a scenario whereby a chimney collapses before significant damage has occurred to the building it belongs to.
In other words, cases where collapse of the chimney is caused by damage or collapse of the building are not considered. This assumption allows us to use elastic material behaviour, considering dissipative mechanisms through increased damping at larger ground motion levels, instead of explicitly specifying nonlinear material behaviour. Also, with tensile failure at the bed joints being the reason for the onset of collapse, elastic analysis is justified as far as the behaviour of the chimney is concerned, considering the low, sometimes negligible strength of mortars used for chimney construction. The modelling and the analyses were carried out with the computer program SeismoStruct (https://seismosoft.com).

Model parameters

The analysis model is shown in figure 2. It is a finite element model, in which the storeys of the building are represented in a generalised way by their intrinsic stiffness and mass, while the chimney is modelled explicitly as a cantilever supported at roof level. The lumped mass value of 258 t at each floor level is calculated assuming it represents a floor in a building with floor area of 200 m2, 0.25 m thick external walls, 0.12 m thick internal walls and standard finishes. A single frame element is specified between two floors. Its bending stiffness is adjusted so that a seven-story building has a natural period of 0.7 sec, considering that, as a rule of thumb, buildings like the ones being studied have a natural period of 0.1 sec per story. The rotational degrees of freedom at each floor level are restrained, and the base node at which the ground motion is input is fully restrained. The resulting model is thus of a symmetrical shear building. The range of building heights investigated is between 2 stories and 7 stories. Eigenvalue analysis produced fundamental periods of 0.237, 0.334, 0.423, 0.512 and 0.605 sec for the 2-, 3-, 4-, 5- and 6-story buildings respectively, which agrees well with the 0.1 sec per story assumption. Single-story buildings were excluded from the study for two reasons: a) they are very rare (at least in Bulgaria), and b) their natural periods are so short that they are not expected to significantly amplify the base ground motion. On the other hand, buildings with brick chimneys higher than 7 stories are extremely rare. A separate model was created for each of the six individual building heights. A chimney is fixed to the top floor node and is modelled by four frame elements of equal length. The Young modulus was specified as 10,000 MPa, a typical value for brick masonry, while the specific weight was specified as 20 kN/m3, a little higher than the usual value of 18 kN/m3 to allow for additional loads such as plaster and concrete crowns on top of the chimney. Also, due to limitations of the program, the wall between the two flues of the stack is not included in the model, so the increased specific weight caters for this too. For the chimney the mass is automatically lumped at the nodes by the program. The range of chimney heights investigated is between 0.5 m and 2.00 m at 0.25 m increments, a total of seven chimney heights. This range was decided upon by visual inspection of the skyline of the city of Sofia and other Bulgarian cities. In order to reduce the number of runs, all seven chimneys were included in a model of a particular building height.
This would not alter the response significantly, as the total weight of the seven chimneys is 3 t compared to the floor mass of 258 t. The cross section of the chimney used in the analyses represents a double-flue stack. This choice was made for two reasons: a) single-flue stacks, which would undoubtedly produce higher stresses, are very rare; b) the contribution of the bending moment about the strong axis to the total normal stress is small enough to assume that the behaviour of a double-flue stack is not overly conservative, and may be used to represent the behaviour of stacks with more flues. The resulting model is deemed simple, yet sufficient for the purpose of this study. The building is modelled so that its most important modes of vibration can be activated, and the ground motion is filtered and modified accordingly while it travels to the top of the building and becomes the input motion for the chimney.

Input ground motion and damping

The 2012 Pernik earthquake was recorded at three locations in Sofia, with the ground motions described and analysed in detail elsewhere. The shapes of these three ground motions were used as reference, while the peak ground acceleration (PGA) of the stronger horizontal component was scaled to levels of 0.05 g, 0.10 g, 0.15 g and 0.20 g in order to investigate the behaviour with increasing strength of the ground shaking. The buildings studied here are of different structural typology, given the time span of almost a century during which they were built. They can be masonry buildings with deformable or RC floors, with walls unframed or framed by RC elements. In terms of seismic design, they can be pre-code, low-code or sometimes moderate-code buildings. The upper bound of 0.20 g was chosen because, according to the results of published vulnerability studies, the investigated building types are likely to develop significant damage for higher PGA, and thus may collapse together with the chimneys. To account for the different degree of energy dissipation from all sources within the framework of linear elastic analysis, a different damping ratio was adopted for each PGA level listed above, namely 2%, 3%, 4% and 5% of critical, respectively. The damping was specified as Rayleigh type, whereby for the fundamental period of the building it has the values given above, according to the PGA, and for the fundamental period of the 2.00 m high chimney it is a constant 2%, considering the low levels of stress the chimney is exposed to. Note that the use of varying damping ratios precludes the simple scaling of response values which could be done if a constant damping were adopted throughout the analyses. All three components of the ground motion were applied simultaneously at the base node of the model, and time-history analysis was carried out with a duration of 25 sec, sufficient to include the strongest part of each record.

Results and discussion

Three tri-axial ground motion records, four levels of PGA, seven chimney heights and six building heights were used in the analyses to produce a total of 504 data sets. For each of them, the maximum tensile stress perpendicular to the bed joint at the node where the chimney stack is attached to the building is computed by the formula

sigma_t = My/Wy + Mx/Wx - N/A,

where My, Mx and N are the bending moments about the principal axes of the cross section and the axial force, while Wy, Wx and A are the section moduli about the principal axes of the cross section and the cross-sectional area.
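A minimal numerical sketch of this stress check (in Python; all input values below are illustrative assumptions for demonstration, not the paper's actual section properties or results):

def max_tensile_stress(m_y, m_x, n, w_y, w_x, a):
    """Peak bed-joint tensile stress: sigma_t = My/Wy + Mx/Wx - N/A."""
    return m_y / w_y + m_x / w_x - n / a

# Illustrative values for a double-flue stack (assumed, for demonstration only)
M_y = 6.85e3                # bending moment about the weak axis, N*m
M_x = 1.00e3                # bending moment about the strong axis, N*m
N = 8.00e3                  # axial compression from self-weight, N
W_y, W_x = 1.0e-2, 2.0e-2   # section moduli, m^3
A = 0.29                    # cross-sectional area, m^2

sigma_t = max_tensile_stress(M_y, M_x, N, W_y, W_x, A)  # Pa
f_t = 0.10e6                # assumed mortar tensile strength, Pa
print("stack fails in tension" if sigma_t > f_t else "stack survives")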
The resulting stresses are compared to the tensile strength of the bed-joint mortar. If the stress is higher than the strength, the stack is assumed to have failed in tension, and therefore to be at high risk of collapsing. Whether collapse would actually occur cannot be established with the analysis implemented here, and is beyond the scope of this study.

Common features of the response

A number of important trends were observed for all data sets, as follows. The bending moments about the principal axes of the chimney stack obtained from the analyses have the same values as the theoretical bending moments in a cantilever subjected to a uniformly distributed load q = m·a, where m is the mass per unit length of the stack and a is the acceleration at the top of the building/base of the stack. The influence of the fluctuation of the axial force due to the vertical vibrations is negligible when obtaining the maximum of the time history of stress values computed by the equation above. Practically the same maximum is obtained when the contribution of the axial force is computed using the constant self-weight. The reason for the above is that the chimney stack itself is practically rigid, having a fundamental period of 0.027 sec. Thus no dynamic amplification occurs within the chimney stack. Therefore, for practical purposes, the bending moments in a chimney stack can be obtained quickly and reliably by knowing the acceleration at its base in the direction of the two principal axes. It is not necessary to include the chimney stack explicitly in the computational model. It was confirmed that the chimney stack can be regarded as an independent (secondary) structure fixed to the building it belongs to instead of to the ground. Therefore, the way the building amplifies the ground motion is crucial to the response of the chimney stack. Indeed, it was observed that, due to matching of the response spectrum peaks of the ground motion and the fundamental period of the model, the response of a particular stack height to a particular ground motion strongly depends on the height of the building (in terms of stories) it is attached to.

Vulnerability of the chimneys

The tensile strength sigma_t of lime or lime-cement mortars used for masonry varies, with values ranging from 0.04 MPa to above 0.20 MPa reported in technical recommendations of professional organizations and in experimental studies. Given the old age and low maintenance of the buildings under consideration, it is assumed that 0.20 MPa is the highest tensile strength that can be realistically expected. A number of lower strength levels were also used in the vulnerability study, ranging from 0.10 MPa down to 0.005 MPa. The results of the vulnerability study are summarized in figure 3. Chimney heights missing from the figures either have a failure rate of 0% (no failure) or 100% (total failure) for the whole range of PGAs. Many valuable insights can be gained from the results, which may be used to shape disaster mitigation strategies. If a chimney is constructed with good quality mortar (sigma_t = 0.20 MPa), it is likely to fail only if its height is above 1.50 m. At the same time, even chimneys with height 2.00 m would most likely survive a shaking with PGA of 0.10 g. At the other end, with poor tensile strength (sigma_t = 0.005 MPa), only chimneys with height of up to 0.75 m may be considered safe, and this only for very low levels of ground shaking (PGA of up to 0.05 g). It is interesting to note that for PGA = 0.05 g, the stress at the base of the 0.50 m chimney remains compressive, thus precluding collapse.
For PGA = 0.20 g, chimneys of all heights would fail. If we assume that low tensile strength is prevalent, these results are in good agreement with field observations from the 2012 Pernik earthquake. In the city of Pernik, which experienced PGA of around 0.20 g, the majority of the chimneys in the block of houses shown in figure 1 collapsed. These houses are very old pre-war buildings, showing no signs of maintenance. In the city of Sofia, which experienced PGA between 0.05 g in the city centre and 0.10 g at locations closer to the epicentre, brick debris, possibly from chimneys, were observed on the pavements.

Upgrading method for collapse prevention

Considering the demonstrated high vulnerability of brick chimneys to relatively low levels of ground shaking, it is desirable to propose a strengthening method which at best should be cost effective and low-tech, to make it implementable by an average bricklayer or plasterer. A design methodology for improving the flexural capacity of masonry walls using CFRP sheets has been proposed elsewhere. Using FRP technology, however, requires somewhat specialized skills, and the sheets are not cheap either. In the following it will be demonstrated that adequate seismic retrofit of the chimneys may be achieved by using ordinary glass fibre mesh for plastering. Although glass fibre mesh is not meant for structural use, its strength is occasionally quoted by manufacturers; e.g. 1.88 kN per 5 cm strip is reported at https://terazid.com/shop/produkti-zatoploizolaciq/stuklofiburna-mreja-terazid/, which is equivalent to 37.6 kN/m. Next, we compute the flexural capacity about the weak axis of the cross section shown in figure 2, assuming it is cracked. The mesh will provide a tensile force Nt = 0.64 x 37.6 = 24.1 kN. Assuming, very conservatively, the compression zone to spread over the width of the whole brick, 0.12 m, results in a lever arm of 0.32 m, and a capacity of the section M = 24.1 x 0.32 = 7.71 kNm. This is well over the largest bending moment obtained from the analyses, My,max = 6.85 kNm, for PGA = 0.20 g and chimney height 2.00 m. Hence a single layer of glass fibre mesh embedded in suitable plaster will be sufficient to prevent failure of the chimneys considered in this study.

Summary and conclusions

The seismic response of chimneys in buildings was investigated. It was shown that for a given ground motion, the internal forces in the chimneys depend entirely on the dynamic properties of the building. Fragility curves for chimneys of heights ranging from 0.50 m to 2.00 m and tensile strengths of mortar ranging from 0.005 MPa to 0.20 MPa were developed. The high-strength fragility curves indicate that if good quality materials and workmanship are applied, most chimneys will be safe against weak to medium ground shaking. On the other hand, the low-strength fragility curves, which agree well with field observations from the 2012 Pernik earthquake, indicate that most chimneys will be damaged even by weak ground shaking. A simple low-tech retrofit method using glass fibre mesh for plaster was proposed and shown to be adequate for the range of structural and loading parameters of the current study. It is recommended that this method is used whenever large-scale roof repairs are done. During such repairs chimneys are often plastered, so embedding a glass fibre mesh in the process can be done at very little extra cost. |
A computer for babies may sound like the stuff of science fiction, but a Canadian company has just made it a reality. Last month, Rullingnet Corp. launched Vinci, a 7-inch touch-screen tablet that sells for $389 to $479 and is marketed exclusively for children 4 and younger.
To some parents, Vinci is an exciting, if pricey, step in the future of early childhood education. For others, the idea of buying a tablet for a baby is excessive, if not downright creepy. As Rullingnet points out, this is a serious computer.
Although the Vinci is believed to be the first tablet designed for babies as young as 1 week old, researchers said modern parents are increasingly likely to hand over a computer to a baby. A recent study by Parenting magazine and BlogHer found that 29 percent of Generation X moms say their children had played with a laptop by age 2, and that number grows to 34 percent for Generation Y moms. Roughly one-third of Gen Y moms also report that by age 2 their children were familiar with cellphones, smartphones and digital cameras. A slightly smaller percentage of Gen X moms say the same thing.
But some parents see a fundamental difference between giving a 2-year-old an iPad loaded with an episode of “Bob the Builder” to ease the stress of a long airplane flight, versus buying that 2-year-old a tablet of her own. One friend worried that his daughter wouldn’t want to put it down.
Yang said she was inspired to create Vinci by her own daughter, who had come to prefer her mom’s iPad to other baby toys before her first birthday. Worried that her daughter might chip a tooth on the iPad or drop it on her foot, Yang set out to create a tablet that would be lightweight and easy to grasp.
Vinci is smaller than the iPad and about the same weight as the latest Apple model. Like tablets for grown-ups, the screen is black and shiny, but it is suspended in a rubbery red frame to protect it from banging, shaking and dropping. It is not Wi-Fi enabled, so children cannot inadvertently download inappropriate material.
It comes loaded with a few stories and games that encourage children to think about feelings as well as numbers and letters. It can play music videos, and Yang said more apps are in development. For now, Vinci is available only through Amazon.com and Fred Segal’s Lifesize children’s boutique, where store manager Victoria Wilson said several had sold. A Vinci spokeswoman said 600 have sold in the first month of release.
Yang is not positioning Vinci as a toy. Rather, she says it is “a new category of learning system.” This might help her to market the Vinci to the same parents who made Baby Einstein’s learning videos such a commercial success and who are fueling sales of the Your Baby Can Read program. These are the parents who want to give their children a head start in a competitive world and believe that providing a structured learning experience at an early age is the way to do it.
But early childhood development experts remain skeptical.
Most parents seem to share the skepticism. |
Demand-Pull and Cost-Push Effects on Labor Income in Turkey, 1973–90

In this paper we attempt to assess the changes in the Turkish production structure, and labor income in particular, between the 1970s and the 1990s. During this period a shift has taken place from an inward-looking policy towards an outward-oriented one. For our analysis we use two partially closed (or extended) input–output models. The demand-driven model is traditional for this type of analysis and examines the effects of a demand pull (for example, an increase in exports). For an open economy, however, it is not only important to investigate the effects of a demand pull, but also to examine how a cost push (for example, an increase in import prices) affects total gross output, value added, or labor income, for example. To study the effects of a cost push we introduce the partially closed supply-driven input–output model. Instead of analyzing the effects of a specific exogenous demand pull or cost push, we focus on various types of multiplier. |
/*
* Name: i_ilm.c
*
* Purpose: MX driver for Oxford Instruments ILM (Intelligent Level Meter)
* controllers.
*
*
* Author: <NAME>
*
*--------------------------------------------------------------------------
*
* Copyright 2008-2010 Illinois Institute of Technology
*
* See the file "LICENSE" for information on usage and redistribution
* of this file, and for a DISCLAIMER OF ALL WARRANTIES.
*
*/
#define MXI_ILM_DEBUG FALSE
#include <stdio.h>
#include <stdlib.h>
#include "mx_util.h"
#include "mx_record.h"
#include "i_isobus.h"
#include "i_ilm.h"
MX_RECORD_FUNCTION_LIST mxi_ilm_record_function_list = {
NULL,
mxi_ilm_create_record_structures,
NULL,
NULL,
NULL,
mxi_ilm_open
};
MX_RECORD_FIELD_DEFAULTS mxi_ilm_record_field_defaults[] = {
MX_RECORD_STANDARD_FIELDS,
MXI_ILM_STANDARD_FIELDS
};
long mxi_ilm_num_record_fields
= sizeof( mxi_ilm_record_field_defaults )
/ sizeof( mxi_ilm_record_field_defaults[0] );
MX_RECORD_FIELD_DEFAULTS *mxi_ilm_rfield_def_ptr
= &mxi_ilm_record_field_defaults[0];
MX_EXPORT mx_status_type
mxi_ilm_create_record_structures( MX_RECORD *record )
{
static const char fname[] = "mxi_ilm_create_record_structures()";
MX_ILM *ilm;
/* Allocate memory for the necessary structures. */
ilm = (MX_ILM *) malloc( sizeof(MX_ILM) );
if ( ilm == (MX_ILM *) NULL ) {
return mx_error( MXE_OUT_OF_MEMORY, fname,
"Can't allocate memory for MX_ILM structure." );
}
/* Now set up the necessary pointers. */
record->record_class_struct = NULL;
record->record_type_struct = ilm;
record->record_function_list = &mxi_ilm_record_function_list;
record->superclass_specific_function_list = NULL;
record->class_specific_function_list = NULL;
ilm->record = record;
return MX_SUCCESSFUL_RESULT;
}
MX_EXPORT mx_status_type
mxi_ilm_open( MX_RECORD *record )
{
static const char fname[] = "mxi_ilm_open()";
MX_ILM *ilm;
MX_ISOBUS *isobus;
char command[10];
char response[40];
int c_command_value;
mx_status_type mx_status;
if ( record == (MX_RECORD *) NULL ) {
return mx_error( MXE_NULL_ARGUMENT, fname,
"MX_RECORD pointer passed is NULL.");
}
#if MXI_ILM_DEBUG
MX_DEBUG(-2,("%s invoked for record '%s'.", fname, record->name ));
#endif
ilm = (MX_ILM *) record->record_type_struct;
if ( ilm == (MX_ILM *) NULL ) {
return mx_error( MXE_CORRUPT_DATA_STRUCTURE, fname,
"MX_ILM pointer for record '%s' is NULL.", record->name);
}
if ( ilm->isobus_record == NULL ) {
return mx_error( MXE_CORRUPT_DATA_STRUCTURE, fname,
"isobus_record pointer for record '%s' is NULL.", record->name);
}
isobus = ilm->isobus_record->record_type_struct;
if ( isobus == (MX_ISOBUS *) NULL ) {
return mx_error( MXE_CORRUPT_DATA_STRUCTURE, fname,
"MX_ISOBUS pointer for ISOBUS record '%s' is NULL.",
ilm->isobus_record->name );
}
/* Tell the ILM to terminate responses only with a <CR> character. */
mx_status = mxi_isobus_command( isobus, ilm->isobus_address,
"Q0", NULL, 0, -1,
MXI_ILM_DEBUG );
if ( mx_status.code != MXE_SUCCESS )
return mx_status;
/* Ask for the version number of the controller. */
mx_status = mxi_isobus_command( isobus, ilm->isobus_address,
"V", response, sizeof(response),
ilm->maximum_retries,
MXI_ILM_DEBUG );
if ( mx_status.code != MXE_SUCCESS )
return mx_status;
#if MXI_ILM_DEBUG
MX_DEBUG(-2,("%s: ILM controller '%s' version = '%s'",
fname, record->name, response));
#endif
if ( strncmp( response, "ILM", 3 ) != 0 ) {
return mx_error( MXE_DEVICE_IO_ERROR, fname,
"ILM controller '%s' did not return the expected "
"version string in its response to the V command. "
"Response = '%s'",
record->name, response );
}
/* Send a 'Cn' control command. See the header file
* 'i_ilm.h' for a description of this command.
*/
c_command_value = (int) ( ilm->ilm_flags & 0x3 );
snprintf( command, sizeof(command), "C%d", c_command_value );
mx_status = mxi_isobus_command( isobus, ilm->isobus_address,
command, response, sizeof(response),
ilm->maximum_retries,
MXI_ILM_DEBUG );
return mx_status;
}
|
<gh_stars>0
// Package permerror creates errors that have a `Temporary` function to be
// used with grpc's `FailOnNonTempDialError` option.
//
// Designed in the spirit of github.com/pkg/errors, the returned errors all
// implement the non-exported causer interface.
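//
// A minimal usage sketch (illustrative only; assumes google.golang.org/grpc):
//
//	conn, err := grpc.Dial(addr, grpc.FailOnNonTempDialError(true))
//	if err != nil && permerror.IsTemporary(err) == permerror.Permanent {
//		// permanent dial failure: give up instead of retrying
//	}
//	_ = conn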
package permerror
import (
"github.com/pkg/errors"
)
// TemporaryType represents whether an error is Temporary or not
type TemporaryType int
const (
// Unknown means the error does not have a Temporary function
Unknown TemporaryType = 0
// Temporary means the error has a Temporary function and it returned true
Temporary TemporaryType = 1
// Permanent means the error has a Temporary function and it returned false
Permanent TemporaryType = 2
)
// IsTemporary returns the result of err.Temporary if it exists, otherwise false
// .. does not inspect the cause
func IsTemporary(err error) TemporaryType {
switch err := err.(type) {
case interface {
Temporary() bool
}:
if err.Temporary() {
return Temporary
}
return Permanent
default:
return Unknown
}
}
// MakePermanent forces an error to be permanent
func MakePermanent(cause error) error {
return &madePermanent{cause: cause}
}
// New returns an error message and marks it as permanent
func New(msg string) error {
return &permError{msg: msg}
}
// Wrap wraps an error and marks it as permanent unless the
// underlying error says otherwise
func Wrap(cause error) error {
return &wrapError{cause: cause}
}
// WithMessage wraps an error and marks it as permanent unless the
// underlying error says otherwise
func WithMessage(cause error, msg string) error {
return &permErrorWrapper{
cause: cause,
msg: msg,
}
}
type permErrorWrapper struct {
cause error
msg string
}
func (pe *permErrorWrapper) Error() string { return pe.msg + ": " + pe.cause.Error() }
func (pe *permErrorWrapper) Cause() error { return pe.cause }
func (pe *permErrorWrapper) Temporary() bool {
switch IsTemporary(errors.Cause(pe.cause)) {
case Temporary:
return true
default:
// default to permanent if not explicitly specified
return false
}
}
type permError struct {
msg string
}
func (pe *permError) Error() string { return pe.msg }
func (*permError) Temporary() bool { return false }
type madePermanent struct {
cause error
}
func (mp *madePermanent) Error() string { return mp.cause.Error() }
func (mp *madePermanent) Cause() error { return mp.cause }
func (*madePermanent) Temporary() bool { return false }
type wrapError struct {
cause error
}
func (we *wrapError) Error() string { return we.cause.Error() }
func (we *wrapError) Cause() error { return we.cause }
func (we *wrapError) Temporary() bool {
switch IsTemporary(errors.Cause(we.cause)) {
case Temporary:
return true
default:
// default to permanent if not explicitly specified
return false
}
}
|
use async_trait::async_trait;

/// An asynchronous per-item filter; returning `None` drops the entry.
#[async_trait]
pub trait OutputFilter: Send + 'static {
type Item: Sized + Send;
async fn filter(&mut self, entry: Self::Item) -> Option<Self::Item>;
fn on_load(&mut self) {}
}
|
The Natural Cleaners, Vancouver's efficient, green cleaning service, gets the VO stamp of approval. Fast and efficient, this service uses all-natural cleaning products. Owner Danny Loiselle talks with us about what turned him into a clean machine.
The Natural Cleaners, Vancouver's healthy home cleaning service, gets around. Fast, efficient, this all-natural cleaning service uses their own product. Owner Danny Loiselle left his job with BC Hydro to pursue the challenge of creating his own company. Armed with his own natural cleaning solution and an appreciation for a spotless home, he set out on the city of Vancouver with The Natural Cleaners and hasn't looked back.
The Natural Cleaners healthy home cleaning service was designed around the concept of working for a purpose and with a compassionate team. "The founder's big intention in creating this company was to incorporate, in his world, a place where people are caring for people, and with The Natural Cleaners, they really are," Loiselle said.
"We use very good, truly natural products. We have a team of people that care, because I care about them, and I treat them well. We've got our cleaning system down, which basically covers each room strategically. When these things are combined with our great product and a staff that care, it ensures that each customer will receive a good service," Danny said confidently.
He said that he saw an opportunity within the healthy home cleaning market to do things differently, and couldn't resist taking the plunge into a completely different field of work, this time as the boss.
"If you want to serve people, you have to think about the customer, put yourself in their position and think what you would want from a cleaning service. What would you want from the people you would hire to clean your house? I would want a crew that did a thorough job without leaving my home full of toxic chemicals."
His success he said, started with selecting the right people to work in the company. A happy employee produces the best results. People are happy providing a cleaning service when they are comfortable in their work environment. When commenting about his cleaning staff of ten, he said, "we have a wide variety of employees---from a young hipster music-loving kind of guy to an ex-hair stylist to a yoga instructor. All of our employees are polite, pleasant and helpful."
"When you give customers a quality job, you're cleaning their world and changing their energy. This is rewarding." |
Breast feeding duration in consecutive offspring: a prospective study from southern Brazil The association between breast feeding duration in two consecutive pregnancies was studied prospectively in southern Brazil. In a population-based sample of 5960 women giving birth in 1982, 1386 delivered a second child within 4 years. The data were analyzed using life table techniques. The duration of breast feeding of the second child increased directly according to the duration the previous child had been breast fed. In particular, when the previous child had been breast fed for 6 months or more, the subsequent child was clearly more likely to be breast fed. However, when the previous child had been breast fed for under 6 months, the differences among subsequent children disappeared after 36 months. These differences were still present after stratification by family income, maternal education and parity. Mothers with a previous unsuccessful or problematic breast feeding experience should receive special priority in promotion campaigns. |
#!/usr/bin/env python
# conanfile_base.py

from conans import ConanFile, tools
from conans.errors import ConanInvalidConfiguration
import os


class ConanFileBase(ConanFile):
    name = "grpc"
    version = "1.23.0"
    description = "Google's RPC library and framework."
    topics = ("conan", "grpc", "rpc")
    url = "https://github.com/inexorgame/conan-grpc"
    homepage = "https://github.com/grpc/grpc"
    author = "Bincrafters <<EMAIL>>"
    license = "Apache-2.0"
    exports = ["LICENSE.md", "conanfile_base.py"]
    exports_sources = ["CMakeLists.txt"]
    generators = "cmake"
    short_paths = True  # Otherwise some folders go out of the 260 chars path length scope rapidly (on windows)

    protobuf_version = "3.9.1"

    _source_subfolder = "source_subfolder"
    _build_subfolder = "build_subfolder"

    def source(self):
        sha256 = "86d7552cb79ab9ba7243d86b768952df1907bacb828f5f53b8a740f716f3937b"
        tools.get("{}/archive/v{}.zip".format(self.homepage, self.version), sha256=sha256)
        extracted_dir = "grpc-" + self.version
        os.rename(extracted_dir, self._source_subfolder)

        cmake_path = os.path.join(self._source_subfolder, "CMakeLists.txt")
        # See #5
        tools.replace_in_file(cmake_path, "_gRPC_PROTOBUF_LIBRARIES", "CONAN_LIBS_PROTOBUF")
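
# Illustrative usage (an assumption, not part of the original file): a concrete
# conanfile.py in the same repository would typically derive from this base
# class, roughly as sketched below.
#
#   from conanfile_base import ConanFileBase
#
#   class GrpcConan(ConanFileBase):
#       exports = ConanFileBase.exports + ["conanfile.py"]
#       settings = "os", "arch", "compiler", "build_type"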
|
import sys  # needed for sys.exit() below

# parse_command_line and strip_file_to_string are assumed to be defined or
# imported earlier in this module.
def process_command_line():
    args = parse_command_line()
    code_file = args.code_file[0]
    processed_code = strip_file_to_string(code_file, args.to_empty, args.strip_nl,
                                          args.no_ast, args.no_colon_move, args.no_equal_move,
                                          args.only_assigns_and_defs, args.only_test_for_changes)
    if args.inplace:
        args.outfile = [code_file]
    if not args.only_test_for_changes:
        if not args.outfile:
            print(processed_code, end="")
        else:
            with open(args.outfile[0], "w") as f:
                f.write(str(processed_code))
    else:
        if processed_code:
            print("True")
            exit_code = 0
        else:
            print("False")
            exit_code = 1
        sys.exit(exit_code)
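
# Illustrative entry point (an assumption; not part of the original module):
if __name__ == "__main__":
    process_command_line()
|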
Wnt3a-/--like phenotype and limb deficiency in Lef1(-/-)Tcf1(-/-) mice. Members of the LEF-1/TCF family of transcription factors have been implicated in the transduction of Wnt signals. However, targeted gene inactivations of Lef1, Tcf1, or Tcf4 in the mouse do not produce phenotypes that mimic any known Wnt mutation. Here we show that null mutations in both Lef1 and Tcf1, which are expressed in an overlapping pattern in the early mouse embryo, cause a severe defect in the differentiation of paraxial mesoderm and lead to the formation of additional neural tubes, phenotypes identical to those reported for Wnt3a-deficient mice. In addition, Lef1(-/-)Tcf1(-/-) embryos have defects in the formation of the placenta and in the development of limb buds, which fail both to express Fgf8 and to form an apical ectodermal ridge. Together, these data provide evidence for a redundant role of LEF-1 and TCF-1 in Wnt signaling during mouse development. Signaling by Wnt/wg proteins is involved in the regulation of cell fate decisions and cell proliferation (for review, see Cadigan and Nusse 1997). Members of the LEF-1/TCF family of transcription factors can interact with β-catenin, a downstream component of the Wnt signaling pathway, and activate transcription. To date, four members of this family have been identified in mammals: lymphoid enhancer factor-1 (LEF-1), T cell factor-1 (TCF-1), TCF-3 and TCF-4 (van de ;a). All four proteins have a virtually identical DNA-binding domain and β-catenin interaction domain. These transcription factors can augment gene expression in association with β-catenin and in response to Wnt-1 signaling in tissue culture transfection assays (van de ;a). In addition, LEF-1/TCF proteins can associate with the proteins CBP and Groucho, which confer repression in the absence of a Wnt/wg signal (Waltzer and Bienz 1998). Finally, LEF-1 can interact with the protein ALY and functions as an architectural component in the assembly of a multiprotein enhancer complex (). Consistent with the presumed role of these transcription factors in Wnt/wg signaling, mutations in a Drosophila ortholog of Lef1 generate a wingless (wg) phenotype (van de ). However, the role of the mammalian transcription factors in signaling by Wnt proteins in vivo has been obscure because mutations of the Lef1, Tcf1, or Tcf4 genes did not generate any phenotype that resembles known Wnt mutations. Lef1 −/− mice show a block in the development of teeth, hair follicles, and mammary glands, a null mutation in Tcf1 results in an incomplete arrest in T lymphocyte differentiation, and Tcf4 −/− mice have a defect in the development of the small intestine (b). In contrast, null mutations in the Wnt3a and Wnt4 genes, which are expressed in the early mouse embryo in regions that also express Lef1 and Tcf1, result in defects in the formation of paraxial mesoderm, and kidneys, respectively (). The partial overlap in the expression of Lef1 and Tcf1 in mouse development (), therefore, raises the question as to whether genetic redundancy can account for the lack of a Wnt-like phenotype in mice carrying targeted mutations in either transcription factor gene.

Redundancy of the expression and function of Lef1 and Tcf1 in the early mouse embryo

To compare in more detail the expression pattern of individual Lef1/Tcf genes in early mouse development, we performed whole mount in situ hybridization with probes specific for the four known members of this gene family (Fig. 1).
Lef1 and Tcf1 are both expressed abundantly in the primitive streak of E8.5 embryos, whereas Tcf3 and Tcf4 are expressed predominantly in the primordia of the fore- and midbrain. In E8.5 embryos, expression of Tcf3 can also be detected in the primordia of the hindbrain and in the forming somites. In E9.5 embryos, additional overlapping Lef1 and Tcf1 expression is detected in the forelimbs and branchial arches, consistent with a previous in situ hybridization analysis of sections of embryos (). Thus, Lef1 and Tcf1, but not the other members of the gene family, are expressed in an overlapping pattern in the primitive streak and in limb buds. We examined a potential redundancy between LEF-1 and TCF-1 by generating compound homozygous mice carrying null alleles of both genes. Lef1 −/− Tcf1 −/− embryos were recovered at the expected frequency between E6.5 and 9.5, but their frequency was markedly reduced after E10.5. Scanning electron microscopy of E9.5 embryos revealed caudal and limb bud defects in the compound mutant homozygotes (Fig. 2a,b). In addition, the allantois of the mutant embryos forms an abnormal mass of cells, which may be related to the lack of a placenta in E10.5 mutant animals (data not shown).

Figure 1. Expression of members of the LEF-1/TCF family of transcription factors in early mouse development. Expression was analyzed at embryonic day 8.5 (E8.5) and E9.5 by whole mount in situ hybridization with cDNA probes for Lef1 (a,c), Tcf1 (b,d), Tcf3 (e,g), and Tcf4 (f,h). In E8.5 embryos, expression of Lef1 and Tcf1 is detected predominantly in the primitive streak (PS), and in E9.5 embryos expression is detected in the primitive streak and unsegmented presomitic mesoderm, the forelimb bud (FL) and the branchial arches. Tcf3 expression is detected in newly formed somites and in the primordia of the fore-, mid-, and hindbrain. Expression of Tcf4 at E9.0 (f) is detected in the midbrain, around the optic placode, in the P2 region of the diencephalon, and the forming hindgut.

Figure 2. […] littermates. In the mutant embryo, somites can be detected up to the forelimb (FL), but not in the caudal region. The caudal extremity of the tail (T) shows an abnormal morphology, is deformed and comparably smaller than the wild type. The mutant embryo also has a smaller telencephalic vesicle (TE) and a less pronounced isthmus (I) between the midbrain and the hindbrain. The mass of cells labeled A corresponds to the remnants of the allantois, which is not fused to the placenta (data not shown). Histology of sagittal sections of E9.5 wild-type (wt; c,e) and Lef1 −/− Tcf1 −/− (d,f) embryos. Somites and a well-developed heart (H) can be seen in the wild-type embryo. In the mutant embryo, no identifiable somites can be detected in the deformed caudal half of the embryo, which contains multiple neural tube-like structures (NT). The mutant embryo has, however, a heart (d). In the region anterior to the forelimb level (bracket in c,d) somites can be detected in both wild-type and mutant embryos (e,f). However, somites in the mutant embryo have incomplete structure and lack clear segmental boundaries. (g,h) Transverse sections of E9.5 wild-type and mutant embryos at a caudal level at which normally the first somites form. In the mutant embryo (h) three neural tubes are detected. Normal lateral mesoderm (LM) and visceral mesoderm (VM) are formed in the mutant embryo. (Arrow) Position of the notochord. (a-f) Rostral is to the left and caudal to the right.
The telencephalic vesicle of the mutant embryos is also smaller than that of wild-type embryos. Histological analysis of E9.5 Lef1 −/− Tcf1 −/− embryos confirmed the caudal defects and revealed a severe deficiency in the formation of somites (Fig. 2d). Anterior to the level of the forelimb bud, somites of abnormal morphology were detected (Fig. 2f), whereas no somites were found in the caudal region, which is highly deformed and contains multiple tube-like structures (Fig. 2d). Transverse sections in the caudal region confirmed the lack of somites and revealed multiple (up to five) tubular structures in the mutant embryos (Fig. 2h). This phenotype indicates a deficiency in the formation of presomitic (paraxial) mesoderm. Mesoderm is first formed during gastrulation when cells delaminate from the epiblast into the primitive streak at the posterior region of the embryo. Within the primitive streak, spatially defined precursors generate different mesodermal fates, such as axial, paraxial, intermediate, and lateral mesoderm (Tam and Beddington 1987, 1992). The mutant embryos contain both lateral and visceral mesoderm (Fig. 2h) and have a heart, a dorsolateral mesoderm derivative (Fig. 2d), suggesting that the absence of LEF-1 and TCF-1 affects the differentiation of specific mesodermal cell types.

Defects in paraxial mesoderm differentiation in Lef1 −/− Tcf1 −/− embryos

To further define the mesodermal defects in the double mutant embryos, we performed whole-mount in situ hybridization with several probes that identify specific mesodermal cell populations (Fig. 3). Expression of the sclerotome marker, Pax1, was detected in the seven to nine somites anterior of the forelimb buds, albeit at a reduced level (Fig. 3b). No Pax1 expression was found in the region posterior to the forelimb buds, consistent with the lack of caudal somites. Pax3 transcripts, which are normally found in the presomitic paraxial mesoderm, dermamyotome and dorsal neural tube, are detected in the dermamyotomes anterior to the forelimb level and in broad bands in the caudal region (Fig. 3d). Transverse sections showed that the hybridization pattern of Pax3 in the caudal region represents the dorsal half of multiple tubes rather than dermamyotomes (Fig. 3f). Moreover, the mutant embryos failed to express Notch1 in the area of presomitic mesoderm formation (Fig. 3h), although expression was detected in the neural tube (Fig. 3j). We also examined expression of Wnt5a, which is normally expressed in the tail bud region that forms paraxial mesoderm (Tam and Beddington 1987, 1992). No Wnt5a expression was detected in the tail bud region of Lef1 −/− Tcf1 −/− embryos (Fig. 3l). To examine the identity of the tubular structures in the caudal region of Lef1 −/− Tcf1 −/− embryos, we analyzed the expression of Wnt1 as a marker for dorsal CNS (;). In mutant embryos, Wnt1 expression was generally increased in the CNS and was also found in multiple stripes and clusters of cells in the caudal region (Fig. 3n). Moreover, transverse sections in the caudal region of the mutant embryo showed that the dorsal part of each of the tubular structures contains Wnt1-expressing cells (Fig. 3p). Thus, Lef1 −/− Tcf1 −/− embryos appear to lack paraxial mesoderm posterior to the forelimb level and they form ectopic neural tubes, phenotypes that are virtually identical to those reported for Wnt3a −/− mice (;). The formation of additional neural tubes, however, is also observed in mouse mutant Fgfr1 −/− chimeras and in embryos carrying mutations in the Tbx6 gene (;Chapman and Papaioannou 1998). In particular, the targeted mutation of the T-box transcription factor gene, Tbx6, which is related to Brachyury, also causes paraxial mesoderm defects similar to those observed in Wnt3a −/− and Lef1 −/− Tcf1 −/− embryos (;Chapman and Papaioannou 1998). Therefore, we examined the expression of Tbx6 and detected no expression in the caudal region of the Lef1 −/− Tcf1 −/− embryos (Fig. 3r). This absence of detectable Tbx6 expression in the tail bud region of the compound homozygous mutant embryos suggests that either the cells that normally express Tbx6 are missing, and/or alternatively, that LEF-1/TCF-1 and Wnt signaling may act upstream of Tbx6. Because expression of Tbx6 in E9.5 embryos requires Brachyury (), which we have identified as a direct target for LEF-1/TCF proteins (J. Galceron, S.C. Hsu, and R. Grosschedl, unpubl.), we favor the view that LEF-1/TCF proteins act upstream of Tbx6. To address the issue of whether the ectopic neural tubes are formed at the expense of paraxial mesoderm, as seen in Wnt3a −/− mice (), we examined wild-type and Lef1 −/− Tcf1 −/− embryos at E8.5, a stage at which cells ingress through the primitive streak (Tam and Beddington 1987).

Figure 3. […] The sclerotome marker, Pax1, shows a regular pattern of somitic expression throughout the wild-type embryo, whereas weak expression is detected only in nine somites anterior to the forelimb level (arrowhead) in the mutant embryo (a,b). Pax3, normally expressed in the dorsal neural tube and the dermamyotome, is expressed in the mutant embryo in a nonsegmental pattern caudal to the forelimb level (arrowhead; c,d). Transverse sections of the caudal region of these embryos (thin line) are shown in e and f. Pax3 expression is detected in the dermamyotome and in the dorsal neural tube of a wild-type embryo and in three neural tubes of the mutant embryo. Expression of Notch1 in the unsegmented presomitic mesoderm (PM; bracket) is observed in wild-type but not in mutant embryos (g,h). Notch1 expression is also detected in the forelimb bud of the wild-type embryo (arrowhead in g), but it is absent in the forelimb bud of the mutant embryo (arrowhead; h). Expression of Notch1 in the neural tube is detected in transverse sections of both wild-type and mutant embryos (i,j). In the mutant embryo, one of the neural tubes is not closed. Expression of Wnt5a in the presomitic mesoderm is detected in the wild-type embryo (k) but not in the mutant embryo (l). Expression of Wnt5a is also detected in the forelimb bud (arrowhead) of the wild-type, but not mutant embryo. (m-p) Pattern of expression of the dorsal CNS marker, Wnt1, in whole-mount hybridization and in transverse sections of the caudal region at the level indicated by a thin line. Wnt1 is expressed in the brain of wild-type and mutant embryos at a similar level, but expression is increased in the CNS of the mutant embryo. In the region posterior to the forelimb level of the mutant embryo, additional signals can be detected in extra bands and patches of cells (arrowheads; n) that represent an open neural tube and three additional neural tubes (p). The presomitic mesoderm marker, Tbx6, is expressed in the tailbud and presomitic mesoderm of the wild-type, but not mutant embryo (q,r).
In transverse sections of the primitive streak region, mesodermal cells of mesenchymal morphology were detected under the ectoderm layer in wild-type embryos, whereas only densely packed cells of epithelial morphology and tubular arrangement were found in mutant embryos (Fig. 4a,b). We also examined whether proliferation and apoptosis are altered in the caudal region of the Lef1 −/− Tcf1 −/− embryos by counting the numbers of dividing and apoptotic cells. No significant differences were detected between wild-type and mutant embryos (data not shown). Therefore, the ectopic neural tubes appear to be formed at the expense of paraxial mesoderm. The generation of cells underlying the primitive ectoderm suggests that the Lef1 −/− Tcf1 −/− mutant mice have no obvious defect in the delamination of epithelial cells, which is impaired in Fgfr1 −/− mutant mice (). Thus, the defect in Lef1 −/− Tcf1 −/− embryos might occur at a subsequent differentiation step. The striking similarity of the paraxial mesoderm defect in Lef1 −/− Tcf1 −/− mice and Wnt3a −/− mice raised the question of whether these genes are connected in a feedback loop. We examined the expression of Wnt3a in E8.5 embryos and detected similar expression in the posterior region of wild-type and Lef1 −/− Tcf1 −/− embryos, consistent with a function of LEF-1 and TCF-1 downstream of Wnt3a. In transverse sections, Wnt3a expression was detected only in the primitive ectoderm of wild-type embryos, whereas it was also found in the underlying ectopic neural tissue of the mutant embryo (Fig. 4e,f). Although the area of expression is expanded in the compound mutant embryos as compared with wild-type embryos, we favor the view that the expanded expression is due to the generation of excess neural ectoderm at the expense of mesoderm, rather than a loss of negative regulation, which has been shown to operate in the absence of Wnt signals (;Waltzer and Bienz 1998). Finally, we examined which cells in the primitive streak contain LEF-1 protein. By immunohistochemistry of E9.5 embryos with anti-LEF-1 antibodies, we detected abundant LEF-1 protein in the presomitic mesoderm and in the somites, but not in the primitive ectoderm (Fig. 4g). This expression pattern is consistent with a model for paraxial mesoderm differentiation in which cells that migrate through the primitive streak upregulate the expression of LEF-1 and become competent for a Wnt3a signal from the ectodermal cells to assume a mesodermal rather than neuroectodermal cell fate.

Arrest of limb development in Lef1 −/− Tcf1 −/− embryos

Lef1 and Tcf1 are also expressed in an overlapping pattern in the forelimb bud of E9.5 embryos. The early limb bud protrudes from the lateral body wall and is comprised of lateral plate mesoderm and ectoderm. As the limb bud develops, three distinct signaling centers are formed that are required for proper outgrowth and patterning of a limb (Johnson and Tabin 1997; Martin 1998). The apical ectodermal ridge (AER) is a specialized epithelial structure that is located at the distal margin of the bud. In the mouse, the AER expresses at least four Wnt genes (Wnt3, Wnt4, Wnt6, and Wnt7b) and four fibroblast growth factor (Fgf) genes (Fgf2, Fgf4, Fgf8, Fgf9) (for review, see Martin 1998). In the underlying distal mesoderm, which includes the progress zone and contains precursors of the limb mesenchymal cells, slug, Fgf10, and Msx1 are expressed. Msx1 transcripts are also found along the entire anterior and posterior margins ().
The zone of polarizing activity (ZPA) is defined by expression of sonic hedgehog at the posterior margin of the bud (for review, see Johnson and Tabin 1997; Martin 1998). In addition, the dorsal ectoderm of the limb bud is known to produce signals that regulate the dorsal-ventral (D-V) axis and is characterized by the expression of Wnt7a (;Parr and McMahon 1995). Wnt5a is expressed in both the ventral limb ectoderm and in a graded manner in the limb mesenchyme (). Lef1 expression is found in the mesenchyme of the limb bud but not in the AER, whereas Tcf1 is expressed in both mesenchyme and AER (). Morphological analysis of the forelimb buds of E9.5 wild-type and Lef1 −/− Tcf1 −/− mutant embryos by scanning electron microscopy indicated that the mutant embryos contain nascent limb buds that are significantly smaller than wild-type limb buds (Fig. 5a,b). We analyzed the expression of molecular markers for the AER and the distal mesoderm. Histological sections of E9.5 wild-type and mutant embryos, hybridized with a Pax3 probe, indicated that cells from the dermamyotome, containing presumed limb muscle precursors, migrate into the lateral plate mesoderm of both wild-type and mutant embryos (Fig. 5c,d). However, the early AER marker Fgf8 is abundantly expressed in wild-type embryos but not in mutant embryos (Fig. 5e,f). In addition, we examined the expression of Engrailed1 (En1), which is a marker for D-V polarity of the limb bud and is expressed in the ventral ectoderm even prior to the formation of an AER (Parr and McMahon 1995;). En1 is also expressed at the mid-/hindbrain boundary and is a target of Wnt-1 signaling (Danielian and McMahon 1996). In Lef1 −/− Tcf1 −/− embryos, En1 is not expressed in the forelimb bud, although abundant expression can be detected at the mid-/hindbrain boundary (Fig. 5g,h). The maintenance of En1 expression in the mid-/hindbrain boundary suggests that Wnt1 signaling in this region of the embryo is mediated by another member of the LEF-1/TCF family of transcription factors, or is independent of these proteins. The absence of detectable En1 expression in the limb bud may also reflect a dorsalization of the limb (;). Therefore, we examined the expression of Lmx1b, which, like Wnt7a, is involved in dorsal cell fate decisions (;;). Lmx1b expression is detected in the limb buds of compound mutant embryos (Fig. 5j), although the level of expression is lower and the area of expression is broadened relative to the wild-type embryo (Fig. 5i). In transverse sections of the wild-type limb buds, Lmx1b is restricted to the dorsal mesenchyme (Fig. 5k), whereas expression was found in both dorsal and ventral mesenchyme of the Lef1 −/− Tcf1 −/− limb bud (Fig. 5l). This pattern of Lmx1b expression is reminiscent of an earlier stage limb bud, and the lack of a D-V border has been shown to result in a failure to form an AER (Johnson and Tabin 1997). Finally, we examined the expression of the mesoderm marker Msx1 and found that the level of expression is reduced in the mutant limb buds and the domain of expression is broadened (data not shown). This broadened expression domain of Lmx1b and Msx1 may reflect an impairment of regional specification of the limb bud in the Lef1 −/− Tcf1 −/− embryos. Expression of Wnt5a, which is normally expressed in the ventral limb ectoderm and in the limb mesenchyme (), was not detected in the Lef1 −/− Tcf1 −/− limb buds (Fig. 3l).
Taken together, these data indicate that the transcription factors LEF-1 and TCF-1 also regulate limb development in a redundant manner.

Discussion

Our study shows that null mutations in the transcription factor genes Lef1 and Tcf1 result in a defect in the formation of paraxial mesoderm, which is virtually identical to that seen in Wnt3a-deficient mice. In particular, the Lef1 −/− Tcf1 −/− mice form excess neural ectoderm at the expense of paraxial mesoderm. This presumed role of Wnt3a signaling through LEF-1/TCF proteins in a cell fate decision is consistent with the recent finding that injection of a dominant-negative Wnt into premigratory neural crest cells of zebrafish promotes neuronal fates at the expense of pigment cells (). In addition, signaling by Wnt1 and Wnt3a was shown to regulate the expansion of dorsal neural precursors in mouse embryos (). The reduction in size of the telencephalic vesicle may also be related to the deficiency of signaling by Wnt3a, which is expressed together with other Wnt proteins at the medial edge of the telencephalon (). Lef1 −/− Tcf1 −/− mice also fail to form the placenta, a phenotype seen in a less pronounced form in Wnt2-deficient mice (). Thus, our analysis shows a redundant role of these transcription factors in signaling by at least one, and most likely multiple, Wnt proteins in the mouse. The defect of limb development in Lef1 −/− Tcf1 −/− mice suggests a role of Wnt signaling in this developmental process. To date, the only Wnt mutation that has been shown to have a defect in limb development is Wnt7a (Parr and McMahon 1995). However, recent studies in the chick, in which an activated form of β-catenin was expressed in limb buds via retroviral transfer, showed that Wnt-7a functions in limb morphogenesis through a β-catenin-independent pathway (). In contrast, a dominant-negative form of LEF-1 interfered with the function of Wnt3a in inducing the expression of Bmp2, Fgf4, and Fgf8 in the AER (). Moreover, this study showed that Wnt3a upregulates expression of Lef1 in the mesoderm and acts through this transcription factor (). In the mouse, Wnt3a, which is distinct from Wnt3, is not expressed at a detectable level, suggesting that LEF-1 and TCF-1 may mediate the effects of another Wnt signal. The limb bud phenotype of Lef1 −/− Tcf1 −/− mice is reminiscent of that of the limbless mutant in chick, which initiates limb bud morphogenesis, but fails to express En1, Fgf4, Fgf8, and Tcf1, and has a defect preceding the formation of an AER (;;). The lack of AER expression in the Lef1 −/− Tcf1 −/− mice can be accounted for by an arrest of limb bud development prior to the formation of an AER. According to this view, development of the limb bud requires Wnt signaling through LEF/TCF proteins in the mesenchyme. Lef1 −/− Tcf1 −/− limb buds also fail to express Fgf8, consistent with the role of Wnt signaling in inducing Fgf8 in the chick (). The pronounced similarity of the Lef1 −/− Tcf1 −/− and Wnt3a −/− phenotypes raises the question of the role of these transcription factors in signaling by other Wnt proteins that are expressed in spatially and temporally overlapping patterns in early mouse development. The absence of Wnt5a expression in both Lef1 −/− Tcf1 −/− and Wnt3a −/− embryos suggests that LEF-1 and TCF proteins could also regulate signaling by multiple Wnt proteins in an indirect manner through feedback loops. However, we cannot rule out that the lack of expression of specific Wnt proteins reflects the absence of specific cell types. In addition, some Wnt proteins, such as Wnt7a in the chick, may not involve a transcriptional response by LEF-1/TCF proteins (), and conversely not all transcriptional activities of LEF-1 are dependent on association with β-catenin (). Therefore, analysis of additional mutant alleles of Lef1 and Tcf1 will be required to further dissect the regulatory network of Wnt signaling.

Figure 5. […] (NT) The neural tube. (e-l) Analysis of molecular markers in E9.5 wild-type and mutant embryos by whole mount in situ hybridization. Expression of Fgf8, an early marker of the AER in the developing forelimb bud, is detected in a wild-type (e) but not mutant embryo (f). (arrowhead) Position of the forelimb bud in these and the other panels. (g,h) Expression of En1 is detected in the ventral region of the wild-type but not mutant forelimb bud. However, En1 expression is detected at the mid-hindbrain boundary. (i,j) Expression of Lmx1b, a marker of the dorsal mesenchyme of the emerging limb buds (), is detected in both the wild-type and mutant embryos. However, the region of Lmx1b expression is broadened in the mutant limb bud and the level of Lmx1b expression is lower relative to the wild-type limb bud. The expression of Lmx1b at the mid-hindbrain boundary is not affected in the mutant embryo. (k,l) Transverse sections at the level of the limb bud of the embryos shown in (i and j) stained with fast neutral red. Lmx1b expression is restricted to the dorsal mesenchyme of the wild-type limb bud (i) but is extended along the entire limb margin in the mutant embryo (j).

Histology and immunohistochemistry

Embryos were dissected out and fixed in Carnoy's fixative (60% ethanol, 30% chloroform, 10% acetic acid) at the indicated ages, dehydrated in ethanol, embedded in paraffin, sectioned at 7 µm, and stained with 0.1% cresyl violet for conventional analysis. Immunohistochemistry was performed on transverse sections of embryos similarly processed with a rabbit polyclonal serum raised against the full-length LEF-1 protein at 1/50 dilution as described in (van ). Immunodetection was performed with the ABC method (ABC Elite Kit, Vector Labs).

Scanning electron microscopy

Embryos were taken from timed pregnancies and fixed at 4°C overnight in 4% PFA in PBS. Embryos were then washed in PBS, dehydrated in ethanol, critical point dried, placed on brass stubs, and coated with 25 nm of gold-palladium. Specimens were viewed and photographed in a JEOL 840 scanning electron microscope.

Whole mount in situ hybridization

Embryos were fixed and processed following published protocols () with the following modifications: Endogenous peroxidases were quenched with 6% H2O2 for 2 hr prior to proteinase K digestion and hybridization. Hybridization was performed for 40 hr at 63°C in 5× SSC (pH 4.5), 50% formamide, 5 mM EDTA, 50 µg/ml yeast tRNA, 0.2% Tween 20, 0.5% CHAPS, and 100 µg/ml heparin. Color was developed with NBT/BCIP substrate. Embryos were postfixed and photographed in 50% glycerol in PBS. After color developing, some embryos were washed in PBS, cryoprotected in 30% sucrose in PBS, embedded in OCT, cryosectioned at 20 µm and stained with nuclear fast red before mounting in Permount. |
#include <stdio.h>
#include <limits.h>

/* Sentinel for the running minimum. The original "#define MAX 200;" had a
 * stray semicolon (it only compiled because the macro was used in simple
 * declarations) and silently assumed all values are below 200; INT_MAX is
 * a safe upper bound. */
#define MAX INT_MAX

/* Exchanges the values pointed to by a and b (helper, unused below). */
void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

/* Returns the smallest element of A[0..n-1] (helper, unused below). */
int return_min(int A[], int n)
{
    int min = MAX;
    int i;
    for (i = 0; i < n; i++)
        if (min > A[i])
            min = A[i];
    return min;
}

/* Reads n integers and prints the minimum number of adjacent swaps needed
 * to bring the maximum element to the front and the minimum element to the
 * end of the array. */
int main()
{
    int i, n, value = 0; /* 0 also covers the min_index == max_index case */
    scanf("%d", &n);
    int A[n];
    for (i = 0; i < n; i++)
        scanf("%d", &A[i]);

    /* Last occurrence of the minimum (>=) minimizes its distance to the end. */
    int min = MAX;
    int min_index = 0;
    for (i = 0; i < n; i++)
        if (min >= A[i]) {
            min = A[i];
            min_index = i;
        }

    /* First occurrence of the maximum (strict <) minimizes its distance to
     * the front. INT_MIN (rather than 0) also handles negative inputs. */
    int max = INT_MIN;
    int max_index = 0;
    for (i = 0; i < n; i++)
        if (max < A[i]) {
            max = A[i];
            max_index = i;
        }

    /* max_index swaps bring the max to the front; n - min_index - 1 swaps
     * bring the min to the end. If the min starts left of the max, the two
     * paths cross once, saving one swap. */
    if (min_index > max_index)
        value = max_index + n - min_index - 1;
    else if (min_index < max_index)
        value = n - min_index - 1 + max_index - 1;
    printf("%d", value);
    return 0;
}
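/* Worked example (illustrative): n = 5, A = {3, 1, 5, 2, 4}.
 * min = 1 at index 1 (last occurrence), max = 5 at index 2 (first occurrence).
 * min_index < max_index, so value = (n - min_index - 1) + max_index - 1
 *                                 = (5 - 1 - 1) + 2 - 1 = 4 adjacent swaps. */
|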
A High-Intensity Exercise Intervention Improves Older Women Lumbar Spine and Distal Tibia Bone Microstructure and Function: A 20-Week Randomized Controlled Trial Introduction: The effects of ageing on bone can be mitigated with different types of physical training, such as power training. However, stimuli that combine increasing external and internal loads concomitantly may improve bone quality. The goal of this study was to assess the efficacy of a combined power and plyometric training on lumbar spine and distal tibia microstructure and function. Methods: 38 sedentary elderly women between 60 and 70 years were randomly allocated to experimental (N = 21) and control (N = 17) groups. The effects of the 20-week protocol on lumbar spine microstructure and tibia microstructure and function were assessed by trabecular bone score (TBS), high resolution peripheral quantitative computed tomography (HR-pQCT) and microfinite element analysis. Results: When compared to the effects found in the control group, the experimental group showed significant improvements in lumbar spine TBS (Hedges' g = 0.77) and in distal tibia trabecular thickness (g = 0.82) and trabecular bone mineral density (g = 0.63). Conclusion: Our findings underscore the effectiveness of the proposed intervention, suggesting it as a new strategy to slow down and even reverse the structural and functional losses in the skeletal system due to ageing. While weight-bearing endurance activities, such as walking, are inexpensive, easily available and widely used, they seem to offer a modest stimulus to bone mineral density, at least at the spine level. Consequently, strength training emerged as one of the most successful strategies to induce osteogenesis. Nevertheless, more recent approaches aimed at guaranteeing structural improvements at the musculoskeletal system along with higher functional capacity while maintaining higher quality of life. In addition to improving functional capacity, power training seems to be a better strategy to attenuate losses on the senescent skeletal system. Stengel et al. reported lower bone mineral density (BMD) losses at the lumbar spine in postmenopausal women after a two-year high velocity training than in the group that trained at moderate velocity. Since the training volume was the same in both groups, the authors reported that higher loading rates on the musculoskeletal system due to higher velocities could be the cause of these findings. The stretch-shortening cycle potentiation of a muscle induces even higher loading rates on the musculoskeletal system and it is widely used by athletes to increase muscle power. This strategy allows augmenting the force production in the concentric phase of the movement by releasing previously stored elastic energy in the muscle. The use of this elastic energy can be optimized by a high muscle preactivation followed by a rapid transition between eccentric and concentric phases of the movement. Via plyometric training, an intervention methodology that induces this strategy, adult athletes showed increases in muscle power as well as in bone mass. This strategy has already been applied with older adults; however, an issue arises concerning the instructions given to ensure that the potentiation mechanism was accurately employed. Indeed, while some authors refer solely to the training volume, such as number of jumps per session, others state that the jumps were performed quickly, neglecting the focus on the rapid eccentric-concentric transition.
Another limitation to evaluate the effect of any intervention strategy on the bone is the instrument resolution. Cancellous bone has a higher bone turnover rate than cortical bone; therefore, it is expected that the mechanical load from training induces different stimuli on these bone structures. Indeed, Hamilton et al. reported in a review different responses to exercise training on the bone microstructure of elderly volunteers. While only half of the studies found positive effects of the intervention on trabecular bone, the majority found positive effects on cortical bone. The authors concluded that to assess the bone response to an intervention, it is necessary to measure both cortical and trabecular structures. In addition, it can be agreed that the main goal of an intervention protocol is to increase the load on the bone safely and improve its functionality; however, more studies are needed to evaluate the effects of a physical intervention program on bone function. The reported effects of physical exercise protocols on bone are limited to structural or microstructural analysis, and the meaning of a change in this structure for functional purposes is not clear. Therefore, it is crucial to measure bone function in order to assess the validity of an exercise intervention program on elderly people. Finally, an important aspect is to identify the minimum intervention time. The effect of a 19-week exercise program on the distal tibia trabecular bone density of elderly stroke survivors was reported in Pang et al. While other studies consider implicitly that a minimum of 24 weeks is required to determine effects on bone from physical exercise, this assumption can be revised in the light of more accurate results provided by new experimental procedures, including measurement devices with higher resolution or intervention techniques. We hypothesize that the combination of power and plyometric training can be very efficient to induce changes on the elderly bone, even in relatively short time spans. The main goal of this work is to investigate the effects of a 20-week exercise program, based on the combination of power and plyometric training, on lumbar spine and tibia bone microstructure and function. Additionally, it will be checked if this intervention time is enough to induce measurable changes in bone structure and function.

A. STUDY DESIGN AND PARTICIPANTS

This was a randomized controlled trial with a parallel-group study design to assess the effects of 20 weeks of high impact exercises and power training on lumbar spine and distal tibia bone microstructure and function in older women. The sample size was estimated from a previous study on the distal tibia trabecular bone density (standardized effect size = 0.9) found in Pang et al. Therefore, with a 5% significance level and an expected power of 0.85, the required sample size was 28, 14 for each group. Considering possible dropouts, the experimental group (EG) was increased by 50% while the control group (CG) was increased by 20%. Therefore, thirty-eight elderly women volunteered to participate in this study, and they were randomly allocated to the experimental group (N = 21; 66.9 ± 4.2 years) and the control group (N = 17; 65.0 ± 3.4 years).
The inclusion criteria were: women between 60 and 70 years old, absence of cardiovascular, osteoarticular, musculoskeletal or neurological disorders, uncompensated visual problems, depression or mental illness, negative history of falling or dizziness during one year prior to the study, absence of osteometabolic diseases (such as hyperparathyroidism) or chronic diseases (diabetes mellitus, kidney or liver failure, hyperthyroidism). They were not using medication that may interfere with bone metabolism (such as bisphosphonates, teriparatide, glucocorticoids). The participants were considered sedentary or participating, at most, in sporadic aerobic physical activities (maximum biweekly frequency). Attending less than 75% of the exercise sessions (for the EG) and absence from the final evaluation (for both groups) were defined as the exclusion criteria. This study was registered in ClinicalTrials.gov and follows the CONSORT 2010 statement. All participants were informed about the experimental procedures and gave their signed consent informing their involvement in the study was voluntary. The study was approved by the Local Ethical Committee.

B. TRAINING PROTOCOL

Thrice-weekly sessions of 60 minutes on non-consecutive days for 20 weeks were applied to the EG. The training session was divided into a main part (55 minutes) and a cool down (with stretching exercises). Fourteen exercise stations were applied: drop jump (2 stations), squat jump, leg press, knee extension, knee flexion, ankle dorsiflexion in a low pulley, body weight ankle plantarflexion, chest press, seated row, abdominal muscles exercise and resting (3 stations). Thus, the jump stations had a focus on the external load applied to the musculoskeletal system while the power exercises had a focus on the internal load on the bone. Three sets of 10 repetitions in each station were executed before station rotation; and, at every three exercises, a resting station was given. To avoid order influence, the order of the stations was changed every session and high physical demand exercises (jump stations) were distributed over the training session. The two initial weeks were used as a familiarization period with the training. At week 3, and every four weeks after that, the Brzycki estimation of 1 repetition maximum (1RM) was obtained for each exercise and the load intensity for lower (50% of 1RM) and upper limbs (60% of 1RM) was established. For the drop jump exercise, a step with 9 cm height was used (for the first 6 weeks) that later was replaced by a step with 18 cm height. A 1:2 instructor/participant ratio was needed to ensure the exercises were executed with the fastest concentric phase or eccentric/concentric transitions. Instructions were constantly given to maximize power output, except for upper limb and abdominal exercises, which were asked to be executed at moderate speed. Participants in both groups were evaluated before and after the intervention period. The modified Baecke questionnaire for older adults was used to assess the participants' physical activity level. Borg's perceived exertion scale, given at the end of each session, was used to characterize the intervention protocol. In order to characterize the intensity of the jumps, a force plate (AMTI BP600900), with a sampling rate of 200 Hz, was used. After the intervention period, the EG participants performed six repetitions of the drop jump (3 with 9 cm and 3 with 18 cm) and three repetitions of the squat jump on a force plate.
C. LUMBAR SPINE MICROSTRUCTURE

All DXA measurements as well as the trabecular bone score (TBS) analysis, prior to and after the intervention period, were performed by the same experienced operator. Total lumbar spine (L1-L4) areal BMD (aBMD) was assessed with a bone densitometer (Hologic Inc., Bedford, MA, USA, Discovery model). The DXA scans were assessed according to the International Society for Clinical Densitometry guidelines. To calculate lumbar spine TBS, the TBS iNsight software, version 2.2.0.1 (Med-Imaps, Merignac, France), was used. The TBS software takes into account the pixel gray-level variation in the total lumbar spine aBMD image, so that a low/large number of pixel value variations of high/small amplitude indicates a 2D projection of a deteriorated/good trabecular structure. This new method to describe skeletal microarchitecture from DXA images is strongly correlated with bone histomorphometry. Indeed, it is correlated with micro-computed tomography measures of bone connectivity density, trabecular number, trabecular separation and with vertebral mechanical function. Therefore, low/high TBS values are correlated with worse/better trabecular bone structure.

D. DISTAL TIBIA MICROSTRUCTURE AND FUNCTION: BONE MORPHOMETRY AND TISSUE MINERAL DENSITY

All microarchitecture and function analyses, prior to and after the intervention period, were performed by the same experienced operator. Distal tibia microarchitecture and volumetric BMD (vBMD) were assessed with a high-resolution peripheral quantitative computed tomography (HR-pQCT) system (XtremeCT, Scanco Medical AG, Brüttisellen, Switzerland) on the dominant limb. The 2D detector array in combination with a 0.08 mm point-focus x-ray tube allowed an acquisition of 110 CT slices with a nominal resolution (voxel size) of 82 µm, providing a 3D representation of approximately 9 mm of the distal tibia. The settings used were: effective energy of 60 kVp, x-ray tube current of 95 mA, and matrix size of 1536 × 1536. The participants were asked to sit comfortably with the dominant leg immobilized with a carbon fiber cast. Verbal cues were given to avoid movement artifacts. The operator defined a reference line at the endplate of the tibia and the first CT slice taken was 22.5 mm proximal to the reference line. Using a threshold-based algorithm, the entire volume of interest was separated into cortical and trabecular regions. One third of the apparent cortical bone density value (Ct.BMD) was used to discriminate the cortical from the trabecular region. The microstructure parameters used were: 1. vBMD in milligrams hydroxyapatite per cubic centimeter (mg HA/cm³) for total (Tt.BMD), trabecular (Tb.BMD) and cortical (Ct.BMD) regions; 2. trabecular microstructure parameters: bone volume fraction (BV/TV, 1), thickness (Tb.Th, mm), number (Tb.N, mm⁻¹) and separation (Tb.Sp, mm); and 3. cortical microstructure parameters: thickness (Ct.Th, mm), porosity (Ct.Po, 1) and mean pore diameter (Ct.Po.Dm, mm).

E. MICROFINITE ELEMENT ANALYSIS

Microfinite element models to represent distal tibia biomechanical properties were created in the Scanco Finite Element Analysis Software (Scanco Medical AG), using the distal tibia HR-pQCT scan. A Young's modulus of 10 GPa and a Poisson's ratio of 0.3 were defined for each element. Bone strength (i.e., failure load), based on biomechanical properties, was derived by scaling the resulting load from a test simulating 1% compression, such that 2% of all elements had an effective strain >7000 microstrain.
The following structural functional parameters were used: stiffness (S, N/mm), estimated ultimate failure load (F.ult, N), trabecular and cortical von Mises stress (Tb.VM and C.VM, respectively, N/mm²).

F. STATISTICAL ANALYSIS

All statistical procedures were executed in the Statistical Package for the Social Sciences (SPSS for Windows, 20.0, Chicago, IL, USA). A visual inspection of the data was first performed to identify outliers. Then, the Shapiro-Wilk test was used to assess the data distribution, and the Levene test was used to assess the homoscedasticity of the data. Since all dependent variables exhibited normal distribution, multivariable normality was assumed. Box's M was used to test the equality of the variance/covariance matrices. An unpaired t-test was used to identify differences, prior to intervention, between groups on age, mass, height, physical activity level, femoral neck T-score, lumbar spine T-score and in all bone microstructure and function dependent variables. To compare the effects of the intervention period between groups, a general linear mixed effect model was conducted. Time and Group were assumed to be fixed effects in the model, with the participants as a random effect. The magnitude of the intervention period effects between groups was determined by Hedges' g effect size and the respective 95% confidence interval. Cohen's effect size benchmarks of trivial (−0.2 ≤ d ≤ 0.2), small (−0.5 ≤ d < −0.2 and 0.2 < d ≤ 0.5), moderate (−0.8 ≤ d < −0.5 and 0.5 < d ≤ 0.8) and large (d < −0.8 and d > 0.8) were employed.
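As a point of reference, the bias-corrected between-group effect size used above can be written in its standard form (our rendering; the paper itself does not print the formula, and the symbols n_E, n_C, s_E, s_C for group sizes and standard deviations are ours):

g = J \cdot \frac{\bar{x}_E - \bar{x}_C}{s_p}, \qquad s_p = \sqrt{\frac{(n_E - 1) s_E^2 + (n_C - 1) s_C^2}{n_E + n_C - 2}}, \qquad J \approx 1 - \frac{3}{4(n_E + n_C) - 9}

where J is the small-sample correction that distinguishes Hedges' g from Cohen's d.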
III. RESULTS

A total of 353 older women replied to the public advertising and, after the medical report analysis and the interview, 43 attended all requirements. Due to schedule-related incompatibility, the final sample was composed of 38 participants. The participants in the EG attended, on average, 92% of the planned training sessions and none attended less than 80%. Neither EG nor CG participants met the exclusion criteria; therefore, the study had no dropouts.

A. PARTICIPANTS CHARACTERISTICS AT BASELINE

Initial assessment of both groups prior to the intervention period showed no differences on age, anthropometric characteristics, physical activity level nor on femur and lumbar spine bone density (Table 1). All bone microstructure and function dependent variables were tested at baseline and no differences were found between the EG and the CG (P-values for the independent t tests ranged between 0.218 and 0.882). All variables showed a significant pre/post intervention correlation (Pearson's r ranging from 0.995 to 0.665, with P-values ranging from < 0.001 to 0.005), allowing the inclusion of the preintervention value as a covariable. It must be noted that both groups were found to be, at the beginning of the study, within the same TBS category, partially degraded microarchitecture (TBS ≤ 1.2 defines degraded microarchitecture, TBS between 1.20 and 1.35 is partially degraded microarchitecture, and TBS ≥ 1.35 is considered normal).

B. TRAINING INTENSITY

The mean perceived exertion measured after the training session was 9.5 (9.3 − 9.7) for all training sessions in the 20-week period. In the first training stage of the drop jump (9 cm step) the participants produced an impact of 2.4 (2.2 − 2.6) BW (body weight) and in the second (18 cm step) an impact of 3.3 (2.9 − 3.7) BW. In the squat jump exercise a mean impact of 4.0 (3.7 − 4.3) BW was produced with a mean height of 9.7 (8.4 − 11.0) cm.

C. OUTCOME MEASURES

Significant changes were found between groups after the intervention period solely for the bone trabecular structure and function (Table 2). While cortical bone seems to remain unchanged, an improvement was found in the lumbar spine TBS and tibia trabecular thickness of the EG (Fig. 1).

IV. DISCUSSION

Our findings support that the proposed physical exercise protocol is a viable tool to reverse bone losses in osteopenic elderly women with a low to moderate physical activity level, even at short intervention periods (less than 6 months). Moreover, this is the first study to show the effects of an exercise program on lumbar spine microstructure via trabecular bone score as well as its effects on distal tibia bone function via microfinite element analysis. The combination of impact exercises, which increase external loads of the skeletal system, with strength training, which increases internal loads, has been suggested as an optimal protocol to improve BMD in postmenopausal women. Our results support and extend this recommendation. Activities focused on increasing muscle contraction forces, such as in hypogravity environments (e.g., swimming or cycling), should produce high forces on bone; however, they did not induce bone structural or functional gains in older adults according to previous studies. However, internal loads seem to amplify the effects of the external forces (i.e., ground reaction forces), so that a combination of internal and external loads seems to be the best strategy to improve a senescent bone. Our findings support the efficacy of this strategy on elderly women's bone microstructure with a low perception of physical demand (9.5 on Borg's perceived exertion scale, between the descriptors ''Very light'' and ''Fairly light'').
Nonetheless, the bone volume fraction preservation (0.16%) suggests no harmful changes, corroborated also by an increasing ability to resist loads shown in the finite element analysis. Indeed, tibia stiffness and von Mises stress values significantly increased after the intervention period, indicating an increased resistance of the bone structure. This microfinite element analysis provides a direct estimate of the bone mechanical properties and indicates an improvement at a microstructural level consistent with a bone tissue more resistant to fractures. Although the The graph on the right expresses Hedges' g effect size and 95% confidence intervals. A blank mark denotes significant difference (p < 0.05) between the two groups and a solid mark the absence of significant differences. The shaded area specifies the interval in which the effect size of the difference between groups is trivial (−0.2 < g < 0.2). change in the stiffness represents an enhancement of the bone tissue, the change in von Mises stress seems to have more important implications, since it represents the trabecular bone increased ability to endure forces in different directions. Hence, since the forces acting on the musculoskeletal system are three-dimensional, this parameter offers higher ecological validity to understand the functional effects of an intervention. Karinkanta et al. found a decrease (−1.20%) in bone strength index with a combined resistance and balancejumping protocol. Since the control group experienced a higher reduction on this parameter (−2.93%), the authors argued that the proposed protocol was able to attenuate the losses on bone strength. Our findings, however, support that the presented intervention protocol enhances bone strength. Similarly, Allison et al. found significant improvement in femoral neck biomechanical variables after 12 months of a home-based impact exercise intervention in elderly men. Unilateral jumps that elicited a load of 2.7 to 3.0 BW were found to increase femoral neck cross-sectional moment of inertia (2.4%) and decrease its buckling ratio (−8.3%), in the exercise leg, but also in the control leg, (0.9% and −4.6%, respectively); suggesting interference between them. Nonetheless, it was a one-year daily intervention with twice the mechanical load offered in the present study. We obtained positive results in only 20 weeks with half the load, indicating a continuous improvement during the intervention period. Ashe et al., showed no differences after 12 months of strength training (biweekly sessions) in tibia cortical volumetric bone mineral density (−0.45%) nor in bone strength (0.05%) assessed by cross-sectional moment of inertia. Due to lack of similar methodologic approaches, the positive effects of our high-intensity exercise intervention strategy can only be compared to the effects of a drug-therapy intervention. In a 12-month pharmacologic intervention in postmenopausal women, Tsai et al. found that a combined teriparatide and denosumab therapy improved tibia stiffness and failure load by 5.3% and 4.5%, respectively. Although these responses are superior to those found in the present study (2.25% and 1.92%, respectively), they were reached achieved after an much longer intervention and using drugs that could induce serious side effects such as atrial fibrillation or esophageal cancer. 
Furthermore, it has also been reported that pharmacological intervention may have poor compliance either due to unintentional (forgetfulness) or intentional nonadherence (due to treatment costs and fear of side effects). Moreover, a physical exercise intervention such as the one proposed in our work has positive side effects, as it not only promotes bone gains but also improves the musculoskeletal system's functional capacity. We found the loads applied by the intervention protocol suitable to induce changes in the lumbar spine microstructure of osteopenic elderly women. While the changes in the EG (1.07%) allowed approaching the TBS upper bound (1.300, ''good microarchitecture''), the absence of stimulus in the CG induced a decrease in TBS (−2.05%), reaching the lower threshold (1.200, ''degraded microarchitecture''). This outcome suggests a significant decrease in fracture risk induced by the intervention protocol that, due to the lack of studies with a similar approach, can only be compared with the effects of a pharmacologic intervention. A physical exercise program seems to be superior since it not only improves bone status but contributes to increasing functional capacity and quality of life. Furthermore, it can be argued that the effects of pharmacologic intervention might be overestimated in the elderly population. For instance, Krieg et al. found an annual increase of 0.20% in older women's TBS with different bisphosphonate drugs. Therefore, the exercise-based intervention we presented could be a first line of defense against bone deterioration before the pharmacological treatment, which has also shown significant impact on bone architecture. It is very likely that the combination of different treatments (exercise, pharmacological), with individualized planning for each patient, would produce better results. In a cross-sectional study, Heini and colleagues showed that female athletes who underwent different types of physical activity had small differences in TBS score. The authors suggested low impact exercises (like walking or endurance running) may lead to lower TBS scores when compared to high impact exercise, such as the type of exercise that we proposed in this work. There is a general consensus about the health benefits of exercise, which can be recommended as a prescription. In this respect, the set of exercises recommended showed clear improvements in bone function and they could be easily integrated in a more complete exercise program involving cardio-pulmonary, motor control and balance training. Our results support the hypothesis that continuing the proposed intervention for longer periods may delay the natural ageing bone decay and potentially increase bone health, and this points out a line of future research. The investigation of the mechanisms by which the intervention program improved the participants' bone health was not our main goal. However, it was considered that the compression forces produced by the muscles and the impacts, which are enhanced by the proposed training method, would explain our results. The impact, more specifically the strain energy density on trabecular bone elicited by the intervention, agrees with the classic models of bone formation.
Moreover, caution must be taken when prescribing any exercise protocol to frail older adults due to the risk of falls, and because the loads applied by the physical activity, and by jump exercises in particular, may be too high for a deteriorated bone structure and lead to a fracture. Although another limitation of this work is that we could not apply finite element modeling to the lumbar spine, TBS has been found to be intimately related to the tissue microstructure and to its function. Indeed, it is correlated with the number of trabeculae, their separation and connectivity density; higher scores reflect a fracture-resistant micro-architecture and lower scores a fracture-prone one.
V. CONCLUSION
In conclusion, we found the 20-week power/plyometric training protocol able to counteract age-related degradation of the tibial bone microstructure in elderly women, enhancing its functional stiffness and resistance to fracture. Moreover, the presented high-intensity exercise intervention was able to induce changes in lumbar spine microarchitecture consistent with a more fracture-resistant status. Considering its high adherence, it could be a fundamental part of a non-pharmacologic strategy to reverse bone loss in older adults.
/*
* Copyright (c) 2008-2017 Haulmont.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.haulmont.cuba.web.security.providers;
import com.haulmont.cuba.security.auth.AuthenticationDetails;
import com.haulmont.cuba.security.auth.Credentials;
import com.haulmont.cuba.security.auth.SimpleAuthenticationDetails;
import com.haulmont.cuba.security.global.LoginException;
import com.haulmont.cuba.security.global.UserSession;
import com.haulmont.cuba.web.security.AnonymousUserCredentials;
import com.haulmont.cuba.web.security.LoginProvider;
import com.haulmont.cuba.web.security.WebAnonymousSessionHolder;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import javax.annotation.Nullable;
import javax.inject.Inject;
import java.io.Serializable;
import java.util.Locale;
import java.util.Map;
@Component("cuba_AnonymousLoginProvider")
public class AnonymousLoginProvider implements LoginProvider, Ordered {
@Inject
protected WebAnonymousSessionHolder anonymousSessionHolder;
@SuppressWarnings("RedundantThrows")
@Nullable
@Override
public AuthenticationDetails login(Credentials credentials) throws LoginException {
if (!(credentials instanceof AnonymousUserCredentials)) {
throw new ClassCastException("Credentials cannot be cast to AnonymousUserCredentials");
}
AnonymousUserCredentials anonymousCredentials = (AnonymousUserCredentials) credentials;
UserSession anonymousSession = anonymousSessionHolder.getAnonymousSession();
Locale credentialsLocale = anonymousCredentials.getLocale();
if (credentialsLocale != null) {
anonymousSession.setLocale(credentialsLocale);
}
if (anonymousCredentials.getTimeZone() != null
&& Boolean.TRUE.equals(anonymousSession.getUser().getTimeZoneAuto())) {
anonymousSession.setTimeZone(anonymousCredentials.getTimeZone());
}
anonymousSession.setAddress(anonymousCredentials.getIpAddress());
anonymousSession.setClientInfo(anonymousCredentials.getClientInfo());
if (anonymousCredentials.getSessionAttributes() != null) {
for (Map.Entry<String, Serializable> attribute : anonymousCredentials.getSessionAttributes().entrySet()) {
anonymousSession.setAttribute(attribute.getKey(), attribute.getValue());
}
}
return new SimpleAuthenticationDetails(anonymousSession);
}
@Override
public boolean supports(Class<?> credentialsClass) {
return AnonymousUserCredentials.class.isAssignableFrom(credentialsClass);
}
@Override
public int getOrder() {
return HIGHEST_PLATFORM_PRECEDENCE + 10;
}
} |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ip_range module
Used by the `ctlplane_ip_range` role.
"""
import netaddr
from ansible.module_utils.basic import AnsibleModule
from yaml import safe_load as yaml_safe_load
DOCUMENTATION = '''
---
module: ip_range
short_description: Check the size of an IP range
description:
- Check the size of an IP range against a minimum value.
- Used by the `ctlplane_ip_range` role.
- "Owned by the DFG: DF"
options:
start:
required: true
description:
- Start IP
type: str
end:
required: true
description:
- End IP
type: str
min_size:
required: true
description:
- Minimum size of the range
type: int
author: "<NAME>"
'''
EXAMPLES = '''
- hosts: webservers
tasks:
- name: Check the IP range
ip_range:
start: 192.0.2.5
end: 192.0.2.24
min_size: 15
'''
def check_arguments(start, end, min_size):
'''Validate format of arguments'''
errors = []
# Check format of arguments
try:
startIP = netaddr.IPAddress(start)
except netaddr.core.AddrFormatError:
errors.append('Argument start ({}) must be an IP'.format(start))
try:
endIP = netaddr.IPAddress(end)
except netaddr.core.AddrFormatError:
errors.append('Argument end ({}) must be an IP'.format(end))
if not errors:
if startIP.version != endIP.version:
errors.append("Arguments start, end must share the same IP "
"version")
if startIP > endIP:
errors.append("Lower IP bound ({}) must be smaller than upper "
"bound ({})".format(startIP, endIP))
if min_size < 0:
errors.append('Argument min_size ({}) must not be negative'
.format(min_size))
return errors
def check_IP_range(start, end, min_size):
'''Compare IP range with minimum size'''
errors = []
iprange = netaddr.IPRange(start, end)
if len(iprange) < min_size:
errors = [
'The IP range {} - {} contains {} addresses.'.format(
start, end, len(iprange)),
'This might not be enough for the deployment or later scaling.'
]
return errors
def main():
module = AnsibleModule(
argument_spec=yaml_safe_load(DOCUMENTATION)['options']
)
start = module.params.get('start')
end = module.params.get('end')
min_size = module.params.get('min_size')
# Check arguments
errors = check_arguments(start, end, min_size)
if errors:
module.fail_json(msg='\n'.join(errors))
else:
# Check IP range
range_errors = check_IP_range(start, end, min_size)
if range_errors:
module.fail_json(msg='\n'.join(range_errors))
else:
module.exit_json(msg='success')
if __name__ == '__main__':
main()
|
// src/modules/questionnaire/questionnaire.repository.ts
import { Repository } from 'typeorm';
import { EntityRepository } from 'typeorm/decorator/EntityRepository';
import { QuestionnaireEntity } from './questionnaire.entity';
@EntityRepository(QuestionnaireEntity)
export class QuestionnaireRepository extends Repository<QuestionnaireEntity> { } |
Kinect may be touted as the Xbox 360's answer to motion gaming, but it's not just about making you the controller. Kinect can also let you interact with television, movies, music and the Xbox 360 like Tom Cruise in Minority Report, just without the fancy gloves.
This video is quite long, but in it you'll see how to send harassing voice messages to your Xbox Live friends, how to whip through movies by grabbing the air over your head and pulling, and how Kinect can hear you whisper in a room filled with bass-pumping Texas rock.
Kinect's ability to turn the air in front of your face into a touch screen that can control movies, music and a bit of the dashboard is the sort of magical experience Microsoft has been selling since they unveiled Kinect.
That said, the offerings at launch are very limited. You can create avatars, you can message friends, you can check out Last.FM, Zune or ESPN and that's about it.
What you can't do is use the Kinect's ability to recognize you as a new form of parental controls. Nor can you turn your Xbox 360 on or off. And you can't (at least yet) control my favorite non-gaming app on the dashboard: Netflix.
Still, what it does it does very well and I'm sure more is coming. |
package org.dynjs.runtime.linker.js.shadow;
import org.dynjs.runtime.JSObject;
/**
* @author <NAME>
*/
public interface ShadowObjectManager {
    /** Returns the shadow object associated with the given primary object. */
    JSObject getShadowObject(Object primary);

    /** Returns the shadow object for the given primary object; {@code create} controls whether a missing shadow should be created. */
    JSObject getShadowObject(Object primary, boolean create);

    /** Associates the given shadow object with the primary object. */
    void putShadowObject(Object primary, JSObject shadow);
}
|
// This is declared in Java code and in Object.h.
// It should never be called with JV_HASH_SYNCHRONIZATION
void
java::lang::Object::sync_init (void)
{
throw new IllegalMonitorStateException(JvNewStringLatin1
("internal error: sync_init"));
} |
Physical and Optical Characterization of Mn-doped ZnS Nanoparticles Prepared via Reverse Micelle Method Assisted by Compressed CO2
This paper presents a study of Mn-doped ZnS (ZnS:Mn) nanoparticles synthesized using a reverse micelle method assisted by compressed CO2 at 60 bar and 40 °C. The effects of retention time (30-120 min) on particle formation were studied. The optical properties, size, morphology, and structure of the resulting particles were investigated. The results showed that ZnS:Mn nanoparticles were successfully formed with particle sizes ranging from 2 to 3 nm, which showed quantum size effects. The highest photoluminescence (PL) intensity ratio was found at 60 min retention time. The increase in PL intensity ratio indicated increased homogeneity of nanoparticle growth with decreased surface defects.
Introduction
Zinc sulfide (ZnS) is a nontoxic II-VI semiconductor material with a direct, wide bandgap that is widely studied because of its numerous potential applications, including solar cells, bio-imaging, wavelength-tunable lasers, and electronic and optoelectronic nanodevices. The presence of optical impurities affects the physical and chemical characteristics of ZnS. Incorporating suitable dopants is one approach to enhancing the potential of the ZnS semiconductor as a phosphor. Issues commonly encountered in the synthesis of nanoparticles are size control and agglomeration of the particles. In the case of doped ZnS nanoparticles, a suitable synthesis method is required not only to prevent agglomeration but also to improve the optical properties. Among the various techniques for producing nanoparticles, the reverse micelle method is considered a promising technique for preparing less agglomerated and more monodispersed nanoparticles. The advantage of this method is the ability to limit excess particle growth once the particle size approaches that of a H2O nanodroplet. However, difficulty is usually encountered in the post-process recovery step, which affects particle size and dispersion. In recent years, some researchers have combined antisolvent CO2 with the reverse micelle method for the formation of nanoparticles. It is expected that the properties of the water-in-oil reverse micelle system can be easily tuned by the pressure and temperature of the CO2. Nanoparticle synthesis using reverse micelles assisted by CO2 as an antisolvent is an interesting phenomenon and needs further study. Therefore, the aim of this work is to produce ZnS:Mn nanoparticles with better particle characteristics and optical properties.
ZnS:Mn Particle Synthesis
The ZnS:Mn solution was prepared using the reverse micelle method reported previously. In this study, the desired amounts of ZnS:Mn2+ (4%) sample solutions were loaded into a 1 L standard steel cylinder with a sight glass. The cell was then sealed and pressurized with CO2 antisolvent for four different retention times (30-120 min). The conditions in the vessel were kept at a relatively low pressure of 60 bar and a temperature of 40 °C. The precipitated ZnS:Mn particles were collected, washed with aqueous ethanol solution, centrifuged and dried in an oven.
Particle Characterization
The X-ray diffraction (XRD) patterns of the powdered samples were recorded using a D8 Advance AXS X-ray diffractometer over a 2θ range from 20° to 80°. The crystal size was estimated from the full width at half maximum (FWHM) of the XRD peaks.
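As a side note on this estimate: the Debye-Scherrer relation D = Kλ/(β cos θ) converts a peak's FWHM into an average crystallite size. The short Python sketch below illustrates the arithmetic; the shape factor K = 0.9, the Cu Kα wavelength, and the FWHM value are assumptions for illustration, not values reported in this paper.

```python
import numpy as np

# Assumed values: Cu K-alpha wavelength and shape factor K = 0.9.
WAVELENGTH_NM = 0.15406  # Cu K-alpha, nm
K = 0.9                  # dimensionless shape factor

def scherrer_size(two_theta_deg: float, fwhm_deg: float) -> float:
    """Crystallite size in nm from peak position (2-theta) and FWHM, both in degrees."""
    theta = np.radians(two_theta_deg / 2.0)  # Bragg angle in radians
    beta = np.radians(fwhm_deg)              # FWHM in radians
    return K * WAVELENGTH_NM / (beta * np.cos(theta))

# e.g. a reflection near 2-theta = 28.5 deg with an illustrative 3.0 deg FWHM
print(f"D = {scherrer_size(28.5, 3.0):.1f} nm")  # ~2.7 nm, in the 2-3 nm range
```

Broad peaks (large β) thus correspond directly to the small, quantum-confined crystallites reported here.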
The fluorescence measurements were performed on a photoluminescence (PL) SP920 spectrophotometer. Physical characterization of the particles was then performed using transmission electron microscopy (TEM).
Physical Characterization of ZnS:Mn
The XRD patterns (Figure 1) of the ZnS:Mn nanoparticles at different retention times exhibit three diffraction peaks at 2θ = 28.50°, 48.30° and 56.50°, which correspond to the (111), (220) and (311) planes, respectively. These peaks were indexed to a zinc blende structure corresponding to JCPDS file no. 05-0566, with no impurity peaks observed. The average crystal size, calculated according to the Debye-Scherrer formula, ranged from 2.0 to 3.0 nm (Table 1), which falls within the quantum confinement region. However, an additional peak appeared at 32° when the retention time was set to 120 min. This suggests that, at long retention times, the dissolution of CO2 in the micelle core leads to micellar instability and provides excess energy that segregates Mn2+ atoms out of the ZnS lattice. Generally, the particles in all samples are spherical and homogeneously dispersed, as characterized by TEM (Figure 2). These results provide indirect evidence of the role of CO2 in effectively improving surfactant capping on the surface of the ZnS:Mn nanoparticles. Figure 3 shows that even though the ZnS:Mn particles showed only small size differences at different retention times, the relative PL intensities varied significantly. This might be due to the control of the droplet size by the surfactant concentration in the water-heptane matrix. The orange emission intensity increased linearly with the quenching of the blue emission and started to decrease for the samples with 90 min retention time. Table 1 and Figure 3 show a high PL intensity ratio at 60 min of exposure to compressed CO2, indicating the formation of monodisperse nanoparticles. This phenomenon can be discussed in terms of the nanoparticle-surfactant-solvent interaction. Upon pressurization with CO2, the gas continues to dissolve into the micelle core and decreases the solvation of the AOT-capped nanoparticles, thus causing the removal of water from the reverse micelle droplet. Therefore, trap sites on the nanoparticle surface were decreased and crystallinity was promoted, which dramatically improved the PL intensity. Previously, several researchers studied the modification of ZnS:Mn nanocrystals using different surfactants, W ([H2O]/[AOT]) values and types of mixing method in a reverse micelle system of H2O/AOT/n-heptane. However, tuning the properties of the surfactant solution using CO2 is a new development in the modification of nanoparticles. The mechanism by which CO2 stabilizes the reverse micelle and improves the surface defects of nanoparticles has never been discussed before. Therefore, this research will greatly contribute to the understanding of nanoparticle formation using compressed CO2 as the antisolvent.
Conclusion
A quaternary AOT/CO2/n-heptane/H2O reverse micelle system was found to be effective in producing uniform, monodisperse and nanosized Mn-doped ZnS particles. The ZnS:Mn nanoparticles exhibited a strong quantum-size effect; dopant incorporation into the ZnS host structure reduced surface defects and enhanced the optical properties.
package config
import (
"errors"
"net"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestNewDefaultConfig(t *testing.T) {
cfg := NewDefaultConfig()
assert.Equal(t, cfg.LogLevel, "info")
assert.Equal(t, cfg.LogOutput, "stderr")
assert.Equal(t, cfg.Provisioner, XDSV3FileProvisioner)
assert.Equal(t, cfg.GRPCListen, DefaultGRPCListen)
assert.Equal(t, cfg.EtcdKeyPrefix, DefaultEtcdKeyPrefix)
assert.Equal(t, cfg.APISIXHomePath, DefaultAPISIXHomePath)
assert.Equal(t, cfg.APISIXBinPath, DefaultAPISIXBinPath)
assert.Equal(t, cfg.RunMode, StandaloneMode)
}
func TestConfigValidate(t *testing.T) {
cfg := NewDefaultConfig()
cfg.Provisioner = "redis"
assert.Equal(t, cfg.Validate(), ErrUnknownProvisioner)
cfg.Provisioner = ""
assert.Equal(t, cfg.Validate(), errors.New("unspecified provisioner"))
cfg = NewDefaultConfig()
cfg.GRPCListen = "127:8080"
assert.Equal(t, cfg.Validate(), ErrBadGRPCListen)
cfg.GRPCListen = "127.0.0.1:aa"
assert.Equal(t, cfg.Validate(), ErrBadGRPCListen)
cfg.GRPCListen = "hello"
assert.Equal(t, cfg.Validate(), ErrBadGRPCListen)
cfg.Provisioner = "xds-v3-grpc"
assert.Equal(t, cfg.Validate(), ErrEmptyXDSConfigSource)
}
func TestGetRunningContext(t *testing.T) {
assert.Nil(t, os.Setenv("POD_NAMESPACE", "apisix"))
rc := getRunningContext()
assert.Equal(t, rc.PodNamespace, "apisix")
assert.Nil(t, os.Setenv("POD_NAMESPACE", ""))
rc = getRunningContext()
assert.Equal(t, rc.PodNamespace, "default")
assert.NotNil(t, net.ParseIP(rc.IPAddress))
}
|
// pkg/hcloud/apis/mock/provider_spec.go
/*
Copyright (c) 2021 SAP SE or an SAP affiliate company. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package mock provides all methods required to simulate a driver
package mock
import (
"github.com/23technologies/machine-controller-manager-provider-hcloud/pkg/hcloud/apis"
)
const (
TestCluster = "xyz"
TestImageName = "ubuntu-20.04"
TestProviderSpec = "{\"cluster\":\"xyz\",\"zone\":\"hel1-dc2\",\"imageName\":\"ubuntu-20.04\",\"serverType\":\"cx11-ceph\",\"placementGroupID\":\"42\",\"sshFingerprint\":\"00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff\"}"
TestServerType = "cx11-ceph"
TestSSHFingerprint = "00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff"
TestZone = "hel1-dc2"
TestInvalidProviderSpec = "{\"test\":\"invalid\"}"
)
// ManipulateProviderSpec changes given provider specification.
//
// PARAMETERS
// providerSpec *apis.ProviderSpec Provider specification
// data map[string]interface{} Members to change
func ManipulateProviderSpec(providerSpec *apis.ProviderSpec, data map[string]interface{}) *apis.ProviderSpec {
for key, value := range data {
manipulateStruct(&providerSpec, key, value)
}
return providerSpec
}
// NewProviderSpec generates a new provider specification for testing purposes.
func NewProviderSpec() *apis.ProviderSpec {
return &apis.ProviderSpec{
Cluster: TestCluster,
Zone: TestZone,
ImageName: TestImageName,
ServerType: TestServerType,
SSHFingerprint: TestSSHFingerprint,
PlacementGroupID: TestPlacementGroupID,
}
}
|
# repo: mblackgeo/flask-cognito-jwt-example
from os.path import abspath, dirname, join
from typing import Optional
from pydantic import BaseSettings, Field
class CDKConfig(BaseSettings):
NAMESPACE: str = Field(default="webapp")
ENV: str = Field(default="prod")
AWS_REGION: str = Field(default="eu-west-1")
AWS_ACCOUNT: str = Field(...)
AWS_DOMAIN_NAME: Optional[str] = Field(None)
AWS_API_SUBDOMAIN: Optional[str] = Field(None)
AWS_COGNITO_SUBDOMAIN: Optional[str] = Field(None)
class Config:
env_prefix = ""
case_sensitive = False
env_file = abspath(join(dirname(__file__), "..", "..", ".env"))
env_file_encoding = "utf-8"
cfg = CDKConfig()
|
/**
* Free the resources (subscribers), associated with the topic
*
* @param topicName The topic name to unsubscribe
* @throws BrokerException Thrown when it is not possible to destroy the {@link Subscriber} instance
*/
public void unsubscribe(String topicName) throws BrokerException {
List<Subscriber> subscribersByTopic = subscribers.remove(topicName);
if (subscribersByTopic != null) {
for (Subscriber subscriber : subscribersByTopic) {
subscriber.cleanUp();
}
}
} |
# manibhushan05/transiq: web/transiq/restapi/models.py
from django.conf import settings
from django.db.models.signals import post_save
from django.dispatch import receiver
from rest_framework.authtoken.models import Token
from django.db import models
from django.contrib.auth.models import User
from simple_history.models import HistoricalRecords
from employee.models import Employee
from restapi.signals import booking_status_mapping_post_save_handler
from sme.models import Sme
from team.models import ManualBooking
USER_CATEGORIES = (
('customer', 'Customer'),
('employee', 'Employee'),
('supplier', 'Supplier'),
('broker', 'Broker')
)
EMP_ROLES = (
('office_data_entry', 'Office Data Entry'),
('ops_executive', 'Ops Executive'),
('accounts_payable', 'Accounts Payable'),
('accounts_receivable', 'Accounts Receivable'),
('sales', 'Sales'),
('traffic', 'Traffic'),
('city_head', 'City Head'),
('management', 'Management'),
('tech', 'Technology'),
)
EMP_STATUS = (
('active', 'Active'),
('inactive', 'Inactive'),
)
ACCESS_PERMISSIONS = (
('read_only', 'Read Only'),
('edit', 'Edit'),
)
CONSUMER_PLATFORMS = (
('web', 'Web'),
('mobile', 'Mobile'),
('all', 'All'),
)
EMP_BOOKING_STATUS_ACTION = (
('responsible', 'Responsible'),
('dependent', 'Dependent'),
)
BOOKING_STATUSES = (
('confirmed', 'Confirmed'),
('loaded', 'Loaded'),
('lr_generated', 'Lr Generated'),
('advance_paid', 'Advance Paid'),
('unloaded', 'Unloaded'),
('pod_uploaded', 'PoD Uploaded'),
('pod_verified', 'PoD Verified'),
('invoice_raised', 'Invoice Raised'),
('invoice_confirmed', 'Invoice Confirmed'),
('balance_paid', 'Balance Paid'),
('party_invoice_sent', 'Party Invoice Sent'),
('inward_followup_completed', 'Inward Followup Completed'),
('complete', 'Complete'),
)
BOOKING_STATUSES_LEVEL = (
('primary', 'Primary'),
('secondary', 'Secondary'),
)
BOOKING_STATUS_STAGE = (
('in_progress', 'In Progress'),
('done', 'Done'),
('reverted', 'Reverted'),
('escalated', 'Escalated'),
)
TD_FUNCTIONS = (
('new_inquiry', 'Submit New Inquiry'),
('customer_inquiries', 'Customer Inquiries'),
('open_inquiries', 'Open Inquiries'),
('my_inquiries', 'My Inquiries'),
('pending_payments', 'Pending Payments'),
('pending_lr', 'Pending LR'),
('in_transit', 'In Transit'),
('invoice_confirmation', 'Invoice Confirmation'),
('delivered', 'Delivered'),
('confirm_booking', 'New Booking'),
('lr_generation', 'Generate LR'),
('pay_advance', 'Pay Advance'),
('pay_balance', 'Pay Balance'),
('send_invoice', 'Send Invoice'),
('verify_pod', 'Verify PoD'),
('raise_invoice', 'Raise Invoice'),
('confirm_invoice', 'Confirm Invoice'),
('inward_entry', 'Inward Entry'),
('process_payments', 'Process Payments'),
('reconcile', 'Reconcile'),
)
@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
if created:
Token.objects.create(user=instance)
class UserCategory(models.Model):
category = models.CharField(unique=True, max_length=15, null=True, choices=USER_CATEGORIES)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='usercategory_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s' % (self.category)
class EmployeeRoles(models.Model):
role = models.CharField(unique=True, max_length=35, null=True, choices=EMP_ROLES, default='office_data_entry')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='employeeroles_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s' % self.role
def get_role(self):
rol = '' if not self.role else self.get_role_display()
return {'id': self.id, 'role': rol}
class EmployeeRolesMapping(models.Model):
employee = models.ForeignKey(Employee, blank=True, null=True, related_name='employee_role_mapping',
on_delete=models.CASCADE)
employee_role = models.ForeignKey(EmployeeRoles, blank=True, related_name='employee_role',
null=True, on_delete=models.CASCADE)
employee_status = models.CharField(max_length=15, null=True, choices=EMP_STATUS, default='inactive')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='employeerolesmapping_created_by',
on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s' % (self.employee.username, self.employee_role.role, self.employee_status)
def get_employee_role_username(self):
role = '' if not self.employee_role else self.employee_role.get_role()
e_name = '' if not self.employee else self.employee.username.username
return {'role': role, 'username': e_name}
class BookingStatuses(models.Model):
status = models.CharField(max_length=35, null=True, choices=BOOKING_STATUSES, default='confirmed')
# Time limit is in minutes
time_limit = models.IntegerField(blank=True, null=True, default=0)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='bookingstatus_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s' % (self.get_status_display(), self.time_limit)
def get_status(self):
return '' if not self.status else self.get_status_display()
class BookingStatusChain(models.Model):
booking_status = models.ForeignKey(BookingStatuses, null=True, related_name='booking_status', on_delete=models.CASCADE)
level = models.CharField(max_length=15, null=True, choices=BOOKING_STATUSES_LEVEL, default='secondary')
primary_preceded_booking_status = models.ForeignKey(BookingStatuses, null=True,
related_name='primary_preceded_booking_status', on_delete=models.CASCADE)
primary_succeeded_booking_status = models.ForeignKey(BookingStatuses, null=True,
related_name='primary_succeeded_booking_status', on_delete=models.CASCADE)
secondary_preceded_booking_status = models.ForeignKey(BookingStatuses, null=True,
related_name='secondary_preceded_booking_status', on_delete=models.CASCADE)
secondary_succeeded_booking_status = models.ForeignKey(BookingStatuses, null=True,
related_name='secondary_succeeded_booking_status', on_delete=models.CASCADE)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='bookingstatuschain_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s' % self.booking_status.get_status_display()
def get_booking_status(self):
return '' if not self.booking_status else self.booking_status.get_status_display()
class EmployeeRolesBookingStatusMapping(models.Model):
employee_roles_mapping = models.ForeignKey(EmployeeRolesMapping, blank=True, null=True, on_delete=models.CASCADE)
booking_status_chain = models.ForeignKey(BookingStatusChain, blank=True, null=True, on_delete=models.CASCADE)
assignment_status = models.CharField(max_length=15, null=True, choices=EMP_STATUS, default='inactive')
action = models.CharField(max_length=15, null=True, choices=EMP_BOOKING_STATUS_ACTION, default='responsible')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='er_bs_mapping_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s' % (self.employee_roles_mapping.employee.username,
self.employee_roles_mapping.employee_role.role, self.booking_status_chain.booking_status.status)
class BookingStatusesMapping(models.Model):
manual_booking = models.ForeignKey(ManualBooking, blank=True, null=True, related_name='bookings',
on_delete=models.CASCADE)
booking_status_chain = models.ForeignKey(BookingStatusChain, blank=True, null=True, on_delete=models.CASCADE)
booking_stage = models.CharField(max_length=15, null=True, choices=BOOKING_STATUS_STAGE, default='in_progress')
due_date = models.DateTimeField(null=True, blank=True)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='bs_mapping_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s' % (self.manual_booking.booking_id if self.manual_booking else '',
self.booking_status_chain.booking_status.status if self.booking_status_chain else '',
self.booking_stage)
def get_booking_status_mapping(self):
return {'booking_id': self.manual_booking.booking_id, 'booking_status': self.booking_status_chain.booking_status.status,
'booking_stage': self.booking_stage}
post_save.connect(booking_status_mapping_post_save_handler, sender=BookingStatusesMapping)
class BookingStatusesMappingComments(models.Model):
booking_status_mapping = models.ForeignKey(BookingStatusesMapping, blank=True, null=True,
related_name='booking_status_mapping_comments', on_delete=models.CASCADE)
comment = models.CharField(max_length=50, null=True)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='bs_mapping_cmts_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def get_booking_status_comment(self):
bs_mapping = '' if not self.booking_status_mapping else self.booking_status_mapping.get_booking_status_mapping()
return {'id': self.id, 'booking_status_mapping': bs_mapping, 'comment': self.comment, 'created_on': self.created_on}
def __str__(self):
return '%s' % self.id
class BookingStatusesMappingLocation(models.Model):
booking_status_mapping = models.ForeignKey(BookingStatusesMapping, blank=True, null=True, on_delete=models.CASCADE)
    # booking or vehicle
latitude = models.DecimalField(max_digits=5, decimal_places=2, null=True)
longitude = models.DecimalField(max_digits=5, decimal_places=2, null=True)
district = models.CharField(max_length=100, null=True)
city = models.CharField(max_length=100, null=True)
state = models.CharField(max_length=100, null=True)
country = models.CharField(max_length=100, null=True, default='India')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='bs_mapping_location_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s, %s' % (self.booking_status_mapping.manual_booking.booking_id,
self.booking_status_mapping.booking_status_chain.booking_status.status, self.latitude,
self.longitude)
class TaskDashboardFunctionalities(models.Model):
functionality = models.CharField(unique=True, max_length=35, null=True, choices=TD_FUNCTIONS, default='new_inquiry')
consumer = models.CharField(max_length=35, null=True, choices=CONSUMER_PLATFORMS, default='web')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='td_functionality_created_by', on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s' % self.functionality
def get_functionality(self):
func = '' if not self.functionality else self.functionality
return {'id': self.id, 'functionality': func, 'consumer': self.consumer}
class EmployeeRolesFunctionalityMapping(models.Model):
td_functionality = models.ForeignKey(TaskDashboardFunctionalities, blank=True, null=True, on_delete=models.CASCADE)
employee_role = models.ForeignKey(EmployeeRoles, blank=True, null=True, on_delete=models.CASCADE)
caption = models.CharField(max_length=35, null=True, blank=True)
access = models.CharField(max_length=20, null=True, choices=ACCESS_PERMISSIONS, default='edit')
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='employeerolesfunctionality_created_by',
on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s' % (self.td_functionality.functionality, self.employee_role.role, self.caption)
def get_employee_role_functionality(self):
role = '' if not self.employee_role else self.employee_role.get_role()
func = '' if not self.td_functionality else self.td_functionality.functionality
return {'role': role, 'functionality': func}
class SmePaymentFollowupComments(models.Model):
sme = models.ForeignKey(Sme, blank=True, null=True, related_name="sme_payment_followup", on_delete=models.CASCADE)
comment = models.CharField(max_length=150, null=True, blank=True)
due_date = models.DateField(blank=True, null=True)
created_on = models.DateTimeField(auto_now_add=True)
created_by = models.ForeignKey(User, null=True, related_name='sme_payment_followup_created_by',
on_delete=models.CASCADE,
limit_choices_to={'is_staff': True})
updated_on = models.DateTimeField(auto_now=True)
changed_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
history = HistoricalRecords()
@property
def _history_date(self):
return self.__history_date
@_history_date.setter
def _history_date(self, value):
self.__history_date = value
@property
def _history_user(self):
return self.changed_by
@_history_user.setter
def _history_user(self, value):
self.changed_by = value
def __str__(self):
return '%s, %s, %s' % (self.sme_id, self.comment, self.due_date)
|
On 23 June last year, Malcolm Burge told a friend he was leaving his London home for a few days to attend a funeral in the West Country. Four days later, he parked his car in a lay-by amid the timeless beauty of Somerset’s Cheddar Gorge and ignited a can of petrol.
As the flames engulfed him and the heat blew out the windows and tyres of his Skoda, a group of teenagers rushed to Mr Burge’s aid in the gathering gloom of a summer’s evening. Amid frantic shouts, the 66-year-old pensioner got out of his vehicle and stood by the door, unmoving as the fire burned.
Teigan Baker, one of the group of youngsters, recalled: “He was completely on fire. I could see his face and he looked scared and I said ‘you have to stop and roll’. He dropped slowly to his knees before starting to roll.”
As the flames died out, the full extent of Mr Burge’s injuries became apparent and the teenagers tried to keep him talking as he lapsed in and out of consciousness, asking him his name. Ms Baker said: “He replied ‘Malcolm’. I remember him saying ‘I have come down on holiday, I am from London’.”
Mr Burge, who had suffered 100 per cent second-degree burns to his body, was rushed to hospital by air ambulance but he could not survive his injuries. He died at 5am the following day.
The youngsters, who had frantically doused Mr Burge with bottled water, later noted that while they had tried to save him from his blazing car, there had been five or six other vehicles in the vicinity. None of the 15 occupants of those cars came to the aid of the burning man.
It was not the first time that his fellow humanity had failed to respond while Malcolm Burge – a modest, at times shy man who had dedicated himself to the care of his parents, and of the dead in his role as a cemetery gardener – suffered.
On the day of his death, a letter arrived at his rented home confirming he was now the subject of court action from his local authority seeking recovery of an £800 debt he had repeatedly said he could not pay. His bank balance stood at £50.
A coroner ruled this week that Mr Burge, who like his father before him had worked tending the graves at the City of London Cemetery, committed suicide after a 50 per cent cut in his housing benefit left him ensnared in bureaucracy and begging for help from Newham Borough Council, which was in turn engulfed by its caseload.
The inquest heard this week that shortly before his death, Mr Burge had written to Newham Borough Council saying: “I can’t remember the last time I had £800 in my possession. I have no savings or assets. I’m not trying to live. I’m trying to survive.”
In his final letter to the local authority, which received no reply, he said: “I’m now more stressed, depressed and suicidal than any of my previous letters.”
[Image gallery: What Britain thinks of benefits: perception, reality and winning votes (9 images)]
After receiving ten separate demands for payment, Mr Burge wrestled with a Kafkaesque telephone system which kept him on hold until an automated voice told him to consult a website he had no idea how to access. With legal threats gathering and seemingly caught in bureaucratic limbo, the gardener, who had at times battled depression in his life, took the decision to drive himself to a much-loved location and take his life in the most harrowing circumstances.
His nephew, Paul Higdon, told The Independent: “My uncle was the kind of man who wrote his correspondence by hand. He used carbon paper to make copies. He told Newham he was feeling stressed and suicidal. What he received in return were pro forma letters or silence.
“Clearly he wasn’t capable of using the internet or navigating phone systems. We have moved into a digital age but in so doing we have left a lot of people behind. There are human beings at the receiving end of these decisions and the council did not respond appropriately.”
The death of Mr Burge fits into a wider picture of concern about what happens when vulnerable individuals come into contact with the benefits system, whether via local authorities or government agencies.
The Department of Work and Pensions (DWP) acknowledged this week that it has reviewed 49 cases where employment benefit recipients were “sanctioned” – having their payments stopped for a period of weeks or months after failing to comply with the rules – and subsequently died.
They included David Clapson, 59, a former soldier and diabetic who was found dead in his home last July after his benefits were slashed and he did not apply for hardship payments. He had no food in his stomach and no credit on the electricity card needed to keep going the fridge that stored his insulin. His bank balance was £3.44.
Employment minister Esther McVey and her department have declined to release details of the reviews but she this week told MPs no link had been found between the deaths and the sanctioning policy. She told the work and pensions select committee: “We ensured that we followed all of our processes correctly.”
The DWP said that it has an extensive safety net in place across the benefits system and sanctioning remains an action of last resort for jobseekers backed up by hardship funds.
But an increasing number of people, ranging from campaigners to coroners, argue that Britain’s increasingly stringent welfare system has at times become inflexible to the needs of the most vulnerable.
Campaign group Black Triangle, which monitors deaths of benefits recipients, including the disabled and those with mental illness, said it was aware of 80 deaths with potential links to cuts or sanctioning, ranging from heart attacks among people previously classified as too disabled to work to cases of destitution.
John McArdle, co-founder of the group, said: “This is the tip of the iceberg. Vulnerable people are being cut loose and made to feel they are in a catastrophic situation.
“There should be procedures to ensure public bodies cannot push these individuals from pillar to post. People are dying and we should know the circumstances surrounding these deaths.”
For Mr Burge, the path that led to the ultimate despair began in January 2013 when his housing benefit was halved from £90 to £45 per week. Due to a backlog in processing reductions, Newham continued to pay Mr Burge the sum in full for six months. By the time his reduced payments, which the DWP said was not linked to welfare reform, were processed he had been overpaid by £809.79.
What followed was a series of letters in which the groundsman voiced his increasing exasperation at his unravelling finances and Newham sought resolution by at first deducting a weekly sum from his meagre income before eventually resorting to legal action.
It was a harsh indignity for a man from a proudly working class background who, according to his family, had a traditional attitude to debt. His sister, Carol Higdon, told the inquest: “He was a very quiet and proud man. We knew nothing about this until after his death.” His niece, Sharon Watts, added: “His pride kept him away from asking us. We would have helped him.”
Born in 1948, Mr Burge had spent all but four years of his life living in the grounds of the City of London cemetery. The imposing Victorian graveyard in Manor Park, east London, is one of the largest in Europe and his father – Mervin – was head groundsman, raising his family in the grey-stone lodge house.
Mr Burge, an active man who counted squash, golf and snooker among his pastimes and had a talent for producing homemade wine, followed his father into the gardening trade, working at the cemetery and elsewhere in adjoining Wanstead.
But when his mother died in 1992 and his father contracted Parkinson’s Disease, Mr Burge gave up the work he loved to become Mervin’s full-time carer, moving to a smaller property in the grounds of the cemetery.
It was to this house that his family came last year in the immediate days after the harrowing events in Cheddar Gorge. They found no suicide note but some evidence that Mr Burge had settled his intentions before he had left for the West Country. His niece, Sharon Watts, told the inquest: “His phone book was open next to the phone with my brothers’, mine and my mother’s numbers on it, and it said ‘in case of emergency please call’.”
The family underline that they cannot be certain that the issue of the housing benefit debt was the sole factor in Mr Burge’s suicide. But they add that it was clearly preying on his mind.
Mr Higdon said: “Things could have been done differently by the council. There was a combination of unfortunate factors and we don’t know how direct a contribution each made to his death.
“At the same time, when we visited his house he had clearly left out his communications with the council. His other documents had been destroyed but all his letters to Newham were laid out. It was clearly of importance to him.”
Michael Rose, the West Somerset coroner, said he would be writing to Newham to ask that it establish a system to enable those in a vulnerable position to get in touch without having to rely on “laptops, iPads or the internet”.
He said: “This is a tragic tale of a man who had lived all of his life in the city of London being caught up in the changes to the government benefit system. And while it seems clear to me now that he was a man who needed help and was in distress, unfortunately Newham Borough Council were unable to give it to him.”
The local authority said it acknowledged “delays and deficiencies” in its communications with Mr Burge and apologised “if this contributed to his death in any way”.
For the man who felt he had to travel far from home to end his life, there was at least a return to the place that had provided sanctuary throughout his life. His remains were placed in the memorial garden of the City of London Cemetery, the place he knew best.
* For confidential support call the Samaritans on 08457 90 90 90 or visit a local Samaritans branch - see www.samaritans.org for details.
Security Assistance Reform: Section 1206 Background and Issues for Congress This report provides background on the pre-Section 1206 status of security assistance authorities and the factors contributing to the enactment of Section 1206, which provides the Secretary of Defense with authority to train and equip foreign military forces and foreign maritime security forces. It then sets out the purposes of the legislation and scope of its activities, restrictions on its use, the DOD-State Department planning process, and funding. It concludes with a discussion of issues for Congress. |
Daffodil farms in the city of Jahrom in Fars province, thanks to the area’s particular climate, are harvested earlier than those in other parts of Iran.
The daffodil harvest runs from October until mid-February, and the flowers are sent to Shiraz, Tehran and other parts of Iran.
About Fars Province
Fars Province is one of the thirty-one provinces of Iran and in the south of the country. Its administrative center is Shiraz. It has an area of 122,400 km². In 2011, this province had a population of 4.6 million people, of which 67.6% were registered as urban dwellers (urban/suburbs), 32.1% villagers (small town/rural), and 0.3% nomad tribes.
The etymology of the word “Persian” (from Latin Persia, from Ancient Greek Περσίς (Persis)), found in many ancient names associated with Iran, derives from the historical importance of this region. Fars Province is the original homeland of the Persian people.
#ifndef _TREENODE_HPP_
#define _TREENODE_HPP_
#pragma once
#include <iostream>
class TreeNode {
public:
TreeNode(int x);
~TreeNode();
int getHeight();
int getNumberOfNodes();
bool contains(int x);
std::string preorderTraversal();
std::string inorderTraversal();
std::string postorderTraversal();
bool isLeaf();
bool isFull();
private:
int max(int x, int y);
int val;
TreeNode *left, *right;
};
#endif |
The three musketeers: cellulitis, phlegmon and abscess.
- Madam, one of the major debilitating complications of diabetes is diabetic foot. 15% of...
- Aug 15, 2016. Cellulitis and Abscess: ED simple cellulitis / abscess v.1... Created three care algorithms (two for the Emergency Department, and one for...
- Official Full-Text Paper (PDF): Diabetic foot: spectrum of MR imaging findings... The three musketeers: cellulitis, phlegmon and abscess. JPMA J Pak Med.
- Cellulitis and Abscess Management in the Era of Resistance to Antibiotics... Table 3. Number of abscesses and the total number of skin or soft tissue infections, by practice and audit... Available at: http://www.cdc.gov/ncidod/dhqp/pdf/ar/CA.
- Once infected, the condition is then termed a pilonidal abscess, and treatment should be... Qureshi AZ: The three musketeers: cellulitis, phlegmon and abscess.
- Aug 14, 2014. Pilonidal cysts reported in U.S. hospitals. Abscess formation is a possible result of... The diagnosis was cellulitis with a phlegmon formation, and no... Qureshi AZ: The three musketeers: cellulitis, phlegmon and abscess.
package utils;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
class ViewTest {
View view = new View();
    @Test
    public void printJudge() {
        // Smoke test: verifies printJudge handles a two-element score array
        // without throwing; the printed output itself is not asserted.
        view.printJudge(new int[]{0, 1});
    }
} |
import {NotFound} from "ts-httpexceptions";
export class RecipeNotFoundError extends NotFound {
    constructor(private id: string) {
        // Include the missing id so callers can tell which recipe failed to resolve.
        super(`Recipe not found: ${id}`);
    }
}
|
Turkish officials have blocked all access to the popular Wikipedia website within the country.
Turkey's Information and Communication Technologies Authority (BTK) said on April 29 that it carried out the ban on wikipedia.org but did not give a reason for the move.
Turkish media said the ban resulted from Wikipedia's failure to remove content that officials claim promotes terror and links Turkey with terror groups.
A formal court order on the ban is expected to follow in the coming days.
The blockage, which affected all language editions of the website in Turkey, was detected at about 8 a.m. local time after an administrative order was made by authorities, according to the Turkey Blocks monitoring group.
Turkey is well known for temporarily blocking access to popular websites, including Facebook and Twitter, and in March 2014 began blocking YouTube for several months. |
The Pittsburgh Steelers announced via Twitter that they have released quarterback Troy Smith.
Smith won the Heisman Trophy in 2006 playing for Ohio State, but was never able to carve out more than a backup role in the NFL. He appeared in 14 games over three seasons for the Baltimore Ravens from 2007-09, and was with the San Francisco 49ers in 2010.
In 2011, Smith was with the Omaha Nighthawks of the United Football League, but signed with the Steelers in January. |
Evaluation of sustainable development and travel agencies within the scope of Agenda 2030: A bibliometric analysis : A bibliometric examination of sustainable development and travel agency research from 1997 to 2021 was used to uncover intellectual frameworks, emerging trends, and future research prospects. CiteSpace was used to perform a comprehensive search of 302 core articles from Web of Science and to analyze the results. The findings showed steady growth in the volume of research, organized around several major study topics. The most-cited articles are mostly from the last 15 years. The USA holds a clear lead in publications, followed by Taiwan and Sweden. The author network shows a core-periphery structure in which the European Commission and the Bucharest University of Economic Studies rank first. The identification of structural gaps, the publication of critical papers, and the appearance of new emerging trends highlight the priorities in the sustainable development and travel agency domains, pointing to new research prospects. This research is unique in that it performs a temporal and dynamic analysis of the last 24 years, using CiteSpace to analyze co-citation and co-occurrence networks. |
/*==========================================================================
* Copyright (c) 2009, <NAME> and <NAME>. All rights reserved.
*
* Use of the Entity and Association Retrieval System (EARS)
* is subject to the terms of the software license set forth
* in the LICENSE file included with this software.
*==========================================================================
*/
/*!
* \file DoubleDocUnigramCounter.cpp
* \brief DocUnigramCounter able to deal with doubles
* \date 2009-12-04
* \version 1.0
*/
// EARS
#include "DoubleDocUnigramCounter.hpp"
/// Constructs the counter and accumulates weighted term counts over every document in the set.
ears::DoubleDocUnigramCounter::DoubleDocUnigramCounter(
const lemur::utility::WeightedIDSet &docSet,
const lemur::api::Index &homeIndex )
: ind(homeIndex),
lemur::utility::ArrayCounter<double>(homeIndex.termCountUnique()+1 )
{
docSet.startIteration();
while ( docSet.hasMore() ) {
int docID;
double wt;
docSet.nextIDInfo( docID, wt );
countDocUnigram( docID, wt );
}
}
/// Adds the term counts of a single document, scaled by the given weight.
void
ears::DoubleDocUnigramCounter::countDocUnigram( lemur::api::DOCID_T docID,
double weight )
{
lemur::api::TermInfoList *tList = ind.termInfoList( docID );
const lemur::api::TermInfo *info;
tList->startIteration();
while ( tList->hasMore() ) {
info = tList->nextEntry();
incCount( info->termID(), weight*info->count() );
}
delete tList;
}
|
//
// IMGBasicAlbum.h
// ImgurSession
//
// Created by <NAME> on 24/07/13.
// Distributed under the MIT license.
//
#import "IMGModel.h"
#import "IMGObject.h"
typedef NS_ENUM(NSInteger, IMGAlbumPrivacy){
IMGAlbumDefault = 1,
IMGAlbumPublic = 1,
IMGAlbumHidden,
IMGAlbumSecret
};
typedef NS_ENUM(NSInteger, IMGAlbumLayout){
IMGDefaultLayout = 1,
IMGBlogLayout = 1,
IMGGridLayout,
IMGHorizontalLayout,
IMGVerticalLayout
};
/**
Model object class to represent common denominator properties to gallery and user albums. https://api.imgur.com/models/album
*/
@interface IMGBasicAlbum : IMGModel <NSCopying,NSCoding,IMGObjectProtocol>
/**
Album ID
*/
@property (nonatomic, readonly, copy) NSString *albumID;
/**
Title of album
*/
@property (nonatomic, readonly, copy) NSString *title;
/**
Album description
*/
@property (nonatomic, readonly, copy) NSString *albumDescription;
/**
Album creation date
*/
@property (nonatomic, readonly) NSDate *datetime;
/**
Image Id for cover of album
*/
@property (nonatomic, readonly, copy) NSString *coverID;
/**
Cover image width in px
*/
@property (nonatomic, readonly) CGFloat coverWidth;
/**
Cover image height in px
*/
@property (nonatomic, readonly) CGFloat coverHeight;
/**
account username of album creator, not a URL but named like this anyway. nil if anonymous
*/
@property (nonatomic, readonly, copy) NSString *accountURL;
/**
Privacy of album
*/
@property (nonatomic, readonly, copy) NSString *privacy;
/**
Type of layout for album
*/
@property (nonatomic, readonly) IMGAlbumLayout layout;
/**
Number of views for album
*/
@property (nonatomic, readonly) NSInteger views;
/**
URL for album link
*/
@property (nonatomic, readonly) NSURL *url;
/**
Number of images in album
*/
@property (nonatomic, readonly) NSInteger imagesCount; // Optional: may be absent (reported as 0) in partial responses
/**
Array of images in IMGImage form
*/
@property (nonatomic, readonly, copy) NSArray *images; // Optional: can be set to nil
/**
For custom init with gallery ID and cover ID
*/
-(instancetype)initWithGalleryID:(NSString*)objectID coverID:(NSString*)coverID error:(NSError *__autoreleasing *)error;
#pragma mark - Album Layout setting
/**
Returns string for layout constant
@param layoutType layout constant
@return string for layout constant
*/
+(NSString*)strForLayout:(IMGAlbumLayout)layoutType;
/**
 Returns the layout constant for a layout string
 @param layoutStr string for layout constant
 @return layout constant
*/
+(IMGAlbumLayout)layoutForStr:(NSString*)layoutStr;
#pragma mark - Album Privacy setting
/**
Returns string for privacy constant
@param privacy privacy constant
@return string for privacy constant
*/
+(NSString*)strForPrivacy:(IMGAlbumPrivacy)privacy;
/**
 Returns the privacy constant for a privacy string
 @param privacyStr string for privacy constant
 @return privacy constant
*/
+(IMGAlbumPrivacy)privacyForStr:(NSString*)privacyStr;
@end
|
Challenges in adapting international best practices in cancer prevention, care, and research for Qatar. The World Health Organization recommends that all countries develop a cancer control program. Qatar is the first country in the Gulf Cooperation Council to develop such a plan, with its National Cancer Strategy 2011-2016. Three years into implementation, meaningful progress has been made, particularly in reducing patient waiting times, creating a multidisciplinary approach to cancer treatment, and fostering international research collaboration. Challenges include attracting sufficient numbers of trained health care workers, reaching a diverse population with messages tailored to their needs, and emphasizing cancer prevention and early detection in addition to research and treatment. Qatar's example shows that best practices developed in North America, Western Europe, and Australasia can be assimilated in a very different demographic and cultural context when such approaches are tailored to local characteristics and circumstances. |
Mobile terminals, or mobile (cellular) telephones, for mobile telecommunications systems like GSM, UMTS, D-AMPS and CDMA2000 have been used for many years now. In earlier days, mobile terminals were used almost exclusively for voice communication with other mobile terminals or stationary telephones. More recently, the use of mobile terminals has been broadened to include not just voice communication, but also various other services and applications such as www/wap browsing, video telephony, electronic messaging (e.g. SMS, MMS, email, instant messaging), digital image or video recording, FM radio, music playback, exercise analysis, electronic games, calendar/organizer/time planner, word processing, etc.
One problem with mobile terminals is inadvertent actuation of keys of the keypad. This can result in undesired phone calls, or even worse, deletion of content in the mobile terminal, such as phone book records or photographs.
In the prior art, it is known to allow the user to lock the keypad to reduce the risk of inadvertent key actuations. However, unlocking the keypad is often awkward, since keys must be pressed in a particular sequence to reduce the risk of inadvertently unlocking the keypad.
Another problem in the prior art concerns using the mobile terminal as a clock. To provide this functionality, the terminal shows the time continuously, even when the keypad is locked. As a result, power is consumed by the display even though, most of the time, the user is not actually looking at it.
Consequently, there is a need to provide a mobile communication terminal and method providing a user interface which is easier to use in conjunction with keypad locking. |
(Earle, AR) It’s usually a mom or a dad who breaks up a fight on the playground, but in this situation, police say it was a mom who took a bad situation and made it worse.
Jaelisa Payne is new to Earle, as her family just moved from Georgia... and the family quickly learned their daughter would become the victim of a bully.
Payne and her friends had just finished swimming and were hanging out on the playground when, she said, a bully from school showed up with her mother. She said the girl said nothing, just drew back her fist and started fighting.
The high school freshman's clothes were ripped from her, while teenage boys hooted for more. The girl was pinned down on the grass with her breasts exposed to the crowd.
The mother of the teen who started it all was said by witnesses to be egging on the fight, not trying to stop it.
Jaelisa's mom called police, who said the mom in the video will be arrested.
“If you're parents, you're accountable for what the children do, you can't egg the children on to do this kind of stuff”, said Captain Rodney Davis.
“The mother was pointing down on my child saying whip her, beat her”, said Jaelisa’s mother, Alisa.
“If anyone's child is with their breasts out you should feel some kind of compunction," she said.
Police credit this all to school getting out and a lot of teens just not having anything to do. |
Religion, Pragmatism, and Dissent: Theodore Parker's Experience as a Minister On October 16, 1859, John Brown led an unsuccessful raid on the Harpers Ferry Armory. He planned to seize the cache of weapons in order to arm local slaves, to march south, and to deplete Virginia of the slaves who supported its economy. While it failed to realize this objective, the raid succeeded in driving a wedge between the Union and the Confederate States. The rift that Brown helped create grew into the gaping wound of the Civil War. Four years later, Abraham Lincoln surveyed the site of the most gruesome aspect of that wound: Soldiers' Cemetery at Gettysburg, Pennsylvania. His Gettysburg Address signaled a turn in the war and a turn in the Union's favor. It is remembered as a significant step in the project that had been initiated at Harpers Ferry. |
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library containing helpers for adding post export metrics for evaluation.
These post export metrics can be included in the add_post_export_metrics
parameter of Evaluate to compute them.
"""
from typing import Any, Dict, List, Optional, Tuple
import tensorflow as tf
from tensorflow_model_analysis import types
from tensorflow_model_analysis.post_export_metrics import metric_keys
from tensorflow_model_analysis.post_export_metrics import post_export_metrics
from tensorflow_model_analysis.proto import metrics_for_slice_pb2 as metrics_pb2
from tensorflow_model_analysis.slicer import slicer_lib as slicer
# pylint: disable=protected-access
@post_export_metrics._export('fairness_indicators')
class _FairnessIndicators(post_export_metrics._ConfusionMatrixBasedMetric):
"""Metrics that can be used to evaluate the following fairness metrics.
* Demographic Parity or Equality of Outcomes.
For each slice measure the Positive* Rate, or the percentage of all
examples receiving positive scores.
* Equality of Opportunity
Equality of Opportunity attempts to match the True Positive* rate
(aka recall) of different data slices.
* Equality of Odds
In addition to looking at Equality of Opportunity, looks at equalizing the
False Positive* rates of slices as well.
The choice to focus on these metrics as a starting point is based primarily on
the paper Equality of Opportunity in Supervised Learning and the excellent
visualization created as a companion to the paper.
https://arxiv.org/abs/1610.02413
http://research.google.com/bigpicture/attacking-discrimination-in-ml/
* Note that these fairness formulations assume that a positive prediction is
associated with a positive outcome for the user--in certain contexts such as
abuse, positive predictions translate to non-opportunity. You may want to use
the provided negative rates for comparison instead.
"""
_thresholds = ... # type: List[float]
_example_weight_key = ... # type: str
_labels_key = ... # type: str
_metric_tag = None # type: str
# We could use the same keys as the ConfusionMatrix metrics, but with the way
# that post_export_metrics are currently implemented, if both
# post_export_metrics were specified we would pop the matrices/thresholds in
# the first call, and have issues with the second.
thresholds_key = metric_keys.FAIRNESS_CONFUSION_MATRIX_THESHOLDS
matrices_key = metric_keys.FAIRNESS_CONFUSION_MATRIX_MATRICES
def __init__(self,
thresholds: Optional[List[float]] = None,
example_weight_key: Optional[str] = None,
target_prediction_keys: Optional[List[str]] = None,
labels_key: Optional[str] = None,
metric_tag: Optional[str] = None,
tensor_index: Optional[int] = None) -> None:
if not thresholds:
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
# Determine the number of threshold digits to display as part of the metric
# key. We want lower numbers for readability, but allow differentiation
# between close thresholds.
self._key_digits = 2
for t in thresholds:
if len(str(t)) - 2 > self._key_digits:
self._key_digits = len(str(t)) - 2
super().__init__(
thresholds,
example_weight_key,
target_prediction_keys,
labels_key,
metric_tag,
tensor_index=tensor_index)
def get_metric_ops(
self, features_dict: types.TensorTypeMaybeDict,
predictions_dict: types.TensorTypeMaybeDict,
labels_dict: types.TensorTypeMaybeDict
) -> Dict[str, Tuple[types.TensorType, types.TensorType]]:
values, update_ops = self.confusion_matrix_metric_ops(
features_dict, predictions_dict, labels_dict)
# True positive rate is computed by confusion_matrix_metric_ops as 'recall'.
# pytype: disable=unsupported-operands
values['tnr'] = tf.math.divide_no_nan(values['tn'],
values['tn'] + values['fp'])
values['fpr'] = tf.math.divide_no_nan(values['fp'],
values['fp'] + values['tn'])
values['positive_rate'] = tf.math.divide_no_nan(
values['tp'] + values['fp'],
values['tp'] + values['fp'] + values['tn'] + values['fn'])
values['fnr'] = tf.math.divide_no_nan(values['fn'],
values['fn'] + values['tp'])
values['negative_rate'] = tf.math.divide_no_nan(
values['tn'] + values['fn'],
values['tp'] + values['fp'] + values['tn'] + values['fn'])
values['false_discovery_rate'] = tf.math.divide_no_nan(
values['fp'], values['fp'] + values['tp'])
values['false_omission_rate'] = tf.math.divide_no_nan(
values['fn'], values['fn'] + values['tn'])
# pytype: enable=unsupported-operands
update_op = tf.group(update_ops['fn'], update_ops['tn'], update_ops['fp'],
update_ops['tp'])
value_op = tf.transpose(
a=tf.stack([
values['fn'], values['tn'], values['fp'], values['tp'],
values['precision'], values['recall']
]))
output_dict = {
self._metric_key(self.matrices_key): (value_op, update_op),
self._metric_key(self.thresholds_key): (tf.identity(self._thresholds),
tf.no_op()),
}
for i, threshold in enumerate(self._thresholds):
output_dict[self._metric_key(
metric_keys.base_key(
'positive_rate@%.*f' %
(self._key_digits, threshold)))] = (values['positive_rate'][i],
update_op)
output_dict[self._metric_key(
metric_keys.base_key(
'true_positive_rate@%.*f' %
(self._key_digits, threshold)))] = (values['recall'][i],
update_op)
output_dict[self._metric_key(
metric_keys.base_key(
'false_positive_rate@%.*f' %
(self._key_digits, threshold)))] = (values['fpr'][i], update_op)
output_dict[self._metric_key(
metric_keys.base_key(
'negative_rate@%.*f' %
(self._key_digits, threshold)))] = (values['negative_rate'][i],
update_op)
output_dict[self._metric_key(
metric_keys.base_key(
'true_negative_rate@%.*f' %
(self._key_digits, threshold)))] = (values['tnr'][i], update_op)
output_dict[self._metric_key(
metric_keys.base_key(
'false_negative_rate@%.*f' %
(self._key_digits, threshold)))] = (values['fnr'][i], update_op)
output_dict[self._metric_key(
metric_keys.base_key('false_discovery_rate@%.*f' %
(self._key_digits, threshold)))] = (
values['false_discovery_rate'][i], update_op)
output_dict[self._metric_key(
metric_keys.base_key('false_omission_rate@%.*f' %
(self._key_digits, threshold)))] = (
values['false_omission_rate'][i], update_op)
return output_dict # pytype: disable=bad-return-type
def populate_stats_and_pop(
self, unused_slice_key: slicer.SliceKeyType, combine_metrics: Dict[str,
Any],
output_metrics: Dict[str, metrics_pb2.MetricValue]) -> None:
matrices = combine_metrics.pop(self._metric_key(self.matrices_key))
thresholds = combine_metrics.pop(self._metric_key(self.thresholds_key))
# We assume that thresholds are already sorted.
if len(matrices) != len(thresholds):
raise ValueError(
'matrices should have the same length as thresholds, but lengths '
'were: matrices: %d, thresholds: %d' %
(len(matrices), len(thresholds)))
for threshold, raw_matrix in zip(thresholds, matrices):
# Adds confusion matrix table as well as ratios used for fairness metrics.
if isinstance(threshold, types.ValueWithTDistribution):
threshold = threshold.unsampled_value
output_matrix = post_export_metrics._create_confusion_matrix_proto(
raw_matrix, threshold)
(output_metrics[self._metric_key(metric_keys.FAIRNESS_CONFUSION_MATRIX)]
.confusion_matrix_at_thresholds.matrices.add().CopyFrom(output_matrix))
# If the fairness_indicator is enabled, the slicing inside the tfx evaluator
# config will also be added into these metrics as a subgroup key.
# However, handling the subgroup metrics with slices is still TBD.
@post_export_metrics._export('fairness_auc')
class _FairnessAuc(post_export_metrics._PostExportMetric):
"""Metric that computes bounded AUC for predictions in [0, 1].
This metric calculates the subgroup auc, the background positive subgroup
negative auc and background negative subgroup positive auc. For more
explanation about the concepts of these auc metrics, please refer to paper
[Measuring and Mitigating Unintended Bias in Text
Classification](https://ai.google/research/pubs/pub46743)
"""
_target_prediction_keys = ... # type: List[str]
_labels_key = ... # type: str
_metric_tag = None # type: str
_tensor_index = ... # type: int
def __init__(self,
subgroup_key: str,
example_weight_key: Optional[str] = None,
num_buckets: int = post_export_metrics._DEFAULT_NUM_BUCKETS,
target_prediction_keys: Optional[List[str]] = None,
labels_key: Optional[str] = None,
metric_tag: Optional[str] = None,
tensor_index: Optional[int] = None) -> None:
"""Create a metric that computes fairness auc.
Predictions should be one of:
(a) a single float in [0, 1]
(b) a dict containing the LOGISTIC key
(c) a dict containing the PREDICTIONS key, where the prediction is
in [0, 1]
Label should be a single float that is either exactly 0 or exactly 1
(soft labels, i.e. labels between 0 and 1 are *not* supported).
Args:
subgroup_key: The key inside the feature column to indicate where this
example belongs to the subgroup or not. The expected mapping tensor of
this key should contain an integer/float value that's either 1 or 0.
example_weight_key: The key of the example weight column in the features
dict. If None, all predictions are given a weight of 1.0.
num_buckets: The number of buckets used for the curve. (num_buckets + 1)
is used as the num_thresholds in tf.metrics.auc().
target_prediction_keys: If provided, the prediction keys to look for in
order.
labels_key: If provided, a custom label key.
metric_tag: If provided, a custom metric tag. Only necessary to
disambiguate instances of the same metric on different predictions.
tensor_index: Optional index to specify class predictions to calculate
metrics on in the case of multi-class models.
"""
self._subgroup_key = subgroup_key
self._example_weight_key = example_weight_key
self._curve = 'ROC'
self._num_buckets = num_buckets
self._metric_name = metric_keys.FAIRNESS_AUC
self._subgroup_auc_metric = self._metric_key(self._metric_name +
'/subgroup_auc/' +
self._subgroup_key)
self._bpsn_auc_metric = self._metric_key(
f'{self._metric_name}/bpsn_auc/{self._subgroup_key}')
self._bnsp_auc_metric = self._metric_key(self._metric_name + '/bnsp_auc/' +
self._subgroup_key)
super().__init__(
target_prediction_keys=target_prediction_keys,
labels_key=labels_key,
metric_tag=metric_tag,
tensor_index=tensor_index)
def check_compatibility(self, features_dict: types.TensorTypeMaybeDict,
predictions_dict: types.TensorTypeMaybeDict,
labels_dict: types.TensorTypeMaybeDict) -> None:
post_export_metrics._check_feature_present(features_dict,
self._example_weight_key)
post_export_metrics._check_feature_present(features_dict,
self._subgroup_key)
self._get_labels_and_predictions(predictions_dict, labels_dict)
def get_metric_ops(
self, features_dict: types.TensorTypeMaybeDict,
predictions_dict: types.TensorTypeMaybeDict,
labels_dict: types.TensorTypeMaybeDict
) -> Dict[str, Tuple[types.TensorType, types.TensorType]]:
# Note that we have to squeeze predictions, labels, weights so they are all
# N element vectors (otherwise some of them might be N x 1 tensors, and
# multiplying a N element vector with a N x 1 tensor uses matrix
# multiplication rather than element-wise multiplication).
predictions, labels = self._get_labels_and_predictions(
predictions_dict, labels_dict)
predictions = post_export_metrics._flatten_to_one_dim(
tf.cast(predictions, tf.float64))
labels = post_export_metrics._flatten_to_one_dim(
tf.cast(labels, tf.float64))
weights = tf.ones_like(predictions)
subgroup = post_export_metrics._flatten_to_one_dim(
tf.cast(features_dict[self._subgroup_key], tf.bool))
if self._example_weight_key:
weights = post_export_metrics._flatten_to_one_dim(
tf.cast(features_dict[self._example_weight_key], tf.float64))
predictions, labels, weights = (
post_export_metrics
._create_predictions_labels_weights_for_fractional_labels(
predictions, labels, weights))
# To let subgroup tensor match the size with prediction, labels and weights
# above.
subgroup = tf.concat([subgroup, subgroup], axis=0)
labels_bool = tf.cast(labels, tf.bool)
pos_subgroup = tf.math.logical_and(labels_bool, subgroup)
neg_subgroup = tf.math.logical_and(
tf.math.logical_not(labels_bool), subgroup)
pos_background = tf.math.logical_and(labels_bool,
tf.math.logical_not(subgroup))
neg_background = tf.math.logical_and(
tf.math.logical_not(labels_bool), tf.math.logical_not(subgroup))
bnsp = tf.math.logical_or(pos_subgroup, neg_background)
bpsn = tf.math.logical_or(neg_subgroup, pos_background)
ops_dict = {}
# Add subgroup auc.
ops_dict.update(
post_export_metrics._build_auc_metrics_ops(
self._subgroup_auc_metric, labels, predictions,
tf.multiply(weights, tf.cast(subgroup, tf.float64)),
self._num_buckets + 1, self._curve))
# Add background positive subgroup negative auc.
ops_dict.update(
post_export_metrics._build_auc_metrics_ops(
self._bpsn_auc_metric, labels, predictions,
tf.multiply(weights, tf.cast(bpsn, tf.float64)),
self._num_buckets + 1, self._curve))
# Add background negative subgroup positive auc.
ops_dict.update(
post_export_metrics._build_auc_metrics_ops(
self._bnsp_auc_metric, labels, predictions,
tf.multiply(weights, tf.cast(bnsp, tf.float64)),
self._num_buckets + 1, self._curve))
return ops_dict
def populate_stats_and_pop(
self, slice_key: slicer.SliceKeyType, combine_metrics: Dict[str, Any],
output_metrics: Dict[str, metrics_pb2.MetricValue]) -> None:
for metrics_key in (self._subgroup_auc_metric, self._bpsn_auc_metric,
self._bnsp_auc_metric):
if slice_key:
combine_metrics.pop(metric_keys.lower_bound_key(metrics_key))
combine_metrics.pop(metric_keys.upper_bound_key(metrics_key))
combine_metrics.pop(metrics_key)
else:
post_export_metrics._populate_to_auc_bounded_value_and_pop(
combine_metrics, output_metrics, metrics_key)
# pylint: enable=protected-access
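

# --- Illustrative sketch (not part of this module's API) ---
# The fairness indicators assembled in _FairnessIndicators.get_metric_ops
# reduce to simple ratios over confusion-matrix counts. A minimal pure-Python
# sketch, assuming scalar counts tp, fp, tn, fn at a single threshold:
def _example_fairness_rates(tp, fp, tn, fn):
  """Mirrors the ratios computed above, with divide_no_nan semantics."""

  def safe_div(num, den):
    # tf.math.divide_no_nan returns 0 when the denominator is 0.
    return num / den if den else 0.0

  total = tp + fp + tn + fn
  return {
      'positive_rate': safe_div(tp + fp, total),     # demographic parity
      'true_positive_rate': safe_div(tp, tp + fn),   # equality of opportunity
      'false_positive_rate': safe_div(fp, fp + tn),  # equality of odds, with TPR
      'negative_rate': safe_div(tn + fn, total),
      'false_discovery_rate': safe_div(fp, fp + tp),
      'false_omission_rate': safe_div(fn, fn + tn),
  }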
|
The Miami Dolphins need to beef up at linebacker, so which prospects could they target after the first round in the 2017 NFL Draft?
The Miami Dolphins shocked a lot of the football world last season, reeling off six straight victories in the middle of the season, and nine out of 11 total. That impressive streak was good enough to get the Dolphins to their first playoff appearance since 2008. By all rights, it was a great start to the Adam Gase era in Miami.
But that success was had largely in spite of the defense. Miami finished in the bottom three against the rush last season, and were middle of the pack against the pass. Their success against the pass was largely due to the fact that teams could run at will on Miami, so why pass more than you have to? If the Dolphins are going to build on the successes of last season, they are going to need to improve their front seven in a big way.
Lucky for them, the 2017 NFL Draft is the perfect opportunity to improve a defense. This draft class is one of the deepest I have ever seen at every level of the defense. Though they put in some work during the offseason at the linebacker position, it’s likely the Dolphins aren’t done addressing the position. If the team decides to beef up the defensive line, offensive line, or another position in the first round, who are some linebackers they could target after round 1? Let’s take a look at the top 5 linebackers available after Day 1. |
# -*- coding: utf-8 -*-
from collections import namedtuple
import datetime
import unittest
from .context import convert_rows
class ConvertTestSuite(unittest.TestCase):
payload = {
"version": "0.6",
"reqId": "0",
"status": "ok",
"sig": "1788543417",
"table": {
"cols": [
{
"id": "A",
"label": "datetime",
"type": "datetime",
"pattern": "M/d/yyyy H:mm:ss",
},
{
"id": "B",
"label": "number",
"type": "number",
"pattern": "General",
},
{
"id": "C",
"label": "boolean",
"type": "boolean",
},
{
"id": "D",
"label": "date",
"type": "date",
"pattern": "M/d/yyyy",
},
{
"id": "E",
"label": "timeofday",
"type": "timeofday",
"pattern": "h:mm:ss am/pm",
},
{
"id": "F",
"label": "string",
"type": "string",
},
],
"rows": [
{
"c": [
{"v": "Date(2018,8,1,0,0,0)", "f": "9/1/2018 0:00:00"},
{"v": 1.0, "f": "1"},
{"v": True, "f": "TRUE"},
{"v": "Date(2018,0,1)", "f": "1/1/2018"},
{"v": [17, 0, 0, 0], "f": "5:00:00 PM"},
{"v": "test"},
],
},
{
"c": [
None,
{"v": 1.0, "f": "1"},
{"v": True, "f": "TRUE"},
None,
None,
{"v": "test"},
],
},
],
},
}
def test_convert(self):
cols = self.payload['table']['cols']
rows = self.payload['table']['rows']
result = convert_rows(cols, rows)
Row = namedtuple(
'Row', 'datetime number boolean date timeofday string')
expected = [
Row(
datetime=datetime.datetime(2018, 9, 1, 0, 0),
number=1.0,
boolean=True,
date=datetime.date(2018, 1, 1),
timeofday=datetime.time(17, 0),
string='test',
),
Row(
datetime=None,
number=1.0,
boolean=True,
date=None,
timeofday=None,
string='test',
),
]
self.assertEqual(result, expected)
|
#include "process_log.hpp"
#include "../libs/libexception.hpp"
#include <iostream>
#include <regex>
namespace ctguard::research {
static std::tuple<bool, std::vector<std::pair<std::string, std::string>>, std::vector<std::pair<std::string, std::string>>> check_rule(
const event & ev, const rule & rl, std::map<rule_id_t, struct rule_state> & rules_state, bool verbose)
{
bool match_something{ false };
bool is_active{ false };
std::vector<std::pair<std::string, std::string>> modified_fields;
std::vector<std::pair<std::string, std::string>> modified_traits;
if (rl.unless_rule().id != 0) {
std::lock_guard<std::mutex> lg{ rules_state[rl.id()].mutex };
bool is_child{ false };
for (const auto & i : rl.parent_ids()) {
// cppcheck-suppress useStlAlgorithm
if (ev.rule_id() == i) {
is_child = true;
break;
}
}
if (is_child) {
rules_state[rl.id()].unless_triggered = std::time(nullptr);
rules_state[rl.id()].unless_timeout = rl.unless_rule().timeout;
rules_state[rl.id()].unless_event = ev.update(rl);
} else if (ev.rule_id() == rl.unless_rule().id) {
rules_state[rl.id()].unless_triggered = 0;
} else {
std::ostringstream oss;
for (const auto & i : rl.parent_ids()) {
oss << i << " ";
}
throw libs::lib_exception{ "Unknown child case: " + std::to_string(ev.rule_id()) + "(event) " + oss.str() + "(parent) " +
std::to_string(rl.unless_rule().id) + "(unless)" };
}
return { false, modified_fields, modified_traits };
}
// activation_group
if (!rl.activation_group().group_name.empty()) {
bool found_activation_group{ false };
for (const std::string & ev_group : ev.groups()) {
// cppcheck-suppress useStlAlgorithm
if (ev_group == rl.activation_group().group_name) {
found_activation_group = true;
match_something = true;
break;
}
}
const auto & iter = rules_state.find(rl.id());
if (iter == rules_state.end()) {
if (verbose) {
std::cout << "init rstate|";
}
if (found_activation_group) {
rules_state[rl.id()].mevents.emplace(std::time(nullptr), ev);
}
} else {
std::lock_guard<std::mutex> lg{ iter->second.mutex };
auto & saved_events{ iter->second.mevents };
if (found_activation_group) {
saved_events.emplace(std::time(nullptr), ev);
}
// delete too old entries
{
const auto current_time = std::time(nullptr);
while (!saved_events.empty()) {
const auto & i = saved_events.begin();
if (i->first + rl.activation_group().time < current_time) {
saved_events.erase(i);
} else {
// break, because elements are sorted by time
break;
}
}
}
if (saved_events.size() >= rl.activation_group().rate) {
if (rl.same_field().empty()) {
is_active = true;
std::ostringstream otriggers;
for (const auto & evs : saved_events) {
otriggers << evs.second.logstr() << '\n';
}
modified_traits.emplace_back("trigger_logs", otriggers.str());
if (verbose) {
std::cout << "active(" << saved_events.size() << "/" << rl.activation_group().rate << ")|";
}
} else {
// count same fields
const auto & actual_field = ev.fields().find(rl.same_field());
if (actual_field != ev.fields().end()) {
rule_activation_rate_t same_rate{ 0 };
std::ostringstream otriggers;
for (const auto & i : saved_events) {
const auto & stored_field = i.second.fields().find(rl.same_field());
if (stored_field != i.second.fields().end() && actual_field->second == stored_field->second) {
same_rate++;
otriggers << i.second.logstr() << '\n';
}
}
if (same_rate >= rl.activation_group().rate) {
is_active = true;
modified_traits.emplace_back("trigger_same_logs", otriggers.str());
if (verbose) {
std::cout << "active_s(" << same_rate << "/" << rl.activation_group().rate << ")|";
}
} else if (verbose) {
std::cout << "inactive_s(" << same_rate << "/" << rl.activation_group().rate << ")|";
}
} else if (verbose) {
std::cout << "inactive_s(no field)|";
}
}
} else if (verbose) {
std::cout << "inactive(" << iter->second.mevents.size() << "/" << rl.activation_group().rate << ")|";
}
}
} else {
is_active = true;
}
if (!rl.trigger_group().empty()) {
match_something = true;
if (rl.trigger_group() == "!ALWAYS") {
if (verbose) {
std::cout << "group always match|";
}
} else {
bool found_group = false;
for (const auto & iter : ev.groups()) {
// cppcheck-suppress useStlAlgorithm
if (iter == rl.trigger_group()) {
found_group = true;
break;
}
}
if (!found_group) {
if (verbose) {
std::cout << "not matching (group)\n";
}
return { false, modified_fields, modified_traits };
}
if (verbose) {
std::cout << "group match|";
}
}
}
if (!rl.trigger_fields().empty()) {
match_something = true;
for (const auto & iter : rl.trigger_fields()) {
if (verbose) {
std::cout << "field '" + iter.first + "' matching...|";
}
bool found = false;
for (const auto & field : ev.fields()) {
if (field.first == iter.first) {
found = true;
switch (iter.second.first) {
case rule_match::exact:
if (field.second != iter.second.second) {
if (verbose) {
std::cout << "field '" << iter.first << "' exact mismatch\n";
}
return { false, modified_fields, modified_traits };
}
break;
case rule_match::empty:
if (!field.second.empty()) {
if (verbose) {
std::cout << "field '" << iter.first << "' not empty\n";
}
return { false, modified_fields, modified_traits };
}
break;
case rule_match::regex:
std::smatch match;
const std::regex regex{ iter.second.second };
if (!std::regex_search(field.second, match, regex)) {
if (verbose) {
std::cout << "field '" << iter.first << "' regex mismatch\n";
}
return { false, modified_fields, modified_traits };
}
break;
}
}
}
if (!found && iter.second.first != rule_match::empty) {
if (verbose) {
std::cout << "field '" << iter.first << "' not found\n";
}
return { false, modified_fields, modified_traits };
}
}
}
if (!rl.trigger_traits().empty()) {
match_something = true;
for (const auto & iter : rl.trigger_traits()) {
if (verbose) {
std::cout << "trait '" + iter.first + "' matching...|";
}
bool found = false;
for (const auto & field : ev.traits()) {
if (field.first == iter.first) {
found = true;
switch (iter.second.first) {
case rule_match::exact:
if (field.second != iter.second.second) {
if (verbose) {
std::cout << "trait '" << iter.first << "' exact mismatch\n";
}
return { false, modified_fields, modified_traits };
}
break;
case rule_match::empty:
if (!field.second.empty()) {
if (verbose) {
std::cout << "field '" << iter.first << "' not empty\n";
}
return { false, modified_fields, modified_traits };
}
break;
case rule_match::regex:
std::smatch match;
const std::regex regex{ iter.second.second };
if (!std::regex_search(field.second, match, regex)) {
if (verbose) {
std::cout << "trait '" << iter.first << "' regex mismatch\n";
}
return { false, modified_fields, modified_traits };
}
break;
}
}
}
if (!found && iter.second.first != rule_match::empty) {
if (verbose) {
std::cout << "trait '" << iter.first << "' not found\n";
}
return { false, modified_fields, modified_traits };
}
}
}
if (rl.reg().has_value()) {
match_something = true;
std::smatch match;
const std::string & to_match = ev.fields().find("log") != ev.fields().end() ? ev.fields().find("log")->second : ev.logstr();
if (!std::regex_search(to_match, match, *rl.reg())) {
if (verbose) {
std::cout << "no regex match\n";
}
return { false, modified_fields, modified_traits };
}
if (verbose) {
std::cout << "regex match|";
}
for (size_t i = 1; i < match.size(); ++i) {
if (match[i] != std::string("")) {
modified_fields.emplace_back(rl.regex_fields()[i - 1], match[i]);
}
}
}
if (!rl.parent_ids().empty()) {
match_something = true;
}
if (verbose) {
std::cout << "full match: " << std::boolalpha << (match_something && is_active) << "(" << match_something << " && " << is_active << ")\n";
}
return { match_something && is_active, modified_fields, modified_traits };
}
static void update_event(event & ev, const rule & rl, std::vector<std::pair<std::string, std::string>> & modified_fields,
std::vector<std::pair<std::string, std::string>> & modified_traits)
{
ev.description(rl.description());
ev.always_alert(rl.always_alert());
ev.add_groups(rl.groups());
ev.priority(rl.priority());
ev.rule_id(rl.id());
ev.interventions(rl.interventions());
for (auto & iter : modified_fields) {
ev.fields()[iter.first] = std::move(iter.second);
}
for (auto & iter : modified_traits) {
ev.traits()[iter.first] = std::move(iter.second);
}
}
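// Pre-C++20 stand-in for std::erase_if: removes every element that matches
// the predicate from a node-based container such as the std::map used for
// rule state events.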
template<typename Cont, typename Prop>
static inline void erase_if(Cont & c, Prop p)
{
for (auto it = c.begin(); it != c.end();) {
if (p(*it)) {
it = c.erase(it);
} else {
++it;
}
}
}
static void update_state(std::map<rule_id_t, struct rule_state> & rules_state, const rule & rl, const event & ev)
{
// reset activation rule
if (!rl.activation_group().group_name.empty() && rl.activation_group().reset) {
const auto & iter = rules_state.find(rl.id());
if (iter != rules_state.end()) {
std::lock_guard<std::mutex> lg{ iter->second.mutex };
if (rl.same_field().empty()) {
iter->second.mevents.clear();
} else {
const auto & actual_field = ev.fields().find(rl.same_field());
if (actual_field != ev.fields().end()) {
erase_if(iter->second.mevents, [&rl, &actual_field](const auto & elem) {
const auto & stored_field = elem.second.fields().find(rl.same_field());
return stored_field != elem.second.fields().end() && actual_field->second == stored_field->second;
});
}
}
}
}
}
static void check_top_rules(event & e, const std::vector<rule> & rules, unsigned depth, std::map<rule_id_t, struct rule_state> & rules_state, bool verbose)
{
std::vector<std::tuple<const rule *, std::vector<std::pair<std::string, std::string>>, std::vector<std::pair<std::string, std::string>>>>
top_matching_rules;
// TODO(cgzones): check for too big depth
if (verbose) {
std::cout << " Processing rules (#" << rules.size() << ") level " << depth << "\n";
}
for (auto const & r : rules) {
if (verbose) {
std::cout << " Checking level " << depth << " rule " << r.id() << " ...";
}
auto result = check_rule(e, r, rules_state, verbose);
if (std::get<0>(result)) {
top_matching_rules.emplace_back(&r, std::move(std::get<1>(result)), std::move(std::get<2>(result)));
}
}
if (verbose) {
std::cout << " Matching level " << depth << " rule: " << top_matching_rules.size() << "\n";
}
if (!top_matching_rules.empty()) {
priority_t max_priority{ 0 };
rule_id_t min_id{ static_cast<rule_id_t>(-1) };
std::vector<std::pair<std::string, std::string>> modified_fields;
std::vector<std::pair<std::string, std::string>> modified_traits;
const rule * fit{ nullptr };
for (auto & ex : top_matching_rules) {
const rule * rl = std::get<0>(ex);
if (rl->priority() > max_priority || (rl->priority() == max_priority && rl->id() < min_id)) {
fit = rl;
min_id = fit->id();
max_priority = fit->priority();
modified_fields = std::move(std::get<1>(ex));
modified_traits = std::move(std::get<2>(ex));
}
}
if (fit == nullptr) {
throw libs::lib_exception{ "No fitting rule found !!THIS SHOULD NEVER HAPPEN!!" };
}
if (verbose) {
std::cout << " Level " << depth << " rule fit: " << fit->id() << "\n";
}
update_event(e, *fit, modified_fields, modified_traits);
update_state(rules_state, *fit, e);
if (verbose) {
std::cout << " Checking child rules (#" << fit->children().size() << ") ...\n";
}
check_top_rules(e, fit->children(), depth + 1, rules_state, verbose);
}
}
static void format_log(event & e, const std::vector<format> & formats, bool verbose)
{
for (const auto & f : formats) {
std::smatch match;
if (!std::regex_match(e.logstr(), match, f.reg())) {
if (verbose) {
std::cout << f.name() << " not matching|";
}
continue;
}
if (verbose) {
std::cout << f.name() << " matching\n";
}
for (size_t i = 1; i < match.size(); ++i) {
e.fields()[f.fields()[i - 1]] = match[i];
}
e.traits()["format"] = f.name();
return;
}
if (verbose) {
std::cout << "no format match\n";
}
e.traits()["format"] = "unknown";
}
event process_log(const libs::source_event & se, bool verbose, const rule_cfg & rules, std::map<rule_id_t, struct rule_state> & rules_state)
{
if (verbose) {
std::cout << "\n Processing '" << se.message << "' ...\n";
}
event e{ se };
if (verbose) {
std::cout << " Format (#" << rules.formats.size() << ") ... ";
}
format_log(e, rules.formats, verbose);
if (verbose) {
std::cout << " Traits:\n";
for (const auto & elem : e.traits()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Extracted fields:\n";
for (const auto & elem : e.fields()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Groups: ##" << e.groups_2_str() << "##\n";
std::cout << " Priority: " << e.priority() << "\n";
}
check_top_rules(e, rules.std_rules, 1, rules_state, verbose);
if (verbose) {
std::cout << " Traits:\n";
for (const auto & elem : e.traits()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Extracted fields:\n";
for (const auto & elem : e.fields()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Groups: ##" << e.groups_2_str() << "##\n";
std::cout << " Priority: " << e.priority() << "\n";
std::cout << " Checking group rules...\n";
}
check_top_rules(e, rules.group_rules, 1, rules_state, verbose);
if (verbose) {
std::cout << " Traits:\n";
for (const auto & elem : e.traits()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Extracted fields:\n";
for (const auto & elem : e.fields()) {
std::cout << " " << elem.first << " -> ##" << elem.second << "##\n";
}
std::cout << " Groups: ##" << e.groups_2_str() << "##\n";
std::cout << " Priority: " << e.priority() << "\n";
std::cout << " End extracting.\n";
}
return e;
}
} /* namespace ctguard::research */
|
import ast  # used by set_value() below for the ast.NameConstant feature check

# Note: NumberParameterType, StringParameterType, BooleanParameterType and
# InvalidParameterError are assumed to be defined elsewhere in this module.
class InputParameter:
"""
Defines a parameter that can be supplied when the model is executed.
Name, varType, and default_value are always available, because they are computed
from a variable assignment line of code:
The others are only available if the script has used define_parameter() to
provide additional metadata
"""
def __init__(self):
#: the default value for the variable.
self.default_value = None
#: the name of the parameter.
self.name = None
#: type of the variable: BooleanParameter, StringParameter, NumericParameter
self.varType = None
#: help text describing the variable. Only available if the script used describe_parameter()
self.desc = None
#: valid values for the variable. Only available if the script used describe_parameter()
self.valid_values = []
self.ast_node = None
@staticmethod
def create(ast_node, var_name, var_type, default_value, valid_values=None, desc=None):
if valid_values is None:
valid_values = []
p = InputParameter()
p.ast_node = ast_node
p.default_value = default_value
p.name = var_name
p.desc = desc
p.varType = var_type
p.valid_values = valid_values
return p
def set_value(self, new_value):
if len(self.valid_values) > 0 and new_value not in self.valid_values:
raise InvalidParameterError(
"Cannot set value '{0:s}' for parameter '{1:s}': not a valid value. Valid values are {2:s} "
.format(str(new_value), self.name, str(self.valid_values)))
if self.varType == NumberParameterType:
try:
# Sometimes a value must stay as an int for the script to work properly
if isinstance(new_value, int):
f = int(new_value)
else:
f = float(new_value)
self.ast_node.n = f
except ValueError:
raise InvalidParameterError(
"Cannot set value '{0:s}' for parameter '{1:s}': parameter must be numeric."
.format(str(new_value), self.name))
elif self.varType == StringParameterType:
self.ast_node.s = str(new_value)
elif self.varType == BooleanParameterType:
if new_value:
if hasattr(ast, 'NameConstant'):
self.ast_node.value = True
else:
self.ast_node.id = 'True'
else:
if hasattr(ast, 'NameConstant'):
self.ast_node.value = False
else:
self.ast_node.id = 'False'
else:
raise ValueError("Unknown Type of var: ", str(self.varType))
    def __str__(self):
        return "InputParameter: {name=%s, type=%s, defaultValue=%s}" % (
            self.name, str(self.varType), str(self.default_value))
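# --- Usage sketch (hypothetical, for illustration only) ---
# Assumes NumberParameterType is the numeric type constant referenced above;
# the ast.Num node stands in for the node a real script parser would supply.
#
#   node = ast.Num(n=5.0)
#   p = InputParameter.create(node, "thickness", NumberParameterType, 5.0,
#                             desc="wall thickness in mm")
#   p.set_value(7.5)   # rewrites the AST literal in place
#   assert node.n == 7.5 |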
import * as functions from "firebase-functions";
import * as dialogflowSdk from "dialogflow";
import * as credentials from "./cred/dialogflow.json";
import {
dialogflow,
DialogflowConversation,
SignIn,
Carousel,
Suggestions,
List
} from "actions-on-google";
import { Boards } from "./boards";
import { Board, BoardData } from "./board";
import { Card } from "./card";
import { Column } from "./column.js";
const app: any = dialogflow({ clientId: "4s0aroxhzpnfj3o44wvk" });
const entitiesClient = new dialogflowSdk.SessionEntityTypesClient({
credentials: credentials
});
const projectId = "globoards-80562";
type gloItems = "cards" | "card" | "columns" | "cards";
const updateEntity = async (name: string, data: any[], convId: string) => {
let entities = data.map(element => {
return {
value: String(element.id),
synonyms: [String(element.name)]
};
});
let session: string = `projects/${projectId}/agent/sessions/${convId}`;
const sessionEntityTypeRequest = {
parent: session,
sessionEntityType: {
name: session + `/entityTypes/${name}`,
entityOverrideMode: "ENTITY_OVERRIDE_MODE_OVERRIDE",
entities: entities
}
};
return entitiesClient
.createSessionEntityType(sessionEntityTypeRequest)
.then((responses: any) => {
return responses;
})
.catch(error => {
console.error(error);
});
};
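// Usage sketch (illustrative; assumes a live Dialogflow conversation id and
// board-like objects exposing `id` and `name`):
//   await updateEntity("boards", [{ id: "b1", name: "Groceries" }], conv.id);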
app.intent(
"cards-create",
async (
conv: DialogflowConversation,
{ cardName, cardDescription }: { cardName: string; cardDescription: string }
) => {
let board: Board;
let data: BoardData = conv.contexts.get("board").parameters
.data as BoardData;
let column = conv.contexts.get("column").parameters.data as Column;
board = Board.fromData(data);
if (!board.findColumnById(column.id)) {
return ask(
conv,
`I did not find the ${column.name} in board ${
board.name
}. Please try again. `
);
}
let card: Card = await board.createCard(
cardName,
column.id,
cardDescription
);
conv.ask(`I have created ${card.name}, what can I do for you next?`);
conv.contexts.set("card", 3, { data: card });
return conv.screen
? ask(conv, card.getVisualCard())
: ask(conv, card.getVoiceCard());
}
);
app.intent(
"cards-get-by-updated",
async (
conv: DialogflowConversation,
{
boardId,
filter,
columns
}: { boardId: string; filter: string; columns: string }
) => {
if (filter === "Column" || columns) {
return conv.followup("event_columns", {});
}
let cards: Card[] = [];
if (boardId) {
let board: Board = await Board.getBoard(conv["token"], boardId);
await board.loadCards();
cards = board.cards;
} else {
let boards: Boards = new Boards(conv["token"]);
await boards.getBoards();
await boards.loadAllCards();
for (let board of boards.boards) {
if (board && board.cards) {
cards = [...cards, ...board.cards];
}
}
}
if (cards) {
cards = cards.filter(
e => e.updatedAt.getTime() / 1000 + 86400 > new Date().getTime() / 1000
);
} else {
return ask(
conv,
`There are no new cards for today, what can I do for you next?`
);
}
//TODO: Add filter by assignee
let element;
let contextData = {};
let items = cards.reduce((obj, item) => {
element = item;
obj[item.id] = {
title: `${element["name"]}`,
description: `Updated at ${element["updatedAt"].toDateString()}, ${element.description}`
};
contextData[item.id] = item;
return obj;
}, {});
let cardsFilteredObj: cardsFiltered = {
cards: cards,
items: items,
type: 'filtered-cards',
contextData: contextData
}
    return replyWithList(conv, cardsFilteredObj, 'Cards', 'There are no new cards for today, what can I do for you next?', 'Here are the cards updated in the last 24 hours. What can I do for you next?', ['Get boards', 'New card', 'New column']);
}
);
interface cardsFiltered {
cards: Card[];
items: any;
type: string;
contextData: any;
}
app.intent('board-set-favourite', async (conv: DialogflowConversation, { boardId }: { boardId: string }) => {
let board: Board;
if(!boardId && !conv.contexts.get("board")) {
return ask(conv, `Which board would you like to set as favourite?`); //TODO: Reply with list
} else if(!boardId && conv.contexts.get("board")) {
board = conv.contexts.get("board").parameters.data as Board;
} else {
board = await Board.getBoard(conv['token'], boardId);
}
conv.ask(new Suggestions(['Get boards', 'Get favourite']));
  return ask(conv, `Board ${board.name} set as favourite! What can I do for you next?`);
});
app.intent(
"cards-select",
async (conv: DialogflowConversation, params, option) => {
let board: Board;
let card: Card;
if(conv.contexts.get("board")) {
let data: BoardData = conv.contexts.get("board").parameters
.data as BoardData;
board = Board.fromData(data);
await board.loadCards();
card = board.findCardById(option);
}
if(!card && conv.contexts.get("cards")) {
let items = conv.contexts.get("cards").parameters.data;
card = items[option]; //TODO: Create context
}
conv.ask(`Here is the ${card.name}, what can I do for you next?`);
conv.contexts.set("card", 3, { data: card });
conv.ask(new Suggestions(['Archive']));
return conv.screen
? ask(conv, card.getVisualCard())
: ask(conv, card.getVoiceCard());
}
);
app.intent('columns-create', async (conv: DialogflowConversation, { columnName, boardId }: { columnName: string, boardId: string }) => {
let board: Board;
if(!boardId && conv.contexts.get('board')) {
let data = conv.contexts.get("board").parameters.data as BoardData;
board = Board.fromData(data);
// tslint:disable-next-line:no-parameter-reassignment
boardId = board.id;
} else if(!boardId && !conv.contexts.get('board')) {
return ask(conv, `In which board should I create the column?`);
} else {
board = await Board.getBoard(conv['token'], boardId);
}
let error = await board.createColumn(boardId, columnName);
if(error) {
return ask(conv, `Something went wrong, please try again.`);
}
return ask(conv, `Column ${columnName} in the board ${board.name} created. What can I do for you next?`);
});
app.intent('sign-in-no', (conv: DialogflowConversation) => {
return conv.close(`Okay, please come back later when you can sign in to continue, bye!`);
});
app.intent('goodbye', (conv: DialogflowConversation) => {
return conv.close(`Okay, bye for now!`);
});
app.intent(
"columns-select",
async (conv: DialogflowConversation, params, option) => {
let board: Board;
let data: BoardData = conv.contexts.get("board").parameters
.data as BoardData;
board = Board.fromData(data);
let column = board.findColumnById(option);
conv.contexts.set("column", 3, { data: column });
conv.contexts.set("board", 3, { data: board });
await board.loadCards();
let boardColumn: boardGetColumn = {
board: board,
column: column,
type: `board-column`
};
return replyWithList(
conv,
boardColumn,
`Cards in ${column.name}`,
`There are no cards in ${column.name}, what can I do for you next?`,
`Column ${column.name}, here are the cards! What can I do for you next?`,
["New card", "Get boards", "New column"]
);
}
);
interface boardGetColumn {
board: Board;
column: Column;
type: string;
}
app.intent(
"cards-get",
async (conv: DialogflowConversation, { boardId }: { boardId: string }) => {
let board: Board;
if (conv.contexts.get("boards") && boardId) {
let boards = new Boards(
conv["token"],
conv.contexts.get("boards").parameters.data
);
board = boards.getBoard(boardId);
} else if (conv.contexts.get("board")) {
let data: BoardData = conv.contexts.get("board").parameters
.data as BoardData;
board = Board.fromData(data);
} else {
let boards = new Boards(conv["token"]);
await boards.getBoards();
board = boards.boards[0];
}
await board.loadCards();
return replyWithList(conv, board, 'Title', 'There are no cards, what can I do for you next?', `Here are the cards of ${board.name}!`, ['Get boards', 'New card']);
}
);
function replyWithList(
conv: DialogflowConversation,
item: any,
title: string,
no_boards: string,
message: string,
suggestions: string[] = []
) {
if (item.length === 0) {
return ask(conv, no_boards);
}
let listData, items;
if (item.type === "boards") {
// tslint:disable-next-line:no-parameter-reassignment
item = item as Boards;
listData = item.getContextData();
items = item.getListFormat().items;
conv.contexts.set('boards', 3, { data: listData });
} else if (item.type === "board-column") {
// tslint:disable-next-line:no-parameter-reassignment
let board = item.board as Board;
let column = item.column as Column;
conv.contexts.set("cards", 3, { data: listData});
items = board.getCardsAsListByColumn(column.id).items;
listData = board.getContextData();
if(item.length === 1) {
conv.ask(`Here is the card, what can I do for you next?`);
return ask(conv, conv.screen ? item.cards[0].getVisualCard() : items.cards[0].getVoiceCard());
}
} else if(item.type === 'filtered-cards') {
// tslint:disable-next-line:no-parameter-reassignment
item = item as cardsFiltered;
items = item.items;
listData = item.contextData;
conv.contexts.set('cards', 3, { data: listData });
if(item.length === 1) {
conv.ask(`Here is the card, what can I do for you next?`);
return ask(conv, conv.screen ? item.cards[0].getVisualCard() : items.cards[0].getVoiceCard());
}
} else if(item.type === 'board') {
// tslint:disable-next-line:no-parameter-reassignment
item = item as Board;
let board = item;
items = board.getCardsAsList().items;
listData = board.getContextData();
conv.contexts.set("board", 3, { data: board });
conv.contexts.set("cards", 3, { data: listData });
if(item.length === 1) {
conv.ask(`Here is the card, what can I do for you next?`);
return ask(conv, conv.screen ? item.cards[0].getVisualCard() : items.cards[0].getVoiceCard());
}
}
if (conv.screen) {
if (suggestions.length !== 0) {
conv.ask(new Suggestions(suggestions));
}
conv.ask(message);
console.log(JSON.stringify(items));
return ask(
conv,
new Carousel({
items: items
})
);
} else {
return replyWith3Items(conv, item.type, [], listData);
}
/*
return conv["hasScreen"]
? replyWithGroupCard(conv, boards[0])
: replyWithGroupTextCard(conv, boards[0]);*/
}
let updateEntityBoards = async (
conv: DialogflowConversation,
no_boards: string
) => {
let boards = new Boards(conv.user.access.token);
await boards.getBoards();
if (!boards.length || boards.length === 0) {
return boards;
}
let entityResult = await updateEntity("boards", boards, conv.id);
if (!entityResult) {
console.error(JSON.stringify(entityResult));
}
return boards;
};
function ask(conv: DialogflowConversation, message: any, prompts?: string[]) {
if (prompts) {
conv.contexts.set("fallbacks", 3, { fallbacks: [...prompts], number: 0 });
conv.noInputs = [
...prompts,
`I am sorry, I am not sure how to help. Please come back later, bye!`
];
} else {
conv.contexts.set("fallbacks", 3, { fallbacks: [], number: 0 });
conv.noInputs = [];
}
let responses = [...conv.responses];
responses.push(message);
conv.contexts.set("response", 3, {
data: responses,
intent: conv.intent,
params: conv.parameters
});
return conv.ask(message);
}
app.intent("Default Welcome Intent", async (conv: DialogflowConversation) => {
if (conv.user.access.token) {
let boards: Boards = await updateEntityBoards(
conv,
"Welcome! You have no boards, create one to continue. How can I help you next?"
);
return replyWithList(
conv,
boards,
"Boards",
"Welcome! You have no boards, create one to continue. How can I help you next?",
"Welcome! Here are you boards, select one to continue.",
["Get boards", "Get cards", "Create board", "Create column"]
);
} else {
conv.ask(new Suggestions(['Yes', 'No']));
return ask(
conv,
`Welcome! You need to sign in to manage your boards, can we do that now?`
);
}
});
app.intent("sign-in-yes", (conv: DialogflowConversation) => {
return conv.ask(new SignIn(`To get access to your Glo Boards`));
});
app.intent(
"boards-select",
async (
conv: DialogflowConversation,
{ boardId }: { boardId: string },
option
) => {
let boards: Boards;
if (!boardId && !option) {
boards = await updateEntityBoards(
conv,
"Welcome! You have no boards, create one to continue. How can I help you next?"
);
return replyWithList(
conv,
boards,
"Boards",
"You have no boards, create one to continue. How can I help you next?",
"Here are your boards, please pick one to continue.",
["Get boards"]
);
}
let id = boardId || option;
boards = conv.contexts.get("boards")
? new Boards(conv["token"], conv.contexts.get("boards").parameters.data)
: new Boards(conv["token"]);
if (boards.isUndefined()) {
await boards.getBoards();
}
let board: Board = boards.getBoard(id);
if (!board) {
return ask(conv, `Sorry, something went wrong, please try again`);
}
conv.contexts.set("board", 3, { data: board }); //TODO: No screen
conv.contexts.set("columns", 3, {});
await updateEntity("columns", board.columns, conv.id);
let { items } = board.getColumnsAsList();
conv.ask(new Suggestions(['Get all cards', 'New column']));
if(board.columns.length > 1) {
conv.ask(`Here are the columns, pick one to get the cards.`);
return ask(conv, new List({ title: "Columns", items: items }));
} else if(board.columns.length === 1) {
let column = board.columns[0];
let boardColumn: boardGetColumn = {
board: board,
column: column,
type: `board-column`
};
return replyWithList(
conv,
boardColumn,
`The board has only one column ${column.name}, here are the cards. How can I help you next?`,
`There is only one column ${column.name} in the board and it has no cards, what can I do for you next?`,
`The board has only one column ${column.name}, here are the cards. How can I help you next?`,
["New card", "Get boards", "New column"]
);
} else {
      return ask(conv, `There are no columns in ${board.name}, try to create one! What can I do for you next?`);
    }
}
);
app.intent(
"board-create",
async (
conv: DialogflowConversation,
{ boardName }: { boardName: string }
) => {
try {
let board = new Board(conv["token"], null, null, null, null, null);
board = await board.createBoard(boardName);
//TODO: Add board context
return ask(
conv,
`I have created a board with a name ${boardName}, what can I do for you next?`
);
} catch (error) {
console.error(error);
return ask(
conv,
`Something went wrong when I tried to create ${boardName}, please try again later, what can I do for you next?`
);
}
}
);
app.middleware((conv: DialogflowConversation) => {
console.log(`Intent: ${conv.intent}`);
conv.data["start"] = new Date().getTime();
conv["hasScreen"] = conv.surface.capabilities.has(
"actions.capability.SCREEN_OUTPUT"
);
conv["hasAudioPlayback"] = conv.surface.capabilities.has(
"actions.capability.AUDIO_OUTPUT"
);
conv["hasBrowser"] = conv.surface.capabilities.has(
"actions.capability.WEB_BROWSER"
);
conv["token"] = conv.user.access.token;
});
app.intent("sign-in-result", (conv: DialogflowConversation, params, signin) => {
if (signin.status !== "OK") {
return conv.close(
      `Sign in is needed to authenticate the requests, please come back when you are ready, bye!`
);
} else {
return conv.followup('event_welcome');
}
});
app.intent("get-boards", async (conv: DialogflowConversation) => {
try {
let boards = new Boards(conv["token"]);
await boards.getBoards();
return replyWithList(
conv,
      boards,
      "Boards",
      "You have no boards, create one to continue. How can I help you next?",
      "Here are your boards, select one to continue.",
["Get boards"]
);
} catch (error) {
console.error(error);
return conv.ask(error);
}
});
function replyWith3Items(
conv: DialogflowConversation,
type: gloItems,
prompts: string[] = [],
data?: any
) {
let listData = data ? data : conv.contexts.get(type).parameters.data;
let numberValues = [];
let text = "";
if (conv.contexts.get("number-reply")) {
numberValues = conv.contexts.get("number-reply").parameters.data as any[];
text = conv.contexts.get("number-reply").parameters.text as string;
}
let contextValues = {};
conv.contexts.delete("number-reply");
conv.contexts.delete("groups");
let response: string = "";
let numberKeys = Object.keys(numberValues);
for (const key of numberKeys) {
delete listData[numberValues[key].urlname];
}
let listKeys = Object.keys(listData);
if (listKeys.length > 1) {
response = `Here are the next ${type}, `;
if (listKeys.length <= 3) {
      response = `Here are the last ${type}, `;
}
for (let i = 0; i < 3; i++) {
let element = listData[Object.keys(listData)[i]];
contextValues[(i + 1).toString()] = element;
if (!element) {
break;
}
switch (i) {
case 0:
response = response + `first, ${element["name"]}.`;
break;
case 1:
response = response + ` Second, ${element["name"]}.`;
break;
case 2:
response = response + ` Third, ${element["name"]}.`;
break;
}
}
    response = response + ` Which ${type} would you like to hear more about?`;
conv.contexts.set(type, 2, { data: listData });
conv.contexts.set("number-reply", 2, {
data: contextValues,
text: response
});
return ask(conv, response, prompts);
} else if (listKeys.length === 1) {
    let element = listData[listKeys[0]];
    delete listData[listKeys[0]];
return ask(
conv,
`<speak>Here is the last ${type} - ${
element["name"]
}. How can I help you next?</speak>`
);
} else {
conv.contexts.delete(type);
conv.contexts.set("number-reply", 2, {
data: contextValues,
text: response
});
conv.ask(`These were the last ${type}: `);
return ask(conv, text, prompts);
}
}
export const fulfillment = functions.https.onRequest(app);
/*
export const test = functions.https.onRequest(async (req, res) => {
try {
return res.status(200).send();
} catch (error) {
console.error(error);
return res.status(500).send(error);
}
});*/
|
Understanding Test Takers' Choices in a Self-Adapted Test: A Hidden Markov Modeling of Process Data With the rise of more interactive assessments, such as simulation- and game-based assessment, process data are available to learn about students' cognitive processes as well as motivational aspects. Since process data can be complicated due to interdependencies in time, our traditional psychometric models may not necessarily fit, and we need to look for additional ways to analyze such data. In this study, we draw process data from a study on self-adapted testing under different goal conditions and use hidden Markov models to learn about test takers' choice-making behavior. A self-adapted test is designed to allow test takers to choose the level of difficulty of the items they receive. The data include test results from two conditions of goal orientation (performance goal and learning goal), as well as confidence ratings on each question. We show that using HMM we can learn about transition probabilities from one state to another as dependent on the goal orientation, the accumulated score and accumulated confidence, and the interactions therein. The implications of such insights are discussed. INTRODUCTION With the rise of interactive assessment and learning programs, process data become available to infer about students' cognitive and motivational aspects. Process data can help us learn about students' strategies, preferences, and attitudes. In the context of problem solving, detecting strategies may reveal the cognitive processes needed to perform the task, and may even be considered as a factor in ability estimation (DiCerbo and Behrens, 2012). However, interactive assessments such as simulation- and game-based assessments often afford opportunities to make choices about the course of the game/simulation (e.g., which variables to try in the simulation, which path to take in the game) that are not directly connected to ability but may influence its assessment. Such choices may be a result of or reflect metacognitive or motivational aspects of task performance. For example, students' self-estimated knowledge and belief in their ability, students' tendency toward challenge, or whether students are motivated to do their best or just perform at minimum effort are just a few of the factors that may play a role in choices made in interactive assessment. Metacognition of task performance is rarely assessed as part of educational or academic assessments, yet it is acknowledged as important in student performance. One aspect of metacognition is the Feeling of Knowledge (FOK; Koriat, 1993) that is evoked naturally when attempting to answer a question. The cognitive process of attempting to answer a question evokes the FOK based on the implicit and explicit accessibility cues (the easiness of accessing the answer, the vividness of the clues, the amount of information activated, etc.), and the content of that knowledge, its coherence, and the inferences that can be made from various clues retrieved (cf. Koriat, 1993, 2000). The more information activated and the easier it is accessed, the more confident a person is in his or her answer. Asking people to evaluate their level of confidence in answering a question is the most common way of eliciting their FOK estimation and is a moderately valid predictor of actual knowledge (Koriat, 1993, 2000; Wright and Ayton, 1994).
Feeling of knowing and estimation of one's own ability relate to and affect a student's engagement or motivation when performing a task, which is called the "expectancy component" in the Expectancy-Value Model of motivation by Pintrich and colleagues (Pintrich, 1988;Pintrich and De Groot, 1990;Pintrich and Schunk, 2002). Another component of the Expectancy-Value Model is the perceived value of the task. One aspect of perceived value is the goal orientation toward the task. Research on goal orientation of task performance yields a primary distinction between "performance" and "learning" goals (Dweck and Leggett, 1988). Individuals with a performance goal strive to perform at their best to demonstrate their skills to themselves or others, while individuals with a learning goal toward a task strive to learn from the task caring less about demonstrating their skills. Although individuals often exhibit these attitudes in general (), studies have shown that the orientation goal can be changed via psychological intervention given prior to performing a task and even only by the instructions of the task. One of the pervasive findings regarding this distinction is that students with a learning goal are more motivated and seek more challenges (Dweck, 2006;;Yeager and Dweck, 2012). In this study we tap into motivational and metacognitive aspects of task performance via modeling process data. We are analyzing data from a previous study that applied the goal-orientation manipulation in a self-adapted test, while collecting also confidence ratings. Self-adapted testing is designed to allow test takers to choose the level of the difficulty of the items they receive. In her study, Arieli-Attali instructed participants in one condition to perform at their best on the test, with incentive of a reward; participants in the second condition were instructed to use the self-adapted test as a learning tool for a test the following day. Main findings showed that participants in the learning goal condition chose overall more difficult items (about half a level on average out of seven possible levels) compared to the performance goal condition, after controlling for pre-test performance, manifested both in the start of the test (the first choice) and the mean choices across all items. In addition, participants in the learning goal condition reverted to a strategy of choosing only the easiest level for all items significantly less frequently than those in the performance goal condition did (3.4% compared to 11.5%, respectively), and showed more exploratory behavior by choosing a wider range of difficulty levels (range of 3 levels compared to 2.5 levels in the performance goal condition).These results support the general theory and converge with previous findings by Dweck and colleagues about the higher motivation and tendency to seek more challenges when one is holding a learning goal orientation. Regarding confidence ratings, Arieli-Attali found that those in the learning goal condition showed under-confidence while those in the performance goal condition showed over-confidence (−1.4 vs. +1.9% respectively), similar to a recent study by Dweck and colleagues (). Using the process data from Arieli-attali's study will allow us to tap deeper into the dynamics of choices as changing over time and depending on goal orientation and confidence rating. Before we describe the details of the current study, we provide a brief summary of research on self-adapted testing. 
Self-adapted tests are designed to allow test takers to choose the level of difficulty of the items they receive (Rocklin and O'Donnell, 1987;;;Arieli-Attali, 2016). Such tests provide both product datawhich items were answered correctly-as well as process datawhat difficulty levels were chosen across time. Using an item response theory modeling approach, each test taker's ability can be estimated using the product data regardless of the item difficulty levels chosen. However, the difficulty preferences (the process data) may also be useful as an indication of the test taker's metacognitive and/or motivational state. Previous studies on self-adapted tests were primarily concerned with the product data and its reliability and validity. However, there were also studies that looked into the process data particularly to examine the strategies of test takers in choosing the difficulty levels (Rocklin, 1989;;;;Revuelta, 2004). In these studies, strategies were examined with regards to correct or incorrect responses to the adjacent preceding item, based on the assumption that the "results" on a previous item, whether correct or incorrect, would affect the next choice. Researchers were interested in uncovering the "rules, " if existed, in examinees' choices, mostly adopting the approach of defining predetermined rules and looking in the data to find them. For example, Rocklin defined a "flexible strategy" as a selection of an easier level after an incorrect response, and a more difficult level after a correct response. This strategy is intuitive and in fact simulates the sequence of item difficulty produced by a Computer Adaptive Test (CAT) algorithm that maximizes test accuracy, where test takers often receive an easier item after incorrect response, and a harder item after a correct response, based on item response theory (Hambleton and Swaminathan, 1985). Defining such a strategy is based on the intuition that this would also be the most "rational" strategy people are using in their choices. In addition to the flexible strategy, Rocklin defined two variations: the "failure tolerant" and "failure intolerant." In the former, selections do not change after incorrect response (thus, showing tolerance to incorrect/failure), and in the latter, selections do not change after correct responses. Findings from this study and another study that followed () showed that few test takers adhere to one of the three clear-cut categories, while most people exhibit more of a mixed strategy (or what termed as "sluggishly flexible") where test takers selected a harder level after one or a string of several correct responses, and selected an easier level after one or a string of several incorrect responses. In other studies (e.g., ;Revuelta, 2004) authors made somewhat different distinctions (such as totally rigid, partly flexible, and partly rigid); however, the findings were still very similar, showing that the majority of test takers are in the "partly rigid partly flexible" category, supporting previous findings. In Revuelta 's study, the author also reported that a majority of selections (about 60%) had the same difficulty level as the previous item. In the current study, we take a different approach to look at the sequences of difficulty choices. Although we still look at transitions, we adopt a hidden latent approach rather than direct analysis of the observed choices. In addition, due to the inter-dependencies among difficulty choices, we apply a hidden Markov model (HMM). 
Under an HMM we assume independence between the observed choices conditional on respective latent states, which follow a first-order Markov process such that the current state only depends on the previous state. We explain initial states and state transitions in terms of probabilities and the effects of covariates on these probabilities. The HMM approach, as well as other variations of Markov models, are becoming increasingly popular among the educational measurement community for cognitive modeling (;;LaMar, 2018;) and analyses involving serially dependent process data (;;;). We add to the literature an application of the HMM approach in characterizing test takers' behavior in selfadapted tests. The advantages of using this approach in our context are three-fold: the introduction of the latent state as the metacognitive and/or motivational state that drives the observed difficulty choices can separate the stochasticity in the underlying metacognitive process from measurement errors; it allows the same observed difficulty level to be a reflection of different latent states depending on the choices before and after (see Figure 5 below for a specific example); the estimation is robust against some design decisions such as the number of difficulty levels offered in different applications of self-adapted testing (whether 5, 7, or 9 difficulty levels are offered may change the observed sequence). THE CURRENT STUDY In this paper we conduct a secondary analysis of the data from Arieli-Attali. The original study evaluated how the goal orientation conditions affected test takers' item difficulty choices, as well as the influence of different feedback conditions that will not be considered here. The aim of the current analysis is to model test takers' choices of item difficulty under the two orientation goal conditions, while taking into account the correctness and confidence ratings of previous items. We applied a first order Markov process, that looked at the change of the current state/class as dependent on the previous one. However, we used accumulated correctness and confidence as predictors. That is, we assumed that accumulated prior results of overall success (accumulated correct answers) and overall state of FOK (accumulated confidence) would affect the latent state and hence the next observed choice. Using HMM we obtained the transition probabilities between the latent classes. Transition from a class with lower difficulty level to one with a higher difficulty level (i.e., an upward transition) represents a scenario where a test taker was willing to take on higher difficulty levels presumably due to increase in motivation, openness to challenge and exploration and/or increase in self-perceived ability due to evidence of success. On the contrary, a transition from choosing higher to lower difficulty items (i.e., a downward transition) illustrates the case where a test taker preferred to lower the difficulty, presumably due to a decrease in motivation or to alleviate stress, and/or as a strategy to get a better score/feedback (get more items correct). Our first research question concerned modeling the transitions between latent states given the current state in the two goal conditions. Based on Arieli-Attali ' results we anticipated that participants in the performance goal condition would not only have higher probability of choosing the lower difficulty state initially but also transition less from this state. 
Our second research question addressed transitions in difficulty as dependent on the correctness of and confidence in past item responses. We hypothesized that overall accumulated correctness and confidence would interact such that being correct and confident would generally enhance upward transitions while being incorrect and unconfident would enhance downward transitions. Regarding transitions in the mis-match cases of being correct with low confidence (under-confidence) or being incorrect with high confidence (over-confidence), we hypothesized overall more transitions in both directions resulting from the conflict between confidence and feedback about correctness. The paper is organized as follows: we first describe the data and the modeling approach. Next we provide some insights into the data using visualization of the raw data, the most common sequences and the patterns observed. We then report the results of the HMM analysis addressing specifically the two research questions. Lastly, we discuss these results in relation to their contribution to the emerging field of analyzing process data in assessment. Participants, Design, and Procedure Arieli-Attali reported a final sample of 583 adult participants (age range = 18-74 years, M = 33.09; 45% women), recruited through Amazon Mechanical Turk (limited to native English speakers and residents of the US or Canada), who participated in a task over 2 days. Ethics approval for the study was obtained from Fordham University Institutional Review Board and a written informed consent was obtained from all participants (for the IRB approval and informed consent form see appendix E in Arieli-Attali, 2016). Our analysis includes data only from Day 1 of the experiment. On Day 1, participants completed a 24-item non-adaptive pre-test and a 40-item self-adapted test, both comprising open-ended general knowledge items. We used the pre-test scores that were obtained in the form of percentage of correct responses (ranging from 0.22 to 1, with a mean of 0.75 and a standard deviation of 0.16). Following completion of the pre-test, participants were randomly assigned to one of two goal conditions: 286 participants were in the performance goal condition (condition = 1), instructed to maximize their score on the test, and 297 were in the learning goal condition (condition = 0), instructed to use the test as a learning tool for the test the next day. During the self-adapted test, participants chose a difficulty level for each item out of seven difficulty levels offered. After responding to each question, participants rated their confidence in their answer on a scale from 0 to 100 with 10-point intervals. After submission of the answer and the confidence ratings, participants received feedback whether their answer was correct or not and were provided with the correct answer. Coding of correctness was 0 for incorrect and 1 for correct. The observed item difficulty levels were integers from 1 to 7, which we divided by 7 to arrive at a range comparable with other variables used in the model fitting. Confidence reporting was converted proportionally to a scale from 0 to 1. Modeling We modeled test takers' choices of item difficulty using a hidden Markov model (HMM; Böckenholt, 2005; Visser and Speekenbrink, 2010; Visser, 2011) that assumed the manifest variables (i.e., item difficulty choices) are conditionally independent given an underlying latent Markov chain with a finite number of latent states or classes of the general difficulty preferences.
We assumed that there are M states in the Markov chain. In the following text, we use "state" and "class" interchangeably to refer to the latent state of the M-state Markov chain, which is denoted as $S_{i,j}$, where integers i and j, respectively, index participants and items. The categorical variable $S_{i,j}$ was an integer element from the finite set {1, 2, ..., M} and varies across people and items. In our measurement model (as shown in the upper panel of Figure 1), we assumed that the conditional distribution of the manifest choices of item difficulty, $y_{i,j}$, given $S_{i,j}$, was univariate normal with mean $\mu_{S_{i,j}}$ and variance $\sigma^{2}_{S_{i,j}}$. Although $y_{i,j}$ was ordinal in our current study, we treated it as continuous because we conceptualized the 7 manifest difficulty levels as a continuum representing participants' preferences of item difficulty and the intervals between any two points were approximately equal. The seven-level difficulty structure corresponded to the seven categories of a categorized item difficulty continuous scale (−3, −2, −1, 0, 1, 2, 3). The average difficulties of items at each difficulty level are: −3.3, −1.8, −0.9, −0.2, 0.5, 1.0, and 1.8 for levels 1 through 7, respectively (corresponding to 92, 80, 68, 55, 41, 30, and 16% average probability of a correct answer at each level). So the data were an ordinal approximation of a continuous variable. Practically, the rule of thumb is that ordinal variables with five or more categories can often be used as continuous without substantial harm to the analysis (Johnson and Creech, 1983; Norman, 2010). There were 7 categories in our study. We preferred to treat the data as continuous rather than as categorical for ease of interpretation. Depending on the magnitude of $\mu_{S_{i,j}}$, each class thus represented a more general item difficulty level that the participants feel comfortable choosing but may stochastically end at different manifest choices according to the measurement model.
FIGURE 1 | An illustration of a 3-state hidden Markov model. The latent categorical $S_{i,j}$ is linked to the observed variable $y_{i,j}$, j = 1, 2, ..., 40, through a measurement model. $\pi_{m,i1}$, m = 1, 2, 3, is the probability of individual i's being initially in the class m and is explained by observed covariates $I_{i,1}$. $p_{lm,ij}$ is the probability of individual i's transitioning from class l at item j − 1 to class m at item j, and is explained by observed covariates $h_{i,j}$.
In the latent model (as shown in Figure 1), we assumed that the change process of $S_{i,j}$ followed a first-order Markov chain process, where the current state only depended on the previous state. We described the dynamics of $S_{i,j}$ through its initial state and transitions between the states. The former depends on an M × 1 vector of initial state probabilities, $\pi_{i1} = (\pi_{1,i1}, \ldots, \pi_{M,i1})'$, and the latter is characterized by an M × M matrix of transition probabilities of moving from a state l to a state m, $P_{ij} = [p_{lm,ij}]$, whose k-th row is denoted as $P_{ij,k}$. Individual differences in the dynamic processes of $S_{i,j}$ were assumed to lie in the initial state probabilities and the transition probabilities, represented by two multinomial logistic regression models as follows:
$$\pi_{m,i1} = \frac{\exp(a_m + b_m' I_{i,1})}{\sum_{q=1}^{M} \exp(a_q + b_q' I_{i,1})}, \qquad (1)$$
$$p_{lm,ij} = \frac{\exp(c_{lm} + d_{lm}' h_{i,j})}{\sum_{q=1}^{M} \exp(c_{lq} + d_{lq}' h_{i,j})}, \qquad (2)$$
where m = 1, 2, ..., M denotes the latent classes, $I_{i,1}$ and $h_{i,j}$ are vectors of covariates used for prediction in the logistic regressions, $a_m$ and $c_{lm}$ denote the logit intercepts, and $b_m$ and $d_{lm}$ denote the regression coefficients of the covariates in the associated log-odds (LO) relative to a specified reference class.
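To make Equations (1) and (2) concrete, the following minimal Python sketch evaluates the multinomial logistic (softmax) regression for a hypothetical 3-state model; all parameter values and covariates here are invented for illustration and are not estimates from the study.
import numpy as np

def softmax(z):
    # Normalized exponential: maps a vector of logits to probabilities summing to 1.
    z = z - np.max(z)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical parameters for M = 3 states, with class 1 as the reference
# (its intercept and slopes are fixed at zero for identification).
a = np.array([0.0, -0.5, 0.3])             # intercepts a_m for the initial-state model
b = np.array([[0.0, 0.0, 0.0],             # slopes b_m for covariates (d, p, d*p)
              [0.8, -0.6, 0.2],
              [-0.4, 1.1, 0.5]])
d, p = 1.0, 0.75                           # goal condition and pre-test score for person i
I_i1 = np.array([d, p, d * p])
pi_i1 = softmax(a + b @ I_i1)              # Equation (1): initial state probabilities
print(pi_i1, pi_i1.sum())                  # three probabilities summing to 1

# Equation (2) works the same way, applied once per origin state l, with
# intercepts c_lm and slopes d_lm on the covariate vector h_ij.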
In the current study, we predicted the initial class probabilities, $\pi_{m,i1}$, using the goal condition (abbreviated as d), pre-test score (abbreviated as p), and their interactions, and explain the transition probabilities, $p_{lm,ij}$, using the goal condition, accumulated correctness (abbreviated as r), accumulated confidence (abbreviated as f), and the interactions therein. The accumulated correctness and confidence at item j were calculated as the percentage of correctness or average confidence among items from the beginning to item j. For identification purposes, both Equations (1) and (2) require specification of a reference class where all parameters in the regression equation are zero, which ensures that the initial class probabilities across all classes and the probability of moving into any class from a single class sum to 1.0. $\pi_{m,i1}$ is the probability of individual i's being initially in the class m, and the regression coefficients $b_m$ denote the effects of the covariates on the LO of being initially in the class m relative to the reference class. $p_{lm,ij}$ is the probability of individual i's transitioning from class l at item j − 1 to class m at item j, and the slopes in $d_{lm}$ represent the effects of the covariates on the LO of transitioning from the lth class into the mth class relative to transitioning into the reference class. The choice of the reference class will only affect the logit regression parameters to be estimated, but will not influence the fit indices, the other parameter estimates, or the transformed estimated probabilities by a notable amount. Theoretically, the probability of being in the reference class cannot be zero in the model. Practically, it is recommended to choose a class that is presumably large enough and can make interpretation of results easier, for example, the normative class, the largest class, or the intermediate class. In this study, we used the default latent reference class of the R package depmixS4 (i.e., the first class), which turned out to be the medium class based on its mean estimate, but the findings should not be sensitive to this choice. We can summarize Equations (1) and (2) in vector form as $\pi_{i1} = g(a + B' I_{i,1})$ and $P_{ij,l} = g(c_l + D_l' h_{i,j})$, where $g(\cdot)$ is the softmax (normalized exponential) function. In our full model (also shown in Table 1), $I_{i,1}$ is a 3 × 1 vector of the covariates d, p, and their interaction dp, and $h_{i,j}$ is a 7 × 1 vector of the covariates including d, r, f, three two-way interactions (df, dr, and fr), and one three-way interaction (dfr). Parameters of the model can be estimated using the expectation-maximization (EM) algorithm, where the expectation of the complete log-likelihood function of the parameters given the observations $y_{i,j}$ and states $S_{i,j}$ is iteratively maximized to yield parameter estimates. In the R package depmixS4 (Visser and Speekenbrink, 2010), the EM algorithm has been implemented for unconstrained models, using the standard glm routine and the nnet.default routine in the nnet package (Venables and Ripley, 2002) in the maximization step for maximizing different parts of the expectations obtained in the expectation step. For more information on the estimation, we direct the readers to the Visser and Speekenbrink paper. Model fit of hidden Markov models can be compared using the Akaike information criterion (AIC; Akaike, 1973) and the Bayesian information criterion (BIC). The lower the AIC or BIC, the better the model fits the data. The fit of nested models can also be examined using likelihood ratio tests (LRT).
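As a small illustration of these fit comparisons, the sketch below computes AIC, BIC, and a likelihood ratio test from model log-likelihoods; the log-likelihoods and parameter counts are made up and do not correspond to the models reported in Table 1.
import numpy as np
from scipy.stats import chi2

def aic(loglik, k):
    # Akaike information criterion: 2k - 2*log-likelihood (lower is better).
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: k*ln(n) - 2*log-likelihood (lower is better).
    return k * np.log(n) - 2 * loglik

def lrt(loglik_restricted, loglik_general, df_diff):
    # Likelihood ratio test for nested models; the statistic is asymptotically
    # chi-square distributed with df equal to the difference in parameter counts.
    stat = 2 * (loglik_general - loglik_restricted)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical nested models: a constrained model with 20 parameters and a more
# general model with 26 parameters, fitted to n = 583 participants' sequences.
ll0, ll1, n = -9100.0, -9060.0, 583
print(aic(ll0, 20), aic(ll1, 26))
print(bic(ll0, 20, n), bic(ll1, 26, n))
stat, p = lrt(ll0, ll1, df_diff=6)
print(f"chi-square = {stat:.2f}, df = 6, p = {p:.4g}")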
If p < 0.05, the more general model shows a significant improvement in fit over the constrained model at the .05 level. Additionally, given a sequence of observations $\{y_{i,j}\}$ and a hidden Markov model, we could get the most probable sequence of the state estimates of $\{S_{i,j}\}$ using the Viterbi algorithm. In the depmixS4 package, one can use the posterior() function to obtain the Viterbi most probable states, as well as the highest probabilities of a state sequence ending in a certain state at item j with all observations up to item j taken into account. RESULTS In this section, we first provide a description and visualization of the data, along with the HMM general results about state classifications and initial state modeling, followed by our two transition modeling questions: (1) modeling transitions between states in the two goal conditions; (2) modeling transitions based on accumulated correctness and confidence and their interactions. Description of Data Here we summarize the most relevant characteristics of the data. First we present the choice sequences and the visualization of the data: Figure 2 was created using the R package TraMineR, and shows all the difficulty choice sequences and the ten most frequent sequences for the performance (P) and learning (L) goal conditions. The most frequent sequences are those with no transitions, where participants chose a level and stayed with it for the entire 40-item test, most frequently the extreme levels (levels 1 and 7). Although there was not a clear difference between the conditions in the number or proportion of participants choosing to start and stay at the highest difficulty level (level 7; 3 participants in the performance goal condition, constituting 1.05%, and 5 in the learning goal condition, taking up 1.68%), substantially more participants chose to start at the lowest difficulty level (level 1) and stay there in the performance goal condition (33 or 11.54%) than in the learning goal condition (10 or 3.37%). In the learning goal condition there were also frequent sequences of starting and staying at levels 2, 4, and 5 (as can be seen in the right-most panel), while in the performance goal condition these sequences were not frequent. Generally, there were also more switches in difficulty levels in the learning goal condition than in the performance goal condition. The average number of upward (i.e., from a lower manifest difficulty level to a higher one) and downward (i.e., from a higher manifest difficulty level to a lower one) transitions in the learning condition were 7.43 and 6.51, respectively, both slightly higher than in the performance condition (6.07 and 5.40, respectively). Regarding the distribution of choices, among all chosen item difficulty levels (i.e., a total of 583 × 40 choices), 22.85% were at level 1, ranked as the highest proportion and followed by 19.67% at level 4, 16.13% at level 3, 14.22% at level 2, 10.73% at level 5, 8.87% at level 7, and 7.52% at level 6. The distribution of the manifest choices is displayed in Figure 3, which suggests that the marginal distribution of the data should follow a mixture distribution. The chosen item difficulty levels were negatively correlated with answer correctness (point-biserial correlation $r_{pb}$ = −0.30, p < 0.001) and perceived confidence (r = −0.28, p < 0.001), while the latter two variables were positively correlated ($r_{pb}$ = 0.60, p < 0.001).
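Returning to the Viterbi decoding mentioned above: the study used the posterior() function of depmixS4, but the algorithm itself is short. Below is a plain-Python sketch for a Gaussian-emission HMM with a fixed transition matrix (a simplification, since in the fitted models the transition probabilities vary with covariates); all parameter values and the input sequence are hypothetical.
import numpy as np
from scipy.stats import norm

def viterbi(y, pi, P, mu, sigma):
    # Most probable latent state sequence for a Gaussian-emission HMM (log domain).
    M, J = len(pi), len(y)
    logB = np.array([norm.logpdf(y, mu[m], sigma[m]) for m in range(M)])  # M x J emission log-densities
    delta = np.zeros((M, J))
    psi = np.zeros((M, J), dtype=int)
    delta[:, 0] = np.log(pi) + logB[:, 0]
    for j in range(1, J):
        trans = delta[:, j - 1][:, None] + np.log(P)  # trans[l, m] = best score ending in l, then l -> m
        psi[:, j] = trans.argmax(axis=0)
        delta[:, j] = trans.max(axis=0) + logB[:, j]
    states = np.zeros(J, dtype=int)
    states[-1] = delta[:, -1].argmax()
    for j in range(J - 2, -1, -1):  # backtrack
        states[j] = psi[states[j + 1], j + 1]
    return states

# Hypothetical 3-state example on a 40-item difficulty sequence scaled to (0, 1].
mu, sigma = np.array([0.21, 0.52, 0.88]), np.array([0.08, 0.10, 0.07])
pi = np.array([0.5, 0.3, 0.2])
P = np.array([[0.92, 0.06, 0.02], [0.04, 0.92, 0.04], [0.02, 0.06, 0.92]])
y = np.concatenate([np.full(20, 2 / 7), np.full(20, 5 / 7)])  # jumps from a low to a medium difficulty level
print(viterbi(y, pi, P, mu, sigma))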
To examine the item dependencies in the difficulty choices, we obtained the residuals of the manifest difficulty data after removing the participant and item effects in a generalized additive mixed model using the R package mgcv. The autocorrelation functions (ACFs) of the residuals are plotted in Figure 4 using the R package itsadug (van ), where the first panel displays the average ACF across participants, and the remaining five are the ACFs for 5 randomly selected individuals. Although there were individual differences in the ACFs, on average the lag-1 autocorrelation was relatively high, around 0.44, suggesting the need for a first-order Markov model. Hidden Markov Modeling Results We used the R package depmixS4 (Visser and Speekenbrink, 2010) to fit a series of HMM models to the data, which are summarized in Table 1. We hence present the results from 3-state HMMs. The parameter estimates of $\mu_{S_{i,j}}$ and $\sigma_{S_{i,j}}$ in the measurement model are summarized in Table 1. Based on Table 1, the three states correspond to low (L), medium (M), and high (H) item difficulty levels. The estimated normal densities are shown as overlaid on the manifest distribution in Figure 3. The fitted mixture distribution of the hidden Markov models was still able to capture the manifest distribution of the chosen difficulty levels. Figure 5 shows four representative participants' trajectories of item difficulty choices, accumulated confidence, and accumulated correctness, accompanied by the estimated most probable state at each item colored differently in the background. For example, participant 27 in the learning goal condition stayed at the low-level difficulty across time (switching between levels 1 and 2) and the most probable latent state throughout was the L latent class (background colored blue). The accumulated correctness was generally high (above 70%) and the accumulated confidence was relatively low (mostly below 50%), yet they co-varied across time. Participant 347 in the performance goal condition, on the other hand, chose high-difficulty items across time (levels 5, 6, and 7) and the most probable latent state was the H latent class (background colored pink). The levels of confidence and correctness for this participant were almost identical, with a decline at approximately item 8. Participants 374 and 468 showed more transitions in their choices of difficulty levels. Participant 468 showed a gradual increase in item difficulty choices reflected in the transition of the most probable latent state from L to M to H latent states (blue → green → pink) with a steady high accumulated correctness albeit moderately low accumulated confidence. Lastly, participant 374 showed many transitions upwards and downwards, while correctness and confidence were moderately low. Note that participant 374 provides an illustration of how the same manifest/observed difficulty level can be associated with different most probable latent states: level 4 (just above .5 on the y-axis) was linked to the H state when the surrounding difficulty choices were higher (between items 10 and 20), but linked to the M state when the preceding choices were lower (between items 25 and 30) (see arrows on the figure). Similar to Arieli-Attali in predicting choices, we used pre-test score (i.e., percentage of correctness), goal condition, and their interaction as predictors of the initial difficulty latent state; the resulting model is Model B1.
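The lag-1 autocorrelation check described above (the residual ACFs in Figure 4) can be reproduced with a short sketch; the residual series below is simulated as an AR(1) process rather than taken from the actual data.
import numpy as np

def acf(x, max_lag=5):
    # Sample autocorrelation function of a demeaned 1-D series.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([1.0] + [np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

# Simulate a 40-item residual series with lag-1 autocorrelation near 0.44.
rng = np.random.default_rng(0)
r = np.zeros(40)
for j in range(1, 40):
    r[j] = 0.44 * r[j - 1] + rng.normal(scale=0.5)
print(acf(r))  # element 0 is lag 0 (always 1.0); element 1 roughly approximates the lag-1 value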
As noted above, Arieli-Attali reported that test takers' selection of difficulty on the first item differed across goal conditions, with lower difficulty chosen in the performance group, after controlling for pre-test performance. Our model analysis adds to this finding by using the three latent states rather than manifest difficulty levels. Parameter estimates and fit indices are shown in Table 1. Model B1 fits significantly better than Model B based on the LRT (χ² = 71.26, df = 6, p < 0.05). As it is not intuitive for us to draw conclusions from the parameter estimates in the LO sense, we illustrate the logistic regression results in terms of expected probabilities evaluated at certain values of the predictors in stacked bar figures. Figure 6 indicates that when participants' pre-test scores are controlled, the expected probability of starting the test in a low-difficulty state, compared to medium- or high-difficulty, is higher in the performance goal condition. Also it is evident from Figure 6 that within a condition, the higher the pre-test score, the higher the probability that the participant would initially be in a medium- or high-difficulty state. In particular, participants who answer fewer than half of the pre-test items correctly are more likely (above 50%) to be in the low-difficulty initial state. Participants who have higher or full pre-test scores are more likely to be in an initial state of medium or high difficulty.
FIGURE 5 | Four representative individuals' trajectories of item difficulty choices, accumulated confidence, and accumulated correctness, with the estimated most probable state at each item as identified by the 3-state hidden Markov model colored differently in the background.
Now we turn to model transitions. Research Question 1: Modeling Transitions in the Two Goal Conditions Our first research question addressed modeling transitions between states in the two goal conditions. We added a multinomial logistic regression of the transition probabilities with condition as predictor to Model B1 (i.e., Model B2a), which significantly improves the fit of Model B1 (χ² = 35.89, df = 6, p < 0.05) and has a lower AIC value (note, however, that the BIC of B2a is larger than that of B1). Fitting results of Model B2a are presented in Table 1 and Figure 7. Figure 7 shows the expected probability of transitions to and from each of the three latent states separately for each condition. As this figure shows, in both conditions the most probable choice behavior is staying in the same latent difficulty state with probabilities of over 90% (recall that different manifest difficulty levels were included in each latent state). However, when looking at the transitions between conditions, the model predicts a higher likelihood of staying at low difficulty and a lower likelihood of upward transitions from low to medium difficulty in the performance goal condition. In other words, participants in the performance goal condition are expected to transition less from the low state, confirming and adding to the results reported by Arieli-Attali that test takers in the performance goal condition tended to choose the lower level more frequently than in the learning goal condition, shown here also when considering latent states and transitions between states. Note that transitions from the medium or high state (either upwards or downwards) were similar between the two goal conditions.
Research Question 2: Modeling Transitions Based on Correctness and Confidence We next fitted a more general model than Model B1, with accumulated correctness and confidence across items as predictors without condition (i.e., Model B2b), to evaluate the influence of these characteristics on transitions. Parameter estimates and fit indices are presented in Table 1 and expected probabilities are displayed in Figure 8. Compared to Model B1, B2b fits the data significantly better (χ² = 219.83, df = 18, p < 0.05) and has lower AIC and BIC values. Note that the figure presents the four extreme quadrants of the two continuous scales. The horizontal line represents the accumulated correctness showing the extreme ends of the scale as "all incorrect" and "all correct" (from left to right), while the vertical line represents the accumulated confidence, showing the extremes of lowest and highest confidence (from bottom to top). As this figure shows, with high accumulated correctness (top and bottom right-side panels), the expected probability of transitions is low and staying at the same difficulty state has the highest likelihood across the confidence scale. However, when accumulated correctness decreases (toward the quadrants in the top and bottom left-side panels) there is a higher likelihood for transitions in both directions, and the likelihood of transitions increases as the confidence increases (i.e., illustrating the interaction between these factors). In particular, we can see expected downward transitions from the medium state when confidence is low (22.3%; bottom left-side panel), and from the high state when confidence is high (27.7%; top left-side panel), as expected. However, we can also see that when the accumulated confidence is highest (top left-side panel; indicating over-confidence) participants are more likely to transition upwards from the low state (66.1%), equally to either the medium or high state. In other words, staying at the same state is the least probable in this case relative to other quadrants and states (recall that this quadrant is the extreme end of the confidence scale, and transitions upwards from the low state are expected to increase as confidence increases). To get a sense of the frequency of participants with different relations between accumulated correctness and confidence, in particular considering the representation within each of the four quadrants illustrated in Figure 8, we show in Figure 9 the relation between accumulated correctness and confidence after 10, 20, 30, and 40 items. As can be seen, the data cluster along the diagonal increasingly as the number of items increased, with sparse representation in the quadrants with mismatches between correctness and confidence. This suggests that test takers were overall well-calibrated in their confidence, with little representation of over- and under-confidence. We then further added back goal condition as a predictor of the transition probabilities to Model B2b (i.e., Model B3), which significantly improved the fit of Model B2b (χ² = 68.67, df = 24, p < 0.05) and has a lower AIC value. Figure 10 shows the same transition probabilities as in Figure 8 split by goal condition. The downward transitions when accumulated correctness decreases are also evident when split by goal condition and are more so in the learning goal condition.
The findings about the higher likelihood of upward transitions in the over-confident quadrant are still evident when split into the goal conditions, with somewhat more transitions in the performance goal compared to the learning goal condition (73.8 and 65.2%, respectively, at the extreme quadrant of the confidence scale). A new finding from this split analysis is that there are also more transitions in the performance goal condition when accumulated correctness is high but confidence is low (27%; bottom right-side panel, the quadrant indicating under-confidence). DISCUSSION The purpose of our secondary data analysis from Arieli-Attali was to apply a hidden Markov model to test takers' choices of item difficulty in a self-adapted test. We investigated whether those choices could be modeled by the goal condition (learning vs. performance), as well as the test takers' correctness and confidence across items. Analysis of the data using the hidden Markov model identified three latent states of difficulty from the seven manifest levels. These three latent states correspond to low, medium and high difficulty levels, and may be an indication of a low, medium or high self-estimated ability and/or motivation. We first modeled test takers' initial difficulty state based on their pre-test scores and goal condition, confirming past results about preference of lower difficulty in the performance goal condition, showing it here also as a higher expected probability of starting in the low state in the performance goal condition after controlling for pre-test scores. The results here add to the understanding that this is not just the single first choice influenced by the goal orientation (in addition to the self-perceived ability), but rather it is the participant's latent state that is influenced and therefore drives the choices accordingly. This result further confirms that when the goal orientation is to excel at a task, individuals may avoid taking on challenges. We then used the model to predict transitions across items, and found the highest likelihood was to remain at the same difficulty state across items. This is the main contribution of applying a latent state approach in this context, because manifested choices may show transitions attributable to random variability while actual latent states are less likely to change. When using only goal condition as a predictor, there was no difference in transitions from the middle or high states between the two goal conditions; however, there was a slightly lower likelihood of upward transitions from the low state in the performance goal condition relative to the learning goal condition, confirming the overall finding that test takers in the performance goal condition applied a strategy of the "easy way out," keeping low effort. The main contribution of this analysis is in the application of the HMM to model the interaction between answer correctness and confidence. We have shown that the likelihood of transitions increased when the accumulated correctness decreased. This result is intuitive as it means that participants were attentive to the correctness feedback and when they were overall wrong they tended to transition or change their metacognitive/motivational state.
We found that downward transitions were more likely across the confidence scale, as expected, but upward transitions were more likely when confidence increased for those who were in the low state; that is, we found that when confidence was highest, it reached the highest likelihood of about 2/3 upward transitions at the over-confidence end of the scale. This finding can be related to the literature on confidence and learning from errors by Metcalfe and colleagues (Butterfield and Metcalfe, 2001; Metcalfe and Xu, 2018). This line of research generally showed that people who made an error with high confidence were more likely to correct their mistake compared to a situation when the error was made with low confidence (the hypercorrection phenomenon). One of the explanations of this phenomenon is the surprise/attention explanation, which says that individuals experience surprise at being wrong when they were sure they were right, and as a consequence they rally their attentional resources (Butterfield and Metcalfe, 2006). In our study we showed that individuals with high confidence who were proven incorrect were more likely to change difficulty state, as reflected in more transitions upwards. The transitions upwards may be a reflection of being more attentive or putting forth more effort, similar to what occurs under the hypercorrection phenomenon. We also found that accumulated correctness and confidence interacted with goal condition in predicting transitions. The transitions when accumulated correctness decreases were also likely when split into the goal conditions, but the downward transitions have higher likelihood in the learning goal condition, while the upward transitions in the over-confidence case have higher likelihood in the performance goal condition. This analysis also revealed a new finding of a higher likelihood of upward transitions in the performance goal condition when accumulated correctness was high but confidence was low, i.e., at the under-confidence end of the scale. These two findings together, that in the performance goal condition test takers were more likely to transition upwards from the low state in both mismatched conditions (over- and under-confidence), indicate the specific interaction of the goal with correctness and confidence, and may suggest that when participants are instructed to do their best, they experience a mis-match between what they think they know and what they actually know (feedback of correctness), and when they are in the low state without a possible downward transition, they try to "find their luck" someplace else or decide to put in more effort. This finding may suggest that miscalibration between confidence and correctness could serve as a motivating factor, as being in the low state in the performance goal condition has been shown to stem from low motivation. This combined pattern was not found for the learning goal condition, suggesting that evidence about miscalibration when one is striving to learn has less of an effect (i.e., it had an effect in over-confidence, but not in under-confidence). These results are consistent with the literature on goal orientation, showing that participants who are encouraged to use the test for learning rather than focusing on performance are more likely to seek challenges and show resilience amid difficulties (Yeager and Dweck, 2012). However, our additional findings about the interaction between correctness, confidence, and goal orientation further shed light on the complexity of the choices made in self-adapted tests.
The interactions we found suggest that the test taker's goal (i.e., whether the participant needs to maximize one's score, as the goal of the test), confidence across items (as a reflection of one's internal states), and correctness (as outside feedback) together may form a recursive feedback loop that results in changes of an individual's motivational and/or metacognitive state and further affects choice behavior. To summarize, in this study we explored ways to learn about the motivation and feeling of knowledge of test takers and their effect on test takers' actions while engaging in an interactive self-adapted test, via analyzing process data. Motivation and engagement are particularly crucial in low-stakes assessment programs (such as the National Assessment of Educational Progress program, or the Trends in International Mathematics and Science Study), where test scores have no personal consequences for individuals, potentially resulting in low motivation to do one's best, and subsequently threatening the validity of the test scores. While low-stakes programs make attempts to make their tests more interactive and appealing to participants in order to increase their engagement, we offer insights on how goal orientation, correctness and confidence influence choices that determine the course of the test. More research is needed to learn about how complex choice making in simulation- and game-based assessment can be modeled by factors inherent to the simulation or the game (such as curiosity, challenge seeking, sense of satisfaction, and the like). |
import os
import shutil
import argparse

# Example invocation (script name and values are illustrative):
#   python prepare_run_dirs.py --model sgpt-125M --pooling weightedmean --datasets msmarco --margin 0.1
parser = argparse.ArgumentParser()
parser.add_argument("--model", default="sgpt-125M", type=str)
parser.add_argument("--pooling", default="weightedmean", type=str)
parser.add_argument("--datasets", default="sgpt-125M", type=str)
# The margin default is fractional (0.1), so it must be parsed as a float, not an int.
parser.add_argument("--margin", default=0.1, type=float)
args = parser.parse_args()

# Common tag shared by all run-specific directories.
run_tag = "{}_{}-pooling_{}".format(args.model, args.pooling, args.datasets)

def fresh_dir(path):
    # Remove the directory if it exists, then recreate it empty.
    if os.path.exists(path):
        shutil.rmtree(path)
    os.mkdir(path)

def touch(path):
    # Create an empty placeholder file.
    with open(path, "w"):
        pass

# prepare qrels
qrels_dir = "/data/home/scv0540/run/my_dr/datasets/{}/{}".format(args.datasets, run_tag)
fresh_dir(qrels_dir)
touch("{}/qrels_path.tsv".format(qrels_dir))

# prepare checkpoints
checkpoint_dir = "/data/home/scv0540/run/my_dr/checkpoints/{}".format(run_tag)
fresh_dir(checkpoint_dir)
touch("{}/checkpoint_path.tsv".format(checkpoint_dir))

# prepare results files
results_dir = "/data/home/scv0540/run/my_dr/results/{}".format(run_tag)
fresh_dir(results_dir)

# prepare score files (shared across runs, so created only if missing)
score_dir = "/data/home/scv0540/run/my_dr/scores"
if not os.path.exists(score_dir):
    os.mkdir(score_dir)

# prepare logs files
logs_dir = "/data/home/scv0540/run/my_dr/logs/{}".format(run_tag)
fresh_dir(logs_dir)
|