OBJECTIVE To investigate the preventative effect of mutant kringle 5 (mK5) eye drops on corneal allograft rejection. METHODS This was an experimental study. F344 rats (an outbred strain) and Lewis rats were used as donors and recipients, respectively. Sixty Lewis rats were randomly divided into Groups B, C, D and E. Group A was an F344 autograft control; Group B, an allograft control (the control groups were given normal saline only); Groups C and D, allograft groups treated with 5 mg/L and 10 mg/L mK5 eye drops, respectively; Group E, an allograft group treated with 0.1% dexamethasone eye drops. The eye drops were applied one drop four times per day for two weeks. The occurrence and development of corneal allograft rejection and corneal neovascularization (CNV) were observed every other day by slit-lamp microscopy; the grafts were evaluated clinically by means of Holland's scoring system and the area of CNV was calculated. Nine rats per group were killed on the 14th day, and the corneas were taken for histopathological examination. Analysis of variance was used to analyze the outcomes. RESULTS The average graft survival times of Groups B, C, D and E were (9.3 +/- 2.1), (21.1 +/- 7.3), (23.5 +/- 10.8) and (28.2 +/- 19.1) d, respectively. Compared with Group B, Groups C and D had a statistically significant prolongation of survival time (q = 10.24, 13.47; P < 0.05). Although 0.1% dexamethasone eye drops (Group E) prolonged graft survival time as compared with mK5 eye drops, the difference was not statistically significant (q = 2.54, 1.49; P > 0.05). The onset of CNV in Group A was at (3.1 +/- 0.8) d, Group B (2.6 +/- 0.5) d, Group C (6.4 +/- 0.5) d, Group D (7.8 +/- 0.7) d and Group E (5.3 +/- 1.0) d. A significant difference (q = 31.58, 51.21, 19.98; P < 0.05) was found between Groups C, D, E and Group A. There were also significant differences between Groups C, D, E and Group B (q = 43.87, 67.14, 24.53; P < 0.05). The CNV areas of Groups C and D were also smaller than that of Group B (q = 30.76, 62.14; P < 0.05), and similar results were obtained in comparison with Group E (q = 15.20, 25.64; P < 0.05). Fewer inflammatory cells and less CNV were found in the corneas of the groups treated with mK5 eye drops. CONCLUSION Topical application of mK5 eye drops can prevent corneal graft rejection and corneal neovascularization in rats.
Advanced processors use pipelining techniques to execute instructions at very high speeds. A pipeline is like an assembly line. In an automobile assembly line, there are many steps, each contributing to the construction of the car. Each step operates in parallel with the other steps, though on a different car. In a processor pipeline, each step completes a part of an instruction. Like the assembly line, different steps are completing different parts of different instructions in parallel. Each of these steps is called a pipe stage. The stages are connected one to the next to form a pipe: instructions enter at one end, progress through the stages, and exit at the other end. A pipeline is most effective if it can process a steady stream of instructions in a sequential manner. When a branch is executed, it may change the instruction pointer (IP) to something other than its current value plus a predetermined fixed increment. If a branch changes the IP to the address of the branch target (given by the branch instruction), it is a “taken” branch. If it falls through, it is “not taken”. Knowledge of whether the branch will be taken or not, and the address of the branch target, typically becomes available only when the instruction has reached the last or next-to-last stage of the pipe. This means that, if the branch is taken, all instructions that issued later than the branch (and hence are not as far along in the pipe as the branch) are invalid, i.e., they should not be executed, because the next instruction to be executed following the branch is the one at the target address. All of the time spent by the pipeline on the later-issued instructions is wasted delay, significantly reducing the speed improvement that can be obtained from the pipeline. To alleviate the delay that may be caused by a branch, there are two steps that can be taken. First, find out whether the branch will be taken or not taken (the “direction” of the branch) earlier in the pipeline.
Second, compute the target address earlier. One method for dealing with branches is to use hardware inside the processor to predict whether a branch instruction will be taken or not taken. Examples of such hardware include the 2-bit saturating counter predictor (see “Computer Architecture: A Quantitative Approach”, David A. Patterson and John L. Hennessy, 2nd Edition, Morgan Kaufmann Publishers, pp. 262–271) and the local history predictor, which uses the past behavior (taken/not-taken) of a particular branch instruction to predict future behavior of the instruction. The use of a combination of two different predictors has been proposed to obtain more accurate predictions: in U.S. Pat. No. 5,758,142, the final prediction at the output of a multiplexer is selected between a prediction provided using a branch past-history table and one provided using a global branch history table, where the selection is made according to the most significant bit of a counter. Another technique uses the combination of the local history predictor and the saturating counter predictor to achieve more accurate predictions than either one can by itself, by using the branch history (obtained from a matching entry in a local history table) to index into a pattern history table, where the next execution of a branch is finally predicted by the value of a 2-bit saturating counter predictor. See the article by T. Yeh and Y. N. Patt, “Alternative Implementations of Two-Level Adaptive Branch Prediction”, Proc. 19th Symposium on Computer Architecture (May 1992), Gold Coast, Australia, pp. 124–134. Implementation of both of these techniques, however, requires a relatively large area on the processor chip.
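The 2-bit saturating counter scheme referred to above can be sketched in a few lines. This is an illustrative software model, not the hardware from either cited reference; the table size, initial state, and the example branch address are assumptions for the sketch.

```python
# Sketch of a 2-bit saturating counter branch predictor.
# Each table entry holds a state in 0..3:
#   0 = strongly not-taken, 1 = weakly not-taken,
#   2 = weakly taken,       3 = strongly taken.
# The table is indexed by low-order bits of the branch address.

class TwoBitPredictor:
    def __init__(self, table_bits=10):
        self.mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)  # start weakly not-taken

    def predict(self, pc):
        # Predict taken when the counter is in the upper half (2 or 3).
        return self.table[pc & self.mask] >= 2

    def update(self, pc, taken):
        # Saturating increment/decrement toward the actual outcome.
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop-closing branch taken 9 times, then not taken once:
p = TwoBitPredictor()
outcomes = [True] * 9 + [False]
hits = 0
for t in outcomes:
    hits += (p.predict(0x400) == t)
    p.update(0x400, t)
# hits == 8: one miss while warming up, one miss on loop exit.
```

The two-state hysteresis is the point of the design: a single loop-exit misprediction does not flip the counter all the way to not-taken, so the next entry into the loop is still predicted correctly.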
// browser-tests/pages/unmarkedWsApplications.ts
import { Selector } from 'testcafe';

export const unmarkedWsApplications = {
  unmarkedWsApplicationsList: {
    table: Selector('div[class^="table_tableWrapper"]'),
    firstApplicationLink: Selector(
      'div[class^="pageContent"] div[class^="table_rowWrapper"]:first-of-type a'
    ),
  },
};
# Generated by Django 3.1.5 on 2021-01-27 04:29

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ("cms", "0001_initial"),
    ]

    operations = [
        migrations.CreateModel(
            name="SlackWorkspace",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("slack_id", models.CharField(max_length=16)),
                ("icon", models.CharField(max_length=512)),
                ("bot_user_id", models.CharField(max_length=128)),
                ("bot_access_token", models.CharField(max_length=128)),
                (
                    "blog",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="cms.blog"
                    ),
                ),
            ],
            options={
                "abstract": False,
            },
        ),
        migrations.CreateModel(
            name="SlackUser",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("slack_id", models.CharField(max_length=16)),
                ("icon", models.CharField(max_length=512)),
                ("slack_username", models.CharField(max_length=128)),
                ("real_name", models.CharField(max_length=128)),
                ("display_name", models.CharField(max_length=128)),
                ("avatar", models.CharField(max_length=512)),
                (
                    "author",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="cms.author"
                    ),
                ),
                (
                    "workspace",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        to="slack.slackworkspace",
                    ),
                ),
            ],
            options={
                "abstract": False,
            },
        ),
    ]
Towards a typology of third person plural impersonals Abstract Although third person plural impersonal constructions (3pl IMPs) are widely attested crosslinguistically, they have not yet been the main subject of a typological study. This paper aims to establish the basis for such a study by applying a tentative but rather elaborate typology of 3pl IMPs, developed by Cabredo Hofherr (Arbitrary readings of third person pronominals: 81–94, 2003; Arbitrary pro and the theory of pro-drop, Oxford University Press, 2006), to a sample of European languages. The typology recognizes as many as five different types of impersonal uses of the 3pl: a) the universal, b) the corporate, c) the vague, d) the inferential and e) the specific. This typology is tested on the basis of ten European languages (Dutch, English, French, German, Greek, Hungarian, Italian, Polish, Russian and Spanish) using data stemming from a parallel translation corpus of Harry Potter and also acceptability judgments elicited from speakers by means of a questionnaire. The investigation reveals that while all the languages in the sample have universal, corporate and vague 3pl IMPs, the specific type is at best marginal in English, Dutch, French, German and Polish, as is the inferential in English, French and German. These findings are shown to correlate in part with the morphophonological realization of the 3pl IMP (as a free vs. a bound form), which may be interpreted as a reflection of the degree of grammaticalization of 3pl IMPs but also as a result of the nature of the alternative impersonalizing constructions available in the relevant languages. The distribution of the five types of 3pl IMPs together with a sixth type is captured in the form of a preliminary semantic map, the nature of which requires substantiation by wider crosslinguistic data.
The discussed typology of 3pl IMPs is found to be a promising basis for a wider typological investigation, provided that a separate type of 3pl IMP involving the category of speech act verbs is included, and that the use of the 3pl for events which necessarily (not only contingently) involve singular individuals, non-humans, or alternatively the speech act participants is catered for.
1. Field of the Invention The present invention relates to a method for manufacturing a capacitor embedded in a printed circuit board (PCB), and more particularly, to a method for manufacturing a capacitor embedded in a PCB, which can remove a surface defect of a copper clad lamination (CCL) substrate acting as a bottom electrode, thereby improving the yield of the capacitor. 2. Description of the Related Art Discrete chip resistors or discrete chip capacitors have traditionally been mounted on a surface of a PCB. Recently, PCBs with embedded passive elements such as resistors or capacitors have been developed. In this PCB technique, passive elements such as resistors or capacitors are embedded in outer or inner layers of the PCB by using new materials and processes, and the embedded passive elements take the place of existing chip resistors or chip capacitors. For example, when a capacitor is buried in an inner or outer layer of the PCB and integrated as a part of the PCB, regardless of PCB size, the capacitor is referred to as an embedded capacitor and the PCB is referred to as an embedded capacitor PCB. The most important characteristic of the embedded capacitor PCB is that the capacitor need not be mounted on the surface of the PCB because it is formed as a part of the PCB. Three methods for manufacturing an embedded capacitor PCB will be described below. A first method is to manufacture a polymer thick film type capacitor by depositing, thermally hardening and drying a polymer capacitor paste. According to the first method, a polymer capacitor paste is deposited on an inner layer of a PCB and is dried. Then, a copper paste is printed and dried to form an electrode. In this way, an embedded capacitor is manufactured. A second method is to manufacture an embedded discrete type capacitor by coating a ceramic-filled photo-dielectric resin on a PCB.
According to the second method, after a photo-dielectric resin containing ceramic powder is coated on a substrate, a copper foil is laminated on the photo-dielectric resin to form a top electrode and a bottom electrode. Then, a circuit pattern is formed and the photo-dielectric resin is etched to form an embedded discrete type capacitor. A third method is to manufacture a capacitor by inserting a separate dielectric layer having a capacitance characteristic into an inner layer of a PCB such that the dielectric layer can replace a decoupling capacitor which has been mounted on a surface of the PCB. According to the third method, a power distributed decoupling capacitor is manufactured by inserting a dielectric layer with a power electrode and a ground electrode into the inner layer of the PCB. Meanwhile, compared with an external capacitor, it is difficult for the capacitor embedded in the PCB to secure a sufficient capacitance because its size is limited by the volume of the PCB. Therefore, there is a demand for a technique for embedding a high-density capacitor in the PCB by implementing a high capacitance density per unit area. An example of such a high-density capacitor is the external multilayer ceramic capacitor (MLCC), which is not embedded but mounted on the PCB. To this end, thin film technology has been applied to the method for manufacturing an embedded capacitor in order to increase the permittivity of the dielectric layer while decreasing its thickness. However, when the thin film technology is used to form the dielectric layer to a thickness of only several hundred nanometers in order to minimize the size of the capacitor, a formation defect of the dielectric layer may occur depending on the surface state of the bottom electrode disposed under the dielectric layer. This leads to an increase of leakage current and an electrical short between the bottom electrode and the top electrode.
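The trade-off described above (higher permittivity, thinner dielectric) follows from the parallel-plate relation C/A = ε0·εr/d. The sketch below uses illustrative numbers; the permittivities and thicknesses are assumptions for the example, not values from this description.

```python
# Parallel-plate estimate of capacitance density: C/A = eps0 * eps_r / d.
# The material parameters below are illustrative assumptions only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_density(eps_r, d_m):
    """Capacitance per unit area (F/m^2) for relative permittivity
    eps_r and dielectric thickness d_m (meters)."""
    return EPS0 * eps_r / d_m

# A conventional polymer dielectric layer: eps_r ~ 4, d ~ 20 um
low = capacitance_density(4, 20e-6)
# A thin-film high-permittivity layer: eps_r ~ 100, d ~ 200 nm
high = capacitance_density(100, 200e-9)

# Convert F/m^2 to nF/cm^2 (1 F/m^2 = 1e5 nF/cm^2)
low_nf = low * 1e5    # ~0.18 nF/cm^2
high_nf = high * 1e5  # ~440 nF/cm^2
```

Under these assumed numbers the thin high-permittivity film gives roughly 2500 times the capacitance density, which is why the thin-film approach is attractive despite its sensitivity to bottom-electrode surface defects.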
Hereinafter, problems of a conventional capacitor embedded in a PCB will be described in detail with reference to FIGS. 1 and 2. FIG. 1 is a cross-sectional view of a conventional capacitor embedded in a PCB. Referring to FIG. 1, the conventional capacitor 100 includes a CCL substrate 110, a dielectric layer 120, and a top electrode 130. The CCL substrate 110 includes a reinforcement member 111 (e.g., FR-4) and copper foils 112 formed on both surfaces of the reinforcement member 111. The CCL substrate 110 acts as a bottom electrode of the embedded capacitor. A surface of the copper foil 112, that is, a surface of the CCL substrate 110 on which the dielectric layer 120 is formed, has surface defects such as a convex defect and a concave defect, depending on a surface state of the reinforcement member 111. These surface defects increase a leakage current of the embedded capacitor, degrading the characteristic and reliability of an embedded capacitor PCB. In addition, when the dielectric layer 120 is formed on the CCL substrate 110 having the surface defects, especially when it is formed to a thickness of only several hundred nanometers in order to minimize the size of the capacitor, a defect occurs in the dielectric layer 120, as indicated by a reference symbol “F”. The defects will be described in detail with reference to FIG. 2. FIG. 2 is a photograph illustrating the problem of the conventional capacitor embedded in the PCB. Specifically, FIG. 2 illustrates a convex defect of the CCL substrate in which a convex portion of the CCL substrate 110 passes through the dielectric layer 120 and the top electrode 130 and thus is exposed. In addition, an enlarged convex portion is illustrated in FIG. 2. When the dielectric layer 120 and the top electrode 130 are sequentially formed on the CCL substrate 110 having the convex defect, the dielectric layer 120 cannot be formed in the convex defect.
Therefore, the CCL substrate 110 acting as the bottom electrode is shorted to the top electrode 130. That is, the surface defect of the CCL substrate increases the leakage current of the embedded capacitor and shorts the bottom electrode to the top electrode. Consequently, the characteristic and reliability of the capacitor embedded in the PCB are degraded and its manufacturing yield is reduced.
Ultra-magnifying narrow-band imaging for endoscopic diagnosis of gastric intestinal metaplasia: a pilot image analysis study Background and study aims Narrow-band imaging (NBI) with or without magnification has recently been used for diagnosis of gastric intestinal metaplasia (GIM). Endocytoscopy is a newly developed endoscopic technique that enables ultra-high (×500) magnification of the digestive tract mucosa. This study aimed to analyze the ultra-magnifying NBI characteristics of GIM. Patients and methods This was a retrospective observational study conducted in a cancer referral center. Patients who underwent ultra-magnifying NBI of the gastric mucosa using endocytoscopy were eligible. A soft black cap was used for non-contact observation. We compared the characteristic findings of GIM by ultra-magnifying NBI of metaplastic and non-metaplastic mucosae. The reference standard for GIM in this study was conventional magnifying NBI findings of GIM. Results We obtained 28 images of metaplastic mucosa and 32 of non-metaplastic mucosa from 38 patients. Ultra-magnifying NBI revealed a cobblestone-like cellular structure in the marginal crypt epithelium of metaplastic and non-metaplastic mucosa. Diagnostic values (sensitivity, specificity, accuracy and kappa value) for the heterogeneous cellular structure and rough contour of the marginal crypt epithelium were 82% (68%–96%), 94% (85%–100%), 88% (80%–96%), and 0.70, and 86% (73%–99%), 94% (85%–100%), 90% (82%–98%), and 0.71, respectively. Conclusions The characteristic ultrastructural features of GIM were identified by ultra-magnifying NBI, warranting validation of diagnostic value in a prospective study. While observing the gastric mucosa using a cap, we noticed that endocytoscopy showed ultra-magnifying reflected-light images of the epithelium using NBI short-wavelength light. The aim of this preliminary image analysis was to explore, using endocytoscopy, the characteristic ultra-magnifying NBI findings of GIM.
Study design and participants This was a retrospective observational study conducted in a cancer referral center, Osaka International Cancer Institute, Japan. Patients who underwent esophagogastroduodenoscopy using endocytoscopy between January and May 2019 were eligible. We excluded patients with unevaluable images of the mucosa, that is, mucosa covered with mucus or blood, and out-of-focus mucosal images. All patients gave written informed consent for the endoscopic procedure and for the use of endoscopic images for clinical studies, and were provided the opportunity to opt out from this study. All clinical data and endoscopic images were anonymized for analysis. The study protocol was approved by the Institutional Review Board on 3 June 2019 (No. 19044). The endocytoscopy system consisted of an ultra-high magnification zoom videoendoscope (EVIS-H290EC; Olympus Co. Ltd., Tokyo, Japan), light source (CLV-290; Olympus), and image processor (CV-290; Olympus). A soft black cap (MAJ-1989; Olympus) was mounted on the tip of the endoscope. For non-contact observation, the working distance was adjusted by the protruding length of the cap, to make the long dimension of the endoscopic image ~2 mm (▶ Fig. 1). Structural enhancement was set at B8 for NBI. Brightness control was set at average mode. As standard clinical practice, the background gastric mucosa was observed by NBI with or without magnification to assess gastric cancer risk. After conventional magnifying NBI of the gastric mucosa, ultra-magnifying NBI was performed on the same area to obtain corresponding video images (▶ Video 1). The maximum magnification level of the endocytoscope was optimized for contact observation; therefore, the endoscopist had to adjust the magnification by fine movement of a lever to focus the endoscopic images during non-contact ultra-magnifying NBI.
Water immersion using a scope insufflation button or water pump (OFP-2; Olympus) was often used to avoid catching the light at the mucosal surface in endoscopic images and to achieve a natural magnification effect. Air insufflation and the water pump were set at the lowest pressure. All images were stored in the computerized image server (SolemioEndo; Olympus). In some patients, ultra-magnifying NBI was performed on the areas from where biopsy specimens were taken, according to the updated Sydney system, to compare endocytoscopic and histological findings. Endoscopic images and definitions of GIM The endoscopic findings of GIM were regarded as a reference standard for GIM. The metaplastic mucosa was defined as having any indicative endoscopic signs of GIM; that is, light blue crest (LBC), marginal turbid band, or white opaque substance (WOS) in conventional magnifying NBI. Non-metaplastic mucosa was defined as having none of the above-mentioned characteristic signs of GIM. (▶ Fig. 1 caption: Difference between contact and non-contact observation. A silhouette image of backward-scattered light was seen under contact observation, while reflected images of the mucosa were seen under non-contact observation. The working distance was adjusted by the projection length of the cap to make the long dimension of the endoscopic image ~2 mm.) When multiple areas were observed in the same patient, only images of the most representative area were chosen for analysis. In patients with endoscopic findings of GIM, images of one area each of metaplastic and non-metaplastic mucosae were used for analysis. In patients without endoscopic findings of GIM, images of one area of non-metaplastic mucosa were used. Helicobacter pylori infection status Helicobacter pylori infection was diagnosed based on antibody titer and histological examination. When the anti-H. pylori antibody titer was < 3 U/mL and no H. pylori-like organism was found in biopsy specimens, the patient was considered to be uninfected.
When the anti-H. pylori antibody titer was 3 to 9 U/mL and no H. pylori-like organism was found in biopsy specimens, the patient was considered to have had a previous H. pylori infection. When a patient had either an anti-H. pylori antibody titer ≥ 10 U/mL or H. pylori-like organisms in biopsy specimens, the patient was considered to have an active H. pylori infection. Outcome and statistical method Measured outcomes were the sensitivity, specificity, and accuracy of each characteristic ultra-magnifying NBI finding for the metaplastic mucosa. The data were presented with 95 % confidence intervals (CIs). Interobserver variability for evaluation of each ultra-magnifying NBI finding between two endoscopists (H. I. and N. U.) was examined and presented with the kappa value. Study subjects Forty-five patients underwent endoscopic examination using endocytoscopy between March and June 2019. The following patients were excluded: four whose mucosa was covered with mucus that interfered with observation of cellular structure; two with unfocused images; and one whose ultra-magnifying images could not be matched to conventional magnifying images for determination of GIM status, leaving ten patients without endoscopic findings of GIM and twenty-eight patients with endoscopic findings of GIM for enrollment. Non-metaplastic mucosa was not assessed in six patients with endoscopic findings of GIM; therefore, twenty-eight images of metaplastic mucosa and 32 images of non-metaplastic mucosa were finally extracted for analysis from 38 patients (▶ Fig. 2). Demographics of the patients and areas are listed in ▶ Table 1. In comparison with non-metaplastic mucosa (▶ Fig. 3a), we found there were characteristic ultra-magnifying NBI findings in cellular structure, contour of the marginal crypt epithelium, and presence of LBC and WOS in the metaplastic mucosa (▶ Fig. 4a and ▶ Fig. 5a).
Heterogeneous cellular structure: in the non-metaplastic mucosa, the cellular structure was round or oval and appeared similar and homogeneously arranged like fish scales (▶ Fig. 3b). In contrast, in the metaplastic mucosa, the shape, size and arrangement of the cellular structure were heterogeneous, and round cells larger than the others were often distributed sporadically in the marginal crypt epithelium (▶ Fig. 4b). Rough contour of the marginal crypt epithelium: in the non-metaplastic mucosa, the contour of the marginal crypt epithelium was smooth and appeared as a line (▶ Fig. 3b). In the metaplastic mucosa, the contour of the marginal crypt epithelium was rough, and often appeared as multiple lines (▶ Fig. 4b). LBC: this was seen in the metaplastic mucosa but not in the non-metaplastic mucosa. Because roughness of the LBC became apparent in ultra-magnifying NBI, diagnosis of LBC was easier than in conventional magnifying NBI. In conventional magnifying NBI, the LBC was observed only on the edge of the marginal crypt epithelium, whereas, in ultra-magnifying NBI images, it was observed also inside the marginal crypt epithelium (▶ Fig. 4b and ▶ Fig. 6). WOS: this was seen in the metaplastic mucosa but not in the non-metaplastic mucosa. In the ultra-magnifying NBI images, the WOS was seen as whitish matter within the cellular structure (▶ Fig. 5b). Besides these findings, in the metaplastic mucosa, each cellular component was often cloudier than that in the non-metaplastic mucosa (▶ Fig. 4b). Therefore, subepithelial capillaries looked hazy in the metaplastic mucosa compared with the non-metaplastic mucosa. In some areas at the periphery of the metaplastic mucosa, there was a distinctive boundary with the non-metaplastic mucosa (▶ Fig. 6). Prevalence of each finding in metaplastic and non-metaplastic mucosa, its diagnostic values, and interobserver variability are listed in ▶ Table 2.
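As a sketch of how diagnostic values of this kind relate to the underlying 2×2 counts: the counts below are reconstructed to reproduce the reported 82 % sensitivity, 94 % specificity, and 88 % accuracy given the group sizes (28 metaplastic and 32 non-metaplastic images); they are an inference for illustration, not figures tabulated in the study.

```python
from math import sqrt

# Reconstructed 2x2 counts (an assumption consistent with the reported
# 82 % / 94 % / 88 % values, not taken from the paper's tables):
tp, fn = 23, 5   # metaplastic mucosa: finding present / finding absent
tn, fp = 30, 2   # non-metaplastic mucosa: finding absent / finding present

sensitivity = tp / (tp + fn)                  # 23/28 ~ 0.82
specificity = tn / (tn + fp)                  # 30/32 ~ 0.94
accuracy = (tp + tn) / (tp + fn + tn + fp)    # 53/60 ~ 0.88

def wald_ci(p, n):
    """95 % Wald confidence interval for a proportion, clipped to [0, 1]."""
    half = 1.96 * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = wald_ci(sensitivity, tp + fn)  # ~ (0.68, 0.96), as reported
```

With these counts the Wald interval for sensitivity comes out at roughly 68 %–96 %, matching the interval quoted in the abstract, which suggests the paper's CIs were computed with a simple normal approximation.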
Because all diagnostic values of heterogeneous cellular structure and rough contours of the marginal crypt epithelium exceeded 80 %, they were regarded as promising diagnostic criteria for GIM in ultra-magnifying NBI. Histological findings of the metaplastic and non-metaplastic mucosa In two patients without endoscopic findings of GIM and three patients with endoscopic findings of GIM, ultra-magnifying NBI images of the non-metaplastic and metaplastic mucosae were captured from the biopsy sites. The surface and intraepithelial structure of the marginal crypt epithelium were compared between the non-metaplastic and metaplastic mucosa. The edge of the surface epithelium in the non-metaplastic mucosa was smooth (▶ Fig. 7a), whereas that in the metaplastic mucosa was uneven and somewhat jagged (▶ Fig. 7b). This suggests that these histological findings presented as rough contours of the marginal crypt epithelium in the ultra-magnifying NBI images (▶ Fig. 7c and ▶ Fig. 7d). For the intraepithelial structure, epithelial cells and the intracellular distribution of mucin were uniform in the surface epithelium in the non-metaplastic mucosa (▶ Fig. 7a), whereas those in the metaplastic mucosa were nonuniform, and there were sporadic, large goblet cells (▶ Fig. 7b). This suggests that these findings presented as heterogeneous cellular structures with sporadic distribution of large cells in the ultra-magnifying NBI images (▶ Fig. 7c and ▶ Fig. 7d). Discussion In our study, non-contact ultra-magnifying NBI using endocytoscopy showed cellular structure in the gastric epithelium, and characteristic findings of GIM: heterogeneous cellular structure including sporadic large cells, and rough contours of the marginal crypt epithelium with LBCs. To the best of our knowledge, this is the first study to investigate the ultrastructure of the epithelium of the gastric mucosa in vivo. Originally, the endocytoscopy system was designed for contact observation of the mucosa.
Therefore, in conventional endocytoscopy, the objective lens makes contact with the mucosal surface and the superficial mucosal images are observed with backward-scattering light through the mucosa. For instance, absorptive dyes are used to stain nuclei, and sometimes cytoplasm to contrast nuclear images, and the nuclear images are observed as a silhouette. In this method, nuclear findings are evaluated for diagnosis, but other findings of the epithelium are not assessable. Moreover, conventional contact endocytoscopy requires time and effort to remove mucus from the mucosal surface, and to stain nuclei and cytoplasm with application of dyes to the mucosal surface. To the best of our knowledge, this is the first report to indicate the possible use of ultra-magnifying NBI to evaluate ultrastructure of the epithelium without any staining. (▶ Fig. 7 caption: a Enlarged image of the marginal crypt epithelium in the non-metaplastic mucosa, showing homogeneous round cellular structure and smooth contour (white arrows). b Image of the marginal crypt epithelium in the metaplastic mucosa, showing heterogeneous cellular structure including sporadic large cells (yellow triangle) and rough multiple contours (yellow arrows). Images a and b correspond to the white and yellow rectangles in ▶ Fig. 6, respectively. c, d Histological appearance of biopsy specimens from c non-metaplastic mucosa and d metaplastic mucosa: the former shows a round and homogeneous arrangement of epithelial cells containing uniform mucin, with a smooth epithelial surface (blue arrows); the latter shows heterogeneous epithelial cells, including goblet cells (green triangle), with a rough epithelial surface (green arrows).) Some investigators have used contact ultra-magnifying NBI to diagnose diminutive colorectal polyps and invasive colorectal carcinoma.
However, in contact observation, NBI images are dark and coarse because the NBI light is dimmer than white light and the short-wavelength light does not penetrate the mucosa well. Therefore, only vascular findings are evaluated. In our non-contact observation, NBI light illuminated the mucosal surface and precise ultra-magnifying reflected images were captured. We found that the marginal crypt epithelium of the metaplastic mucosa had rough and multiple contours with LBCs. Scanning electron microscopy showed that the mucosal surface of GIM was undulated, and when we observed the structure tangentially, it appeared to have multiple contours. Such three-dimensional aspects of the mucosal surface may not be recognized in histological sections. Ultra-magnifying NBI facilitates understanding of the ultrastructure of the surface epithelium in the gastrointestinal tract. Beyond a certain magnification level of the ultra-magnifying NBI images, we found that the cellular structure inside the marginal crypt epithelium could be seen. The cellular structure of the non-metaplastic mucosa was round and homogeneous. In relation to the size of each component of the cellular structure and the width of the marginal crypt epithelium, we speculated that the size of the cellular components in the ultra-magnifying NBI images corresponded to that of the cell. Moreover, scanning electron microscopy shows that the surface of the non-metaplastic gastric epithelium has a cobblestone-like appearance that consists of apices of surface mucous cells and intercellular clefts. This is similar to the findings of ultra-magnifying NBI. When a semi-transparent substance is illuminated, its components are visualized according to light reflection at the border between substances with different refractive indices. Therefore, we suspected that each component of the cellular structure in the ultra-magnifying NBI images corresponded with mucin in the cell.
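The reflection-at-a-boundary argument above can be made concrete with the normal-incidence Fresnel relation R = ((n1 − n2)/(n1 + n2))². The refractive indices below are rough illustrative assumptions for cytoplasm-like and mucin-rich material, not measured values from this study.

```python
# Normal-incidence Fresnel reflectance at a boundary between media with
# refractive indices n1 and n2. The index values used below are
# illustrative assumptions, not measurements.

def reflectance(n1, n2):
    """Fraction of light reflected at the n1/n2 interface (normal incidence)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# A small index step inside the epithelium, e.g. cytoplasm ~1.35 vs
# mucin-rich granule ~1.41 (assumed values):
r = reflectance(1.35, 1.41)  # ~0.0005, i.e. a fraction of a percent
```

Even such a weak reflection (well under 1 % per interface) can produce visible contrast under narrow-band illumination, which is consistent with the suggestion that intracellular mucin boundaries, rather than strong absorbers, delineate the cellular structure.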
Histological analysis of biopsy specimens showed that the structure of the cells and mucin in the non-metaplastic mucosa was round and homogeneous, whereas that in the metaplastic mucosa was heterogeneous. We did not confirm that the small cellular components in the ultra-magnifying NBI images were identical to the real cells; therefore, the term cellular structure was used in this study. Further analysis is needed of the histological findings that correspond to ultra-magnifying NBI images. In ultra-magnifying methylene blue chromoendoscopy using endocytoscopy, goblet cells are seen as small unstained circular areas in the marginal crypt epithelium, showing sensitivity of 89 % and specificity of 71 % for diagnosis of GIM. In ultra-magnifying NBI, sporadic distribution of large cells is consistent with the small unstained circular areas in ultra-magnifying chromoendoscopy. In the current analysis, sensitivity of this finding was 61 % (95 % CI 41 %–79 %), which was lower than that of ultra-magnifying chromoendoscopy. One reason for the lower sensitivity of ultra-magnifying NBI is the difficulty of recognizing the sporadic large cells compared with recognizing unstained areas among the stained epithelium. Generally, color contrast in chromoendoscopic images is higher than that in NBI images. It is considered that WOS is visualized by strong scattering and reflection of light by lipid droplets that are absorbed into the mucosa of GIM. In histological examination of mucosa with WOS, lipid droplets are present inside the epithelial cells in all cases and, in 61.5 % of cases, lipid droplets are also distributed underneath the epithelial cells. In ultra-magnifying NBI, most WOS was observed inside the cellular structure partitioned by intercellular septa. Accordingly, we speculate that intraepithelial, rather than subepithelial, lipid droplets contributed to visualization of the WOS.
Although ultra-magnifying NBI may improve endoscopic diagnosis of GIM, the clinical importance of this method should be investigated further. One of the objectives of diagnosis of GIM is risk assessment for developing gastric cancer. Endoscopic diagnosis of GIM has advantages over biopsy because it avoids the risk of bleeding caused by multiple forceps biopsies, and it can quantify the extent of GIM in the gastric mucosa. The utility of near-focus NBI or magnifying NBI for risk staging of gastric cancer has recently been reported. Compared to those methods, non-contact ultra-magnifying observation is technically more demanding. It requires delicate maneuvering of the lever to control the magnification and adjust the focus of the endoscopic images. Moreover, because the light distribution of the current endocytoscope is not optimized for non-contact observation, the endoscopist has to adjust the working distance between the lens and the mucosa for appropriate illumination with fine push-pull movements of the scope. Furthermore, endocytoscopy is not readily available worldwide. Accordingly, the clinical impact of ultra-magnifying NBI on risk assessment of gastric cancer may not be as high as that of conventional NBI observation methods. We suspect that this method would be useful to reveal the ultrastructure of the gastrointestinal tract and explain the pathogenesis of disease. Nevertheless, further advancement of technology to capture stable ultra-high-magnification images is expected. We acknowledge two major limitations of this study. First, we did not take biopsy specimens from all patients and evaluate the histological findings. We used conventional magnifying NBI as a reference standard instead of histology because precise correlation between ultra-magnifying images and histological findings of biopsy specimens is difficult, even with targeted biopsy. 
However, in endoscopic observation, we can continuously increase magnification and correlate conventional magnifying with ultra-magnifying NBI images. Conventional magnifying NBI has high diagnostic accuracy for diagnosis of histological GIM (90 % sensitivity and 90 % specificity). Second, only representative good images were chosen for analysis, resulting in non-negligible overestimation of diagnostic accuracy. The diagnostic performance of ultra-magnifying NBI using the characteristic endoscopic findings revealed in this study should be investigated in a future prospective study in relation to histological findings. Conclusion In conclusion, analysis of ultra-magnifying NBI of GIM revealed the characteristic epithelial ultrastructure of GIM: heterogeneous cellular structure and rough contours of the marginal crypt epithelium with LBCs. Further investigation of the actual diagnostic accuracy and clinical relevance of these endoscopic findings for GIM is warranted.
I have a major publisher and a buzzed-about debut. So why did a search for my novel lead to Sweet Valley High? Before reading George Packer’s recent New Yorker article on Amazon and its relationship with the publishing industry, I hadn’t considered what forces might be responsible for making a book more or less visible on Amazon. I’d only wondered why my debut novel was being buried beneath a pile of books that were about as far from my literary ambitions as possible. I first went looking for my book on Amazon months before its August 2014 release date, while I was still working on final edits. This is a self-absorbed thing to do, I know, but judging from all the posts I see on Facebook (Hey, you can pre-order my book on Amazon!), I know I’m not alone — and for good reason perhaps. Writing is a solitary act that demands you persevere through innumerable hardships and rejections; so when at last you see you have your own little corner of cyberspace, all those years of hard work and high seriousness feel a little bit more well-spent, and you’re one step closer to a realized dream. Or at least, that’s the hope. The first time I searched for my novel, nothing came back. A couple weeks later, same thing. But then, after my third or fourth attempt, success. I typed in the title — "SWEETNESS #9" — thinking I’d only have to put in a few letters before the search engine would autocomplete it. Not so. Next, thinking the computer might need a little help, I added my name: STEPHAN EIRIK CLARK. Then I hit ENTER, and though my novel did come up, it was at the end of a very long list of books, all of them related. Maybe you’ve read one or two? Sweet Valley High? Sweet Valley High? Needless to say, I didn’t encourage people to look for the book online. I felt ready to take up the cause of Common Good Books in my hometown of St. Paul and rally to the defense of America’s great independent bookstores. 
They didn’t make space on their front tables for a novel months before its release date; but they also didn’t do the equivalent of having Gogol and DeLillo carpool with Laverne & Shirley. Was it because I was a debut novelist that I was being treated so poorly? Or were things as they were during the Red Scare and the purges of Stalin, when all it took was a single whispered word, maybe just a shard of a single whispered word – “Sweet!” – for your fate to be thrown in with the dead and the dying and the damned? “When a buyer searches for a book on Amazon.com, the results are partly determined by the fees paid by publishers; publishers pay Amazon a percentage of the profits they make through the online store each year; those percentages have gone up; although publishers may be squeezed by Amazon, they remain dependent on the company for as much as 30% of their sales, a number that’s still growing." I don’t know if my publisher, Little, Brown, has given Amazon a white envelope thick with green bills, but in recent weeks my novel has distanced itself from Sweet Valley High (“What do you mean you broke up with Scott!”). Now, whenever you search for the book, you’ll find its company more appropriate, if still not quite literary. Most recently, I found the novel closely linked to "Pure, White, and Deadly: How Sugar Is Killing Us and What We Can Do to Stop It." Not bad for a novel that’s set against the backdrop of the flavorings industry and explores the consequences of a highly questionable artificial sweetener. Even better, when Amazon showed me other items related to my search for “Sweetness 9” (the site doesn’t recognize the # sign), it included Propylene Glycol, the ideal carrier for artificial flavorings. The computer is getting smarter, it seems. I suppose you only need to feed it a little cash.
package com.tassioauad.moviecheck.dagger;

import android.support.v4.app.FragmentActivity;

import com.tassioauad.moviecheck.model.dao.MovieWatchedDao;
import com.tassioauad.moviecheck.model.dao.UserDao;
import com.tassioauad.moviecheck.presenter.ListMovieWatchedPresenter;
import com.tassioauad.moviecheck.view.ListMovieWatchedView;
import com.tassioauad.moviecheck.view.fragment.ListMovieWatchedFragment;

import dagger.Module;
import dagger.Provides;

@Module(library = true, includes = {AppModule.class, ApiModule.class, DaoModule.class},
        injects = ListMovieWatchedFragment.class)
public class ListMovieWatchedViewModule {

    private ListMovieWatchedView view;
    private FragmentActivity activity;

    public ListMovieWatchedViewModule(ListMovieWatchedView view, FragmentActivity activity) {
        this.view = view;
        this.activity = activity;
    }

    @Provides
    public ListMovieWatchedPresenter provideListMovieWatchedPresenter(MovieWatchedDao movieWatchedDao, UserDao userDao) {
        movieWatchedDao.setActivity(activity);
        return new ListMovieWatchedPresenter(view, movieWatchedDao, userDao);
    }
}
/**
 * Searches for custom setup files with extension '.config' at the location of the
 * optimizer '.class' files, then adds them as values to the foundClasses {@link Map}.
 * These custom config files are not really useful, so this will be removed.
 * @param foundClasses input {@link Map}, with nulls as values and {@link AbstractAlgorithm} {@link Class} objects as keys
 * @param optimizerClassLocation the location of the class files
 * @param <T> common supertype of the classes used as keys
 */
@Deprecated
public static <T> void findConfigFiles(Map<Class<? extends T>, String> foundClasses, String optimizerClassLocation) {
    Path p = Paths.get(optimizerClassLocation);
    if (!Files.exists(p)) {
        return;
    }
    try (final Stream<Path> pathsStream = Files.walk(p)) {
        pathsStream.forEach(filePath -> {
            if (Files.isRegularFile(filePath) && filePath.toString().contains(".config")) {
                Optional<Class<? extends T>> cl = foundClasses.keySet().stream()
                        .filter(c -> filePath.toString().equals(c.getName()))
                        .findFirst();
                if (cl.isPresent()) {
                    foundClasses.put(cl.get(), "");
                }
            }
        });
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The surgeon lost his hands. He couldn’t operate anymore. He lost his practice. Multiple sclerosis knocked Dr. Jim Jackson down. It took everything. It made him shaky and uncertain, robbed him of his strength. Broken and depressed, Jackson felt relegated to the couch to deteriorate for the rest of his days. And then one night in 2014, he went with his athletic daughter to the gym. He stumbled inside to watch her work out. That’s when he saw Johnny Mercurio, a local MMA fighter of some renown. Mercurio’s career hadn’t gone the way he had hoped. He was making more money leading workouts for soccer moms than he was in the octagon. Jackson watched Mercurio admonishing novice boxers to jab-jab-cross. “I could never do that,” Jackson remembers saying aloud. Four years later, the lives of both men have diverged in unexpected directions – thanks to each other. There was a time, not too long ago, that Jim Jackson could fix things. He could mend anterior cruciate ligaments and disjointed hips. For 20 years, he worked as an orthopedic surgeon at the Los Alamitos Medical Center seeing as many as 50 patients per day. And then, in 2008, someone in his office noticed he was walking with a limp. It was so minor, he hadn’t noticed. “You don’t recognize things going on with yourself,” said Jackson, who was a competitive cyclist logging 120 miles per week. Within days, Jackson found himself leaning against the operating table to keep his balance. He would hold his elbows close to his rib cage to calm the tremors in his hands. Jackson had surgery on his neck, but that didn’t fix him. In 2009, he was so unstable, he fell into his backyard pool. His daughter had to save him. He couldn’t be a doctor anymore. What he didn’t know at the time was that he had multiple sclerosis. Johnny Mercurio has been fighting since he was tiny. He entered his first martial arts tournament at age 7. He turned pro in mixed martial arts at age 20. 
Mercurio, now 32 and a resident of Huntington Beach, was a football player, too, and a decathlete. But his chance to be a Division I college athlete fell apart because he didn’t have the grades. “I was such a knucklehead kid,” he said. He dropped out of school and dropped into the world of fighting. He boxed, he kicked. He fought in squares, circles and octagons. His YouTube highlight reel is as exciting as it is bloody. In his first fight, he got his opponent in a choke hold and won in less than a minute. After that, he was hooked. He was 16-1 as an amateur. He says now that he didn’t treat his professional fights professionally. He didn’t study his opponents as he should have or train properly. He amassed a 9-8 record. He won his last fight in 2017, but it was a fight earlier in the year that made him think about the rest of his life. So he started training other people to make some money. He charges about $50 per hour. That’s when Jim Jackson helped him stumble into an idea. Jackson’s first punch as a student of Johnny Mercurio was a mess. He missed the bag and fell down. Slowly but surely, Mercurio taught Jackson about balance and boxing. “He went from pushing his punches to snapping his punches,” Mercurio said. Jackson, who, at one time, could barely walk, found himself boxing. “He takes you through sequences and exercises that stimulate both sides of the brain,” Jackson said, his voice cracking with emotion. The more he pushed, the more his coordination came back. And as Mercurio watched training help Jim Jackson regain his confidence, he thought about how many Jim Jacksons there are in the world who need help. There was one more thing. The kid who had dropped out of college was becoming friends with a surgeon. Johnny Mercurio suddenly felt like he wasn’t just a dumb fighter. Mercurio said he still has trouble reading, and having a friend like Jackson is very important to him. 
In 2018, Mercurio flew to Indianapolis to take a week-long course to become certified in “Rock Steady Boxing,” training people with Parkinson’s disease and other neurological impairments. He now trains 12 fighters who didn’t think they could fight. They don’t battle other fighters; they battle to keep themselves moving. Mercurio teaches six classes per week. None of it would have happened if Jim Jackson hadn’t stumbled into the gym. “He is patient zero,” Mercurio said. Mercurio held a “Moving Day” fundraiser for the Parkinson’s Association of Orange County. He was able to raise $1,000. He will work on fundraising for the PAOC in 2019. Mercurio hopes to fight again in 2019. And Jackson hopes to do something he never thought he would do. He’s going to a conference for orthopedic doctors in Las Vegas to try to restart his medical career. He knows he can’t be a surgeon anymore.
//
//  RNViewMaker.h
//
//  Created by MuMu on 2016/12/5.
//  Copyright © 2016年 MuMu All rights reserved.
//

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@protocol RNViewMaker <NSObject>

/// Must be an instance of UIView or a UIView subclass.
/// Should be overridden by subclasses.
@property (nonatomic, strong, readonly) id make;

@end

@interface RNViewMaker : NSObject <RNViewMaker>

- (RNViewMaker *(^)(BOOL))translatesAutoresizingMaskIntoConstraints;
- (RNViewMaker *(^)(BOOL))userInteractionEnabled;
- (RNViewMaker *(^)(NSInteger))tag;
- (RNViewMaker *(^)(CGRect))frame;
- (RNViewMaker *(^)(CGRect))bounds;
- (RNViewMaker *(^)(CGPoint))center;
- (RNViewMaker *(^)(CGAffineTransform))transform;
- (RNViewMaker *(^)(CGFloat))contentScaleFactor;
- (RNViewMaker *(^)(BOOL))multipleTouchEnabled;
- (RNViewMaker *(^)(BOOL))exclusiveTouch;
- (RNViewMaker *(^)(BOOL))autoresizesSubviews;
- (RNViewMaker *(^)(UIViewAutoresizing))autoresizingMask;
- (RNViewMaker *(^)(UIEdgeInsets))layoutMargins;
- (RNViewMaker *(^)(BOOL))preservesSuperviewLayoutMargins;
- (RNViewMaker *(^)(BOOL))clipsToBounds;
- (RNViewMaker *(^)(CGFloat))cornerRadius;
- (RNViewMaker *(^)(CGFloat,UIColor *color))border;
- (RNViewMaker *(^)(CGFloat,unsigned long))borderWithHex;
- (RNViewMaker *(^)(UIColor *))backgroundColor;
- (RNViewMaker *(^)(unsigned long))backgroundColorFromHex;
- (RNViewMaker *(^)(CGFloat))alpha;
- (RNViewMaker *(^)(BOOL))opaque;
- (RNViewMaker *(^)(BOOL))clearsContextBeforeDrawing;
- (RNViewMaker *(^)(BOOL))hidden;
- (RNViewMaker *(^)(UIViewContentMode))contentMode;
- (RNViewMaker *(^)(UIColor *))tintColor;
- (RNViewMaker *(^)(unsigned long))tintColorFromHex;
- (RNViewMaker *(^)(UIViewTintAdjustmentMode))tintAdjustmentMode;

@end
//! A helper module to hold utilities that are used across tests. This file
//! DOES NOT contain any of its own tests.

pub mod factories;

use crate::utils::factories::*;
use diesel::{
    connection::TransactionManager, Connection, PgConnection, RunQueryDsl,
};
use diesel_factories::Factory;
use gdlk_api::{
    models,
    schema::{user_providers, users},
    server::{create_gql_schema, GqlSchema},
    util::{self, PooledConnection},
    views::{RequestContext, UserContext},
};
use juniper::{ExecutionError, InputValue, Variables};
use serde::Serialize;
use std::{collections::HashMap, sync::Arc};

/// Convert a serializable value into a JSON value.
#[allow(dead_code)] // Not all test crates use this
pub fn to_json<T: Serialize>(input: T) -> serde_json::Value {
    let serialized: String = serde_json::to_string(&input).unwrap();
    serde_json::from_str(&serialized).unwrap()
}

/// Helper type for setting up and executing test GraphQL queries
#[allow(dead_code)] // Not all test crates use this
pub struct QueryRunner {
    schema: GqlSchema,
    context: RequestContext,
}

impl QueryRunner {
    /// Construct a new QueryRunner, which is used to execute GraphQL queries
    /// from a test.
    #[allow(dead_code)] // Not all test crates use this
    pub fn new() -> Self {
        let db_conn_pool = util::init_test_db_conn_pool().unwrap();
        Self {
            schema: create_gql_schema(),
            context: RequestContext::load_context(Arc::new(db_conn_pool), None)
                .unwrap(),
        }
    }

    /// Get a new DB connection. While in testing the pool only holds a single
    /// connection, so the returned connection **needs to be dropped** before
    /// a new one is requested from the pool! To try to enforce that, this
    /// func is intentionally not public.
    fn db_conn(&self) -> PooledConnection {
        self.context.db_conn().unwrap()
    }

    /// Execute a block of code with a DB connection. Tests run with a single
    /// connection in the pool (to enforce that everything happens inside a
    /// transaction), so this restricts access to that connection. This prevents
    /// us from hanging onto a connection reference longer than it's needed,
    /// which would block subsequent code and cause a test failure.
    #[allow(dead_code)] // Not all test crates use this
    pub fn run_with_conn<T>(&self, f: impl FnOnce(&PgConnection) -> T) -> T {
        f(&self.db_conn())
    }

    /// Normally all test connections are initialized within a DB transaction.
    /// This prevents any changes made by tests from affecting the DB outside
    /// that test. In some cases though (e.g. if you have transaction logic
    /// in the code being tested), we don't want the test transaction. In those
    /// cases, you can use this method to disable the transaction. When you do,
    /// [QueryRunner] should clean up any data inserted. (WARNING: right now
    /// it doesn't clean all tables - scroll down for more info)
    #[allow(dead_code)] // Not all test crates use this
    pub fn disable_transaction(&self) {
        let conn = self.db_conn();
        conn.transaction_manager()
            .commit_transaction(&conn)
            .unwrap();
    }

    /// Set the user_provider in the user context. This will update the user
    /// context with that provider ID, so the user field will be re-loaded too.
    #[allow(dead_code)] // Not all test crates use this
    pub fn set_user_provider(&mut self, user_provider: models::UserProvider) {
        let conn = self.db_conn();
        self.context.user_context =
            UserContext::load_context(&conn, user_provider.id).unwrap();
    }

    /// Create a new user with the given roles, then update the user context
    /// to be authenticated as that new user. Returns the created user.
    #[allow(dead_code)] // Not all test crates use this
    pub fn log_in(&mut self, roles: &[models::RoleType]) -> models::User {
        let conn = self.db_conn();
        let user = UserFactory::default().username("user1").insert(&conn);
        // Create a bogus user_provider for this user. We're not trying to test
        // the OpenID logic here, so this is fine.
        let user_provider = UserProviderFactory::default()
            .sub(&user.id.to_string()) // guarantees uniqueness
            .user(Some(&user))
            .insert(&conn);

        // Insert one row into user_roles for each requested row
        user.add_roles_x(&conn, roles).unwrap();

        self.context.user_context =
            UserContext::load_context(&conn, user_provider.id).unwrap();
        user
    }

    /// Execute a GraphQL query
    #[allow(dead_code)] // Not all test crates use this
    pub async fn query<'a>(
        &'a self,
        query: &'a str,
        vars: HashMap<&str, InputValue>,
    ) -> (serde_json::Value, Vec<serde_json::Value>) {
        // Map &strs to Strings
        let converted_vars = vars
            .into_iter()
            .map(|(k, v)| (k.to_string(), v))
            .collect::<Variables>();

        let (data, errors): (juniper::Value<_>, Vec<ExecutionError<_>>) =
            juniper::execute(
                query,
                None,
                &self.schema,
                &converted_vars,
                &self.context,
            )
            .await
            .unwrap();

        // Map the output data to JSON, for easier comparison
        (to_json(data), errors.into_iter().map(to_json).collect())
    }
}

impl Drop for QueryRunner {
    fn drop(&mut self) {
        // If the test wasn't inside a transaction, then whatever DB changes it
        // made will still be around - we want to clean those up. Ideally we
        // truncate all tables here, but that sounds like a lot of work that I
        // don't wanna do so just sticking with users for now.
        let conn = self.db_conn();
        if (conn.transaction_manager() as &dyn TransactionManager<PgConnection>)
            .get_transaction_depth()
            == 0
        {
            // TODO clean all tables here
            diesel::delete(user_providers::table)
                .execute(&conn)
                .unwrap();
            diesel::delete(users::table).execute(&conn).unwrap();
        }
    }
}
#pragma once

#include <cstdint>
#include <utility>

namespace zz {

// ZigZag-encode a signed 64-bit integer into an unsigned one:
// 0, -1, 1, -2, 2, ... map to 0, 1, 2, 3, 4, ...
inline uint64_t encode(int64_t i) { return (i >> 63) ^ (i << 1); }

// Invert the mapping: the low bit selects the sign, the remaining
// bits hold the magnitude.
inline int64_t decode(uint64_t i) { return (i >> 1) ^ (-(i & 1)); }

}  // namespace zz
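The zigzag transform in the C++ header above can be illustrated with a short Python sketch of the same 64-bit mapping. The function names and the explicit mask are mine: Python integers are arbitrary precision, so the mask emulates the fixed-width uint64_t result of the C++ version.

```python
MASK64 = (1 << 64) - 1

def zigzag_encode(i: int) -> int:
    # Interleave signed values onto unsigned ones: 0, -1, 1, -2 -> 0, 1, 2, 3.
    # i >> 63 is an arithmetic shift (0 for non-negative, -1 for negative),
    # matching the C++ behavior on int64_t.
    return ((i >> 63) ^ (i << 1)) & MASK64

def zigzag_decode(u: int) -> int:
    # Undo the interleaving: low bit selects sign, upper bits are magnitude.
    return (u >> 1) ^ -(u & 1)

# Round-trip across the full int64 range, including both extremes.
for n in (0, -1, 1, -2, 2, -(2 ** 63), 2 ** 63 - 1):
    assert zigzag_decode(zigzag_encode(n)) == n
```

The practical point of the mapping is that small-magnitude signed values (positive or negative) become small unsigned values, which varint-style encodings can store in fewer bytes.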
package nl.javalon.sketchlab.security.authentication;

import lombok.Getter;
import nl.javalon.sketchlab.entity.tables.pojos.User;
import nl.javalon.sketchlab.security.provider.AnonymousAuthenticationProvider;
import nl.javalon.sketchlab.security.provider.AuthenticationProvider;

import java.util.UUID;

/**
 * Singleton representing the anonymous user.
 * @author <NAME>
 */
@Getter
public class AnonymousUserAuthentication extends UserAuthentication {

    public static final AnonymousUserAuthentication INSTANCE = new AnonymousUserAuthentication();

    /**
     * Constructor is private. Use the singleton instance.
     */
    private AnonymousUserAuthentication() {
        super(new User(UUID.fromString("00000000-0000-0000-0000-000000000000"),
                "Anonymous", null, null, "ANONYMOUS", null));
    }

    @Override
    public Class<? extends AuthenticationProvider> getProviderClass() {
        return AnonymousAuthenticationProvider.class;
    }
}
Generation of Electronic Product Documentation The product knowledge manager (PKM) is a multiphase knowledge base system developed to aid in the life-cycle management of electronic products at Boeing Aerospace and Electronics. Numerous pieces of documentation (including source control drawings, fabrication drawings, acceptance test procedures, qualification test procedures) are required for nearly every electronic product developed by Boeing. This documentation is highly formatted and must comply with strict Boeing drafting standards as well as customer and program requirements. Phase 1 of PKM has been in production use since September 1987. It provides an intelligent user interface and a documentation facility to generate and manage drawings, documents, and their templates. The system is written in KEE and Lisp. Approximately 60 KEE knowledge bases provide information on products, programs, authorization, drafting standards, and documentation structure. Over 600 Lisp functions operate on the knowledge bases to perform user-requested operations.
import { WidgetProperties } from "widgets/index";

const properties: WidgetProperties = {
  widgetType: "text",
  category: "general",
  configurable: false,
  hasSaga: false,
  initialHeight: 4,
  initialWidth: 4,
  initialOptions: {},
  initialMeta: {},
};

export default properties;
How Do I Get a Surface Scratch Off of a Wood Floor? Erase shallow scratches with sandpaper or steel wool. Most modern wood floors have a polyurethane finish, and whether the finish was applied in a factory or on-site, it's not indestructible. It can be accidentally scratched when you move furniture or even if you have pets with sharp claws. More often than not, scratches stay on the surface and are easy to repair. You can often make surface scratches disappear by applying paste wax and buffing the area, but that's a temporary solution at best, because wax eventually wears off or discolors. To make a more long-lasting repair, clean the part of the floor around the scratch with a soft cloth and a 1-to-1 solution of vinegar and water. After drying it with a second cloth, rub out the scratch with 220-grit sandpaper or 000 steel wool, going with the grain of the wood. Wipe on a coat of the same finish that's on the floor -- usually polyurethane -- and you're done. When a scratch has penetrated to the wood, the resulting discoloration makes it more noticeable, and you can usually take care of that with wood stain. Wipe on the stain after you clean the floor, and then let it dry before rubbing down the surrounding finish with sandpaper or steel wool. Rubbing removes all the stain except that which has penetrated the wood. It also scuffs the finish so the new touch-up finish you apply with a cloth will adhere better. Deziel, Chris. "How Do I Get a Surface Scratch Off of a Wood Floor?" Home Guides | SF Gate, http://homeguides.sfgate.com/surface-scratch-off-wood-floor-105347.html. 11 January 2015.
import java.util.Scanner;

public class p14_OnTimeForTheExam {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int hourOfExam = Integer.parseInt(scan.nextLine());
        int minuteOfExam = Integer.parseInt(scan.nextLine());
        int hourOfArrival = Integer.parseInt(scan.nextLine());
        int minuteOfArrival = Integer.parseInt(scan.nextLine());

        int examTime = hourOfExam * 60 + minuteOfExam;
        int comeTime = hourOfArrival * 60 + minuteOfArrival;
        int difference = Math.abs(examTime - comeTime);

        if (examTime < comeTime) {
            System.out.println("Late");
            if (difference < 60) {
                System.out.printf("%d minutes after the start", difference);
            } else {
                int hours = difference / 60;
                int min = difference % 60;
                System.out.printf("%d:%02d hours after the start", hours, min);
            }
        } else if (difference <= 30) {
            System.out.println("On time");
            if (difference != 0) {
                System.out.printf("%d minutes before the start", difference);
            }
        } else {
            System.out.println("Early");
            if (difference < 60) {
                System.out.printf("%d minutes before the start", difference);
            } else {
                int hour = difference / 60;
                int min = difference % 60;
                System.out.printf("%d:%02d hours before the start", hour, min);
            }
        }
    }
}
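The classification logic in the Java program above (Late / On time / Early, with a 30-minute "on time" window before the start) can be sanity-checked with a small Python sketch of the same arithmetic. The function name is illustrative and not part of the original program.

```python
def exam_status(exam_h, exam_m, arrive_h, arrive_m):
    # Convert both timestamps to minutes since midnight and compare.
    exam = exam_h * 60 + exam_m
    arrive = arrive_h * 60 + arrive_m
    if arrive > exam:
        return "Late"
    if exam - arrive <= 30:
        return "On time"
    return "Early"

# Boundary check: arriving exactly 30 minutes early still counts as on time,
# while 31 minutes early is classified as early.
assert exam_status(9, 0, 8, 30) == "On time"
assert exam_status(9, 0, 8, 29) == "Early"
```

This separates the decision (which branch) from the presentation (minutes vs. h:mm formatting), which is where most off-by-one mistakes in such exercises hide.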
An intercomparison of two acoustic doppler current profilers An AMETEK Straza DCP4400 Doppler current profiler and an RD Instruments RD-SC1200 Doppler current profiler were operated simultaneously for a period of 5 days in the Port of Miami in bottom-mounted upward looking configurations. An EG&G-VMCM mooring was deployed between the two Doppler systems for the same period, and drifter experiments were also conducted during the week for surface measurement intercomparison. Experimental acoustic beam side lobe deflectors were used for a period of time on the AMETEK system to explore their effects on surface layer measurements. The system operating characteristics are discussed, and the data retrieved from the systems are presented. Analysis and interpretation of the intercomparison of all measurements, a discussion of surface measurement capabilities and deflector results, as well as system performance analysis are also presented.
package org.bouncycastle.asn1.cmc;

import org.bouncycastle.asn1.ASN1Choice;
import org.bouncycastle.asn1.ASN1Encodable;
import org.bouncycastle.asn1.ASN1EncodableVector;
import org.bouncycastle.asn1.ASN1Integer;
import org.bouncycastle.asn1.ASN1Object;
import org.bouncycastle.asn1.ASN1Primitive;
import org.bouncycastle.asn1.ASN1Sequence;
import org.bouncycastle.asn1.DERSequence;
import org.bouncycastle.asn1.DERUTF8String;

/**
 * <pre>
 * -- Used to return status state in a response
 *
 * id-cmc-statusInfo OBJECT IDENTIFIER ::= {id-cmc 1}
 *
 * CMCStatusInfo ::= SEQUENCE {
 *     cMCStatus       CMCStatus,
 *     bodyList        SEQUENCE SIZE (1..MAX) OF BodyPartID,
 *     statusString    UTF8String OPTIONAL,
 *     otherInfo       CHOICE {
 *         failInfo        CMCFailInfo,
 *         pendInfo        PendInfo } OPTIONAL
 * }
 * </pre>
 */
public class CMCStatusInfo
    extends ASN1Object
{
    private final CMCStatus cMCStatus;
    private final ASN1Sequence bodyList;
    private final DERUTF8String statusString;
    private final OtherInfo otherInfo;

    CMCStatusInfo(CMCStatus cMCStatus, ASN1Sequence bodyList, DERUTF8String statusString, OtherInfo otherInfo)
    {
        this.cMCStatus = cMCStatus;
        this.bodyList = bodyList;
        this.statusString = statusString;
        this.otherInfo = otherInfo;
    }

    private CMCStatusInfo(ASN1Sequence seq)
    {
        if (seq.size() < 2 || seq.size() > 4)
        {
            throw new IllegalArgumentException("incorrect sequence size");
        }
        this.cMCStatus = CMCStatus.getInstance(seq.getObjectAt(0));
        this.bodyList = ASN1Sequence.getInstance(seq.getObjectAt(1));

        if (seq.size() > 3)
        {
            this.statusString = DERUTF8String.getInstance(seq.getObjectAt(2));
            this.otherInfo = OtherInfo.getInstance(seq.getObjectAt(3));
        }
        else if (seq.size() > 2)
        {
            if (seq.getObjectAt(2) instanceof DERUTF8String)
            {
                this.statusString = DERUTF8String.getInstance(seq.getObjectAt(2));
                this.otherInfo = null;
            }
            else
            {
                this.statusString = null;
                this.otherInfo = OtherInfo.getInstance(seq.getObjectAt(2));
            }
        }
        else
        {
            this.statusString = null;
            this.otherInfo = null;
        }
    }

    public static CMCStatusInfo getInstance(Object o)
    {
        if (o instanceof CMCStatusInfo)
        {
            return (CMCStatusInfo)o;
        }

        if (o != null)
        {
            return new CMCStatusInfo(ASN1Sequence.getInstance(o));
        }

        return null;
    }

    public ASN1Primitive toASN1Primitive()
    {
        ASN1EncodableVector v = new ASN1EncodableVector(4);
        v.add(cMCStatus);
        v.add(bodyList);
        if (statusString != null)
        {
            v.add(statusString);
        }
        if (otherInfo != null)
        {
            v.add(otherInfo);
        }
        return new DERSequence(v);
    }

    public CMCStatus getCMCStatus()
    {
        return cMCStatus;
    }

    public BodyPartID[] getBodyList()
    {
        return Utils.toBodyPartIDArray(bodyList);
    }

    public DERUTF8String getStatusString()
    {
        return statusString;
    }

    public boolean hasOtherInfo()
    {
        return otherInfo != null;
    }

    public OtherInfo getOtherInfo()
    {
        return otherInfo;
    }

    /**
     * Other info implements the choice component of CMCStatusInfo.
     */
    public static class OtherInfo
        extends ASN1Object
        implements ASN1Choice
    {
        private final CMCFailInfo failInfo;
        private final PendInfo pendInfo;

        private static OtherInfo getInstance(Object obj)
        {
            if (obj instanceof OtherInfo)
            {
                return (OtherInfo)obj;
            }

            if (obj instanceof ASN1Encodable)
            {
                ASN1Encodable asn1Value = ((ASN1Encodable)obj).toASN1Primitive();
                if (asn1Value instanceof ASN1Integer) // CMCFailInfo is an ASN.1 integer.
                {
                    return new OtherInfo(CMCFailInfo.getInstance(asn1Value));
                }
                else if (asn1Value instanceof ASN1Sequence) // PendInfo is a sequence.
                {
                    return new OtherInfo(PendInfo.getInstance(asn1Value));
                }
            }
            throw new IllegalArgumentException("unknown object in getInstance(): " + obj.getClass().getName());
        }

        OtherInfo(CMCFailInfo failInfo)
        {
            this(failInfo, null);
        }

        OtherInfo(PendInfo pendInfo)
        {
            this(null, pendInfo);
        }

        private OtherInfo(CMCFailInfo failInfo, PendInfo pendInfo)
        {
            this.failInfo = failInfo;
            this.pendInfo = pendInfo;
        }

        public boolean isFailInfo()
        {
            return failInfo != null;
        }

        public ASN1Primitive toASN1Primitive()
        {
            if (pendInfo != null)
            {
                return pendInfo.toASN1Primitive();
            }
            return failInfo.toASN1Primitive();
        }
    }
}
// RUN: %clang_cc1 -triple x86_64-apple-darwin -emit-llvm -O0 %s -o - 2>&1 | FileCheck %s

typedef unsigned long size_t;

struct Foo {
  int t[10];
};

#define PS(N) __attribute__((pass_object_size(N)))
#define PDS(N) __attribute__((pass_dynamic_object_size(N)))

int gi = 0;

// CHECK-LABEL: define i32 @ObjectSize0(i8* %{{.*}}, i64 %0)
int ObjectSize0(void *const p PS(0)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 0);
}

// CHECK-LABEL: define i32 @DynamicObjectSize0(i8* %{{.*}}, i64 %0)
int DynamicObjectSize0(void *const p PDS(0)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_dynamic_object_size(p, 0);
}

// CHECK-LABEL: define i32 @ObjectSize1(i8* %{{.*}}, i64 %0)
int ObjectSize1(void *const p PS(1)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 1);
}

// CHECK-LABEL: define i32 @DynamicObjectSize1(i8* %{{.*}}, i64 %0)
int DynamicObjectSize1(void *const p PDS(1)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_dynamic_object_size(p, 1);
}

// CHECK-LABEL: define i32 @ObjectSize2(i8* %{{.*}}, i64 %0)
int ObjectSize2(void *const p PS(2)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 2);
}

// CHECK-LABEL: define i32 @DynamicObjectSize2(i8* %{{.*}}, i64 %0)
int DynamicObjectSize2(void *const p PDS(2)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 2);
}

// CHECK-LABEL: define i32 @ObjectSize3(i8* %{{.*}}, i64 %0)
int ObjectSize3(void *const p PS(3)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 3);
}

// CHECK-LABEL: define i32 @DynamicObjectSize3(i8* %{{.*}}, i64 %0)
int DynamicObjectSize3(void *const p PDS(3)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 3);
}

void *malloc(unsigned long) __attribute__((alloc_size(1)));

// CHECK-LABEL: define void @test1
void test1(unsigned long sz) {
  struct Foo t[10];

  // CHECK: call i32 @ObjectSize0(i8* %{{.*}}, i64 360)
  gi = ObjectSize0(&t[1]);
  // CHECK: call i32 @ObjectSize1(i8* %{{.*}}, i64 360)
  gi = ObjectSize1(&t[1]);
  // CHECK: call i32 @ObjectSize2(i8* %{{.*}}, i64 360)
  gi = ObjectSize2(&t[1]);
  // CHECK: call i32 @ObjectSize3(i8* %{{.*}}, i64 360)
  gi = ObjectSize3(&t[1]);

  // CHECK: call i32 @ObjectSize0(i8* %{{.*}}, i64 356)
  gi = ObjectSize0(&t[1].t[1]);
  // CHECK: call i32 @ObjectSize1(i8* %{{.*}}, i64 36)
  gi = ObjectSize1(&t[1].t[1]);
  // CHECK: call i32 @ObjectSize2(i8* %{{.*}}, i64 356)
  gi = ObjectSize2(&t[1].t[1]);
  // CHECK: call i32 @ObjectSize3(i8* %{{.*}}, i64 36)
  gi = ObjectSize3(&t[1].t[1]);

  char *ptr = (char *)malloc(sz);
  // CHECK: [[REG:%.*]] = call i64 @llvm.objectsize.i64.p0i8({{.*}}, i1 false, i1 true, i1 true)
  // CHECK: call i32 @DynamicObjectSize0(i8* %{{.*}}, i64 [[REG]])
  gi = DynamicObjectSize0(ptr);

  // CHECK: [[WITH_OFFSET:%.*]] = getelementptr
  // CHECK: [[REG:%.*]] = call i64 @llvm.objectsize.i64.p0i8(i8* [[WITH_OFFSET]], i1 false, i1 true, i1 true)
  // CHECK: call i32 @DynamicObjectSize0(i8* {{.*}}, i64 [[REG]])
  gi = DynamicObjectSize0(ptr+10);

  // CHECK: [[REG:%.*]] = call i64 @llvm.objectsize.i64.p0i8({{.*}}, i1 true, i1 true, i1 true)
  // CHECK: call i32 @DynamicObjectSize2(i8* {{.*}}, i64 [[REG]])
  gi = DynamicObjectSize2(ptr);
}

// CHECK-LABEL: define void @test2
void test2(struct Foo *t) {
  // CHECK: [[VAR:%[0-9]+]] = call i64 @llvm.objectsize
  // CHECK: call i32 @ObjectSize1(i8* %{{.*}}, i64 [[VAR]])
  gi = ObjectSize1(&t->t[1]);
  // CHECK: call i32 @ObjectSize3(i8* %{{.*}}, i64 36)
  gi = ObjectSize3(&t->t[1]);
}

// CHECK-LABEL: define i32 @_Z27NoViableOverloadObjectSize0Pv
int NoViableOverloadObjectSize0(void *const p) __attribute__((overloadable)) {
  // CHECK: @llvm.objectsize
  return __builtin_object_size(p, 0);
}

// CHECK-LABEL: define i32 @_Z34NoViableOverloadDynamicObjectSize0Pv
int NoViableOverloadDynamicObjectSize0(void *const p) __attribute__((overloadable)) {
  // CHECK: @llvm.objectsize
  return __builtin_object_size(p, 0);
}

// CHECK-LABEL: define i32 @_Z27NoViableOverloadObjectSize1Pv
int NoViableOverloadObjectSize1(void *const p) __attribute__((overloadable)) {
  // CHECK: @llvm.objectsize
  return __builtin_object_size(p, 1);
}

// CHECK-LABEL: define i32 @_Z27NoViableOverloadObjectSize2Pv
int NoViableOverloadObjectSize2(void *const p) __attribute__((overloadable)) {
  // CHECK: @llvm.objectsize
  return __builtin_object_size(p, 2);
}

// CHECK-LABEL: define i32 @_Z27NoViableOverloadObjectSize3Pv
int NoViableOverloadObjectSize3(void *const p) __attribute__((overloadable)) {
  // CHECK-NOT: @llvm.objectsize
  return __builtin_object_size(p, 3);
}

// CHECK-LABEL: define i32 @_Z27NoViableOverloadObjectSize0Pv
// CHECK-NOT: @llvm.objectsize
int NoViableOverloadObjectSize0(void *const p PS(0)) __attribute__((overloadable)) {
  return __builtin_object_size(p, 0);
}

int NoViableOverloadDynamicObjectSize0(void *const p PDS(0)) __attribute__((overloadable)) {
  return __builtin_dynamic_object_size(p, 0);
}

int NoViableOverloadObjectSize1(void *const p PS(1)) __attribute__((overloadable)) {
  return __builtin_object_size(p, 1);
}

int NoViableOverloadObjectSize2(void *const p PS(2)) __attribute__((overloadable)) {
  return __builtin_object_size(p, 2);
}

int NoViableOverloadObjectSize3(void *const p PS(3)) __attribute__((overloadable)) {
  return __builtin_object_size(p, 3);
}

const static int SHOULDNT_BE_CALLED = -100;
int NoViableOverloadObjectSize0(void *const p PS(0))
    __attribute__((overloadable, enable_if(p == 0, "never selected"))) {
  return SHOULDNT_BE_CALLED;
}

int NoViableOverloadObjectSize1(void *const p PS(1))
    __attribute__((overloadable, enable_if(p == 0, "never selected"))) {
  return SHOULDNT_BE_CALLED;
}

int NoViableOverloadObjectSize2(void *const p PS(2))
    __attribute__((overloadable, enable_if(p == 0, "never selected"))) {
  return SHOULDNT_BE_CALLED;
}

int NoViableOverloadObjectSize3(void *const p PS(3))
    __attribute__((overloadable, enable_if(p == 0, "never selected"))) {
  return SHOULDNT_BE_CALLED;
}

// CHECK-LABEL: define void @test3
void test3() {
  struct Foo t[10];

  // CHECK: call i32 @_Z27NoViableOverloadObjectSize0PvU17pass_object_size0(i8* %{{.*}}, i64 360)
  gi = NoViableOverloadObjectSize0(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize1PvU17pass_object_size1(i8* %{{.*}}, i64 360)
  gi = NoViableOverloadObjectSize1(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize2PvU17pass_object_size2(i8* %{{.*}}, i64 360)
  gi = NoViableOverloadObjectSize2(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize3PvU17pass_object_size3(i8* %{{.*}}, i64 360)
  gi = NoViableOverloadObjectSize3(&t[1]);

  // CHECK: call i32 @_Z27NoViableOverloadObjectSize0PvU17pass_object_size0(i8* %{{.*}}, i64 356)
  gi = NoViableOverloadObjectSize0(&t[1].t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize1PvU17pass_object_size1(i8* %{{.*}}, i64 36)
  gi = NoViableOverloadObjectSize1(&t[1].t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize2PvU17pass_object_size2(i8* %{{.*}}, i64 356)
  gi = NoViableOverloadObjectSize2(&t[1].t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize3PvU17pass_object_size3(i8* %{{.*}}, i64 36)
  gi = NoViableOverloadObjectSize3(&t[1].t[1]);

  // CHECK: call i32 @_Z34NoViableOverloadDynamicObjectSize0PvU25pass_dynamic_object_size0(i8* %{{.*}}, i64 360)
  gi = NoViableOverloadDynamicObjectSize0(&t[1]);
}

// CHECK-LABEL: define void @test4
void test4(struct Foo *t) {
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize0PvU17pass_object_size0(i8* %{{.*}}, i64 %{{.*}})
  gi = NoViableOverloadObjectSize0(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize1PvU17pass_object_size1(i8* %{{.*}}, i64 %{{.*}})
  gi = NoViableOverloadObjectSize1(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize2PvU17pass_object_size2(i8* %{{.*}}, i64 %{{.*}})
  gi = NoViableOverloadObjectSize2(&t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize3PvU17pass_object_size3(i8* %{{.*}}, i64 0)
  gi = NoViableOverloadObjectSize3(&t[1]);

  // CHECK: call i32 @_Z27NoViableOverloadObjectSize0PvU17pass_object_size0(i8* %{{.*}}, i64 %{{.*}})
  gi = NoViableOverloadObjectSize0(&t[1].t[1]);
  // CHECK: [[VAR:%[0-9]+]] = call i64 @llvm.objectsize
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize1PvU17pass_object_size1(i8* %{{.*}}, i64 [[VAR]])
  gi = NoViableOverloadObjectSize1(&t[1].t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize2PvU17pass_object_size2(i8* %{{.*}}, i64 %{{.*}})
  gi = NoViableOverloadObjectSize2(&t[1].t[1]);
  // CHECK: call i32 @_Z27NoViableOverloadObjectSize3PvU17pass_object_size3(i8* %{{.*}}, i64 36)
  gi = NoViableOverloadObjectSize3(&t[1].t[1]);
}

void test5() {
  struct Foo t[10];

  int (*f)(void *) = &NoViableOverloadObjectSize0;
  gi = f(&t[1]);

  int (*g)(void *) = &NoViableOverloadDynamicObjectSize0;
  gi = g(&t[1]);
}

// CHECK-LABEL: define i32 @IndirectObjectSize0
int IndirectObjectSize0(void *const p PS(0)) {
  // CHECK: call i32 @ObjectSize0(i8* %{{.*}}, i64 %{{.*}})
  // CHECK-NOT: @llvm.objectsize
  return ObjectSize0(p);
}

// CHECK-LABEL: define i32 @IndirectObjectSize1
int IndirectObjectSize1(void *const p PS(1)) {
  // CHECK: call i32 @ObjectSize1(i8* %{{.*}}, i64 %{{.*}})
  // CHECK-NOT: @llvm.objectsize
  return ObjectSize1(p);
}

// CHECK-LABEL: define i32 @IndirectObjectSize2
int IndirectObjectSize2(void *const p PS(2)) {
  // CHECK: call i32 @ObjectSize2(i8* %{{.*}}, i64 %{{.*}})
  // CHECK-NOT: @llvm.objectsize
  return ObjectSize2(p);
}

// CHECK-LABEL: define i32 @IndirectObjectSize3
int IndirectObjectSize3(void *const p PS(3)) {
  // CHECK: call i32 @ObjectSize3(i8* %{{.*}}, i64 %{{.*}})
  // CHECK-NOT: @llvm.objectsize
  return ObjectSize3(p);
}

int IndirectDynamicObjectSize0(void *const p PDS(0)) {
  // CHECK: call i32 @ObjectSize0(i8* %{{.*}}, i64 %{{.*}})
  // CHECK-NOT: @llvm.objectsize
  return ObjectSize0(p);
}

int Overload0(void *, size_t, void *, size_t);
int OverloadNoSize(void *, void *);

int OverloadedObjectSize(void *const p PS(0), void *const c PS(0))
    __attribute__((overloadable)) __asm__("Overload0");
int OverloadedObjectSize(void *const p, void *const c)
    __attribute__((overloadable)) __asm__("OverloadNoSize");

// CHECK-LABEL: define void @test6
void test6() {
  int known[10], *opaque;

  // CHECK: call i32 @"\01Overload0"
  gi = OverloadedObjectSize(&known[0], &known[0]);
  // CHECK: call i32 @"\01Overload0"
  gi = OverloadedObjectSize(&known[0], opaque);
  // CHECK: call i32 @"\01Overload0"
  gi = OverloadedObjectSize(opaque, &known[0]);
  // CHECK: call i32 @"\01Overload0"
  gi = OverloadedObjectSize(opaque, opaque);
}

int Identity(void *p, size_t i) { return i; }

// CHECK-NOT: define void @AsmObjectSize
int AsmObjectSize0(void *const p PS(0)) __asm__("Identity");
int AsmObjectSize1(void *const p PS(1)) __asm__("Identity");
int AsmObjectSize2(void *const p PS(2)) __asm__("Identity");
int AsmObjectSize3(void *const p PS(3)) __asm__("Identity");

// CHECK-LABEL: define void @test7
void test7() {
  struct Foo t[10];

  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 360)
  gi = AsmObjectSize0(&t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 360)
  gi = AsmObjectSize1(&t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 360)
  gi = AsmObjectSize2(&t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 360)
  gi = AsmObjectSize3(&t[1]);

  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 356)
  gi = AsmObjectSize0(&t[1].t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 36)
  gi = AsmObjectSize1(&t[1].t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 356)
  gi = AsmObjectSize2(&t[1].t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 36)
  gi = AsmObjectSize3(&t[1].t[1]);
}

// CHECK-LABEL: define void @test8
void test8(struct Foo *t) {
  // CHECK: [[VAR:%[0-9]+]] = call i64 @llvm.objectsize
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 [[VAR]])
  gi = AsmObjectSize1(&t[1].t[1]);
  // CHECK: call i32 @"\01Identity"(i8* %{{.*}}, i64 36)
  gi = AsmObjectSize3(&t[1].t[1]);
}

void DifferingObjectSize0(void *const p __attribute__((pass_object_size(0))));
void DifferingObjectSize1(void *const p __attribute__((pass_object_size(1))));
void DifferingObjectSize2(void *const p __attribute__((pass_object_size(2))));
void DifferingObjectSize3(void *const p __attribute__((pass_object_size(3))));

// CHECK-LABEL: define void @test9
void test9(void *const p __attribute__((pass_object_size(0)))) {
  // CHECK: @llvm.objectsize
  DifferingObjectSize2(p);

  // CHECK-NOT: @llvm.objectsize
  DifferingObjectSize0(p);
  DifferingObjectSize1(p);

  // CHECK: call void @DifferingObjectSize3(i8* %{{.*}}, i64 0)
  DifferingObjectSize3(p);
}

// CHECK-LABEL: define void @test10
void test10(void *const p __attribute__((pass_object_size(1)))) {
  // CHECK: @llvm.objectsize
  DifferingObjectSize2(p);
  // CHECK: @llvm.objectsize
  DifferingObjectSize0(p);

  // CHECK-NOT: @llvm.objectsize
  DifferingObjectSize1(p);

  // CHECK: call void @DifferingObjectSize3(i8* %{{.*}}, i64 0)
  DifferingObjectSize3(p);
}

// CHECK-LABEL: define void @test11
void test11(void *const p __attribute__((pass_object_size(2)))) {
  // CHECK: @llvm.objectsize
  DifferingObjectSize0(p);
  // CHECK: @llvm.objectsize
  DifferingObjectSize1(p);

  // CHECK-NOT: @llvm.objectsize
  DifferingObjectSize2(p);

  // CHECK: call void @DifferingObjectSize3(i8* %{{.*}}, i64 0)
  DifferingObjectSize3(p);
}

// CHECK-LABEL: define void @test12
void test12(void *const p __attribute__((pass_object_size(3)))) {
  // CHECK: @llvm.objectsize
  DifferingObjectSize0(p);
  // CHECK: @llvm.objectsize
  DifferingObjectSize1(p);

  // CHECK-NOT: @llvm.objectsize
  DifferingObjectSize2(p);
  DifferingObjectSize3(p);
}

// CHECK-LABEL: define void @test13
void test13() {
  char c[10];
  unsigned i = 0;
  char *p = c;

  // CHECK: @llvm.objectsize
  ObjectSize0(p);

  // Allow side-effects, since they always need to happen anyway. Just make sure
  // we don't perform them twice.
  // CHECK: = add
  // CHECK-NOT: = add
  // CHECK: @llvm.objectsize
  // CHECK: call i32 @ObjectSize0
  ObjectSize0(p + ++i);

  // CHECK: = add
  // CHECK: @llvm.objectsize
  // CHECK-NOT: = add
  // CHECK: call i32 @ObjectSize0
  ObjectSize0(p + i++);
}

// There was a bug where variadic functions with pass_object_size would cause
// problems in the form of failed assertions.
void my_sprintf(char *const c __attribute__((pass_object_size(0))), ...) {}

// CHECK-LABEL: define void @test14
void test14(char *c) {
  // CHECK: @llvm.objectsize
  // CHECK: call void (i8*, i64, ...) @my_sprintf
  my_sprintf(c);

  // CHECK: @llvm.objectsize
  // CHECK: call void (i8*, i64, ...) @my_sprintf
  my_sprintf(c, 1, 2, 3);
}

void pass_size_unsigned(unsigned *const PS(0));

// Bug: we weren't lowering to the proper @llvm.objectsize for pointers that
// don't turn into i8*s, which caused crashes.
// CHECK-LABEL: define void @test15
void test15(unsigned *I) {
  // CHECK: @llvm.objectsize.i64.p0i32
  // CHECK: call void @pass_size_unsigned
  pass_size_unsigned(I);
}

void pass_size_as1(__attribute__((address_space(1))) void *const PS(0));
void pass_size_unsigned_as1(
    __attribute__((address_space(1))) unsigned *const PS(0));

// CHECK-LABEL: define void @test16
void test16(__attribute__((address_space(1))) unsigned *I) {
  // CHECK: call i64 @llvm.objectsize.i64.p1i8
  // CHECK: call void @pass_size_as1
  pass_size_as1(I);
  // CHECK: call i64 @llvm.objectsize.i64.p1i32
  // CHECK: call void @pass_size_unsigned_as1
  pass_size_unsigned_as1(I);
}

// This used to cause assertion failures, since we'd try to emit the statement
// expression (and definitions for `a`) twice.
// CHECK-LABEL: define void @test17
void test17(char *C) {
  // Check for 65535 to see if we're emitting this pointer twice.
  // CHECK: 65535
  // CHECK-NOT: 65535
  // CHECK: @llvm.objectsize.i64.p0i8(i8* [[PTR:%[^,]+]],
  // CHECK-NOT: 65535
  // CHECK: call i32 @ObjectSize0(i8* [[PTR]]
  ObjectSize0(C + ({ int a = 65535; a; }));
}

// CHECK-LABEL: define void @test18
void test18(char *const p PDS(0)) {
  // CHECK-NOT: llvm.objectsize
  gi = __builtin_dynamic_object_size(p, 0);
  gi = __builtin_object_size(p, 0);
}
//
//  ProductDetailNaviView.h
//  KidsTC
//
//  Created by 詹平 on 2017/2/6.
//  Copyright © 2017年 zhanping. All rights reserved.
//

#import <UIKit/UIKit.h>

extern CGFloat const kProductDetailNaviViewH;

typedef enum : NSUInteger {
    ProductDetailNaviViewActionTypeBack = 600,
    ProductDetailNaviViewActionTypeTime,
    ProductDetailNaviViewActionTypeMore,
} ProductDetailNaviViewActionType;

@class ProductDetailNaviView;
@protocol ProductDetailNaviViewDelegate <NSObject>
- (void)productDetailNaviView:(ProductDetailNaviView *)view actionType:(ProductDetailNaviViewActionType)type value:(id)value;
@end

@interface ProductDetailNaviView : UIView
@property (weak, nonatomic) IBOutlet UILabel *nameL;
@property (nonatomic, weak) id<ProductDetailNaviViewDelegate> delegate;
- (void)didScroll:(CGFloat)offsety;
@end
BBC Africa reaches a weekly audience of more than 90 million, making it the largest international broadcaster in Africa. It produces news and multiplatform content on radio, TV, digital and social media. We are expanding our TV and digital content in the following genres: Investigations, Business, Children's News, Sport, Technology, Satire and Women's Affairs. We are recruiting a Broadcast Journalist to join our growing team in Nairobi. This role will work across all the BBC Africa social media platforms. You will be responsible for creating and editing content for the BBC Africa social media platforms, including videos, online stories, gifs and Facebook Live. You will build and manage the communities and audiences across the social channels and keep up to date with new trends on social media. As well as working with colleagues in the regional service, you will be expected to produce stories that have global and pan-African appeal. Shift work will be required, which could include early/late shifts, weekends and public holidays, so flexibility is essential. The ideal candidate will have first-class communication skills with fluency in written and spoken English. Knowledge of Swahili or another African language would be advantageous. You will have demonstrable experience of success in growing and engaging with audiences on social media platforms. You will have excellent editorial experience with strong writing skills and an understanding and awareness of African audiences and the sorts of social content they are likely to engage with.
import csv

# ALL_ORGS and github_username() are defined elsewhere in this module.
def validate_salesforce_export(filename):
    with open(filename, encoding="cp1252") as fcsv:
        reader = csv.DictReader(fcsv)
        for row in reader:
            acct = row["Account Name"]
            if acct == "Opfocus Test":
                continue
            acct_valid = (acct in ALL_ORGS
                          or acct == "Individual Contributors"
                          or acct.endswith(" Household"))
            assert acct_valid, f"Account Name is not a valid org: {acct}"
            username = row["GitHub Username"]
            assert github_username(username), f"GitHub Username is not valid: {username}"
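To show how the validator above behaves end to end, here is a self-contained sketch that wires in stub definitions of `ALL_ORGS` and `github_username` (both hypothetical stand-ins for the module's real definitions) and runs the function against a tiny generated export:

```python
import csv
import tempfile

# Hypothetical stand-ins for the module's real ALL_ORGS and github_username().
ALL_ORGS = {"edX", "Mozilla"}

def github_username(name):
    # Minimal stub: non-empty and no spaces.
    return bool(name) and " " not in name

def validate_salesforce_export(filename):
    with open(filename, encoding="cp1252") as fcsv:
        for row in csv.DictReader(fcsv):
            acct = row["Account Name"]
            if acct == "Opfocus Test":
                continue
            acct_valid = (acct in ALL_ORGS
                          or acct == "Individual Contributors"
                          or acct.endswith(" Household"))
            assert acct_valid, f"Account Name is not a valid org: {acct}"
            username = row["GitHub Username"]
            assert github_username(username), f"GitHub Username is not valid: {username}"

# Build a two-row export and validate it; an invalid row would raise AssertionError.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, encoding="cp1252") as f:
    writer = csv.DictWriter(f, fieldnames=["Account Name", "GitHub Username"])
    writer.writeheader()
    writer.writerow({"Account Name": "edX", "GitHub Username": "octocat"})
    writer.writerow({"Account Name": "Smith Household", "GitHub Username": "jsmith"})
    path = f.name

validate_salesforce_export(path)
```

Because the checks are plain `assert` statements, the first bad row stops the run with a message naming the offending account or username.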
Incidence and Economic Burden of Infections in Cancer Patients Receiving Immune Checkpoint Inhibitors: A Retrospective Cohort Study

BACKGROUND: Along with their antitumor effects, immune checkpoint inhibitors (ICPI) have shown great potential in treating chronic infections such as HIV, hepatitis B and malaria in ex-vivo studies. However, several case reports and case series have suggested an increased infection risk in cancer patients. The purpose of our study was to assess the risk of infections in cancer patients receiving ICPI. We also attempted to evaluate the role of a multidisciplinary approach (oncology and infectious disease specialists) and the cost associated with treatment.

METHODS: Records of all cancer patients aged ≥18 years who had received at least one dose of ICPI between 2015 and 2018 at a major community teaching hospital in the central Massachusetts region were reviewed. Several risk factors associated with infection were identified. A two-tailed, unpaired t-test was used to analyze the association between risk factors and infection. We calculated the cumulative length of stay (LOS) and cost per admission with a multidisciplinary vs. non-multidisciplinary approach. The calculated total average cost per admission was compared to a matched population (without an oncologic diagnosis) admitted with infections similar to those in our study, to compare the economic burden.

RESULTS: Retrospective chart review of 169 cancer patients receiving ICPI showed sixty-two episodes of infection in thirty-seven (21.8%) patients and a mortality rate of 3.5% due to associated complications. Risk factors such as COPD, prior chemotherapy and steroid use were significantly associated (P < 0.05) with infections. Further sub-group analysis showed an increase in cumulative LOS from 5.9 to 8.1 days but an approximately similar average cost per admission ($52,047 vs. $54,510) with a non-multidisciplinary vs. multidisciplinary approach.
The calculated total cost per admission during an episode of infection in this cohort was $35,484; three-fold higher than for similar infections in a matched general non-oncologic population ($11,527).

CONCLUSIONS: A significant incidence of infections and associated health care resource utilization continues to prevail in cancer patients despite the utility of ICPI. A multidisciplinary approach to managing the infections and associated complications in cancer patients receiving ICPI increased the cumulative LOS but not the average cost per admission.

Cancer remains the second most common cause of death in the United States, despite a 27% decline in mortality rates from 1991 to 2016. 1 The causes of death in these patients are classified as cancer and non-cancer related, with infection and heart disease being the most common non-cancer related deaths. 2 The risk of infection in these patients is due to a complex interplay between host, environment, and treatment-related factors. The presence of multiple risk factors in the same patient is not uncommon. 3 Several factors predisposing to infection in solid tumors include disruption of natural anatomic barriers such as the skin and mucosal surfaces, obstruction, and treatment-related factors such as chemoradiation therapy, surgery, and the use of implantable devices. 3 Newer therapeutic approaches and antimicrobial prophylaxis continue to shape the spectrum of infections in these patients. While several studies have assessed the infection risk with traditional chemotherapeutic agents, 4 the spectrum of infectious complications during and after immune checkpoint inhibitor therapy is not well established. Several molecules, like PD-1, CTLA-4, lymphocyte activation gene-3, and T-cell immunoglobulin and mucin protein-3, have been identified as immune checkpoint molecules in the recent past.
5 Among them, CTLA-4 is the first clinically targeted immune checkpoint receptor; it primarily regulates the early stages of T-cell activation, typically in lymph nodes or spleen. Blockade of CTLA-4 resulted in clonal expansion of cytotoxic T-lymphocytes and therapeutic action against cancer cells. Following the CTLA-4 discovery in 1987, PD-1 was discovered in 1992 by Honjo and colleagues while studying the mechanism of T-cell death. PD-1, a member of the cluster of differentiation 28 (CD28)/B7 family of co-stimulatory receptors, inhibits T-cell activation by engaging with PDL-1 and PDL-2 in the cancer microenvironment. It is suggested that PD-1 inhibition will have fewer side effects and greater antitumor activity than CTLA-4 inhibition due to its predominant action in the effector phase of the T-cell response and its increased selectivity for immunosuppressive signals delivered directly by the cancer. 6 Recently, several studies suggested an increased response rate with dual blockade, i.e. CTLA-4 and PD-1/PDL-1, rather than single-agent blockade, although associated with increased toxicity. 7 Along with augmenting antitumor activity, it is postulated that ICPIs promote viral clearance in chronic infections by complementing antiviral immune activity. There are reports of a reduction in viral load in an animal model of lymphocytic choriomeningitis infection upon blocking the PD-1 pathway, due to restoration of T-cell effector functions. 8 A similar in vivo investigation in humans showed clinical improvement or stabilization of human polyomavirus JC virus (JCV)-induced progressive multifocal leukoencephalopathy (PML) after receiving pembrolizumab in five out of eight patients. 9 Analysis of blood and CSF specimens prior to pembrolizumab showed upregulated PD-1 and PDL-1 expression on cluster of differentiation four (CD4+) and cluster of differentiation eight (CD8+) lymphocytes, limiting the successful clearance of JCV.
Administration of PD-1 blockade (pembrolizumab) prompted downregulation of PD-1 expression on lymphocytes and an increase in in vitro CD4+ and CD8+ anti-JCV activity. 9 ICPIs have continued to show great potential in treating several other chronic bacterial, viral, or parasitic infections, like HIV, hepatitis B, and malaria, by enhancing effector T-cell responses in ex vivo studies, 10 generating a theoretical hypothesis of probable abatement of infections with ICPI in cancer patients. However, several case reports and case series since the initiation of ICPI use in cancer patients over the past decade have reported persistent opportunistic infection risk and reactivation of tuberculosis. Review of the literature showed very limited studies assessing the infection risk in cancer patients receiving ICPI. 9,10 Also, given the tremendous overlap of the presenting features of ICPI-mediated inflammation with infectious processes, these clinical symptoms pose a diagnostic challenge due to the unfamiliarity of the role of ICPI in acute infections. The primary aim of this retrospective study was to assess the risk of infections in cancer patients receiving ICPI. Our study also aimed to assess the economic burden of a multidisciplinary approach to their treatment.

Methods: A retrospective review of various types of cancer patients receiving PD-1 (pembrolizumab, nivolumab) and PDL-1 (durvalumab) and CTLA-4 (ipilimumab) inhibitors between 2015 and 2018 was carried out at a major community teaching hospital (Saint Vincent Hospital Cancer Center) in the central Massachusetts region. All of these patients received immunotherapy, either as an initial agent or later due to initial treatment failure or intolerance. Based on the standard treatment protocols for the specific cancer, they were on either single or dual agents. Inclusion criteria included any cancer patient ≥18 years of age who received at least one dose of immune checkpoint inhibitors.
Exclusion criteria included discontinuation of ICPI prior to initiation due to withdrawal of consent, adverse events from other chemotherapeutic agents, or progression of the disease. All records of cancer patients receiving immune checkpoint inhibitors were reviewed. Data extracted included age, gender, body mass index (BMI), cancer type and metastasis sites, comorbidities, medication use such as steroids, granulocyte colony-stimulating factor (G-CSF) and antibiotics, chemo/radiation therapy, ICPI type and number of doses, documented infections or ICPI-mediated inflammation, microbiology data, choice of antibiotics, length of hospital/intensive care unit (ICU) stay, mortality rate, and the cost per admission with a multidisciplinary vs. non-multidisciplinary approach. For the purposes of the study, various clinically significant events were defined as in Table 1. We also classified drug-induced pneumonitis or colitis as 'infection mimics' due to their overlapping clinical presentation and/or radiological resemblance.

Clinical symptoms along with supportive radiological findings.
Influenza: Clinical symptoms with positive serologies for influenza type A or B.
Enterocolitis: Clinical symptoms and/or the presence of supportive radiological findings.
Clostridium difficile infection: Identification of C. difficile toxin in stool by enzyme immunoassay in the setting of diarrhea. 18
Genitourinary infections: Clinical symptoms (flank or suprapubic pain, dysuria, cloudy/foul-smelling urine, increased urinary frequency, or urgency) in the context of a positive urine culture.
Skin, soft tissue, and bone infections: Clinical symptoms (swelling, redness, warmth, pain of skin or skin structures) along with positive cultures (in the case of an abscess). For osteomyelitis, clinical symptoms in the setting of supportive radiological findings.
Febrile neutropenia: Core temperature ≥38.3°C, or ≥38.0°C for ≥1 hour, in association with an absolute neutrophil count ≤500/µL or expected to fall below 500/µL.
19
Bloodstream infection: Any bacterial infection caused by a recognized pathogen that was isolated from ≥1 blood culture in the context of a compatible clinical illness, with the result deemed clinically significant by the treating clinician.
Clinically documented infection: Infection diagnosed by the treating physician based on the identification of a clinical focus (e.g. cellulitis, pneumonia, etc.) but without the isolation of an associated pathogen.
Microbiologically documented infection (MDI): Bacterial, viral, fungal, and parasitic infections supported by microbiological evidence, such as a positive culture, antigen or PCR test result.

Statistical analysis: The data were thoroughly explored using univariate analyses. A two-tailed, unpaired t-test was used to analyze the statistical significance of differences in continuous data. All reported p values are two-tailed, and a p value ≤ 0.05 was considered significant. Statistical analysis was done using SPSS software v. 21.0 (SPSS Inc., Armonk, NY, USA). The study protocol was approved by the local institutional review board.

Results: A total of 169 patients met the inclusion criteria. The baseline characteristics of the study population are described in Table 2. The median age of the patient population was 68 years (interquartile range 62-77). In addition to the risk factors mentioned in Table 2, several other risk factors such as neutropenia, recent hospitalization, catheter use, impaired gag reflex, and mucositis were also identified. Our study only included subjects with solid organ malignancies. Lung cancers constituted more than 50% of the study population. Pembrolizumab and nivolumab were the most commonly used ICPIs. The average number of doses per patient was seven, eight, three and five respectively for pembrolizumab, nivolumab, durvalumab, and nivolumab-ipilimumab (Table 3). A coexisting diagnosis of drug-induced pneumonitis and colitis existed in 8.2% and 1.7% of cases respectively.
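The two-tailed, unpaired t-test used in the univariate analysis above can be sketched from first principles. The sketch below implements the pooled-variance (Student's) statistic; the LOS samples are invented for illustration and are not study data:

```python
import math

def unpaired_t(sample_a, sample_b):
    """Two-sample Student's t statistic (unpaired, equal-variance pooling)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Pooled variance with na + nb - 2 degrees of freedom.
    ss_a = sum((x - mean_a) ** 2 for x in sample_a)
    ss_b = sum((x - mean_b) ** 2 for x in sample_b)
    pooled_var = (ss_a + ss_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / na + 1 / nb))
    return t, na + nb - 2

# Invented LOS samples (days) for two hypothetical patient groups.
t, df = unpaired_t([5, 7, 6, 8, 9], [3, 4, 5, 4, 6])
# Two-tailed test at alpha = 0.05: for df = 8 the critical value is about 2.306,
# so |t| > 2.306 corresponds to p <= 0.05 and would be reported as significant.
significant = abs(t) > 2.306
```

In practice a statistics package (the study used SPSS v. 21.0) computes the exact p value from the t distribution rather than comparing against a tabulated critical value.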
Paclitaxel was the commonest risk factor in those with bacteremia. Concurrent chemotherapy with pemetrexed and paclitaxel was most commonly associated with febrile neutropenia. Interestingly, we also noted that new-onset neutropenia after starting ICPI (pembrolizumab and durvalumab) and concurrent chemotherapy (pemetrexed and paclitaxel) in lung cancer patients was associated with febrile neutropenia, and no cases of infection were identified in patients with neutropenia existing prior to starting ICPI. A mortality rate of 3.5%, due to infections and associated complications, was noted in our study. The cumulative LOS varied from 2 to 24 days with an average of 7 days. A multidisciplinary team including oncology and infectious disease specialists was variably involved; oncology and infectious diseases were consulted in 78% and 50% of cases respectively. Sub-group analysis to assess the role of a multidisciplinary approach in managing infections in this set of patients showed an increased average LOS from 5.9 to 8.1 days.

Discussion: Acute infections continue to represent a significant risk in cancer patients irrespective of the choice of antineoplastic therapy. In our study, infections in patients with solid tumors receiving ICPI demonstrated an incidence of 21.8% and an event rate of 36.1%. The event rates were slightly higher than the incidence rates owing to the recurrence of infections. Del Castillo et al. reported a 2% incidence of infections in melanoma patients receiving ICPI, but the risk increased to 13.5% with steroid or infliximab use. 9 Similar results were noted by Wang et al. in their study on ICPI-induced diarrhea and colitis in patients with advanced malignancies. 10 Compared to Del Castillo et al., our study showed much higher incidence rates, owing to probable immunosuppression from cancer, chronic medical co-morbidities, prior/concurrent chemoradiation therapy, multiple hospital/clinic visits, immunosuppressant use and infection mimics.
Univariate analysis evaluating the association between various risk factors and infections showed a clinically significant association of COPD, prior chemotherapy and steroid use with infections. These results further supported the findings of Del Castillo et al. and Wang et al. and also suggested an equivalent role of medical comorbidities, along with steroids, in increasing the risk of infections. 9, 10 We also noted that new-onset neutropenia, after starting an ICPI, was associated with a higher risk of infections than pre-existing neutropenia. Among the several infections that can affect such patients, bacterial pneumonias are a common complication among patients receiving hematopoietic stem cell transplantation (HSCT) or chemotherapeutic agents, due to their complex immune dysfunction, lung architectural derangements, repeated encounters with the healthcare system, and malnutrition. 11 Despite reports of febrile neutropenia being associated with approximately 10% in-hospital mortality, 12 the mortality noted in our study was due to pneumonia-related complications (3.5%). With a co-existing diagnosis of drug-induced pneumonitis in 8.2% of cases, pneumonia was also the most commonly noted infection in our study (Graph 1). Microbiologically documented pneumonia was seen in 6.4%, and methicillin-resistant Staphylococcus aureus (MRSA), methicillin-susceptible Staphylococcus aureus (MSSA), Mycoplasma pneumoniae (M. pneumoniae), and Stenotrophomonas maltophilia (S. maltophilia) were the causative organisms. Pneumonia due to S. aureus and M. pneumoniae is noted in healthy adults, 13 but S. maltophilia rarely causes pneumonia in immunocompetent hosts, 14 suggesting that its increased prevalence is due to the immunocompromised status of these patients. This was further supported by the identification of atypical pathogens in various other infections (Table 3), like cytomegalovirus (CMV) colitis 15 and bloodstream infections (BSI) due to Candida albicans (C.
albicans) 16 and Bacteroides thetaiotaomicron (B. thetaiotaomicron). 17 Furthermore, until now, studies have reported the role of ICPIs only in a few chronic infections necessitating T-cell-mediated clearance. Their role in several other infections modulated through other pathways has not been studied and remains unclear. In the subset of cancer patients receiving immunotherapy, oncology and infectious disease were consulted in 78% and 50% of cases respectively. These significant results suggest the preference of several physicians to seek a specialist's assistance in directing appropriate care, given the novelty of the agents. We also noted a 3.5% mortality rate during admissions for infections, due to several complications, leading to significant health care cost utilization, approximately 2.2 million dollars ($2,218,035) in total hospital charges. The economic burden due to infections in this set of patients is threefold higher ($35,484 vs. $11,527) when compared to the general non-oncologic population. The involvement of a multidisciplinary team to manage the infections and associated complications in cancer patients receiving ICPI showed an increase in average LOS from 5.9 to 8.1 days when compared to a non-multidisciplinary approach. However, the average cost per admission remained approximately the same in both (non-multidisciplinary vs. multidisciplinary) arms ($52,047 vs. $54,510). Our study has several limitations, such as a small sample size, uneven distribution of cancers and ICPI drug type, and the retrospective nature of the study. However, we believe the study provides insight into infectious complications in cancer patients receiving ICPIs, and the associated cost burden, in a community setting. Conclusions: Our review of the literature showed several studies assessing the use of ICPI in chronic infections requiring T-cell-mediated clearance, but limited studies analyzed the association of ICPI with acute infections in cancer patients.
With our study showing an infection incidence of 21.8% and an economic burden of 2.2 million dollars upon healthcare infrastructure, we suggest that future studies address the pathophysiology of ICPI as a risk factor for acute infections. Also, based on the microbiological findings in our study, a high index of clinical suspicion for typical or atypical pathogens, whether bacterial, fungal or viral, is warranted until further studies assess the role of ICPI in immunomodulation for acute infections in cancer patients. A multidisciplinary approach is advised while providing care in this subset of the population, given the significant overlap of symptoms between infections and infection mimics (i.e., drug-induced inflammation) and the prevalence of opportunistic organisms. Abbreviations: Declarations: Ethics approval and consent to participate: The study was approved by the local institutional review board at MetroWest Medical Center, Framingham, Massachusetts, U.S.A. The study number is 2019-118. Consent to participate was not applicable, as this is a retrospective chart review. Consent for publication: Approved for submission and publication by all the named authors. Availability of data and material: The data used to support the findings of this study are available from the first author upon request. Authors' contributions: Figure 1 shows the incidence of various types of infections noted in our study.
package com.example.nctai_trading.exante.accountDetail;

import com.google.gson.annotations.Expose;
import com.google.gson.annotations.SerializedName;

import java.util.List;

public class AccountSummary {

    @SerializedName("account")
    @Expose
    private String account;

    @SerializedName("accountId")
    @Expose
    private String accountId;

    @SerializedName("currency")
    @Expose
    private String currency;

    @SerializedName("sessionDate")
    @Expose
    private String sessionDate;

    @SerializedName("timestamp")
    @Expose
    private Long timestamp;

    @SerializedName("netAssetValue")
    @Expose
    private String netAssetValue;

    @SerializedName("freeMoney")
    @Expose
    private String freeMoney;

    @SerializedName("moneyUsedForMargin")
    @Expose
    private String moneyUsedForMargin;

    @SerializedName("marginUtilization")
    @Expose
    private String marginUtilization;

    @SerializedName("currencies")
    @Expose
    private List<Currency> currencies = null;

    @SerializedName("positions")
    @Expose
    private List<Position> positions = null;

    public String getAccount() { return account; }
    public void setAccount(String account) { this.account = account; }

    public String getAccountId() { return accountId; }
    public void setAccountId(String accountId) { this.accountId = accountId; }

    public String getCurrency() { return currency; }
    public void setCurrency(String currency) { this.currency = currency; }

    public String getSessionDate() { return sessionDate; }
    public void setSessionDate(String sessionDate) { this.sessionDate = sessionDate; }

    public Long getTimestamp() { return timestamp; }
    public void setTimestamp(Long timestamp) { this.timestamp = timestamp; }

    public String getNetAssetValue() { return netAssetValue; }
    public void setNetAssetValue(String netAssetValue) { this.netAssetValue = netAssetValue; }

    public String getFreeMoney() { return freeMoney; }
    public void setFreeMoney(String freeMoney) { this.freeMoney = freeMoney; }

    public String getMoneyUsedForMargin() { return moneyUsedForMargin; }
    public void setMoneyUsedForMargin(String moneyUsedForMargin) { this.moneyUsedForMargin = moneyUsedForMargin; }

    public String getMarginUtilization() { return marginUtilization; }
    public void setMarginUtilization(String marginUtilization) { this.marginUtilization = marginUtilization; }

    public List<Currency> getCurrencies() { return currencies; }
    public void setCurrencies(List<Currency> currencies) { this.currencies = currencies; }

    public List<Position> getPositions() { return positions; }
    public void setPositions(List<Position> positions) { this.positions = positions; }
}
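The `@SerializedName` annotations above define the JSON field names this POJO maps. A minimal sketch of the payload shape the class deserializes, here in Python with an entirely hypothetical example payload (the field values are assumptions; only the key names come from the class):

```python
import json

# Hypothetical payload matching the @SerializedName fields of AccountSummary
payload = '''{
  "account": "ABC1234.001",
  "accountId": "ABC1234.001",
  "currency": "USD",
  "sessionDate": "2021-05-14",
  "timestamp": 1620986400000,
  "netAssetValue": "1000.00",
  "freeMoney": "950.00",
  "moneyUsedForMargin": "50.00",
  "marginUtilization": "0.05",
  "currencies": [],
  "positions": []
}'''

summary = json.loads(payload)
# Monetary amounts arrive as strings, matching the String fields of the class
print(summary["netAssetValue"])  # 1000.00
```

Note that the class declares the monetary fields as `String` rather than a numeric type, presumably to avoid floating-point rounding of account balances; a consumer would parse them into a decimal type explicitly.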
from datetime import datetime, timezone
import logging

from django.utils.module_loading import import_string

from apps.fyle.models import ExpenseGroupSettings
from apps.workspaces.models import Workspace

logger = logging.getLogger(__name__)


def add_expense_id_to_expense_group_settings(workspace_id: int):
    """
    Add expense id to card expense grouping
    :param workspace_id: Workspace id
    :return: None
    """
    expense_group_settings = ExpenseGroupSettings.objects.get(workspace_id=workspace_id)
    ccc_expense_group_fields = expense_group_settings.corporate_credit_card_expense_group_fields
    ccc_expense_group_fields.append('expense_id')
    expense_group_settings.corporate_credit_card_expense_group_fields = list(set(ccc_expense_group_fields))
    expense_group_settings.ccc_export_date_type = 'spent_at'
    expense_group_settings.save()


def check_interval_and_sync_dimension(workspace: Workspace, refresh_token: str) -> bool:
    """
    Check sync interval and sync dimension
    :param workspace: Workspace instance
    :param refresh_token: Refresh token of an org
    :return: True/False based on sync
    """
    if workspace.source_synced_at:
        time_interval = datetime.now(timezone.utc) - workspace.source_synced_at

    if workspace.source_synced_at is None or time_interval.days > 0:
        sync_dimensions(refresh_token, workspace.id)
        return True

    return False


def sync_dimensions(refresh_token: str, workspace_id: int) -> None:
    fyle_connection = import_string('apps.fyle.connector.FyleConnector')(refresh_token, workspace_id)

    dimensions = [
        'employees', 'categories', 'cost_centers', 'projects', 'expense_custom_fields'
    ]

    for dimension in dimensions:
        try:
            # resolve sync_<dimension> on the connector by name and invoke it
            sync = getattr(fyle_connection, 'sync_{}'.format(dimension))
            sync()
        except Exception as exception:
            logger.exception(exception)
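The `sync_dimensions` helper relies on `getattr` dispatch: the method name `sync_<dimension>` is built from the dimension list at runtime. A minimal self-contained sketch of that pattern, with a hypothetical `DummyConnector` standing in for the real `FyleConnector`:

```python
class DummyConnector:
    """Stands in for FyleConnector; records which sync methods were called."""

    def __init__(self):
        self.synced = []

    def sync_employees(self):
        self.synced.append('employees')

    def sync_categories(self):
        self.synced.append('categories')


conn = DummyConnector()
for dimension in ['employees', 'categories']:
    sync = getattr(conn, 'sync_{}'.format(dimension))  # resolve method by name
    sync()

print(conn.synced)  # ['employees', 'categories']
```

The try/except around each call in the original means one failing dimension sync is logged and skipped rather than aborting the remaining dimensions.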
The green fluorescent protein (GFP: Green Fluorescent Protein) from the jellyfish Aequorea victoria, and modified versions thereof, can be recombinantly expressed in heterologous cells, especially in various kinds of mammalian cells, and the resulting recombinant protein exhibits fluorescence in the host cells. Using this feature, attempts have been made to use GFP from A. victoria and its homologues for various purposes and applications as an in vivo fluorescent marker protein in the fields of biochemistry, cell physiology and medicine (see Reference 1: Lippincott-Schwartz, J., G. H. Patterson, Science Vol. 300, 87-91 (2003); Reference 2: Tsien, R. Y., Annu. Rev. Biochem. Vol. 67, 509-544 (1998)). In addition, besides GFP from A. victoria, GFP-like proteins have been cloned from class Hydrozoa of phylum Cnidaria, and further GFP-like proteins have also been cloned from class Anthozoa of phylum Cnidaria. Concerning the GFP-like proteins discovered in class Anthozoa, it has been reported that they probably constitute a fluorescent protein family with a common bio-evolutionary origin (see Reference 3: Y. A. Labas et al., Proc. Natl. Acad. Sci. U.S.A. Vol. 99, 4256-4261 (2002)). Concerning GFP from A. victoria, research on the mechanism essential to its fluorescence has progressed. First, it was revealed that, during folding into its native tertiary structure, the translated GFP polypeptide is converted into mature, fluorescent GFP through cyclization of an internal tripeptide site and its subsequent oxidation, which results in formation of a fluorophore. Furthermore, it has been confirmed that the SYG tripeptide at residues 65-67 in the deduced amino acid sequence of wild-type GFP from A. victoria is the internal tripeptide site that forms the fluorophore.
For example, it has been reported that the fluorescence of Y66H-GFP, in which Tyr at the 66th residue is mutated to His, is blue-shifted relative to the green fluorescence of wild-type GFP, showing blue fluorescence with a maximum at 448 nm. Furthermore, in S65T-GFP, in which Ser at the 65th residue is mutated to Thr, the fluorescence maximum is at 510 nm, a slight red shift relative to the green fluorescence of wild-type GFP. It has also been reported that fluorophore formation, achieved through cyclization of the internal tripeptide TYG site and subsequent oxidation, proceeds significantly more quickly in S65T-GFP than in the SYG of wild-type GFP. Besides the aforementioned mutations at the 65-67th SYG site, it has also been reported that when the mutations T203H, T203F or T203Y, which respectively replace Thr at position 203 of wild-type GFP from A. victoria with His, Phe or Tyr, are introduced, the fluorescence maximum shows a remarkable red shift to about 530 nm, resulting in a yellow fluorescent protein (YFP: Yellow Fluorescent Protein). Moreover, it has been reported that EGFP ("enhanced" GFP), which carries the mutation F64L replacing Phe with Leu at position 64 adjacent to the 65-67th SYG site, exhibits a markedly improved maturation process, accompanied by fluorophore formation, as compared with wild-type GFP (see Reference 4: B. P. Cormack et al., Gene Vol. 173, 33-38 (1996)). In this way, with regard to GFP-like proteins from various sea animals belonging to the phylum Cnidaria, represented by GFP from A. victoria, a number of attempts have been made to utilize them as in vivo fluorescent marker proteins expressible in animal cells. In the meantime, it is known that there exist many marine organisms, especially animal plankton, which show bioluminescence.
Accordingly, there is demand for novel fluorescent proteins constituting another type of protein family, one having a bio-evolutionarily different origin from the fluorescent protein family to which GFP from A. victoria belongs. Thus, a search for a new fluorescent protein family is desired, one that can be used as an in vivo fluorescent marker protein expressible in a host animal cell.
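The spectral shifts described above can be summarized in a small lookup table. The emission maxima below for Y66H, S65T and YFP are those quoted in the text; the wild-type value of 509 nm is not stated in the text and is our assumption (a commonly cited figure):

```python
# Emission maxima (nm) for GFP variants; the wild-type value is an assumed
# commonly cited figure, the others are taken from the text above.
emission_max_nm = {
    'wtGFP': 509,  # assumption: not stated exactly in the text
    'Y66H':  448,  # blue-shifted variant
    'S65T':  510,  # slight red shift vs. wild type
    'YFP':   530,  # T203H/F/Y mutants, "about 530 nm"
}

# Which variants are blue-shifted relative to wild type?
blue_shifted = [v for v, nm in emission_max_nm.items() if nm < emission_max_nm['wtGFP']]
print(blue_shifted)  # ['Y66H']
```

This makes the direction of each shift explicit: Y66H sits well below the wild-type maximum, while S65T and the T203 mutants sit above it.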
// csv2feat.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#if _WIN32
#include "../libbiokanga/commhdrs.h"
#else
#include "../libbiokanga/commhdrs.h"
#endif

const unsigned int cProgVer = 110;       // increment with each release
const int cDfltMinLengthRange = 4;       // default is to accept sequence lengths from this min length up
const int cMaxLengthRange = 1000000000;  // max feature length accepted
const int cMaxOutBuff = 32000;           // max number of chars to buffer in output

// processing modes
typedef enum eProcMode {
    ePMStandard = 0                      // standard processing
} etProcMode;

int Process(etProcMode Mode,             // processing mode
            int MinLength,               // probe elements must be of at least this length
            int MaxLength,               // and no longer than this length
            int MinOverlap,              // overlap onto features must be at least this length
            char *pszInLociFile,         // CSV file containing elements to be mapped onto feature
            char *pszInFeatFile,         // file containing features to map elements onto
            char *pszRsltsFile);         // file to write results into

int                                      // eBSFSuccess, eBSFerrFeature or eBSFerrChrom
Loci2Gene(etProcMode Mode,               // processing mode
          char Strand,                   // loci are on this strand
          CBEDfile *pBED,                // BED file containing features or genes to map loci onto
          char *pszChrom,                // chromosome
          int LociStart,                 // loci start
          int LociEnd,                   // loci end
          int MinOverlap,                // minimum number of bases required to overlap
          char *pszGene);                // where to return gene loci maps onto

CStopWatch gStopWatch;
CDiagnostics gDiagnostics;               // for writing diagnostics messages to log file
char gszProcName[_MAX_FNAME];            // process name

#ifdef _WIN32
int _tmain(int argc, char* argv[])
{
// determine my process name
_splitpath(argv[0],NULL,NULL,gszProcName,NULL);
#else
int main(int argc, const char** argv)
{
// determine my process name
CUtility::splitpath((char *)argv[0],NULL,gszProcName);
#endif
int iScreenLogLevel;                     // level of screen diagnostics
int iFileLogLevel;                       // level of file diagnostics
char szLogFile[_MAX_PATH];               // write diagnostics to this file
int Rslt;

int iMode;                               // processing mode 0:standard
int iMinLength;                          // probe elements must be of at least this length
int iMaxLength;                          // and no longer than this length
int iMinOverlap;                         // overlaps onto elements must be of at least this length
char szInLociFile[_MAX_PATH];            // input element loci from this file
char szInFeatFile[_MAX_PATH];            // input bioseq file features
char szRsltsFile[_MAX_PATH];             // output stats to this file

// command line args
struct arg_lit *help = arg_lit0("hH","help", "print this help and exit");
struct arg_lit *version = arg_lit0("v","version,ver", "print version information and exit");
struct arg_int *FileLogLevel = arg_int0("f","FileLogLevel","<int>","Level of diagnostics written to logfile 0=fatal,1=errors,2=info,3=diagnostics,4=debug");
struct arg_int *ScreenLogLevel = arg_int0("S","ScreenLogLevel","<int>","Level of diagnostics written to logfile 0=fatal,1=errors,2=info,3=diagnostics,4=debug");
struct arg_file *LogFile = arg_file0("F","log","<file>","diagnostics log file");
struct arg_int *Mode = arg_int0("m","procmode","<int>","processing mode 0:standard");
struct arg_file *InLociFile = arg_file1("i","inloci","<file>","element loci CSV file");
struct arg_file *InFeatFile = arg_file1("I","feat","<file>","bioseq feature file");
struct arg_file *RsltsFile = arg_file1("o","output","<file>","output file");
struct arg_int *MinLength = arg_int0("l","minlength","<int>","minimum element length (default 4)");
struct arg_int *MaxLength = arg_int0("L","maxlength","<int>","maximum element length (default 1000000000)");
struct arg_int *MinOverlap = arg_int0("M","minoverlap","<int>","minimum feature overlap (default 1)");
struct arg_end *end = arg_end(20);

void *argtable[] = {help,version,FileLogLevel,ScreenLogLevel,LogFile,
                    Mode,InLociFile,InFeatFile,RsltsFile,MinLength,MaxLength,MinOverlap,
                    end};

char **pAllArgs;
int argerrors;
argerrors = CUtility::arg_parsefromfile(argc,(char **)argv,&pAllArgs);
if(argerrors >= 0)
    argerrors = arg_parse(argerrors,pAllArgs,argtable);

/* special case: '--help' takes precedence over error reporting */
if (help->count > 0)
    {
    printf("\n%s ", gszProcName);
    arg_print_syntax(stdout,argtable,"\n");
    arg_print_glossary(stdout,argtable,"  %-25s %s\n");
    printf("\nNote: Parameters can be entered into a parameter file, one parameter per line.");
    printf("\n      To invoke this parameter file then precede its name with '@'");
    printf("\n      e.g. %s @myparams.txt\n\n",gszProcName);
    exit(1);
    }

/* special case: '--version' takes precedence over error reporting */
if (version->count > 0)
    {
    printf("\n%s Version: %d.%2.2d\n",gszProcName,cProgVer/100,cProgVer%100);
    exit(1);
    }

if (!argerrors)
    {
    iScreenLogLevel = ScreenLogLevel->count ? ScreenLogLevel->ival[0] : eDLInfo;
    if(iScreenLogLevel < eDLNone || iScreenLogLevel > eDLDebug)
        {
        printf("\nError: ScreenLogLevel '-S%d' specified outside of range %d..%d",iScreenLogLevel,eDLNone,eDLDebug);
        exit(1);
        }
    if(FileLogLevel->count && !LogFile->count)
        {
        printf("\nError: FileLogLevel '-f%d' specified but no logfile '-F<logfile>'",FileLogLevel->ival[0]);
        exit(1);
        }
    iFileLogLevel = FileLogLevel->count ? FileLogLevel->ival[0] : eDLInfo;
    if(iFileLogLevel < eDLNone || iFileLogLevel > eDLDebug)
        {
        printf("\nError: FileLogLevel '-l%d' specified outside of range %d..%d",iFileLogLevel,eDLNone,eDLDebug);
        exit(1);
        }
    if(LogFile->count)
        {
        strncpy(szLogFile,LogFile->filename[0],_MAX_PATH);
        szLogFile[_MAX_PATH-1] = '\0';
        }
    else
        {
        iFileLogLevel = eDLNone;
        szLogFile[0] = '\0';
        }

    iMode = Mode->count ? Mode->ival[0] : ePMStandard;
    if(iMode < ePMStandard || iMode > ePMStandard)
        {
        printf("\nError: Requested processing mode '-m%d' not supported",iMode);
        exit(1);
        }

    iMinLength = MinLength->count ? MinLength->ival[0] : cDfltMinLengthRange;
    if(iMinLength < 1 || iMinLength > cMaxLengthRange)
        {
        printf("Error: Minimum element length '-l%d' is not in range 1..%d",iMinLength,cMaxLengthRange);
        exit(1);
        }
    iMaxLength = MaxLength->count ? MaxLength->ival[0] : cMaxLengthRange;
    if(iMaxLength < iMinLength || iMaxLength > cMaxLengthRange)
        {
        printf("Error: Maximum element length '-L%d' is not in range %d..%d",iMaxLength,iMinLength,cMaxLengthRange);
        exit(1);
        }
    iMinOverlap = MinOverlap->count ? MinOverlap->ival[0] : 1;
    if(iMinOverlap < 1 || iMinOverlap > iMinLength)
        {
        printf("Error: Minimum feature overlap length '-M%d' is not in range 1..%d",iMinOverlap,iMinLength);
        exit(1);
        }

    strncpy(szInLociFile,InLociFile->filename[0],_MAX_PATH);
    szInLociFile[_MAX_PATH-1] = '\0';
    strncpy(szInFeatFile,InFeatFile->filename[0],_MAX_PATH);
    szInFeatFile[_MAX_PATH-1] = '\0';
    strncpy(szRsltsFile,RsltsFile->filename[0],_MAX_PATH);
    szRsltsFile[_MAX_PATH-1] = '\0';

    // now that command parameters have been parsed then initialise diagnostics log system
    if(!gDiagnostics.Open(szLogFile,(etDiagLevel)iScreenLogLevel,(etDiagLevel)iFileLogLevel,true))
        {
        printf("\nError: Unable to start diagnostics subsystem.");
        if(szLogFile[0] != '\0')
            printf(" Most likely cause is that logfile '%s' can't be opened/created",szLogFile);
        exit(1);
        }

    gDiagnostics.DiagOut(eDLInfo,gszProcName,"Version: %d.%2.2d Processing parameters:",cProgVer/100,cProgVer%100);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Processing mode: Standard");
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Input CSV element loci file: '%s'",szInLociFile);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Input bioseq feature file: '%s'",szInFeatFile);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Output to file: '%s'",szRsltsFile);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Minimum element length: %d",iMinLength);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Maximum element length: %d",iMaxLength);
    gDiagnostics.DiagOutMsgOnly(eDLInfo,"Minimum feature overlap length: %d",iMinOverlap);

#ifdef _WIN32
    SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);
#endif
    // processing here...
    Rslt = Process((etProcMode)iMode,iMinLength,iMaxLength,iMinOverlap,szInLociFile,szInFeatFile,szRsltsFile);
    gStopWatch.Stop();
    Rslt = Rslt >= 0 ? 0 : 1;
    gDiagnostics.DiagOut(eDLInfo,gszProcName,"Exit code: %d Total processing time: %s",Rslt,gStopWatch.Read());
    exit(Rslt);
    }
else
    {
    arg_print_errors(stdout,end,gszProcName);
    arg_print_syntax(stdout,argtable,"\nend of help\n");
    exit(1);
    }
}

int
Process(etProcMode Mode,        // processing mode
        int MinLength,          // core elements must be of at least this length
        int MaxLength,          // and no longer than this length
        int MinOverlap,         // must overlap by at least this number of bases onto feature
        char *pszInLociFile,    // CSV file containing elements
        char *pszInFeatFile,    // file containing features
        char *pszRsltsFile)     // file to write results into
{
int NumFields;
int Rslt;
int SrcID;
char *pszChrom;
char *pszElType;
char *pszRefSpecies;
char *pszFieldVal;
char szFeatHit[80];
char szLineBuff[cMaxOutBuff];
int BuffLen;
int StartLoci;
int EndLoci;
int Len;
int NumAccepted;
int NumProcessed;
int NumUnmapped;
int NumUnderLen;
int NumOverLen;
int hRsltFile = -1;
CBEDfile *pFeatFile = NULL;
CCSVFile *pCSV = new CCSVFile;
if(pCSV == NULL)
    {
    gDiagnostics.DiagOut(eDLFatal,gszProcName,"Unable to instantiate CCSVfile");
    return(eBSFerrObj);
    }
if((Rslt=pCSV->Open(pszInLociFile))!=eBSFSuccess)
    {
    while(pCSV->NumErrMsgs())
        gDiagnostics.DiagOut(eDLFatal,gszProcName,pCSV->GetErrMsg());
    gDiagnostics.DiagOut(eDLFatal,gszProcName,"Unable to open file: %s",pszInLociFile);
    delete pCSV;
    return(Rslt);
    }
if((pFeatFile = new CBEDfile) == NULL)
    {
    gDiagnostics.DiagOut(eDLFatal,gszProcName,"Unable to instantiate CBEDfile object");
    delete pCSV;
    return(eBSFerrObj);
    }
if((Rslt = pFeatFile->Open(pszInFeatFile))!=eBSFSuccess)
    {
    while(pFeatFile->NumErrMsgs())
        gDiagnostics.DiagOut(eDLFatal,gszProcName,pFeatFile->GetErrMsg());
    gDiagnostics.DiagOut(eDLFatal,gszProcName,"Unable to open bioseq feature file '%s'",pszInFeatFile);
    delete pCSV;
    delete pFeatFile;
    return(Rslt);
    }

#ifdef _WIN32
if((hRsltFile = open(pszRsltsFile, _O_RDWR | _O_BINARY | _O_SEQUENTIAL | _O_CREAT | _O_TRUNC, _S_IREAD | _S_IWRITE ))==-1)
#else
if((hRsltFile = open(pszRsltsFile, O_RDWR | O_CREAT | O_TRUNC, S_IREAD | S_IWRITE))==-1)
#endif
    {
    gDiagnostics.DiagOut(eDLFatal,gszProcName,"Unable to create %s - %s",pszRsltsFile,strerror(errno));
    delete pCSV;
    delete pFeatFile;
    return(eBSFerrCreateFile);
    }
gDiagnostics.DiagOut(eDLInfo,gszProcName,"Output file created/truncated: '%s'",pszRsltsFile);

NumUnmapped = 0;
NumUnderLen = 0;
NumOverLen = 0;
NumAccepted = 0;
NumProcessed = 0;
BuffLen = 0;
while((Rslt=pCSV->NextLine()) > 0)          // onto next line containing fields
    {
    NumFields = pCSV->GetCurFields();
    if(NumFields < 7)
        {
        gDiagnostics.DiagOut(eDLFatal,gszProcName,"Expected 7+ fields in '%s', GetCurFields() returned '%d'",pszInLociFile,NumFields);
        Rslt = eBSFerrFieldCnt;
        break;
        }
    if(!NumProcessed && pCSV->IsLikelyHeaderLine())
        continue;
    NumProcessed += 1;

    pCSV->GetInt(7,&Len);
    if(Len < MinLength)
        {
        NumUnderLen += 1;
        continue;
        }
    if(Len > MaxLength)
        {
        NumOverLen += 1;
        continue;
        }
    pCSV->GetInt(1,&SrcID);
    pCSV->GetText(2,&pszElType);
    pCSV->GetText(3,&pszRefSpecies);
    pCSV->GetText(4,&pszChrom);
    pCSV->GetInt(5,&StartLoci);
    pCSV->GetInt(6,&EndLoci);

    Rslt = Loci2Gene(Mode,'*',pFeatFile,pszChrom,StartLoci,EndLoci,MinOverlap,szFeatHit);
    if(Rslt == eBSFerrFeature)              // doesn't map onto any feature
        {
        NumUnmapped += 1;
        continue;
        }
    if(Rslt < eBSFSuccess)
        break;

    BuffLen += sprintf(&szLineBuff[BuffLen],"%d,\"%s\",\"%s\",\"%s\",%d,%d,%d",
                       SrcID,pszElType,pszRefSpecies,pszChrom,StartLoci,EndLoci,Len);
    if(BuffLen > sizeof(szLineBuff)/2)
        {
        CUtility::SafeWrite(hRsltFile,szLineBuff,BuffLen);
        BuffLen = 0;
        }
    for(int FieldIdx = 8; FieldIdx <= NumFields; FieldIdx++)
        {
        pCSV->GetText(FieldIdx,&pszFieldVal);
        if(pCSV->GetQuoted(FieldIdx))
            BuffLen += sprintf(&szLineBuff[BuffLen],",\"%s\"",pszFieldVal);
        else
            BuffLen += sprintf(&szLineBuff[BuffLen],",%s",pszFieldVal);
        if(BuffLen > sizeof(szLineBuff)/2)
            {
            CUtility::SafeWrite(hRsltFile,szLineBuff,BuffLen);
            BuffLen = 0;
            }
        }
    BuffLen += sprintf(&szLineBuff[BuffLen],",\"%s\"\n",szFeatHit);
    if(BuffLen > sizeof(szLineBuff)/2)
        {
        CUtility::SafeWrite(hRsltFile,szLineBuff,BuffLen);
        BuffLen = 0;
        }
    NumAccepted += 1;
    }
if(Rslt >= eBSFSuccess)
    {
    gDiagnostics.DiagOut(eDLInfo,gszProcName,"Elements accepted: %d, Processed: %d, Unmapped: %d, UnderLen: %d, OverLen: %d",
                         NumAccepted,NumProcessed,NumUnmapped,NumUnderLen,NumOverLen);
    }
if(BuffLen)
    CUtility::SafeWrite(hRsltFile,szLineBuff,BuffLen);
close(hRsltFile);
delete pCSV;
delete pFeatFile;
return(Rslt >= 0 ? NumAccepted : Rslt);     // was 'Rslt < 0 ? NumAccepted : Rslt', which masked errors as success
}

int                                 // eBSFSuccess, eBSFerrFeature or eBSFerrChrom
Loci2Gene(etProcMode Mode,          // processing mode
          char Strand,              // loci are on this strand
          CBEDfile *pBED,           // BED file containing features or genes to map loci onto
          char *pszChrom,           // chromosome
          int LociStart,            // loci start
          int LociEnd,              // loci end
          int MinOverlap,           // minimum number of bases required to overlap
          char *pszGene)            // where to return gene loci maps onto
{
int Ith;
int ChromID;
int FeatID;
int InGeneFeatID;
int GeneStart;
int GeneEnd;
int InGeneStart;
int InGeneEnd;
int OverlapStart;
int OverlapEnd;

// length must be at least the min overlap!
if((1 + LociEnd - LociStart) < MinOverlap)
    return(eBSFerrFeature);
*pszGene = '\0';

// is the chromosome known?
if((ChromID = pBED->LocateChromIDbyName(pszChrom)) < 1)
    return(eBSFerrFeature);

// processing may be strand specific...
pBED->SetStrand(Strand);

FeatID = 0;
InGeneFeatID = 0;
// if loci is intragenic then...
if(pBED->InAnyFeature(ChromID,LociStart,LociEnd))
    {
    // iterate over all genes loci maps onto and use shortest gene completely containing loci
    Ith = 1;
    do {
        FeatID = pBED->LocateFeatureIDinRangeOnChrom(ChromID,LociStart,LociEnd,Ith++);
        if(FeatID < 1)
            continue;
        pBED->GetFeature(FeatID,NULL,NULL,&GeneStart,&GeneEnd);
        if((1 + GeneEnd - GeneStart) < MinOverlap)  // feature must be at least min overlap length
            continue;
        // check if at least minimum overlap onto feature
        if(GeneStart < LociStart)
            OverlapStart = LociStart;
        else
            OverlapStart = GeneStart;
        if(GeneEnd > LociEnd)
            OverlapEnd = LociEnd;
        else
            OverlapEnd = GeneEnd;
        if((1 + OverlapEnd - OverlapStart) < MinOverlap)
            continue;
        // go for smallest overlapped feature
        if(InGeneFeatID == 0)               // if 1st feature to overlap
            {
            InGeneFeatID = FeatID;
            InGeneStart = GeneStart;
            InGeneEnd = GeneEnd;
            }
        else
            {
            if(GeneStart <= LociStart && GeneEnd >= LociEnd)
                {
                if(GeneStart >= InGeneStart && GeneEnd <= InGeneEnd)
                    {
                    InGeneFeatID = FeatID;
                    InGeneStart = GeneStart;
                    InGeneEnd = GeneEnd;
                    }
                }
            }
        }
    while(FeatID > 0);
    }
if(InGeneFeatID)
    FeatID = InGeneFeatID;
else
    return(eBSFerrFeature);                 // unable to associate

pBED->GetFeature(FeatID,pszGene,NULL,&GeneStart,&GeneEnd);
return(eBSFSuccess);
}
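The selection logic in `Loci2Gene` — require a minimum overlap, then prefer the smallest feature completely containing the loci — can be sketched language-independently. This Python version is our simplification (it ignores strand handling and chromosome lookup) but mirrors the overlap arithmetic of the C++ code:

```python
def map_loci_to_feature(loci_start, loci_end, features, min_overlap=1):
    """features: list of (name, start, end); returns the name of the best feature or None.

    Mirrors Loci2Gene: a candidate must overlap the loci by at least
    min_overlap bases; among features completely containing the loci,
    the smallest one wins.
    """
    if (1 + loci_end - loci_start) < min_overlap:  # loci itself too short
        return None
    best = None
    for name, start, end in features:
        if (1 + end - start) < min_overlap:        # feature too short
            continue
        ov_start = max(start, loci_start)          # clip overlap to feature bounds
        ov_end = min(end, loci_end)
        if (1 + ov_end - ov_start) < min_overlap:  # insufficient overlap
            continue
        if best is None:                           # first overlapping feature
            best = (name, start, end)
        elif start <= loci_start and end >= loci_end:
            # completely containing: keep the smaller of the containing features
            if start >= best[1] and end <= best[2]:
                best = (name, start, end)
    return best[0] if best else None


print(map_loci_to_feature(100, 200, [('geneA', 0, 1000), ('geneB', 50, 300)]))  # geneB
```

In the example, both genes contain the loci 100-200, but geneB (50-300) is smaller than geneA (0-1000), so it wins, just as the C++ loop replaces `InGeneFeatID` with each smaller containing feature it encounters.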
Anyone who investigates the topic of gold-based money for a little while soon runs into an incredible diversity of opinion, accompanied by various proposals that are so divergent as to seem to be coming from different planets. Along with this comes a bit of personal backbiting as each one likes to claim his solution is best. Many of the proposals are rather contrary to the actual practice of gold standard systems worldwide, during the period 1800-1971. Others follow that example closely. The Pragmatists. I come from the “pragmatist” tradition. The pragmatists feel that simply to put the world’s money back on a gold basis, something like an updated and corrected version of the Bretton Woods period 1944-1971, would be such a giant accomplishment that it is silly to make things more difficult by introducing various new demands. The pragmatists are mostly happy working with monopoly central banks as they exist today, including the Federal Reserve. They just want the Federal Reserve to act more like the Bank of England and other successful central banks did during the 19th century, rather than indulging in today’s seat-of-the-pants funny-money acrobatics. Other people find that monopoly currency issuers like the Federal Reserve are inherently prone to corruption, and would thus undermine any new arrangements. The history of the Federal Reserve since 1913 certainly shows a trend towards corruption, although the history of the Bank of England 1694-1913, as an effective monopoly currency issuer, was a shining example of long-term discipline. The Free Bankers. “Free banking” is really a name for “multiple independent currency issuers,” which may or may not have anything to do with the commercial banking business. (They were combined in the past, but I think that any future “free banking” system should have currency issuers that are segregated from commercial banking.) 
During the 1789-1860 period in the United States, hundreds of independent banks issued paper currency, all of them using the unified “dollar” standard of 23.2 troy grains of gold. A modified Free Banking system was in effect after 1863, effectively continuing to the end of the 1930s, although the Federal Reserve became more and more dominant after its introduction in 1913. The Free Bankers argue that a monopoly currency issuer is inherently prone to corruption and abandonment of gold-standard principles. People have nowhere else to go. Diversifying currency issuance ensures that any institution that becomes unreliable will find itself shunned in favor of better-run institutions. Also, the currency would be theoretically free of control by any one party. If you “End the Fed,” you have to replace it with something, and a Free Banking alternative (perhaps as a parallel currency) is commonly mentioned as a solution. I did so myself in Gold: the Monetary Polaris (2013). The Government Currency Issuers. Historically, currency-issuing central banks (beginning with the Bank of England in 1694, which set the pattern going forward) were set up by private bankers, with great ambition and effort, basically as profit-making institutions. Issuing currency can be a wildly profitable business, especially when done on a large scale. Today, central banks are either supposedly nationalized, or the profits from currency issuance are officially remitted to national governments. But some people think this is a bit of a ruse. In any case, being able to control the central bank allows great influence in all sorts of matters beyond profits alone. Some people thus argue that governments themselves should issue currency directly. Thus, all the profits and advantage would go toward the public good, and the currency would be free from the influence of the bankers.
This notion can teeter dangerously close to “Modern Monetary Theory” arguments, but in principle the government would adhere to gold standard discipline and not overissue currency. This was particularly common among the governments of the American Colonies from 1690 to 1789. They issued paper currencies directly. Unfortunately, despite a nominal link to silver coin, they also blew up their currencies over and over again via overissuance, culminating finally in the hyperinflation of the Continental Dollar in 1776-1782. China had a similar history during 1100-1450. Historically, governments have tended to make a mess of things when they have been given direct control of the currency. The 100% Reservists. Given the difficulties of leaving currency issuance to either the bankers or governments themselves, some have argued that payment should be made entirely with gold bullion itself. In essence, mining companies make the money. I have chided some of these arguments unfairly with remarks like “I can’t really see myself paying for my Amazon.com order by Fedexing a gold coin to Seattle.” But, this was a misrepresentation. Typically, these proposals involve some kind of payment system where each banknote or checkable deposit account corresponds to equivalent bullion held in storage. Thus it is a “100% reserve” system. This was actually the norm throughout history, from 2600 B.C. until about 1800. As far back as Babylon of 1900 B.C., and earlier, monetary payment could be made in gold or silver bullion, or also as a deposit transfer (basically a checking account) recorded at the Temple, which served as the central bullion depository and payments clearinghouse. More recently, the 17th-century Bank of Amsterdam (in principle) served as the bullion payment clearinghouse, in a Dutch monetary system otherwise based on metallic coinage.
Banknote issuance after 1800 was commonly left to private bankers and then central bankers, who tended to maximize their profit by using debt as the primary reserve asset. 100% Reserve systems tend to be particularly robust, and immune to bank insolvency or currency overissuance. Such a system also removes the issue of who receives the profits of currency issuance, since there are effectively no profits. Variants of this notion are already active today in institutions like GoldMoney (beginning 2001) and its successor, BitGold, which is run by the former CEO of Paypal Canada and is listed on the Toronto Stock Exchange. Many other proposals are already on the table, sometimes related to recent legislation liberalizing the use of gold as money in states like Utah. These “100% Reserve” systems may use banknotes, or may be “wholly electronic” like GoldMoney or BitGold, which is a funny term for a system that could also be called “wholly metallic.” In any case, the system can work well and eliminates many issues related to either central banks or government currency issuers. The main problem is that it could require very large amounts of gold to be held in reserve if applied on a global scale. However, smaller-scale systems could certainly be introduced with no particular issues, and already have been. The problem of small denominations (a coin below about a tenth of an ounce, or 3 grams, becomes impractical) can perhaps be resolved with some new technologies, for example “banknotes” of durable gold-impregnated plastic in denominations down to fractions of a gram, which function as full-weight coins. (The contained metal is equivalent to the face value.) Plus, they look really cool.

It took me a long time to realize that all of these notions have validity and could be made into a practical working system. Each has a focus on certain issues, which then leads to certain conclusions.
Unfortunately, each of these arguments could also be used to build an impractical and unworkable system, and there have been plenty of examples of that over the years, too. Eventually, we will need a community of people who have mastered all of these viewpoints: embrace them all in principle, and then choose the one that seems most appropriate. The United States, with its libertarian tradition, might opt for the Free Banking or 100% Reserve model (or both simultaneously, as a Free Banking model allows for many variations). China, with its authoritarian tradition, might opt for a government-owned and -controlled monopoly central bank that operates similarly to the Bank of England in the 19th century. Hong Kong, Algeria or Bermuda might piggyback upon another international gold-standard currency, with a currency-board system that does not require any gold reserves at all. Each government will find the solution that makes the most sense. But, to make an informed decision, you should be familiar with the options.
/**
 * Abstract class for anchor {@code <Text>}-ish spannable views, such as {@link TextView} or {@link
 * TextEdit}.
 *
 * <p>This is a "shadowing" view manager, which means that the {@link NativeViewHierarchyManager}
 * will NOT manage children of native {@link TextView} instances instantiated by this manager.
 * Instead we use a {@link ReactBaseTextShadowNode} hierarchy to calculate a {@link Spannable} text
 * representing the whole text subtree.
 */
public abstract class ReactTextAnchorViewManager<T extends View, C extends ReactBaseTextShadowNode>
    extends BaseViewManager<T, C> {

  private static final int[] SPACING_TYPES = {
    Spacing.ALL, Spacing.LEFT, Spacing.RIGHT, Spacing.TOP, Spacing.BOTTOM,
  };

  // maxLines can only be set on the master view (block); it doesn't really make sense to set it on a span
  @ReactProp(name = ViewProps.NUMBER_OF_LINES, defaultInt = ViewDefaults.NUMBER_OF_LINES)
  public void setNumberOfLines(ReactTextView view, int numberOfLines) {
    view.setNumberOfLines(numberOfLines);
  }

  @ReactProp(name = ViewProps.ELLIPSIZE_MODE)
  public void setEllipsizeMode(ReactTextView view, @Nullable String ellipsizeMode) {
    if (ellipsizeMode == null || ellipsizeMode.equals("tail")) {
      view.setEllipsizeLocation(TextUtils.TruncateAt.END);
    } else if (ellipsizeMode.equals("head")) {
      view.setEllipsizeLocation(TextUtils.TruncateAt.START);
    } else if (ellipsizeMode.equals("middle")) {
      view.setEllipsizeLocation(TextUtils.TruncateAt.MIDDLE);
    } else {
      throw new JSApplicationIllegalArgumentException("Invalid ellipsizeMode: " + ellipsizeMode);
    }
  }

  @ReactProp(name = ViewProps.TEXT_ALIGN_VERTICAL)
  public void setTextAlignVertical(ReactTextView view, @Nullable String textAlignVertical) {
    if (textAlignVertical == null || "auto".equals(textAlignVertical)) {
      view.setGravityVertical(Gravity.NO_GRAVITY);
    } else if ("top".equals(textAlignVertical)) {
      view.setGravityVertical(Gravity.TOP);
    } else if ("bottom".equals(textAlignVertical)) {
      view.setGravityVertical(Gravity.BOTTOM);
    } else if ("center".equals(textAlignVertical)) {
      view.setGravityVertical(Gravity.CENTER_VERTICAL);
    } else {
      throw new JSApplicationIllegalArgumentException(
          "Invalid textAlignVertical: " + textAlignVertical);
    }
  }

  @ReactProp(name = "selectable")
  public void setSelectable(ReactTextView view, boolean isSelectable) {
    view.setTextIsSelectable(isSelectable);
  }

  @ReactProp(name = "selectionColor", customType = "Color")
  public void setSelectionColor(ReactTextView view, @Nullable Integer color) {
    if (color == null) {
      view.setHighlightColor(
          DefaultStyleValuesUtil.getDefaultTextColorHighlight(view.getContext()));
    } else {
      view.setHighlightColor(color);
    }
  }

  @ReactPropGroup(
      names = {
        ViewProps.BORDER_RADIUS,
        ViewProps.BORDER_TOP_LEFT_RADIUS,
        ViewProps.BORDER_TOP_RIGHT_RADIUS,
        ViewProps.BORDER_BOTTOM_RIGHT_RADIUS,
        ViewProps.BORDER_BOTTOM_LEFT_RADIUS
      },
      defaultFloat = YogaConstants.UNDEFINED)
  public void setBorderRadius(ReactTextView view, int index, float borderRadius) {
    if (!YogaConstants.isUndefined(borderRadius)) {
      borderRadius = PixelUtil.toPixelFromDIP(borderRadius);
    }
    if (index == 0) {
      view.setBorderRadius(borderRadius);
    } else {
      view.setBorderRadius(borderRadius, index - 1);
    }
  }

  @ReactProp(name = "borderStyle")
  public void setBorderStyle(ReactTextView view, @Nullable String borderStyle) {
    view.setBorderStyle(borderStyle);
  }

  @ReactPropGroup(
      names = {
        ViewProps.BORDER_WIDTH,
        ViewProps.BORDER_LEFT_WIDTH,
        ViewProps.BORDER_RIGHT_WIDTH,
        ViewProps.BORDER_TOP_WIDTH,
        ViewProps.BORDER_BOTTOM_WIDTH,
      },
      defaultFloat = YogaConstants.UNDEFINED)
  public void setBorderWidth(ReactTextView view, int index, float width) {
    if (!YogaConstants.isUndefined(width)) {
      width = PixelUtil.toPixelFromDIP(width);
    }
    view.setBorderWidth(SPACING_TYPES[index], width);
  }

  @ReactPropGroup(
      names = {
        "borderColor", "borderLeftColor", "borderRightColor", "borderTopColor", "borderBottomColor"
      },
      customType = "Color")
  public void setBorderColor(ReactTextView view, int index, Integer color) {
    float rgbComponent =
        color == null ? YogaConstants.UNDEFINED : (float) ((int) color & 0x00FFFFFF);
    float alphaComponent = color == null ? YogaConstants.UNDEFINED : (float) ((int) color >>> 24);
    view.setBorderColor(SPACING_TYPES[index], rgbComponent, alphaComponent);
  }

  @ReactProp(name = ViewProps.INCLUDE_FONT_PADDING, defaultBoolean = true)
  public void setIncludeFontPadding(ReactTextView view, boolean includepad) {
    view.setIncludeFontPadding(includepad);
  }

  @ReactProp(name = "disabled", defaultBoolean = false)
  public void setDisabled(ReactTextView view, boolean disabled) {
    view.setEnabled(!disabled);
  }
}
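The `setBorderColor` handler above packs a nullable color int into two floats: it masks the low 24 bits for the RGB component and unsigned-shifts the top byte out for the alpha component. A minimal Python sketch of that split (the helper name is mine, and `math.nan` stands in for `YogaConstants.UNDEFINED`):

```python
import math


def split_argb(color):
    """Split a packed 0xAARRGGBB int into (rgb, alpha) floats, mirroring the
    mask/shift in setBorderColor. None maps to an 'undefined' sentinel."""
    if color is None:
        return math.nan, math.nan
    rgb = float(color & 0x00FFFFFF)      # low 24 bits: RRGGBB
    alpha = float((color >> 24) & 0xFF)  # top byte: AA (Java's >>> 24)
    return rgb, alpha
```

For example, `split_argb(0x80FF0000)` yields the RGB component for pure red and an alpha of 128, matching what the view manager would forward to `setBorderColor`.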
// Fill out your copyright notice in the Description page of Project Settings.

#include "Widget_PlayerDash.h"
#include "../../Core/WidgetAni_Mng.h"
#include "Runtime/UMG/Public/Animation/WidgetAnimation.h"
#include "Styling/SlateBrush.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Materials/MaterialInterface.h"
#include "Kismet/KismetMathLibrary.h"

void UWidget_PlayerDash::NativeConstruct()
{
	Super::NativeConstruct();

	m_pGage = Cast<UProgressBar>(GetWidgetFromName(TEXT("Gage")));
	if (m_pGage == nullptr)
	{
		ULOG(TEXT("Error Gage"));
	}

	m_pGageText = Cast<UTextBlock>(GetWidgetFromName(TEXT("GageTxt")));
	if (m_pGageText == nullptr)
	{
		ULOG(TEXT("Error GageText"));
	}

	m_pWidgetAni = NewObject<UWidgetAni_Mng>();
	if (m_pWidgetAni != nullptr)
	{
		m_pWidgetAni->Init(this);
	}
	else
	{
		ULOG(TEXT("WidgetAniMng is nullptr"));
		return;
	}

	m_bGage = true;
}

void UWidget_PlayerDash::NativeTick(const FGeometry& MyGeometry, float InDeltaTime)
{
	Super::NativeTick(MyGeometry, InDeltaTime);

	// Ease the gauge toward full (1.0) or empty (0.0) depending on m_bGage.
	if (m_bGage)
	{
		float fVal = FMath::FInterpTo(m_pGage->Percent, 1.0f, InDeltaTime, 8.0f);
		SetValue(fVal);
	}
	else
	{
		float fVal = FMath::FInterpTo(m_pGage->Percent, 0.0f, InDeltaTime, 8.0f);
		SetValue(fVal);
	}
}

void UWidget_PlayerDash::SetValue(float fValue)
{
	m_pGage->SetPercent(fValue);
}

void UWidget_PlayerDash::SetShow(bool bShow)
{
	if (bShow)
	{
		m_pGageText->SetText(FText::FromString("F"));
		m_pWidgetAni->SetPlayAnimation("GageText");
		m_pWidgetAni->SetPlayAnimation("GageAdd");
	}
	else
	{
		m_pGageText->SetText(FText::FromString("E"));
		m_pWidgetAni->SetPlayAnimation("GageText");
		m_pWidgetAni->SetPlayAnimation("GageRemove"));
	}

	m_bGage = bShow;
}
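`NativeTick` above drives the bar with `FMath::FInterpTo`, which moves a fraction of the remaining distance toward the target each frame. A rough Python sketch of that behavior, assuming the usual UE semantics (snap when very close, clamp the step so it never overshoots); this is an illustrative reimplementation, not engine source:

```python
def finterp_to(current, target, delta_time, interp_speed):
    """Sketch of FMath::FInterpTo: step a constant fraction of the remaining
    distance each tick, clamped so the value never overshoots the target."""
    if interp_speed <= 0.0:
        return target
    dist = target - current
    if dist * dist < 1.0e-8:  # close enough: snap to the target
        return target
    # delta_time * interp_speed is the fraction of the gap covered this tick.
    step = dist * min(delta_time * interp_speed, 1.0)
    return current + step
```

With `interp_speed = 8.0` at 60 fps, the gauge covers about 13% of the remaining gap per tick, giving the smooth ease-out fill seen in the widget.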
African Identity in Asia: Cultural Effects of Forced Migration

In contrast to the dispersion of slaves across the Atlantic, African movement to Asia has received scant attention. However, Britain's 2007 commemoration of the bicentennial of its abolition of the trans-Atlantic slave trade has now stimulated interest in other African migrations. In a book that encompasses the strong military impact made by even first-generation African migrants in Asia, as well as the descendants of the royal Africans who governed Sachin and Janjira (India), Shihan de Silva Jayasuriya further demonstrates that African music and dance have not only survived the brutalities of forced migration but have also contributed to the local Middle Eastern and South Asian arts scenes. Combining historical accounts, both documented and oral, this groundbreaking work explores - through case studies, and through the processes of assimilation, social mobility, and marginalization - the silent history and conflicting identity of Asia's Africans.
Adaptivity of Tuning Functions in a Generic Recurrent Network Model of a Cortical Hypercolumn The representation of orientation information in the adult visual cortex is plastic as exemplified by phenomena such as perceptual learning or attention. Although these phenomena operate on different time scales and give rise to different changes in the response properties of neurons, both lead to an improvement in visual discrimination or detection tasks. If, however, optimal performance is indeed the goal, the question arises as to why the changes in neuronal response properties are so different. Here, we hypothesize that these differences arise naturally if optimal performance is achieved by means of different mechanisms. To evaluate this hypothesis, we set up a recurrent network model of a visual cortical hypercolumn and asked how each of four different parameter sets (strength of afferent and recurrent synapses, neuronal gains, and additive background inputs) must be changed to optimally improve the encoding accuracy of a particular set of visual stimuli. We find that the predicted changes in the population responses and the tuning functions were different for each set of parameters, hence were strongly dependent on the plasticity mechanism that was operative. An optimal change in the strength of the recurrent connections, for example, led to changes in the response properties that are similar to the changes observed in perceptual learning experiments. An optimal change in the neuronal gains led to changes mimicking neural effects of attention. Assuming the validity of the optimal encoding hypothesis, these model predictions can be used to disentangle the mechanisms of perceptual learning, attention, and other adaptation phenomena.
/* -*-c++-*- OpenSceneGraph - Copyright (C) 1998-2006 <NAME>
 *
 * This library is open source and may be redistributed and/or modified under
 * the terms of the OpenSceneGraph Public License (OSGPL) version 0.0 or
 * (at your option) any later version. The full license is in LICENSE file
 * included with this distribution, and on the openscenegraph.org website.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * OpenSceneGraph Public License for more details.
 */

//osgManipulator - Copyright (C) 2007 <NAME>.

#include <osgManipulator/TabBoxDragger>

#include <osg/ShapeDrawable>
#include <osg/Geometry>
#include <osg/LineWidth>
#include <osg/Quat>

using namespace osgManipulator;

TabBoxDragger::TabBoxDragger()
{
    for (int i=0; i<6; ++i)
    {
        _planeDraggers.push_back(new TabPlaneDragger());
        addChild(_planeDraggers[i].get());
        addDragger(_planeDraggers[i].get());
    }

    // +Y face of the unit box: no rotation needed.
    {
        _planeDraggers[0]->setMatrix(osg::Matrix::translate(osg::Vec3(0.0,0.5,0.0)));
    }

    // -Y face.
    {
        osg::Quat rotation; rotation.makeRotate(osg::Vec3(0.0f, -1.0f, 0.0f), osg::Vec3(0.0f, 1.0f, 0.0f));
        _planeDraggers[1]->setMatrix(osg::Matrix(rotation)*osg::Matrix::translate(osg::Vec3(0.0,-0.5,0.0)));
    }

    // -Z face.
    {
        osg::Quat rotation; rotation.makeRotate(osg::Vec3(0.0f, 0.0f, 1.0f), osg::Vec3(0.0f, 1.0f, 0.0f));
        _planeDraggers[2]->setMatrix(osg::Matrix(rotation)*osg::Matrix::translate(osg::Vec3(0.0,0.0,-0.5)));
    }

    // +Z face.
    {
        osg::Quat rotation; rotation.makeRotate(osg::Vec3(0.0f, 1.0f, 0.0f), osg::Vec3(0.0f, 0.0f, 1.0f));
        _planeDraggers[3]->setMatrix(osg::Matrix(rotation)*osg::Matrix::translate(osg::Vec3(0.0,0.0,0.5)));
    }

    // -X face.
    {
        osg::Quat rotation; rotation.makeRotate(osg::Vec3(1.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 1.0f, 0.0f));
        _planeDraggers[4]->setMatrix(osg::Matrix(rotation)*osg::Matrix::translate(osg::Vec3(-0.5,0.0,0.0)));
    }

    // +X face.
    {
        osg::Quat rotation; rotation.makeRotate(osg::Vec3(0.0f, 1.0f, 0.0f), osg::Vec3(1.0f, 0.0f, 0.0f));
        _planeDraggers[5]->setMatrix(osg::Matrix(rotation)*osg::Matrix::translate(osg::Vec3(0.5,0.0,0.0)));
    }

    setParentDragger(getParentDragger());
}

TabBoxDragger::~TabBoxDragger()
{
}

void TabBoxDragger::setupDefaultGeometry()
{
    for (unsigned int i=0; i<_planeDraggers.size(); ++i)
        _planeDraggers[i]->setupDefaultGeometry(false);
}

void TabBoxDragger::setPlaneColor(const osg::Vec4& color)
{
    for (unsigned int i=0; i<_planeDraggers.size(); ++i)
        _planeDraggers[i]->setPlaneColor(color);
}
// Code generated by protoc-gen-go-errors. DO NOT EDIT.

package errors

import (
	fmt "fmt"

	errors "github.com/go-kratos/kratos/v2/errors"
)

// This is a compile-time assertion to ensure that this generated file
// is compatible with the kratos package it is being compiled against.
const _ = errors.SupportPackageIsVersion1

func IsUnknownError(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_UNKNOWN_ERROR.String() && e.Code == 500
}

func ErrorUnknownError(format string, args ...interface{}) *errors.Error {
	return errors.New(500, AuthErrorReason_UNKNOWN_ERROR.String(), fmt.Sprintf(format, args...))
}

func IsBusinessError(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_BUSINESS_ERROR.String() && e.Code == 400
}

func ErrorBusinessError(format string, args ...interface{}) *errors.Error {
	return errors.New(400, AuthErrorReason_BUSINESS_ERROR.String(), fmt.Sprintf(format, args...))
}

func IsNotLogin(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_NOT_LOGIN.String() && e.Code == 401
}

func ErrorNotLogin(format string, args ...interface{}) *errors.Error {
	return errors.New(401, AuthErrorReason_NOT_LOGIN.String(), fmt.Sprintf(format, args...))
}

func IsNotAuthority(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_NOT_AUTHORITY.String() && e.Code == 403
}

func ErrorNotAuthority(format string, args ...interface{}) *errors.Error {
	return errors.New(403, AuthErrorReason_NOT_AUTHORITY.String(), fmt.Sprintf(format, args...))
}

func IsConflictError(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_CONFLICT_ERROR.String() && e.Code == 409
}

func ErrorConflictError(format string, args ...interface{}) *errors.Error {
	return errors.New(409, AuthErrorReason_CONFLICT_ERROR.String(), fmt.Sprintf(format, args...))
}

func IsParamsError(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_PARAMS_ERROR.String() && e.Code == 422
}

func ErrorParamsError(format string, args ...interface{}) *errors.Error {
	return errors.New(422, AuthErrorReason_PARAMS_ERROR.String(), fmt.Sprintf(format, args...))
}

func IsPreconditionRequired(err error) bool {
	if err == nil {
		return false
	}
	e := errors.FromError(err)
	return e.Reason == AuthErrorReason_PRECONDITION_REQUIRED.String() && e.Code == 428
}

func ErrorPreconditionRequired(format string, args ...interface{}) *errors.Error {
	return errors.New(428, AuthErrorReason_PRECONDITION_REQUIRED.String(), fmt.Sprintf(format, args...))
}
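Every generated helper pair above follows one template: the `Error*` constructor bakes an HTTP-style code and a machine-readable reason string into the error, and the matching `Is*` predicate checks both fields after a nil guard. A language-neutral sketch of that pattern in Python (the `AppError` class and helper names are illustrative, not the kratos API):

```python
class AppError(Exception):
    """Illustrative stand-in for a kratos-style error: an HTTP-ish code plus
    a machine-readable reason, matched on both fields together."""

    def __init__(self, code, reason, message):
        super().__init__(message)
        self.code = code
        self.reason = reason


def error_not_login(fmt, *args):
    # Mirrors ErrorNotLogin: fixed code 401 and reason NOT_LOGIN.
    return AppError(401, "NOT_LOGIN", fmt % args)


def is_not_login(err):
    # Mirrors IsNotLogin: nil guard first, then match code AND reason.
    if err is None:
        return False
    return isinstance(err, AppError) and err.reason == "NOT_LOGIN" and err.code == 401
```

Matching on code and reason together is the point of the pattern: two errors can share an HTTP status while remaining distinguishable by reason.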
Bilateral deep brain stimulation (DBS) of the subthalamic nucleus (STN) or the globus pallidus interna (GPi) for treatment of advanced Parkinson's disease. Parkinson's disease is a chronic, progressive neurodegenerative disease characterized by resting tremor, rigidity, bradykinesia, and postural instability. No known treatment halts the progression of Parkinson's disease and there is no cure. Although pharmacologic treatment with levodopa and adjunctive drugs can usually restore smooth motor function for up to 5-10 years after onset, effectiveness gradually diminishes with time. Eventually, most patients experience drug-related complications, such as motor fluctuations and dyskinesias. The most severe motor complications of levodopa tend to occur among patients with early-onset (i.e., before age 40) Parkinson's disease. Because the degenerative nature of Parkinson's disease is not restricted solely to the dopaminergic systems, the brain is affected more globally as the disease advances. Thus, symptoms that are unresponsive to dopamine-active medications ultimately develop. Such symptoms include dementia, dysautonomia, and motor symptoms that affect speech, swallowing, and gait, as well as sleep disturbances, fatigue, and depression. Deep brain stimulation (DBS), a newer surgical treatment for Parkinson's disease, employs high-frequency stimulation of a targeted region of the brain. Introduced in the late 1980s by Benabid and colleagues in France, DBS is a surgical procedure consisting of the placement of an electrode or electrodes into one of several possible targets in the brain. The electrode is then connected to a computerized pulse generator that is implanted subcutaneously, in a manner similar to that used for a pacemaker. Stimulation parameters are adjusted to maximize therapeutic effects.
Currently, three possible sites may be selected as targets for DBS treatment of Parkinson's disease: the ventralis intermediate nucleus of the thalamus (Vim), the globus pallidus pars interna (GPi), and the subthalamic nucleus (STN). Of these, only the device for unilateral chronic DBS of the ventralis intermediate nucleus (Vim) of the thalamus has received premarket application (PMA) approval from the U.S. Food and Drug Administration (FDA) for treatment of patients with tremor-dominant Parkinson's disease or other tremor disorders. Because it is associated with a higher incidence of speech, swallowing, and cognitive dysfunction, bilateral DBS of the Vim is seldom performed. In December 1997, the Blue Cross and Blue Shield Association (BCBSA) Medical Advisory Panel (MAP) found that unilateral DBS of the thalamus for patients with disabling, medically unresponsive tremor due to essential tremor or Parkinson's disease met the Technology Evaluation Center (TEC) criteria. More recent evidence suggests that bilateral DBS of the GPi or the STN may alleviate the entire constellation of parkinsonian symptoms (tremor,
def add_annotations(samples: List[dict]) -> List[dict]:
    # Note: despite the List[dict] annotation, each sample is accessed as an
    # object exposing a `losses` dict and a `filepath` path attribute.
    for sample in samples:
        text = ""
        for key in sample.losses.keys():
            if "loss" in key:
                text += f"{key}: {round(sample.losses[key], 5)}\n"
        text += f"IMG: {sample.filepath.name}"
        sample.losses["text"] = text
    return samples
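Assuming each sample is an object exposing a `losses` dict and a `filepath` (dataset-tool samples in the FiftyOne style, which is an assumption here since the `List[dict]` annotation disagrees with the attribute access), a stand-in object is enough to exercise the function. The function body is repeated inside the sketch, with the annotation dropped, so the demo runs standalone:

```python
from pathlib import Path
from types import SimpleNamespace


def add_annotations(samples):
    # Repeated from above so this demo is self-contained.
    for sample in samples:
        text = ""
        for key in sample.losses.keys():
            if "loss" in key:
                text += f"{key}: {round(sample.losses[key], 5)}\n"
        text += f"IMG: {sample.filepath.name}"
        sample.losses["text"] = text
    return samples


# Hypothetical stand-in sample: anything with a `losses` dict and a `filepath`.
sample = SimpleNamespace(
    losses={"cls_loss": 0.123456, "box_loss": 0.5, "score": 0.9},
    filepath=Path("/data/img_001.jpg"),
)

out = add_annotations([sample])[0]
print(out.losses["text"])
# cls_loss: 0.12346
# box_loss: 0.5
# IMG: img_001.jpg
```

Only keys containing "loss" make it into the annotation text (so `score` is skipped), and the rounded values are followed by the image's file name.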
An Artificially Induced Planktothrix rubescens Surface Bloom in a Small Kettle Lake in Southern Ontario Compared to Blooms World-wide

ABSTRACT To combat hypolimnetic anoxia and sediment phosphorus release in a small, mesotrophic kettle lake on the Oak Ridges Moraine north of Metropolitan Toronto, Southern Ontario, oxygenation and aeration were applied to the hypolimnion alternately during the summer of 1998 until mid-November and then to the entire water column until the end of December. This treatment coincided with the proliferation of a toxic strain of the purple cyanobacterium Planktothrix rubescens from almost undetectable values to bloom conditions under ice in the following winter and spring. Although small numbers of P. rubescens had been detected during several years before the treatment, prolonged artificial mixing in the fall and winter of 1998 distributed numerous filaments throughout the water column and to the surface when light was suitably low for these algae to survive and grow. The algae were supported by simultaneous entrainment and mixing of nutrients from the enriched bottom water. Such blooms of P. rubescens and related blue-greens have been found in many lakes with comparable characteristics and during episodes similar to those of the study lake. These lakes were typically stratified, mesotrophic hardwater lakes, with phosphorus that has recently been increasing to levels above 20 µg L−1. Blooms occurred during periods of low light and enhanced mixing, in several cases after treating the lake with whole-lake aeration and mixing. Recommendations to prevent such blooms in Lake Wilcox are the discontinuation of artificial mixing during periods of natural destratification in the fall and winter, the prevention of further eutrophication, and the installation of an in-lake treatment, such as hypolimnetic withdrawal, to decrease internal phosphorus loading from anoxic sediment surfaces.
/**
 *
 */
package org.wltea.expression.format.reader;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.wltea.expression.format.Element;
import org.wltea.expression.format.Element.ElementType;
import org.wltea.expression.format.ExpressionReader;
import org.wltea.expression.format.FormatException;

/**
 * Reads operator-type tokens.
 * @author 林良益, 卓诗垚
 * @version 2.0
 * Sep 21, 2008
 */
public class OperatorTypeReader implements ElementReader {

	private static final Set<String> OPERATOR_WORDS = new HashSet<String>();
	static {
		OPERATOR_WORDS.add("+");
		OPERATOR_WORDS.add("-");
		OPERATOR_WORDS.add(">");
		OPERATOR_WORDS.add("<");
		OPERATOR_WORDS.add(">=");
		OPERATOR_WORDS.add("<=");
		OPERATOR_WORDS.add("==");
		OPERATOR_WORDS.add("!=");
		OPERATOR_WORDS.add("*");
		OPERATOR_WORDS.add("/");
		OPERATOR_WORDS.add("%");
		OPERATOR_WORDS.add("&&");
		OPERATOR_WORDS.add("||");
		OPERATOR_WORDS.add("!");
		OPERATOR_WORDS.add("#");
		OPERATOR_WORDS.add("?:");
		OPERATOR_WORDS.add("?");
		OPERATOR_WORDS.add(":");
	}

	/**
	 * Checks whether a string is a valid operator.
	 * @param tokenText
	 * @return
	 */
	public static boolean isOperatorWord(String tokenText) {
		return OPERATOR_WORDS.contains(tokenText);
	}

	/**
	 * Reads an operator-type Element from the stream.
	 * @param sr
	 * @return
	 * @throws FormatException thrown when the token is not a valid operator
	 * @throws IOException
	 */
	public Element read(ExpressionReader sr) throws FormatException, IOException {
		int index = sr.getCruuentIndex();
		StringBuffer sb = new StringBuffer();
		int b = sr.read();
		if (b == -1) {
			throw new FormatException("Expression has already ended");
		}
		char c = (char) b;
		sb.append(c);
		if (isOperatorWord(sb.toString())) {
			if (sb.length() == 1) { // two-character operators take precedence, e.g. <= should not be read as <
				sr.mark(0);
				b = sr.read();
				if (b != -1) {
					if (isOperatorWord(sb.toString() + (char) b)) {
						return new Element(sb.toString() + (char) b, index, ElementType.OPERATOR);
					}
				}
				sr.reset();
			}
			return new Element(sb.toString(), index, ElementType.OPERATOR);
		}
		while ((b = sr.read()) != -1) {
			c = (char) b;
			sb.append(c);
			if (isOperatorWord(sb.toString())) {
				return new Element(sb.toString(), index, ElementType.OPERATOR);
			}
			if (VariableTypeReader.STOP_CHAR.indexOf(c) >= 0) { // token stop character
				throw new FormatException("Not a valid operator: " + sb.toString());
			}
		}
		throw new FormatException("Not a valid operator ending");
	}

	/**
	 * Tests whether the next token is an operator.
	 * @param sr
	 * @return
	 * @throws IOException
	 */
	public static boolean isOperatorStart(ExpressionReader sr) throws IOException {
		sr.mark(0);
		try {
			StringBuffer sb = new StringBuffer();
			int b = sr.read();
			if (b == -1) {
				return false;
			}
			char c = (char) b;
			sb.append(c);
			if (isOperatorWord(sb.toString())) {
				return true;
			}
			while ((b = sr.read()) != -1) {
				c = (char) b;
				sb.append(c);
				if (isOperatorWord(sb.toString())) {
					return true;
				}
				if (VariableTypeReader.STOP_CHAR.indexOf(c) >= 0) { // token stop character
					return false;
				}
			}
			return false;
		} finally {
			sr.reset();
		}
	}
}
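The mark/read/reset dance in `read` implements maximal munch for operators of at most two characters: try the two-character token first, and fall back to the one-character prefix only if the longer form is not an operator. A compact Python sketch of the same rule (names are mine):

```python
OPERATORS = {"+", "-", ">", "<", ">=", "<=", "==", "!=", "*", "/", "%",
             "&&", "||", "!", "#", "?:", "?", ":"}


def read_operator(text, i):
    """Return (token, next_index) for the operator starting at text[i],
    preferring a two-character operator over its one-character prefix,
    just as OperatorTypeReader does with mark/read/reset."""
    two, one = text[i:i + 2], text[i:i + 1]
    if two in OPERATORS:
        return two, i + 2
    if one in OPERATORS:
        return one, i + 1
    raise ValueError(f"not a valid operator at index {i}: {text[i:i + 2]!r}")
```

So `"a<=b"` at index 1 yields `<=` rather than stopping at `<`, which is exactly the ambiguity the Java reader's lookahead exists to resolve.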
package com.amy.scrolldetectorexample.sub;

import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;

import com.amy.scrolldetectorexample.R;

public class WebViewActivity extends AppCompatActivity {

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.web_view_layout);
        // AppCompatActivity exposes its toolbar via getSupportActionBar();
        // getActionBar() returns null here and would throw an NPE.
        if (getSupportActionBar() != null) {
            getSupportActionBar().setTitle("WebView");
        }
    }
}
Thermography is the use of an infrared imaging and measurement camera to see and measure thermal energy emitted from an object. Thermal, or infrared, energy is light that is not visible because its wavelength is too long to be detected by the human eye; it’s the part of the electromagnetic spectrum that is perceived as heat.
import { BorderCollapsePropertyCombined } from '@johanneslumpe/css-types';

import { style } from '../../style';
import { StyleOptions } from '../../types';

export interface BorderCollapseProps<T> {
  /**
   * The **`border-collapse`** CSS property sets whether cells inside a `<table>` have shared or separate borders.
   *
   * @see https://developer.mozilla.org/docs/Web/CSS/border-collapse
   */
  style$BorderCollapse: T;
}

export const borderCollapse = <
  T = BorderCollapsePropertyCombined,
  Theme = never,
  Breakpoints = never
>({
  themeProp,
}: Partial<StyleOptions<BorderCollapseProps<T>, Theme>> = {}) =>
  style<BorderCollapseProps<T>, Theme, Breakpoints>({
    cssProp: 'borderCollapse',
    prop: 'style$BorderCollapse',
    themeProp,
  });
The Motley Fool Singapore: Warren Buffett Reveals the Biggest Mistake We Make When It Comes to Money

According to Warren Buffett, there are two simple and costly mistakes most of us make when managing our personal finances.

Warren Buffett of Berkshire Hathaway is the third richest man on the planet, and knows a thing or two about success when it comes to money and investing. When he first took over Berkshire Hathaway in 1964, the book value of the company stood at US$19. At the end of last year it was US$114,214 – a growth rate of 19.7% every year for 48 years! Similarly, US$19 placed in the S&P 500, an American stock market index, would’ve only grown to around US$1,400 over that same time period. He is a man whose wisdom should be trusted. And while he is full of investment advice, he can also be counted on to give insight when it comes to personal finances as well.

On personal finances

Buffett recently teamed up with Quicken Loans, a retail mortgage lender in the USA, to offer someone the chance to win US$1 billion for a perfect NCAA tournament bracket. To digress a little, the NCAA tournament is a basketball tournament amongst college teams in the USA.
Dozens of teams play one another in a series of games where the winner progresses to the next round and faces another winner, while the losers pack up for home. To get a perfect NCAA tournament bracket, a person has to guess the outcome of each game correctly, and according to math professor Jeff Bergen of DePaul University, the odds of doing so are 1 in 9.2 quintillion (or 1 in 9,223,372,036,854,780,000).

Coming back to personal finances, when Buffett went on the Dan Patrick Show (a sports show in the States) to discuss the bracket challenge, Dan asked him a simple question: “What’s the biggest mistake we make when it comes to money?” Buffett came up with a direct but vitally important response: “Well, I think the biggest mistake is not learning the habits of saving properly early. Because saving is a habit. And then, trying to get rich quick. It’s pretty easy to get well-to-do slowly. But it’s not easy to get rich quick.”

So often when money and investing are considered, it’s easy to fall into the trap of thinking that saving can wait until a later date, and that the best investments are the ones that no one knows about. However, those thoughts are undeniably mistaken.

A powerful example

Consider a scenario of two people, each 25 years old: David, who makes $40,000 a year, and Michael, who makes $80,000 a year. Each year, they get a 2.5% raise and work until they are 73. Let’s say the only difference is that David starts saving 10% of his income when he’s 25, but Michael decides to wait until he’s 40, while he’s making $115,000 a year. Let’s also err on the conservative side of things and say that money grows at an annual rate of 5% each year, which is a hair’s breadth less than the average historical annual price return of the Straits Times Index (SGX: ^STI) since the start of 1988. The Straits Times Index has grown by an average compounded rate of 5.1% from 834 points back then to 3,055 today.
And, that’s not including dividends, which could easily add another 2 to 3 percentage points to the index’s total annual return, given how the SPDR Straits Times Index ETF (SGX: ES3), an exchange-traded fund that tracks the Straits Times Index, carries a yield of 2.9% currently.

Either way, by the time each is 50, they would’ve each taken a little more than $144,000 out of their paychecks and put it toward retirement. But when they retire at 73, do you know who would end up with more money? Well, Michael edges ahead – but not by much. Despite earning half as much money over the course of his lifetime, David would end up with roughly only 11% less than Michael. David would have $1.27 million in savings when they retired, and Michael would have $1.43 million.

What is even more remarkable than David ending up with just slightly less money despite earning half as much in salary is that Michael ended up having to save 60% more money than David (roughly $610,000 in savings for Michael versus $375,000 for David). If you decide to get ambitious and say the money grows at 8.5% a year, David actually ends up with almost 30% more than Michael, with $3.7 million in savings versus $2.9 million.

What we can learn

All too often in investing people are taught, “in order to make money, you must have money,” and that the stock market is only useful if you find the next big company where your money is doubled in a matter of days. Yet as Buffett expounds, and the example above shows, the true key to becoming rich is patient saving starting today and an understanding that wealth accumulation happens over the course of a lifetime.
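The David-and-Michael scenario above is easy to reproduce. A small Python sketch, assuming one contribution at the end of each working year; exact figures depend on compounding conventions, so this tracks the article's numbers in shape rather than to the dollar:

```python
def retirement_savings(start_salary, start_saving_age, raise_rate=0.025,
                       save_rate=0.10, growth=0.05, start_age=25,
                       retire_age=73):
    """Grow the pot each year, then add 10% of that year's salary once the
    saver has started; salaries rise 2.5% a year from age 25 either way."""
    balance = 0.0
    salary = float(start_salary)
    for age in range(start_age, retire_age):
        balance *= 1 + growth              # returns on last year's pot
        if age >= start_saving_age:
            balance += salary * save_rate  # this year's contribution
        salary *= 1 + raise_rate           # annual raise
    return balance


david = retirement_savings(40_000, start_saving_age=25)    # saves from 25
michael = retirement_savings(80_000, start_saving_age=40)  # waits until 40
```

Under these assumptions the late saver finishes only slightly ahead despite setting aside far more in total, and raising the growth rate to 8.5% flips the result in the early saver's favor, just as the article describes.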
package brotli

import (
	"encoding/binary"

	"github.com/andybalholm/pack"
)

// M0 is an implementation of the pack.MatchFinder interface based
// on the algorithm used by snappy, but modified to be more like the algorithm
// used by compression level 0 of the brotli reference implementation.
type M0 struct {
	// Lazy turns on "lazy matching," for higher compression but less speed.
	Lazy bool

	MaxDistance int
	MaxLength   int
}

func (M0) Reset() {}

const (
	m0HashLen = 5

	m0TableBits = 14
	m0TableSize = 1 << m0TableBits
	m0Shift     = 32 - m0TableBits
	// m0TableMask is redundant, but helps the compiler eliminate bounds
	// checks.
	m0TableMask = m0TableSize - 1
)

func (m M0) hash(data uint64) uint64 {
	hash := (data << (64 - 8*m0HashLen)) * kHashMul64
	return hash >> (64 - m0TableBits)
}

// FindMatches looks for matches in src, appends them to dst, and returns dst.
// src must not be longer than 65536 bytes.
func (m M0) FindMatches(dst []pack.Match, src []byte) []pack.Match {
	const inputMargin = 16 - 1
	const minNonLiteralBlockSize = 1 + 1 + inputMargin
	if len(src) < minNonLiteralBlockSize {
		dst = append(dst, pack.Match{
			Unmatched: len(src),
		})
		return dst
	}
	if len(src) > 65536 {
		panic("block too long")
	}

	var table [m0TableSize]uint16

	// sLimit is when to stop looking for offset/length copies. The inputMargin
	// lets us use a fast path for emitLiteral in the main loop, while we are
	// looking for copies.
	sLimit := len(src) - inputMargin

	// nextEmit is where in src the next emitLiteral should start from.
	nextEmit := 0

	// The encoded form must start with a literal, as there are no previous
	// bytes to copy, so we start looking for hash matches at s == 1.
	s := 1
	nextHash := m.hash(binary.LittleEndian.Uint64(src[s:]))

	for {
		// Copied from the C++ snappy implementation:
		//
		// Heuristic match skipping: If 32 bytes are scanned with no matches
		// found, start looking only at every other byte. If 32 more bytes are
		// scanned (or skipped), look at every third byte, etc.. When a match
		// is found, immediately go back to looking at every byte. This is a
		// small loss (~5% performance, ~0.1% density) for compressible data
		// due to more bookkeeping, but for non-compressible data (such as
		// JPEG) it's a huge win since the compressor quickly "realizes" the
		// data is incompressible and doesn't bother looking for matches
		// everywhere.
		//
		// The "skip" variable keeps track of how many bytes there are since
		// the last match; dividing it by 32 (ie. right-shifting by five) gives
		// the number of bytes to move ahead for each iteration.
		skip := 32

		nextS := s
		candidate := 0
		for {
			s = nextS
			bytesBetweenHashLookups := skip >> 5
			nextS = s + bytesBetweenHashLookups
			skip += bytesBetweenHashLookups
			if nextS > sLimit {
				goto emitRemainder
			}
			candidate = int(table[nextHash&m0TableMask])
			table[nextHash&m0TableMask] = uint16(s)
			nextHash = m.hash(binary.LittleEndian.Uint64(src[nextS:]))
			if m.MaxDistance != 0 && s-candidate > m.MaxDistance {
				continue
			}
			if binary.LittleEndian.Uint32(src[s:]) == binary.LittleEndian.Uint32(src[candidate:]) {
				break
			}
		}

		// Invariant: we have a 4-byte match at s.
		base := s
		s = extendMatch(src, candidate+4, s+4)

		origBase := base
		if m.Lazy && base+1 < sLimit {
			newBase := base + 1
			h := m.hash(binary.LittleEndian.Uint64(src[newBase:]))
			newCandidate := int(table[h&m0TableMask])
			table[h&m0TableMask] = uint16(newBase)
			okDistance := true
			if m.MaxDistance != 0 && newBase-newCandidate > m.MaxDistance {
				okDistance = false
			}
			if okDistance && binary.LittleEndian.Uint32(src[newBase:]) == binary.LittleEndian.Uint32(src[newCandidate:]) {
				newS := extendMatch(src, newCandidate+4, newBase+4)
				if newS-newBase > s-base+1 {
					s = newS
					base = newBase
					candidate = newCandidate
				}
			}
		}

		if m.MaxLength != 0 && s-base > m.MaxLength {
			s = base + m.MaxLength
		}
		dst = append(dst, pack.Match{
			Unmatched: base - nextEmit,
			Length:    s - base,
			Distance:  base - candidate,
		})
		nextEmit = s
		if s >= sLimit {
			goto emitRemainder
		}

		if m.Lazy {
			// If lazy matching is enabled, we update the hash table for
			// every byte in the match.
			for i := origBase + 2; i < s-1; i++ {
				x := binary.LittleEndian.Uint64(src[i:])
				table[m.hash(x)&m0TableMask] = uint16(i)
			}
		}

		// We could immediately start working at s now, but to improve
		// compression we first update the hash table at s-1 and at s.
		x := binary.LittleEndian.Uint64(src[s-1:])
		prevHash := m.hash(x >> 0)
		table[prevHash&m0TableMask] = uint16(s - 1)
		nextHash = m.hash(x >> 8)
	}

emitRemainder:
	if nextEmit < len(src) {
		dst = append(dst, pack.Match{
			Unmatched: len(src) - nextEmit,
		})
	}
	return dst
}
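The `skip >> 5` match-skipping heuristic in the inner loop above is easy to see in isolation. The toy Python sketch below (not part of the Go package, just an illustration) reproduces only the skipping schedule, showing how the scan step widens as unmatched bytes accumulate:

```python
def scan_positions(start, limit, skip=32):
    """Yield the probe positions the snappy-style skip heuristic visits
    when no match is ever found: the step (skip >> 5) widens over time."""
    s = start
    while True:
        step = skip >> 5          # 1 for the first 32 misses, then 2, 3, ...
        next_s = s + step
        if next_s > limit:
            return
        yield s
        skip += step
        s = next_s

positions = list(scan_positions(1, 200))
```

The first 32 probes advance one byte at a time; after that the step grows to 2, then 3, and so on, which is exactly why the scanner "gives up" quickly on incompressible data.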
Q: What can I do as a student to be a better candidate for a hardware design job?

Suppose you were looking to hire a student still in school for an internship position at a high-tech company (say, Microsoft, Apple, or Google) doing hardware design. What kind of skills would you be looking for? What can I do with limited time and budget to attain those skills beyond what my school covers?

A: There is no substitute for having actually done stuff. I have not and would never hire an EE that hadn't done a bunch of personal projects on the side. Electronic components are cheap and available, so anyone that has a true passion for electronics will have tinkered and built a few things, usually starting at least by late grade school. If you want to be a nuclear engineer, you can't get a kilo of plutonium and start messing around with it, but EEs have no such excuse.

These projects don't have to be pretty, don't have to work, and don't have to do what you originally intended them to. The point is that you did them because you wanted to, and hopefully learned something from them, and you have to be able to talk about them intelligently.

However, this is not something you should do because it will help you get a job. If that's your only reason, bail out now; EE isn't for you. In fact, if you're in college and you haven't already done some tinkering on the side, bail out now; EE isn't for you. I know that may sound harsh, but you seriously have to ask yourself why you want to be an EE. The right answer is that electronics fascinates you: you want to understand how circuits work and how to make circuits that do what you want. You are always thinking about ways to hook up available components, dreaming up things to build just to see them work and to be able to say you designed them. In other words, the right answer is that you have a passion for it. Anyone that has a passion for it will have done considerable tinkering, which will always be ahead of any formal education on the topic.
When you do finally get to an electronics class, you already know all the basics, but occasionally you learn something that suddenly explains that puff of smoke you got when tinkering in high school, why that amplifier you tried to make from a few transistors just made motorboating sounds, etc.

So no, there is nothing you should specifically go out and do to help get an EE intern job, because if you are going to be a great EE you are already doing these things. If not, you're just kidding yourself. You might be able to squeak past some hiring managers that themselves squeaked past someone else. However, that is a short-term gain at best. If you get someone that knows what they are doing, and you probably will, you won't be able to bluff your way thru the interview.

I once interviewed someone that had just graduated with a BS in EE from a local college. He brought his transcript and had impeccable grades. I think they were all As with maybe one or two Bs the whole 4 years. One question I always ask in an interview is to have you pick a project you have worked on and start describing it to me. Of course I don't know anything about your project, but I can tell a lot about how you think about things and how you present them. I'll ask questions about how various things worked or why this or that part of the design is the way it is. There is no way to bluff thru something like that.

When I asked this kid about such a project, he said they hadn't done any in school. That's a little strange, but OK, so I asked him what he'd done on the side. I still remember his exact words, which were "What do you mean? That wasn't required." Immediate end of interview.

A: "Hardware design" covers a wide range of tasks, so it's all too easy to focus on tasks that are only applicable to particular domains.
General skills that would serve you well, and that you can learn relatively cheaply, include:

- Circuit design: logic power supplies, signal amplifiers, overcurrent/overvoltage protection, embedded microcontrollers, thermal management and heat-sink selection
- Schematic creation (Multisim or similar)
- PCB layout (Ultiboard or similar): proper voltage clearances, proper trace widths
- PCB assembly: good soldering technique
- Circuit testing and debugging
- Clear documentation

What's going to get you in the door is attitude. If you do something that they can see, something that demonstrates that you do more than what's required of you because you want to learn, that puts you well ahead. If you've done something extra, and done it well, that will make you look very good. And if you can explain what went wrong, and how you fixed it, that's best of all.

You're going to make mistakes. Dealing with them is possibly the most valuable skill on that list. But you only develop that skill by making mistakes, and you only make mistakes when you do stuff.
Frequency of visits to a health care provider, health promoting behaviors, and perceived health status among African American women ABSTRACT The primary purpose of this study was to examine whether the self-reported number of health care visits over a 1-year period was associated with engagement in health promoting behaviors (i.e., healthy eating and physical activity) and perceived health status among a cross-sectional sample of African American women who were pre-hypertensive/hypertensive and/or overweight or obese (N = 180). The study participants were recruited in predominantly African American churches and had their data collected in April and May of 2009. Age, income, and education were also examined as moderators in the aforementioned relationships. Results revealed that the self-reported number of health care visits was significantly positively associated with healthy eating and perceived health status. Income moderated the relationship between self-reported number of health care visits and engagement in healthy eating. These results provide support for health promotion programs for African American women with program components that explain the relationships among routine care from a health care provider, engagement in health promoting behaviors, and prevention of chronic health conditions.
import logging
import os
import sys
from argparse import Namespace
from pathlib import Path
from urllib.parse import urlencode

from flask import Flask, jsonify, request
from flask_cors import CORS
from werkzeug.exceptions import InternalServerError

from sqllineage import DATA_FOLDER, DEFAULT_HOST, DEFAULT_PORT, STATIC_FOLDER
from sqllineage.utils.constant import LineageLevel
from sqllineage.utils.helpers import extract_sql_from_args

logger = logging.getLogger(__name__)

app = Flask(
    __name__,
    static_url_path="",
    static_folder=os.path.join(os.path.dirname(__file__), STATIC_FOLDER),
)
CORS(app)


@app.errorhandler(InternalServerError)
def handle_500(e):
    original = getattr(e, "original_exception", None)
    return jsonify({"message": str(original)}), 400


@app.route("/")
def index():
    return app.send_static_file("index.html")


@app.route("/lineage", methods=["POST"])
def lineage():
    # this is to avoid circular import
    from sqllineage.runner import LineageRunner

    req_args = Namespace(**request.get_json())
    sql = extract_sql_from_args(req_args)
    lr = LineageRunner(sql, verbose=True)
    resp = {
        "verbose": str(lr),
        "dag": lr.to_cytoscape(),
        "column": lr.to_cytoscape(LineageLevel.COLUMN),
    }
    return jsonify(resp)


@app.route("/script", methods=["POST"])
def script():
    req_args = Namespace(**request.get_json())
    sql = extract_sql_from_args(req_args)
    return jsonify({"content": sql})


@app.route("/directory", methods=["POST"])
def directory():
    payload = request.get_json()
    if payload.get("f"):
        root = Path(payload["f"]).parent
    elif payload.get("d"):
        root = Path(payload["d"])
    else:
        root = Path(DATA_FOLDER)
    data = {
        "id": str(root),
        "name": root.name,
        "is_dir": True,
        "children": [
            {"id": str(p), "name": p.name, "is_dir": p.is_dir()}
            for p in sorted(root.iterdir(), key=lambda _: (not _.is_dir(), _.name))
        ],
    }
    return jsonify(data)


cli = sys.modules["flask.cli"]
cli.show_server_banner = lambda *x: None  # type: ignore


def draw_lineage_graph(**kwargs) -> None:
    host = kwargs.pop("host", DEFAULT_HOST)
    port = kwargs.pop("port", DEFAULT_PORT)
    querystring = urlencode({k: v for k, v in kwargs.items() if v})
    path = f"/?{querystring}" if querystring else "/"
    print(f" * SQLLineage Running on http://{host}:{port}{path}")
    app.run(host=host, port=port)
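The tree-building logic of the `/directory` endpoint (children sorted with directories first, then by name) can be exercised on its own. A minimal standalone sketch with Flask and sqllineage stripped out, using a hypothetical `list_directory` helper:

```python
from pathlib import Path


def list_directory(root: Path) -> dict:
    """Build the same payload shape the /directory endpoint returns:
    children sorted with directories first, then alphabetically by name."""
    return {
        "id": str(root),
        "name": root.name,
        "is_dir": True,
        "children": [
            {"id": str(p), "name": p.name, "is_dir": p.is_dir()}
            # (not is_dir, name): False sorts before True, so dirs come first
            for p in sorted(root.iterdir(), key=lambda p: (not p.is_dir(), p.name))
        ],
    }
```

The sort key is the interesting design choice: tupling `(not p.is_dir(), p.name)` gets the "folders on top" ordering common in file browsers without a custom comparator.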
// Current version of void breaks schemagen so this includes manual additions

// Jena imports (package path assumes Jena 3.x; older versions use com.hp.hpl.jena.rdf.model)
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class VOID {
    /** <p>The RDF model that holds the vocabulary terms</p> */
    private static Model m_model = ModelFactory.createDefaultModel();

    /** <p>The namespace of the vocabulary as a string</p> */
    public static final String NS = "http://rdfs.org/ns/void#";

    /** <p>The namespace of the vocabulary as a string</p>
     *  @see #NS */
    public static String getURI() { return NS; }

    /** <p>The namespace of the vocabulary as a resource</p> */
    public static final Resource NAMESPACE = m_model.createResource( NS );

    /** <p>Announcement of an RDF dump of the dataset.</p> */
    public static final Property dataDump = m_model.createProperty( "http://rdfs.org/ns/void#dataDump" );

    public static final Property exampleResource = m_model.createProperty( "http://rdfs.org/ns/void#exampleResource" );

    public static final Property feature = m_model.createProperty( "http://rdfs.org/ns/void#feature" );

    public static final Property linkPredicate = m_model.createProperty( "http://rdfs.org/ns/void#linkPredicate" );

    /** <p>The sink target of an interlinking</p> */
    public static final Property objectsTarget = m_model.createProperty( "http://rdfs.org/ns/void#objectsTarget" );

    public static final Property sparqlEndpoint = m_model.createProperty( "http://rdfs.org/ns/void#sparqlEndpoint" );

    public static final Property statItem = m_model.createProperty( "http://rdfs.org/ns/void#statItem" );

    /** <p>The source target of an interlinking</p> */
    public static final Property subjectsTarget = m_model.createProperty( "http://rdfs.org/ns/void#subjectsTarget" );

    public static final Property subset = m_model.createProperty( "http://rdfs.org/ns/void#subset" );

    public static final Property target = m_model.createProperty( "http://rdfs.org/ns/void#target" );

    /** <p>Defines a simple URI look-up protocol for accessing a dataset.</p> */
    public static final Property uriLookupEndpoint = m_model.createProperty( "http://rdfs.org/ns/void#uriLookupEndpoint" );

    /** <p>Defines a regular expression pattern matching URIs in the dataset.</p> */
    public static final Property uriRegexPattern = m_model.createProperty( "http://rdfs.org/ns/void#uriRegexPattern" );

    /** <p>A vocabulary that is used in the dataset.</p> */
    public static final Property vocabulary = m_model.createProperty( "http://rdfs.org/ns/void#vocabulary" );

    public static final Resource Dataset = m_model.createResource( "http://rdfs.org/ns/void#Dataset" );

    public static final Resource Linkset = m_model.createResource( "http://rdfs.org/ns/void#Linkset" );

    public static final Resource TechnicalFeature = m_model.createResource( "http://rdfs.org/ns/void#TechnicalFeature" );

    // Manual additions
    public static final Property classes = m_model.createProperty( "http://rdfs.org/ns/void#classes" );
    public static final Property classPartition = m_model.createProperty( "http://rdfs.org/ns/void#classPartition" );
    public static final Property distinctObjects = m_model.createProperty( "http://rdfs.org/ns/void#distinctObjects" );
    public static final Property distinctSubjects = m_model.createProperty( "http://rdfs.org/ns/void#distinctSubjects" );
    public static final Property documents = m_model.createProperty( "http://rdfs.org/ns/void#documents" );
    public static final Property entities = m_model.createProperty( "http://rdfs.org/ns/void#entities" );
    public static final Property inDataset = m_model.createProperty( "http://rdfs.org/ns/void#inDataset" );
    public static final Property openSearchDescription = m_model.createProperty( "http://rdfs.org/ns/void#openSearchDescription" );
    public static final Property properties = m_model.createProperty( "http://rdfs.org/ns/void#properties" );
    public static final Property property = m_model.createProperty( "http://rdfs.org/ns/void#property" );
    public static final Property propertyPartition = m_model.createProperty( "http://rdfs.org/ns/void#propertyPartition" );
    public static final Property rootResource = m_model.createProperty( "http://rdfs.org/ns/void#rootResource" );
    public static final Property triples = m_model.createProperty( "http://rdfs.org/ns/void#triples" );
    public static final Property uriSpace = m_model.createProperty( "http://rdfs.org/ns/void#uriSpace" );
}
/*********************************************************************************************************************
 * Copyright 2013-2014 Tobii Technology AB. All rights reserved.
 * EyeXNotification.h
 *********************************************************************************************************************/

#if !defined(__TOBII_TX_NOTIFICATIONS_API__H__)
#define __TOBII_TX_NOTIFICATIONS_API__H__

/*********************************************************************************************************************/

/**
  txGetNotificationType

  Gets the TX_NOTIFICATIONTYPE of a notification.

  @param hNotification [in]:
    A TX_CONSTHANDLE to the notification.
    Must not be TX_EMPTY_HANDLE.

  @param pNotificationType [out]:
    A pointer to a TX_NOTIFICATIONTYPE which will be set to the type of the notification.
    Must not be NULL.

  @return
    TX_RESULT_OK: The type of the notification was successfully retrieved.
    TX_RESULT_EYEXNOTINITIALIZED: The EyeX client environment is not initialized.
    TX_RESULT_INVALIDARGUMENT: An invalid argument was passed to the function.
 */
TX_C_BEGIN
TX_API TX_RESULT TX_CALLCONVENTION txGetNotificationType(
    TX_CONSTHANDLE hNotification,
    TX_NOTIFICATIONTYPE* pNotificationType
    );
TX_C_END

typedef TX_RESULT (TX_CALLCONVENTION *GetNotificationTypeHook)(
    TX_CONSTHANDLE hNotification,
    TX_NOTIFICATIONTYPE* pNotificationType
    );

/*********************************************************************************************************************/

/**
  txGetNotificationData

  Gets the data of a notification.

  @param hNotification [in]:
    A TX_CONSTHANDLE to the notification.
    Must not be TX_EMPTY_HANDLE.

  @param phObject [out]:
    A pointer to a TX_HANDLE to which the handle of the object used as data will be copied.
    This handle must be released using txReleaseObject to avoid leaks.
    Must not be NULL.
    The value of the pointer must be set to TX_EMPTY_HANDLE.

  @return
    TX_RESULT_OK: The data of the notification was successfully retrieved.
    TX_RESULT_EYEXNOTINITIALIZED: The EyeX client environment is not initialized.
    TX_RESULT_INVALIDARGUMENT: An invalid argument was passed to the function.
    TX_RESULT_NOTFOUND: The notification does not have any data.
 */
TX_C_BEGIN
TX_API TX_RESULT TX_CALLCONVENTION txGetNotificationData(
    TX_CONSTHANDLE hNotification,
    TX_HANDLE* phObject
    );
TX_C_END

typedef TX_RESULT (TX_CALLCONVENTION *GetNotificationDataHook)(
    TX_CONSTHANDLE hNotification,
    TX_HANDLE* phObject
    );

/*********************************************************************************************************************/

#endif /* !defined(__TOBII_TX_NOTIFICATIONS_API__H__) */

/*********************************************************************************************************************/
Fabrication of III-V semiconductor core-shell nanowires by SA-MOVPE and their device applications

We fabricated various kinds of III-V semiconductor nanowires and core-shell nanowires using selective area metalorganic vapor phase epitaxy (SA-MOVPE) on oriented substrates, such as GaAs, GaAs/AlGaAs, InP, and InP/InAs/InP on III-V substrates, and InAs and GaAs on Si. As for device applications, we fabricated GaAs/GaAsP core-shell nanowire photo-excited lasers and InP core-shell p-n junction solar cells. We also demonstrate III-V semiconductor nanowires grown on Si substrates.
The Role of α-Folate Receptor-Mediated Transport in the Antitumor Activity of Antifolate Drugs

Purpose: Raltitrexed, pemetrexed, lometrexol, and ZD9331 are antifolate drugs transported into cells via the ubiquitously expressed reduced-folate carrier. They also display high affinity for the α-folate receptor (α-FR), a low-capacity folate transporter that is highly overexpressed in some epithelial tumors. The role of the α-FR in the activity of the antifolates has been evaluated in two α-FR-overexpressing cell lines grown in a physiological concentration of folate (20 nM R,S-Leucovorin).

Experimental Design and Results: A431-FBP cells (transfected with the α-FR) were 3-5-fold more sensitive to the antifolates than A431 cells. KB cells (constitutive α-FR overexpression) were less sensitive to the drugs when coexposed to 1 μM folic acid to competitively inhibit binding to the α-FR. Raltitrexed, pemetrexed, and lometrexol are polyglutamated in cells, leading to drug retention; e.g., the raltitrexed 4- and 24-h IC50s in A431 cells were ∼0.6 and 0.008 μM, respectively, compared with 0.003 μM for 72-h continuous exposure. A431-FBP cells were ∼3-fold more sensitive to raltitrexed and pemetrexed at all exposure times. ZD9331 is not polyglutamated, and the 4- and 24-h IC50s in A431 cells were >100 and ∼100 μM, respectively, reducing to 2 and 0.1 μM, respectively, in A431-FBP cells. The ZD9331 4- and 24-h IC50s in KB cells were 20 and 1 μM, respectively, and reversible by coaddition of 1 μM folic acid. An in situ thymidylate synthase assay demonstrated continued thymidylate synthase inhibition after ZD9331-treated A431-FBP and KB, but not A431, cells were placed in drug-free medium for 16 h. A model is proposed in which the antifolates accumulate in the α-FR/endosomal apparatus, leading to slow release into the cytoplasm. In particular, this leads to cellular retention of the nonpolyglutamatable ZD9331.

Conclusions: Antifolate drugs, particularly ZD9331, have the potential for increased efficacy in tumors that highly overexpress the α-FR.
Hippocampus and striatum encode distinct task regularities that guide human timing behavior

The brain encodes the statistical regularities of the environment in a task-specific yet flexible and generalizable format. How it does so remains poorly understood. Here, we seek to understand this by converging two parallel lines of research, one centered on striatal-dependent sensorimotor timing and the other on hippocampal-dependent cognitive mapping. We combined functional magnetic resonance imaging (fMRI) with a visual-tracking and time-to-contact (TTC) estimation task, revealing the widespread brain network supporting sensorimotor learning in real time. Hippocampal and caudate activity signaled the behavioral feedback within trials and the improvements in performance across trials, suggesting that both structures encode behavior-dependent information rapidly. Critically, hippocampal learning signals generalized across tested intervals, while striatal ones did not, and together they explained both the trial-wise performance and the regression-to-the-mean biases in TTC estimation. Our results suggest that a fundamental function of hippocampal-striatal interactions may be to solve a trade-off between specificity and generalization, enabling the flexible and domain-general expression of human timing behavior.
// dbPutBlockIndex uses an existing database transaction to update or add the
// block index entries for the hash to height and height to hash mappings for
// the provided values.
func dbPutBlockIndex(dbTx database.Tx, hash *wire.ShaHash, height int32) error {
	// Serialize the height for use in the index entries.
	var serializedHeight [4]byte
	byteOrder.PutUint32(serializedHeight[:], uint32(height))

	// Add the block hash to height mapping to the index.
	meta := dbTx.Metadata()
	hashIndex := meta.Bucket(hashIndexBucketName)
	if err := hashIndex.Put(hash[:], serializedHeight[:]); err != nil {
		return err
	}

	// Add the block height to hash mapping to the index.
	heightIndex := meta.Bucket(heightIndexBucketName)
	return heightIndex.Put(serializedHeight[:], hash[:])
}
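The dual mapping the function maintains (hash → height and height → hash) can be mimicked with two plain dicts. A hypothetical Python sketch, with the height packed to a fixed-width 4 bytes the way the Go code does; the byte order here is an assumption chosen for illustration (the Go code's `byteOrder` is defined elsewhere in its package):

```python
import struct


def put_block_index(hash_index: dict, height_index: dict,
                    block_hash: bytes, height: int) -> None:
    """Store both directions of the hash <-> height mapping, with the
    height serialized as a fixed-width 4-byte unsigned integer so it can
    serve as an ordered, fixed-length key."""
    serialized_height = struct.pack("<I", height)  # little-endian uint32
    hash_index[block_hash] = serialized_height
    height_index[serialized_height] = block_hash


hash_index, height_index = {}, {}
put_block_index(hash_index, height_index, b"\xab" * 32, 1000)
```

Keeping both directions lets lookups by hash and lookups by height each be a single key access, at the cost of writing every entry twice.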
/* src/uml/generalization.c */

/**~classification~
 * Generalization [Class]
 *
 * Description
 *
 * A Generalization is a taxonomic relationship between a more general Classifier and a more specific Classifier. Each
 * instance of the specific Classifier is also an instance of the general Classifier. The specific Classifier inherits the features
 * of the more general Classifier. A Generalization is owned by the specific Classifier.
 *
 * Diagrams
 *
 * Classifiers, Generalization Sets
 *
 * Generalizations
 *
 * DirectedRelationship
 *
 * Attributes
 *
 * - isSubstitutable : Boolean [0..1] = true
 *
 *   Indicates whether the specific Classifier can be used wherever the general Classifier can be used. If true, the
 *   execution traces of the specific Classifier shall be a superset of the execution traces of the general Classifier. If
 *   false, there is no such constraint on execution traces. If unset, the modeler has not stated whether there is such
 *   a constraint or not.
 *
 * Association Ends
 *
 * - general : Classifier [1..1]{subsets DirectedRelationship::target} (opposite A_general_generalization::generalization)
 *
 *   The general classifier in the Generalization relationship.
 *
 * - generalizationSet : GeneralizationSet [0..*] (opposite GeneralizationSet::generalization)
 *
 *   Represents a set of instances of Generalization. A Generalization may appear in many GeneralizationSets.
 *
 * - specific : Classifier [1..1]{subsets DirectedRelationship::source, subsets Element::owner} (opposite Classifier::generalization)
 *
 *   The specializing Classifier in the Generalization relationship.
 **/
// +build !prod

package build

func IsProduction() bool {
	return false
}
import Mock from 'mockjs';
import { resultSuccess } from './_util';

const list = Mock.mock({
  'items|30': [
    {
      id: '@id',
      title: '@ctitle',
      mobile: '@phone',
      name: '@cname',
      description: '@cparagraph',
      created_at: '@datetime',
      updated_at: '@datetime',
      age: '@natural(10,50)',
      color: '@color',
      email: '@email',
    },
  ],
});

const data = {
  hu_num: 42,
  yun_num: 87755,
  ce_num: 3,
  create_time: 1636352741,
  online_num: 101,
  total_num: 110,
  seven_days: [
    { id: 9, num: 7, time: '20211130' },
    { id: 8, num: 80, time: '20211129' },
    { id: 0, num: 280, time: '20211128' },
    { id: 0, num: 0, time: '20211127' },
    { id: 7, num: 5, time: '20211126' },
    { id: 6, num: 20, time: '20211125' },
    { id: 5, num: 5, time: '20211124' },
  ],
};

export default [
  {
    url: '/v1/home/info',
    method: 'get',
    response: () => {
      return resultSuccess(data);
    },
  },
  {
    url: '/v1/home/list',
    method: 'get',
    response: () => {
      const items = list.items;
      return {
        code: 0,
        result: {
          total: items.length,
          list: items,
        },
      };
    },
  },
];
This invention relates to a therapeutic bed, and in particular to prone positioning beds. Patient positioning has been used for some time as a treatment for patient comfort, to prevent skin breakdown, improve drainage and to facilitate breathing. One of the goals of patient positioning has been maximisation of ventilation to improve systemic oxygenation. Various studies have demonstrated the beneficial effects of body positioning and mobilisation on impaired oxygen transport. The support of patients in a prone position can be advantageous in enhancing extension and ventilation of the dorsal aspect of the lungs. The present invention particularly relates to therapeutic beds of the type comprising a base frame, a patient support platform rotatably mounted on the base frame for rotational movement about a longitudinal rotational axis of the patient support platform, and drive means for rotation of the patient support platform on the base frame. In our previously filed patent application, publication no. WO 97/2323, we described a therapeutic bed of this type for supporting a patient in either a supine position or a prone position and for using kinetic therapy. This type of bed is particularly suited for the treatment of patients with respiratory problems. The beds advantageously allow rotation of the patient on the patient support platform and, where required, rotation of the patient support platform into a prone support position which is particularly desirable in the treatment of patients with severe respiratory problems. In such therapy, a patient may be heavily intubated with a number of tubes extending over a side of the bed between the patient on the bed and associated apparatus mounted on stands or the like alongside the bed for either delivering liquids to the patient or draining liquids from the patient. Also, there may be a number of wires extending from sensors on the patient to various monitors adjacent the bed.
These ventilation and drainage tubes, medication supply tubes, monitoring cables and the like are collectively called patient care lines throughout this patent specification. The term “patient care lines” as used in this patent specification is taken to mean any tubes, pipes, conduits, cables and the like lines for delivery or drainage of fluids to or from a patient, for monitoring a patient's condition and generally speaking for treating a patient on the patient support platform of the bed. These patient care lines present a problem, particularly when rotating the patient support platform between a supine support position and a prone support position, in that they can easily become entangled and may be inadvertently pulled away from the patient. To avoid this a nurse or other attendant has to carefully handle and adjust the patient care lines as necessary whilst the bed is rotating. This can be extremely awkward. Access to the patient and the patient care lines is difficult when the patient support platform is at or approaching the prone support position. Another problem that arises is in ensuring that the patient is correctly secured to the patient support platform before rotating the patient support platform away from a horizontal supine support position. Again, a nurse has to check all the patient retaining strapping, rails and supports are secure prior to rotation of the patient support platform into the prone support position. This tends to be very time consuming. Also, it is not always easy to check the strapping or other restraints are correctly and securely engaged. To rotate the patient support platform between the supine support position and the prone support position, typically a number of nursing staff are required to rotate the patient support platform and at the same time, handle the tubing and wiring to prevent entanglement or dislodgement. Thus, a number of nursing staff may be diverted from other duties for a considerable time. 
Consequently, operational efficiency is adversely affected and costs are increased for the hospital. The present invention is directed towards overcoming these problems.
// watchdog.cpp

#include "watchdog.h"
#include "transmit.h"

void watchdog_init() {
#ifdef TRANSMITTER_WIFI
    iwdg_init(IWDG_PRE_256, 3500); // 30 second watchdog, 40kHz processor
#endif
#ifdef TRANSMITTER_GSM
    Watchdog.enable(30000);
#endif
}

void watchdog_feed() {
#ifdef TRANSMITTER_WIFI
    iwdg_feed();
#endif
#ifdef TRANSMITTER_GSM
    Watchdog.reset();
#endif
}
Models for evaluating the antiinflammatory effects of inhibitors of arachidonic acid metabolism

Inhibitors of arachidonic acid metabolism were characterized by their ability to modulate slow reacting substance (SRS) and prostaglandin E2 (PGE2) release from stimulated mouse peritoneal macrophages in vitro. Differential effects of cyclooxygenase (CO) and lipoxygenase (LO) enzyme inhibitors, and of compounds which inhibit both enzymes, were demonstrated using several animal models of inflammation. Carrageenan-impregnated sponges implanted subcutaneously in rats and immune complexes injected intraperitoneally in mice produced inflammatory responses characterized respectively by polymorphonuclear (PMN) cell infiltration and by increased vascular permeability. Dual CO/LO inhibitors (e.g., BW 755C and timegadine) were capable of suppressing both parameters and reduced SRS and PGE2 formation in vivo. In contrast, selective CO inhibitors (e.g., indomethacin, naproxen and R830) were less active against permeability, and potentiated SRS release. Although selective CO inhibitors reduced PMN migration, this occurred at doses which exceeded those required for inhibition of PGE2. Compounds possessing LO inhibitory activity suppressed the cellular component of an Arthus-type reaction in the rat pleural cavity, but were less active than selective CO inhibitors against carrageenan-induced paw oedema in rats.
import {
    ChangeDetectionStrategy,
    Component,
    ElementRef,
    forwardRef,
    HostBinding,
    Inject,
    Input,
    Optional,
} from '@angular/core';
import {TuiComparator} from '@taiga-ui/addon-table/types';
import {defaultSort} from '@taiga-ui/addon-table/utils';
import {tuiDefaultProp} from '@taiga-ui/cdk';
import {TUI_ELEMENT_REF} from '@taiga-ui/core';

import {TuiHeadDirective} from '../directives/head.directive';
import {TuiTableDirective} from '../directives/table.directive';

@Component({
    selector: 'th[tuiTh]',
    templateUrl: './th.template.html',
    styleUrls: ['./th.style.less'],
    changeDetection: ChangeDetectionStrategy.OnPush,
    providers: [
        {
            provide: TUI_ELEMENT_REF,
            useExisting: ElementRef,
        },
    ],
})
export class TuiThComponent<T> {
    @Input()
    @tuiDefaultProp()
    sorter: TuiComparator<T> | null = this.head
        ? (a, b) => defaultSort(a[this.key], b[this.key])
        : null;

    @Input()
    @tuiDefaultProp()
    resizable = false;

    @Input()
    @HostBinding('class._sticky')
    @tuiDefaultProp()
    sticky = false;

    @HostBinding('style.width.px')
    width: number | null = null;

    constructor(
        @Optional()
        @Inject(TuiHeadDirective)
        private readonly head: TuiHeadDirective<T> | null,
        @Optional()
        @Inject(forwardRef(() => TuiTableDirective))
        readonly table: TuiTableDirective<T> | null,
    ) {}

    get key(): keyof T {
        if (!this.head) {
            throw new Error('Trying to sort with no key');
        }

        return this.head.tuiHead;
    }

    get isCurrent(): boolean {
        return !!this.sorter && !!this.table && this.sorter === this.table.sorter;
    }

    onResized(width: number) {
        this.width = width;
    }
}
def simulate(self, t:numpy.ndarray, parameters:dict=None, verbosity:int=30,
             reset_afterwards:bool=False, suppress_stdout:bool=True) -> List[TimeSeries]:
    if parameters is not None:
        self.set_parameters(parameters)

    # set up the CVode integrator (BDF discretization with stability limit detection)
    exp_sim = CVode(self.bioprocess_model)
    exp_sim.verbosity = verbosity
    exp_sim.store_event_points = False
    exp_sim.discr = 'BDF'
    exp_sim.stablimit = True
    if self.integrator_kwargs is not None:
        # forward any user-supplied integrator options (setattr instead of exec)
        for key, value in self.integrator_kwargs.items():
            setattr(exp_sim, key, value)

    _t = numpy.array(t, dtype=numpy.float64).flatten()
    if len(_t) == 1:
        tfinal = float(_t[0])
        ncp_list = None
    elif len(_t) > 1:
        ncp_list = _t
        tfinal = numpy.max(_t)

    try:
        if suppress_stdout:
            # silence the integrator's console output
            f = io.StringIO()
            with redirect_stdout(f):
                t, y = exp_sim.simulate(tfinal=tfinal, ncp_list=ncp_list)
        else:
            t, y = exp_sim.simulate(tfinal=tfinal, ncp_list=ncp_list)
    except CVodeError as e:
        print(f'CVodeError occurred with flag {e.value}. CVodeError message was: {e}.')
        raise e

    # drop duplicate time points before building the model states
    unq, unq_idx = numpy.unique(t, return_index=True)
    model_predictions = [
        ModelState(
            name=_name,
            timepoints=numpy.array(t)[unq_idx],
            values=numpy.array(_y)[unq_idx],
            replicate_id=self.replicate_id,
        )
        for _name, _y in zip(self.bioprocess_model.states, y.T)
    ]

    if self.observer is not None:
        observations = self.observer.get_observations(model_predictions)
    else:
        observations = []

    if reset_afterwards:
        self.reset()

    simulations = []
    simulations.extend(model_predictions)
    simulations.extend(observations)
    return simulations
BRESLAU — The Region of Waterloo is buying a farm near the airport for $4.1 million to provide space for a new terminal. The substantial land acquisition goes against the 20-year master plan for the Region of Waterloo International Airport approved in April that calls for new investment and infrastructure expansion only as passenger numbers rise. While Coun. Sean Strickland said the plan does allow for strategic land purchases, he objected to the purchase of this property. "We don't need the land now. We may never need the land," Strickland said in an interview. "We're a long way from the passenger volume we would need to build a new terminal." Strickland was the only councillor to object to the purchase at Wednesday's council meeting. "We're in the process of bringing our debt load down on the airport," he said. The land purchase will be funded by debentures. It was in the airport capital plan for 2020, and now it will be moved up to 2018. The region will advance more than $4.4 million, which includes allowances for sale transaction costs (legal and closing fees) and for the possibility that environmental issues will be found on the property. Planning and works committee chair Coun. Tom Galloway acknowledged that the new airport master plan includes passenger thresholds for capital investment, but it also stipulates the region can consider buying land that becomes available — and that's what happened in this case. "We didn't go seek this property out. It has been for sale for some time," Galloway said.
The holiday shopping season is here and eReaders and tablets are poised to be the hot items of the season. To help you navigate through all of the devices out there, we have compiled our Holiday Gift Guide To eReaders list featuring the latest eReaders on the market. To help bring this list to life we will be sharing video demonstrations of a couple of the eReaders from our list. The above video is a demonstration of Barnes & Noble’s Nook eReader. The black-and-white eInk eReader has a six-inch screen and retails for $149, or $199 with 3G. It is Wi-Fi enabled and tied to the Barnes & Noble bookstore. You can give one a whirl yourself at your local B&N Nook Boutique.
A man who died after being pulled from the Potomac River Tuesday night has been identified as a D.C. police officer who was arrested and charged last week with producing child pornography, according to a department statement issued Wednesday morning. Marc Washington, 32, of Waldorf, had been freed from jail on Monday after his lawyer fought with prosecutors for two days over whether the officer should be released pending trial. A federal judge had given the seven-year veteran a 24-hour curfew in his father’s southern Maryland home and forced his father to surrender the deed to ensure that his son would return to court. Police said that they got a 911 call at about 8:15 p.m. Tuesday from a man. The U.S. Park Police responded to the first block of Ohio Drive SW, at Hains Point, and found an empty car with clothing located nearby. Police and Fire Department dive teams found a man in the cold river shortly before 9:30 p.m. He was rushed to an area hospital where he was pronounced dead, according to Gwendolyn Crump, the D.C. police department’s chief spokeswoman. Authorities would only say that the investigation was continuing. The police statement did not say how Washington died. Washington was charged with showing up at the home of a 15-year-old girl in Southeast Washington who had gone missing and returned. Police said he went into her bedroom the night of Dec. 1, told her to undress and took partially nude pictures of her. The mother later called police, who arrested him a short time later. Police have put another officer on desk duty as they investigate whether he tipped off Washington about his impending arrest. A police dispatcher had also inadvertently sent text of the mother’s complaint to computer screens of officers throughout the 7th District, where Washington was assigned. Court charging documents allege that Washington deleted pictures of the girl and others before police arrested him, but that investigators were able to retrieve most of the images. 
Police also said they found pictures of other females, including two who appeared to be minors, and were urging people to call with information. Prosecutors argued that Washington posed a threat to the community because his alleged actions occurred while he was on duty and armed. His attorney said that without his gun and badge, Washington posed no danger. He faced up to 30 years in federal prison if convicted. Days after Washington was arrested, police searched the apartment of another 7th District officer and linked him to a child pornography ring. That also involved a teenage runaway and other women, according to a search warrant affidavit. That officer has been placed on desk duty. He has not been arrested or charged, and police officials have said they do not believe the two cases are connected. Last week, D.C. Police Chief Cathy L. Lanier said the allegations tarnish the 4,000-member department, and she called the charges against Washington the most egregious because the alleged offenses occurred while he was on duty.
// repository: alterem/smartCityService
package com.zhcs.dao;

import java.util.List;
import java.util.Map;

import com.zhcs.entity.BaseCodeEntity;

//*****************************************************************************
/**
 * <p>Title: BaseCodeDao</p>
 * <p>Description: base codes (T_BASE_CDE)</p>
 * <p>Copyright: Copyright (c) 2017</p>
 * <p>Company: 深圳市智慧城市管家信息科技有限公司</p>
 * @author 刘晓东 - Alter
 * @version v1.0 2017-02-23
 */
//*****************************************************************************
public interface BaseCodeDao extends BaseDao<BaseCodeEntity> {

    /**
     * Fetch all entries of the given type.
     */
    List<BaseCodeEntity> selectByType(String type);

    BaseCodeEntity selectByTypeValue(Map<String, String> map);
}
from __future__ import print_function
import os, sys, imp
from time import time
import multiprocessing as MP
import numpy as np

# try:
#     from mpi4py import MPI
#     hasMPI = True
#     comm = MPI.COMM_WORLD
# except ImportError:
#     hasMPI = False


# FLORENCE BASE CLASS
class Base(object):
    """Kuru base class. General data such as directories, files, analysis
    session, etc. that need to be loaded a priori are stored here.

        pwd:                Florence's top level directory
        session:            {'FEM','BEM','Coupled'} Session to run
        __NO_DEBUG__:       Enter debug mode of the package (if false).
                            Activates all numerical checks
        __VECTORISATION__:  Activate numpy's (einsum) for computing elemental
                            matrices with no loops
        __PARALLEL__:       Activate multiprocessing for either shared or
                            distributed memory or both
        nCPU:               Number of concurrent cores/hyperthreads for parallelisation
        __MEMORY__:         {'SHARED','DISTRIBUTED','AUTO','HYBRID'} Option for
                            shared/distributed memory parallelisation
        C:                  [int] order of basis functions. Note that C=P-1
                            where P is polynomial degree
        norder:             [int] number of quadrature points
        plot:               [tuple of ints] plot flag for BEM
        nrplot:             [tuple] plot flag for Newton-Raphson convergence
        write:              [boolean] flag for writing simulation results in
                            .vtu/.mat/.eps/.dat formats
    """

    FloatType = np.float32
    IntType = np.int32

    if sys.version_info.major == 2:
        Range = xrange
    else:
        Range = range

    pwd = os.path.dirname(os.path.realpath('__file__'))
    session = 'FEM'
    # session = 'BEM'
    # session = 'Coupled'

    __NO_DEBUG__ = False
    __VECTORISATION__ = True
    __PARALLEL__ = False
    nCPU = 1
    __MEMORY__ = 'SHARED'

    C = 0
    norder = 2
    plot = (0, 3)
    nrplot = (0, 'last')
    write = 0

    # # PROBLEM SPATIAL DIMENSION - 1D, 2D, 3D
    # ndim = 2
    # nvar = ndim
    # Fields = 'Mechanics'
    # # Fields = 'ElectroMechanics'
    # Formulation = 'DisplacementApproach'
    # # Formulation = 'DisplacementElectricPotentialApproach'
    # Analysis = 'Static'
    # # Analysis = 'Dynamic'
    # AnalysisType = 'Linear'
    # # AnalysisType = 'Nonlinear'

    Timer = 0

    # DECIDE WHICH PARALLEL MODEL TO ACTIVATE
    def ParallelModel(self):
        # compare against the documented 'SHARED'/'DISTRIBUTED' options
        if self.__MEMORY__ == "SHARED":
            pass
        elif self.__MEMORY__ == "DISTRIBUTED":
            # requires mpi4py (import commented out above)
            print(comm.rank)

    isScaledJacobianComputed = False
Discussion in 'China & Far East' started by Get Ya Wig Split, Apr 16, 2019. The expressive president of the Philippines, Rodrigo Duterte, once gushed about his Chinese counterpart, “I just simply love Xi Jinping”. But the infatuation has faded. Upset that Chinese vessels have been mobbing the main Philippine-occupied island in the South China Sea, Mr Duterte rasped at China to “lay off”, and threatened an aggressive response. The same day, April 4th, American and Philippine forces practised storming a beach facing the South China Sea, in their biggest joint exercises since 2016, the year Mr Duterte announced a “separation” from America, his country’s only formal military ally. The Philippine pivot from America to China, dreamt up by his government to ease confrontation with China over overlapping claims in the South China Sea, has become a pirouette. For more than three months a flotilla of fishing vessels from China’s maritime militia has been swarming around Philippine-occupied Thitu, an island in the Spratly archipelago which is home both to a small military base and 200-odd civilians (see map). The manoeuvres appear to be a response to Philippine construction work on the island, to repair the airstrip and build a beaching ramp for small craft. Mr Duterte has responded with characteristic bluster. “I have soldiers there,” he warned the Chinese. “If you make a move there, that’s another story. I will tell my soldiers: ‘Prepare for suicide missions.’” The Chinese foreign ministry responded, slightly more stodgily, by noting that the Philippines and China had only recently “reiterated our commitments to further cooperation and talked about measures to enhance mutual trust”. Since the 1990s China has been occupying reefs and rocks in the South China Sea claimed by the Philippines and other littoral countries, and building on them. 
Since 2012, when the Philippine navy tried to arrest some Chinese fishermen near Scarborough Shoal, which both China and the Philippines claim, Chinese vessels have patrolled the surrounding waters and at times turned away Philippine fishermen. The Philippines asked an international tribunal to adjudicate. In 2016, just after Mr Duterte became president, the tribunal ruled in the Philippines’ favour, saying China’s claim to the shoal was baseless. Jingoism sells well in the Philippines (as it does in China), and in the run-up to his election Mr Duterte threatened to jump on a jet ski and defend the Philippines’ claim to Scarborough Shoal single-handedly. But once in office, he opted instead to cosy up to China. He has kept quiet about the tribunal’s ruling, which Chinese leaders had rejected. China, in turn, has pledged big investments in roads, ports and railways around the Philippines. And although it still turns away some Philippine vessels, it has not built any military installations on Scarborough Shoal. But mid-term elections are nearing. The opposition has been cudgelling Mr Duterte for selling out to China. Not much of the promised investment has materialised. And now the Chinese are testing boundaries around Thitu. Small wonder, then, that Mr Duterte, who is as mercurial as he is expressive, appears to have had a change of heart. But as even he acknowledges, the Philippines would lose a war with China, so it would be foolish to start one. Russian Warships Arrive in Philippines as Manila Distances Itself From U.S. Three Russian ships docked in Manila, the Philippines for a five-day port call on Monday aimed at improving navy-to-navy relations amid heightened tensions in the South China Sea. The arrival of two anti-submarine ships Admiral Tributs and Admiral Vinogradov and a tanker ship is the sixth visit by the Russian navy under Philippine President Rodrigo Duterte, who has distanced himself from treaty ally the U.S. and sought closer ties with Russia.
The Philippines and China have both laid claim to large portions of the South China Sea, with China building militarized artificial islands across the important shipping lane. The Russian warships will hold goodwill exercises including joint drills on navigation and communication as well as training with the Philippines’ quick response forces, the Philippine News Agency reported. The Russian visit comes in the middle of U.S.-Philippine military drills involving 7,500 troops that wrap up on April 12, according to CNN. Russia’s ships are scheduled to take part in joint naval drills with China in late April and early May after visiting several countries in the region, the state-run TASS news agency reported.
// cppfd/parallel_mesh.cpp
#include "parallel_mesh.h"

#include <iostream>
#include <array>
#include <map>
#include <cmath>

#ifdef OUTPUT_VTK_FILES
#include <vtkSmartPointer.h>
#include <vtkUniformGrid.h>
#include <vtkMultiBlockDataSet.h>
#include <vtkXMLMultiBlockDataWriter.h>
#endif

const uint64_t uint64_nan = static_cast<uint64_t>(-1);

ParallelMesh::ParallelMesh(uint64_t n_x, uint64_t n_y, double cell_size,
                           uint16_t n_p, uint16_t n_q, double o_x, double o_y)
  : n_cells_x(n_x)
  , n_cells_y(n_y)
  , n_blocks_x(n_p)
  , n_blocks_y(n_q)
  , h(cell_size)
  , origin({o_x, o_y})
  , q_x(n_x / n_p)
  , q_y(n_y / n_q)
  , r_x(n_x % n_p)
  , r_y(n_y % n_q){
  // Compute cutoff between wide and narrow blocks
  this->cutoff_x = this->r_x * (this->q_x + 1);
  this->cutoff_y = this->r_y * (this->q_y + 1);

  // iterate over row (Y) major over mesh chunks
  for (uint64_t q = 0; q < n_q; q++){
    // determine row height
    uint64_t n = (q < this->r_y) ? this->q_y + 1 : this->q_y;

    // initialize column (X) horizontal origin
    o_x = this->origin[0];

    // iterate over column (X) minor over mesh chunks
    for (uint64_t p = 0; p < n_p; p++){
      // determine column width
      uint64_t m = (p < this->r_x) ? this->q_x + 1 : this->q_x;

      // create default mesh chunk boundary point types
      std::map<PointIndexEnum, PointTypeEnum> pt = {
        {PointIndexEnum::CORNER_0, PointTypeEnum::GHOST},
        {PointIndexEnum::CORNER_2, PointTypeEnum::SHARED_OWNED},
        {PointIndexEnum::CORNER_1, PointTypeEnum::GHOST},
        {PointIndexEnum::CORNER_3, PointTypeEnum::GHOST},
        {PointIndexEnum::EDGE_0, PointTypeEnum::GHOST},
        {PointIndexEnum::EDGE_1, PointTypeEnum::SHARED_OWNED},
        {PointIndexEnum::EDGE_2, PointTypeEnum::SHARED_OWNED},
        {PointIndexEnum::EDGE_3, PointTypeEnum::GHOST},
        {PointIndexEnum::INTERIOR, PointTypeEnum::INTERIOR}
      };

      // override outer boundary point types when applicable
      if (q == 0)
        pt[PointIndexEnum::EDGE_0] = pt[PointIndexEnum::CORNER_0]
          = pt[PointIndexEnum::CORNER_1] = PointTypeEnum::BOUNDARY;
      if (p == n_p - 1)
        pt[PointIndexEnum::EDGE_1] = pt[PointIndexEnum::CORNER_1]
          = pt[PointIndexEnum::CORNER_2] = PointTypeEnum::BOUNDARY;
      if (q == n_q - 1)
        pt[PointIndexEnum::EDGE_2] = pt[PointIndexEnum::CORNER_2]
          = pt[PointIndexEnum::CORNER_3] = PointTypeEnum::BOUNDARY;
      if (p == 0)
        pt[PointIndexEnum::EDGE_3] = pt[PointIndexEnum::CORNER_3]
          = pt[PointIndexEnum::CORNER_0] = PointTypeEnum::BOUNDARY;

      // append new mesh block to existing ones
      this->mesh_chunks.emplace
        (std::piecewise_construct,
         std::forward_as_tuple(std::array<uint64_t,2>{q,p}),
         std::forward_as_tuple(m, n, this->h, pt, o_x, o_y));

      // slide horizontal origin rightward
      o_x += m * this->h;
    } // p

    // slide vertical origin upward
    o_y += n * this->h;
  } // q
}

LocalCoordinates ParallelMesh::
GlobalToLocalCellIndices(uint64_t m, uint64_t n) const{
  // return invalid values when global coordinates are out of bounds
  if (m >= this->n_cells_x || n >= this->n_cells_y)
    return {uint64_nan, uint64_nan, uint64_nan, uint64_nan};

  // compute X-axis local coordinates
  uint64_t p, i;
  if (m < this->cutoff_x){
    // coordinate falls in wider blocks
    auto d = ldiv(m, this->q_x + 1);
    p = d.quot;
    i = d.rem;
  } else{
    // coordinate falls in narrower blocks
    auto d = ldiv(m - this->cutoff_x, this->q_x);
    p = d.quot + this->r_x;
    i = d.rem;
  }

  // compute Y-axis local coordinates
  uint64_t q, j;
  if (n < this->cutoff_y){
    // coordinates fall in wider blocks
    auto d = ldiv(n, this->q_y + 1);
    q = d.quot;
    j = d.rem;
  } else{
    // coordinates fall in narrower blocks
    auto d = ldiv(n - this->cutoff_y, this->q_y);
    q = d.quot + this->r_y;
    j = d.rem;
  }

  // return valid indices
  return {p, q, i, j};
}

LocalCoordinates ParallelMesh::
GlobalToLocalPointIndices(uint64_t m, uint64_t n) const{
  // return invalid values when global coordinates are out of bounds
  if (m >= this->get_n_points_x() || n >= this->get_n_points_y())
    return {uint64_nan, uint64_nan, uint64_nan, uint64_nan};

  // return early for global mesh origin case
  if (m == 0 && n == 0)
    return {0, 0, 0, 0};

  // bottom left cell ownership of point when available
  uint64_t m_c = (m == 0) ? 0 : m - 1;
  uint64_t n_c = (n == 0) ? 0 : n - 1;
  LocalCoordinates loc_c = this->GlobalToLocalCellIndices(m_c, n_c);

  // return valid indices
  return {
    loc_c.block[0],
    loc_c.block[1],
    (m == 0) ? 0 : loc_c.local[0] + 1,
    (n == 0) ? 0 : loc_c.local[1] + 1
  };
}

std::array<uint64_t,2> ParallelMesh::
LocalToGlobalCellIndices(const LocalCoordinates& loc) const{
  // return invalid values when indices are out of bounds
  uint64_t p = loc.block[0];
  uint64_t q = loc.block[1];
  uint64_t i = loc.local[0];
  uint64_t j = loc.local[1];
  if (p >= this->n_blocks_x || q >= this->n_blocks_y)
    return {uint64_nan, uint64_nan};

  // compute X-axis global coordinate
  uint64_t m;
  if (p < this->r_x){
    // return invalid values when local index is out of bounds
    if (i > this->q_x)
      return {uint64_nan, uint64_nan};

    // coordinate falls in wider blocks
    m = (this->q_x + 1) * p + i;
  } else{
    // return invalid values when local index is out of bounds
    if (i >= this->q_x)
      return {uint64_nan, uint64_nan};

    // coordinate falls in narrower blocks
    m = this->cutoff_x + this->q_x * (p - this->r_x) + i;
  }

  // compute Y-axis global coordinate
  uint64_t n;
  if (q < this->r_y){
    // return invalid values when local index is out of bounds
    if (j > this->q_y)
      return {uint64_nan, uint64_nan};

    // coordinate falls in wider blocks
    n = (this->q_y + 1) * q + j;
  } else{
    // return invalid values when local index is out of bounds
    if (j >= this->q_y)
      return {uint64_nan, uint64_nan};

    // coordinate falls in narrower blocks
    n = this->cutoff_y + this->q_y * (q - this->r_y) + j;
  }

  // return valid indices
  return {m, n};
}

std::array<uint64_t,2> ParallelMesh::
LocalToGlobalPointIndices(const LocalCoordinates& loc) const{
  // return invalid values when indices are out of bounds
  uint64_t p = loc.block[0];
  uint64_t q = loc.block[1];
  uint64_t i = loc.local[0];
  uint64_t j = loc.local[1];
  if (p >= this->n_blocks_x || q >= this->n_blocks_y)
    return {uint64_nan, uint64_nan};

  // compute X-axis global coordinate
  uint64_t m;
  if (p < this->r_x){
    // return invalid values when local index is out of bounds
    if (i > this->q_x + 1)
      return {uint64_nan, uint64_nan};

    // coordinate falls in wider blocks
    m = (this->q_x + 1) * p + i;
  } else{
    // return invalid values when local index is out of bounds
    if (i > this->q_x)
      return {uint64_nan, uint64_nan};

    // coordinate falls in narrower blocks
    m = this->cutoff_x + this->q_x * (p - this->r_x) + i;
  }

  // compute Y-axis global coordinate
  uint64_t n;
  if (q < this->r_y){
    // return invalid values when local index is out of bounds
    if (j > this->q_y + 1)
      return {uint64_nan, uint64_nan};

    // coordinate falls in wider blocks
    n = (this->q_y + 1) * q + j;
  } else{
    // return invalid values when local index is out of bounds
    if (j > this->q_y)
      return {uint64_nan, uint64_nan};

    // coordinate falls in narrower blocks
    n = this->cutoff_y + this->q_y * (q - this->r_y) + j;
  }

  // return valid indices
  return {m, n};
}

#ifdef OUTPUT_VTK_FILES
std::string ParallelMesh::
write_vtm(const std::string& file_name) const{
  // assemble full file name with extension
  std::string full_file_name = file_name + ".vtm";

  // aggregate all mesh chunks as VTK multi-block data set
  vtkSmartPointer<vtkMultiBlockDataSet> mbs =
    vtkSmartPointer<vtkMultiBlockDataSet>::New();
  mbs->SetNumberOfBlocks(this->mesh_chunks.size());
  uint16_t i = 0;
  for (const auto& it_mesh_chunks : this->mesh_chunks)
    mbs->SetBlock(i++, it_mesh_chunks.second.make_VTK_uniform_grid().GetPointer());

  // write VTK multi-block data set (vtm) file
  vtkSmartPointer<vtkXMLMultiBlockDataWriter> output_file =
    vtkSmartPointer<vtkXMLMultiBlockDataWriter>::New();
  output_file->SetFileName(full_file_name.c_str());
  output_file->SetInputData(mbs);
  output_file->Write();

  // return full name with extension
  return full_file_name;
}
#endif
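The wide/narrow block split used by `GlobalToLocalCellIndices` above is plain quotient/remainder arithmetic: the first `r = n_cells % n_blocks` blocks get one extra cell. A minimal one-dimensional sketch of the same mapping in Python (a model of the arithmetic, not the C++ API):

```python
def global_to_local(m, n_cells, n_blocks):
    """Map a global cell index m to (block, local) indices for a 1-D
    decomposition of n_cells cells into n_blocks blocks, where the first
    r = n_cells % n_blocks blocks each hold one extra cell."""
    q, r = divmod(n_cells, n_blocks)   # base block width, remainder
    cutoff = r * (q + 1)               # first global index in the narrow blocks
    if m < cutoff:
        # index falls in a wide block (q + 1 cells)
        block, local = divmod(m, q + 1)
    else:
        # index falls in a narrow block (q cells)
        block, local = divmod(m - cutoff, q)
        block += r
    return block, local

# e.g. 10 cells over 3 blocks -> widths 4, 3, 3
assert global_to_local(3, 10, 3) == (0, 3)  # last cell of the wide block
assert global_to_local(4, 10, 3) == (1, 0)  # first cell of a narrow block
assert global_to_local(9, 10, 3) == (2, 2)  # last cell overall
```

The C++ code applies this mapping independently along X and Y, which is why each method computes the pair `(p, i)` and `(q, j)` with the same pattern.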
package credentials

// Environment variables that may be used by the provider
const (
	ENVCredentialFile  = "ALIBABA_CLOUD_CREDENTIALS_FILE"
	ENVEcsMetadata     = "ALIBABA_CLOUD_ECS_METADATA"
	PATHCredentialFile = "~/.alibabacloud/credentials"
)

// Provider will be implemented when you want to customize the provider.
type Provider interface {
	resolve() (*Config, error)
}
Weekly Variations of Well-Being and Interactions with Training and Match Intensities: A Descriptive Case Study in Youth Male Soccer Players

The aim of this study was two-fold: (i) to analyze the weekly variations of well-being and training/match intensity measures in youth soccer players, and (ii) to test relations between well-being and training intensity outcomes. The study followed a descriptive case study design. Twenty-seven under-17 male soccer players were monitored for well-being and training intensity parameters over seventeen consecutive weeks. An adjusted version of the Hooper questionnaire was used to monitor perceived sleep quality, readiness, fatigue, and delayed onset muscle soreness (DOMS) early in the morning. The CR-10 Borg scale was also used to monitor the rating of perceived exertion (RPE) of players after training sessions. Repeated-measures analysis of variance was executed to test the between-week variations of both well-being and training intensity outcomes. Moreover, the Pearson product-moment correlation was used to test the relations between well-being and training intensity outcomes. Repeated-measures ANOVA revealed significant differences between weeks in sleep quality (F = 0.422; p < 0.001; ηp² = 0.140), readiness (F = 8.734; p < 0.001; ηp² = 0.251), fatigue (F = 4.484; p < 0.001; ηp² = 0.147), DOMS (F = 3.775; p = 0.001; ηp² = 0.127), RPE (F = 7.301; p < 0.001; ηp² = 0.219), and session-RPE (F = 17.708; p < 0.001; ηp² = 0.405). Correlations between well-being and training intensity outcomes in the same week revealed a moderate correlation between fatigue and session-RPE (r = 0.325). In conclusion, well-being and training intensity fluctuate over the season, while well-being outcomes seem to be related to training intensity, although with a small magnitude.
Introduction

Managing the training process while monitoring the impact of the training stimulus on soccer players is part of the tasks of coaches and practitioners. Currently, applying an athlete-monitoring cycle in which training demands are tracked is a well-implemented practice in soccer clubs. As an example, a study summarizing the results of a survey performed at twenty-eight European soccer clubs revealed that 100% of the clubs use monitoring processes, most of them monitoring locomotor demands and half of them additionally monitoring psychophysiological demands. Additionally, a survey conducted on 84 coaches and 88 practitioners revealed that coaches and practitioners sometimes adjust training sessions based on previous training intensity monitoring, and that training intensity reports are often provided to coaches. Thus, monitoring the locomotor and psychophysiological demands imposed by training and/or matches is a usual practice in both adult and youth categories. While monitoring training demands is current practice, other factors should be considered to properly understand the impact of training and match stimuli on the players' responses. Thus, an athletes' monitoring cycle is proposed as a recommended practice to implement in any training scenario. The athletes' monitoring cycle consists of monitoring training demands (e.g., locomotor/mechanical and psychophysiological), as well as the well-being and the readiness of players. In this conceptual framework, perceptual well-being is related to training intensity, namely representing the way players are coping with training demands. The authors of this concept also suggest that poor perceptual well-being combined with high training demands should lead to an adjustment of the training dose, while a high training demand followed by a good level of perceptual well-being is a signal to continue the training process.
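The decision logic of the monitoring cycle described above can be sketched as a toy rule; the thresholds below are illustrative assumptions, not values from the cited framework:

```python
def training_dose_advice(wellbeing_score, session_rpe,
                         wellbeing_floor=5.0, load_ceiling=450.0):
    """Toy decision rule for the athlete-monitoring cycle: reduce the
    training dose when poor perceptual well-being coincides with high
    demands, continue when the player is coping well.
    All thresholds are illustrative, not from the cited framework."""
    high_load = session_rpe >= load_ceiling
    poor_wellbeing = wellbeing_score < wellbeing_floor
    if high_load and poor_wellbeing:
        return "reduce dose"          # poor well-being under high demand
    if high_load:
        return "continue"             # coping well with high demand
    return "maintain or progress"     # low demand: room to build

assert training_dose_advice(3.5, 520) == "reduce dose"
assert training_dose_advice(7.0, 520) == "continue"
```

In practice such rules are applied per player and per week, using the wellness and session-RPE records described in the Methods sections that follow.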
Descriptive studies have tried to test this interaction between training intensity and well-being outcomes in soccer, covering adults and youth. While direct relations between well-being outcomes (e.g., sleep quality, delayed onset muscle soreness (DOMS), mood, fatigue, stress) and training intensity (e.g., rating of perceived exertion, RPE) have revealed small-to-moderate magnitudes of correlation, specific original studies have suggested large magnitudes of correlation between well-being outcomes and some measures that identify accumulated training intensities and the variability of these demands. Possibly, fluctuations over the season can be a cause of that. Seasonal variations of training intensities and well-being outcomes in soccer players have been described. In youth, well-being scores seem to be more stable in the middle of the season, while in the early and final phases of the season they present greater variability. Interestingly, also in youth soccer players, significantly greater accumulated training demands were found in the middle of the season. Although well-established psychometric instruments such as the CR-10 Borg scale have been confirmed for their validity, reliability, and sensitivity, as have well-being questionnaires such as that proposed by Hooper and colleagues, there are some factors that can influence the direction and magnitude of correlations between well-being and training intensity. For example, relationships between well-being outcomes and training intensity on the same day can differ from relationships computed with some days of separation. This has not been reported, and it could be interesting to understand the possible delayed effects of accumulated training demands or accumulated poor well-being reports on the following training process. Possibly, better identification of such relationships may provide useful insight to coaches and parents for being attentive to some signals in players.
Considering the above-mentioned gap in the current research, it is important to describe the variations of well-being and training intensity outcomes and particularly to inspect the relations between these outcomes, with special attention to the effects of previously accumulated training intensity or accumulated well-being scores on the variations of the other parameters. Thus, the purpose of this study was two-fold: (i) to analyze the weekly variations of well-being measures in youth soccer players, and (ii) to test relations between well-being and training intensity outcomes.

Study Design

The study followed a descriptive case study design.

Setting

The observational period occurred between 29 July 2021 and 17 November 2021. Seventeen consecutive weeks were observed, including a total of 64 training sessions and 19 matches. The details about the observed period can be found in Table 1. Over the period, the players were asked to fill out a wellness questionnaire (an adjusted version of the Hooper questionnaire) and to rate the perceived exertion (RPE) associated with the training intensity. Moreover, the duration of the training sessions and/or matches was registered for further data treatment. Players only registered wellness scores on the days on which a training session and/or match occurred. The wellness scores were provided before training started, while the RPE was scored between 20 and 30 min after the end of the training session and/or match. Typically, training sessions were structured as a warm-up, followed by analytic exercises focusing on the conditioning of players (e.g., aerobic, anaerobic, speed, or change-of-direction) and a period of exercise with small-sided games and positioning games. After that, a short period of 11 vs. 11 play and a period of cool-down were implemented.

Participants

Convenience sampling was used in the current study. The players were recruited from the same team.
Twenty-seven male soccer players (age: 16.3 ± 0.3 years; height: 1.8 ± 0.1 m; body mass: 67.7 ± 7.4 kg; body mass index: 22.1 ± 0.9 kg/m²) voluntarily participated in the observational period. The following eligibility criteria were considered for including players in the data treatment: (i) reporting wellness and RPE scores every time they were part of training sessions and/or matches; (ii) participating in >90% of the training sessions occurring in the period of observation; (iii) participating in at least 50% of the matches occurring in the observational period; (iv) not exceeding more than one week of missing data. The study design and protocol were preliminarily explained and detailed to the players and their parents. After being informed about the risks and benefits, they signed a free consent form. The study followed the ethical standards for studies in humans, in accordance with the Declaration of Helsinki.

Well-Being Questionnaire

An adjusted version of the Hooper questionnaire was used, with an ordinal 10-point scale. The scores and verbal anchors can be found in Table 2. The questionnaire was introduced to the athletes in the two weeks before the start of the observational period, aiming to familiarize them with it. The scores were provided before each training session and/or match, about thirty minutes beforehand. The scores were provided individually, and the answers were registered by the observer in a database. The main outcomes extracted for further data treatment were the scores in the sleep quality, readiness, fatigue, and delayed onset muscle soreness (DOMS) categories assessed by the questionnaire.

Training and Match Intensity

The training intensity was monitored using the CR-10 Borg scale. The score of the scale varies between 0 (nothing at all) and 10 (extremely strong) in answer to the question "how intense was your training session?". The scores can be provided in increments of 0.5.
The CR-10 Borg scale was applied between 20 and 30 minutes after the end of the training session and/or match. The scores were provided individually, and the observer collected the information in a database. The players had previously been familiarized with the scale. The score provided was used as the RPE outcome for further statistical treatment. Additionally, the session-RPE for each training session and/or match was calculated as the CR-10 Borg scale score multiplied by the duration of the entire session (in minutes). The session-RPE was also used as a main outcome of the current research.

Statistical Procedures

The descriptive statistics are presented as mean and standard deviation. Normality and homogeneity of the sample were tested using the Shapiro-Wilk test and Levene's test, respectively. After confirmation of the normality and homogeneity assumptions (p > 0.05), a repeated measures ANOVA was conducted to analyze the variations of wellness and training intensity scores over the seventeen weeks. Bonferroni's post hoc test was used for the pairwise comparisons. Partial eta squared was used to determine the effect size of the analysis of variance. To analyze the relations between wellness and training intensity outcomes, a Pearson product-moment correlation test was executed. Averages and confidence intervals of the correlation coefficient (r) are presented. Magnitudes of correlation were classified based on predefined thresholds.

Results

Descriptive statistics of well-being and training intensity outcomes can be found in Table 3. Moreover, a graphical representation of the outcomes over the period of observation can be observed in Figure 1. Correlations between well-being and training intensity outcomes scored in the same week are presented in Table 4. A moderate correlation was found between fatigue and session-RPE (r = 0.325). Small magnitudes of correlation were found between session-RPE and sleep (r = −0.119), readiness (r = −0.235), and DOMS (r = 0.161).
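As a concrete illustration of the session-RPE computation and the Pearson product-moment correlation analysis described in the Statistical Procedures above, the following sketch uses entirely hypothetical weekly values (not the study's data):

```python
from math import sqrt

def session_rpe(cr10_score, duration_min):
    """Session-RPE: CR-10 Borg score multiplied by session duration in minutes."""
    return cr10_score * duration_min

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly averages for one player (illustration only)
weekly_fatigue = [6.1, 5.8, 5.2, 4.9, 4.7, 4.5]
weekly_srpe = [session_rpe(score, minutes) for score, minutes in
               [(7.0, 80), (6.5, 75), (6.0, 70), (5.5, 70), (5.0, 65), (4.5, 60)]]

r = pearson_r(weekly_fatigue, weekly_srpe)
print(f"r = {r:.3f}")
```

With these made-up values, fatigue and session-RPE decline together across the weeks, so the coefficient comes out strongly positive; the study's real coefficients in Tables 4-6 are, of course, much smaller.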
Small magnitudes of correlation were also found between RPE and fatigue (r = 0.170) and DOMS (r = 0.111). Table 4. Correlation coefficient (r and 95% confidence interval) between well-being and training/match intensity outcomes of the same week. Table 5 presents the correlations between well-being outcomes and the training intensities reported in the week immediately following the well-being reports. Small magnitudes of correlation were found between session-RPE and readiness (r = −0.115) and fatigue (r = 0.262). Small magnitudes of correlation were found between RPE and fatigue (r = 0.164) and DOMS (r = 0.102). Table 5. Correlation coefficient (r) between well-being of the previous week and training/match intensity outcomes of the following week. Correlation coefficients between training intensity and the well-being outcomes reported the week after the training intensity reports can be found in Table 6. Small magnitudes of correlation were found between RPE and readiness (r = −0.135), fatigue (r = 0.202), and DOMS (r = 0.122). Similarly, small magnitudes of correlation were found between session-RPE and readiness (r = −0.167), fatigue (r = 0.282), and DOMS (r = 0.134). Table 6. Correlation coefficient (r) between training/match intensity of the previous week and well-being outcomes of the following week.

Discussion

The aims of this study were to analyze the variations of well-being and intensity measures across 17 weeks in youth soccer players and to test associations between well-being and training intensity measures. Regarding the first aim, several significant differences between weeks were found for all well-being measures and for training intensity. Specifically, sleep quality was reported as good or higher in all weeks, and overall it seems that weeks with two matches reported higher values of sleep quality.
This finding seems to be in line with previous studies which found that high-intensity training sessions performed in the evening (in young soccer players), or matches (in professional soccer players), had no impact on sleep quality. Readiness showed a tendency toward higher values from week 6 onward. It seems that weeks with more matches led to a perception of higher readiness. Along the same lines, fatigue and DOMS perceptions showed higher values in the first weeks, while from week 6 onward a tendency toward lower values was observed. The intensity measures of RPE and session-RPE seem to be in line with the well-being measures: although there were some variations, after week 6 a tendency toward lower RPE and session-RPE values was observed until the last week analyzed. The well-being results seem to be in line with a previous study, although different approaches to data analysis were used. In the Nobari et al. study, weekly accumulated data were used instead of weekly average data and the original Hooper index was used, but the results appear aligned. Other studies found lower values during mid-season (weeks 14 to 31) for sleep quality, DOMS, and fatigue than earlier in the season (weeks 6 to 13). Although our study has a different design and covers only 17 weeks, we would speculate different results, because our data seem to support that the weeks with a higher number of matches show a tendency toward increased well-being perception and reduced intensity. Indeed, this is in opposition to a previous study conducted with professional soccer players, in which weeks with two matches showed higher values of fatigue and DOMS than weeks with only one match.
Regarding intensity, previous studies also showed higher values from week 6 onward when compared to the results of the present study, but one study showed higher values in the first month that tended to be reduced in the following two months, which seems to be in line with the present study. Despite the differences between studies, the RPE and session-RPE values found in this study seem to exceed the range of values reported in a recent systematic review conducted in young soccer players (RPE = 2.3-6.3 A.U.; session-RPE = 156-394 A.U.). Regarding the second aim of this study, there was a moderate correlation between fatigue and session-RPE, and small correlations between session-RPE and sleep, readiness, and DOMS, and between RPE and DOMS, in the same week. The correlation between fatigue and session-RPE was also found in another study that used weekly accumulated data. In fact, that study found correlations between session-RPE and both DOMS and fatigue. Another study in young soccer players also showed that fatigue, DOMS, and sleep were largely related to session-RPE. In professional soccer players, session-RPE also displayed moderate correlations with fatigue and DOMS. These previous correlations seem to support the findings of the present study. Although some differences exist, it seems that higher intensity is accompanied by higher levels of fatigue and DOMS, while at the same time higher levels of intensity seem to be associated with better readiness and sleep quality. This was also observed in our analysis when readiness and fatigue values were associated with both the RPE and session-RPE of the following week. Furthermore, both RPE and session-RPE also showed associations with readiness, fatigue, and DOMS. To our knowledge, this is the first study to conduct this type of analysis; therefore, future studies should consider it to expand knowledge in this field.
As mentioned at the beginning of this discussion, and despite the correlations shown, our data revealed that weeks with two matches tended to show better well-being and lower intensity. However, it is important to highlight that the number of matches was not considered in the correlation analysis, which is required in future studies. The present study has some limitations, namely: the small sample size, drawn from only one team; an analysis of 17 weeks rather than the entire season; the lack of locomotor measures (e.g., high-speed running, sprinting, and accelerations) that could amplify the present results; and the lack of dietary control and supplementation. Finally, an intra-individual analysis considering the interaction between locomotor demands, playing position, physical fitness, and lifestyle was not performed and should be included in future research aiming to explain the causes of the variations. Therefore, future studies should avoid these limitations and use larger sample sizes, full-season analyses, and external load measures. In addition, other contextual variables, such as match results, could influence the findings and should be considered in future studies, as previously suggested. For instance, a match win was shown to provide better sleep quality when compared with a draw or a loss. Along the same lines, match location should be taken into consideration in future analyses, because away matches requiring longer travel distances have been shown to impair sleep/wake behavior. Moreover, analysis of dietary intake and supplementation should be considered, namely by trying to establish relationships with wellness and with coping with training demands. Furthermore, some studies have shown the importance of playing position, due to the different physical and physiological demands and the resulting variations in well-being, as well as playing status (starters and non-starters), which reflects differences across the season in young soccer players.
For that reason, they should be considered in future research. Lastly, similar designs should be replicated not only with young soccer players, but also with professional elite men's and women's players. Additionally, future studies should analyze the influence of congested periods (weeks with two or more matches) compared with regular weeks (weeks with only one match). Nonetheless, this study should be considered by coaches and their staff as support for treating internal intensity and well-being measures such as readiness, sleep quality, fatigue, and DOMS as a mandatory daily monitoring task.

Conclusions

This study showed that well-being and training intensity fluctuate over the weeks. In addition, well-being measures seem to be related to training intensity, although with a small magnitude (only one moderate correlation was found, between session-RPE and fatigue). Even so, this study showed a tendency toward lower internal intensity and better well-being in the weeks with two matches. Funding: This work is funded by Fundação para a Ciência e a Tecnologia/Ministério da Ciência, Tecnologia e Ensino Superior through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Escola Superior de Desporto e Lazer ethical committee with the code CTC-ESDL-CE001-2021. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Data available on request due to privacy restrictions. The data presented in this study are available on request from the first author. Conflicts of Interest: The authors declare no conflict of interest.
Attempts continue to stop the popular former president Lula da Silva from standing in Brazil’s presidential election in October (Report, 5 April). Polls show he would be the likely winner, yet since the removal of President Dilma Rousseff, Lula has been subjected to a concerted campaign against him, where his basic human rights have been breached. As part of this, Lula has been subjected to a political prosecution and conviction, ignoring evidence of his innocence, and triggering a crisis of confidence in the rule of law. This is not just about one man but the future of democracy in Brazil. We believe he should be allowed to stand and the Brazilian people allowed to decide their own future. • The imprisonment of ex-president Luiz Inácio Lula da Silva would be a wake-up call for the Brazilian left. In Lula of Brazil: The Story so Far, I recorded his considerable achievements, up to the end of his first term in office. These included his role in the overthrow of the military dictatorship, his success in moves towards social equality, particularly through payments to poorer families, and his promotion of Brazil on the global scene. But the Brazilian left cannot rely on one man, whom the justice system has found to have flaws. With elections this year, Lula’s Workers’ party must regain the idealism at its foundation, fight corruption at all levels, and devise a strategy which deals with the economic, social and environmental challenges of the country. President Temer, and many Congress deputies, are facing much more serious allegations of corruption than those that brought down Lula. The models that worked for the left in the early years of this century are inadequate now. The threat of a return to a reactionary past is very pressing. Witness the popularity of the rightwing Jair Bolsonaro, an admirer of the military regime, in the run-up to this year’s presidential elections. 
The anger and disappointment at the judges’ ruling must be converted into a positive desire to clean up the political system, end the recession and take Brazil forward again.
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdint.h> #define __STDC_FORMAT_MACROS #include <inttypes.h> #define PCRE2_CODE_UNIT_WIDTH 8 #include <pcre2.h> #include <raims/session.h> using namespace rai; using namespace ms; using namespace kv; using namespace md; uint64_t SubDB::psub_start( PatternArgs &ctx ) noexcept { SubStatus status = this->pat_tab.start( ctx ); if ( status == SUB_OK || status == SUB_UPDATED ) { this->update_bloom( ctx ); if ( status == SUB_OK ) { this->fwd_psub( ctx ); return this->sub_seqno; } if ( status == SUB_UPDATED ) return ctx.seqno; } return 0; } uint64_t SubDB::psub_stop( PatternArgs &ctx ) noexcept { SubStatus status = this->pat_tab.stop( ctx ); if ( status == SUB_OK || status == SUB_UPDATED ) { this->update_bloom( ctx ); if ( status == SUB_OK ) { this->fwd_psub( ctx ); this->pat_tab.remove( ctx ); return this->sub_seqno; } if ( status == SUB_UPDATED ) return ctx.seqno; } return 0; } bool SubDB::add_bloom( PatternArgs &ctx, BloomRef &b ) noexcept { bool rsz = false; if ( ctx.rt->detail_type == NO_DETAIL ) rsz = b.add_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash ); else if ( ctx.rt->detail_type == SUFFIX_MATCH ) rsz = b.add_suffix_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, ctx.rt->u.suffix ); else if ( ctx.rt->detail_type == SHARD_MATCH ) rsz = b.add_shard_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, ctx.rt->u.shard ); else fprintf( stderr, "bad detail\n" ); return rsz; } void SubDB::del_bloom( PatternArgs &ctx, BloomRef &b ) noexcept { if ( ctx.rt->detail_type == NO_DETAIL ) b.del_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash ); else if ( ctx.rt->detail_type == SUFFIX_MATCH ) b.del_suffix_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, ctx.rt->u.suffix ); else if ( ctx.rt->detail_type == SHARD_MATCH ) b.del_shard_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, ctx.rt->u.shard ); else fprintf( stderr, "bad detail\n" ); } void SubDB::update_bloom( PatternArgs &ctx ) noexcept { bool rsz = false; 
this->update_seqno++; if ( ctx.is_start ) { if ( ctx.sub_count == 1 ) rsz = this->add_bloom( ctx, this->bloom ); if ( ( ctx.flags & INTERNAL_SUB ) != 0 && ctx.internal_count == 1 ) rsz |= this->add_bloom( ctx, this->internal ); if ( ( ctx.flags & EXTERNAL_SUB ) != 0 && ctx.external_count == 1 ) rsz |= this->add_bloom( ctx, this->external ); } else { if ( ctx.sub_count == 0 ) this->del_bloom( ctx, this->bloom ); if ( ( ctx.flags & INTERNAL_SUB ) != 0 && ctx.internal_count == 0 ) this->del_bloom( ctx, this->internal ); if ( ( ctx.flags & EXTERNAL_SUB ) != 0 && ctx.external_count == 0 ) this->del_bloom( ctx, this->external ); } if ( rsz ) this->resize_bloom(); } static bool cvt_wild( PatternCvt &cvt, const char *pat, uint16_t patlen, const uint32_t *seed, PatternFmt fmt, uint32_t &hash ) noexcept { if ( fmt == RV_PATTERN_FMT ) { if ( cvt.convert_rv( pat, patlen ) != 0 ) { fprintf( stderr, "bad pattern: %.*s\n", (int) patlen, pat ); return false; } } else if ( fmt == GLOB_PATTERN_FMT ) { if ( cvt.convert_glob( pat, patlen ) != 0 ) { fprintf( stderr, "bad pattern: %.*s\n", (int) patlen, pat ); return false; } } else { fprintf( stderr, "bad pattern fmt(%u): %.*s\n", fmt, (int) patlen, pat ); return false; } hash = kv_crc_c( pat, cvt.prefixlen, seed[ cvt.prefixlen ] ); return true; } uint64_t SubDB::internal_psub_start( const char *pat, uint16_t patlen, PatternFmt fmt, SubOnMsg *cb ) noexcept { PatternCvt cvt; PatternArgs ctx( pat, patlen, cvt, true, cb, this->sub_seqno + 1, INTERNAL_SUB, 0 ); if ( ! cvt_wild( cvt, pat, patlen, this->pat_tab.seed, fmt, ctx.hash ) ) return 0; return this->psub_start( ctx ); } uint64_t SubDB::internal_psub_stop( const char *pat, uint16_t patlen, PatternFmt fmt ) noexcept { PatternCvt cvt; PatternArgs ctx( pat, patlen, cvt, false, NULL, 0, INTERNAL_SUB, 0 ); if ( ! 
cvt_wild( cvt, pat, patlen, this->pat_tab.seed, fmt, ctx.hash ) ) return 0; return this->psub_stop( ctx ); } uint64_t SubDB::external_psub_start( NotifyPattern &pat, uint32_t tport_id ) noexcept { PatternArgs ctx( pat.pattern, pat.pattern_len, pat.cvt, true, NULL, this->sub_seqno + 1, EXTERNAL_SUB, tport_id ); ctx.hash = pat.prefix_hash; return this->psub_start( ctx ); } uint64_t SubDB::external_psub_stop( NotifyPattern &pat, uint32_t tport_id ) noexcept { PatternArgs ctx( pat.pattern, pat.pattern_len, pat.cvt, false, NULL, 0, EXTERNAL_SUB, tport_id ); ctx.hash = pat.prefix_hash; return this->psub_stop( ctx ); } void SubDB::fwd_psub( PatternArgs &ctx ) noexcept { const char * sub_prefix = ( ctx.is_start ? P_PSUB : P_PSTOP ); size_t sub_prelen = ( ctx.is_start ? P_PSUB_SZ : P_PSTOP_SZ ); SubjectVar s( sub_prefix, sub_prelen, ctx.pat, ctx.cvt.prefixlen ); MsgEst e( s.len() ); e.seqno () .pattern ( ctx.patlen ) .fmt () .ref_count(); MsgCat m; m.reserve( e.sz ); m.open( this->user_db.bridge_id.nonce, s.len() ) .seqno ( ++this->sub_seqno ) .pattern ( ctx.pat, ctx.patlen ) .fmt ( (uint32_t) ctx.cvt.fmt ); uint32_t h = s.hash(); m.close( e.sz, h, CABA_RTR_ALERT ); m.sign( s.msg, s.len(), *this->user_db.session_key ); d_sub( "psub(%.*s) %" PRIu64 "\n", (int) ctx.patlen, ctx.pat, ctx.cvt.prefixlen ); size_t count = this->user_db.transport_tab.count; for ( size_t i = 0; i < count; i++ ) { TransportRoute *rte = this->user_db.transport_tab.ptr[ i ]; if ( ! 
rte->is_set( TPORT_IS_EXTERNAL ) ) { NotifyPattern npat( ctx.cvt, ctx.pat, ctx.patlen, ctx.hash, this->my_src_fd, false, 'M' ); if ( ctx.is_start ) rte->sub_route.do_notify_psub( npat ); else rte->sub_route.do_notify_punsub( npat ); EvPublish pub( s.msg, s.len(), NULL, 0, m.msg, m.len(), rte->sub_route, this->my_src_fd, h, CABA_TYPE_ID, 'p' ); rte->forward_to_connected_auth( pub ); } } } SubStatus PatTab::start( PatternArgs &ctx ) noexcept { ctx.rt = this->tab.upsert( ctx.hash, ctx.pat, ctx.patlen, ctx.loc ); if ( ctx.rt == NULL ) return SUB_ERROR; if ( ctx.loc.is_new ) { if ( ! ctx.rt->start( ctx ) ) { this->tab.remove( ctx.loc ); return SUB_ERROR; } this->list.push( ctx.seqno, ctx.hash, ACTION_PSUB_START ); return SUB_OK; } if ( ctx.rt->add( ctx ) ) return SUB_UPDATED; return SUB_EXISTS; } SubStatus PatTab::stop( PatternArgs &ctx ) noexcept { ctx.rt = this->tab.find( ctx.hash, ctx.pat, ctx.patlen, ctx.loc ); if ( ctx.rt == NULL ) return SUB_NOT_FOUND; if ( ! ctx.rt->rem( ctx ) ) return SUB_UPDATED; return SUB_OK; } void PatTab::remove( PatternArgs &ctx ) noexcept { this->list.pop( ctx.rt->start_seqno ); ctx.rt->release(); this->tab.remove( ctx.loc ); } #if 0 void PatTab::prefix_count( PatternArgs &ctx ) noexcept { RouteLoc loc; PatRoute * rt = this->tab.find_by_hash( ctx.hash, loc ); ctx.count = 0; while ( rt != NULL ) { if ( ctx.cvt.prefixlen == rt->prefix_len && ::memcmp( ctx.pat, rt->value, rt->prefix_len ) == 0 ) { rt->ref_index = ctx.count++; } rt = this->tab.find_next_by_hash( ctx.hash, loc ); } } #endif PatRoute * PatTab::find_sub( uint32_t hash, uint64_t seqno ) noexcept { kv::RouteLoc loc; PatRoute * rt = this->tab.find_by_hash( hash, loc ); while ( rt != NULL ) { if ( rt->start_seqno == seqno ) break; rt = this->tab.find_next_by_hash( hash, loc ); } return rt; } bool PatTab::prefix_hash_exists( uint16_t prefix_len, uint32_t hash ) noexcept { kv::RouteLoc loc; PatRoute * rt = this->tab.find_by_hash( hash, loc ); while ( rt != NULL ) { if ( prefix_len == 
rt->prefix_len /*&& ::memcmp( sub, rt->value, prefix_len ) == 0*/ ) { return true; } rt = this->tab.find_next_by_hash( hash, loc ); } return false; } void PatTab::release( void ) noexcept { kv::RouteLoc loc; for ( PatRoute *rt = this->tab.first( loc ); rt != NULL; rt = this->tab.next( loc ) ) { rt->release(); } this->tab.release(); } bool PatRoute::start( PatternArgs &ctx ) noexcept { size_t erroff; int error; bool pattern_success = false; this->re = NULL; this->md = NULL; /* if prefix matches, no need for pcre2 */ if ( ctx.cvt.prefixlen + 1 == ctx.patlen && ( ( ctx.cvt.fmt == RV_PATTERN_FMT && ctx.pat[ ctx.cvt.prefixlen ] == '>' ) || ( ctx.cvt.fmt == GLOB_PATTERN_FMT && ctx.pat[ ctx.cvt.prefixlen ] == '*' ) ) ) pattern_success = true; else { this->re = pcre2_compile( (uint8_t *) ctx.cvt.out, ctx.cvt.off, 0, &error, &erroff, 0 ); if ( this->re == NULL ) { fprintf( stderr, "re failed\n" ); } else { this->md = pcre2_match_data_create_from_pattern( this->re, NULL ); if ( this->md == NULL ) fprintf( stderr, "md failed\n" ); else pattern_success = true; } } if ( pattern_success && this->from_pattern( ctx.cvt ) ) { this->prefix_len = (uint16_t) ctx.cvt.prefixlen; this->start_seqno = ctx.seqno; this->on_data = ctx.cb; this->ref.init( ctx.flags, ctx.tport_id ); ctx.sub_count = 1; ctx.internal_count = this->ref.internal_ref; ctx.external_count = this->ref.external_refs; return true; } if ( this->md != NULL ) pcre2_match_data_free( this->md ); if ( this->re != NULL ) pcre2_code_free( this->re ); return false; } bool PatRoute::add( PatternArgs &ctx ) noexcept { if ( this->ref.add( ctx.flags, ctx.tport_id ) ) { if ( ( ctx.flags & INTERNAL_SUB ) != 0 ) this->on_data = ctx.cb; ctx.sub_count = this->ref.ref_count(); ctx.internal_count = this->ref.internal_ref; ctx.external_count = this->ref.external_refs; ctx.seqno = this->start_seqno; return true; } return false; } bool PatRoute::rem( PatternArgs &ctx ) noexcept { if ( this->ref.rem( ctx.flags, ctx.tport_id ) ) { if ( ( 
ctx.flags & INTERNAL_SUB ) != 0 ) this->on_data = NULL; ctx.sub_count = this->ref.ref_count(); ctx.internal_count = this->ref.internal_ref; ctx.external_count = this->ref.external_refs; if ( ctx.sub_count == 0 ) return true; ctx.seqno = this->start_seqno; } return false; } bool PatRoute::match( const char *sub, size_t sublen ) const noexcept { if ( this->re == NULL ) { return sublen >= (size_t) this->prefix_len && /* len has > or * suffix */ ::memcmp( this->value, sub, this->prefix_len ) == 0; } return pcre2_match( this->re, (const uint8_t *) sub, sublen, 0, 0, this->md, 0 ) == 1; } void PatRoute::release( void ) noexcept { if ( this->md != NULL ) pcre2_match_data_free( this->md ); if ( this->re != NULL ) pcre2_code_free( this->re ); } bool SubDB::recv_repsub_result( const MsgFramePublish &, UserBridge &, const MsgHdrDecoder & ) noexcept { return true; } bool SubDB::recv_psub_start( const MsgFramePublish &pub, UserBridge &n, const MsgHdrDecoder &dec ) noexcept { if ( dec.test_2( FID_PATTERN, FID_FMT ) ) { UserRoute & u_rte = *n.user_route; TransportRoute & rte = u_rte.rte; PatternCvt cvt; BloomDetail d; uint32_t fmt; dec.get_ival<uint32_t>( FID_FMT, fmt ); PatternArgs ctx( (const char *) dec.mref[ FID_PATTERN ].fptr, (uint16_t) dec.mref[ FID_PATTERN ].fsize, cvt, true, NULL, 0, 0, 0 ); if ( ! 
cvt_wild( cvt, ctx.pat, ctx.patlen, this->pat_tab.seed, (PatternFmt) fmt, ctx.hash ) ) return true; if ( d.from_pattern( ctx.cvt ) ) { if ( d.detail_type == NO_DETAIL ) { n.bloom.add_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash ); } else if ( d.detail_type == SUFFIX_MATCH ) { n.bloom.add_suffix_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, d.u.suffix ); } else if ( d.detail_type == SHARD_MATCH ) { n.bloom.add_shard_route( (uint16_t) ctx.cvt.prefixlen, ctx.hash, d.u.shard ); } } NotifyPattern npat( ctx.cvt, ctx.pat, ctx.patlen, ctx.hash, u_rte.mcast_fd, false, 'M' ); rte.sub_route.do_notify_psub( npat ); if ( debug_sub ) n.printf( "psub_start %.*s\n", (int) pub.subject_len, pub.subject ); this->user_db.forward_pub( pub, n, dec ); } return true; } bool SubDB::recv_psub_stop( const MsgFramePublish &pub, UserBridge &n, const MsgHdrDecoder &dec ) noexcept { if ( dec.test_2( FID_PATTERN, FID_FMT ) ) { UserRoute & u_rte = *n.user_route; TransportRoute & rte = u_rte.rte; PatternCvt cvt; BloomDetail d; uint32_t fmt; dec.get_ival<uint32_t>( FID_FMT, fmt ); PatternArgs ctx( (const char *) dec.mref[ FID_PATTERN ].fptr, (uint16_t) dec.mref[ FID_PATTERN ].fsize, cvt, false, NULL, 0, 0, 0 ); if ( ! cvt_wild( cvt, ctx.pat, ctx.patlen, this->pat_tab.seed, (PatternFmt) fmt, ctx.hash ) ) return true; if ( d.from_pattern( cvt ) ) { if ( d.detail_type == NO_DETAIL ) n.bloom.del_route( (uint16_t) cvt.prefixlen, ctx.hash ); else if ( d.detail_type == SUFFIX_MATCH ) n.bloom.del_suffix_route( (uint16_t) cvt.prefixlen, ctx.hash, d.u.suffix ); else if ( d.detail_type == SHARD_MATCH ) n.bloom.del_shard_route( (uint16_t) cvt.prefixlen, ctx.hash, d.u.shard ); } NotifyPattern npat( cvt, ctx.pat, ctx.patlen, ctx.hash, u_rte.mcast_fd, false, 'M' ); rte.sub_route.do_notify_punsub( npat ); if ( debug_sub ) n.printf( "psub_stop %.*s\n", (int) pub.subject_len, pub.subject ); this->user_db.forward_pub( pub, n, dec ); } return true; }
Physical and psychosocial factors associated with wrist or hand pain among Australian hospital-based nurses Objective To assess the personal, physical and psychosocial factors associated with wrist or hand pain in Australian hospital-based nurses. Methods Wrist or hand pain, associated disability and sickness absence, demographic, occupational, physical, psychosocial and personal factors among nurses working for three hospitals in Melbourne, Australia, were assessed in a cross-sectional study. Factors associated with wrist or hand pain in the past month were assessed using logistic regression. Results This analysis was based on 1111 participants. The prevalence of wrist or hand pain in the past month was 15.3%. Repeated movements of the wrist or finger >4 h (OR 2.63, 95% CI 1.80 to 3.84), high job strain (1.54, 1.04 to 2.28), job insecurity (1.55, 1.04 to 2.28), somatisation tendency (2.73, 1.75 to 4.26), pain catastrophising (1.56, 1.03 to 2.37), better mental (0.97, 0.95 to 0.99) and physical (0.96, 0.94 to 0.98) health and well-being were associated with wrist or hand pain in the past month, after adjusting for possible confounding factors. When all significant factors were examined in the same model, repeated movements of the wrist or finger >4 h (2.50, 1.71 to 3.67), somatisation (2.61, 1.65 to 4.13) and better physical health and well-being (0.96, 0.94 to 0.99) remained independently associated with wrist or hand pain in the past month. Conclusions This study highlights that wrist or hand pain is prevalent in hospital nurses. Workplace physical factors and personal factors were associated with wrist or hand pain. Further longitudinal investigation is needed to examine the predictive nature of these factors.
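The adjusted odds ratios with 95% confidence intervals reported above come from logistic regression. As a simplified, hedged illustration of where such figures come from, a crude (unadjusted) odds ratio and its Woolf 95% CI can be computed from a 2×2 table; the counts below are hypothetical and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf (logit-method) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: pain vs. no pain among nurses with/without
# >4 h of repeated wrist or finger movements
or_, lo, hi = odds_ratio_ci(60, 240, 110, 701)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The study's own estimates are adjusted for confounders in a multivariable model, so they would not match a crude calculation like this one.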
import sys

N, K = map(int, input().split())
int_array = list(map(int, input().split()))

# Validate constraints: 0 <= N, K <= 50 and every value <= 50
if N < 0 or K < 0 or N > 50 or K > 50:
    sys.exit()
for value in int_array:
    if value > 50:
        sys.exit()

# Sum the K largest values
rev_sort_int_array = sorted(int_array, reverse=True)
result = sum(rev_sort_int_array[:K])
print(result)
// Copyright 2021 The Fuchsia Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. #![cfg(test)] use { crate::*, blobfs_ramdisk::BlobfsRamdisk, fuchsia_async as fasync, fuchsia_pkg_testing::{PackageBuilder, SystemImageBuilder}, pkgfs_ramdisk::PkgfsRamdisk, std::io, }; #[fasync::run_singlethreaded(test)] async fn test_pkgfs_install_update_after_gc() { // GC doesn't work without a working system image let system_image_package = SystemImageBuilder::new().pkgfs_non_static_packages_allowlist(&["example"]).build().await; let blobfs = BlobfsRamdisk::start().unwrap(); system_image_package.write_to_blobfs_dir(&blobfs.root_dir().unwrap()); let pkgfs = PkgfsRamdisk::builder() .blobfs(blobfs) .system_image_merkle(system_image_package.meta_far_merkle_root()) .start() .unwrap(); let d = pkgfs.root_dir().expect("getting pkgfs root dir"); let pkg = example_package().await; install(&pkgfs, &pkg); assert_eq!(ls_simple(d.list_dir("packages/example").unwrap()).unwrap(), ["0"]); verify_contents(&pkg, subdir_proxy(&d, "packages/example/0")) .await .expect("valid example package"); let pkg2 = PackageBuilder::new("example") .add_resource_at("a/b", "Hello world 2!\n".as_bytes()) .build() .await .expect("build package"); install(&pkgfs, &pkg2); assert_eq!(sorted(ls(&pkgfs, "packages").unwrap()), ["example", "system_image"]); assert_eq!(ls_simple(d.list_dir("packages/example").unwrap()).unwrap(), ["0"]); verify_contents(&pkg2, subdir_proxy(&d, "packages/example/0")) .await .expect("pkg2 replaced pkg"); assert_eq!( sorted(ls(&pkgfs, "versions").unwrap()), sorted(vec![ pkg2.meta_far_merkle_root().to_string(), system_image_package.meta_far_merkle_root().to_string() ]) ); // old version is no longer accesible. 
assert_error_kind!( d.metadata(&format!("versions/{}", pkg.meta_far_merkle_root())).map(|m| m.is_dir()), io::ErrorKind::NotFound ); { let blobfs_dir = pkgfs.blobfs().root_dir().unwrap(); // Old blobs still in blobfs. let expected_blobs = sorted( pkg.list_blobs() .unwrap() .into_iter() .chain(pkg2.list_blobs().unwrap()) .chain(system_image_package.list_blobs().unwrap()) .map(|m| m.to_string()) .collect(), ); assert_eq!(sorted(ls_simple(blobfs_dir.list_dir(".").unwrap()).unwrap()), expected_blobs); // Trigger GC d.remove_dir("ctl/do-not-use-this-garbage").unwrap(); // pkg blobs are in blobfs no longer let expected_blobs = sorted( pkg2.list_blobs() .unwrap() .into_iter() .chain(system_image_package.list_blobs().unwrap()) .map(|m| m.to_string()) .collect(), ); let got_blobs = sorted(ls_simple(blobfs_dir.list_dir(".").unwrap()).unwrap()); assert_eq!(got_blobs, expected_blobs); } drop(d); pkgfs.stop().await.expect("stopping pkgfs"); } #[fasync::run_singlethreaded(test)] async fn test_pkgfs_shadowed_cache_package() { let pkg = example_package().await; let system_image_package = SystemImageBuilder::new() .cache_packages(&[&pkg]) .pkgfs_non_static_packages_allowlist(&["example"]) .build() .await; let blobfs = BlobfsRamdisk::start().unwrap(); system_image_package.write_to_blobfs_dir(&blobfs.root_dir().unwrap()); pkg.write_to_blobfs_dir(&blobfs.root_dir().unwrap()); let pkgfs = PkgfsRamdisk::builder() .blobfs(blobfs) .system_image_merkle(system_image_package.meta_far_merkle_root()) .start() .unwrap(); let d = pkgfs.root_dir().expect("getting pkgfs root dir"); assert_eq!(ls_simple(d.list_dir("packages/example").unwrap()).unwrap(), ["0"]); verify_contents(&pkg, subdir_proxy(&d, "packages/example/0")) .await .expect("valid example package"); let pkg2 = PackageBuilder::new("example") .add_resource_at("a/b", "Hello world 2!\n".as_bytes()) .build() .await .expect("build package"); install(&pkgfs, &pkg2); assert_eq!(sorted(ls(&pkgfs, "packages").unwrap()), ["example", 
"system_image"]); assert_eq!(ls_simple(d.list_dir("packages/example").unwrap()).unwrap(), ["0"]); verify_contents(&pkg2, subdir_proxy(&d, "packages/example/0")) .await .expect("pkg2 replaced pkg"); assert_eq!( sorted(ls(&pkgfs, "versions").unwrap()), sorted(vec![ pkg2.meta_far_merkle_root().to_string(), system_image_package.meta_far_merkle_root().to_string() ]) ); // cached version is no longer accesible. assert_error_kind!( d.metadata(&format!("versions/{}", pkg.meta_far_merkle_root())).map(|m| m.is_dir()), io::ErrorKind::NotFound ); { let blobfs_dir = pkgfs.blobfs().root_dir().unwrap(); // Old blobs still in blobfs. let expected_blobs = sorted( pkg.list_blobs() .unwrap() .into_iter() .chain(pkg2.list_blobs().unwrap()) .chain(system_image_package.list_blobs().unwrap()) .map(|m| m.to_string()) .collect(), ); assert_eq!(sorted(ls_simple(blobfs_dir.list_dir(".").unwrap()).unwrap()), expected_blobs); // Trigger GC d.remove_dir("ctl/do-not-use-this-garbage").unwrap(); // cached pkg blobs are in blobfs no longer let expected_blobs = sorted( pkg2.list_blobs() .unwrap() .into_iter() .chain(system_image_package.list_blobs().unwrap()) .map(|m| m.to_string()) .collect(), ); let got_blobs = sorted(ls_simple(blobfs_dir.list_dir(".").unwrap()).unwrap()); assert_eq!(got_blobs, expected_blobs); } drop(d); pkgfs.stop().await.expect("stopping pkgfs"); }
//!
//! The really interesting stuff happens in the game_level module!
//!
//! Check that out!
//!

mod game_level;

pub(crate) use game_level::*;
SIGIR membership directory 1997 The use of this directory for commercial or promotional purposes is prohibited. ACM/SIGIR has used its best efforts in collecting and preparing material for inclusion in this directory, but does not warrant that the information herein is complete or accurate, and does not assume, and hereby disclaims, any liability to any person for any loss or damage caused by errors or omissions in this directory whether such errors or omissions result from negligence, accident, or any other cause.
/**
 * Build time series index using different random vectors for each R: this is the original method.
 * Implemented by Mijung.
 * NOTE: to reuse this method, rename it buildIndex and rename the previous method.
 */
void IndexBuilder::buildIndexAllR(int Rindex, pair<int,int> range_L, int querylen)
{
    timeval begin, end;
    time_t delta;

    gettimeofday(&begin, nullptr);
    TimeSeriesBuilder& tBuilder = LSHManager::getInstance()->getTimeSeriesBuilder();
    LSH_HashFunction& lshHash = LSHManager::getInstance()->getLSH_HashFunction();
    gettimeofday(&end, nullptr);
    delta = (end.tv_sec - begin.tv_sec)*1000000 + (end.tv_usec - begin.tv_usec);
    cout << "getTimeSeriesBuilder/LSH_HashFunction " << delta << " microseconds" << endl;

    gettimeofday(&begin, nullptr);
    int ts_cnt = tBuilder.getCountAllTimeSeries();
    gettimeofday(&end, nullptr);
    delta = (end.tv_sec - begin.tv_sec)*1000000 + (end.tv_usec - begin.tv_usec);
    cout << "getAllTimeSeries " << delta << " microseconds" << endl;

    int cnt = 0;
    time_t time1 = 0;
    time_t time2 = 0;
    time_t time3 = 0;
    vector<pair<Uns32T,Uns32T>> hindex;

    for(int i = 0; i < ts_cnt; i++) {
        RealT *vals = tBuilder.getCompactTs(i);
        size_t length = tBuilder.getTsLength(i);
        int idx = 0;
        for(int j = 0; j < length - querylen + 1; j += LSHGlobalConstants::POS_TIMESERIES) {
            cnt++;
            gettimeofday(&begin, nullptr);
            hindex.clear();
            time_t t = lshHash.getIndex(Rindex, range_L, (RealT*)&(vals[j]), querylen, hindex);
            time3 += t;
            gettimeofday(&end, nullptr);
            time1 += (end.tv_sec - begin.tv_sec)*1000000 + (end.tv_usec - begin.tv_usec);

            gettimeofday(&begin, nullptr);
            for(int l = 0; l < hindex.size(); l++) {
                HashIndex& hashIndex = LSHManager::getInstance()->getHashIndex(Rindex, range_L.first + l);
                hashIndex.put(hindex[l].first, hindex[l].second, (LSHGlobalConstants::NUM_TIMESERIES_POINTS)*i + idx);
            }
            gettimeofday(&end, nullptr);
            time2 += (end.tv_sec - begin.tv_sec)*1000000 + (end.tv_usec - begin.tv_usec);
            idx++;
        }
    }

    cout << "AllTimeSeries size=" << ts_cnt << endl;
    cout << "getIndex " << time1 << " microseconds" << endl;
    cout << "getIndex pre-processing " << time3 << " microseconds" << endl;
    cout << "hashIndex.put " << time2 << " microseconds" << endl;
}
The proceedings of Lok Sabha were on Wednesday adjourned for nearly 50 minutes during Question Hour amid slogan shouting by SP and TMC members over different issues. Immediately after the House paid obituary tributes to former member Kunji Lal, Dharmendra Yadav (SP), whose forehead was seen bandaged, raised the issue of the lathicharge by Uttar Pradesh Police on SP workers in Allahabad yesterday. The Question Hour lasted barely five minutes. A peeved Speaker adjourned the House for nearly 50 minutes, till noon, as the slogan shouting continued. Samajwadi Party workers clashed with police in several parts of Uttar Pradesh on Tuesday after the state government prevented their president Akhilesh Yadav from flying to Allahabad on the grounds of law and order. Protests broke out in Allahabad, Jaunpur, Jhansi, Kannauj, Balrampur, Jalaun, Azamgarh and Gorakhpur, among other places, where SP supporters smashed windscreens of vehicles and clashed with the police. Yadav said Tuesday he was stopped by authorities at the Lucknow airport in a bid to prevent him from visiting Allahabad, triggering outrage by party lawmakers in the state legislature and workers outside the airport.
AME Church in Charleston, S.C. The FBI and ATF are helping local authorities investigate a string of recent suspicious fires at black churches in the South. The church fires broke out in four states: Tennessee, South Carolina, North Carolina and Georgia. Three of the four were determined to be arson, and the other is under investigation, Buzzfeed.com reports. "They're being investigated to determine who is responsible and what motives are behind them," said FBI spokesperson Paul Bresson. "I'm not sure there is any reason to link them together at this point." Officials were investigating a fifth fire in Elyria, Ohio. The fires follow the deadly, racially motivated shooting by Dylann Roof at a black church in Charleston, South Carolina. Posted: 6/29/15 at 9:59 AM under News Story.
package com.ocaml.ide.console; import com.intellij.navigation.ItemPresentation; import com.intellij.openapi.util.Pair; import com.ocaml.OCamlBaseTest; import com.ocaml.ide.console.debug.OCamlREPLOutputParser; import com.ocaml.ide.console.debug.groups.TreeElementGroupKind; import com.ocaml.ide.console.debug.groups.elements.*; import com.ocaml.sdk.repl.OCamlREPLConstants; import org.jetbrains.annotations.NotNull; import org.junit.Test; import java.util.List; @SuppressWarnings("JUnit4AnnotatedMethodInJUnit3TestCase") public class OCamlREPLOutputParserTest extends OCamlBaseTest { private void assertVariable(String message, String expectedValue, String expectedText, String expectedLocation) { List<Pair<OCamlTreeElement, TreeElementGroupKind>> r = assertResult(message); assertSize(1, r); Pair<OCamlTreeElement, TreeElementGroupKind> e = r.get(0); assertElement(e, expectedValue, expectedText, expectedLocation, OCamlVariableElement.class); } private void assertFunction(String message, String expectedText, String expectedLocation) { List<Pair<OCamlTreeElement, TreeElementGroupKind>> r = assertResult(message); assertSize(1, r); Pair<OCamlTreeElement, TreeElementGroupKind> e = r.get(0); assertElement(e, OCamlREPLConstants.FUN, expectedText, expectedLocation, OCamlFunctionElement.class); } private void assertType(String message, String expectedValue, String expectedText) { List<Pair<OCamlTreeElement, TreeElementGroupKind>> r = assertResult(message); assertSize(1, r); Pair<OCamlTreeElement, TreeElementGroupKind> e = r.get(0); assertElement(e, expectedValue, expectedText, null, OCamlTypeElement.class); } private void assertException(String message, String expectedText) { List<Pair<OCamlTreeElement, TreeElementGroupKind>> r = assertResult(message); assertSize(1, r); Pair<OCamlTreeElement, TreeElementGroupKind> e = r.get(0); assertElement(e, null, expectedText, null, OCamlExceptionElement.class); } private @NotNull List<Pair<OCamlTreeElement, TreeElementGroupKind>> 
assertResult(String message) { // get the result List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = OCamlREPLOutputParser.parse(message); assertNotNull(res); return res; } private void assertVariableElement(@NotNull Pair<OCamlTreeElement, TreeElementGroupKind> parse, String expectedValue, String expectedText, String expectedLocation) { assertElement(parse, expectedValue, expectedText, expectedLocation, OCamlVariableElement.class); } private void assertFunctionElement(@NotNull Pair<OCamlTreeElement, TreeElementGroupKind> parse, String expectedText, String expectedLocation) { assertElement(parse, OCamlREPLConstants.FUN, expectedText, expectedLocation, OCamlFunctionElement.class); } private void assertModuleElement(@NotNull Pair<OCamlTreeElement, TreeElementGroupKind> parse, String expectedText) { assertElement(parse, null, expectedText, null, OCamlModuleElement.class); } private <T> void assertElement(@NotNull Pair<OCamlTreeElement, TreeElementGroupKind> parse, String expectedValue, String expectedText, String expectedLocation, Class<T> aClass) { // get the string OCamlTreeElement e = parse.first; assertInstanceOf(e, aClass); if (expectedValue == null) { assertTrue(e.isValueNull()); } else { assertEquals(expectedValue, e.getValue()); } ItemPresentation presentation = e.getPresentation(); assertEquals(expectedText, presentation.getPresentableText()); assertEquals(expectedLocation, presentation.getLocationString()); } @Test public void testSimpleVariable() { assertVariable( "val hw : string = \"Hello, World!@.\"", "\"Hello, World!@.\"", "hw = \"Hello, World!@.\"", "string"); } @Test public void testVariableList() { assertVariable("val l : int list = [3; 4; 5]", "[3; 4; 5]", "l = [3; 4; 5]", "int list"); } @Test public void testVariableConstructor() { assertVariable("val t : nucleotide option = Some A", "Some A", "t = Some A", "nucleotide option"); } @Test public void testWithNewLine() { assertVariable("val x : int\n= 5", "5", "x = 5", "int"); } @Test public void 
testReallyLongVariable() { assertVariable("val big_list : int list =\n" + "[3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3;\n" + "4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4;\n" + "5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5;\n" + "3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3;\n" + "4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4;\n" + "5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5;\n" + "3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3;\n" + "4; 5; 3; 4; 5]", "[3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; " + "4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; " + "5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; " + "3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; " + "4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; " + "5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; " + "3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; 4; 5; 3; " + "4; 5; 3; 4; 5]", "big_list = [3; 4; 5; ...; 3; 4; 5]", "int list"); } @Test public void testVariableAnd() { List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = assertResult("val x : int = 5\n" + "val y : int = 3"); assertSize(2, res); assertVariableElement(res.get(0), "5", "x = 5", "int"); assertVariableElement(res.get(1), "3", "y = 3", "int"); } @Test public void testFunction() { assertFunction("val f1 : 'a -> int = <fun>", "f1 = <fun>", "'a -> int"); } @Test public void testFunctionWithLongType() { assertFunction("val f2 : int -> int -> int -> int -> int -> int = <fun>", "f2 = <fun>", "int -> int -> int -> int -> int -> int"); } @Test public void testFunctionWithNewLine() { assertFunction("val f3 : float -> ('a -> float) -> 'a -> float -> float -> float -> 
float =\n" + "<fun>", "f3 = <fun>", "float -> ('a -> float) -> 'a -> float -> float -> float -> float"); } @Test public void testFunctionAnd() { List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = assertResult("val f4 : 'a -> int = <fun>\n" + "val f5 : 'a -> int = <fun>"); assertSize(2, res); assertFunctionElement(res.get(0), "f4 = <fun>", "'a -> int"); assertFunctionElement(res.get(1), "f5 = <fun>", "'a -> int"); } @Test public void testFunctionAndVariable() { List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = assertResult("val f6 : 'a -> int = <fun>\n" + "val v : int = 5"); assertSize(2, res); assertFunctionElement(res.get(0), "f6 = <fun>", "'a -> int"); assertVariableElement(res.get(1), "5", "v = 5", "int"); } @Test public void testFunctionWithLabels() { assertFunction("val f7 : x:int -> y:int -> int = <fun>", "f7 = <fun>", "x:int -> y:int -> int"); } @Test public void testSimpleType() { assertType("type t", null, "t"); } @Test public void testType() { assertType("type nucleotide = A | C | G | T", "A | C | G | T", "nucleotide = A | C | G | T"); } @Test public void testLongType() { assertType("type acide =\n" + "Ala\n" + "| Arg\n" + "| Asn\n" + "| Asp\n" + "| Cys\n" + "| Glu\n" + "| Gln\n" + "| Gly\n" + "| His\n" + "| Ile\n" + "| Leu\n" + "| Lys\n" + "| Phe\n" + "| Pro\n" + "| Ser\n" + "| Thr\n" + "| Trp\n" + "| Tyr\n" + "| Val\n" + "| START\n" + "| STOP", "Ala | Arg | Asn | Asp | Cys | Glu | Gln | Gly | His | Ile | Leu | Lys | Phe | Pro | Ser | Thr | Trp | Tyr | Val | START | STOP", "acide = Ala | Arg ...ART | STOP"); } @Test public void testSimpleException() { assertException("exception E1", "E1"); } @Test public void testException() { assertException("exception E2 of int * int", "E2 of int * int"); } @Test public void testEmptyModule() { List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = assertResult("module X1 : sig end"); assertModuleElement(res.get(0), "X1"); } @Test public void testSimpleModule() { List<Pair<OCamlTreeElement, 
TreeElementGroupKind>> res = assertResult("module X2 : sig type t = int val compare : 'a -> 'a -> int end"); assertModuleElement(res.get(0), "X2"); } @Test public void testModule() { String module = "module My_Set :\n" + "sig\n" + "type elt = X2.t\n" + "type t = Set.Make(X2).t\n" + "val empty : t\n" + "val is_empty : t -> bool\n" + "val mem : elt -> t -> bool\n" + "val add : elt -> t -> t\n" + "end"; List<Pair<OCamlTreeElement, TreeElementGroupKind>> res = assertResult(module); assertModuleElement(res.get(0), "My_Set"); // assertFunctionElement(res.get(2), "empty", "t"); // assertFunctionElement(res.get(3), "is_empty", "t -> bool"); // assertFunctionElement(res.get(4), "mem", "elt -> t -> bool"); // assertFunctionElement(res.get(5), "add", "elt -> t -> t"); } }
COVID-19 management: The vaccination drive in India

Objective: We undertook this study to present a comprehensive overview of COVID-19 related measures, largely centred around the development of vaccination related policies, their implementation and the challenges faced in the vaccination drive in India.

Methods: A targeted review of the literature was conducted to collect relevant data from official government documents, national as well as international databases, media reports and published research articles. The data were summarized to assess the Indian government's vaccination campaign and its outcomes as a response to the COVID-19 pandemic.

Results: The five-point strategy adopted by the government of India was COVID appropriate behaviour, test, track, treat and vaccinate. With respect to vaccination, there were periodic shifts in policy in terms of eligible beneficiaries, procurement and distribution plans, import and export strategy, involvement of the private sector, and use of technology. The government utilized technology to facilitate vaccination for the beneficiaries and to monitor vaccination coverage.

Conclusion: The monopoly of the central government in vaccine procurement resulted in bulk orders at low prices. However, the implementation of the liberalized policy led to differential pricing and delayed achievement of set targets. The population's preference for free vaccines and the low profit margins for the private sector due to price caps resulted in a limited contribution from the country's dominant private health sector. A wavering pattern was observed in vaccination coverage, related mainly to vaccine availability and hesitancy. The campaign will require consistent monitoring for timely identification of bottlenecks in this lifesaving initiative.

Introduction

The early instances of coronavirus disease 2019 (COVID-19) were reported as clusters of pneumonia of unknown aetiology from Wuhan city of China in December 2019.
The spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was exceptionally quick, with more than 150 countries affected by March 2020. This alarming magnitude and severity led to the declaration of the disease as a pandemic by the World Health Organization on 11th March 2020. India recorded its first case on 30th January 2020. The initial spread was limited to international travellers and their contacts until March 2020. India witnessed the first peak of COVID-19 in September 2020, after which there was a continuous decline until the end of 2020. Subsequently, India was inflicted with the devastating second wave in March 2021, which had a multi-dimensional effect that exacerbated inequalities in the country. India is home to over 1.3 billion people, accommodating wide diversities in terms of ethnicity, religious traditions, languages, geographic regions, and social stratifications. Being the second most populous country in the world, it accounts for 18% of the world's population. Around 40% of India's population is below the age of 18 years. The above-60-years cohort makes up 8.6% of the population, and the age dependency ratio in the country is 49%. 65% of the population is concentrated in rural areas, and the population density is estimated to be 464 people per square kilometre. The sex ratio is 943 females per 1000 males. The per capita gross domestic product (GDP) in 2020 was US$ 1900.7. Economic disparities are prevalent, with an estimated 22% of the population (26% in rural India and 14% in urban India) living below the poverty line. India has had a tradition of centralized planning and policy making and decentralized implementation, but over the years it has adopted the road of fiscal federalism and decentralized decision making. A mixed healthcare system has been established in India, with the involvement of both the public and private sectors.
The Indian public health system has a three-tier structure comprising sub-centres and primary health centres (more recently known as Health and Wellness Centres) for primary healthcare, community health centres for secondary healthcare, and district hospitals and medical colleges for tertiary healthcare services. The services provided in public health facilities are largely government funded and, due to a low investment of 1.28% of GDP, there are persistent availability and accessibility issues. The private sector has therefore emerged as a dominant stakeholder with 62% of Indian health infrastructure, and accounts for 70% of treatment care. Prior to the pandemic, India had a total of 43,486 private hospitals, 1.18 million beds, 59,264 intensive care units (ICUs), and 29,631 ventilators in the private sector. On the other side, in the public health system, there were 25,778 public hospitals, 713,986 beds, 35,700 ICUs, and 17,850 ventilators. The substantial reliance on private provisioning of care along with an underfunded public healthcare system has resulted in high out-of-pocket payments and intensified social and economic inequities. The COVID-19 pandemic necessitated a robust public health strategy and strengthening of weak links in the health system. Accordingly, a series of policy measures were introduced over the year, based on the case load and the health system's capacity at different timepoints. The five-point strategy adopted by the government of India was "COVID-19 appropriate behaviour, test, track, treat and vaccinate". In the initial months, various travel advisories were issued to regulate international as well as domestic travel, including universal screening of passengers from all international flights followed by mandatory quarantine. On 25th March, a 21-day nationwide lockdown was announced by the government of India, which inhibited movement completely and led to the suspension of nearly all non-essential services.
Subsequently, this lockdown was extended until May 2020, and states were ordered to ensure strict enforcement under the Disaster Management Act, 2005. Phased reopening of the country started from June 2020. In addition to lockdowns and travel restrictions, measures were taken to strengthen the health system response in terms of testing, contact tracing, treatment, as well as vaccination. Since the beginning of the pandemic, the development of a safe and efficacious vaccine against COVID-19 has been a global priority. Under normal circumstances, development of vaccines may take multiple years, but enhanced global cooperation, earmarked funding, existing vaccine technology, accelerated regulatory processes and operational innovation led to the launch of vaccines in less than a year. In this paper, we discuss in detail the implementation of various COVID-19 related policies adopted by the Indian government, with special focus on the vaccination drive. Firstly, we provide a comprehensive overview of policy measures adopted by the Indian government to enhance testing and tracking of infected individuals, ensure access to required treatment, and promote COVID-19 appropriate behaviours. Secondly, we map the course of the changing vaccination policy during the pandemic, besides covering the development of vaccines, roll-out of the programme, vaccine coverage and the role of the private sector in augmenting the vaccination drive till March 2022. Finally, we discuss the potential factors that influenced the COVID-19 vaccination programme and highlight certain aspects which were crucial to making the vaccination campaign effective in India.

Methodology

A chronological investigation was carried out to outline the trajectory of COVID-19 and the policy responses to contain the disease and mitigate its effects in India. Additionally, we conducted a review of scientific journal articles in PubMed with a targeted search strategy.
The search terms included "COVID-19", "corona virus", "coronavirus", "SARS-CoV-2", "COVID", "vaccination", "immunization", "immunisation", "vaccination policies", "vaccination strategies", "vaccine procurement", "vaccine pricing policies" and "India" (Supplementary file S1). The search was restricted to articles from 2019 to 2021 and to published literature in the English language. A total of 798 articles were screened based on the PICO strategy. Articles on biomedical research on COVID-19 vaccines, and articles presenting information on vaccination from countries other than India, were excluded. After full text screening, 12 articles which met the inclusion criteria were analysed. The references from the included articles were also screened to present a comprehensive narrative analysis of the COVID-19 vaccination strategies adopted by India.

Overview of COVID-19 related policy measures taken by the Indian government

The COVID-19 outbreak emerged as an unprecedented challenge for governments, communities as well as individuals. The primary responses to the pandemic were the development of policies based on recommendations from experts and the implementation of standardized measures. The measures taken by the Indian government to facilitate testing, tracking and treatment of COVID-19 cases and to ensure COVID appropriate behaviour are delineated below.

Testing strategy and upscale

At the start of the pandemic, there was only one designated COVID-19 testing laboratory in the country, and testing was restricted to people with a history of international travel, symptomatic and asymptomatic high-risk contacts of positive patients, hospitalized patients with severe acute respiratory infection (SARI) and symptomatic health workers. On 21st March 2020, guidelines were formulated for COVID-19 testing at private pathology laboratories with a price cap at ₹ 4500 (US$ 60.75).
In the following months, cartridge-based nucleic acid amplification testing (CBNAAT) and TrueNat were approved for COVID-19 detection (Fig. 1). A fast-tracked mechanism for validation of diagnostic materials was initiated by ICMR. Initiatives were also taken to increase the testing capacity of government and private medical colleges under the mentorship of leading virological/medical institutes. The number of laboratories increased consistently from 1 in January to 669 in May, 1614 in September, and 2172 by the end of 2020. During the second wave of COVID-19 in May 2021, there were 2504 testing laboratories across India. In view of the surge in COVID-19 cases, ICMR issued an advisory for optimized use of real-time reverse transcription polymerase chain reaction (RT-PCR) by increasing the availability of rapid antigen tests (RAT) to avert shortages.

Surveillance measures (track)

Surveillance of patients with SARI and influenza like illnesses (ILI) began during the early phase of the pandemic as a containment measure. The Integrated Disease Surveillance Programme (IDSP) issued an advisory for surveillance of travel related cases and contacts of suspects on 17th January 2020. A cluster containment strategy was delineated in April 2020, which laid down the concept of containment and buffer zones and strict perimeter control. Additionally, areas were classified as red, orange, and green zones on the basis of total active cases and infection rate, and restrictions were implemented accordingly in the three zones. Aarogya Setu, a web-based application for public awareness, was launched for contact tracing, mapping of likely hotspots and dissemination of information regarding COVID-19. During the second wave, decentralised, state-driven containment frameworks were implemented to deal with the alarming COVID-19 surge.
Health system preparedness (treatment and isolation measures)

The Government of India announced ₹ 150 billion (US$ 2.02 billion) for "India COVID-19 Emergency Response & Health System Preparedness" in April 2020. These funds were aimed to create a new three-tier facility arrangement for COVID-19 management; support, train and protect the healthcare workforce; expand diagnostic facilities; deploy referral transport; initiate health promotion and risk communication activities; and enhance surveillance. The second instalment of ₹ 8.9 billion (US$ 120 million) with similar objectives was released in August 2020. In July 2021, additional funding of ₹ 231.23 billion (US$ 3.12 billion) was announced for enhancing capacity in terms of availability of beds, liquid medical oxygen tanks and ambulances, creation of paediatric units, strengthening of tertiary care centres, and IT interventions. An advisory for the three-tier health facility arrangement for appropriate management of COVID-19 was issued in April 2020. Hostels, schools, hotels, stadiums, railway coaches etc. were turned into COVID care centres (CCC) for management of mild cases. The second category, dedicated COVID health centres (DCHC) for moderate cases, were hospitals with oxygen support. Dedicated COVID hospitals (DCH), equipped with ICUs, ventilators and beds with oxygen supply, provided services for severe COVID-19 cases. Therefore, along with utilization of existing secondary and tertiary care institutions, new institutions were constructed or acquired, and by June 2020, 958 DCHs with 1,67,883 isolation beds, 21,614 ICU beds and 73,469 oxygen supported beds, and 2,313 DCHCs with 1,33,037 isolation beds, 10,748 ICU beds and 46,635 oxygen supported beds had been operationalised. The indigenous manufacturing capacity of personal protective equipment (PPE) was enhanced by mid-May 2020. A daily production of three hundred thousand PPE kits and N95 masks was reported by the end of May 2020.
Export restrictions were imposed on certain active pharmaceutical ingredients, masks, and sanitizers from March to June 2020 to ensure local availability.

Health promotion (COVID appropriate behaviour)

Awareness campaigns were undertaken consistently to ensure COVID-19 appropriate behaviour. Use of masks outside the home was made mandatory in April 2020. Ceilings on social gatherings were imposed to avoid crowding in public places. Niti Aayog launched a behaviour change campaign on 25th June 2020. Short message services and caller tunes were also used, through telecommunication service providers, to spread awareness regarding appropriate behaviours. A dedicated helpline was introduced in mid-March to educate the population on COVID-19 related queries on a regular basis. Around 3.5 million calls were received on the helpline until July 2020. On 14th October, Jan Andolan, a public movement, was started to urge people to follow appropriate behaviours during the festive season and winters. During the second wave in May, emphasis was placed on reiterating the importance of mask use, social distancing, sanitation, and ventilation for containment of the disease. Fig. 1 provides a comprehensive view of the COVID-19 related strategies adopted in India.

The vaccination drive in India: from development to distribution

The commencement of efforts for vaccine development overlapped with the first wave of the pandemic in the country. A task force for focussed research on a COVID-19 vaccine was constituted in April 2020 to promote the development of vaccines. Various pharmaceutical companies such as Bharat Biotech (BB), Zydus Cadila, Serum Institute of India (SII) and Dr Reddy's Laboratories began vaccine clinical trials for Covaxin, ZyCoV-D, Covishield and Sputnik V during the course of the pandemic.
Emergency use authorization for Covishield and Covaxin was granted in January 2021, followed by approvals for Sputnik V in April, mRNA-1273 (Moderna) in the last week of June, and Johnson and Johnson's single dose vaccine and Zydus Cadila's ZyCoV-D vaccine in August (Fig. 2). By the end of August 2021, a few vaccines such as Covovax (SII), Corbevax (Biological E Limited) and BBV154 (BB) were in clinical trial phases, and in early 2022, emergency use approvals were granted to the Corbevax, Covovax, and Sputnik Light vaccines. The vaccine development in India is detailed in Fig. 2.

Vaccination procurement and pricing policy

The National Expert Group on Vaccine Administration for COVID-19 (NEGVAC) was constituted in August 2020 to prepare the blueprint of the vaccination roll-out. Operational guidelines for COVID-19 vaccination were laid down on 28th December 2020, followed by a 10-day dry run. Subsequently, the government of India released an advisory on COVID-19 vaccination which detailed precautions, contraindications, and a comparison of the two approved vaccines. The inoculation programme was launched on 16th January across the country in three phases. The central and state governments played important roles in COVID-19 vaccination. The central government was responsible for formulation of policies and guidelines, emergency use approvals of vaccines, provision of financial support to vaccine manufacturers for expansion of production capacity, financing, procurement, and distribution of vaccines, and monitoring of the vaccination programme. The role of the states was to identify vaccination sites, undertake logistic management, train the human resources, and update daily vaccination related data. Both levels of government organized awareness campaigns to spread the right information for uptake of the vaccination drive. The central government took sole responsibility for vaccine purchase in the first 3.5 months of the initiative.
However, in May, there was a shift in procurement policy to make pricing and procurement flexible and to incentivize manufacturers to scale up production. The "Liberalized pricing and accelerated National COVID-19 vaccination strategy" was announced by the central government, expanding the role of the private sector: vaccine manufacturers were mandated to earmark 50% of their monthly vaccine supplies for the government of India, while the remaining 50% of the doses could be supplied to state governments, private hospitals, and industrial establishments at pre-declared prices (Fig. 3). The government justified the liberalized strategy as a medium to create parallel drives in a decentralized manner, wherein states and private health institutions would focus on procurement of vaccines for the 18-44 years old cohort while the central supply would facilitate vaccination of priority groups (the above 45 years old population and frontline workers). The allocation of vaccine doses to the states/UTs was based on the criteria of infection rate, speed of vaccination and extent of wastage of vaccines. First, the infection rate was computed on the basis of active case load to give priority to states with a higher viral load. Second, the parameter evaluating the performance of states in the vaccination programme was enumerated through the seven-day average of vaccine consumption. This would act as an incentive for states to accelerate the vaccination drive and promote efficiency. The extent of vaccine wastage was chosen as a unique parameter to minimize vaccine wastage, since it was noted that by April 11, India had wasted 4.6 million doses, a significant number considering the constraints in vaccine availability. The CoWIN application was used to monitor utilization, wastage, and coverage of COVID-19 vaccination. Many states reported hurdles in vaccination due to the amended policy, and the Supreme Court of India also intervened after reports of vaccine scarcity in the country.
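The three allocation criteria above (active case load, seven-day average consumption, and wastage) lend themselves to a simple illustration. The sketch below is hypothetical: the source describes the criteria only qualitatively, so the weights, the scoring formula, and the state figures are invented for demonstration and do not reflect the actual government formula.

```python
def allocation_score(active_cases, seven_day_avg_doses, wastage_fraction,
                     w_cases=0.4, w_speed=0.4, w_wastage=0.2):
    """Illustrative score: higher active load and faster consumption raise a
    state's share, while wastage (as a fraction of doses used) lowers it.
    Weights are arbitrary assumptions for this sketch."""
    return (w_cases * active_cases
            + w_speed * seven_day_avg_doses
            - w_wastage * wastage_fraction * seven_day_avg_doses)

def allocate(pool, states):
    """Split `pool` doses across states in proportion to their scores."""
    scores = {name: max(allocation_score(*params), 0.0)
              for name, params in states.items()}
    total = sum(scores.values()) or 1.0
    return {name: round(pool * s / total) for name, s in scores.items()}

states = {
    # name: (active_cases, seven_day_avg_doses, wastage_fraction)
    "State A": (50_000, 20_000, 0.02),   # high case load, low wastage
    "State B": (10_000, 30_000, 0.10),   # fast drive, but 10% wastage
}
print(allocate(1_000_000, states))
```

Under this toy scoring, State A's higher case load outweighs State B's faster drive, so it receives the larger share; raising a state's wastage fraction shrinks its allocation, mirroring the incentive the policy intended.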
The government tweaked the policy on 21st June, reverting to a more centralized approach to procurement wherein 75% of vaccines would be procured by the central government and provided to the states at zero cost. The element of 25% procurement at pre-declared prices by the private sector was retained. However, the low contribution of the commercial sector to the drive led, in the first week of August, to an announcement that vaccines would be supplied to the private sector as per volume of demand, without the restriction of reserving 25% of vaccines for commercial units (Fig. 3).

COVID-19 vaccine financing

The national budget 2021-22 allocated ₹350 billion (US$ 4.7 billion) for COVID-19 vaccination. The government reported spending ₹80.71 billion (US$ 1.08 billion) on vaccine purchases and ₹16.54 billion (US$ 223 million) on operational costs until July 2021, and ₹196.75 billion (US$ 2.7 billion) by December 2021. Table 1 presents the timeline and volume of vaccine orders placed by the central government. A total of 42 million doses (against an expected 160 million) were procured directly by state governments and private institutions from May to mid-July, during the enforcement of the liberalized COVID-19 vaccination policy. Table 1 also shows that the first order was placed just 6 days before the commencement of the vaccination drive, and that no orders were placed in April and June despite continuous reports of vaccine shortage in the country from the end of March. The government embarked on various measures to increase the availability of vaccines in the nation. In April 2021, the government of India granted ₹30 billion (US$ 405 million) to SII and ₹15 billion (US$ 202.5 million) to Bharat Biotech for scale-up of domestic production capacity.
In this context, SII upscaled production of Covishield considerably, especially after the US lifted its embargo on raw materials in June, but Covaxin's production remained flat, largely owing to quality issues cited in the first few batches of the vaccine (Table 2). To augment production of Covaxin in the following months, the Ministry of Science and Technology announced the inclusion of three public sector companies, and grants were subsequently released to facilitate preparedness under Mission COVID Suraksha. Owing to these measures, production capacity for both Covishield and Covaxin had expanded by December 2021 (Table 2). To supplement domestic vaccines with imports, the 10% customs duty on imported COVID-19 vaccines was waived in April, making imports cheaper. In the same month, the government of India announced fast-tracking of emergency approvals for internationally manufactured vaccines already authorized for emergency use in the United States (US), United Kingdom (UK), or European Union (EU), and those on the World Health Organization (WHO) list of approved vaccines. In addition, these foreign-manufactured vaccines were exempted from bridge testing with effect from 1st June 2021, and quality testing norms were relaxed for them. A team was constituted on 11th June 2021 to investigate issues related to procurement of vaccines from foreign manufacturing units, including indemnity issues. These strategies were expected to result in quicker import of vaccines. On the export front, India started vaccine assistance to other countries under the "Vaccine Maitri" initiative in the fourth week of January, and by the end of March approximately 66 million doses of Covishield and Covaxin had been dispatched to 95 countries as gifts, under commercial agreements, and through the COVID-19 Vaccines Global Access Facility (COVAX).
However, the alarming rise in COVID-19 infections in March-April led to anticipation of increased domestic demand for vaccination and a temporary halt on exports in the last week of April (Supplementary file S2).

Roll out of COVID-19 vaccination programme (risk-prioritization strategy)

Countries across the globe adopted prioritization strategies for vaccination because vaccine availability fell short of population needs. NEGVAC, on similar lines, identified three priority groups on the basis of potential risk of infection and mortality and planned a phased vaccination roll-out. The first group consisted of pandemic response teams, i.e., the healthcare workforce and frontline workers, including personnel from the police, armed forces, home guards, prison staff, disaster management volunteers, civil defence organizations, municipal workers, and revenue officials engaged in surveillance and containment activities. Protecting this group, the most vulnerable to infection, was important to ensure the availability of critical services and curb the spread of the disease in the community. It was analysed that during the first COVID wave, 53% of deaths had occurred in the above-60 population and 35% in the 45-60 years age group. Therefore, the second phase was scheduled for the population above 45 years of age. It was preponed from the planned window of mid-March to 1st March due to the rising COVID-19 infection load in the country and anticipated mortality in the older age group and population with co-morbidities. The disastrous second wave, ascribed to a mutated COVID-19 virus, resulted in higher mortality in the under-45 population than the first wave and created an alarming situation in the country. Thus, the 18-44 years population was identified as the third priority group for vaccination at the end of April 2021, and vaccination was consequently initiated for the younger population.
Subsequently, the campaign was extended to lactating and pregnant women after careful investigation and approval by the National Technical Advisory Group on Immunization (NTAGI). It was approved after advocacy from the World Health Organization, recommendations from the Federation of Obstetric and Gynaecological Societies of India (FOGSI), and the release of results from an ICMR study which found higher COVID-19 infection rates among pregnant and lactating women in the second wave (28.7% vs 14.2%) and a higher fatality rate (5.7% vs 0.7%) than in the first wave. Owing to the global surge of COVID-19 cases attributed to the Omicron variant in January 2022, vaccination with Covaxin was initiated for the 15-17 years age group, and a precautionary third dose was recommended for healthcare workers, frontline workers, and senior citizens with co-morbidities. Later, in March 2022, the drive was extended to the 12-14 year cohort with Biological E's Corbevax. The timelines of the roll-out of the different vaccination phases are outlined in Fig. 4.

Vaccination coverage

The country witnessed a wavering pattern in the vaccination campaign (Fig. 5). The vaccination rate was sluggish in the initial phase due to vaccine hesitancy among healthcare and frontline workers. Hesitancy in this priority group was linked to trust issues around vaccine safety and efficacy, arising from the rapid development of the vaccines, the early emergency approval of Covaxin before the release of phase 3 clinical trial results, and missing data on adverse effects following immunization. During the pandemic, misinformation through social media fuelled scepticism towards the vaccines among healthcare and frontline workers, which also had a ripple effect on the general population.
Apart from this, the high COVID infection rate among frontline workers in the past, and the low perceived risk of contracting COVID-19 given its overall decline at the population level during the initial phase of vaccination, also led to low uptake of the initiative. A rise in the vaccination rate was observed from the first week of March to the first week of April, when the campaign was extended to the general population (Fig. 5). Thereafter, in April-May, when the pandemic was at its peak, various states flagged the issue of vaccine availability, and the vaccination rate declined. Amidst the vaccine shortage and COVID-19 upsurge, the campaign was extended to 18-44 year olds with mandatory pre-registration, but most states deferred inoculations until the second week of May due to supply shortages. The drive was marred by vaccine hesitancy in rural areas and demand-supply gaps in the initial weeks of July. The rate began to increase in the third week of July after a consistent decline, attributed to a rise in the supply of vaccines to the states. The boost in production of Covishield doses, the scrapping of the 25% reservation of vaccines for the private sector, advance visibility of vaccine availability allowing states to plan better, the high risk of infection due to the third wave predicted for September-October, and the sustained efforts of community health workers to combat vaccine hesitancy were the likely reasons behind increased accessibility to vaccines in August. It can be observed from Fig. 5 that second-dose coverage started after 4 weeks, in February, as per the protocol of a 4-6 week interval between shots. The vaccination rate for the second dose plummeted after the second week of May, when the Ministry of Health and Family Welfare, citing experience from the UK, raised the gap between the two doses of Covishield from 4-8 weeks to 12-16 weeks on 13th May.
The rate increased after the second week of June, probably due to the high number of beneficiaries due for their second dose. A rise in first doses can be seen in the first week of January, when the drive was extended to adolescents aged 15-17. Fig. 5 also depicts the trend of the precautionary dose, which was started on 10th January 2022. Analysis of vaccination across age groups showed that rates in age-appropriate groups were in harmony with the timeline of the different phases of the campaign, and that the vaccination rate improved as and when the drive was extended to new age groups (Fig. 6). It was also noted that extension of the drive to younger populations did not undermine the vaccination of senior citizens.

Role of private health sector in vaccine delivery

The government of India ensured participation of the private health sector in battling the pandemic: in the second month of COVID-19 vaccination, an advisory was issued to the states to utilize all private hospitals, including 10,000 hospitals empanelled under Ayushman Bharat and 687 hospitals under the Central Government Health Scheme, as COVID-19 vaccination centres. Private hospitals were allowed to charge a maximum of ₹250 (US$ 3.3) per person per dose until April 2021. This price was relatively low because, under the vaccination policy then in force, private hospitals purchased vaccines from the central government at a rate of ₹150 (US$ 2) per dose (Fig. 3). After implementation of the liberalized vaccine procurement and pricing policy, the private sector purchased 12.73 million doses directly from the vaccine manufacturers at higher rates: ₹600 (US$ 8) for Covishield, ₹1200 (US$ 16) for Covaxin and ₹948 (US$ 12.8) for Sputnik V (Fig. 3). This resulted in a sharp rise in the prices of the vaccines, and 8.31 million doses were administered in the commercial sector from May to mid-June.
Following the revision of the COVID-19 pricing policy at the end of June, the prices per dose of Covishield, Covaxin, and Sputnik V in private facilities were capped at ₹780 (US$ 10.5), ₹1410 (US$ 19) and ₹1145 (US$ 15.4), respectively. Only 7% of vaccinations were carried out in private vaccination centres from May to mid-July, far less than the share of vaccine supply reserved for the sector. To keep a check on private vaccine procurement, from 1st July the government of India mandated that private institutions order vaccines through the CoWIN portal. It informed private health units about the consumption-based maximum order quantity, the limit of four instalments for placing orders, and the clause requiring payment within 3 days of ordering. This initiative was expected to bring transparency to private procurement of vaccines. The low contribution of the private sector to the set vaccination targets led to the government's decision to buy the vaccines reserved under the private quota but not procured by the sector, which consequently increased the availability of vaccines in August.

Milestones in vaccination campaign

India set ambitious targets, planning to fully vaccinate the above-18 population by the end of 2021. Against an estimated target of 940 million eligible beneficiaries, 10% had been fully vaccinated and 37% had received at least one dose by the end of August. The month-wise announced targets for the nation were reviewed. The first declared target was to fully immunize 300 million people by July-August 2021, which required 600 million doses in the first 7-8 months of the year. On this basis, the daily target was estimated at 2.64 million doses for January, February, and March. On 1st April, a target of inoculating 5 million people per day was announced by the government. By the end of June, the chairman of the COVID-19 working group declared a target of 10 million vaccinations each day from mid-July.
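Two of the figures above can be checked with back-of-the-envelope arithmetic. The June-end private price caps are consistent with a commonly reported decomposition of manufacturer price plus 5% GST plus a ₹150 per-dose service charge; that decomposition is an assumption here, not stated in this article. The daily target of 2.64 million doses likewise follows from spreading 600 million doses over roughly the first 7-8 months of 2021.

```python
# Back-of-the-envelope checks on figures quoted in the text.
# GST rate and service-charge cap are an assumed (widely reported)
# decomposition of the June-end price caps, used here for illustration.

GST = 0.05          # assumed 5% GST on the manufacturer price
SERVICE_FEE = 150   # assumed per-dose service charge cap (INR)

manufacturer_price = {'Covishield': 600, 'Covaxin': 1200, 'Sputnik V': 948}
reported_cap = {'Covishield': 780, 'Covaxin': 1410, 'Sputnik V': 1145}

for vaccine, price in manufacturer_price.items():
    cap = price * (1 + GST) + SERVICE_FEE
    # Matches the reported caps to within a rupee of rounding.
    assert abs(cap - reported_cap[vaccine]) < 1, (vaccine, cap)

# Target arithmetic: 600 million doses over roughly the first 7-8 months
# of 2021 implies the ~2.64 million doses/day figure quoted above.
doses_needed = 600_000_000
daily_target = 2_640_000
days_implied = doses_needed / daily_target
assert 210 < days_implied < 240   # about 7.5 months
```

The consistency of all three caps with a single formula suggests the pricing revision was a uniform markup rule rather than per-vaccine negotiation, though that inference is the author's, not the article's.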
Fig. 7 illustrates that the initiative did not keep pace with the set targets and that leaps on various fronts are required to achieve the herculean task of 1.3 billion inoculations. Table 3 demonstrates the vaccination coverage among the different priority groups identified by the government; a positive vaccination trend was observed for all of them. To assess the implementation of the response to COVID-19 in terms of the vaccination strategy, we used the WHO Strategic Advisory Group of Experts on Immunization (SAGE) framework for allocation of COVID-19 vaccines; the findings are presented in Supplementary File S2.

Discussion

India claimed the milestone of administering more than 1.5 billion vaccine doses to its citizens by January 2022. The vaccine allocation and risk-prioritization strategies were devised efficiently from the beginning of the drive. The government allocated vaccine doses to states/UTs in the first phase of the vaccination drive on the basis of the states' health worker databases, ensuring equal respect for the contribution of health workers during the COVID-19 pandemic (Supplementary Material S2). For subsequent phases, the criteria of infection rate, speed of vaccination, and extent of vaccine wastage were announced for vaccine allocation, to promote efficiency and reduce wastage. It was debated that these indicators might lead to data manipulation, denial of vaccines to beneficiaries, or vaccine utilization beyond the stipulated time; however, considering the diversity of the vast country, the indicators have been pivotal in guiding transparent distribution of vaccines.

Effect of alterations in vaccination policies

The introduction of the liberalized vaccination policy was expected to facilitate co-operative federalism, decentralize the process, and increase efficiency, while keeping the vaccination of vulnerable groups unobstructed.
However, this strategy shifted the onus of vaccination onto states and private institutions and set up unfair competition among states, and between states and the private sector, for the purchase of vaccines. Consequently, it led to differential procurement pricing, with manufacturers quoting higher rates for states and private institutions. It derailed the vaccination drive and led to vaccine scarcity, since the states were unable to procure vaccines owing to limited procurement expertise, budgetary considerations, and the high level of internal competition for vaccines within the nation. The subsequent alterations in policy reflected the government's flexible, evidence-based approach to revising its decisions. Furthermore, the decisions on financial support for trials and production of vaccines, and the flexible import policies, are likely to improve the availability of vaccines in future.

Operational challenges in implementation of vaccination campaign

The vaccination coverage trends presented a waxing and waning pattern, with targets left unaccomplished. The availability of vaccines, the hasty extension of the roll-out to the above-18 population amidst vaccine shortage, and vaccine hesitancy acted as impediments to the success of the drive. The operational challenges faced in implementation of the nationwide drive were:

a) Late and insufficient orders of vaccines

The availability of vaccines in a nation depends on multiple factors, such as appropriate and timely procurement, strictness of regulatory approvals for vaccine use, the proportion of exports, domestic production capacity, and import policies. One of the reasons cited for the shortage of vaccines has been late and insufficient vaccine orders in India (Table 4). Other reasons for vaccine insufficiency were a natural disaster at the SII facility and the US embargo on raw materials and equipment under the Defense Production Act.
b) Global commitments

India contributed to global vaccine equity through Vaccine Maitri, the COVAX initiative, and by presenting the case for a temporary waiver of intellectual property rights for COVID-19 vaccines and patents. Cumulative exports exceeded domestic inoculations in the first three months, amid a surge in COVID-19 cases, which was cited as an explanation for the domestic shortage. The export restrictions imposed in April 2021, after the late and insufficient vaccine orders by the Indian government, diverted stocks reserved for low- and middle-income countries towards domestic vaccination targets, resulting in global allegations of injustice (Supplementary material S2).

c) Rural-urban disparity

The states of India have shown variation in vaccination rates, with the western part of the country more vaccinated than the eastern part, which includes poorer areas. Intra-state variations took the form of rural-urban disparity and capital bias, reflected in higher inoculation rates in urban areas and state capitals, probably due to misinformation, the digital as well as linguistic divide, logistical constraints such as infrastructure, supply chains, and skilled personnel, and the late start of vaccination in rural regions.

d) Gender gap in vaccination

The vaccination campaign has consistently reflected gender inequity: on 22nd July, the drive comprised 53.4% males and 46.5% females. On April 10, there was a 2% disparity between vaccinated men and women, which increased to 12% by April 24. The ratio of per-million vaccinated men to women peaked at 1.348 on May 25, after the drive was extended to the 18+ population. The gender gap could be attributed to patriarchal socio-cultural norms, gender differences in healthcare access (including mobility and decision-making capacity), and the gender divide in technological access and digital literacy.
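The per-million ratio cited above normalizes each gender's dose count by its own population, so it measures relative coverage rather than raw dose share. A minimal sketch of that computation, using hypothetical placeholder figures rather than the actual May 25 data:

```python
# Sketch of the per-million gender ratio used above. The dose counts and
# population denominators here are hypothetical placeholders, not the
# actual May 25 figures.

def per_million_ratio(doses_men, pop_men, doses_women, pop_women):
    """Ratio of vaccinated men per million men to vaccinated women per
    million women; 1.0 means parity, >1 means men are ahead."""
    men_rate = doses_men / pop_men * 1_000_000
    women_rate = doses_women / pop_women * 1_000_000
    return men_rate / women_rate

# Hypothetical example: equal populations, doses split 53.4% vs 46.5%.
ratio = per_million_ratio(53.4, 100, 46.5, 100)
```

With equal population denominators the ratio collapses to the dose share ratio; a peak of 1.348 therefore implies a larger imbalance than the 53.4%/46.5% split alone, consistent with India's slightly smaller female population.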
Vaccination myths had an additional effect in keeping women away from this public health measure. The government cited the late approval of vaccine use in pregnant and lactating women as a reason for the difference.

e) Use of digital systems for registration

The directive of pre-registration in the CoWIN system for the 18-44 years population was a source of reluctance due to frequent technical hurdles, first-come-first-served appointments, delays in receiving one-time passwords, issues with captcha submission, lack of available slots, the privacy policy, access to smartphones, high-bandwidth connectivity, digital literacy, and linguistic exclusion, with the system available in only ten languages. Owing to these issues, on 21st June the mandatory registration for the 18-44 years population was made optional, with the additional option of on-site registration for beneficiaries. Nonetheless, the utilization of CoWIN for procurement, distribution, and monitoring of vaccination has been an ambitious attempt to adopt digital technologies and use real-time data for policy planning.

f) Involvement of the private sector

The decision to provide 25% of vaccines to the private sector, which accounted for only 4-5% of total vaccination sites and had demand for no more than 10% of vaccines, also posed a threat to rational, equitable, and ethical distribution. The low performance of the private sector could be attributed to the capping of prices, which left low profit margins for private institutions, and to the population's preference for free vaccines at government institutions. From the private sector's perspective, the price capping inhibited private players from moving vaccination to tier 2 and tier 3 areas and community-based sites, which would eventually have increased vaccination rates but would have entailed higher administrative costs.
However, a scenario without price caps would have led to lower vaccination rates due to higher prices for the general population, resulting in inequity and inefficiency. From the people's perspective, a parallel drive with considerably high prices, despite the capping, expected them to pay as much as two-thirds of the total cost of the vaccination programme for 25% of vaccinations, compared with government spending for the rest of the doses. The underperformance of the private sector indicated that a successful vaccination campaign would instead require strengthening of the health service system.

g) Vaccine hesitancy

Vaccine hesitancy has been a stumbling block for every vaccination programme, and reluctance towards COVID-19 vaccines was reported among health workers as well as the general population. While there were reports of vaccine refusal from the young population, a survey commissioned by the government of India found vaccine hesitancy among 40% of people aged over 70, owing to safety concerns, mistrust, and the belief that they were too old for vaccination. The reported reasons for hesitancy included lack of trust in the safety and efficacy of vaccines developed in so short a time; fear of side effects such as infertility, effects on menstrual cycles, clotting, and death; the inconvenience of registration on the CoWIN application; loss of productive work due to side effects; and a perceived low risk of COVID-19 infection. Issues related to travel to the vaccination centres and waiting time at the centres (since vaccinators would open a vial only after the required number of beneficiaries were present, to avoid wastage of vaccines) also acted as barriers to vaccination.
A study concluded that vaccine acceptance increased significantly, from 38% in mid-January to 77% in the first week of April 2021, after the second COVID-19 wave in the country; however, the situation did not change considerably in rural areas or among marginalized populations of lower socio-economic status in various states. To deal with hesitancy, local authorities and cultural leaders were engaged to counter the narrative of misinformation and mobilize people to accept vaccines as a life-saving measure against COVID-19 in various villages of India. Chhattisgarh innovatively utilized folk songs to spread accurate information about vaccines, Punjab appointed celebrities as vaccination ambassadors, and Jharkhand relied on community-based organizations, which worked with local women and religious leaders to conduct successful vaccination programmes. A few districts in Chhattisgarh also engaged local youth as well as elderly women for community mobilization and vaccine-related awareness. Another initiative to tackle the issue was the spread of accurate information by community health workers. A village named Janefal became a model for the nation after achieving 100% vaccination of its eligible population in three months. The village administration created a taskforce comprising a medical officer, health workers, police personnel, and village local body members. The taskforce members took the vaccine in front of community members, made advocacy videos, and undertook regular home visits to convince residents after mapping the eligible population. After realizing that vaccine hesitancy was rooted in fear of hospitals, the administration started conducting camps in the village, which led to better uptake of vaccination. Additionally, to tackle the issue of online registrations for vaccination, the taskforce collected the identity cards of all eligible residents and registered them.
Thus, a collective effort with a bottom-up approach and root cause analysis by the local leadership led to complete vaccination in the village and had a ripple effect in neighbouring villages. The success story points to the need to move away from hospital-based vaccination towards satellite vaccination centres closer to village inhabitants. Moreover, mobile vaccination vans equipped with basic infrastructure, vaccine storage, a vaccinator, and medical personnel have been deployed in the states of Maharashtra, Telangana, Karnataka, Kerala, and Delhi, with the help of civil society and private organizations, to reach inaccessible areas. A few regions also tried using disincentives and incentives to address vaccine hesitancy, such as making vaccine certificates mandatory to acquire jobs in government schemes, obtain rations from the public distribution system, and access other government social security schemes. These coercive measures were, however, unacceptable from the perspectives of equity, rights, and social justice; rather than filling information gaps, they fed the misconception that the government was trying to meet targets by unfair means. Such scepticism towards vaccination and the country's health system could further lead to fake vaccination certificates and the development of illegal markets for such certificates. Therefore, it is essential to work closely with communities and adopt a bottom-up, inclusive approach to tackle hesitancy. While there is no "one size fits all" approach to resistance to vaccination, it is essential to contextualize successful strategies by addressing the determinants of hesitancy. A faster vaccination drive would require strong context-specific information campaigns, financial and non-financial incentives, and easy accessibility to vaccines, with an enhanced focus on rural areas and on vaccination among women.
The review presents a comprehensive view of the strategies implemented by the Indian government but is limited in its attempt to critically evaluate the effects of the COVID-19 outbreak and related interventions such as travel restrictions, lockdowns, and health systems strengthening strategies. Another limitation of the study is that it does not present detailed data on vaccine trials or the action of vaccines on the human body, being focussed instead on the vaccination strategies adopted by the Indian government.

Conclusion

The global impact of COVID-19 made it an international priority, and all countries tried to tackle the disastrous situation to the best of their capacities. Vaccination is an important strategy with the potential to avert the dire consequences of the disease and reduce mortality. The central government played an important role in planning, procurement, distribution, and price setting of the vaccines, while the state governments focused on implementation of the campaign. The central government's monopoly on vaccine procurement resulted in bulk orders at low prices. However, the implementation of the liberalized policy led to differential pricing, with vaccine manufacturers quoting higher rates for state governments and private health units while maintaining comparatively low rates for the central government. The subsequent revision of the vaccination strategy, which enlarged the procurement role of the central government, accelerated the drive. The dependence on the private sector for delivery of vaccines did not contribute significantly to fast-tracking the programme. The risk-based prioritization strategy adopted by the government streamlined the campaign and averted a chaotic situation amid vaccine scarcity. However, the sudden extension of the programme to the 18-44 years population despite system readiness issues was debated.
India was in a privileged situation, possessing the infrastructure and experienced human resources, built over decades, required to deliver successful immunizations. But the availability of vaccines to battle the COVID-19 waves and vaccine hesitancy among citizens posed consistent bottlenecks to achievement of the set targets. The government utilized technology to maintain a database of vaccination in terms of procurement, distribution, utilization, and monitoring of coverage. Citizens also used it for registration, locating the nearest vaccination centres, and generating certificates, but this technology leverage needs to be integrated with a strong privacy policy, digital literacy, and linguistic inclusion to promote access to vaccination. The goal of inoculating the entire population will require a proactive approach to ensure availability, affordability, and accessibility of vaccines. These initiatives will have to be coupled with constant, targeted awareness campaigns and a bottom-up approach to dispel hesitancy and raise demand for vaccines. National as well as global studies, along with transparent sharing of data on procurement, distribution, and vaccination coverage stratified by region and socio-demographic factors, will help to evaluate bottlenecks in the programme at a nascent stage, design context-based interventions to fortify the campaign, and achieve equity and efficiency in this life-saving initiative.

Public Interest Summary

The article presents a comprehensive review of COVID-19-related policies, with a focus on vaccination for containment of the pandemic in India. The five-point strategy adopted by the government of India was "COVID-appropriate behaviour, test, track, treat and vaccinate".
There have been periodic shifts in COVID-19 vaccination policies in terms of eligible beneficiaries, procurement and distribution plans, import and export strategy, involvement of the private sector, and use of technology. The central government's monopoly on vaccine procurement resulted in bulk orders at low prices. However, the implementation of the liberalized policy, which divided responsibility for vaccine procurement among the national government, state governments, and the private sector, led to differential pricing and delayed achievement of targets. A wavering pattern was observed in vaccination coverage, related mainly to vaccine availability and hesitancy. The campaign will require consistent monitoring for timely identification of bottlenecks in this life-saving initiative.

Funding

None.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.hlpt.2022.100636.
// Note: running a sync task on a pooled thread from the EDT can lead to deadlock
// if the pooled thread tries to invokeAndWait.
private boolean checkIfForceDirectExecNeeded() {
  if (isSync && EDT.isCurrentThreadEdt() && !ApplicationManager.getApplication().isWriteThread()) {
    throw new IllegalStateException("Running sync tasks on pure EDT (w/o IW lock) is dangerous for several reasons.");
  }
  if (!isSync && isModal && EDT.isCurrentThreadEdt()) {
    throw new IllegalStateException("Running async modal tasks from EDT is impossible: modal implies sync dialog show + polling events");
  }

  boolean forceDirectExec = isSync && ApplicationManager.getApplication().isDispatchThread()
                            && (ApplicationManager.getApplication().isWriteAccessAllowed() || !isModal);
  if (forceDirectExec) {
    String reason = ApplicationManager.getApplication().isWriteAccessAllowed()
                    ? "inside Write Action" : "not modal execution";
    String failedConstraints = "";
    if (isModal) failedConstraints += "Use Modal execution; ";
    if (myThreadToUse == ThreadToUse.POOLED) failedConstraints += "Use pooled thread; ";
    failedConstraints = StringUtil.defaultIfEmpty(failedConstraints, "none");
    Logger.getInstance(ProgressRunner.class)
      .warn("Forced to sync exec on EDT. Reason: " + reason + ". Failed constraints: " + failedConstraints,
            new Throwable());
  }
  return forceDirectExec;
}
Qasim al-Khatib, member of the Syrian National Coalition (SNC), speaking to ARA News in Tel Abyad, Raqqa province.

ARA News

Tel Abyad, Syria – Qasim al-Khatib, a member of the Syrian National Coalition (SNC), has visited the city of Tel Abyad (Gire Spi) in Raqqa province, northeastern Syria, after militants of the Islamic State (IS/ISIS) were expelled from the city by the joint forces. Al-Khatib headed a delegation from the SNC to investigate allegations about the displacement of Arab civilians from Tel Abyad at the hands of Kurdish forces. In an exclusive interview with ARA News, al-Khatib said: “As a member of the Syrian National Coalition (SNC) and as a citizen from Tel Abyad, it’s my pleasure to congratulate my people on the liberation of the city from Daesh (IS) terrorists.” “I am honored to be among the first arrivals to the city from the political opposition after the IS defeat,” he said. Al-Khatib denied the widespread accusations made by some Arab opposition factions in Syria against the Kurdish forces concerning the forced displacement of Arab inhabitants from Tel Abyad. “We have a trustworthy team of young Syrians who documented all developments in the area. There is not a single piece of evidence that Kurds have displaced Arab civilians from Tel Abyad,” the SNC member said. “There have been no such acts of displacement against Arabs or Turkmen.
The Kurdish forces and allied rebels played a prominent role in clearing the area of IS terrorists. They brought security back to Tel Abyad and its surroundings. People are now returning home after the terrorist group was removed by the joint forces,” al-Khatib told ARA News in Tel Abyad. Regarding possible scenarios for the administration of the city of Tel Abyad, al-Khatib said: “We have submitted a plan to the Kurdish forces and rebel factions regarding the city’s administration; we asked for the removal of all armed military presence from Tel Abyad, and asked for a civil administration in the city and at the border crossing on the Turkish border. Our Kurdish brothers immediately approved our demand and responded to our appeal,” he added. “We share the same interests with the Kurdish Auto-Administration in northern Syria. Thus, we agreed on a roadmap for the city’s administration and its border crossing. The public institutions will be run by locals.” “We also discussed with a group of Syrian lawyers the possibility of forming civilian courts to hold accountable those involved with the Islamic State group in committing crimes against civilians,” al-Khatib concluded. Interview by: Redwan Bizar Source: ARA News
"""
detect_reverse_proxy.py

Copyright 2006 <NAME>

This file is part of w3af, http://w3af.org/ .

w3af is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation version 2 of the License.

w3af is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with w3af; if not, write to the Free Software Foundation, Inc.,
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
"""
import re

import w3af.core.data.kb.knowledge_base as kb
import w3af.core.controllers.output_manager as om

from w3af.core.controllers.plugins.infrastructure_plugin import InfrastructurePlugin
from w3af.core.controllers.exceptions import RunOnce
from w3af.core.controllers.misc.decorators import runonce
from w3af.core.data.kb.info import Info


class detect_reverse_proxy(InfrastructurePlugin):
    """
    Find out if the remote web server has a reverse proxy.

    :author: <NAME> (<EMAIL>)
    """

    def __init__(self):
        InfrastructurePlugin.__init__(self)

        self._proxy_header_list = ['Via', 'Reverse-Via', 'X-Forwarded-For',
                                   'Proxy-Connection', 'Max-Forwards',
                                   'X-Forwarded-Host', 'X-Forwarded-Server']

    @runonce(exc_class=RunOnce)
    def discover(self, fuzzable_request):
        """
        :param fuzzable_request: A fuzzable_request instance that contains
                                 (among other things) the URL to test.
""" # detect using GET if not kb.kb.get('detect_transparent_proxy', 'detect_transparent_proxy'): response = self._uri_opener.GET( fuzzable_request.get_url(), cache=True) if self._has_proxy_headers(response): self._report_finding(response) # detect using TRACE # only if I wasn't able to do it with GET if not kb.kb.get('detect_reverse_proxy', 'detect_reverse_proxy'): response = self._uri_opener.TRACE( fuzzable_request.get_url(), cache=True) if self._has_proxy_content(response): self._report_finding(response) # detect using TRACK # This is a rather special case that works with ISA server; example follows: # Request: # TRACK http://www.xyz.com.bo/ HTTP/1.1 # ... # Response headers: # HTTP/1.1 200 OK # content-length: 99 # ... # Response body: # TRACK / HTTP/1.1 # Reverse-Via: MUTUN ------> find this! # .... if not kb.kb.get('detect_reverse_proxy', 'detect_reverse_proxy'): response = self._uri_opener.TRACK( fuzzable_request.get_url(), cache=True) if self._has_proxy_content(response): self._report_finding(response) # Report failure to detect reverse proxy if not kb.kb.get('detect_reverse_proxy', 'detect_reverse_proxy'): om.out.information('The remote web server doesn\'t seem to have a reverse proxy.') def _report_finding(self, response): """ Save the finding to the kb. :param response: The response that triggered the detection """ desc = 'The remote web server seems to have a reverse proxy installed.' 
        i = Info('Reverse proxy identified', desc, response.id, self.get_name())
        i.set_url(response.get_url())

        kb.kb.append(self, 'detect_reverse_proxy', i)
        om.out.information(i.get_desc())

    def _has_proxy_headers(self, response):
        """
        Performs the analysis

        :return: True if the remote web server has a reverse proxy
        """
        for proxy_header in self._proxy_header_list:
            for response_header in response.get_headers():
                if proxy_header.upper() == response_header.upper():
                    return True
        return False

    def _has_proxy_content(self, response):
        """
        Performs the analysis of the response of the TRACE and TRACK command.

        :param response: The HTTP response object to analyze
        :return: True if the remote web server has a reverse proxy
        """
        response_body = response.get_body().upper()

        # remove duplicated spaces from body
        whitespace = re.compile(r'\s+')
        response_body = re.sub(whitespace, ' ', response_body)

        for proxy_header in self._proxy_header_list:
            # Create possible header matches
            possible_matches = [proxy_header.upper() + ':',
                                proxy_header.upper() + ' :']
            for possible_match in possible_matches:
                if possible_match in response_body:
                    return True
        return False

    def get_plugin_deps(self):
        """
        :return: A list with the names of the plugins that should be run
                 before the current one.
        """
        return ['infrastructure.detect_transparent_proxy']

    def get_long_desc(self):
        """
        :return: A DETAILED description of the plugin functions and features.
        """
        return """
        This plugin tries to determine if the remote end has a reverse proxy
        installed.

        The procedure used to detect reverse proxies is to send a request to
        the remote server and analyze the response headers; if a Via header
        is found, chances are that the remote site has a reverse proxy.
        """
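The plugin's matching logic is easy to exercise outside w3af. Below is a minimal standalone sketch under stated assumptions: the header list is copied from the plugin above, while the w3af response object and `_uri_opener` are replaced by a plain dict of headers and a raw body string, since they are not available outside the framework.

```python
import re

# Header names the plugin treats as evidence of a reverse proxy
# (copied from the plugin's _proxy_header_list).
PROXY_HEADERS = ['Via', 'Reverse-Via', 'X-Forwarded-For', 'Proxy-Connection',
                 'Max-Forwards', 'X-Forwarded-Host', 'X-Forwarded-Server']

def has_proxy_headers(headers):
    """Case-insensitive check of response headers (a plain dict here)."""
    wanted = {h.upper() for h in PROXY_HEADERS}
    return any(name.upper() in wanted for name in headers)

def has_proxy_content(body):
    """Look for echoed proxy headers in a TRACE/TRACK response body,
    after collapsing whitespace as the plugin does."""
    body = re.sub(r'\s+', ' ', body.upper())
    return any(h.upper() + ':' in body or h.upper() + ' :' in body
               for h in PROXY_HEADERS)

print(has_proxy_headers({'Content-Type': 'text/html', 'via': '1.1 cache'}))  # True
print(has_proxy_content('TRACK / HTTP/1.1 Reverse-Via: MUTUN'))              # True
```

Note that the body check is a plain substring match, which mirrors the plugin: it will match `Reverse-Via:` both directly and via the shorter `Via:` pattern.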
package org.grejpfrut.tiller.analysis;

import junit.framework.TestCase;

import org.grejpfrut.tiller.entities.Token;
import org.grejpfrut.tiller.entities.Token.PartOfSpeech;
import org.grejpfrut.tiller.utils.TillerConfiguration;

/**
 * Morpheus tokenizer test.
 *
 * @author <NAME>
 */
public class MorpheusTokenizerTest extends TestCase {

    private static final String STOP_WORDS_LIST = "to,o,z,bez,albo,na,lub,i,a";

    MorpheusTokenizer tokenizer;

    protected void setUp() throws Exception {
        TillerConfiguration config = new TillerConfiguration(null);
        config.setStopWords(STOP_WORDS_LIST);
        this.tokenizer = new MorpheusTokenizer(config);
    }

    public void testStemming() {
        Token token = this.tokenizer.getToken("Tymczasem");
        assertEquals("Tymczasem", token.getText());
        assertEquals("tymczasem", token.getBaseForms().get(0));
        assertEquals(PartOfSpeech.UNKNOWN, token.getInfo().get(0));

        token = this.tokenizer.getToken(":przesłaniający");
        assertEquals(":przesłaniający", token.getText());
        assertEquals("przesłaniać", token.getBaseForms().get(0));
        assertEquals(PartOfSpeech.VERB, token.getInfo().get(0));

        token = this.tokenizer.getToken("Dudczak");
        assertEquals("Dudczak", token.getText());
        assertEquals("dudczak", token.getBaseForms().get(0));
        assertEquals(PartOfSpeech.UNKNOWN, token.getInfo().get(0));

        token = this.tokenizer.getToken("turystyczną");
        assertEquals("turystyczną", token.getText());
        assertEquals("turystyczny", token.getBaseForms().get(0));
        assertEquals(PartOfSpeech.ADJECTIVE, token.getInfo().get(0));

        token = this.tokenizer.getToken("60 minut,");
        assertEquals("60 minut,", token.getText());

        token = this.tokenizer.getToken("to.");
        assertEquals("to.", token.getText());
        assertEquals("to", token.getBaseForms().get(0));
        assertTrue(token.isStopWord());

        token = this.tokenizer.getToken("rzeka");
        assertEquals("rzeka", token.getText());
        assertEquals("rzeka", token.getBaseForms().get(0));
        assertFalse(token.isStopWord());
        assertEquals(PartOfSpeech.NOUN, token.getInfo().get(0));
        token = this.tokenizer.getToken("dom");
        assertEquals("dom", token.getText());
        assertEquals("dom", token.getBaseForms().get(0));
        assertFalse(token.isStopWord());
        assertEquals(PartOfSpeech.NOUN, token.getInfo().get(0));
    }
}
def day_difference(dt1, dt2):
    # Count the days strictly between two dates dt1 and dt2 (attributes y, m, d).
    # count_leap_years is assumed to be defined elsewhere.
    month_days = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
    for dt in (dt1, dt2):
        if not (1 <= dt.m <= 12 and 0 < dt.d <= month_days[dt.m - 1]):
            return "Dates not valid"
    n1 = dt1.y * 365 + dt1.d
    for i in range(dt1.m - 1):  # only months fully elapsed before dt1.m
        n1 += month_days[i]
    n1 += count_leap_years(dt1)
    n2 = dt2.y * 365 + dt2.d
    for i in range(dt2.m - 1):
        n2 += month_days[i]
    n2 += count_leap_years(dt2)
    return abs(n2 - n1) - 1  # excludes both endpoints
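A self-contained version of the routine can be sketched as follows. The `count_leap_years` helper is not shown in the source, so the implementation below is an assumption: it applies the standard Gregorian rule (divisible by 4, except centuries not divisible by 400) and treats dates in January and February as not yet having reached that year's leap day. The trailing `- 1` from the original is kept, so the function counts the days strictly between the two dates.

```python
from collections import namedtuple

# Hypothetical date record matching the dt.y / dt.m / dt.d access in the original.
Date = namedtuple('Date', 'y m d')

MONTH_DAYS = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)

def count_leap_years(dt):
    # Leap years whose Feb 29 falls strictly before dt (Gregorian rule).
    years = dt.y if dt.m > 2 else dt.y - 1
    return years // 4 - years // 100 + years // 400

def to_days(dt):
    # Days elapsed up to dt, counted from a fixed year-0 epoch.
    return dt.y * 365 + sum(MONTH_DAYS[:dt.m - 1]) + dt.d + count_leap_years(dt)

def day_difference(dt1, dt2):
    for dt in (dt1, dt2):
        if not (1 <= dt.m <= 12 and 0 < dt.d <= MONTH_DAYS[dt.m - 1]):
            return "Dates not valid"
    return abs(to_days(dt2) - to_days(dt1)) - 1  # days strictly in between

print(day_difference(Date(2020, 1, 1), Date(2020, 1, 2)))    # 0
print(day_difference(Date(2019, 12, 31), Date(2020, 3, 1)))  # 60
```

The second call crosses Feb 29, 2020, which is where `count_leap_years` earns its keep: 31 + 29 + 1 = 61 calendar days apart, so 60 days lie strictly in between. Like the original, this sketch rejects Feb 29 itself as input, since `MONTH_DAYS` uses a fixed 28-day February.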
From Moscow to Washington to New Delhi and points in between, dismay and condemnation poured forth Thursday over the assassination of Pakistani opposition leader Benazir Bhutto, along with concern for the stability of the volatile region. In India, which has fought three wars against Pakistan, Prime Minister Manmohan Singh said Bhutto is irreplaceable, and noted she had striven to improve relations between the two nuclear-armed countries. "I was deeply shocked and horrified to hear of the heinous assassination," Singh said. "In her death, the subcontinent has lost an outstanding leader who worked for democracy and reconciliation in her country." Bhutto's assassination "is not only bad for Pakistan," said former Indian Foreign Minister Natwar Singh. "It is bad for the entire region." In a letter to Pakistani President Pervez Musharraf, French President Nicolas Sarkozy called the attack an "odious act" and said "terrorism and violence have no place in the democratic debate and the combat of ideas and programs." Bhutto, a former two-time prime minister of Pakistan, was killed in a suicide attack in Rawalpindi just 10 weeks after she returned to her homeland from eight years in exile. The articulate, poised 54-year-old had lashed out at the spread of Islamic extremism as she campaigned for next month's parliamentary elections. The United States had been at the forefront of foreign powers trying to arrange reconciliation between Bhutto and Musharraf, who under heavy U.S. pressure resigned as army chief and earlier this month lifted a state of emergency, in the hope it would put Pakistan back on the road to democracy. "Certainly, we condemn the attack on this rally," said deputy State Department spokesman Tom Casey. "It demonstrates that there are still those in Pakistan who want to subvert reconciliation and efforts to advance democracy." 
Sarkozy said Bhutto had paid "with her life her commitment to the service of her fellow citizens and to Pakistan's political life" and urged Pakistan's elections be held as scheduled on Jan. 8. In Britain, where Bhutto had attended Oxford University, Foreign Secretary David Miliband said he was "deeply shocked" by Bhutto's death. "Benazir Bhutto showed in her words and actions a deep commitment to her country," Miliband said. "She knew the risks of her return to campaign but was convinced that her country needed her. This is a time for restraint but also unity." Italian Premier Romano Prodi said he was filled with grief and called Bhutto "a woman who chose to fight her battle until the end with a single weapon, the one of dialogue and political debate." "The difficult path toward peace and democracy in that region must not be stopped, and Bhutto's sacrifice will serve as the strongest example for those who do not surrender to terrorism," Prodi said. In Moscow, Anatoly Safonov, Russian President Vladimir Putin's envoy on international cooperation against terrorism, expressed fears the assassination would trigger violent repercussions. "The already unstable situation in Pakistan will be further exacerbated by this powerful factor," Safonov said, according to the Interfax news agency. Russian Foreign Ministry spokesman Mikhail Kamynin condemned the attack, the RIA-Novosti news agency reported. "We hope that the leadership of Pakistan will succeed in taking all measures for guaranteeing security in the country," Kamynin said. French Foreign Minister Bernard Kouchner, who personally knew Bhutto, said he hails her memory and called on the international community to support Pakistan and its democracy. Sweden's Foreign Minister Carl Bildt said he had felt disgust when receiving the news of Bhutto's murder, which he called "bestial." "I feel a strong worry for the consequences this will have for Pakistan," he said.
Charlotte Rae, who endeared herself to a generation of TV fans as the affable Mrs. Garrett on the long-running NBC sitcom The Facts of Life, died Sunday at her home in Los Angeles, publicist Harlan Boll announced. She was 92. Rae, who earlier earned two Tony nominations and played Woody Allen's mother in Bananas (1971) and a long-suffering wife on the classic sitcom Car 54, Where Are You?, revealed in April 2017 that she had been diagnosed with bone cancer, seven years after a pancreatic cancer diagnosis. Rae originated the character of Edna Garrett in 1978 for NBC's Diff’rent Strokes and then went on to play her for seven seasons on the Facts of Life spinoff. In 1982, she received an Emmy nomination for outstanding lead actress in a comedy series. In a 1982 interview with the Spartanburg (S.C.) Herald, Rae reflected on the character that had made her a star. “I want to bring in as much humanity as possible, as well as the humor," she said. "I've tried to make her a human being with dimensions. The way they write her now is with a great deal of sensitivity and understanding. But I don't want her to be Polly Perfect, because she must have human failings and make mistakes." Born Charlotte Rae Lubotsky in Milwaukee on April 22, 1926, she was one of three daughters of Russian Jewish immigrants. Rae caught the acting bug early, performing with the Children's Theatre of Wauwatosa and acting on the radio. As a teenager, she won a summer apprenticeship with the Port Players, a professional summer theater company. She also was a regular on stage at Shorewood High School. Around 1948, Rae made the decision to leave school to seek fame and fortune in New York. A versatile singer and dancer, she could often be seen at the Blue Angel or the Village Vanguard. In 1951, Rae married film composer and music editor John Strauss. The couple had two sons, Larry and Andrew.
Rae and Strauss stayed together for more than 25 years until he revealed that he was bisexual and wanted an open marriage. They divorced in 1976. Rae made her Broadway debut in 1952 in the musical comedy Three Wishes for Jamie, then followed with a turn as Mrs. Peachum in the 1954 revival of The Threepenny Opera. The cast also included Bea Arthur, John Astin and Paul Dooley. In 1956, she created the role of Mammy Yokum in the original Broadway production of Li'l Abner. Rae received the first of two Tony noms in 1966, as best featured actress in a musical for Pickwick. The second came in 1969 when she was nominated as best actress in a play for Morning, Noon and Night. In 1955, Rae recorded the album Songs I Taught My Mother. Subtitled Silly, Sinful and Satiric Selections, it featured songs by Cole Porter, Lorenz Hart & Richard Rodgers, Marc Blitzstein, Vernon Duke and Sheldon Harnick. Rae also began her extensive television career in the 1950s, with appearances on The United States Steel Hour and The Phil Silvers Show. A role that really brought her attention came in 1961 when she was cast as Sylvia Schnauser, the wife of Officer Leo Schnauser, played by Al Lewis, in the NBC sitcom Car 54, Where Are You? “I was doing a lot of drama until I took the comedy role in the series Car 54, Where Are You?, and I've been tagged as a comedian ever since,” she said in 1985. When not appearing onstage, Rae worked steadily in television throughout the 1960s and '70s on such series as The Defenders; The Partridge Family; McMillan & Wife; Love, American Style; All in the Family; Good Times; and Barney Miller. She played Molly the Mail Lady on Sesame Street and was a regular on The Rich Little Show. In 1975, she received an Emmy nomination for her performance in the telefilm Queen of the Stardust Ballroom. Rae also popped up in features, taking on comedic roles in Bananas as well as Hello Down There (1969), Jenny (1970), The Hot Rock (1972) and Hair (1979).
Rae was a favorite of legendary TV producer Norman Lear. The two had met in the 1950s when he was writing for The Colgate Comedy Hour. In addition to giving her guest shots on his sitcoms, Lear cast her as Mrs. Bellotti in Hot L Baltimore, the short-lived 1975 adaptation of Lanford Wilson’s hit 1973 play. In 1978, Fred Silverman, then president of ratings-laggard NBC, was high on a sitcom concept called 45 Minutes From Harlem, about a wealthy, white New York industrialist who becomes a foster parent to two orphaned African-American boys who were children of a former employee. The show was being fashioned for Maude co-star Conrad Bain and a newly discovered child actor, Gary Coleman. Silverman wanted Lear to produce. To entice him, Silverman cast Rae as the household’s wisecracking maid. The ploy worked, and Lear’s company, Tandem Productions, produced the show. Rounding out the cast were Todd Bridges as Willis, the older brother of Coleman’s character Arnold, and Dana Plato as Kimberly, the biological daughter of Drummond. Diff’rent Strokes became a hit shortly after debuting in November 1978. Coleman was the breakout star as his catchphrase, “Wha’chu talkin' 'bout, Willis?” became part of TV history. But as the first season unfolded, the growing popularity of Edna Garrett became apparent. The producers designed an episode with an eye towards a spinoff. The season's last entry, "The Girls School," found Mrs. Garrett meeting and bonding with a group of youngsters at East Lake School for Girls, a prestigious prep school that Kimberly was attending. NBC execs liked what they saw and ordered a series. The Facts of Life debuted the following August. Though the basic premise of Edna Garrett guiding the girls through life’s traumas remained throughout its run, the sitcom evolved throughout its nine seasons. The first season featured seven young girls in the cast, including a young Molly Ringwald. By the second season, four of those girls were gone.
The three that remained included Lisa Whelchel as Blair, the spoiled rich girl; Kim Fields as gossip Tootie; and Mindy Cohn as the naive Natalie. Nancy McKeon was added as Jo, a rough-around-the-edges Bronx girl. Mrs. Garrett also transformed from housemother to the school’s dietician. In season five, Mrs. Garrett went into business for herself, creating Edna’s Edibles, a gourmet food shop. Season seven saw the introduction of Blair’s cousin Geri Tyler, who had cerebral palsy. Played by Geri Jewell, who also had the disorder, it marked the first time a character with a disability had a recurring role on a series. By season seven, Rae was feeling that her character was getting stale and asked that her role be reduced. The producers decided to write her out entirely, marrying off Mrs. Garrett and sending her to Africa to work in the Peace Corps. Cloris Leachman was introduced as Garrett’s sister, Beverly, who came in to take over the shop and watch over the girls. After leaving Facts of Life, Rae remained busy, with work on TV shows including 101 Dalmatians: The Series; ER; The King of Queens; Murder, She Wrote; Sisters; and Girl Meets World; and such films as You Don’t Mess With the Zohan (2008), Love Sick Love (2012) and Ricki and the Flash (2015). As syndication fueled the popularity of Facts of Life, Rae was lured back to play the character that had made her a star. She revisited Mrs. Garrett in 1982 with The Facts of Life Goes to Paris and in 2001 for The Facts of Life Reunion. Rae skipped 1987’s The Facts of Life Down Under (that one featured Leachman as her sister). In 2015, Rae released her memoir, The Facts of My Life. Written with her son Larry Strauss, it revealed her struggle to come to grips with her husband’s sexuality and her battle with alcoholism. In addition to her son Larry and his wife, Eleanor, survivors include her sister, Miriam, and grandchildren Sean, Carly and Nora. Her other son Andrew died in 1999 of a heart attack.
In lieu of flowers, her family asked that donations be made to The Actors Fund, Pancreatic Cancer Action Network (Pan-Can) or the Clare Foundation.
George Walton Lucas Jr.[2] (born May 14, 1944) is an American filmmaker and entrepreneur. Lucas is known for creating the Star Wars and Indiana Jones franchises and founding Lucasfilm, LucasArts and Industrial Light & Magic. He was the chairman and CEO of Lucasfilm before selling it to The Walt Disney Company in 2012.[3] After graduating from the University of Southern California in 1967, Lucas co-founded American Zoetrope with filmmaker Francis Ford Coppola. Lucas wrote and directed THX 1138 (1971), based on his earlier student short Electronic Labyrinth: THX 1138 4EB, which was a critical success but a financial failure. His next work as a writer-director was the film American Graffiti (1973), inspired by his youth in early 1960s Modesto, California, and produced through the newly founded Lucasfilm. The film was critically and commercially successful, and received five Academy Award nominations including Best Picture. Lucas' next film, the epic space opera Star Wars (1977), had a troubled production; however, it was a surprise hit, becoming the highest-grossing film at the time, winning six Academy Awards and becoming a cultural phenomenon. Lucas produced and co-wrote the sequels The Empire Strikes Back (1980) and Return of the Jedi (1983). With director Steven Spielberg, he created the Indiana Jones films Raiders of the Lost Ark (1981), Temple of Doom (1984), and The Last Crusade (1989). He also produced and wrote a variety of films through Lucasfilm in the 1980s and 1990s, and during this same period Lucas' LucasArts developed high-impact video games, including Maniac Mansion (1987), The Secret of Monkey Island (1990) and Grim Fandango (1998), alongside many video games based on the Star Wars universe.
In 1997, Lucas rereleased the Star Wars trilogy as part of a Special Edition, featuring several alterations; home media versions with further changes were released in 2004 and 2011. He returned to directing with the Star Wars prequel trilogy, comprising The Phantom Menace (1999), Attack of the Clones (2002), and Revenge of the Sith (2005). He later served as executive producer for the war film Red Tails (2012) and wrote the CGI film Strange Magic (2015). Lucas is one of the American film industry's most financially successful filmmakers and has been nominated for four Academy Awards. His films are among the 100 highest-grossing movies at the North American box office, adjusted for ticket-price inflation.[4] Lucas is considered a significant figure in the New Hollywood era.

Early life

Lucas was born and raised in Modesto, California, the son of Dorothy Ellinore Lucas (née Bomberger) and George Walton Lucas Sr., and is of German, Swiss-German, English, Scottish, and distant Dutch and French descent.[5] He was interested in science fiction, including TV shows such as Flash Gordon. Long before Lucas began making films, he yearned to be a racecar driver, and he spent most of his high school years racing on the underground circuit at fairgrounds and hanging out at garages. On June 12, 1962, when the eighteen-year-old Lucas was driving his souped-up Autobianchi Bianchina, another driver broadsided him, flipping the car and nearly killing him; the crash caused him to lose interest in racing as a career.[6][7] Lucas's father owned a stationery store,[8] and wanted George to work for him when he turned 18. Lucas had been planning to go to art school, and declared upon leaving home that he would be a millionaire by the age of 30.
He attended Modesto Junior College, where he studied anthropology, sociology, and literature, amongst other subjects.[6] He also began shooting with an 8 mm camera, including filming car races.[6] At this time, Lucas and his friend John Plummer became interested in Canyon Cinema: screenings of underground, avant-garde 16 mm filmmakers like Jordan Belson, Stan Brakhage, and Bruce Conner.[10] Lucas and Plummer also saw classic European films of the time, including Jean-Luc Godard's Breathless, François Truffaut's Jules et Jim, and Federico Fellini's 8½.[10] "That's when George really started exploring," Plummer said.[10] Through his interest in autocross racing, Lucas met renowned cinematographer Haskell Wexler, another race enthusiast.[6][10] Wexler, later to work with Lucas on several occasions, was impressed by Lucas' talent.[6] "George had a very good eye, and he thought visually," he recalled.[10] Lucas then transferred to the University of Southern California (USC) School of Cinematic Arts. USC was one of the earliest universities to have a school devoted to motion picture film. During the years at USC, Lucas shared a dorm room with Randal Kleiser. Along with classmates such as Walter Murch, Hal Barwood, and John Milius, they became a clique of film students known as The Dirty Dozen. He also became good friends with fellow acclaimed student filmmaker and future Indiana Jones collaborator, Steven Spielberg. Lucas was deeply influenced by the Filmic Expression course taught at the school by filmmaker Lester Novros, which concentrated on the non-narrative elements of film form, like color, light, movement, space, and time. Another inspiration was the Serbian montagist (and dean of the USC Film Department) Slavko Vorkapić, a film theoretician who made stunning montage sequences for Hollywood studio features at MGM, RKO, and Paramount. Vorkapić taught the autonomous nature of the cinematic art form, emphasizing the kinetic energy inherent in motion pictures.
Film career

1965–69: Early career

Lucas saw many inspiring films in class, particularly the visual films coming out of the National Film Board of Canada like Arthur Lipsett's 21-87, the French-Canadian cameraman Jean-Claude Labrecque's cinéma vérité 60 Cycles, the work of Norman McLaren, and the documentaries of Claude Jutra. Lucas fell madly in love with pure cinema and quickly became prolific at making 16 mm nonstory noncharacter visual tone poems and cinéma vérité with such titles as Look at Life, Herbie, 1:42.08, The Emperor, Anyone Lived in a Pretty (how) Town, Filmmaker, and 6-18-67. He was passionate and interested in camerawork and editing, defining himself as a filmmaker as opposed to being a director, and he loved making abstract visual films that created emotions purely through cinema.[10] After graduating with a bachelor of fine arts in film in 1967, he tried joining the United States Air Force as an officer, but he was immediately turned down because of his numerous speeding tickets. He was later drafted by the Army for military service in Vietnam, but he was exempted from service after medical tests showed he had diabetes, the disease that killed his paternal grandfather. In 1967, Lucas re-enrolled as a USC graduate student in film production.[11] Working as a teaching instructor for a class of U.S. Navy students who were being taught documentary cinematography, Lucas directed the short film Electronic Labyrinth: THX 1138 4EB, which won first prize at the 1967–68 National Student film festival, and was later adapted into his first full-length feature film, THX 1138. Lucas was awarded a student scholarship by Warner Bros. to observe and work on the making of a film of his choosing. The film he chose was Finian's Rainbow (1968) which was being directed by Francis Ford Coppola, who was revered among film school students of the time as a cinema graduate who had "made it" in Hollywood.
In 1969, Lucas was one of the camera operators on the classic Rolling Stones concert film Gimme Shelter.

1969–77: THX 1138, American Graffiti, and Star Wars

In 1969, Lucas co-founded the studio American Zoetrope with Coppola—whom he met during his internship at Warner Bros.—hoping to create a liberating environment for filmmakers to direct outside the perceived oppressive control of the Hollywood studio system.[12] His first full-length feature film produced by the studio, THX 1138, was not a success. Lucas then created his own company, Lucasfilm, Ltd., and directed the successful American Graffiti (1973). Lucas then set his sights on adapting Flash Gordon, an adventure serial from his childhood that he fondly remembered. When he was unable to obtain the rights, he set out to write an original space adventure that would eventually become Star Wars. Despite his success with his previous film, all but one studio turned Star Wars down. It was only because Alan Ladd, Jr., at 20th Century Fox liked American Graffiti that he forced through a production and distribution deal for the film, which ended up restoring Fox to financial stability after a number of flops.[13] Star Wars was significantly influenced by the samurai films of Akira Kurosawa, spaghetti westerns, and classic sword-and-sorcery fantasy stories. Star Wars quickly became the highest-grossing film of all time, displaced five years later by Spielberg's E.T. the Extra-Terrestrial. After the success of American Graffiti and prior to the beginning of filming on Star Wars, Lucas was encouraged to renegotiate for a higher fee for writing and directing Star Wars than the $150,000 agreed.[6] He declined to do so, instead negotiating for advantage in some of the as-yet-unspecified parts of his contract with Fox, in particular ownership of licensing and merchandising rights (for novelizations, T-shirts, toys, etc.)
and contractual arrangements for sequels.[6] The studio was unconcerned about relinquishing these rights, as its last major attempt in the field, with the film Doctor Dolittle (1967), had proved a discouraging failure.[14] Lucas exploited merchandising rights wisely, and Lucasfilm has earned hundreds of millions of dollars from licensed games, toys, and collectibles created for the franchise.[6]

1977–93: Hiatus from directing, Indiana Jones

Director Jim Henson (left) and Lucas working on Labyrinth in 1986

Following the release of the first Star Wars film, Lucas worked extensively as a writer and producer, including on the many Star Wars spinoffs made for film, television, and other media. Lucas acted as a writer and executive producer for the next two Star Wars films, commissioning Irvin Kershner to direct The Empire Strikes Back, and Richard Marquand to direct Return of the Jedi, while receiving a story credit on the former and sharing a screenwriting credit with Lawrence Kasdan on the latter.[15] He also acted as executive producer and story writer on all four of the Indiana Jones films, which his colleague and good friend Steven Spielberg directed. Other successful projects where Lucas acted as a producer or writer in this period include Kurosawa's Kagemusha (1980), Lawrence Kasdan's Body Heat (1981), Ewoks: Caravan of Courage (1984), Ewoks: Battle for Endor (1985), Jim Henson's Labyrinth (1986), Godfrey Reggio's Powaqqatsi (1986), Don Bluth's The Land Before Time (1988), and the Indiana Jones television spinoff The Young Indiana Jones Chronicles (1992–96). There were unsuccessful projects, however, including More American Graffiti (1979), Willard Huyck's Howard the Duck (1986), which was the biggest flop of Lucas's career, Ron Howard's Willow (1988), Coppola's Tucker: The Man and His Dream (1988), and Mel Smith's Radioland Murders (1994).
The animation studio Pixar was founded in 1979 as the Graphics Group, one-third of the Computer Division of Lucasfilm.[16] Pixar's early computer graphics research resulted in groundbreaking effects in films such as Star Trek II: The Wrath of Khan[17] and Young Sherlock Holmes,[17] and the group was purchased in 1986 by Steve Jobs shortly after he left Apple Computer. Jobs paid Lucas US$5 million and put US$5 million into the company as capital. The sale reflected Lucas' desire to stop the cash-flow losses from his seven-year research projects associated with new entertainment technology tools, as well as his company's new focus on creating entertainment products rather than tools. A contributing factor was cash-flow difficulties following Lucas' 1983 divorce, concurrent with the sudden dropoff in revenues from Star Wars licenses following the release of Return of the Jedi. The sound system company THX Ltd. was founded by Lucas and Tomlinson Holman.[18] Formerly owned by Lucasfilm, the company develops stereo, digital, and theatrical sound technologies for film and music. Skywalker Sound and Industrial Light & Magic are the sound and visual effects subdivisions of Lucasfilm, while Lucasfilm Games, later renamed LucasArts, produces products for the gaming industry.

1993–2012: Return to directing, return to Star Wars and Indiana Jones

After losing much of his fortune in a divorce settlement in 1987, Lucas had no desire to return to Star Wars, and had unofficially canceled his sequel trilogy by the time of Return of the Jedi. Nevertheless, the prequels, which were still only a series of basic ideas partially pulled from his original drafts of "The Star Wars", continued to tantalize him with technical possibilities that would make it worthwhile to revisit his older material. When Star Wars became popular once again, in the wake of Dark Horse's comic book line and Timothy Zahn's trilogy of novels, Lucas realized that there was still a large audience.
His children were older, and with the explosion of CGI technology he was now considering a return to directing. By 1993, it was announced, in Variety among other sources, that Lucas would be making the prequels. He began expanding the story, indicating that the series would be a tragic one examining Anakin Skywalker's fall to the dark side. Lucas also began to change the prequels' status relative to the originals; at first they were supposed to be a "filling-in" of history tangential to the originals, but now he saw that they could form the beginning of one long story that started with Anakin's childhood and ended with his death. This was the final step towards turning the film series into a "Saga". In 1994, Lucas began work on the screenplay of the first prequel, tentatively titled Episode I: The Beginning. In 1997, to celebrate the 20th anniversary of Star Wars, Lucas returned to the original trilogy and made numerous modifications using newly available digital technology, releasing the films in theaters as the Star Wars Special Edition. For DVD releases in 2004 and Blu-ray releases in 2011, the trilogy received further revisions to make it congruent with the prequel trilogy. Besides the additions to the Star Wars franchise, Lucas released a director's cut of THX 1138 in 2004, with the film re-cut and containing a number of CGI revisions. The first Star Wars prequel was finished and released in 1999 as Episode I – The Phantom Menace, the first film Lucas had directed in over two decades. Following its release, Lucas announced that he would also be directing the next two, and began working on Episode II.[22] The first draft of Episode II was completed just weeks before principal photography, and Lucas hired Jonathan Hales, a writer from The Young Indiana Jones Chronicles, to polish it. It was completed and released in 2002 as Star Wars: Episode II – Attack of the Clones.
The final prequel, Star Wars: Episode III – Revenge of the Sith, began production in 2002 and was released in 2005. Numerous fans and critics considered the prequels inferior to the original trilogy,[25][26][27] though they were box office successes nonetheless.[28][29][30] From 2003 to 2005, Lucas also served as an executive producer on Star Wars: Clone Wars, an animated microseries on Cartoon Network created by Genndy Tartakovsky that bridged the events between Attack of the Clones and Revenge of the Sith. Lucas collaborated with Jeff Nathanson as a writer of the 2008 film Indiana Jones and the Kingdom of the Crystal Skull, directed by Steven Spielberg. Like the Star Wars prequels, reception was mixed, with numerous fans and critics once again considering it inferior to its predecessors. From 2008 to 2014, Lucas also served as the executive producer for a second Star Wars animated series on Cartoon Network, Star Wars: The Clone Wars, which premiered with a feature film of the same name before airing its first episode. The supervising director for this series was Dave Filoni, who was chosen by Lucas and closely collaborated with him on its development.[31][32][33][34][35] Like the previous series, it bridged the events between Attack of the Clones and Revenge of the Sith. The animated series also featured the last Star Wars stories in which Lucas was majorly involved. In 2012, Lucas served as executive producer for Red Tails, a war film based on the exploits of the Tuskegee Airmen during World War II. He also took over direction of reshoots while director Anthony Hemingway worked on other projects.

2012–present: Semi-retirement

I'm moving away from the business ... From the company, from all this kind of stuff.
—George Lucas on his future career plans.[36]

In January 2012, Lucas announced his retirement from producing large blockbuster films, instead re-focusing his career on smaller, independently budgeted features.[36][37][38] In June 2012, it was announced that producer Kathleen Kennedy, a long-term collaborator with Steven Spielberg and a producer of the Indiana Jones films, had been appointed as co-chair of Lucasfilm Ltd.[39][40] It was reported that Kennedy would work alongside Lucas, who would remain chief executive and serve as co-chairman for at least one year, after which she would succeed him as the company's sole leader.[39][40] With the sale of Lucasfilm to Disney, Lucas is currently Disney's second-largest single shareholder after the estate of Steve Jobs.[41] Since 2014, Lucas has been working as a creative consultant on the Star Wars sequel trilogy, including work on the first film, Star Wars VII: The Force Awakens.[42] As creative consultant on the film, Lucas' involvement included attending early story meetings; according to Lucas, "I mostly say, 'You can't do this. You can do that.' You know, 'The cars don't have wheels. They fly with antigravity.' There's a million little pieces ... I know all that stuff."[43] Lucas' son Jett told The Guardian that his father was "very torn" about having sold the rights to the franchise, despite having hand-picked Abrams to direct, and that his father was "there to guide" but that "he wants to let it go and become its new generation."[44] Among the materials turned over to the production team were rough story treatments Lucas developed when he considered creating episodes VII–IX himself years earlier; in January 2015, Lucas stated that Disney had discarded his story ideas.[45][46] The Force Awakens, directed by J. J. Abrams, was released on December 18, 2015.
Kathleen Kennedy executive produced, and will do so for all future Star Wars films.[47][48] The new sequel trilogy is being jointly produced by Lucasfilm and The Walt Disney Company, which had acquired Lucasfilm in 2012.[49] During an interview with talk show host and journalist Charlie Rose that aired on December 24, 2015, Lucas likened his decision to sell Lucasfilm to Disney to a "divorce" and outlined the creative differences between him and the producers of The Force Awakens. Lucas described the previous six Star Wars films as his "children" and defended his vision for them, while criticizing The Force Awakens for having a "retro feel", saying: "I worked very hard to make them completely different, with different planets, with different spaceships – you know, to make it new". Lucas also drew some criticism and subsequently apologized for his remark likening Disney to "white slavers".[50][51] It has been reported that Lucas liked Rogue One: A Star Wars Story more than The Force Awakens.[52] Rogue One was directed by Gareth Edwards and told the story of the rebels who stole the plans for the original Death Star. In 2015, Lucas wrote the CGI film Strange Magic, his first musical; it was produced at Skywalker Ranch and directed by Gary Rydstrom.[53] When the sequel trilogy was announced, a fifth installment of the Indiana Jones series also entered pre-development, with Harrison Ford and Steven Spielberg set to return for a release in 2019. Lucas originally did not specify whether the sale of Lucasfilm would affect his involvement with the film.
In October 2016, Lucas announced that he would not be involved in the story of the film but would remain an executive producer.[54][55]

Philanthropy

Lucas has pledged to give half of his fortune to charity as part of The Giving Pledge, an effort led by Bill Gates and Warren Buffett to persuade America's richest individuals to donate their financial wealth to charities.[56][57]

George Lucas Educational Foundation

In 1991, the George Lucas Educational Foundation was founded as a nonprofit operating foundation to celebrate and encourage innovation in schools. The Foundation's content is available under the brand Edutopia, through an award-winning website, social media, and documentary films. Lucas, through his foundation, was one of the leading proponents of the E-rate program in the universal service fund,[58] which was enacted as part of the Telecommunications Act of 1996. On June 24, 2008, Lucas testified before the United States House of Representatives subcommittee on Telecommunications and the Internet as the head of his foundation to advocate for a free wireless broadband educational network.[59]

Proceeds from the sale of Lucasfilm to Disney

In 2012, Lucas sold Lucasfilm to The Walt Disney Company for a reported sum of $4.05 billion.[49] It was widely reported at the time that Lucas intended to give the majority of the proceeds from the sale to charity.[60][61] A spokesperson for Lucasfilm said, "George Lucas has expressed his intention, in the event the deal closes, to donate the majority of the proceeds to his philanthropic endeavors."[61] Lucas also spoke on the matter: "For 41 years, the majority of my time and money has been put into the company.
As I start a new chapter in my life, it is gratifying that I have the opportunity to devote more time and resources to philanthropy."[61]

Lucas Museum of Narrative Art

By June 2013, Lucas was considering establishing a museum, the Lucas Cultural Arts Museum, to be built on Crissy Field near the Golden Gate Bridge in San Francisco, which would display his collection of illustrations and pop art, with an estimated value of more than $1 billion. Lucas offered to pay the estimated $300 million cost of constructing the museum, and would endow it with $400 million when it opened, eventually adding an additional $400 million to its endowment.[62] After being unable to reach an agreement with The Presidio Trust, Lucas turned to Chicago.[63] A potential lakefront site on the Museum Campus in Chicago was proposed in May 2014.[64] By June 2014, Chicago had been selected, pending approval of the Chicago Plan Commission,[65] which was granted.[66] The museum project was renamed the Lucas Museum of Narrative Art.[67] On June 24, 2016, Lucas announced that he was abandoning his plans to locate the museum in Chicago, due to a lawsuit by a local preservation group, Friends of the Parks, and would instead build the museum in California.[68] On January 17, 2017, Lucas announced that the museum would be constructed in Exposition Park, Los Angeles, California.[69]

Other initiatives

In 2005, Lucas gave US$1 million to help build the Martin Luther King Jr. Memorial on the National Mall in Washington, D.C., to commemorate American civil rights leader Martin Luther King Jr.[70] On September 19, 2006, USC announced that Lucas had donated $175–180 million to his alma mater to expand the film school.
It is the largest single donation to USC and the largest gift to a film school anywhere.[71] Previous donations led to the already existing George Lucas Instructional Building and Marcia Lucas Post-Production Building.[72][73] In 2013, Lucas and his wife Mellody Hobson donated $25 million to the Chicago-based not-for-profit After School Matters, of which Hobson is the chair.[63] On April 15, 2016, it was reported that Lucas had donated between $501,000 and $1 million through the Lucas Family Foundation to the Obama Foundation, which is charged with overseeing the construction of the Barack Obama Presidential Center on Chicago's South Side.[74]

Personal life

Lucas at the Time 100 gala, 2006

In 1969, Lucas married film editor Marcia Lou Griffin,[75] who went on to win an Academy Award for her editing work on the original Star Wars film. They adopted a daughter, Amanda Lucas, in 1981,[76] and divorced in 1983.[75] Lucas subsequently adopted two more children as a single parent: daughter Katie Lucas, born in 1988, and son Jett Lucas, born in 1993.[76] His three eldest children all appeared in the three Star Wars prequels, as did Lucas himself.
Following his divorce, Lucas was in a relationship with singer Linda Ronstadt in the 1980s.[77][78] Lucas began dating Mellody Hobson, president of Ariel Investments and chair of DreamWorks Animation, in 2006.[79][80][81] Lucas and Hobson announced their engagement in January 2013,[82] and married on June 22, 2013, at Lucas's Skywalker Ranch in Marin County, California.[83] They have one daughter together, Everest Hobson Lucas, who was born via gestational carrier on August 12, 2013.[84] Lucas was born and raised in a Methodist family.[6] The religious and mythical themes in Star Wars were inspired by Lucas's interest in the writings of mythologist Joseph Campbell,[85] and he would eventually come to identify strongly with the Eastern religious philosophies he studied and incorporated into his films, which were a major inspiration for "the Force". Lucas has come to state that his religion is "Buddhist Methodist". He resides in Marin County.[86][87] Lucas is a major collector of works by the American illustrator and painter Norman Rockwell. A collection of 57 Rockwell paintings and drawings owned by Lucas and fellow Rockwell collector and film director Steven Spielberg was displayed at the Smithsonian American Art Museum from July 2, 2010, to January 2, 2011, in an exhibition titled Telling Stories.[88] Lucas has said that he is a fan of Seth MacFarlane's hit TV show Family Guy. MacFarlane has said that Lucasfilm was extremely helpful when the Family Guy crew wanted to parody their works.[89] Lucas supported Democratic candidate Hillary Clinton in the run-up to the 2016 U.S. presidential election.[90]

Awards and honors

The American Film Institute awarded Lucas its Life Achievement Award on June 9, 2005.[91] This was shortly after the release of Star Wars: Episode III – Revenge of the Sith, about which he joked that, since he views the entire Star Wars series as one film, he could actually receive the award now that he had finally "gone back and finished the movie."
Lucas was nominated for four Academy Awards: Best Directing and Writing for American Graffiti and Star Wars. He received the Academy's Irving G. Thalberg Award in 1991. He appeared at the 79th Academy Awards ceremony in 2007 with Steven Spielberg and Francis Ford Coppola to present the Best Director award to their friend Martin Scorsese. During the speech, Spielberg and Coppola talked about the joy of winning an Oscar, making fun of Lucas, who has not won a competitive Oscar. The Science Fiction Hall of Fame inducted Lucas in 2006, its second "Film, Television, and Media" contributor, after Spielberg.[92][93][a] The Discovery Channel named him one of the 100 "Greatest Americans" in September 2008.[94] Lucas served as Grand Marshal for the Tournament of Roses Parade and made the ceremonial coin toss at the Rose Bowl on New Year's Day 2007. In 2009, he was one of 13 California Hall of Fame inductees in The California Museum's yearlong exhibit. In July 2013, Lucas was awarded the National Medal of Arts by President Barack Obama for his contributions to American cinema.[95] In October 2014, Lucas received Honorary Membership of the Society of Motion Picture and Television Engineers.[96][97] In August 2015, Lucas was inducted as a Disney Legend,[98] and on December 6, 2015, he was an honoree at the Kennedy Center Honors.[99]

Filmography

Written works

See also

References

Explanatory notes

^ After inducting 36 fantasy and science fiction writers and editors from 1996 to 2004, the Science Fiction and Fantasy Hall of Fame dropped "fantasy" and made non-literary contributors eligible. Film-maker Steven Spielberg was the inaugural "Film, Television and Media" inductee in 2005; Lucas was the second, in 2006.[101] Previously, Lucas had received a special award at the 1977 World Science Fiction Convention (for Star Wars) and annual professional achievement awards voted by fantasy fans in 1981 and 1982.[102]

Citations

Sources

Kaminski, Michael (2008). The Secret History of Star Wars. Legacy Books Press. ISBN 978-0978465230.
Rinzler, J.W. (2007). The Making of Star Wars: The Definitive Story Behind the Original Film. LucasBooks. ISBN 978-0345494764.
The momentum building for September 24th’s Moving Planet day of action is extraordinary: hundreds of big, ambitious events are already planned all around the world. Many of you have organized some pretty big demonstrations with 350.org in the past (OK, really big — CNN called our mobilization in 2009 “the most widespread day of political action in history”), but this one might be the most impressive yet. Today, I wanted to share one story of how a group of young organizers in the Dominican Republic — “350Dominicana” — are using Moving Planet to create lasting change in their community.

When the leaders of 350Dominicana heard about this September 24’s “Moving Planet”, they knew they wanted to do something extraordinary. Last year, for the Global Work Party on 10/10/10, the 350Dominicana team got hundreds of people to paint and distribute the first set of recycling bins at a school on the island. Since then, they’ve continued to expand the program to 3 more schools, and are slated to expand to 6 more later this year. So far, they’ve been able to divert 18,740 kg (41,315 lbs) of waste in just 3 ½ months.

This year, as 350Dominicana began to plan a big bicycle mobilization for Moving Planet, they realized something — there isn’t a single bike lane in their capital, Santo Domingo. The lack of sustainable transportation options isn’t just a challenge for cyclists, it’s also a major source of pollution. The Dominican Republic has doubled its CO2 emissions in the last 7 years, and cars are the second biggest contributor. So, from now until Moving Planet, 350Dominicana and their allies will be campaigning to get the first bike lane painted in their capital city. On September 24th, they’ll organize a mass bike ride to deliver petitions and a plan for the bike lane to their city leaders.
From pushing for the first bike lane in Santo Domingo, to rallying to stop proposed coal plants in Andhra Pradesh, India, to getting 15,000 people into the streets of Istanbul, Turkey to call for climate action, Moving Planet will be a single day for all of us to move away from fossil fuels — and demand that our leaders do the same.

P.S. Stories help fuel this movement — they create collective inspiration and help spread ideas throughout the network. If you want to share your local organizing story of how you’re working to transform your community, just email it to “story@350.org” — and we’ll share the best ones with our global network.
#include <stdint.h> // // AUTOGENERATED BY BUILD // DO NOT MODIFY - CHANGES WILL BE OVERWRITTEN // uint32_t RESOURCE_ID_JS_SNAPSHOT = 1;
// Renaissance/include/Renaissance/Core/Log.h
#pragma once

#include <memory>

#include "Core.h"

#include "spdlog/spdlog.h"
#include "spdlog/sinks/stdout_color_sinks.h"
#include "spdlog/fmt/ostr.h"

namespace Renaissance
{
    class Log
    {
    public:
        Log();
        ~Log();

        static void Init();

        inline static std::shared_ptr<spdlog::logger>& GetCoreLogger() { return sCoreLogger; }
        inline static std::shared_ptr<spdlog::logger>& GetClientLogger() { return sClientLogger; }

    private:
        static std::shared_ptr<spdlog::logger> sCoreLogger;
        static std::shared_ptr<spdlog::logger> sClientLogger;
    };
}

#ifdef REN_BUILD_SHIPPING
// core logging macros (compiled out in shipping builds)
#define REN_CORE_ERROR(...)
#define REN_CORE_WARN(...)
#define REN_CORE_INFO(...)
#define REN_CORE_TRACE(...)
#define REN_CORE_FATAL(...)
// client logging macros
#define REN_ERROR(...)
#define REN_WARN(...)
#define REN_INFO(...)
#define REN_TRACE(...)
#define REN_FATAL(...)
#else
// core logging macros
#define REN_CORE_TRACE(...) ::Renaissance::Log::GetCoreLogger()->trace(__VA_ARGS__)
#define REN_CORE_INFO(...)  ::Renaissance::Log::GetCoreLogger()->info(__VA_ARGS__)
#define REN_CORE_WARN(...)  ::Renaissance::Log::GetCoreLogger()->warn(__VA_ARGS__)
#define REN_CORE_ERROR(...) ::Renaissance::Log::GetCoreLogger()->error(__VA_ARGS__)
#define REN_CORE_FATAL(...) ::Renaissance::Log::GetCoreLogger()->critical(__VA_ARGS__)
// client logging macros
#define REN_TRACE(...) ::Renaissance::Log::GetClientLogger()->trace(__VA_ARGS__)
#define REN_INFO(...)  ::Renaissance::Log::GetClientLogger()->info(__VA_ARGS__)
#define REN_WARN(...)  ::Renaissance::Log::GetClientLogger()->warn(__VA_ARGS__)
#define REN_ERROR(...) ::Renaissance::Log::GetClientLogger()->error(__VA_ARGS__)
#define REN_FATAL(...) ::Renaissance::Log::GetClientLogger()->critical(__VA_ARGS__)
#endif
package main

import "fmt"

func main() {
	array := []int{10, 20, 30}
	for i := 0; i < len(array); i++ {
		fmt.Println(array[i])
	}
}
package v2

import "encoding/xml"

const (
	DiskFormatVMDKStreamOptimized = "http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized"
)

type File struct {
	ID   string `xml:"ovf:id,attr"`
	Href string `xml:"ovf:href,attr"`
}

type References struct {
	File []File `xml:"File"`
}

type Disk struct {
	Capacity int64  `xml:"ovf:capacity,attr"`
	DiskID   string `xml:"ovf:diskId,attr"`
	FileRef  string `xml:"ovf:fileRef,attr"`
	Format   string `xml:"ovf:format,attr"`
}

type DiskSection struct {
	Info string `xml:"Info,omitempty"`
	Disk Disk   `xml:"Disk"`
}

type Network struct {
	Name        string `xml:"ovf:name,attr"`
	Description string `xml:"Description"`
}

type NetworkSection struct {
	Info    string  `xml:"Info,omitempty"`
	Network Network `xml:"Network"`
}

type ProductSection struct {
	Info       string `xml:"Info,omitempty"`
	Product    string `xml:"Product,omitempty"`
	Vendor     string `xml:"Vendor,omitempty"`
	Version    string `xml:"Version,omitempty"`
	ProductURL string `xml:"ProductUrl,omitempty"`
	VendorURL  string `xml:"VendorUrl,omitempty"`
}

type AnnotationSection struct {
	Info       string `xml:"Info,omitempty"`
	Annotation string `xml:"Annotation,omitempty"`
}

type EulaSection struct {
	Info    string `xml:"Info,omitempty"`
	License string `xml:"License,omitempty"`
}

type OperatingSystemSection struct {
	ID          string `xml:"ovf:id,attr"`
	Info        string `xml:"Info,omitempty"`
	Description string `xml:"Description"`
}

type System struct {
	ElementName             string `xml:"vssd:ElementName"`
	InstanceID              int    `xml:"vssd:InstanceID"`
	VirtualSystemIdentifier string `xml:"vssd:VirtualSystemIdentifier"`
	VirtualSystemType       string `xml:"vssd:VirtualSystemType"`
}

type Item struct {
	Address         *int   `xml:"rasd:Address,omitempty"`
	Caption         string `xml:"rasd:Caption"`
	Description     string `xml:"rasd:Description"`
	InstanceID      int    `xml:"rasd:InstanceID"`
	ResourceType    int    `xml:"rasd:ResourceType"`
	ResourceSubType string `xml:"rasd:ResourceSubType,omitempty"`
	VirtualQuantity int    `xml:"rasd:VirtualQuantity"`
}

type StorageItem struct {
	AddressOnParent     string `xml:"sasd:AddressOnParent,omitempty"`
	Caption             string `xml:"sasd:Caption"`
	Description         string `xml:"sasd:Description"`
	HostResource        string `xml:"sasd:HostResource,omitempty"`
	InstanceID          int    `xml:"sasd:InstanceID"`
	Parent              *int   `xml:"sasd:Parent,omitempty"`
	ResourceType        int    `xml:"sasd:ResourceType"`
	AutomaticAllocation bool   `xml:"sasd:AutomaticAllocation,omitempty"`
}

type EthernetPortItem struct {
	AutomaticAllocation bool   `xml:"epasd:AutomaticAllocation,omitempty"`
	Caption             string `xml:"epasd:Caption"`
	Connection          string `xml:"epasd:Connection"`
	InstanceID          int    `xml:"epasd:InstanceID"`
	ResourceType        int    `xml:"epasd:ResourceType"`
	ResourceSubType     string `xml:"epasd:ResourceSubType,omitempty"`
}

type VirtualHardwareSection struct {
	Info              string             `xml:"Info,omitempty"`
	System            System             `xml:"System"`
	Items             []Item             `xml:"Item"`
	StorageItems      []StorageItem      `xml:"StorageItem"`
	EthernetPortItems []EthernetPortItem `xml:"EthernetPortItem"`
}

type VirtualSystem struct {
	ID                     string                 `xml:"ovf:id,attr"`
	Info                   string                 `xml:"Info,omitempty"`
	ProductSection         ProductSection         `xml:"ProductSection"`
	AnnotationSection      AnnotationSection      `xml:"AnnotationSection"`
	EulaSection            EulaSection            `xml:"EulaSection"`
	OperatingSystemSection OperatingSystemSection `xml:"OperatingSystemSection"`
	VirtualHardwareSection VirtualHardwareSection `xml:"VirtualHardwareSection"`
}

type Envelope struct {
	XMLLang    string `xml:"xml:lang,attr"`
	OVFVersion string `xml:"ovf:version,attr"`
	XMLNS      string `xml:"xmlns,attr"`
	XMLNSOVF   string `xml:"xmlns:ovf,attr"`
	XMLNSRASD  string `xml:"xmlns:rasd,attr"`
	XMLNSVSSD  string `xml:"xmlns:vssd,attr"`
	XMLNSXSI   string `xml:"xmlns:xsi,attr"`
	XMLEPASD   string `xml:"xmlns:epasd,attr"`

	References     References     `xml:"References"`
	DiskSection    DiskSection    `xml:"DiskSection"`
	NetworkSection NetworkSection `xml:"NetworkSection"`
	VirtualSystem  VirtualSystem  `xml:"VirtualSystem"`
}

// Build fills in the fixed OVF namespace attributes and marshals the envelope.
func (c *Envelope) Build() ([]byte, error) {
	c.OVFVersion = "1.0"
	c.XMLNS = "http://schemas.dmtf.org/ovf/envelope/1"
	c.XMLNSOVF = "http://schemas.dmtf.org/ovf/envelope/1"
	c.XMLNSRASD = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData"
	c.XMLNSVSSD = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData"
	c.XMLNSXSI = "http://www.w3.org/2001/XMLSchema-instance"
	c.XMLEPASD = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_StorageAllocationSettingData.xsd"
	if c.XMLLang == "" {
		c.XMLLang = "en-US"
	}
	return xml.Marshal(c)
}
''' Generation helpers '''
from random import randint


def generate_dots(num, width, height):
    ''' yield tuples of x, y coordinates '''
    for _ in range(num):
        yield randint(0, width), randint(0, height)


def generate_graph(dots, neighbourhood_size, distance_range=(10, 100)):
    ''' return a dict mapping each node id to the distances to its neighbours '''

    def distance(node_id, neighbour_id):
        ''' return the already-generated (symmetric) distance, or a new random one '''
        if graph.get(neighbour_id):
            return graph[neighbour_id][node_id]
        return randint(*distance_range)

    graph = {}
    # the dot coordinates themselves are not used here; only the indices matter
    for node_id in range(len(dots)):
        node = {}
        for delta in range(1, neighbourhood_size + 1):
            neighbour_id = node_id + delta
            if neighbour_id < len(dots):
                node[neighbour_id] = distance(node_id, neighbour_id)
            neighbour_id = node_id - delta
            if neighbour_id >= 0:
                node[neighbour_id] = distance(node_id, neighbour_id)
        graph[node_id] = node
    return graph
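A short usage sketch of these helpers. The two functions are restated compactly below so the snippet runs on its own; the seed value and sizes are arbitrary, chosen only to make the randomized output repeatable:

```python
from random import randint, seed


def generate_dots(num, width, height):
    # yield (x, y) coordinate tuples, as in the module above
    for _ in range(num):
        yield randint(0, width), randint(0, height)


def generate_graph(dots, neighbourhood_size, distance_range=(10, 100)):
    # return {node_id: {neighbour_id: distance}}, as in the module above
    def distance(node_id, neighbour_id):
        if graph.get(neighbour_id):
            return graph[neighbour_id][node_id]
        return randint(*distance_range)

    graph = {}
    for node_id in range(len(dots)):
        node = {}
        for delta in range(1, neighbourhood_size + 1):
            if node_id + delta < len(dots):
                node[node_id + delta] = distance(node_id, node_id + delta)
            if node_id - delta >= 0:
                node[node_id - delta] = distance(node_id, node_id - delta)
        graph[node_id] = node
    return graph


seed(0)  # arbitrary, for repeatability
dots = list(generate_dots(5, width=100, height=100))
graph = generate_graph(dots, neighbourhood_size=2)

assert len(graph) == 5
# each edge distance is generated once and reused, so the graph is symmetric
for node_id, node in graph.items():
    for neighbour_id, dist in node.items():
        assert graph[neighbour_id][node_id] == dist
        assert 10 <= dist <= 100
```

Note the `distance` helper: when a lower-numbered neighbour has already been placed in `graph`, its stored distance is reused, which is what keeps the graph symmetric without a second pass.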
Low Nitrogen to Phosphorus Ratios Favor Dominance by Blue-Green Algae in Lake Phytoplankton An analysis of growing season data from 17 lakes throughout the world suggests that the relative proportion of blue-green algae (Cyanophyta) in the epilimnetic phytoplankton is dependent on the epilimnetic ratio of total nitrogen to total phosphorus. Blue-green algae tended to be rare when this ratio exceeded 29 to 1 by weight, suggesting that modification of this ratio by control of nutrient additions may provide a means by which lake water quality can be managed.
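The abstract's ~29:1 threshold (total nitrogen to total phosphorus, by weight) can be applied directly to measured concentrations. The sketch below is illustrative only: the threshold comes from the abstract, but the function name, interface, and example values are assumptions of mine, not from the paper.

```python
def favors_bluegreen(total_n, total_p, threshold=29.0):
    """Return True when the TN:TP weight ratio falls below ~29:1,
    the regime in which the abstract reports blue-green algae
    tended to dominate. Inputs are concentrations in the same
    units (e.g. mg/L); name and interface are illustrative.
    """
    if total_p <= 0:
        raise ValueError("total phosphorus must be positive")
    return (total_n / total_p) < threshold


print(favors_bluegreen(1.5, 0.1))  # TN:TP = 15:1 -> True
print(favors_bluegreen(3.0, 0.1))  # TN:TP = 30:1 -> False
```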