package de.fhpotsdam.unfolding.data.manual;

import processing.core.*;
import de.fhpotsdam.unfolding.UnfoldingMap;
import de.fhpotsdam.unfolding.geo.Location;
import de.fhpotsdam.unfolding.marker.SimplePointMarker;
import de.fhpotsdam.unfolding.providers.OpenStreetMap;
import de.fhpotsdam.unfolding.utils.MapUtils;

public class Algorim extends PApplet {

    private static final long serialVersionUID = 1L;

    UnfoldingMap map;

    public void setup() {
        size(800, 600, OPENGL);
        // OpenStreetMap-based map centered on Berlin (52.5 N, 13.4 E) at zoom level 10.
        map = new UnfoldingMap(this, new OpenStreetMap.OpenStreetMapProvider());
        map.zoomAndPanTo(new Location(52.5f, 13.4f), 10);
        MapUtils.createDefaultEventDispatcher(this, map);
    }

    public void draw() {
        background(0);
        map.draw();
    }

    public Algorim(Location location) {
        super();
    }

    // Marker-style rendering: two translucent concentric circles at the given screen position.
    public void draw(PGraphics pg, float x, float y) {
        pg.pushStyle();
        pg.noStroke();
        pg.fill(200, 200, 0, 100);
        pg.ellipse(x, y, 40, 40);
        pg.fill(255, 100);
        pg.ellipse(x, y, 30, 30);
        pg.popStyle();
    }
}
SAN DIEGO (CNS) - Inclement weather homeless shelters were activated in downtown San Diego this afternoon and will be open through Monday, according to city officials. Emergency shelter information is available by dialing 2-1-1; callers will be provided with referrals to partner agencies that can accommodate immediate housing needs during severe weather, officials said. Due to the heavy rain in the forecast for this weekend, People Assisting the Homeless, or PATH, and St. Vincent de Paul Village have both activated their inclement weather shelters in the downtown San Diego area through Monday. The County of San Diego also activated its inclement weather response. Anyone looking for shelter is encouraged to call 2-1-1, a free, 24-hour, confidential phone service. Information is also available online.
Balance of Irgm protein activities determines IFN-induced host defense

The immunity-related GTPases (IRG), also known as p47 GTPases, are a family of proteins that are tightly regulated by IFNs at the transcriptional level and serve as key mediators of IFN-regulated resistance to intracellular bacteria and protozoa. Among the IRG proteins, loss of Irgm1 has the most profound impact on IFN-induced host resistance at the physiological level. Surprisingly, the losses of host resistance seen in the absence of Irgm1 are sometimes more striking than those seen in the absence of IFN. In the current work, we address the underlying mechanism. We find that in several contexts, another protein in the IRG family, Irgm3, functions to counter the effects of Irgm1. By creating mice that lack Irgm1 and Irgm3, we show that several phenotypes important to host resistance that are caused by Irgm1 deficiency are reversed by coincident Irgm3 deficiency; these include resistance to Salmonella typhimurium in vivo, the ability to affect IFN-induced Salmonella killing in isolated macrophages, and the ability to regulate macrophage adhesion and motility in vitro. Other phenotypes that are caused by Irgm1 deficiency, including susceptibility to Toxoplasma gondii and the regulation of GKS IRG protein expression and localization, are not reversed but exacerbated when Irgm3 is also absent. These data suggest that members of the Irgm subfamily within the larger IRG family possess activities that can be opposing or cooperative depending on the context, and it is the balance of these activities that is pivotal in mediating IFN-regulated host resistance.
import json
from hashlib import sha256


def _get_content_hash(self) -> str:
    """Return a SHA-256 hash of the relevant parts of the local configuration."""
    content = self._local_config
    relevant_content = {}
    for key in self._relevant_keys:
        data = content.get(key)
        # Skip keys that are absent, unless they are legacy keys that must
        # always be represented in the hash.
        if data is None and key not in self._legacy_keys:
            continue
        relevant_content[key] = data
    # Serialize with sorted keys so the hash is stable across runs.
    return sha256(json.dumps(relevant_content, sort_keys=True).encode()).hexdigest()
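For context, here is a minimal sketch of how a method like this might sit on a configuration-holding class and be used; the class name ContentHashable, the example keys, and the sample configs are hypothetical illustrations, not taken from the original code base.

import json
from hashlib import sha256


class ContentHashable:
    """Hypothetical wrapper illustrating how _get_content_hash could be used."""

    _relevant_keys = ["dependencies", "dev-dependencies"]
    _legacy_keys = ["dependencies"]

    def __init__(self, local_config: dict):
        self._local_config = local_config

    def _get_content_hash(self) -> str:
        content = self._local_config
        relevant_content = {}
        for key in self._relevant_keys:
            data = content.get(key)
            if data is None and key not in self._legacy_keys:
                continue
            relevant_content[key] = data
        return sha256(json.dumps(relevant_content, sort_keys=True).encode()).hexdigest()


# Two configs that differ only in keys outside _relevant_keys hash identically,
# which is the point of hashing only the relevant content.
a = ContentHashable({"dependencies": {"requests": "*"}, "name": "demo"})
b = ContentHashable({"dependencies": {"requests": "*"}, "description": "other"})
assert a._get_content_hash() == b._get_content_hash()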
Low-dose arsenic induces chemotherapy protection via p53/NF-κB-mediated metabolic regulation

Most chemotherapeutic drugs kill cancer cells chiefly by inducing DNA damage, which unfortunately also causes undesirable injuries to normal tissues, mainly due to p53 activation. We report a novel strategy of normal-tissue protection that involves p53/NF-κB-coordinated metabolic regulation. Pretreatment of untransformed cells with low doses of arsenic induced concerted p53 suppression and NF-κB activation, which elicited a marked induction of glycolysis. Significantly, this metabolic shift provided cells effective protection against cytotoxic chemotherapy, coupling the metabolic pathway to cellular resistance. Using both in vitro and in vivo models, we demonstrated an absolute requirement of functional p53 in arsenic-mediated protection. Consistently, a brief arsenic pretreatment selectively protected only normal tissues, but not tumors, from the toxicity of chemotherapy. An indispensable role of glycolysis in protecting normal tissues was demonstrated by using an inhibitor of glycolysis, 2-deoxyglucose, which almost totally abolished low-dose arsenic-mediated protection. Together, our work demonstrates that low-dose arsenic renders normal cells and tissues resistant to chemotherapy-induced toxicity by inducing glycolysis.

Introduction

Chemotherapies kill cancer cells primarily by inducing DNA damage, which potently activates p53. Abundant evidence indicates that the toxicity caused by DNA-damaging anticancer therapy is mainly mediated by p53. Recent studies using mouse models indicate that a temporary suppression of p53 activity can significantly reduce DNA damage-induced cytotoxicity without compromising the tumor suppression function, raising the possibility of using brief p53 inhibition for cancer therapy protection. The transcription factor NF-κB regulates various genes important for the immune response, cell proliferation, and cell survival. During the immune response, cells consume large amounts of glucose and primarily use aerobic glycolysis to produce enough energy to meet the bioenergetic demands of cellular proliferation and survival. In addition to its involvement in the immune response, the NF-κB pathway has also been shown to be activated by irradiation-induced DNA damage, but the functional consequences of this response were shown to be multifaceted, as NF-κB was capable of functioning either as a pro-survival or a pro-death signal. Dynamic crosstalk has been demonstrated between the p53 and NF-κB pathways. Although this crosstalk is highly context dependent and has been shown to be either antagonistic or cooperative between the two pathways, p53 and NF-κB are considered, overall, to function against one another to maintain cellular homeostasis. By expanding our recent study indicating that low-dose arsenic can suppress chemotherapeutic drug 5FU-induced p53 activation, we exploited the use of arsenic for protection of normal tissues against chemotherapy-associated damage. We show that low-dose arsenic protects sensitive tissues by inducing reciprocal p53 suppression and NF-κB activation, and a subsequent metabolic shift. Using colon carcinoma xenograft mouse models, we demonstrate that a brief pretreatment with low-dose arsenic selectively protected normal tissues, but not tumor cells, from 5-Fluorouracil (5FU)-induced killing.
A mutually exclusive interaction between p53 and NF-κB in low-dose arsenic-induced protection

When human fibroblasts were treated with 5FU, a distinct response of p53 and NF-κB was observed. In contrast with p53, which was robustly induced in response to DNA damage (supplemental Fig. 1A), little NF-κB activity was detected in 5FU-treated human fibroblasts, as reflected by a chiefly cytoplasmic p65 distribution (supplemental Fig. 1B). The lack of NF-κB activity in 5FU-treated cells was not due to any defect of this pathway, since there was a clear induction of p65 nuclear distribution by TNF, a known NF-κB activator (supplemental Fig. 1B). This distinct response of p53 and NF-κB led us to test the effect of low-dose arsenic, which we previously showed can inhibit 5FU-induced p53 activation. When compared with the control, pretreatment of human fibroblasts with 50 nM arsenic for 12 h resulted in marked suppression of both p53 activation and γH2AX induction by 5FU (Fig. 1A, 5FU versus As+5FU), consistent with what was observed with epithelial cells. Interestingly, parallel to this impaired p53 response was NF-κB activation, as indicated by an overt p65 nuclear distribution in arsenic-treated cells (Fig. 1B). Significantly, low-dose arsenic pretreatment was also associated with considerable NF-κB activation in 5FU-treated cells (Fig. 1B). To tie the response of p53 and NF-κB to cellular sensitivity to 5FU, we examined cell survival by performing an apoptotic assay. Consistent with the significantly diminished γH2AX and p53 induction in arsenic-treated cells, 5FU-induced apoptosis was considerably reduced (Fig. 1C). Collectively, the data demonstrated an inverse correlation of p53 and NF-κB with cell survival, where p53, but not NF-κB, activation was linked to 5FU-induced cell death while, conversely, low-dose arsenic-induced protection was associated with concerted suppression of p53 and stimulation of NF-κB.

A requirement of functional p53 in low-dose arsenic-induced protection

We further investigated this seemingly opposite response of NF-κB and p53 to arsenic by asking whether suppression of p53 function is necessary for NF-κB activity. Cells were pretreated with a low dose of Nutlin-3a, a p53-specific activator. Interestingly, under the condition of mild p53 activation, low-dose arsenic-induced p65 nuclear distribution was completely blocked (Fig. 2A, As versus Nutlin-3a+As), suggesting that p53 inhibition is necessary to allow NF-κB activation. Correlated with the NF-κB activity was cellular sensitivity. In Nutlin-3a-treated cells, arsenic was unable to induce protection. The levels of 5FU-induced γH2AX were comparable in the presence or absence of arsenic (Fig. 2B). We further tested the p53 requirement by depleting p53 expression with siRNA. Indeed, down-regulation of p53 nearly eliminated the difference in cellular sensitivity to 5FU between arsenic-treated and untreated cells (Fig. 2C). We also used a mutant p53-expressing mouse model to validate the in vitro findings. In contrast to wild-type p53 mice, where arsenic prevented 5FU-induced body weight loss, p53 mutant mice showed little response to arsenic (supplemental Fig. 2). Together, the results indicate that functional p53 is essential for low-dose arsenic-induced protection.

Low-dose arsenic-induced protection is mediated by a metabolic change

Growing evidence indicates that both p53 and NF-κB are involved in the regulation of cellular metabolism, where p53 promotes oxidative phosphorylation whereas NF-κB stimulates aerobic glycolysis.
We tested the possibility that arsenic-induced p53 suppression coupled with NF-κB stimulation may affect cellular metabolism by favoring glycolysis. Indeed, when compared to control cells, an equal number of low-dose arsenic-treated cells exhibited a clear increase of lactate production (Fig. 3A), which was blocked by the addition of 2-deoxyglucose (2-DG), an inhibitor of glycolysis, supporting a glycolytic metabolism. To substantiate this observation, we determined the levels of glucose transporters 1 and 3, since the expression of glucose transporters is critical to glycolysis. Immunostaining revealed that the levels of GLUT-1 and GLUT-3 were indeed considerably induced by arsenic treatment (Fig. 3B). A close temporal correlation between arsenic-induced p65 nuclear localization and GLUT-3 induction suggested an NF-κB-mediated regulation (supplemental Fig. 3). Apart from GLUT-3, NF-κB was reported to induce HIF1α. Interestingly, arsenic induced not only a clear increase of the protein abundance but also nuclear distribution of HIF1α (Fig. 3C). Treatment with capsaicin, an NF-κB pathway inhibitor, blocked this effect of low-dose arsenic, consistent with NF-κB-dependent regulation (Fig. 3C). We also used Nutlin-3a and capsaicin to demonstrate that p53 inhibition and NF-κB stimulation were critical for the induction of GLUT-3 by arsenic (Fig. 3D & E). The effect of capsaicin was further verified by depleting p65 expression with siRNA (supplemental Fig. 4). Together, our data indicate a functional interaction between p53 and NF-κB in the regulation of cell metabolism. By inhibiting p53 activity and permitting NF-κB to function, low-dose arsenic induces glycolysis.

We went on to test whether the observed increase in glycolytic metabolism contributes to the arsenic-induced resistance to 5FU. Two independent approaches, limiting the glucose supply or 2-DG, were used to inhibit glycolysis. Low-glucose cultures completely lost arsenic-induced protection, as evidenced by a comparable level of apoptosis induction by 5FU in lymphocytes with or without pretreatment with arsenic (Fig. 4A). The requirement of glycolysis was further supported by the use of 2-DG, which nearly completely abrogated arsenic-induced protection (Fig. 4A). The crucial role of glycolysis in arsenic-mediated protection was also evident when γH2AX induction was analyzed in fibroblasts (Fig. 4B-D). We further substantiated the data derived from 2-DG by using RNAi to knock down the expression of lactate dehydrogenase (LDH), an enzyme essential for glycolysis. A result almost identical to that of 2-DG was observed (Fig. 4E), supporting a requirement of glycolysis in arsenic-mediated protection. An important role of the pentose phosphate pathway (PPP) was also tested by depleting the expression of glucose-6-phosphate dehydrogenase (G6PD), the rate-limiting enzyme of the PPP. The arsenic-mediated protection was abrogated in G6PD-deficient cells (Fig. 4F). The immunostaining data in fibroblasts were further validated by the colony formation survival assay (Fig. 4G), supporting a critical role of the glycolytic and PPP pathways in arsenic-induced protection. Collectively, our data support a model in which low-dose arsenic induces a coordinated p53 inhibition and NF-κB stimulation, which upregulates the expression of HIF1α and GLUT-3, leading to a metabolic shift to glycolysis, and it is the glycolytic and PPP pathways that render cells more resistant to 5FU toxicity.
To test the physiological relevance of low-dose arsenic-induced cellular resistance, we extended our study to mice. Immunohistochemistry of the small intestine, a tissue very sensitive to 5FU, indicated an overt increase of GLUT-3 protein abundance in the arsenic- and arsenic-plus-5FU-treated mice, but not in the control or 5FU-alone-treated mice (Figure 5A), consistent with the in vitro data. To complement the analysis of GLUT expression, we performed live animal imaging to monitor the uptake of labeled 2-DG. Strikingly, a clear increase of glucose uptake was evident in low-dose arsenic-treated mice. The glucose uptake in the arsenic-plus-5FU-treated mice was also considerably increased, albeit slightly lower than that in mice treated with arsenic only (Figure 5B). Inspection of small intestine crypt morphology revealed a close correlation of glycolysis and protection (Figure 5C). The importance of glycolysis was further supported by a pretreatment of mice with 2-DG (200 mg/kg body weight), which blocked arsenic-mediated protection (Figure 5C).

Low-dose arsenic selectively protects normal tissues without affecting the antitumor efficacy of 5FU

We next exploited the distinct p53 status that separates normal tissues from most cancer cells to assess the potential of low-dose arsenic in selectively protecting normal tissues from 5FU-induced injury. For this purpose, we used the colon carcinoma cell line SW-480 to generate a mouse xenograft model. Treatments were initiated when the tumor reached an average volume of approximately 100 mm³. The mice were pretreated with or without 0.4 mg/kg sodium arsenite for three days before being subjected to 5FU treatment. The tumor volume in the vehicle group continued to increase with time (Fig. 6A). Low-dose arsenic treatment did not have any detectable effect on tumor growth (Fig. 6A). 5FU treatment (30 mg/kg body weight, i.v.) daily for one week resulted in marked tumor inhibition (Fig. 6A). Significantly, low-dose arsenic pretreatment showed little effect on 5FU-induced tumor suppression, as 5FU-induced tumor regression was indistinguishable in mice that were pretreated with or without low-dose arsenic (Fig. 6A). There was little difference between male and female mice in response to the treatment with 5FU and arsenic (Fig. 6A). Our data thus indicate that a brief pretreatment with low-dose arsenic does not detectably affect the efficacy of 5FU, at least in the human colon carcinoma xenograft mouse model.

To examine whether low-dose arsenic could alleviate 5FU-induced toxicity in these tumor-bearing mice, we monitored body weight change. In contrast to control and low-dose arsenic-treated mice, 5FU-treated mice exhibited a significant loss of body weight (Fig. 6B). This weight loss is most likely caused by 5FU toxicity and not the effect of tumors, as we saw almost complete regression of tumors in these animals (Fig. 6A). Remarkably, the 5FU-induced body weight loss was almost completely prevented in both male and female mice by the arsenic pretreatment (Fig. 6B). To corroborate the result of the body weight measurement, we assessed the effect of low-dose arsenic at the tissue level and observed a similar protection. 5FU-induced damage to the small intestine was markedly ameliorated by low-dose arsenic pretreatment (Fig. 6C). The protective effect of arsenic is also evident in the bone marrow. Bone marrow cell exhaustion was clearly observed in 5FU-treated mice. However, this decrease of bone marrow cellularity was considerably alleviated in low-dose arsenic-pretreated mice (Fig.
6D). Collectively, the results demonstrate that a brief treatment with low-dose arsenic is associated with a marked protection of normal tissues without compromising the ability of 5FU to kill carcinoma cells. Since arsenic has been reported as a carcinogen or co-carcinogen, it is of great importance to determine whether the use of arsenic as described above might increase cancer risk. Ionizing radiation, a classical carcinogen, was used as a positive control and indeed induced a significant increase in the incidence of cancer, as reflected by cancer-associated death. In contrast, there was no detectable cancer development in either arsenic-treated or control mice during the 12-month period (Fig. 6E), indicating that such a brief use of low-dose arsenic in mice did not detectably increase cancer risk.

Discussion

p53 and NF-κB are two transcription factors important in controlling cell survival or death. We demonstrate that cellular fate is determined by an integrated interaction of these two transcription factors. 5FU-induced cell death resulted from p53 activation coupled with little NF-κB activity. Interestingly, low-dose arsenic provoked a very different effect on these two transcription factors by inducing reciprocal p53 inhibition and NF-κB activation. Importantly, p53 suppression seemed to be a prerequisite for NF-κB activation, as shown by the finding that Nutlin-3a-induced p53 activation blocked low-dose arsenic-induced NF-κB activation. Moreover, the interaction requires a functional p53, as low-dose arsenic failed to stimulate NF-κB activity and was unable to reduce 5FU toxicity when the expression of p53 was depleted. The requirement of functional p53 enabled low-dose arsenic to selectively protect wild-type p53-expressing cells, as demonstrated with the tumor xenograft mouse model in which pretreatment with low-dose arsenic resulted in protection of animals against 5FU-induced acute toxicity to normal tissues without affecting the anti-tumor efficacy of 5FU. This requirement of wild-type p53 is significant because p53 is very frequently inactivated in human cancers, and the distinct p53 status between normal and tumor tissues enables low-dose arsenic to preferentially protect normal tissues.

A unique feature of arsenic-induced effects is a biphasic dose response: the effects induced by low-dose arsenic differ from those of high-dose arsenic not only in magnitude but also in nature, i.e., cyto-protective versus cytotoxic. The protective effects observed in our study with low-dose arsenic are in agreement with published results. We, however, expanded the effects of low-dose arsenic to the functional interaction between p53 and NF-κB in the regulation of cellular metabolism. We demonstrate that by suppressing p53 activity and permitting NF-κB to act on the metabolic pathways, low-dose arsenic induced a metabolic shift to glycolysis. NF-κB initiated the glycolytic pathway by up-regulating the expression of GLUT3 and HIF1α. GLUT3 encodes the glucose transporter protein GLUT-3, facilitating the uptake of glucose. HIF1α is the master transcription factor known to positively regulate a number of enzymes in the glycolytic pathway. This glycolytic induction is reminiscent of the Warburg effect, which offers growth and survival advantages. We presented multiple lines of evidence to demonstrate that it was the glycolytic and PPP pathways that provided cells or tissues the ability to mount a defense against 5FU toxicity.
Low-dose arsenic treatment failed to protect normal cells or tissues when glycolysis was suppressed by 1) limiting the glucose supply; 2) inhibiting hexokinase activity with 2-DG; or 3) knocking down LDH or G6PD. Considering that arsenic trioxide is currently in clinical use, a brief treatment with low-dose arsenic has potential as a novel approach to chemotherapy protection.

siRNA-mediated gene knockdown

All siRNAs were purchased from Sigma-Aldrich. Multiple sequences of siRNA against each gene were used. siGL2, which targets the luciferase gene in the pGL2 construct, was used as the negative control. siRNAs were reverse-transfected at 25 nM using Lipofectamine RNAiMAX (Invitrogen, #13778).

Metabolic assays

Extracellular lactate was measured in the cell culture medium with a lactate assay kit (BioVision, #K667-100). Lactate production was calculated as the difference in lactate concentration between the medium and the cell cultures.

Cell viability and FACS analysis

Cell viability was assessed using the trypan blue exclusion assay, and the percentages of viable cells were counted. An annexin V apoptosis kit (BioVision, #K101-100) was used for the FACS-based assay as per the manufacturer's instructions.

Animal study

All animal procedures were conducted in accordance with the Guidelines for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee (IACUC) at The University of Texas Health Science Center at San Antonio (UTHSCSA). Balb/c mice (6 weeks old) were purchased from Harlan Laboratories. Mice were housed under pathogen-free conditions and maintained on a 12 h light/12 h dark cycle, with food and water supplied ad libitum. Individual mice were treated with or without sodium arsenite (0.4 mg/kg body weight, i.p.) for three days. For live animal imaging, the animals were injected intravenously through the tail vein with 100 μl of IRDye 800CW 2-DG (10 nmol) 1 h after 5FU treatment (100 mg/kg body weight, i.v.). A Caliper IVIS Spectrum system (Caliper, Alameda, CA) was used to capture images. Throughout this study, animals were imaged using the same anesthesia protocol, 2% isoflurane in 100% oxygen at 2.5 L/min. Body temperature was maintained at 37°C by a heated stage. The images were acquired with mice in the supine position using the epi-illumination method. For mouse xenograft experiments, inocula of 3 × 10⁶ SW-480 cells in 0.1 mL of PBS were mixed with Matrigel at 4°C and then injected into the s.c. space on the right flank of mice. When tumors reached ~0.1 cm³, mice were randomized into experimental groups for treatments.

Histological and immunohistochemical analysis

Prior to embedding in paraffin, tissue specimens were fixed in 37% formalin and dehydrated. Hematoxylin and eosin staining was performed according to standard procedures. For immunohistochemical analysis, paraffin-embedded sections were deparaffinized with xylene and rehydrated in decreasing concentrations of ethanol. Antigen retrieval was performed in 10 mM citrate buffer (pH 6.0). Endogenous peroxidase activity was blocked by treating tissue sections with 3% hydrogen peroxide. Sections were incubated with goat serum to block non-specific antibody binding, followed by incubation with the primary antibody. The staining procedure followed the manufacturer's instructions (ABC staining system, Santa Cruz Biotechnology).

Statistical analysis

Experiments with cell lines were repeated at least 3 times. Two-way ANOVA was used for statistical analysis.
For mouse experiments, the Mann–Whitney U test was used for comparisons between different groups.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.

A distinct response of p53 and NF-κB in low-dose arsenic-induced protection. Human fibroblasts were pretreated with either PBS or 100 nM sodium arsenite for 12 h, followed by 5FU (375 μM) or DMSO. The cells were harvested 1 h after the 5FU treatment and subjected to either co-immunostaining with p53 and γH2AX (A) or p53 and p65 (B). C, human lymphocytes were pretreated with or without 50 nM sodium arsenite for 12 h. The cells were then exposed to 375 μM 5FU or DMSO. The cells were harvested 12 h later for apoptotic assay by FACS. The numbers are mean ± SD from 3 independent experiments.

Requirement of functional p53 in low-dose arsenic-induced protection. A, fibroblasts were pretreated with DMSO (control) or Nutlin-3a (10 μM) for 1 h and then with or without sodium arsenite (100 nM) for 12 h. The cells were harvested for immunostaining with p65 and DAPI counter-staining. B, fibroblasts were treated as in A, followed by 5FU treatment (375 μM) for 1 h. The cells were harvested and stained for γH2AX with DAPI counter-staining. C, fibroblasts were transfected with p53 RNAi (the p53 RNAi knockdown efficiency was determined by RT-PCR and is shown in Supplemental Fig. 5) and subjected to the treatment and analysis as in A.

Low-dose arsenic treatment induces glycolysis via concerted p53 suppression and NF-κB stimulation. A, human fibroblasts were pretreated with DMSO or 2-DG (5 mM) for 1 h, followed by either PBS or 100 nM sodium arsenite for 12 h. Culture media were collected for lactate concentration measurement. B, fibroblasts were treated with either PBS or 100 nM sodium arsenite for 12 h. The cells were subjected to immunostaining with anti-GLUT-1 or anti-GLUT-3 antibodies. C, fibroblasts were pretreated with DMSO (control) or capsaicin (300 μM) for 1 h, followed by arsenic for 12 h. The cells were harvested and immunostained with HIF1α and DAPI. D, fibroblasts were pretreated with DMSO (control) or Nutlin-3a (10 μM) for 1 h and then arsenic as described in C. The cells were subjected to immunostaining with anti-GLUT-3 and DAPI. E, fibroblasts were pretreated with DMSO (control) or capsaicin (300 μM), followed by arsenic as described in C. The cells were immunostained with GLUT-3 and DAPI.

Glycolysis is essential for low-dose arsenic-mediated protection. A, human lymphocytes were cultured in normal (25 mM glucose) or low glucose (2 mM) media, treated as in Fig. 1C and subjected to apoptotic assay. Lymphocytes were pre-treated with 2-DG (5 mM) for 1 h prior to addition of 5FU and then analyzed as in Fig. 1C. Human fibroblasts were cultured in either normal glucose (25 mM) (B) or low glucose (2 mM) (C) media, treated with or without 100 nM sodium arsenite followed by 5FU (375 μM). The cells were fixed 1 h after the 5FU treatment and immunostained with γH2AX. D, fibroblasts were pre-treated with 2-DG.

The in vivo study of the low-dose arsenic-induced glycolysis. A, all animal procedures were performed in accordance with a protocol approved by the UTHSCSA Animal Care and Use Committee. Balb/c mice (4-6 weeks) purchased from Harlan Laboratories and maintained on a 12 h light/12 h dark cycle, with food and water supplied ad libitum, were pretreated (intra-peritoneal injection) with or without sodium arsenite 0.4 mg/kg body weight for 3 consecutive days.
The animals were then treated with 5FU (100 mg/kg body weight) or DMSO and harvested 24 h later. The small intestines were harvested and the expression of GLUT-3 was examined by immunohistochemical staining. B, mice were treated as in A and live animal imaging was performed using the procedure described in Materials and Methods to monitor the uptake of labeled glucose. The optical images are shown. C, mice were treated as in A. 100 μl saline or 2-DG (200 mg/kg body weight) was given i.v. 12 h prior to harvesting. The small intestines were harvested and stained with H&E.

Low-dose arsenic selectively protects normal tissues without affecting the antitumor efficacy of 5FU. Athymic nude mice (Balb/c nu/nu, 4-6 weeks old) were from Harlan Laboratories. Human colon carcinoma SW-480 cells (as a 50% suspension in Matrigel), 3 million cells per mouse in a final volume of 100 μl, were injected subcutaneously into the right flank of Balb/c nude mice. When the average tumor volume reached about 100 mm³, mice were randomized into the following groups: control; arsenite only; 5FU only; arsenite and 5FU. For arsenite pretreatment, mice were treated with sodium arsenite (0.4 mg/kg body weight) for 3 days (Day 0-3). Mice were then treated with 5FU (30 mg/kg body weight) i.v. daily for one week (Day 4-10). Tumor volumes were measured every four days. Tumor volume was calculated using the equation: volume = length × width × depth × 0.5236 (mm³). Two independent experiments were done, and the tumor volumes are means ± SE from a total of 10 mice per group (A; * and # are significantly different from control, P < 0.05). B, body weight of the mice as described in A was monitored throughout the experiment. The numbers are means ± SD from two independent experiments with a total of 10 mice per group ($ is significantly different from control, P < 0.05). At the completion of the experiments, mice were sacrificed by cervical decapitation. Tissue samples were harvested for histology experiments. H&E staining was performed. Representative H&E staining of the small intestine (C) and bone marrow (D) is shown. E, Balb/c mice were treated with PBS or sodium arsenite (0.4 mg/kg body weight) for three days. A third group of mice was treated with ionizing radiation at a dose of 2 Gy for a total of 3 doses (6 Gy). The animals were monitored for 12 months for tumor development and survival.
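As a quick numerical illustration of the tumor volume formula quoted above (the caliper measurements are made up for the example, not taken from the study):

\[
V = l \times w \times d \times 0.5236, \qquad
V = 10 \times 8 \times 5 \times 0.5236 \approx 209\ \mathrm{mm}^{3}
\quad (l = 10\ \mathrm{mm},\ w = 8\ \mathrm{mm},\ d = 5\ \mathrm{mm}).
\]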
/*
 * ***** BEGIN GPL LICENSE BLOCK *****
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 *
 * The Original Code is Copyright (C) 2001-2002 by NaN Holding BV.
 * All rights reserved.
 *
 * The Original Code is: all of this file.
 *
 * Contributor(s): none yet.
 *
 * ***** END GPL LICENSE BLOCK *****
 */

/** \file RAS_MaterialBucket.h
 *  \ingroup bgerast
 */

#pragma once

#include "MT_Transform.h"
#include "RAS_DisplayArrayBucket.h"

class RAS_IPolyMaterial;
class RAS_MaterialShader;
class RAS_Rasterizer;

/* Contains a list of display arrays with the same material,
 * and a mesh slot for each mesh that uses display arrays in
 * this bucket */
class RAS_MaterialBucket
{
 public:
  RAS_MaterialBucket(RAS_IPolyMaterial *mat);
  virtual ~RAS_MaterialBucket();

  // Material Properties
  RAS_IPolyMaterial *GetPolyMaterial() const;
  RAS_MaterialShader *GetShader() const;
  bool IsAlpha() const;
  bool IsZSort() const;
  bool IsWire() const;
  bool UseInstancing() const;

  /// Set the shader after its conversion or when changing to custom shader.
  void UpdateShader();

  void AddDisplayArrayBucket(RAS_DisplayArrayBucket *bucket);
  void RemoveDisplayArrayBucket(RAS_DisplayArrayBucket *bucket);

  void MoveDisplayArrayBucket(RAS_MeshMaterial *meshmat, RAS_MaterialBucket *bucket);

 private:
  RAS_IPolyMaterial *m_material;
  RAS_MaterialShader *m_shader;
  RAS_DisplayArrayBucketList m_displayArrayBucketList;
};
I Will Do It If I Enjoy It! The Moderating Effect of Seeking Sensory Pleasure When Exposed to Participatory CSR Campaigns

In an attempt to gain differentiation, companies are allocating resources to corporate social responsibility (CSR) initiatives. At the same time, they are giving consumers a more active role in the process of creating value. In this sense, consumer participation represents a new approach to gain competitive advantage. However, the effectiveness of consumer participation in CSR campaigns still remains unknown. With the purpose of shedding light on this issue, this paper shows that participatory CSR campaigns lead to greater consumer perceptions of CSR, which in turn result in more favorable attitudes toward the company. Furthermore, the effect is stronger for sensory pleasure seekers, whose involvement with the experience is greater. The findings contribute to the CSR literature and reveal important implications for marketers.

INTRODUCTION

In the current marketplace, companies must attain differentiation and credibility to develop strong and long-term relationships with consumers. To achieve these goals, a growing number of firms are allocating resources to corporate social responsibility (CSR). Both the way consumers perceive the information on CSR and the level of stimulation this information generates influence attitudes and behaviors (Brown and Dacin, 1997; Sen and Bhattacharya, 2001). For example, inferences drawn from a company's prosocial actions can change even product evaluations (products are perceived as performing better), regardless of whether consumers are observing or experiencing the product (Chernev and Blair, 2015). A large body of research has empirically established that consumers' perceptions of firms' motives for engaging in CSR influence their evaluations of and responsiveness to CSR. In general, consumers are aware that CSR can contribute to company image formation, and thus their interest in CSR activities continues to rise (Schmeltz, 2012). However, to some extent current approaches to CSR are still disconnected from companies' global strategy, thus masking their opportunities to benefit society (Porter and Kramer, 2006). This flaw highlights the need to shed light on the connection between CSR actions and other mechanisms in order to assist consumer persuasion. In the past two decades, consumers have begun taking more active roles in companies' efforts to compete for and create value (Prahalad and Ramaswamy, 2000). That is, consumers are no longer passive audiences but active coproducers of value. Bendapudi and Leone linked high levels of consumer participation to competitive effectiveness. In support of this, extant literature in marketing has found that consumer participation has a positive effect on consumer behavior. However, while the impact of CSR and participation on consumer behavior has been widely demonstrated in the literature, whether consumer participation in CSR activities increases the effectiveness of the latter still remains unexplored. In addition, the notion that consumers seek out pleasurable products and experiences (Hirschman and Holbrook, 1982) must be taken into account. Because participation may be associated with use of a product or going through an experience, the possibility of experiencing sensory pleasure may influence consumers' perceptions of the CSR activities in which they participate. With the aim of shedding light on this issue, the goal of this paper is twofold.
First, we aim to demonstrate that the participatory nature of CSR campaigns influences consumer perceptions. Second, we assess whether the dispositional trait of sensory pleasure seeking moderates this effect. The structure of the paper is as follows: We begin with a review of the relevant literature and present the theoretical background. Then, we develop a set of hypotheses and describe the method. Finally, we report the main results and discuss conclusions.

CONCEPTUAL FRAMEWORK AND HYPOTHESES

In the past decade, researchers have shown interest in understanding how CSR activities influence consumer behavior (Marin and Ruiz, 2007; Boulouta and Pitelis, 2014). By engaging in CSR and signaling this engagement to consumers, companies can improve consumer-related outcomes (Luo and Bhattacharya, 2006). Companies can use CSR as an instrument to enhance firm image through its effects on consumers' intentions and attitudes (Brown and Dacin, 1997; Sen and Bhattacharya, 2001; Bhattacharya and Sen, 2004). That is, CSR initiatives can be central, distinctive, and enduring, thus contributing to more positive consumer evaluations of the company. Consumers attribute many corporate motives to CSR engagement, related mainly to company contributions to society. Attribution theory states that people attribute causes to events and that their cognitive perceptions influence their subsequent attitudes and behavior (Kelley and Michela, 1980). In addition, and according to the persuasion knowledge model (Friestad and Wright, 1994), consumers accumulate knowledge on persuasive motives and tactics and then use such knowledge to make inferences about firms' ultimate motives. Thus, what consumers know about a company influences their associations. CSR associations reflect an organization's status and activities with respect to its perceived obligations to society and can exert different effects on consumer responses (Brown and Dacin, 1997). In summary, consumers' associations with CSR activities influence their evaluations of and responsiveness to CSR (Becker- ;). Using the persuasion knowledge model and attribution theory as theoretical foundations, we posit that a CSR campaign is a persuasive attempt to create positive consumer perceptions.

Consumer Participation

Being consumer oriented is not enough for firms to successfully compete in today's marketplaces. Firms must learn from and collaborate with consumers to create value that meets their individual and dynamic needs (Prahalad and Ramaswamy, 2000). Ulrich argues that involving consumers is a powerful way to increase consumer loyalty and commitment. The service literature lends further support to this claim, finding a positive and significant relationship between consumer participation and commitment. Previous research claims that as consumers' involvement with a firm increases, the company gains more opportunity to shape consumer perceptions. Thus, consumers with high levels of involvement may have perceptions of quality and levels of satisfaction that differ from those who are less involved. In line with this, research in marketing has underscored the importance of consumer participation, or "the degree to which the consumer is involved in producing and delivering the service" (Dabholkar, 1990, p. 484). Participation can include tasks such as spending time interacting, responding to questions, or providing information on product specifications, brand preferences, and price range (Dabholkar and Sheng, 2012).
The potential of consumer participation has attracted research attention because of the assumption that when consumers participate actively, organizations can gain competitive advantage through greater sales volume, enhanced operating efficiencies, positive word of mouth, reduced marketing expenses, and enhanced consumer loyalty (Reichheld and Sasser, 1990). One stream of research focuses on the reasons consumers should engage in the service provision process and deals with the economic benefits of consumer participation (Bendapudi and Leone, 2003). A second stream also considers consumer motivations to cocreate a service, analyzing the motivation of self-service consumers and exploring key factors that influence initial trial decisions, consumer traits, and situational factors on technology adoption. A third stream focuses on managing consumers as partial employees (Bendapudi and Leone, 2003), assuming that consumers' active participation in service provision leads to greater perceived service quality and enhanced consumer satisfaction (Dabholkar, 1990). In addition, extant literature in marketing has found that consumer participation has a positive effect on consumer behavior. Thus, research has shown the positive effect of participation in the areas of consumer decision making, brand loyalty (Bagozzi and Dholakia, 2006), commitment to the brand, quality perceptions, word of mouth (Kim and Jung, 2007), trust, affective commitment to the product, and sensory perceptions (Troye and Supphellen, 2012). Bendapudi and Leone show that when the service outcome is better than expected, participating consumers are more satisfied than non-participating consumers. Matzler et al. report that in contexts characterized by high consumer participation, consumer satisfaction and other postpurchase responses (e.g., positive word of mouth, loyalty) are more favorable. Additionally, participation has been related to higher employee satisfaction and performance. Therefore, as consumers' participation increases, subsequent outcomes become more positive. Encouraging consumer participation, then, may represent a good opportunity to gain competitive effectiveness and should deliver value to both customers and firms (Bendapudi and Leone, 2003).

Consumer Participation and CSR Associations

Prior research has shown that firms can generate more favorable attitudinal responses from consumers when they are proactively engaged in CSR activities rather than acting reactively (Becker- ;). This effect finds support in the employee participation literature, which shows that participation influences perceptions of, for example, service quality. In the same vein, Bowen suggests that as consumers increase their level of involvement with a firm, the firm gains the opportunity to shape their perceptions, and Kelley et al. report that consumers with high levels of service involvement have perceptions of service quality and levels of satisfaction that differ from consumers not highly involved in the participatory role. Furthermore, Claycomb et al. demonstrate that consumer participation results in more positive perceptions of the organization and that higher levels of consumer participation in the service delivery process are associated with positive perceptions of service encounter performance. In this context, consumer participation in a CSR campaign reflects the degree to which the consumer is involved in CSR activities.
The findings on consumer participation related to a company's main activity can also apply to other activities developed by the company, such as those related to CSR. Therefore, we propose that the participatory nature of the CSR campaign will have a positive effect on consumer perceptions of CSR. We contend that consumers' participation in CSR activities will result in greater involvement, greater understanding, and deeper knowledge, which in turn will lead to perceptions of more CSR effort and, therefore, greater CSR associations. CSR associations influenced by corporate efforts depend to some degree on effective firm communication with external audiences and represent consumers' perceptions. Therefore, we propose the following:

H1: Consumers exposed to a participatory CSR campaign will have greater CSR associations than consumers exposed to a non-participatory campaign.

Motivation for Sensory Pleasure and CSR Associations

Prior research has documented that consumers seek out pleasurable products and experiences (Hirschman and Holbrook, 1982) and show motivational differences in pursuing favorable experiences and avoiding unpleasant ones (Chapman and Chapman, 1985). The motive for sensory pleasure (MSP) describes the individual drive to seek out pleasant auditory, visual, tactile, olfactory, and taste experiences and to similarly avoid unpleasant sensory experiences. Recently, Eisenberger et al. noted that high MSP individuals engage in greater pursuit of favorable experiences. Moreover, personality theorists have examined dispositional differences in the enjoyment of sensory experiences (Chapman and Chapman, 1985). Thus, some individuals are high sensory pleasure seekers and others are less biased in relation to this pursuit of pleasure. According to Jackson (1984, p. 7), the highly sentient person "notices smells, sounds, sights, tastes, and the way things feel; remembers these sensations and believes they are an important part of life; is sensitive to many forms of experience; may maintain an essentially hedonistic or aesthetic view of life." As a result, consumers can serve as "moderators" of pleasure through their idiosyncratic reactions to product experiences (Alba and Williams, 2013). Prior research has shown the importance of pleasure in consumer behavior, demonstrating that emotional states (pleasure and arousal) are important determinants of purchase behavior (López López and Ruiz de Maya, 2012). Fiore finds that sensory pleasure from a catalog page positively affected approach responses of global attitude. In addition, theoretical support exists for the link between pleasure and satisfaction. As Bigné et al. note, consumers who derive pleasure from an experience are more likely to exhibit positive behavioral intentions, such as positive word of mouth, satisfaction, and intention to return to the store. However, motivation for sensory pleasure may work in the opposite direction. If consumers motivated for sensory pleasure do not experience what they are looking for (sensory pleasure), their interest in the stimulus may be low, which will also imply lower processing (Petty and Cacioppo, 1986). In addition, while motivation for sensory pleasure is clearly related to emotional involvement, Eisenberger et al.
point out the uniqueness of this personality trait as separate from need for cognition and, as such, subjects highly motivated to seek sensory pleasure will base their behavior on emotions associated with the activity rather than cognitions (which require processing) related to how well the company is performing the CSR activity it is developing. The application of this reasoning to CSR activities, therefore, leads us to propose that those who are highly motivated to seek sensory pleasure will process the campaign much less and will show fewer CSR associations than those who are less motivated to search for sensory pleasure. Formally,

H2: The higher the consumers' motivation for sensory pleasure, the lesser their CSR associations will be when exposed to a CSR campaign.

The Moderating Role of Motivation for Sensory Pleasure

Personal relevance theory holds that individuals have a level of interest in and give particular importance to a cause. Therefore, when a cause is important to consumers, they will feel more interested and involved in the action. Previous research has shown that involvement significantly moderates how stimulus cues influence brand evaluation and communication effectiveness (Maoz and Tybout, 2002). More important, involvement is positively related to information processing (Leigh and Menon, 1987). Therefore, because CSR campaigns influence consumers' cognitive responses, those more involved will process the information of the CSR campaign more thoroughly and will value the social nature of the campaign more than those less involved (Gupta and Pirsch, 2006). Consumer participation is a behavior that reflects a state of involvement that can be increased by other sources of motivation. As Eisenberger et al. argue, "high MSP individuals' enhanced motivation produce greater pursuit of favorable nature experience." Accordingly, the nature of the campaign (participatory or non-participatory) should generate different responses, depending on the participants' additional involvement (i.e., the level of consumers' motivation to seek sensory pleasure). From these arguments, we propose that in a participatory campaign, consumers who are sensory pleasure seekers will be more involved with the campaign and, consequently, will have more CSR associations (perceive greater CSR). Therefore, we propose the following:

H3: When exposed to a participatory CSR campaign, the effect on CSR associations will be stronger for consumers with high motivation for sensory pleasure than for those with low motivation for sensory pleasure.

Consumer Skepticism of CSR Associations

Skepticism refers to a person's tendency to doubt, disbelieve, and question (Forehand and Grier, 2003). Research in the field of economics and business views skepticism as a potential consumer response to the actions of companies (Skarmeas and Leonidou, 2013) and defines it as consumers' distrust of or disbelief in companies (Webb and Mohr, 1998). The limited research on this topic notes that skepticism toward a company (negative assessment) occurs when consumers attribute selfish motives to the company's actions (Webb and Mohr, 1998). Thus, skepticism predisposes consumers to doubt the veracity of the communication activities of the company (Obermiller and Spangenberg, 1998). Indeed, consumers show a natural tendency to be skeptical of advertising (Obermiller and Spangenberg, 1998), though the extent to which they are skeptical varies from consumer to consumer.
The cognitive approach provides an explanation for consumer skepticism of persuasive communication. Within this approach, the persuasion knowledge model (Friestad and Wright, 1994) states that consumers learn to interpret and evaluate the persuasion agents' goals and tactics and use this knowledge to cope with persuasion attempts. Consumers use the resulting knowledge to identify situations that motivate skepticism. Research on skepticism has been developed in different contexts, such as corporate social marketing (Forehand and Grier, 2003), environmental claims, communication of CSR (Vanhamme and Grobben, 2009), and CSR programs. As a result, communicating CSR initiatives may be problematic (Pomering and Dolnicar, 2009) because consumers frequently perceive these initiatives as marketing actions that companies engage in out of their own self-interest (Haniffa and Cooke, 2005). Therefore, inferred motivations determine the level of consumer skepticism toward CSR messages and the credibility of social actions. If consumers perceive a company's motivation as selfish, they will be more skeptical about the campaign and will give less credibility to the company's communication activities. Because prior research has established that CSR campaigns include tactics that can raise suspicion of firm motives, consumer skepticism can bias the perception of CSR engagement. Suspicion about CSR activities will be stronger for skeptical consumers than for non-skeptical consumers, with a subsequent negative impact on CSR associations. Thus:

H4: The more skeptical consumers are, the lesser their CSR associations will be.

The Relationship between CSR Associations and Attitudes

What consumers know about a company can influence their overall evaluations of and attitudes toward it (Luo and Bhattacharya, 2006). As part of their knowledge about the firm, consumers' perceptions of CSR are likely to influence their attitudes toward the firm and its social initiatives (Brown and Dacin, 1997). Attribution theory provides an appropriate framework for explaining how people attribute causes to events and how this cognitive perception affects their subsequent attitudes and behavior (Kelley and Michela, 1980). In this sense, CSR associations play an important role in consumers' responses to the company because they create a general context for evaluations (Sen and Bhattacharya, 2001). Consumers evaluate companies as well as their products in terms of CSR, and their perceptions of the motives for engaging in CSR influence their evaluations of and responsiveness to CSR (Becker- ;). Prior research has shown that consumers who are aware of a CSR initiative view the company as socially responsible (Brown and Dacin, 1997; Bhattacharya and Sen, 2004). As a result, the CSR activity has the potential to increase CSR associations and attitudes. If consumers believe that a company is concerned with the well-being of society and is committed to "doing good," they are more likely to have favorable attitudes toward the company. More specifically, consumers who are aware of CSR initiatives report more positive attitudes and behavioral intentions. Accordingly, we expect these positive associations with CSR actions to lead to more positive attitudes toward the company:

H5: The greater the consumers' CSR associations, the more favorable their attitudes toward the company will be.

Sample and Procedure

We ran a field study in which 196 people were randomly selected on the streets of a medium-sized European city.
Upon arrival at the lab, participants were randomly assigned to one of two experimental conditions: a participatory CSR campaign condition or a non-participatory CSR campaign condition. They were exposed to an advertisement of a fictitious CSR campaign developed by a local brewer. The campaign was related to reforestation. In the participatory condition, the participants were invited to take part in the campaign by planting a tree, while the non-participatory scenario was purely informative and indicated that the company performed reforestation activities. After that, participants reported their attitudes toward the company, CSR associations, search for sensory pleasure, and skepticism. Participants took approximately 10 min to complete the questionnaire. The sample included 94 men (47.96%) and 102 women (52.04%), ranging from 18 to 35 years of age (M = 24.35), with 66.33% between 18 and 25 years. Graduate respondents accounted for 6.12%, undergraduates represented 30.10%, and respondents with a high school education (61.73%) or less (2.04%) accounted for the remainder. All research activities were performed in accordance with University of Murcia's institutional review board policies concerning research with human subjects. Prior to participation in the study, we gave participants an information sheet and told them the activity was part of an experiment. After completing the questionnaire, they were thanked and debriefed. The questionnaire comprised four scales adapted from previous research. We used three seven-point semantic differential scale items adapted from Lafferty and Goldsmith to measure attitude. We assessed CSR associations with a four-item Likert scale adapted from Dean. We measured motivation for sensory pleasure with five seven-point scale items (1 = strongly disagree; 7 = strongly agree) adapted from Eisenberger et al. Finally, we used Skarmeas and Leonidou's six seven-point semantic differential scale items for skepticism and two items for the manipulation check (did company X ask for your participation in the reforestation campaign? yes/no; did the campaign ask you to do something specific? yes/no).

Measurement Assessment

Preliminary versions of the questionnaire were administered to a convenience sample of 20 consumers. We used the pretest results to improve the measures and design an appropriate structure for the questionnaire. Regarding the manipulation check, the 98 participants exposed to the participatory campaign confirmed that the campaign was participatory as required, while the 98 participants exposed to the non-participatory campaign confirmed the opposite. We performed a validation check for the resulting measurement scales to assess their reliability, validity, and unidimensionality. We evaluated the reliability of the constructs using Cronbach's alpha coefficients (see Table 1). Cronbach's alphas for the four constructs were above 0.70. Confirmatory factor analysis tested the measurement model and obtained acceptable overall model fit statistics. We assessed reliability using the composite reliability index and the average variance extracted (AVE) index. For all the measures, both indices were higher than the evaluation criteria of 0.60 and 0.50, respectively (Bagozzi and Yi, 1988), as Table 1 shows. In line with Fornell and Larcker's suggested procedures, the scales showed acceptable convergent and discriminant validity.
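For readers who want to reproduce this kind of reliability check, the following is a minimal sketch of how Cronbach's alpha, composite reliability, and AVE can be computed from an item-response matrix and standardized loadings. The response matrix and loadings below are illustrative placeholders, not the study's data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized loadings of one construct's indicators."""
    errors = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    return (loadings ** 2).mean()

# Illustrative data: 6 respondents answering a 4-item Likert scale.
responses = np.array([
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 7, 6],
    [3, 3, 4, 3],
    [5, 5, 6, 5],
    [6, 6, 6, 7],
])
loadings = np.array([0.78, 0.81, 0.74, 0.69])  # hypothetical CFA loadings

print(cronbach_alpha(responses))             # benchmark: above 0.70
print(composite_reliability(loadings))       # benchmark: above 0.60
print(average_variance_extracted(loadings))  # benchmark: above 0.50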
We assessed convergent validity by verifying that all indicators had statistically significant loadings on their respective latent constructs. The robust standard errors resulting from the use of the asymptotic covariance matrix were substantially larger (and the t-values smaller) than those produced by a model using the standard covariance matrix as input, validating the need for revised structural equation modeling (SEM) procedures in the face of strong non-normality in the data set. We also have evidence of discriminant validity. First, the phi matrix and associated robust standard errors presented in Table 2 ensured that unit correlation among latent variables was extremely unlikely (Bagozzi and Yi, 1988). Second, for all the pairwise relationships in the phi matrix, the AVE for each latent variable exceeded the square of the correlation between the variables. To provide a further check of discriminant validity, for each pair of the latent variables, we compared the scaled difference chi-square statistic of the hypothesized measurement model with a second model that constrained the correlation between those two latent variables to unity. The corrected chi-square difference tests using the Satorra-Bentler scaled chi-square values (Satorra and Bentler, 2001) indicated that the hypothesized measurement model was always superior to the constrained models. As a result, we are confident that each of the latent variables in our model exhibits discriminant validity with all other latent variables. Internal consistency and discriminant validity results enabled us to proceed with the estimation of the structural model. Potential common method bias was assessed following the recommendations of Lindell and Whitney. We used the smallest positive value within the correlation matrix as a conservative estimate of bias. This happens to be the correlation between motivation for sensory pleasure and CSR associations (r = 0.02). When we determined the statistical significance of the adjusted correlations, none of the correlations that were significant before the adjustment lost significance after the adjustment, indicating that the hypothesized relationships were not affected by common method variance. Table 3 reports the results of the SEM applied to test the hypotheses proposed in the theoretical model. We again used the asymptotic covariance matrix and robust maximum likelihood in model estimation. The model fit the data acceptably, as evidenced by the goodness-of-fit measures.

RESULTS

We tested the effect of the participatory nature of the CSR campaign on CSR associations. While the main effect of the participatory nature of the CSR campaign did not significantly influence CSR associations (β = 0.13; SE = 0.15), thus rejecting H1, its interaction with sensory pleasure did (β = 0.41; SE = 0.15), in support of H3. Therefore, on the basis of these results, we can affirm that sensory pleasure seeking moderates the effects of the participatory nature of the CSR campaign on CSR associations. The negative coefficient of the main effect of sensory pleasure seeking (β = −0.35; SE = 0.13) confirms the direct effect of sensory pleasure, as proposed in H2. In addition, in H4 we predicted that the more skeptical the consumers, the lesser their CSR associations would be, and our statistical test also found support for this relationship. That is, less skeptical consumers have greater CSR associations (β = 0.37; SE = 0.07).
Accordingly, if consumers are less suspicious about the real motives of a CSR campaign, their associations with CSR actions are more positive. Finally, the results confirm that CSR associations exert a significant and positive influence on consumers' attitudes toward the company (β = 0.31; SE = 0.07), as predicted in H5. Thus, as consumers generate more positive CSR associations, their attitudes toward the company become more favorable. The significant interaction effect was further analyzed through floodlight analysis using the Johnson-Neyman approach, in order to calculate the range of values of sensory pleasure seeking for which the participatory nature of the CSR campaign has an effect on CSR associations different from zero. The results, obtained with the probemod R package, show that the effect of the participatory nature of the CSR campaign on CSR associations is positive and significant (i.e., the confidence interval does not contain zero at p = 0.01) for values of sensory pleasure seeking above 5.02. With this information, we divided the sample into two groups, low sensory pleasure seeking subjects (with scores on this variable below 5.02) and high sensory pleasure seeking subjects (with scores above 5.02), with subsample sizes of 133 and 63, respectively. While sensory pleasure seeking was hypothesized in our study to interact only with the participatory nature of the CSR campaign, for the multi-group analysis we also considered its potential effect on the other relationships, as a way to check whether these interactions should have been included in the original model. Therefore, a model that imposed equality constraints on the three parameters (participatory nature of the CSR campaign-CSR associations, skepticism-CSR associations, and CSR associations-attitude toward the company) and a general model that allowed those parameters to vary freely across subgroups were compared. A chi-square difference test revealed that the unconstrained model represented a significant improvement in fit over the constrained model (Δχ² = 14.60; df = 3, p < 0.01). This result provides initial evidence to support the moderating effect of sensory pleasure seeking on the structural model. A further series of tests identified only one path moderated by sensory pleasure seeking (Table 4). More specifically, the results showed a significant moderating effect consistent with H3. For high sensory pleasure seekers, being exposed to a participatory CSR campaign (compared to a non-participatory one) has a significant influence on CSR associations (β = 1.10; SE = 0.24). However, this effect is not significant for low sensory pleasure seekers (β = −0.23; SE = 0.19). The significant change in chi-square (Δχ² = 7.49; df = 1, p < 0.01) indicates that this coefficient is different for the two groups. Additionally, the coefficients for the other two relationships displayed in Table 4 are significant for the two groups, and the changes in chi-square indicate that they are not significantly different between the two groups. In other words, seeking sensory pleasure moderates neither the effect of skepticism on CSR associations nor the effect of the latter variable on consumer attitudes toward the company. In summary, these results fully confirm H3. The negative effect of seeking sensory pleasure on CSR associations can be related to how the company's CSR activities were described. As Eisenberger et al.
demonstrate, individuals with a high motivation for sensory pleasure show increased interest in high-detail, but not low-detail, contextual information about the pleasantness of an activity. In other words, these subjects' preference for very detailed information about the possibilities of experiencing pleasure could have provoked a sense of frustration and lack of interest in the experimental stimuli, as they were not very detailed when describing the company's CSR activities (a characteristic common to many ads). This lack of interest may have favored shallower processing of the information and, therefore, weaker CSR associations. We ran an additional study to provide further support for the scenarios we used. All research activities were performed in accordance with the University of Murcia's institutional review board policies concerning human subjects research. We collected 41 questionnaires. Twenty-one participants were assigned to the participatory campaign whereas the remaining 20 were assigned to the non-participatory campaign. Through items ranging from 0 to 10, individuals rated the campaign in terms of credibility, realism, level of participation required, and level of involvement required. They also rated their ability to imagine themselves immersed in the situation described. Results showed that both groups perceived the scenario they were exposed to as highly credible. In summary, although our manipulation was based on scenarios instead of real participation, subjects immersed themselves in those scenarios, and those assigned to the participatory condition perceived the CSR campaign as more participatory than did participants exposed to the regular CSR campaign.

GENERAL DISCUSSION

In the current competitive marketplace, companies intensely seek differentiation and credibility. One mechanism to reach such goals is consumer participation in CSR campaigns. However, while the effect of participation on consumer behavior has received considerable attention, whether consumer participation in CSR activities increases the effectiveness of these activities remains unknown. The current research lends support to the contention that by proactively engaging consumers in CSR initiatives, firms can generate more favorable attitudinal responses than by acting in a reactionary manner (Becker- ;). Our findings on the effect of the participatory nature of a CSR campaign constitute a significant contribution both to the theory of consumer behavior and to business management. Thus, from a theoretical perspective, this research contributes to a better understanding of the effects of the participatory nature of CSR campaigns on consumer behavior. Specifically, although we did not find a general effect of the participatory nature of the campaign on consumers' perceptions of CSR activities, this does not mean that this effect does not exist. As the interaction shows, this effect is associated with consumers who are highly motivated to seek sensory pleasure. When the campaign is participatory, sensory pleasure seekers have greater perceptions of CSR activities than when the campaign is non-participatory. However, when consumers do not seek sensory pleasure, the fact that the campaign offers possibilities to engage the senses does not contribute to increasing their CSR associations concerning the company.
These results are in line with prior research suggesting that the motivation for sensory pleasure plays an important role in consumer responses, as well as with research suggesting that enabling consumer participation leads to more positive outcomes (Troye and Supphellen, 2012; Olsen and Mai, 2013). In summary, our results show that motivation and involvement positively moderate the effect of a message on the valence of consumers' associations. A positive message (CSR activities undertaken by the company) contributes to more positive associations (CSR associations) when the consumer is more motivated (seeks sensory pleasure) and more involved (participates in the production of the CSR activity). In addition, in light of the negative attributions consumers may attach to companies' real motives for conducting CSR actions, our research demonstrates that skeptical consumers form weaker CSR associations because they may perceive the campaign as manipulative. This, in turn, results in less favorable attitudes toward the company. This result is in line with research positing that consumers are often skeptical of advertising claims related to a company's participation in social or environmental issues (Obermiller and Spangenberg, 1998). From a managerial perspective, this research shows that, by itself, participation may not be enough to gain consumers' involvement and, in turn, generate greater CSR associations and favorable attitudes. To obtain this effect, companies should emphasize the possibility of finding pleasure during the CSR campaign. Firms could try to activate sensory pleasure, beyond the personal predisposition of each individual, by promoting CSR as an action that will produce an enjoyable experience as involvement increases. Therefore, marketing managers should not only engage their customers in their CSR actions, so that they actively take part in their implementation (as opposed to adopting a passive role and trusting that the company will do what is promised), but also design participatory CSR activities that stimulate and match consumers' hedonic motivation to seek pleasure. In addition to participation and sensory pleasure seeking, marketers should pay attention to skepticism toward the campaign. Some consumers tend to discredit CSR actions, as they interpret them as an attempt to manipulate their perceptions. They believe that the real motives behind CSR actions are not socially oriented but profit oriented. To minimize the impact of such skepticism, managers should provide consumers with cues that lend the campaigns more credibility. For example, they could report the results of previous initiatives to prove that the company is socially oriented and concerned about societal well-being. Despite these findings and implications, this research has some limitations. First, although participants were able to imagine themselves in the proposed scenarios, we must acknowledge that the scenarios did not allow consumers to really participate in the CSR campaign. Therefore, further research should assess whether our findings hold in such a real context. Second, we used only one company, which limits the generalizability of the results. Thus, future research should analyze whether the implementation of participatory CSR activities for different products can generate different results across company sectors. Third, other variables related to consumer-company interactions, such as the relationship with company workers, may also affect the results.
Finally, another avenue for research pertains to consumers' personality traits, which may moderate the effects. For example, pro-environmental attitudes or prosocial behavior may boost the positive influence of participation.

AUTHOR CONTRIBUTIONS

RL collected the data, and the three authors participated equally in the literature review, data analysis, and writing of the paper.

FUNDING

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by grant ECO2012-35766 from the Spanish Ministry of Economics and Competitiveness and by the Fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia (Spain), under the II PCTRM 2007-2010. The authors also thank the support provided by Fundación Cajamurcia.
#include <stdlib.h>
#include <stddef.h>
#include <stdio.h>
#include "dynamic_array.h"

/* Forward declaration: grow_dynamic_array is called by add_to_dynamic_array
 * before its definition below. */
static void grow_dynamic_array(dynamic_array_t *array);

/* Assumes dynamic_array_t stores caller-owned pointers in a void** `elems`
 * field, as implied by the original indexing and %p printing of elements. */
void initialize_dynamic_array(dynamic_array_t *array, size_t initial_length, size_t size_of){
    if(initial_length == 0){
        printf("Error: dynamic array must be initialized with length of at least 1\n");
        exit(EXIT_FAILURE);
    }
    array->current_length = initial_length;
    /* Allocate room for `initial_length` pointers. (The original multiplied by
     * sizeof(size_of), i.e. sizeof(size_t), which was unrelated to the element size.) */
    array->elems = malloc(sizeof(void *) * initial_length);
    if(array->elems == NULL){
        printf("Error: allocation failed\n");
        exit(EXIT_FAILURE);
    }
    array->size_of_elem = size_of;   /* kept for API compatibility; not used below */
    array->current_index = 0;
}

void add_to_dynamic_array(dynamic_array_t *array, void *any_type){
    /* The original compared sizeof(any_type) against size_of_elem, but sizeof a
     * void* is always the pointer size, so that check could never validate the
     * pointed-to object's size and has been removed. */
    if(array->current_index >= array->current_length)
        grow_dynamic_array(array);
    array->elems[array->current_index] = any_type;
    array->current_index += 1;
}

/* Double the capacity whenever the array fills up (amortized O(1) appends). */
static void grow_dynamic_array(dynamic_array_t *array){
    array->current_length *= 2;
    void **tmp = realloc(array->elems, sizeof(void *) * array->current_length);
    if(tmp == NULL){
        printf("Error: reallocation failed\n");
        exit(EXIT_FAILURE);
    }
    array->elems = tmp;
    printf("GREW: %lu\n", (unsigned long)array->current_length);
}

void print_dynamic_array(dynamic_array_t *array){
    for(size_t i = 0; i < array->current_index; i++)
        printf("arr[%zu]: %p\n", i, (void*)array->elems[i]);
}

void free_dynamic_array(dynamic_array_t *array){
    free(array->elems);
    array->elems = NULL;
}
/*
 * SimpleTimeoutSender.cc
 *
 *  Created on: Mar 20, 2017
 *      Author: xtikka
 */

#include "SimpleTimeoutSender.h"
#include "../../messages/basicmessages/NetworkTimeoutMsg_m.h"

SimpleTimeoutSender::SimpleTimeoutSender(NodeBase* owner) {
    // The owning node is expected to be a SimpleBot; keep a typed pointer to it.
    node = static_cast<SimpleBot*>(owner);
}

SimpleTimeoutSender::~SimpleTimeoutSender() {
}

void SimpleTimeoutSender::handleMessage(BasicNetworkMsg* msg) {
    // If this node is inactive, it cannot answer the incoming message,
    // so reply to the original sender with a timeout notification instead.
    if (!node->isActive()) {
        NetworkTimeoutMsg* timeout = new NetworkTimeoutMsg();
        timeout->setTimeouttype(msg->getType());
        timeout->setSrcNode(node->getBasicID()->getBasicID());
        timeout->setDstNode(msg->getSrcNode());
        node->simpleSend(timeout, msg->getSrcNode());
    }
}
Risperidone treatment in 12 children with developmental disorders and attention-deficit/hyperactivity disorder. BACKGROUND Risperidone is a novel antipsychotic drug that has been tried in the treatment of several child psychiatric disorders. In an open clinical study, we evaluated the safety and efficacy of risperidone in children with developmental disorder and behavioral problems including attention-deficit/hyperactivity disorder (ADHD). METHOD Twelve patients aged 4 to 14 years who had a DSM-IV-diagnosed developmental disorder and ADHD in addition to other behavioral problems, in particular aggression, were treated with risperidone for a period of up to 2 years with daily doses ranging from 1 to 3 mg. Data were gathered from December 2002 to December 2004. RESULTS A positive clinical response was noted in 9 of the 12 patients within 3 months of study recruitment according to the Clinical Global Impressions-Improvement scale. Risperidone was well tolerated by all 12 patients. The most commonly reported side effect was sedation, which necessitated dosage reduction in 2 patients, but not discontinuation. CONCLUSIONS Our findings suggest that risperidone may be an effective and safe treatment for children and adolescents with developmental disorder and disruptive behaviors.
Selective Peptide Chain Extension at the C-terminus of Aspartic and Glutamic Acids Utilizing N-protected (aminoacyl)benzotriazoles Aspartic and glutamic acids were selectively extended at each of the alternative C-terminals under mild conditions to afford diverse natural and unnatural N-protected dipeptides and tripeptides in yields of 73–96%. The reactions between N-protected (aminoacyl)benzotriazoles and free amino acids or dipeptides proceeded with complete retention of chirality, as supported by parallel experiments involving D-Ala, L-Ala, and DL-Ala in the preparation of dipeptides and tripeptides, monitored by NMR and HPLC analyses.
On Monday, Tim Cook, of Apple, and Marco Rubio, of the Senate, agreed on the answer to a question that Governor Mike Pence, of Indiana, has gone to absurd lengths to dodge. Both said that the idea of Indiana's Religious Freedom Restoration Act was to let certain businesses turn away gay and lesbian customers. The difference is that only one of them—the one with some Apple Watches to sell—thought that the act was a bad idea; the other one, who was speaking on Fox News, didn't. Rubio also said that he'd make a “big announcement” next month, probably having to do with running for President. His view was seconded by another Presidential candidate, Jeb Bush, who, after talking about how the spread of same-sex marriage had caused florists to face crises of conscience, said of the RFRA, "We're going to need this." Need it for what? "FIX THIS NOW" was the headline on the front page of the Indianapolis Star the next morning. The editorial inside said that the law, "whatever its original intent," had created a "deep mess," one that endangered the state's reputation and its economy. The newspaper argued that nothing short of a comprehensive state anti-discrimination law that would protect the rights of gays and lesbians would fix it. Governor Pence has said that he has no interest in such a law, although this morning he conceded that he would support an adjustment to the act—“a clarification, but it's also a fix." He wasn’t specific about the change, and it’s hard to know if it will be enough, particularly since Pence went on to say that the RFRA had been "smeared"—that it had never been discriminatory, and the problem was one of “perception.” The Indiana law is the product of a G.O.P. search for a respectable way to oppose same-sex marriage and to rally the base around it. There are two problems with this plan, however. First, not everyone in the party, even in its most conservative precincts, wants to make gay marriage an issue, even a stealth one—or opposes gay marriage to begin with. As the unhappy reaction in Indiana shows, plenty of Republicans find the anti-marriage position embarrassing, as do some business interests that are normally aligned with the party. Second, the law is not an empty rhetorical device but one that has been made strangely powerful, in ways that haven't yet been fully tested, by the Supreme Court decision last year in Burwell v. Hobby Lobby. That ruling allowed the Christian owners of a chain of craft stores to use the federal version of the RFRA to ignore parts of the Affordable Care Act. Ruth Bader Ginsburg, in her dissent, argued strongly that the majority was turning that RFRA into a protean tool for all sorts of evasions. As Jeffrey Toobin has noted, she was proved right even before the Indiana controversy. Both of those factors have combined to produce real confusion about the Indiana law. Some people are not being straightforward about its implications, whether because they are calculating, mortified, or—in the case of opponents, some of whom have also been unclear about what the law means—alarmed, but it also inhabits novel legal territory, so it is genuinely hard to know what those implications would be. Governor Pence has done much to muddle things even more. On Sunday, on "This Week," George Stephanopoulos asked Pence “a yes-or-no question” about whether "a florist in Indiana can now refuse to serve a gay couple without fear of punishment." He asked half a dozen times, but never got an answer: Pence: This is not about discrimination, this is about ... 
Stephanopoulos: But ... Pence: ... empowering people ... Stephanopoulos: But let me try to pin you ... Pence: ... government overreach here. Stephanopoulos: ... down here though. ... It's just a question, sir. Question, sir. Yes or no? Pence: Well—well, this—there's been shameless rhetoric about my state and about this law and about its intention all over the Internet. People are trying to make it about one particular issue. And now you're doing that as well. Pence strongly suggested that Indiana's law was identical to the federal Religious Freedom Restoration Act and to laws in twenty states that bear the same name. In a more careful formulation, in the Wall Street Journal on Tuesday, Pence said that the Indiana law "simply mirrors federal law that President Bill Clinton signed in 1993”—which is correct only if the mirror is the kind that adds twenty pounds when you look in it. Pence also said that Barack Obama, as an Illinois State Senator, had voted for a Religious Freedom Restoration Act with the "the very same language" as Indiana's law; Politifact rated that claim only "half true." The law has, in fact, been tweaked in ways that seem designed to maximize the effect of the Hobby Lobby decision. In that decision, the Justices found that the "person" with religious beliefs referred to in the federal RFRA could be a closely held for-profit company. (That is what Hobby Lobby is, so the opinion didn’t have to settle the question of whether other kinds of corporations had the same religious freedom.) The Indiana law went farther down that path: it explicitly covers "a partnership, a limited liability company, a corporation, a company, a firm, a society, a joint-stock company, an unincorporated association, or another entity." (It also says that the religious human beings behind a company only have to have "substantial ownership" of it—not even majority control—for the company to be covered by the act.) The Indiana law is also distinct in that the government does not need to be a party to the case, multiplying the potential number of lawsuits it can be used in. And there are, as Politifact and others have noted, contextual differences: other states, including Illinois, have laws against discriminating on the basis of sexual orientation, which prevent some of the potential ill uses of the Indiana RFRA (Existing laws explain why racial discrimination is less of a RFRA problem, as the Indianapolis Star pointed out in a very good survey of the law.) Indiana, again, does not, though some municipalities in the state do. And that was before Hobby Lobby. A RFRA of some kind is not, in the abstract, a terrible idea. In the simplest terms, RFRAs offer someone who violates certain laws a defense: the law is a restriction on my religious practice; the state has another, less intrusive way to accomplish what it wants, and exempting me does not thwart some compelling state interest. The inspiration for RFRAs was the prosecution of Native Americans for rituals involving peyote. It is logical for the government, for example, to find a way to allow a Sikh to keep his turban on even in a place where regulations say that hats must be removed, and should be easy to do without creating a security problem. But the idea of religious practice seems to have morphed to include a vague sense of offense at the lives of others. In Hobby Lobby, it was corporate owners who felt "implicated" by the contraceptive decisions of the employees whose health insurance they helped pay for. 
A Heritage Foundation paper cited a baker who thought that his religious freedom would be infringed upon if he delivered his goods to a same-sex wedding, because, he said, "when I do a cake, I feel like I am participating in the ceremony or the event or the celebration that the cake is for”—as if he were being forced to get gay-married himself. For the moment, Indiana's RFRA is open-ended. Its true reach will not be tested until some florists or bakers—or doctors or teachers, manufacturers or insurers—get to court, and perhaps gain victories that realize the most profound concerns about the law. (Hobby Lobby was once considered a long shot.) That contingency is why its supporters have been able to deny that it offers a license to discriminate. Pence, in his Wall Street Journal piece, took great offense at that notion, and then almost immediately cited a law professor who said that the law would give "valuable guidance to Indiana courts." As Pence put it, "RFRA only provides a mechanism to address claims, not a license for private parties to deny services." Perhaps it is more accurate, then, to call it a mechanism to discriminate, rather than a license. What it certainly will do is give some people more confidence to discriminate. But is that what Indiana really wants? And is that what the G.O.P.’s 2016 candidates should be looking for?
package org.evergreen.verse.lambda;

import com.amazon.ask.Skill;
import com.amazon.ask.SkillStreamHandler;
import com.amazon.ask.Skills;

public class QTSkillStreamHandler extends SkillStreamHandler {

    // Build the Alexa skill by registering all request handlers used by this skill.
    private static Skill getSkill() {
        return Skills.standard()
                .addRequestHandlers(
                        new CancelAndStopIntentHandler(),
                        new QTIntentHandler(),
                        new HelpIntentHandler(),
                        new LaunchRequestHandler(),
                        new SessionEndedRequestHandler(),
                        new FallbackIntentHandler())
                .build();
    }

    // AWS Lambda entry point: delegate request handling to the configured skill.
    public QTSkillStreamHandler() {
        super(getSkill());
    }
}
Massachusetts Provincial Congress

Termination of the provincial assembly

On May 20, 1774, the Parliament of Great Britain passed the Massachusetts Government Act in an attempt to better assert its authority in the often troublesome colony. In addition to annulling the provincial charter of Massachusetts, the act prescribed that, effective August 1, the members of the Massachusetts Governor's Council would no longer be elected by the provincial assembly, and would instead be appointed by the King and hold office at his pleasure. In October 1774, Governor Thomas Gage dissolved the provincial assembly, then meeting in Salem, under the terms of the Government Act. The members of the assembly met anyway, adjourning to Concord and organizing themselves as a Provincial Congress on October 7, 1774. With John Hancock as its president, this extralegal body became the de facto government of Massachusetts outside of Boston. It assumed all powers to rule the province, collect taxes, buy supplies, and raise a militia. Hancock sent Paul Revere to the First Continental Congress with the news that Massachusetts had established the first autonomous government of the Thirteen Colonies (the North Carolina Provincial Congress met earlier than the Massachusetts Congress, although it could be argued that North Carolina's body did not establish an actual government until 1775). Until the advent of the American Revolutionary War the congress frequently moved its meeting site, because a number of its leaders (John Hancock and Samuel Adams among them) were liable to be arrested by British authorities.

War years

After the war began, the provincial congress established a number of committees to manage the rebel activity in the province, starting with the need to supply and arm the nascent Continental Army that besieged Boston after the April 1775 Battles of Lexington and Concord. Pursuant to recommendations of the Second Continental Congress, in 1775 it declared that a quorum of the council (which under the colonial charter acted as governor in the absence of both the governor and lieutenant governor) would be sufficient to make executive decisions. Although the assembly adjourned from time to time, the council remained in continuous session until the new state constitution was introduced in 1780. This arrangement was only marginally satisfactory, and led to calls for a proper constitution as early as 1776. By 1778, these calls had widened, particularly in Berkshire County, where a protest in May of that year prevented the Superior Court from sitting. These calls for change led to a failed proposal for a constitution produced by the congress in 1778, and then a successful constitutional convention that produced a constitution for the state in 1780. The provisional government came to an end with elections in October 1780.

Conventions of the People

In 1774 there were conventions held in the counties of Massachusetts in order to deal with the political crisis at the time. With the dismissal of the Provincial Assembly by the Royal Governor Thomas Gage, the people of Massachusetts with patriot sympathies desired to form their own provisional government. Much like the Massachusetts Convention of Towns which met in Boston in 1768, these conventions were extralegal assemblies designed to address the concerns of the people of the Province of Massachusetts Bay. These meetings drafted the political causes for their convening and other grievances.
These conventions, later styled "Conventions of the People", set the stage for the Provincial Congress and acted as its precursors.

Suffolk Convention

The Suffolk County convention took place in private homes in Dedham and Milton. Joseph Warren served as Chairman. The convention condemned the unconstitutional acts of the royal government (the Massachusetts Government Act) and the presence of the British military in Boston. There were nineteen resolutions passed at the convention. Firstly, the convention acknowledged that King George III was the rightful monarch of the British Realm and that the colonists were the lawful subjects of the Crown. It declared that the rights and liberties afforded to them were hard fought and that it was their duty to defend, maintain, and hand down those rights. The recent acts of the British Parliament were subverting the rights of the people; these included the dissolution of the Provincial Assembly, the blockade of Boston Harbor, the subversion of legal protection, and the presence of British troops in Boston. The rights of the colonists were natural, constitutional, and guaranteed by the charter of the province. The convention stated that the Province was not required to follow or abide by these recent laws because they were the result of a "wicked administration" seeking to "enslave America." Anyone who cooperated with the said government would be acting and collaborating with an enemy force. All officers whose duty it was to make payment to the state ought not to make it to the civil government until there was a constitutional replacement. Any person who had accepted a position in the civil government, not by constitutional means but by "virtue of a mandamus from the king", had affronted the people of Massachusetts and become an enemy of the people of the colony. Therefore, the convention gave until September for all such officials to resign their positions. The convention stated that the fortifications built on Boston Neck were acts of aggression against the people. The commander-in-chief of the British forces had also acted unjustly by seizing gunpowder from the Charlestown magazine, as it was not the property of the government. The convention also condemned an act in Canada which enacted French laws and established the Roman Catholic religion; the convention said that these laws were hostile to the Protestant people of all America, and dangerous to their civil liberties. The convention also declared that all officers should be stripped of their commissions, and that new officers should be selected by their respective towns based on ability. The delegates went on to declare that the colonists would continue to act on the defensive to protect themselves, thereby showing which party was the hostile one. It was further resolved that, as long as those fighting for the rights of their countrymen were being apprehended, officials of the government would be seized and held until the release of such persons. There was also a call to further boycott any and all merchandise that was the result of commerce with Great Britain or any of its crown territories in the West Indies and Ireland. The convention formed a local committee whose purpose was to organize local manufacturers and artisans in order to promote their goods.
The Suffolk Convention called for a Provincial Congress to be convened and declared that such a congress would align with the Continental Congress in Philadelphia until all rights were restored. There was a further call to abstain from any violent acts which might damage private property in the province. The convention further stated that the committees of correspondence should be dispatched in the event of invasion or emergency.

Middlesex Convention

The Middlesex County Convention took place in Concord in August 1774, with James Prescott serving as Chairman and Ebenezer Bridge serving as Clerk. The delegates resolved that the recent acts of the British Parliament were tyrannical and went against any notion of jurisprudence. The delegates reiterated their loyalty to the Crown; however, they maintained their duty to protect the rights that had been granted to them through the Massachusetts Charter. The charter, said the convention, equally bound the colonists and the Crown, and the acts of Parliament had broken that trust. The convention stated that there existed an unequal relationship between the colonists in New England and the government in Great Britain due to the severing of privileges without the colonists having the ability to respond politically. They also stated that, because of this unequal relationship and the subverting of the civil government through the Massachusetts Government Act, there could be no freedom for the people of Massachusetts, as there was no true representative governmental body. This was further exacerbated, the convention claimed, by the removal of a just system of law with fair and independent jury trials. The delegates went on to express their view that this new order was a form of despotism which stripped them of all liberty. The convention called into question the legality of any sworn official serving in the colonial civil government, calling such officials unconstitutional; therefore, no person was obliged to follow their authority. The courts, and all the motions and cases which were products of them, were also deemed to be unconstitutional and therefore not legitimate in any way. The convention declared its support for the establishment of a Provincial Assembly in which delegates from each town would be represented.

Essex Convention

The Essex County convention was held on September 6 and 7 in Ipswich, with Jeremiah Lee serving as Chairman and John Pickering Jr. as Clerk. The delegates resolved that the Parliament of Great Britain had passed acts detrimental to all the colonies in North America, but to the Province of Massachusetts Bay in particular. The convention described these acts and the actions of the local Royal civil government as overzealous, unconstitutional, and threatening to the peace of the colony. The delegates declared that the inalienable rights granted to them as Englishmen were under threat. The convention declared the courts and local officials serving under the Royal administration unlawful and unconstitutional. The delegates called for a local assembly to be convened so as to have their guaranteed rights restored. The delegates declared their loyalty to the Crown; however, they said they would act to ensure that their rights and liberties would not go on being tarnished.

Hampshire Convention

In Northampton on September 22 and 23, 1774, the delegates from the towns of Hampshire County gathered in assembly. Ebenezer Hunt was selected as Clerk and Timothy Danielson as Chairman.
At the end of the convention the delegates had drafted nine resolutions. The delegates first reaffirmed their allegiance to the King as long as he sought to defend the rights guaranteed them by the colonial charter. They went on to declare that the colonial charter was a sacred document and agreement shared between two parties: the King and the people. It was unjust and unlawful, they declared, for one party to withdraw from the charter without the input of the other, and they affirmed that nothing done in the colony could be described as a desire to sever this agreement. Thomas Gage was declared to be an unconstitutional governor of Massachusetts Bay; according to the delegates, he had made himself so by undermining the authority of the constitutionally elected assembly and by enforcing acts of Parliament that were detrimental to the liberty of the inhabitants of Massachusetts Bay. The convention echoed and supported the calls from the Middlesex Convention for the establishment of a Provincial Congress with each town sending delegates. Only when there was a constitutionally beholden assembly could the civil officials throughout Massachusetts Bay be seen as legitimate. Furthering these sentiments, the convention asserted the role of the town meeting in the passage and management of laws. The final resolution of the assembly was to urge all the inhabitants of Hampshire County to "acquaint themselves with the military art" and to furnish all the lawful weaponry at their disposal.

Plymouth Convention

The convention for Plymouth County was held in Plympton, Massachusetts on its first day and in the Town of Plymouth for its second meeting. The dates of the convention were September 26 and 27, with Thomas Lothrop serving as Clerk and James Warren as Chairman.

Whereas, the British administration, instead of cultivating that harmony and affection, which have so long subsisted, to the great and mutual advantage of both Britain and the colonies, have, for a series of years, without provocation, without justice, or good policy, in breach of faith, the laws of gratitude, the natural connections and commercial interest of both countries, been attacking with persevering and unrelenting injustice, the rights of the colonists; and have added, from one time to another, insults to oppressions, till both have become, more especially in this colony, intolerable, and every person who has the feelings of a man, and any sense of the rights of mankind, and the value of our happy constitution, finds it now necessary to exert himself to the utmost of his power, to preserve them... — Plymouth Convention

The convention's first resolution was to declare that all the inhabitants of the American colonies were entitled to their natural rights and were not to be governed by any entity to which they did not consent. The delegates went on to say that their only connection to Great Britain was through their inheritance of the colonial charter. They accused the Parliament of Britain of operating in a severe and unjust way, and of curtailing their civil and religious liberties. The convention expressed that it was the duty of everyone in the Province to oppose entirely, and in no way submit to, this unjust government. The delegates said that the current Royal government was a "barrier of liberty, and security of life and property..." Because these officials were members of an unjust system, by accepting their positions they had marked themselves as enemies of the people they were supposed to be serving and living with.
Therefore, the convention charged, these people who had neglected their own society had lost all virtue. The delegates called for the creation of a Provincial Congress in order to properly represent the people of Massachusetts Bay. They further called for the people of Plymouth County to arm themselves and to become accustomed to military discipline. Declaring that any money paid to the Royal civil government might be misappropriated to causes that could be a detriment to the people, the convention asked all people to stop making any payments until a government with a constitutional foundation existed. The construction of fortifications on Boston Neck and the seizing of the gunpowder in Charlestown were also described as overtly hostile acts. Similar to the Suffolk Convention, the convention in Plymouth said that, due to the violation of the rights of those in Massachusetts Bay, Crown officials should be seized and not returned until all patriots were returned unharmed. The convention also reaffirmed the importance of the town meeting in these towns and declared that local government should go on uninterrupted. Another resolution urged the people to interrupt and impede any attempt by the civil government to conduct business that ran counter to the constitutional order of society, although the convention ended with a plea to avoid any riots or acts that would greatly disturb the Province.

Bristol Convention

The convention in Bristol County took place on September 28 and 29 at the courthouse in Taunton, with Zephaniah Leonard as Chairman. The delegates in Bristol declared that King George III was their rightful monarch and that their relationship to the British Crown went back to the reign of King William III and Queen Mary II, who granted them the Province's colonial charter. According to the colonial charter, the delegates argued, they had the right to organize their own governance and decide their own laws and practices. The convention passed a resolution which stated that they were opposed to disorder and acts of mob violence, but that they would ensure that the rights of the people of Bristol County were not subverted, finally stating that they reserved the right to call their county convention into assembly whenever they saw fit.

Worcester Convention

Assembly of the County of Worcester Committee of Correspondence

Worcester County's committee of correspondence held a convention of its members in August and September 1774 in Worcester. Chosen as Chairman and Clerk were William Young and William Henshaw, respectively. The delegation selected a committee which drafted resolutions for the greater convention to vote on. Much like the other conventions held in Massachusetts Bay, the convention reasserted its loyalty and constitutional connection to the British Crown in the person of King George III. The delegates outlined that their connection to their land was through the Massachusetts Charter, which guaranteed not only their allegiance to the Monarchy but also certain rights and privileges. They went on to add that the destruction of this relationship, i.e., the cancelling of the agreement by one party without the consent of the other, would not only sever the political union between the Province and the Royal Government but also destroy the allegiance of the people to the Crown. Delegates pointed to the acts of Parliament, which they believed violated their chartered agreement, as being hostile.
They added that Parliament had shown hostility not only through political power but also through egregious taxation and the blocking of the port of Boston. As a result of these actions, the assembly called on every American to do what was in their power to oppose these acts. They resolved that Americans, by boycotting British goods, would hurt the people and commerce of Great Britain more than they would the people of the American colonies.

County-wide Convention

At the county meeting the convention elected William Young as its President. The convention voted on and passed all the resolutions which had been drawn up by the assembly of the Committee of Correspondence. The convention then added resolutions of its own. Firstly, all people were to do what they could to disrupt and prevent the sitting of the courts which were a part of the Royal civil government. Instead of relying on the civil government, which they saw as unjust, the delegates resolved that every community ought to organize itself in matters of security and order. They added that these communities were charged with selecting from amongst themselves representatives to represent them at the wider Provincial Congress. For military resolutions, the convention determined that every member of the committee should obtain a full stock of gunpowder and that the towns of the county should be properly armed in the event of an invasion. The delegates went on to say that the local militia should be administered in a manner respectful of the local population and that it should abstain from destroying any property. They added that each town ought to select officers for its militia and that one third of the men in each town from ages 16 to 60 years old be available at a minute's notice. The convention called for printing offices to be set up in order to adequately inform the population as to the resolves and motions being undertaken at the convention and any future assembly.

Second Congress

The Provincial Congress met again in Cambridge on February 1, 1775. John Hancock was unanimously reelected as Congress President and Benjamin Lincoln was reappointed as Clerk, now styled Secretary. Delegates responding to meetings of Committees of Correspondence voted and argued on resolutions concerning the management of supplies and information for the militia and their encampment in and around Boston. Congress also reaffirmed that taxes and revenue were to be paid to the then Receiver-General, Henry Gardner, instead of to any Royal officers who remained in an official post. Samuel Adams, John Adams, John Hancock, Thomas Cushing, and Robert Treat Paine were also chosen to remain as the delegates to the Continental Congress and were to attend its next session in May. In the absence of the President of the Congress (then Hancock, who was charged with the duty of representing Massachusetts in Philadelphia), the Secretary was given the authority to manage and adjourn the Provincial Congress. Congress also reestablished its authority by stating that Committees of Correspondence must adhere to the rulings of the assembly until another constitutional assembly came into being. A new Committee of Safety was chosen by the delegates. The new members were to be John Hancock, Benjamin Church, Joseph Warren, Benjamin White, Richard Devens, Joseph Palmer, Abraham Watson, Azor Orne, John Pigeon, Jabez Fisher, and William Heath. The Committee of Safety was given new powers to determine on its own a Commissariat and its members.
The Committee was also given full authority over the militia and all business pertaining to its upkeep and maintenance.

FRIENDS AND FELLOW SUFFERERS: When a people entitled to that freedom, which your ancestors have nobly preserved, as the richest inheritance of their children, are invaded by the hand of oppression, and trampled on by the merciless feet of tyranny, resistance is so far from being criminal, that it becomes the christian and social duty of each individual. — Massachusetts Provincial Congress, To the Inhabitants of the Massachusetts Bay. 1775.

With the escalating military conflict with Great Britain, the Congress adopted measures to safeguard and preserve supplies in the event of the confiscation of materials by Royal authorities or further hardship brought on by war. This included the stockpiling of straw as well as linen. The delegates further resolved that any person who did business with the Royal Army would mark themselves as an enemy of the people of Massachusetts Bay. The Congress dealt with the issue of securing funds for its delegates and with estimating the commercial and economic cost that had been incurred due to the Boston Port Bill. Delegates then decided that an agent ought to be sent to the Province of Quebec in order to determine what the political atmosphere was and where public opinion regarding the Intolerable Acts resided. Congress also sent correspondence to the Board of Selectmen of each town to organize and train the militia due to the immediate military threat from Great Britain. Additionally, the Congress prioritized the manufacture and purchase of as many weapons as were needed for defense. A committee was then formed in order to better communicate with the other revolutionary New England governments, as well as the colonial governments in Canada. March 16 was designated by Congress as a public day of fasting and prayer, both in respect of the current political crisis and as a continuation of the custom of their forebears.

Committee of Safety

The Committee of Safety was the parallel military and executive organization of the Massachusetts Provincial Congress. While the Committee began as a legislative committee existing under the authority of a standing committee of delegates and the Provincial Congress, it at one point evolved into the de facto executive of the provisional state as well as the commander-in-chief of Massachusetts' armed forces (the Massachusetts Militia and the Massachusetts Naval Militia). First organized in the first congress of the provisional government in 1774, the committee was at first a technocratic organization tasked with oversight of the military situation in Massachusetts Bay; with the meetings of the second and third congresses, the committee was given increased power and authority to govern Massachusetts while the Congress was not in session. The Committee of Safety was given the authority to name its own members of the Commissariat and to procure and administer all military supplies in the province. With the conflict with the Kingdom of Great Britain expanding and the military of Massachusetts existing as a militia that had to be ready at a moment's notice, the Congress saw a need for a permanent committee to oversee martial affairs. The Congress met only occasionally, and with the situation so fluid it was impractical to have the militia answer to the Congress alone. The first Congress in 1774 vested supreme authority in the legislature.
The executive was to be an Executive Standing Committee that served jointly with the Massachusetts Governor's Council. The Committee of Safety received orders from the congress and was tasked with carrying them out, as well as with maintaining reports on the military situation in Massachusetts Bay for the delegates of Congress. The Commissariat was at first separate and distinct from the Committee of Safety, and another committee was also formed to deal with the militia and the Selectmen of the towns of Massachusetts Bay. This Committee had nine members, three for Boston and five for the country. The Second Congress expanded the powers of the Committee. When delegates gathered in 1775, the Committee of Safety was given more authority and expanded powers. The Committee would be selected from delegates at the congress; however, it could now select its own commissaries and was given control of the militia. This meant the Committee had the authority to muster the militia whenever it saw fit, determine the number of men it saw as necessary, and name the officers it desired for commission. All matters of high importance were still subject to Congressional approval in order to make sure the Committee did not have too much independent power. The Council of War was created by the Congress, while it was in session, to serve as the "oversight committee" of the group as well as to give it official orders. Fearful of overstepping its own authority, the Committee made constant recommendations to the Provincial Congress in matters it believed were outside its control. The Third Congress stripped many of the powers given to the Committee by the Second Congress. The Committee of Safety was no longer to administer the military alone and instead was subject to the authority of the Commander-in-Chief of the Continental Forces. Further, its powers were limited to oversight of provisions and goods for the military, caring for prisoners of war and Tory prisoners, caring for the poor, and administering concerns of public health.
Supramolecular oligourethane gel as a highly selective fluorescent on–off–on sensor for ions

Stimuli-responsive supramolecular gels (SRSGs) are an important class of smart materials. It is of practical importance to develop an SRSG which can both detect and remove toxic metal ions. We have designed and synthesized an aggregation-induced emission (AIE)-active oligourethane (OU) gelator which self-assembles into a supramolecular gel (OUG) through hydrogen-bonding, π–π stacking and van der Waals interactions. By taking advantage of the weak and dynamic nature of these non-covalent bonds, OUG shows stimuli-response to multiple factors. Importantly, OUG has the capacity for real-time detection and high selectivity for Fe³⁺, HSO₄⁻ and F⁻. The lowest detection limits are in the range of 5.89 × 10⁻⁹ to 8.17 × 10⁻⁸ M, indicating high sensitivity. More importantly, OUG is shown to adsorb and separate Fe³⁺ from aqueous solution, with an absorbing rate of up to 97.5%. A simple writing board was fabricated, which could be written on repeatedly and reused. OUG acts as a reversible and recyclable on–off–on fluorescence sensor via competitive cation–π and cation–anion interactions. OUG has great potential as an environmentally sustainable probe for ions.

Introduction

Stimuli-responsive supramolecular gels (SRSGs) have the ability to respond to a chemical substance, 1 light, 2 heat, 3 pH 4 or pressure. 5 They have been applied in chemical sensors, 6 displays, 7 drug deliveries 8 and other fields. 9 Responsive behavior can be achieved by a gel–sol state transition or by changing the luminescence. 7,10 The latter response works by changing the gel's fluorescence intensity or color, and can be free from the influence of temperature, 2 pH, 11 an oxidizing agent, 8 and other factors. 12 Therefore luminescence detection has considerably higher sensitivity and more reliable real-time response. Traditional conjugated gelators usually suffer from aggregation-caused quenching (ACQ), which sharply weakens the emission behavior in aggregation or solid states, thereby limiting their applications. 16 The emergence of polymers/oligomers with aggregation-induced emission (AIE) properties has been a breakthrough in the field. 17,18 In addition to their excellent emission characteristics, AIE-active supramolecular gels show strong absorption activity and synergistic effects because of their large contact area with analytes. 19 Recently, our group has explored AIE-active poly/oligourethane-based unconventional luminophores, which are without typical polycyclic π-conjugated units. These materials show obvious advantages like environmental friendliness, excellent hydrophilicity, chain flexibility, ease of synthesis and structural versatility compared with traditional organic luminescent materials. Fe³⁺ is an indispensable element in the process of oxygen uptake and metabolism. 23 However, an excess of Fe³⁺ might cause pathological diseases like cancer and organ dysfunction. 24 F⁻ and HSO₄⁻ also play essential roles in human biological processes, 25,26 although undue fluoride may cause kidney problems and dental and skeletal fluorosis. 27 HSO₄⁻ can produce poisonous SO₄²⁻ under acidic conditions, which will stimulate the skin and eyes and can even cause respiratory paralysis. 28 Thus, methods to efficiently detect these ions have received extensive attention.
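Detection limits such as those quoted in the abstract are commonly estimated with the 3σ/S convention, i.e., three times the standard deviation of repeated blank measurements divided by the slope of the calibration curve. The short Python sketch below illustrates that calculation on made-up fluorescence readings; the numbers are illustrative only and are not the calibration data reported for OUG later in the paper.

import numpy as np

def lod_3sigma(blank_signals, conc, signal):
    """Estimate a limit of detection as LOD = 3 * sigma_blank / |slope|,
    where sigma_blank is the standard deviation of repeated blank readings
    and the slope comes from a linear calibration fit of signal vs. concentration."""
    sigma = np.std(blank_signals, ddof=1)
    slope, intercept = np.polyfit(conc, signal, 1)
    return 3.0 * sigma / abs(slope)

# Made-up calibration: emission intensity at a fixed wavelength vs. added Fe3+ (M)
blank = [1000.2, 999.5, 1000.9, 1000.4, 999.8]                  # repeated blank readings
conc = np.array([0.0, 2e-8, 4e-8, 6e-8, 8e-8, 1e-7])            # added Fe3+ (M)
signal = np.array([1000.0, 992.1, 984.3, 976.0, 968.2, 960.1])  # quenched emission
print(f"LOD = {lod_3sigma(blank, conc, signal):.2e} M")         # on the order of 1e-9 M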
The established detection techniques, such as inductively coupled plasma spectroscopy, 29 high performance liquid chromatography (HPLC) 30 and electrochemical methods, 31 all require tedious sample preparations, sophisticated instruments and professional operators. However, fluorescent sensor molecules, which convert and amplify the signals into a visible and easily recognized fluorescent output, offer a more significant practical method. 6,32 Herein, we report an AIE-active supramolecular oligourethane gel (OUG) and demonstrate its usage as a specific Fe³⁺ sensor in an aqueous environment. The material is based on the following design criteria: (i) inserting benzophenone into an oligourethane (OU) backbone provides C=O units with prominent hydrogen-bonding sites for self-assembly, and formation of oxygen clusters, which could enhance fluorescent emission. 33 (ii) Inserting linear 1,6-diisocyanatohexane offers strong van der Waals interactions among alkyl chains, limiting internal rotation of the molecular chains, thereby blocking the non-radiative pathways and favoring AIE. Taking advantage of the rich hydrogen bond acceptors/donors (C=O/N–H) along the oligourethane skeleton, we introduced solvents with hydrogen-bonding acceptor units (C=O or S=O) as external crosslinking agents, to self-assemble a supramolecular oligourethane gel (OUG) relying on multiple hydrogen bonds.

Synthesis and characterization

The OU was synthesized through a facile procedure, as shown in Scheme S1 (ESI), by the reaction of 4,4′-dihydroxybenzophenone, hexamethylene diisocyanate and DABCO in anhydrous tetrahydrofuran and end-capping with polyethylene glycol monomethyl ether to give a viscous solution. The product was purified by a counter precipitation method. ¹H NMR and FTIR characterization data are given in Fig. 1a and in the ESI, confirming the structure of OU. The Mₙ value of 1814 g mol⁻¹, calculated from the ¹H NMR data, established that OU should be classified as an oligomer. 42 The FTIR spectra (Fig. S1, ESI) showed absorbance bands at 3323 cm⁻¹ and 1706 cm⁻¹, assigned to stretching vibrations of N–H and C=O, indicating the formation of amide bonds. Absorbance bands at 2936 cm⁻¹ and 2860 cm⁻¹ correspond to ν(–CH₂–) stretching vibrations, and the band at 1163 cm⁻¹ corresponds to ν(C–O–C) stretching vibrations. The UV-Vis absorption spectrum of OU in the solid state (Fig. S2, ESI) showed a major peak at 277 nm from a π–π* transition of the aromatic rings. 43

Self-assembly gelation

OU spontaneously self-assembles in certain solvents (notably dimethyl formamide and dimethylsulfoxide), transforming into a supramolecular gel (Table S1, ESI). The lowest critical gelation concentration (CGC) of OU is 4% (w/v, 10 mg mL⁻¹ = 1%), and the corresponding gel–sol transition temperature (Tgel) is 85–87 °C. In order to gain an insight into the self-assembly mechanism, ¹H NMR, FTIR, XRD and urea addition experiments were conducted. ¹H NMR spectra were recorded for different concentrations of OU in DMSO-d₆ (Fig. 1b). The Ha and Hf proton signals are shifted ca. 0.04 and 0.03 ppm upfield compared to pure OUG upon adding 25 mM Fe³⁺. Meanwhile, the signals of protons Hd (the NH groups) shifted slightly downfield by ca. 0.01 ppm. 44 These results confirmed the H-bonding interactions between amide groups and the van der Waals interactions between alkyl chains. Comparing the FTIR data before and after gelation (Fig.
S1, ESI), the N-H stretching absorbance bands of OUG are broader and move to significantly higher wavenumbers (3323 to 3361 cm⁻¹) in the solid state compared to the gel state: these data suggest hydrogen bonds play a critical role in the gelation process. 45,46 It is well known that adding urea, which has a high propensity to form hydrogen bonds, can disrupt existing hydrogen bonds in a supramolecular structure. 47,48 Accordingly, adding urea (10 equiv.) into OUG and heating the gel led to the formation of a sol. It was observed that after adding urea, the sol did not revert back to a gel, even when the OUG-urea mixture was cooled at 15 °C for several days, indicating that the gelation is driven by hydrogen bonds among OU molecular chains (Fig. 2a). Besides, the X-ray diffraction (XRD) peaks of OUG at 2θ = 20.54° and 23.22°, corresponding to d-spacings of 4.32 Å and 3.83 Å, respectively, also indicated the presence of π-π stacking interactions (Fig. S11b, ESI), further promoting the self-assembly behavior. OUG showed weak fluorescence in the sol state; however, after transforming to the gel state, the emission intensity of OUG at 439 nm increased 6-fold (Fig. S3, ESI), indicating that OU is an AIE-active gelator. 49 Stimuli-responsive behaviors OUG exhibits a high selectivity to Fe³⁺ over other metal ions. By monitoring the change of fluorescence, we investigated the recognition characteristics of OUG towards metal ions. Using nitrate salts as the cation sources, an aqueous metal ion solution of Na⁺, Ca²⁺, Co²⁺, Cu²⁺, Mn²⁺, Ni²⁺, Cr³⁺, La³⁺, Fe³⁺, Sr²⁺, Ce³⁺, Ag⁺, Al³⁺, Mg²⁺, Cd²⁺, Pb²⁺ or Fe²⁺ (c = 0.2 M) was added to the OUG to generate the corresponding metal-gels. As shown in Fig. 3a and c, initially the OUG had a strong blue fluorescence emission. When the different metal ions were added, only Fe³⁺ quenched the fluorescence of OUG. Thus, the OUG could effectively and selectively detect Fe³⁺. To further evaluate the sensitivity of OUG for Fe³⁺, the fluorescence behavior of OUG was monitored by continuous titration with Fe³⁺. As shown in Fig. S5a (ESI), with increasing addition of Fe³⁺ (0-1.1 equiv.), the emission intensity of the corresponding metal-gel (OUFeG) at 439 nm gradually decreased. The limit of detection (LOD) of OUG towards Fe³⁺ was calculated to be 5.89 × 10⁻⁹ M based on the 3σ/S method 54 (Fig. S4 and S5a, ESI), confirming the high sensitivity of OUG as a sensor for Fe³⁺ compared with other reported sensor systems (Table S2, ESI). The high selectivity of OUG to Fe³⁺ is attributed to two reasons: firstly, unpaired electrons in Fe³⁺ cause a paramagnetic effect, prompting energy dissipation of excited states through non-radiative pathways. 55 Secondly, the high ionic strength of Fe³⁺ could easily induce the transfer of π-electrons from the urethane backbone to Fe³⁺ through cation-π interactions. 56 Both of these effects will cause the fluorescence quenching of OUG. A simple regeneration treatment verified the recyclability of OUG. An anion solution (F⁻ or HSO₄⁻, 2 × 10⁻⁵ mol L⁻¹; 10 mL) was added into the metal-gel OUFeG; the mixture was stirred for 5 min and centrifuged, and the OUG was recycled for detecting ions again. As shown in Fig. S10 (ESI), after five consecutive cycles the intensity of the OUG signal is essentially unchanged, indicating the excellent recyclability and reversibility of the OUG for the detection of Fe³⁺ and HSO₄⁻ or F⁻. The spectral changes observed on adding Fe³⁺ (Fig.
4a) indicated that the OUG combined with Fe³⁺ via cation-π interactions between the urethane groups and Fe³⁺. 56,61 As shown in Fig. 4b (signal at ca. 2.95 ppm), the cation-anion interactions between Fe³⁺ and F⁻ or HSO₄⁻ could release the π-electrons of the urethane groups, thus recovering the fluorescence of OUG. Mechanism of cation-anion sensing In the FTIR experiments (Fig. S11a, ESI), when Fe³⁺ was added into OUG to form OUFeG, the stretching absorbance bands of N-H, C=O and C-O-C shifted from 3361 cm⁻¹, 1708 cm⁻¹ and 1161 cm⁻¹ to 3480 cm⁻¹, 1673 cm⁻¹ and 1158 cm⁻¹, respectively, which further confirmed that Fe³⁺ interacts with the π-electrons of the urethane groups, thus influencing the H-bonds between the amide groups. 6,57 After the addition of F⁻ or HSO₄⁻ into the OUFeG, the C=O, N-H and C-O-C bands all reverted to their initial positions (Fig. S11a, ESI). These observations suggested that F⁻ and HSO₄⁻ competitively bound to Fe³⁺ rather than to OUG. Moreover, the XRD peaks of OUG moved upon adding Fe³⁺ into OUG, and recovered when F⁻ or HSO₄⁻ was added into OUFeG (Fig. S11b, ESI). To gain further insight into the mechanism of cation-anion sensing, SEM studies were carried out, as shown in Fig. 5. Fig. 5a demonstrates that gel OUG shows a lamellar stacking structure with a smooth surface. This structure was converted into a honeycomb structure in the metal-gel OUFeG (Fig. 5b), while in the gels OUFeG + HSO₄⁻ and OUFeG + F⁻ the images again showed a smooth lamellar stacking structure (Fig. 5c and d). Such morphological change is attributed to the cation-π interactions between OUG and Fe³⁺, breaking the hydrogen bonding between the OUG chains and modifying the supramolecular structure. 57,61 After adding F⁻ or HSO₄⁻ into OUFeG, the π-electrons of OUG were released, the hydrogen bonds were rebuilt and the morphology was recovered. These experimental results indicated that the fluorescence of OUG can be reversibly switched by Fe³⁺ (off) and then by F⁻ or HSO₄⁻ (on), through repeated competition between cation-π and cation-anion interactions (Fig. 2b). Application in the rapid removal of Fe³⁺ The development of new sorbents for the sensing and extraction of metal ions from environmental and biological samples is of current importance. 62,63 The performance of OUG in effectively removing Fe³⁺ from aqueous solution was analyzed by atomic absorption spectrometry (AAS). Application as a writing display material Based on the above-mentioned "on-off-on" properties, the OUG has great potential as a rewritable fluorescent display material. As a proof of concept, a rewritable board was constructed (Fig. 6). The detailed steps are described as follows: (i) OUG sol (10%) was poured onto a clean quartz plate surface and dried under ambient conditions to give a film emitting strong blue fluorescence under ultraviolet radiation (365 nm). (ii) On writing the symbol "Fe" on the film with a brush dipped in aqueous Fe³⁺ solution (0.3 M), a dark "Fe" image was clearly displayed due to the fluorescence quenching effect of Fe³⁺ on OUG. (iii) The whole OUG film was transformed into a non-fluorescent display board by brushing with Fe³⁺ solution. (iv) Two new letters, "S" and "F", could be written again with the same brushing method using HSO₄⁻ and F⁻ solutions (0.3 M), respectively. Visually, the letters emitted blue fluorescence under a UV lamp. Combining these practically very simple processes with the excellent recyclability of OUG (discussed above; Fig.
S10, ESI) means that OUG has promising applications as a fluorescent writing display material. Conclusion In conclusion, a novel supramolecular AIE gel, OUG, was designed and synthesized by a straightforward "one-pot" procedure. The dynamic and reversible non-covalent interactions endow OUG with the distinct advantages of a reversible and highly sensitive response to Fe³⁺, HSO₄⁻ and F⁻, acting as an "on-off-on" fluorescent sensor for these cationic and anionic species. Importantly, OUG can adsorb up to 97.5% of the Fe³⁺ from an aqueous environment. This rapid, simple, low-cost and highly sensitive material has great potential for practical applications in intelligent sensing, handling heavy metal ion pollution and environmental remediation. Conflicts of interest There are no conflicts to declare.
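The detection limits reported above follow the 3σ/S method cited in the sensing section. As a sketch of that standard relation (the symbol definitions below are conventional assumptions, not taken from this paper): σ is the standard deviation of repeated blank or lowest-concentration fluorescence measurements and S is the slope of the fluorescence-intensity-versus-concentration calibration curve, giving

\[ \mathrm{LOD} = \frac{3\sigma}{S}. \]

For example, a hypothetical σ ≈ 1.96 × 10⁻² a.u. with S ≈ 10⁷ a.u. M⁻¹ would give LOD ≈ 5.9 × 10⁻⁹ M, of the same order as the value quoted for Fe³⁺.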
Surface cleaning with liquid detergents poses an ongoing problem for consumers. Consumers utilizing liquid detergents as a light-duty liquid dishwashing detergent composition or as a hard surface cleaning composition frequently find surface imperfections such as soil residues, streaks, film and/or spots after washing. Besides, consumers prefer cleaning compositions to be dried faster after the cleaning process. Hence, there remains a need for liquid cleaning compositions which not only clean hard surfaces, but also deliver improved shine and fast-drying. At the same time, consumers using detergents in automatic dishwashing frequently find that items placed in a dishwasher to be washed are stained with different kinds of stains which are particularly difficult to remove, especially when it comes to tea and coffee stains. The problem is more acute when the detergent is phosphate free. It is an object of the present invention to provide polymers which are suitable as an additive to cleaning compositions for hard surfaces and which deliver improved shine and fast-drying as well as an improved stain removal from hard surfaces. The use of polyalkyleneimines in cleaning compositions is known. Traditionally, polyalkyleneimines have been used in laundry detergents to provide soil suspension benefits. Polyethyleneimines have also been used in hard surface cleaning compositions to provide different benefits. WO2011/051646 discloses a method of treating hard surfaces to improve soil resistance, particularly resistance to oily soils, which comprises applying to the surface a composition comprising a quaternised polyamine which has been block propoxylated and then block ethoxylated. WO2010/020765 discloses the use of a composition comprising a polyalkyleneimine and/or a salt or derivative thereof for the prevention of corrosion of non-metallic inorganic items during a washing or rinsing process. US2007/0275868A1 reads on a liquid detergent composition comprising an alkoxylated polyethylenimine with one or two alkoxylation modification per nitrogen atom. The degree of permanent quaternization may be from 0% to 30% of the polyethyleneimine backbone nitrogen atoms. WO2006/108856 reads on an amphiphilic water-soluble alkoxylated polyalkyleneimines comprising ethylenoxy and propylenoxy units and having a degree of quaternization of up to 50% for use as additives for laundry detergents and cleaning compositions. WO2009/060059 describes amphiphilic water-soluble alkoxylated polyalkyleneimines comprising ethylenoxy and propylenoxy units for use as additives for laundry detergents.
// RegisterPluginIfNotExists will register a NoSQL plugin only if a plugin with same name has not already been registered func RegisterPluginIfNotExists(pluginName string, plugin nosqlplugin.Plugin) { if _, ok := supportedPlugins[pluginName]; !ok { supportedPlugins[pluginName] = plugin } }
Help Provide Meals for Thousands of People! Free Meals Program Volunteering on Memorial Day. This volunteer opportunity is part of a program providing almost 1,000,000 nutritious meals in the San Francisco Tenderloin every year. The morning of Memorial Day is an especially valuable time to come and volunteer, and one that can work well for people who have that day free from work. Special Conditions: Persons with disabilities are also encouraged to come. Please let us know beforehand if they need any accommodations.
/** * Plot options for {@link ChartType#TIMELINE} charts. */ public class PlotOptionsTimeline extends AbstractPlotOptions { private Boolean allowPointSelect; private Boolean animation; private String className; private Boolean clip; private Color color; private Boolean colorByPoint; private Number colorIndex; private Cursor cursor; private Boolean crisp; private DataLabels dataLabels; private String description; private Boolean enableMouseTracking; private Boolean exposeElementToA11y; private Boolean ignoreHiddenPoint; private ArrayList<String> keys; private String legendType; private String linecap; private String linkedTo; private Marker marker; private Number opacity; private String _fn_pointDescriptionFormatter; private Boolean selected; private Boolean shadow; private Boolean showCheckbox; private Boolean showInLegend; private Boolean skipKeyboardNavigation; private States states; private Boolean stickyTracking; private SeriesTooltip tooltip; private Boolean visible; private Number gapSize; private String gapUnit; private Number legendIndex; private PlotOptionsSeries navigatorOptions; private Number pointRange; private Boolean showInNavigator; public PlotOptionsTimeline() { } @Override public ChartType getChartType() { return ChartType.TIMELINE; } /** * @see #setAllowPointSelect(Boolean) */ public Boolean getAllowPointSelect() { return allowPointSelect; } /** * Allow this series' points to be selected by clicking on the markers, bars * or pie slices. * <p> * Defaults to: false */ public void setAllowPointSelect(Boolean allowPointSelect) { this.allowPointSelect = allowPointSelect; } /** * @see #setAnimation(Boolean) */ public Boolean getAnimation() { return animation; } /** * Enable or disable the initial animation when a series is displayed. * Please note that this option only applies to the initial animation of the * series itself. For other animations, see * {@link ChartModel#setAnimation(Boolean)} */ public void setAnimation(Boolean animation) { this.animation = animation; } /** * @see #setClassName(String) */ public String getClassName() { return className; } /** * A class name to apply to the series' graphical elements. */ public void setClassName(String className) { this.className = className; } /** * @see #setClip(Boolean) */ public Boolean getClip() { return clip; } /** * Disable this option to allow series rendering in the whole plotting area. * Note: Clipping should be always enabled when chart.zoomType is set * <p> * Defaults to <code>true</code>. */ public void setClip(Boolean clip) { this.clip = clip; } /** * @see #setColor(Color) */ public Color getColor() { return color; } /** * <p> * The main color or the series. In line type series it applies to the line * and the point markers unless otherwise specified. In bar type series it * applies to the bars unless a color is specified per point. The default * value is pulled from the <code>options.colors</code> array. * </p> * * <p> * In <a href= * "http://www.highcharts.com/docs/chart-design-and-style/style-by-css" * >styled mode</a>, the color can be defined by the * <a href="#plotOptions.series.colorIndex">colorIndex</a> option. Also, the * series color can be set with the <code>.highcharts-series</code>, * <code>.highcharts-color-{n}</code>, * <code>.highcharts-{type}-series</code> or * <code>.highcharts-series-{n}</code> class, or individual classes given by * the <code>className</code> option. 
* </p> */ public void setColor(Color color) { this.color = color; } /** * @see #setColorByPoint(Boolean) */ public Boolean getColorByPoint() { return colorByPoint; } /** * Defaults to <code>true</code> */ public void setColorByPoint(Boolean colorByPoint) { this.colorByPoint = colorByPoint; } /** * @see #setColorIndex(Number) */ public Number getColorIndex() { return colorIndex; } /** * <a href= * "http://www.highcharts.com/docs/chart-design-and-style/style-by-css" * >Styled mode</a> only. A specific color index to use for the series, so * its graphic representations are given the class name * <code>highcharts-color-{n}</code>. */ public void setColorIndex(Number colorIndex) { this.colorIndex = colorIndex; } /** * @see #setCursor(Cursor) */ public Cursor getCursor() { return cursor; } /** * You can set the cursor to "pointer" if you have click events attached to * the series, to signal to the user that the points and lines can be * clicked. */ public void setCursor(Cursor cursor) { this.cursor = cursor; } /** * @see #setCrisp(Boolean) */ public Boolean getCrisp() { return crisp; } /** * When true, each point or column edge is rounded to its nearest pixel * in order to render sharp on screen. * In some cases, when there are a lot of densely packed columns, * this leads to visible difference in column widths or distance between columns. * In these cases, setting crisp to false may look better, * even though each column is rendered blurry. *<p> * Defaults to <code>true</code>. */ public void setCrisp(Boolean crisp) { this.crisp = crisp; } /** * @see #setDataLabels(DataLabels) */ public DataLabels getDataLabels() { if (dataLabels == null) { dataLabels = new DataLabels(); } return dataLabels; } /** * <p> * Options for the series data labels, appearing next to each data point. * </p> * * <p> * In <a href= * "http://www.highcharts.com/docs/chart-design-and-style/style-by-css" * >styled mode</a>, the data labels can be styled wtih the * <code>.highcharts-data-label-box</code> and * <code>.highcharts-data-label</code> class names (<a href= * "http://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/highcharts/css/series-datalabels" * >see example</a>). * </p> */ public void setDataLabels(DataLabels dataLabels) { this.dataLabels = dataLabels; } /** * @see #setDescription(String) */ public String getDescription() { return description; } /** * <p> * <i>Requires Accessibility module</i> * </p> * <p> * A description of the series to add to the screen reader information about * the series. * </p> * <p> * Defaults to: undefined */ public void setDescription(String description) { this.description = description; } /** * @see #setEnableMouseTracking(Boolean) */ public Boolean getEnableMouseTracking() { return enableMouseTracking; } /** * Enable or disable the mouse tracking for a specific series. This includes * point tooltips and click events on graphs and points. For large datasets * it improves performance. * <p> * Defaults to: true */ public void setEnableMouseTracking(Boolean enableMouseTracking) { this.enableMouseTracking = enableMouseTracking; } /** * @see #setExposeElementToA11y(Boolean) */ public Boolean getExposeElementToA11y() { return exposeElementToA11y; } /** * <p> * By default, series are exposed to screen readers as regions. By enabling * this option, the series element itself will be exposed in the same way as * the data points. This is useful if the series is not used as a grouping * entity in the chart, but you still want to attach a description to the * series. 
* </p> * <p> * Requires the Accessibility module. * </p> * <p> * Defaults to: undefined */ public void setExposeElementToA11y(Boolean exposeElementToA11y) { this.exposeElementToA11y = exposeElementToA11y; } /** * @see #setIgnoreHiddenPoint(Boolean) */ public Boolean getIgnoreHiddenPoint() { return ignoreHiddenPoint; } /** * Defaults to <code>true</code> */ public void setIgnoreHiddenPoint(Boolean ignoreHiddenPoint) { this.ignoreHiddenPoint = ignoreHiddenPoint; } /** * @see #setKeys(String...) */ public String[] getKeys() { if (keys == null) { return new String[] {}; } String[] arr = new String[keys.size()]; keys.toArray(arr); return arr; } /** * An array specifying which option maps to which key in the data point * array. This makes it convenient to work with unstructured data arrays * from different sources. */ public void setKeys(String... keys) { this.keys = new ArrayList<String>(Arrays.asList(keys)); } /** * Adds key to the keys array * * @param key * to add * @see #setKeys(String...) */ public void addKey(String key) { if (this.keys == null) { this.keys = new ArrayList<String>(); } this.keys.add(key); } /** * Removes first occurrence of key in keys array * * @param key * to remove * @see #setKeys(String...) */ public void removeKey(String key) { this.keys.remove(key); } /** * @see #setLegendType(String) */ public String getLegendType() { return legendType; } /** * Defaults to <code>point</>. */ public void setLegendType(String legendType) { this.legendType = legendType; } /** * @see #setLinecap(String) */ public String getLinecap() { return linecap; } /** * The line cap used for line ends and line joins on the graph. * <p> * Defaults to: round */ public void setLinecap(String linecap) { this.linecap = linecap; } /** * @see #setLinkedTo(String) */ public String getLinkedTo() { return linkedTo; } /** * The <a href="#series.id">id</a> of another series to link to. * Additionally, the value can be ":previous" to link to the previous * series. When two series are linked, only the first one appears in the * legend. Toggling the visibility of this also toggles the linked series. */ public void setLinkedTo(String linkedTo) { this.linkedTo = linkedTo; } /** * @see #setMarker(Marker) */ public Marker getMarker() { if (marker == null) { marker = new Marker(); } return marker; } /** * <p> * Options for the point markers of line-like series. Properties like * <code>fillColor</code>, <code>lineColor</code> and <code>lineWidth</code> * define the visual appearance of the markers. Other series types, like * column series, don't have markers, but have visual options on the series * level instead. * </p> * * <p> * In <a href= * "http://www.highcharts.com/docs/chart-design-and-style/style-by-css" * >styled mode</a>, the markers can be styled with the * <code>.highcharts-point</code>, <code>.highcharts-point-hover</code> and * <code>.highcharts-point-select</code> class names. * </p> */ public void setMarker(Marker marker) { this.marker = marker; } /** * @see #setOpacity(Number) */ public Number getOpacity() { return opacity; } /** * Opacity of a series parts: line, fill (e.g. area) and dataLabels. * <p> * Defaults to <code>1</code>. 
*/ public void setOpacity(Number opacity) { this.opacity = opacity; } public String getPointDescriptionFormatter() { return _fn_pointDescriptionFormatter; } public void setPointDescriptionFormatter( String _fn_pointDescriptionFormatter) { this._fn_pointDescriptionFormatter = _fn_pointDescriptionFormatter; } /** * @see #setSelected(Boolean) */ public Boolean getSelected() { return selected; } /** * Whether to select the series initially. If <code>showCheckbox</code> is * true, the checkbox next to the series name will be checked for a selected * series. * <p> * Defaults to: false */ public void setSelected(Boolean selected) { this.selected = selected; } /** * @see #setShadow(Boolean) */ public Boolean getShadow() { return shadow; } /** * Whether to apply a drop shadow to the graph line. Since 2.3 the shadow * can be an object configuration containing <code>color</code>, * <code>offsetX</code>, <code>offsetY</code>, <code>opacity</code> and * <code>width</code>. * <p> * Defaults to: false */ public void setShadow(Boolean shadow) { this.shadow = shadow; } /** * @see #setShowCheckbox(Boolean) */ public Boolean getShowCheckbox() { return showCheckbox; } /** * If true, a checkbox is displayed next to the legend item to allow * selecting the series. The state of the checkbox is determined by the * <code>selected</code> option. * <p> * Defaults to: false */ public void setShowCheckbox(Boolean showCheckbox) { this.showCheckbox = showCheckbox; } /** * @see #setShowInLegend(Boolean) */ public Boolean getShowInLegend() { return showInLegend; } /** * Whether to display this particular series or series type in the legend. * The default value is <code>true</code> for standalone series, * <code>false</code> for linked series. * <p> * Defaults to: true */ public void setShowInLegend(Boolean showInLegend) { this.showInLegend = showInLegend; } /** * @see #setSkipKeyboardNavigation(Boolean) */ public Boolean getSkipKeyboardNavigation() { return skipKeyboardNavigation; } /** * If set to <code>True</code>, the accessibility module will skip past the * points in this series for keyboard navigation. */ public void setSkipKeyboardNavigation(Boolean skipKeyboardNavigation) { this.skipKeyboardNavigation = skipKeyboardNavigation; } /** * @see #setStates(States) */ public States getStates() { if (states == null) { states = new States(); } return states; } /** * A wrapper object for all the series options in specific states. */ public void setStates(States states) { this.states = states; } /** * @see #setStickyTracking(Boolean) */ public Boolean getStickyTracking() { return stickyTracking; } /** * Sticky tracking of mouse events. When true, the <code>mouseOut</code> * event on a series isn't triggered until the mouse moves over another * series, or out of the plot area. When false, the <code>mouseOut</code> * event on a series is triggered when the mouse leaves the area around the * series' graph or markers. This also implies the tooltip. When * <code>stickyTracking</code> is false and <code>tooltip.shared</code> is * false, the tooltip will be hidden when moving the mouse between series. * Defaults to true for line and area type series, but to false for columns, * pies etc. 
* <p> * Defaults to: true */ public void setStickyTracking(Boolean stickyTracking) { this.stickyTracking = stickyTracking; } /** * @see #setTooltip(SeriesTooltip) */ public SeriesTooltip getTooltip() { if (tooltip == null) { tooltip = new SeriesTooltip(); } return tooltip; } /** * A configuration object for the tooltip rendering of each single series. * Properties are inherited from <a href="#tooltip">tooltip</a>, but only * the following properties can be defined on a series level. */ public void setTooltip(SeriesTooltip tooltip) { this.tooltip = tooltip; } /** * @see #setVisible(Boolean) */ public Boolean getVisible() { return visible; } /** * Set the initial visibility of the series. * <p> * Defaults to: true */ public void setVisible(Boolean visible) { this.visible = visible; } /** * @see #setGapSize(Number) */ public Number getGapSize() { return gapSize; } /** * <p> * Defines when to display a gap in the graph. A gap size of 5 means that if * the distance between two points is greater than five times that of the * two closest points, the graph will be broken. * </p> * * <p> * In practice, this option is most often used to visualize gaps in time * series. In a stock chart, intraday data is available for daytime hours, * while gaps will appear in nights and weekends. * </p> * <p> * Defaults to: 0 */ public void setGapSize(Number gapSize) { this.gapSize = gapSize; } /** * @see #setGapUnit(String) */ public String getGapUnit() { return gapUnit; } /** * Together with <code>gapSize</code>, this option defines where to draw * gaps in the graph. * <p> * Defaults to: relative */ public void setGapUnit(String gapUnit) { this.gapUnit = gapUnit; } /** * @see #setLegendIndex(Number) */ public Number getLegendIndex() { return legendIndex; } /** * The sequential index of the series within the legend. * <p> * Defaults to: 0 */ public void setLegendIndex(Number legendIndex) { this.legendIndex = legendIndex; } /** * @see #setNavigatorOptions(PlotOptionsSeries) */ public PlotOptionsSeries getNavigatorOptions() { return navigatorOptions; } /** * <p> * Options for the corresponding navigator series if * <code>showInNavigator</code> is <code>true</code> for this series. * Available options are the same as any series, documented at * <a class="internal" href="#plotOptions.series">plotOptions</a> and * <a class="internal" href="#series">series</a>. * </p> * * <p> * These options are merged with options in * <a href="#navigator.series">navigator.series</a>, and will take * precedence if the same option is defined both places. * </p> * <p> * Defaults to: undefined */ public void setNavigatorOptions(PlotOptionsSeries navigatorOptions) { this.navigatorOptions = navigatorOptions; } /** * @see #setPointRange(Number) */ public Number getPointRange() { return pointRange; } /** * The width of each point on the x axis. For example in a column chart with * one value each day, the pointRange would be 1 day (= 24 * 3600 * 1000 * milliseconds). This is normally computed automatically, but this option * can be used to override the automatic value. * <p> * Defaults to: 0 */ public void setPointRange(Number pointRange) { this.pointRange = pointRange; } /** * @see #setShowInNavigator(Boolean) */ public Boolean getShowInNavigator() { return showInNavigator; } /** * Whether or not to show the series in the navigator. Takes precedence over * <a href="#navigator.baseSeries">navigator.baseSeries</a> if defined. 
* <p> * Defaults to: undefined */ public void setShowInNavigator(Boolean showInNavigator) { this.showInNavigator = showInNavigator; } }
//createDatabase creates a new database for testing, real creation is done by the cloudformation stack func (d *DynamoDB) createDatabase() error { emails := &dynamodb.CreateTableInput{ AttributeDefinitions: []*dynamodb.AttributeDefinition{ { AttributeName: aws.String("id"), AttributeType: aws.String("S"), }, { AttributeName: aws.String("email_address"), AttributeType: aws.String("S"), }, }, KeySchema: []*dynamodb.KeySchemaElement{ { AttributeName: aws.String("id"), KeyType: aws.String("HASH"), }, }, GlobalSecondaryIndexes: []*dynamodb.GlobalSecondaryIndex{ { IndexName: aws.String(d.emailAddressIndexName), KeySchema: []*dynamodb.KeySchemaElement{ { AttributeName: aws.String("email_address"), KeyType: aws.String("HASH"), }, }, Projection: &dynamodb.Projection{ ProjectionType: aws.String(dynamodb.ProjectionTypeKeysOnly), }, ProvisionedThroughput: &dynamodb.ProvisionedThroughput{ ReadCapacityUnits: aws.Int64(5), WriteCapacityUnits: aws.Int64(5), }, }, }, ProvisionedThroughput: &dynamodb.ProvisionedThroughput{ ReadCapacityUnits: aws.Int64(5), WriteCapacityUnits: aws.Int64(5), }, TableName: aws.String(d.emailsTableName), } _, err := d.dynDB.CreateTable(emails) if err != nil { if !strings.Contains(err.Error(), dynamodb.ErrCodeResourceInUseException) { return err } } return nil }
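For illustration of how the email_address global secondary index created above might be consumed, here is a minimal sketch in Python with boto3 (a hypothetical client-side counterpart, not part of the Go package; the table name "emails" and index name "email_address-index" stand in for d.emailsTableName and d.emailAddressIndexName):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
emails = dynamodb.Table("emails")  # assumed value of d.emailsTableName

# The index uses a KEYS_ONLY projection, so only id and email_address come back.
resp = emails.query(
    IndexName="email_address-index",  # assumed value of d.emailAddressIndexName
    KeyConditionExpression=Key("email_address").eq("user@example.com"),
)
for item in resp["Items"]:
    print(item["id"], item["email_address"])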
Tommy "Tiny" Lister -- best known as Deebo from "Friday" -- has agreed today to plead GUILTY in federal court for conspiring to commit mortgage fraud, which led to $3.8 MILLION in losses for unsuspecting lenders. According to the plea agreement -- obtained by TMZ -- Lister admitted to a diabolical scheme in which he and several individuals conspired to obtain four different mortgages on homes in L.A. using false information and bogus bank statements. Lister admitted he and his co-conspirators collected mortgages worth $5.7 million -- and defaulted on all four ... costing the lenders $2.6 million. He also admitted to withdrawing over $1.1 million in loans using the properties as collateral ... which he never paid back. All-in-all Lister has admitted swindling banks out of $3.8 million. Lister now faces up to 5 years in federal prison. He's due in court in September.
"""tm_interface allows capture and key signaling to a running trackmania instance"""
import time

import cv2
import d3dshot

from tesseract import Tesseract

# TODO: Replace this with a virtual controller
from keyboard import PressKey, ReleaseKey, W, A, S, D, R, Enter

BOTTOM_LEFT_SCREEN_REGION = (0, 690, 1270, 1410)


class TrackManiaInterface():
    """TrackManiaInterface is a class for wrapping the capture and sending to a Trackmania process"""

    def __init__(self, capture_box=BOTTOM_LEFT_SCREEN_REGION):
        self.capture = d3dshot.create("numpy")
        self.capture.display = self.capture.displays[1]
        self.capture_region = capture_box
        self.capture.capture(region=self.capture_region)

    def __del__(self):
        self.capture.stop()

    def disconnect(self):
        self.capture.stop()
        ReleaseKeys(W, A, S, D, R)

    def capture_screen(self, count=1):
        return self.capture.get_frame_stack(list(range(count)), stack_dimension="first")

    def standard_actions(self, action):
        if action == 0:
            # No input
            ReleaseKeys(W, A, S, D, R)
        elif action == 1:
            # Neutral Left
            ReleaseKeys(W, S, D, R)
            PressKeys(A)
        elif action == 2:
            # Neutral Right
            ReleaseKeys(W, A, S, R)
            PressKeys(D)
        elif action == 3:
            # Forward
            ReleaseKeys(A, S, D, R)
            PressKeys(W)
        elif action == 4:
            # Forward Left
            ReleaseKeys(S, D, R)
            PressKeys(W, A)
        elif action == 5:
            # Forward Right
            ReleaseKeys(A, S, R)
            PressKeys(W, D)
        elif action == 6:
            # Backwards
            ReleaseKeys(W, A, D, R)
            PressKeys(S)
        elif action == 7:
            # Backwards Left
            ReleaseKeys(W, D, R)
            PressKeys(S, A)
        elif action == 8:
            # Backwards Right
            ReleaseKeys(W, A, R)
            PressKeys(S, D)

    def reset(self):
        ReleaseKeys(W, A, S, D, R)
        PressKey(R)
        PressKey(Enter)
        time.sleep(1.5)
        ReleaseKey(R)
        ReleaseKey(Enter)


def ReleaseKeys(*args):
    for arg in args:
        ReleaseKey(arg)


def PressKeys(*args):
    for arg in args:
        PressKey(arg)
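A minimal usage sketch for the interface above (hypothetical driving loop; it assumes a TrackMania window is visible on the second display and that the d3dshot/keyboard dependencies are installed):

import random
import time

tm = TrackManiaInterface()
tm.reset()  # restart the current track
for _ in range(100):
    frames = tm.capture_screen(count=4)   # stack of the 4 most recent frames
    action = random.randint(0, 8)         # placeholder for a learned policy
    tm.standard_actions(action)
    time.sleep(0.05)
tm.disconnect()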
Chronic Glycemic Control in Surgical Patients admitted at a Tertiary Care Hospital Background: Surgeons are performing millions of operations on diabetic patients daily, and lack of awareness among diabetic patients is leading to complications. Objective: To determine chronic glycemic control in general surgical patients admitted at a tertiary care hospital. Methodology: This was a cross-sectional study conducted from June 2018 to January 2019, on fifty-seven consecutive patients, suffering from diabetes and needing surgical intervention in any form were included in this study. Diabetes status in terms of HbA1c, causes of admission to the surgical ward, and intervention done were noted. Data were analyzed by using SPSS 20. Results: Among these diabetic patients, 46 (80.70%) were male, and 36 (63.16%) were known diabetics. HbA1c level was normal in 9 (15.79%) patients, pre-diabetic in 13 (22.81%) and uncontrolled diabetic in 35 (61.4%) patients. Aetiologically diabetic foot was seen in 36 (63.16%) patients, abdominal catastrophe 12 (21.05%), leg swelling 7 (12.09%), 5 (8.77%) scrotal abscess, carbuncles 4 (7.02%), miscellaneous 5 (8.77%). Incision drainage and closure were done in 10 (17.54%), drainage wound debridement, decompression of compartment syndrome, and constructive procedure in 11 (19.3%), laparotomy 8 (8.77%) and watchful conservation in 4 (7.01%) patients. Two patients were saved from the mortal blow of the diabetic coma. Hypertension and nephropathy were seen in 8 patients each, and 5 patients have Hepatitis C, and 1 patient has ischemic heart disease. Conclusion: This study showed that many patients did not know their diabetes status, two-third of patients were having uncontrolled diabetes. There is a need for proper assessment and management of diabetic patients by consultants and young doctors in every discipline of medicine, especially surgery. Introduction Diabetes Mellitus is a chronic metabolic disorder characterized by hyperglycemia resulting from defects in insulin secretion or action or both. It has affected about 26 million people in the United 1 States alone. Currently, an epidemic of type 2 diabetes is being witnessed throughout the world. It is resulting in an ever-increasing number of diabetic patients with its complications. In Pakistan, the prevalence of type 2 diabetes is 2 estimated to be 11.77%. The current incidence of insulin-dependent diabetes mellitus varies between <1/100,000 to >40/100,000 of the world population. Well-defined stages can characterize 3 symptomatic type 1 diabetes. HbA1c is a marker used for the diagnosis and management of 4 diabetes. Regardless of the type of diabetes, patients with uncontrolled diabetes mellitus exhibit a significant increase in the rate of surgical and systemic complications, higher mortality, and 5 length of stay in the hospital. The postoperative complications include infections, cardiac events, and 6 acute renal failure. On the contrary, it is observed that normal glycemic or tight glycemic control has demonstrated increase 90-day mortality in intensive care patients. This finding has dampened the 7 enthusiasm for 'tight glycemic control'. Surgery performed electively or in the emergency, causes catabolic stress on the patient and leads to the secretion of counter-regulatory hormones both in normal or diabetic subjects. These hormones increase glycogenolysis, gluconeogenesis, lipolysis, Diabetes mellitus is a risk factor in the revised cardiac risk index of Lee. 
For elective surgery 10 HBA1c of less than 69mol/mol is recommended. Body sugar is assessed and monitored by plasma sugar, plasma ketone bodies, and Glycosylated hemoglobin HbA1C. Urinary sugar control monitoring is usually not practiced nowadays. The DexCom and MiniMed Medtronic systems can monitor short term serum glucose. These systems involve inserting a subcutaneous sensor that measures glucose concentrations in the interstitial 11 fluid for 72 hours. Glycosylated hemoglobin measurement is a useful index of long-term blood glucose levels monitoring. It reflects glycemic 12 control over 2-3 months. On the other hand, it is also seen that by adjusting glucose-lowering therapy many patients do not achieve glycemic targets seen in terms of glycosylated hemoglobin 13 level, literature has reported concerns about using only the glycosylated hemoglobin (HbA1c) level is misleading as it over-diagnose 14 prediabetics. Correlation of HbA1c levels with glycemic status is as; Normal HbA1c, 4-5.6%, Pre-diabetic HbA1c, 5.7%-6.5%, and Diabetic HbA1c, more than 6.5%. The objective of this study was to determine the chronic glycemic control in the general surgical patients, admitted in surgical ward at the tertiary care hospital. Methodology A descriptive observational study was conducted from June 2018 to January 2019, undertaken at Surgical Unit II, Sheikh Zayed Hospital, Rahim Yar Khan. It was carried out after the approval from the Institutional Ethical Review Board and informed verbal consent was taken from every patient. It included 57 consecutive surgical patients admitted for any surgical reason in the ward. Serum sugar was estimated in all the patients admitted in the ward. Patients with raised serum sugar levels were scrutinized by clinical evaluation and HbA1c. Patients neither having raised serum sugar level nor needing surgical consultation/intervention were excluded from the study. The diabetic patients needing either surgical consultant decision or surgical intervention were included in this study. Age, sex, awareness about diabetes status, surgical conditions needing surgical intervention/decision, management provided to the patient, and associated medical conditions were recorded. Results This study consisted of 57 diabetic patients who underwent surgery. The results are shown in Table I. It included 46 (80.7%) male patients, only 19 (33.34%) patients were unaware that they had diabetes, and 19 (34.4%) of these diabetic patients were neither taking any anti-diabetic medicine nor observing anti-diabetic lifestyle. HbA1c level were normal in 9 (15.79%) patients, pre-diabetic in 13 (22.81%) and uncontrolled diabetic in 35 (61.4%) patients. Aetiologically, diabetic foot was in 36 (63.16%) patients, abdominal catastrophe 12 (21%), leg swelling 7 (12.09%), 5 (8.77%) scrotal abscesses, carbuncles 4 (7%), and miscellaneous 5 (8.77%). 15 diabetics in the population. Though retinopathy, nephropathy, diabetic foot, hypertension cardiovascular involvement are common complications of diabetes yet our study verifies that all the organs are affected by diabetes mellitus. The involvement of foot is a common presentation in diabetics, in our study. In the majority of diabetic patients, there is an association of hypertension and nephropathy which correlates well 16 with the study of Akhtar et al, Hepatitis C is an additional co-morbidity in our part of the world. 
Our study endorses the recommendation of Eknithise et al that knowledge, perception, and practice toward self-care among elderly patients suffering from type 17 2 Diabetes Mellitus were poor. It is also observed by Kamran et al that due to lack of knowledge, about half of the people with diabetes use herbal 18 medicine. In modern diabetes management is the focus is to provide holistic and individualized patient care. It is based on structured education, self-management, and safe and effective glucose-lowering therapies. The research studies support concentrating diabetic care on consultants with special interests in diabetes as more innovative and integrated models of care and task-sharing care. These modules include the involvement of the pharmacists in patient care. As more studies are needed to identify the effect of 19 health system arrangements on various outcomes, the rest of the medical community and population at large must also be educated in the care of diabetes and its complications. Astonishingly in the modern world, Non-insulin-dependent diabetes is being controlled in the severely obese patients by gastric 20 bypass surgery, but our population is unaware of diabetes. In There is evidence that the prevalence of depression is moderately increased in pre-diabetic patients and undiagnosed diabetic patients, but the 23 ignorance about diabetes is seen in our study. It needs further evaluation in the Pakistani population. Now it is being appreciated that people with diabetes need a lot of selfmanagement and education (DSME). A wide variety of DSME programmes are being organized because for most people diabetes education is not truly embedded in routine clinical care. In comparison to drugs and devices, DSME lacks investment and funding. Collaboration and leadership are required to overcome these 24 deficiencies. Conclusion A large segment of our patients admitted in the surgical ward did not know that they have diabetes, and two-third was having uncontrolled diabetes. The proper screening and management of diabetic patients by consultants and young doctors in every discipline of medicine especially surgery is the need of time. Every health caregiver and patient must remain conscious of diabetes.
There’s a few more of these, and additional commentary, at The Atlantic. JOHN adds: Fifty years ago? It seems like only yesterday! Well, not exactly yesterday, but not quite a half century, either. I do recall how the Beatles swept away, for a while at least, pretty much all popular music as it existed at that time. One evening in a church basement after a high school football game (I was in junior high at the time), someone played “I Want to Hold Your Hand” over and over again because no one wanted to listen to anything else. Ah, the music of our youth. I have never understood why today’s teenagers don’t seem to fully appreciate the Beatles. But then I thought: the Beatles are now 50 years into the rear-view mirror. When I first listened to them in 1963, what music was then 50 years old? Here is a list of the top songs of 1913. (1913!!) Al Jolson figures prominently; “When Irish Eyes Are Smiling” was number one. If you had asked us, in 1963, to appreciate the music of 1913, our reaction would have been–to put it politely–negative.
import numpy as np

def hist_probabilities(nda):
    # hist_values() is assumed to be defined elsewhere in this module and to
    # return the per-bin counts (histogram values) of the input array `nda`.
    nvals = nda.size
    ph = np.array(hist_values(nda), dtype=float)
    ph /= nvals
    return ph
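A quick usage sketch for the function above (hist_values is assumed to return per-bin counts; a hypothetical stand-in based on numpy.bincount is used here so the snippet runs on its own). The returned probabilities always sum to 1, since the counts are divided by the total number of values.

import numpy as np

def hist_values(nda):
    # Hypothetical stand-in: per-bin counts of non-negative integer-valued data.
    return np.bincount(nda.ravel())

nda = np.random.randint(0, 8, size=(64, 64))
probs = hist_probabilities(nda)
print(probs.sum())  # -> 1.0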
Barcelona football star Ronaldinho was granted Spanish nationality Monday. The Brazilian playmaker, 27, was given Spanish citizenship at a short ceremony on Monday at the courthouse in the Barcelona suburb of Gava, where he has resided since joining Barca from Paris Saint-Germain in 2003. All European clubs are keen for their foreign stars to acquire a European nationality, in order to free up another place in their squad for another non-EU player. After the ceremony Ronaldinho - who played poorly in Sunday's 0-0 draw in Santander - left for training at the Camp Nou. Later this week it is expected that Barca's Mexican teenager Giovani dos Santos will be granted Spanish citizenship, which will allow him to play in "La Liga." Barca now have only two non-EU players in their squad, Africans Samuel Eto'o and Toure Yaya.
Hey folks, Follow us this week into the bowels of the Crypt in this macabre world of the Necromancer. We’ll also discuss the Crypt and the Bedrock Beta. Please, watch your step on the way down, it’s quite slippery and the Necromancer has been hoping for a few new corpses… The Bedrock Beta Now, this is probably something you guys have been fearing for a few days, and we didn’t want to say it was happening until we knew it was actually the case, but: the beta is being delayed again. We aren’t going to be putting another date or timeframe on it as we don’t want to let you guys down again, but we are going to keep you as up to date as possible with our progress, starting today. Our goal has always been to design a transparent system from the ground up to ensure maximum modding potential, part of this is that ensuring all of the assets and data we use in the game are available to you in Dungeoneer. We believe this goal is a cornerstone to making sure our community has all the power they need to develop the best mods. Unfortunately this requires us to overcome some rather unique challenges. We’ve pushed through a number of anticipated and unexpected roadblocks in these first few months of development, however the most recent (unexpected) problem to arise has consumed a rather large amount of our resources and, as a result, caused core features to be delayed that are required for the beta. The short version of the problem is that we ran into a memory issue when assets are loaded into Dungeoneer (our custom toolkit) which requires us to rewrite parts of it from the ground up. Since we’re pushing back the Beta due to this roadblock we anticipate a delayed release of the final game, however there is a silver (maybe even gold) lining to this… Due to these delays certain areas of development are going to have some additional time on their hands, so we’re going to put together an additional mini campaign that will be free to all backers and pre-order customers. This mini campaign will be released in stages throughout the beta and will give you a taste of the story before the final release of the game. The TL;DR: Beta has been delayed due to a technical issue. No exact date yet, but we will be posting weekly progress in our updates. Release of the final game has also been pushed back. Extra Mini-campaign to be released free to all backers and pre-order customers during the beta. Now for the fun stuff! The Necromancer & The Crypt The Necromancer Doan was the first of his kind to walk the Overworld, his search to prolong life eventually soured as he was drawn by the allure of dark magic to aid his research. His success left him a husk of his former being, neither alive nor dead he continued his sinful research for time untold. Few of his books survived when the High Inquisitor Annaud incinerated his Archives (with the Necromancer chained to his bookshelves inside) but those that did hold the secrets of undeath he managed to unlock. When an unwitting mage reads the pages of his dark work they too are cursed with the same lust as Doan… Within the surviving tomes lies the secret of the Soul Pyre, which allows a Necromancer to trap the souls of the bodies cast upon it. These souls can be used by the godless Necromancer to empower dead bodies with life once more, raising them as undead Ghouls shackled to his will. As the Necromancer’s army grows so too does his power, yet if vanquished the Ghouls will wander aimlessly unless a Necromancer of equal skill is able to chain them to his own command. 
With the command of an undead army at his side, the Necromancer is a fierce opponent whose power is not to be taken lightly — a welcome ally to any Underlord who deals regularly with death. To draw them to your dungeon you will need a Crypt, and, naturally, a supply of fresh bodies.

Abilities

- Army of Darkness (Passive): The Necromancer gains a defensive bonus for every nearby Ghoul & Revenant.
- Necromancy (Passive): Every time a nearby unit dies (including undead), the Necromancer heals himself and all nearby Ghouls for 10% of their maximum health, and returns Revenants to full health.
- Raise Dead (Active): Raises a Ghoul from a Soul Pyre in the Crypt. Long cooldown.
- Mark for Death (Active): The Necromancer marks an enemy unit for death, causing all of his Ghouls and Revenants to attack it.
- Revenant (Active): Revives a fallen unit (friend or foe) to temporarily fight for him. Revenants slowly lose health over time until they die once again.
- Reanimate (Active): Revives a fallen Ghoul. Long cooldown.

Thank you for your understanding through all of this, we know you want to play your game (we do too!) and we hope you'll bear with us while we work out these kinks. We really appreciate your continued patience and support, we'll see you here next week (or every day on our forums). Until next time Underlord, – WFTO Team Click here to discuss this update on our forums!
Last summer, Pokémon Go players descended on Milwaukee County parks in droves, and those Poké masters allegedly left quite a mess. This has prompted a response from the people in charge of those public spaces. The Milwaukee county board passed an ordinance last week that will require Pokémon Go developer Niantic to acquire a permit to use the county’s park locations in the location-based monster-catching simulator, according to the Milwaukee Journal Sentinel. The board enacted this new rule in response to claims that Pokémon Go players caused thousands of dollars in damage to area parks that the county had to pay for itself. This rule won’t affect players because the Milwaukee board is only interested in targeting the company responsible for creating the game. I’ve reached out to the Milwaukee county board and Niantic to ask about this ordinance, and we’ll update this story with any new information. But this is a rule that Niantic probably doesn’t want to abide by because the Milwaukee board is potentially violating the developer’s First Amendment right to free expression. “[Cities and public officials] don’t have an option to file suit against Niantic. That’s not really in the cards,” Avvo chief legal officer Josh King explained to GamesBeat last summer. “If you want to look at the pure legal issue there, Niantic or Pokémon can associate any piece of property with, let’s call it a virtual signal. They’re well within their First Amendment rights to do that.” Put simply, this ordinance is essentially saying that Niantic’s software cannot use Milwaukee’s parks, and that doesn’t seem all that different than if the board tried to pass a rule prohibiting artists from referencing the county’s parks on television, in books, or on a map. To get the ordinance overturned, Niantic or another interested party would potentially have to file a lawsuit, and nothing like that has happened yet.
Non-invasive techniques, typically using ultrasound, are well-known for determining bladder volume, i.e. the amount of urine in the bladder. The reliability and accuracy of such ultrasound techniques have been well-documented and they are now well accepted by the medical community. Information concerning bladder volume is used by health professionals in the treatment of bladder dysfunction and to prevent over-filling of the bladder in those cases where there is a permanent or temporary loss of bladder sensation, due to spinal cord injuries and/or postoperative recovery, as well as other reasons. It is also well-recognized that an important aspect of good bladder health involves prevention of bladder distension. Typically, as bladder pressure increases with increasing urine volume, ultimately reaching the point where bladder distension begins to occur, incontinent episodes will occur because the sphincter muscles are unable to retain the urine in the bladder. In many individuals, the point of incontinence occurs consistently at a particular volume. If this particular volume is known, then incontinent events can be prevented by using information on bladder pressure/distension. If the bladder continues to fill so that it becomes hyperdistended, renal damage, renal failure and in some cases even death can occur. Hyperdistension, like distension, can be successfully prevented, however, by monitoring bladder distension. At low bladder volumes, bladder distension information is typically not very useful. As the bladder fills, however, a quantization of bladder distension becomes more useful relative to ascertaining problematic conditions. Bladder distension information is potentially more useful than straight volume measurements because normal bladder capacity varies widely across the human population. The same volume of urine in two different patients can have very different consequences. There have been previous attempts to quantize bladder distension, including the use of ultrasound back wall scatter characterization in determining bladder wall thickness. Bladder wall muscles will stretch and thin as the bladder fills. This thinning of the bladder wall can be directly measured by recording backscatter information at various known volumes for a particular patient. Such methods, however, are not particularly reliable or consistent and often do not directly correlate with actual distension of the bladder. In the present invention, a substantially different approach is taken, directed toward ascertaining the degree of roundness of the bladder as it fills, with increasing roundness being a reliable indication of pressure.
Influencing factors of hand hygiene in critical sections of a brazilian hospital Introduction: The aim of this study was to monitor adherence to hand hygiene by health professionals working in critical sections and to assess the factors that influenced adherence, such as physical structure of the units, use of procedure gloves, employment bond of the worker, and perception of patient safety climate. Methodology: Observational and correlational study carried out in critical areas of a university hospital in the Midwest region of Brazil. Results: The overall hand hygiene adherence rate was 46.2% (n = 3,025). Adherence was higher among nurses 59.8% (n = 607) than among nursing technicians (p < 0.001), and the section with the greatest adherence was the neonatal Intensive Care Unit 62.9% (n = 947) (p < 0.001). Unlike the neonatal unit, in the adult unit the dispensers of alcohol-based handrubs were poorly located, without arms reach, and the taps were manual. In this section, a greater frequency of procedure glove use was also observed, 90.6% (n = 536), as compared to the other sections (p < 0.001). Regarding safety climate perception, temporary employees had higher means as compared to regular employees (p = 0.0375). Conclusions: Hand hygiene adherence was affected and/or influenced by the physical structure, use of procedure gloves, work regime, and patient safety climate. Introduction Increased mortality, increased hospitalization time, increased economic burdens on health systems and potential transmission of multi-resistant microorganisms show that healthcare-associated infections (HAI) have a significant negative impact for patients, professionals and organizations, representing a serious current global public health problem. Intensive Care Units (ICU) are the main site of HAI occurrences, characterized by highly complex care provided to critical patients, with several invasive procedures, a marked severity profile of patients, greater demand for intensive care and antibiotic administration, among others. The World Health Organization (WHO), together with other national and international institutions, has developed approaches to improve occupational health and safety practices among professionals. Among them, the most recent, the "Multimodal Strategy for Improving Hand Hygiene Adherence" has five components: system change, training/ education, performance observation/feedback, reminders in the workplace and institutional security environment. This strategy proved to be successful in improving good practices of hand hygiene adherence. However, hand hygiene (HH) adherence has been considerably lower than that recommended worldwide. Among the factors that contribute to low HH adherence are structural, organizational and individual components. The infrastructure of health units is often represented by an insufficient number of washbasins, a deficit in the supply of liquid soap and paper towels, absence of HH posters and the availability of alcohol-based handrubs without arms reach of professionals at the time of care. As for the organizational components that negatively affect on HH adherence, it is worth mentioning the perceived unfavorable patient safety climate. Safety climate positively influences HH adherence as it refers to the involvement of management with patient safety issues. Thus, health institutions with a consistently higher safety culture have greater HH adherence than institutions with a more fragile safety culture. 
In this context, the work relationship is an element about the type of employment contract that influences health workers' perception of the patient safety climate, since those who have a temporary employment contract, and therefore, without guarantee of stability, can present positive results concerning safety climate, both because they have been in the institution for less time and because they fear some retaliation in the workplace. Another element that hinders HH adherence is the inappropriate use of procedure gloves. These gloves are an part of standard precautionary measures and are therefore mandatory in various clinical situations, in order to avoid contamination of health workers and transmission of microorganisms. However, HH must be performed before puutting the gloves and after removing them. Based on the above, the objective of this study was to monitor HH adherence by health professionals working in critical sections and to assess the factors that influenced adherence. Study design This is an observational, analytical and correlational study. Participants and setting The research took place in the Adult and Neonatal Intensive Care Units (ICU) and semi-intensive unit of a university hospital, with 124 beds, in the Midwest region of Brazil. The total population of professional nurses, nursing technicians, doctors, medical interns and physical therapists from critical sections of the university hospital was the object of this study (n = 172). That is, all 172 professionals on the work schedule were included. However, only 148 professionals effectively agreed to participate in the study. The reasons for the 24 professionals not participating were: nine professionals were not found on the days of data collection, eight refused to participate in the study, four were not approached because they were on vacation, and three were on sick leave. As pre-established eligibility criteria, the worker should be working at the institution for more than six months, revealing professional experience at the institution, and deliver direct care actions to patients during the data collection period. Professionals who performed exclusively administrative functions and who were learning biosafety measures at the time of data collection were excluded, in order not to influence the proposed objectives. Variables The dependent variable of the study was HH adherence. The independent variables were: professional categories, critical sections of activity, type of employment contract regime, use of gloves, structure of the units, and perceived patient safety climate. Measurement HH was monitored using the WHO observation form, a tool used worldwide to assess HH adherence by health professionals. This instrument is a checklist that is filled out by the researcher during direct observation. It consists of the five moments recommended by the WHO and the action taken, with three possibilities of filling: 1) rubbing with alcohol; 2) soap and water; 3) not performed. In option 3, the recorded cases were the health professional who did not wash his/her hands, and if he/she did, at the time of observation, was using procedure gloves. Each observation session lasted about 20 minutes. The hand hygiene compliance rate was calculated by the following formula: adherence (%) = number of hand hygiene actions/total number of opportunities 100, as recommended in the literature. The infrastructure was assessed using the questionnaire provided by WHO. 
Completed by the researcher, this instrument is a checklist that has 27 items related to physical resources for the sections, such as availability of water, number of beds, number of sinks with water, soap and paper towel available, number of dispensers with alcohol-based handrubs within reach, in conditions of use/refilled, presence/location of illustrative posters about HH, availability of procedure gloves, number of medical professionals, nurses and nursing technicians in each section, participation in HH training, and presence of an audit on HH adherence at the institution. The patient safety climate was measured using a self-administered instrument called Safety Attitudes Questionnaire (SAQ) Short Form 2006, adapted and validated for the reality of Brazilian hospitals in order to assess the perception of patient safety climate. It has a Likert-type ordinal scale (0-5 points, from strongly disagree to strongly agree) with 41 items divided into six domains: teamwork climate, safety climate, job satisfaction, stress perception, management perception (of section and hospital) and working conditions. The score ranges from 0 to 100 points and scores ≥ 75 are considered as positive. Participants also answered a sociodemographic and professional questionnaire that included the following variables: sex, age, length of professional experience, place of professional experience and participation in hand hygiene training. Bias The training for observers included simulation of the HH scenarios represented by the five moments with proper completion of the observation form. This training was planned and conducted by a specialist in the subject. After the training, the researchers observed 10 professionals and 53 HH opportunities simultaneously, during the morning and afternoon shifts. The interobserver agreement and Kappa coefficient were calculated, whose result was 0.90, classified, therefore, as almost perfect agreement. To minimize the Hawthorne effect, health professionals received information and signed an informed consent form six months before the observation. In addition, the observations occurred daily, timed in sessions of 20 minutes at most, during the morning, afternoon and evening shifts and on weekends. Statistical methods The processing and statistical analysis of the data were performed with software R. For comparisons of hand hygiene adherence between the variables "professional categories", "five moments", "activity sections", and "glove use", the chi-square and z of proportions tests were performed, as well as 95% confidence intervals. The descriptive analyses of the domains of the Safety Attitudes Questionnaire (SAQ) appear in frequency tables, and the scores of the means and medians of each domain were compared across professionals' activity sections and type of contract bond through the Kruskal-Wallis and Conover-Iman tests, which allowed visualizing the significance of the data through the calculated medians. Spearman's correlation between SAQ scores and hand hygiene adherence in the sections was performed, considering the data did not show normal distribution. To interpret the values of positive and negative correlations, Ajzen and Fishbein's classification was used, in which values less than 0.30 correspond to weak correlations with little clinical applicability; values below 0.30 and 0.50 are considered moderate correlations and those above 0.50, strong correlations. For all statistical tests, the 0.05 significance level was considered. 
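To make the computations described in the Measurement and Statistical methods subsections concrete, the sketch below applies the adherence formula (actions divided by opportunities, times 100) to a hypothetical table of WHO-form records and then runs a chi-square test and a Spearman correlation with SciPy, in the spirit of the analyses the authors performed in R. All field names and numbers are invented for the example and are not the study's data.

from collections import defaultdict

import numpy as np
from scipy import stats

# One dict per observed HH opportunity from the WHO observation form.
# Field names and values are illustrative only.
observations = [
    {"section": "adult ICU", "category": "nurse", "action": "alcohol"},
    {"section": "adult ICU", "category": "nursing technician", "action": "not_performed"},
    {"section": "neonatal ICU", "category": "nurse", "action": "soap_water"},
]

def adherence_rate(records):
    """Adherence (%) = hand hygiene actions performed / total opportunities x 100."""
    total = len(records)
    performed = sum(r["action"] in ("alcohol", "soap_water") for r in records)
    return 100.0 * performed / total if total else float("nan")

by_section = defaultdict(list)
for record in observations:
    by_section[record["section"]].append(record)
for section, records in by_section.items():
    print(f"{section}: adherence = {adherence_rate(records):.1f}%")

# Chi-square test on a hypothetical contingency table of
# (HH performed, HH not performed) counts for two sections.
table = np.array([
    [596, 351],
    [461, 587],
])
chi2, p_value, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4g}")

# Spearman correlation between per-participant SAQ scores (0-100 scale)
# and observed adherence rates, as in the paper's correlation analysis.
saq_scores = np.array([62.0, 71.5, 80.0, 55.5, 68.0, 74.0])
adherence_rates = np.array([40.0, 52.0, 66.0, 35.0, 48.0, 57.0])
rho, p_rho = stats.spearmanr(saq_scores, adherence_rates)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4g}")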
Ethical considerations The Results A total of 3,025 HH opportunities were observed. Of these, 1,048 were in the adult ICU, 947 in the neonatal ICU and 1,030 in the semi-intensive care unit. The general HH compliance rate was 46.25%. As shown in Table 1, the chi-square test rejected the null hypothesis (p = 0) of equality for HH adherence As for physical infrastructure, in the adult ICU there were two sinks with hand-operated taps, one at the entrance to the isolation and one in the common area, next to the first bed. In the sections of the semi-intensive unit, the sinks had hand-operated taps. In the medication preparation room there was no liquid soap in the dispenser, and the disposal containers, which should be activated by foot pedal, were defective, hindering the disposal of paper towels and other materials. In the adult ICU, there were poorly located dispensers such as behind the bed or devices such as an infusion pump, mechanical respirator, among others. In the three sections investigated, there were no illustrative WHO posters at the points of care to remind professionals about HH adherence. Table 2 shows the data regarding infrastructure. Of the 3,025 HH opportunities observed, 1,399 HH actions were carried out and 1,626 were not carried out. Of these actions, in which professionals failed to cleanse their hands, 1,258 (77.36%) were related to the inappropriate use of gloves (p < 0.001). There was a greater frequency of glove use to the detriment of HH absence at the moments "before aseptic procedures" and "after body fluid exposure risk" as compared to the others (p < 0.001). Regarding the sections, glove use was significantly higher in the adult ICU unit than in the neonatal and semi-intensive ICUs (p < 0.001). Table 3 shows the data regarding the frequency of glove use in the 5 moments of hand hygiene and in the different sections. Table 4 shows the analysis of SAQ response frequencies, overall and by domains, compared to the type of employment contract. Temporary employees had higher scores than regular employees (hired by public tender) and this difference was significant (p = 0.0101). In addition, there was a statistically significant difference in the domains teamwork (p = 0.0375), stress perception (p = 0.0444), unit management perception (p = 0.0238) and hospital management perception (p = 0.0056). The correlation between SAQ domains by the sections investigated and HH adherence was assessed. In the neonatal ICU, there were positive and moderate correlations in the domains teamwork (r = 0.38, p = 0.0114), safety climate (r = 0.42, p = 0.0048), job satisfaction (r = 0.37, p = 0.0117) and total score (r = 0.40, p = 0.0091). In the semi-intensive unit, the correlations were considered positive and moderate with HH adherence in the domain unit management perception (r = 0.37, p = 0.151) and hospital management perception (r = 0.40, p = 0.0089). Discussion Despite the results of HH adherence being lower than the recommended in all professional categories, moments and sections, we observed that HH adherence is affected, or is influenced by the physical structure of the units, type of employment relationship, perceived patient safety climate, and use of procedure gloves. Different factors may be related to low HH adherence, among them, health services with inadequate physical structure, including poorly located sinks, inoperative dispensers of alcohol-based handrubs and without arms reach, use of procedure gloves, lack of training, among others. 
The higher rate of HH adherence in the neonatal ICU may have occurred because this unit has better infrastructure, with bottles of alcohol-based handrubs available at hand, as recommended by the WHO, and washbasins with automatic taps. The opposite happened in the semi-intensive unit, which presented inadequate infrastructure for HH, with less accessibility of alcoholbased handrubs in the environment of patient care and consequently less HH adherence. Difficult access sinks and dispensers, as well as installation in ergonomically incorrect points, can hinder HH adherence. Some studies showed that the greater distance between the patient's environment and the sink was associated with decreased HH adherence. Each additional meter, which must be covered by the health professional to reach a sink, decreased the likelihood of HH by approximately 10%. Likewise, a study carried out in a pediatric and neonatal ICU in the United Kingdom found that, as the visibility of sinks increased, the number of HH actions also increased. In this sense, it is important to consider that studies that implemented the WHO multimodal strategy and achieved satisfactory adherence rates over time invested mainly in infrastructure, which is the first element of this strategy. The use of gloves was observed alongside the negative action of HH in the five moments recommended by the WHO. The inappropriate glove use had a great impact on HH adherence and was perceived as one of the factors that can hinder this practice by health professionals, with an emphasis on the indications "before aseptic procedures" and "after body fluid exposure risk". The data from the present study showed that procedure gloves were used frequently by professionals before performing aseptic procedures, without previous hand cleansing. The risks resulting from this professional failure can endanger the patient's life, since lack of hand hygiene implies an increase in the transmission of microorganisms from the care environment to the gloves and later these will be in contact with the patient. At the time "after body fluid exposure risk," professionals removed gloves and did not wash their hands immediately after removal, as recommended by the WHO, and the same situation was observed in other countries in previous studies. It is noteworthy that in addition to the risks of HAI transmission to patients, one of the major risks associated with low HH adherence is the contamination of glove boxes, making them an environmental reservoir of pathogens. In our study, the section with the highest glove use adherence was the adult ICU (91%), a result that may be related to the low HH adherence found in this unit (44%). These findings are in line with other studies that attributed the use of gloves as one of the main risk factors for non-compliance with hand hygiene. Regarding the perceived patient safety climate, SAQ scores were low for all domains evaluated, corroborating research carried out in other Brazilian states and abroad. It is worth highlighting the lowest scores perceived by professionals in the domain "Unit and hospital management perception." This domain is a fundamental factor for patient safety, since it reflects the professional's agreement regarding the actions and involvement of the management or administration of the hospital and the units. 
Thus, creating a favorable atmosphere in the work environment, conducive to an open dialogue about errors, and a collaborative rather than punitive environment are some of the main actions of hospital and unit management that can have a positive impact on patient safety.

The perception of the safety climate varied according to the different work regimes. Medical interns and temporary professionals had higher means than regular professionals hired by tender (p < 0.05). This finding may be associated with these professionals' shorter service time at the institution, since the opposite situation was observed in another similar study, in which the professionals with more service time at the institution had a better perception of individual and collective skills regarding the hospital's commitment to safety issues. Moreover, temporary professionals have little stability due to the adopted work regime and tend to give more positive responses about the safety climate because they fear retaliation in the work environment, although the confidentiality of the data was highlighted several times during the study. Similar data were found in the research carried out by De Carvalho et al., with higher scores for temporary employees than for regular ones. It is worth mentioning that the employment relationship can influence responses to questionnaires of an organizational nature. Regular professionals hired by public tender have job security guaranteed by Brazilian labor laws and more time in the institution, and for these reasons they can better perceive the problems experienced and are less afraid to expose the difficulties encountered.

Regarding the correlation between SAQ domains and HH adherence in the sections investigated, the positive and moderate correlations found in the neonatal ICU and semi-intensive units showed that as the perception of the patient safety climate increases, HH adherence responds positively, which reinforces the findings about the importance of safety climate perception by professionals in increasing HH adherence in hospitals and the respective reduction of HAIs.

This research had limitations. One of them was data collection performed in a single institution, which reduces the number of observations and the representativeness of the professionals. Another limiting factor was the Hawthorne effect, which can occur during observational studies. However, several observation sessions were carried out at different times of the day to minimize this effect.

Conclusions
Low HH adherence is influenced by infrastructure and glove use. Such data reveal the need for investment in adequate infrastructure, since greater access to washbasins and availability of alcohol-based handrubs tend to favor increased HH adherence. Regarding safety climate perception, the low scores in all domains and units evaluated signal an alert situation for the institution, with an urgent need to implement actions that promote a favorable patient safety climate, since high safety climate perceptions are associated with adopting safe behaviors, improving communication, conducting training with a positive impact, and reducing adverse events, among others, thus contributing to safe practices in patient care. In line with the results of this study, health institutions and their managers are expected to recognize the importance of hand hygiene and, at the same time, seek to identify gaps and plan improvement actions based on the multimodal strategy.
package eu.dnetlib.iis.wf.importer.infospace.converter; import java.util.Stack; import org.xml.sax.Attributes; import org.xml.sax.SAXException; import org.xml.sax.helpers.DefaultHandler; /** * Funding tree XML handler retrieving funding class details. * * @author mhorst * */ public class FundingTreeHandler extends DefaultHandler { private static final String FUNDER_FUNDING_SEPARATOR = "::"; private static final String ELEM_FUNDER = "funder"; private static final String ELEM_FUNDING_LEVEL_0 = "funding_level_0"; private static final String ELEM_NAME = "name"; private static final String ELEM_SHORTNAME = "shortname"; private Stack<String> parents; private StringBuilder currentValue; private String funderShortName; private String fundingLevel0Name; // ------------------------ LOGIC -------------------------- @Override public void startDocument() throws SAXException { this.parents = new Stack<String>(); this.currentValue = null; this.funderShortName = null; this.fundingLevel0Name = null; } @Override public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException { if (isWithinElement(qName, ELEM_SHORTNAME, ELEM_FUNDER) || isWithinElement(qName, ELEM_NAME, ELEM_FUNDING_LEVEL_0)) { this.currentValue = new StringBuilder(); } this.parents.push(qName); } @Override public void endElement(String uri, String localName, String qName) throws SAXException { this.parents.pop(); if (isWithinElement(qName, ELEM_SHORTNAME, ELEM_FUNDER)) { this.funderShortName = this.currentValue.toString().trim(); } else if (isWithinElement(qName, ELEM_NAME, ELEM_FUNDING_LEVEL_0)) { this.fundingLevel0Name = this.currentValue.toString().trim(); } this.currentValue = null; } @Override public void endDocument() throws SAXException { parents.clear(); parents = null; } @Override public void characters(char[] ch, int start, int length) throws SAXException { if (this.currentValue!=null) { this.currentValue.append(ch, start, length); } } /** * @return funding class based of funder short name and level0 name, null returned when neither found. */ public String getFundingClass() { StringBuilder strBuilder = new StringBuilder(); if (funderShortName!=null) { strBuilder.append(funderShortName); strBuilder.append(FUNDER_FUNDING_SEPARATOR); if (fundingLevel0Name!=null) { strBuilder.append(fundingLevel0Name); } return strBuilder.toString(); } else { if (fundingLevel0Name!=null) { strBuilder.append(FUNDER_FUNDING_SEPARATOR); strBuilder.append(fundingLevel0Name); return strBuilder.toString(); } else { return null; } } } // ------------------------ PRIVATE -------------------------- private boolean isWithinElement(String qName, String expectedElement, String expectedParent) { return qName.equals(expectedElement) && (expectedParent==null || !this.parents.isEmpty() && expectedParent.equals(this.parents.peek())); } }
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { catchError, retry } from 'rxjs/operators';

import { HelpersComponent } from '../../helpers/helpers.component';

@Injectable({
  providedIn: 'root'
})
export class MessagesettingsService {

  // HttpHeaders is immutable: append() returns a new instance instead of
  // modifying the existing one, so the previous per-call append() calls had
  // no effect. Build the header set once here instead.
  private headers = new HttpHeaders({
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Access-Control-Allow-Headers': 'Origin'
  });

  private _messageSettingsURL = "http://loadbalancer.danfishel.com/messageservice/api/v1/";

  constructor(
    private _helperComponent: HelpersComponent,
    private _http: HttpClient) { }

  // Create Message Method
  sendMessageMethod(formData) {
    return this._http.post<any>(this._messageSettingsURL + 'msgsend', formData, { headers: this.headers }).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Save Message Template Method
  saveMessageTemplateMethod(formData) {
    return this._http.post<any>(this._messageSettingsURL + 'msgsave', formData, { headers: this.headers }).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Update Message Template Method
  updateMessageTemplateMethod(id, formData) {
    return this._http.put<any>(this._messageSettingsURL + 'msgupd' + id, formData).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Get All Messages Method
  getMessageMethod() {
    return this._http.get<any>(this._messageSettingsURL + 'msg').pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Delete Message By ID Method
  deleteMessageTemplateMethod(id) {
    return this._http.delete<any>(this._messageSettingsURL + 'delmsg' + id).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Create Notification Method
  createNotificationMethod(formData) {
    return this._http.post<any>(this._messageSettingsURL + 'notify', formData).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Update Notification Method
  updateNotificationMethod(id, formData) {
    return this._http.put<any>(this._messageSettingsURL + 'notupd' + id, formData).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }

  // Delete Notification Method
  deleteNotificationMethod(id) {
    return this._http.delete<any>(this._messageSettingsURL + 'delnot' + id).pipe(retry(1),
      catchError(this._helperComponent.handleError)
    );
  }
}
Interaction of the Nck adapter protein with p21-activated kinase (PAK1). The p21-activated kinases (PAKs) link G protein-coupled receptors and growth factor receptors (S. Dharmawardhane, R. H. Daniels, and G. M. Bokoch, submitted for publication) to activation of MAP kinase cascades and to cytoskeletal reorganization (M. A. Sells, U. G. Knaus, D. Ambrose, S. Bagrodia, G. M. Bokoch, and J. Chernoff, submitted for publication). The proteins that interact with PAK to mediate its cellular effects and to couple it to upstream receptors are unknown. We describe here a specific interaction of the Nck adapter molecule with PAK1 both in vitro and in vivo. PAK1 and Nck associate in COS-7 and Swiss 3T3 cells constitutively, but this interaction is strengthened upon platelet-derived growth factor receptor stimulation. We show that Nck binds to PAK1 through its second Src homology 3 (SH3) domain, while PAK1 interacts with Nck via the first proline-rich SH3 binding motif at its amino terminus. The interaction of active PAK1 with Nck leads to the phosphorylation of Nck at multiple sites. Association of Nck with PAK1 may serve to link this important regulatory kinase to cell activation by growth factor receptors. Nck has been identified as one of the so-called adapter proteins, consisting of one Src homology 2 (SH2) 1 and three Src homology 3 (SH3) domains. Adapter proteins are believed to play important roles in coupling activated receptors, particularly protein-tyrosine kinases, to various signaling pathways. Nck has been shown to be recruited to both the activated EGF and PDGF receptors. However, there is limited knowledge about the effector proteins that interact with Nck. Overexpression of Nck has been shown to transform mammalian cells, suggesting that it interacts with protein components important for regulation of normal cell growth. Perhaps related to this phenotype, the binding of Nck through its second SH3 domain to the mammalian Ras exchange factor Son of Sevenless (SOS) has been reported. Additional proteins able to bind with Nck are the IRS-1 protein in insulin-stimulated cells via the Nck SH2 domain, WASP, the protein defective in Wiskott-Aldrich syndrome via the third Nck SH3 domain, and a poorly characterized protein termed Nap1 via the first Nck SH3 domain. Unidentified serine/threonine kinase(s) of 65-69 kDa have also been reported to associate with the second SH3 domain of Nck both in vitro and in vivo. Nck is rich in potential phosphorylation sites, and it has been observed that Nck becomes phosphorylated on serine, threonine, and tyrosine residues in response to activation of a number of growth factor receptors, including those for epidermal growth factor, nerve growth factor, and platelet-derived growth factor. Nck is also phosphorylated in response to forskolin and phorbol ester treatment, suggesting it serves as a substrate for cAMP-dependent protein kinase (PKA) and protein kinase C. The p21-activated kinases (PAKs) were identified as serine/ threonine kinases whose activity is regulated by the small GTPases Rac and Cdc42. PAKs appear to initiate the protein kinase cascades leading to activation of the p38 and c-Jun (JNK) kinases (16 -19). Additionally, there is evidence that PAKs mediate some of the cytoskeletal effects of Rac and Cdc42. 2,3 PAKs have been shown to be stimulated by both G protein-coupled receptors and growth factor receptors. 3 However, the mechanisms involved in this activation have not yet been elucidated. 
In this study, we describe the specific interaction of Nck with PAK1 both in vitro and in vivo. Stimulation of the PDGF receptor in Swiss 3T3 cells enhances the level of associated Nck and PAK1. We map the site of this interaction to the second SH3 domain of Nck and to the first proline-rich SH3 binding domain of PAK1. PAK1 phosphorylates Nck in vitro on serine/ threonine residues, at least some of which are distinct from those phosphorylated by cAMP-dependent protein kinase. Nck may thus serve as a means to link PAK1 to activation of tyrosine kinase receptors and additional signaling components. EXPERIMENTAL PROCEDURES Plasmids and Constructs-Full-length and isolated SH domains of Nck were generated by polymerase chain reaction using the human nck cDNA as template and cloned into pGEX-2T (Pharmacia Biotech Inc.) for expression as glutathione S-transferase (GST) fusion proteins. The Nck SH3 first domain construct encoded residues 1-68 of Nck; the SH3 second, residues 101-166; SH3 third, residues 191-257; the second plus third SH3 domains, residues 101-257; and the SH2 domain, residues 275-377. These proteins were isolated using glutathione-Sepharose beads according to the manufacturer's instructions (Pharmacia) and gave essentially single bands on Coomassie Blue-stained SDSpolyacrylamide gels, with the exception of GST-full-length Nck (designated GST-Nck), which degraded to yield several smaller fragments. Full-length Nck was cloned in-frame with the hemagglutinin tag into the mammalian expression vector pCGN. PAK1 was prepared in pCMV6 with an amino-terminal Myc epitope tag as described in Footnote 2. Point mutations in Pak1 were introduced using a unique siteelimination protocol. Transient expressions in COS-7 cells were * This work was supported by United States Public Health Service Grants GM39434 (to G. M. B.), CA63139 (to L. A. Q.), and AI35947 (to U. G. K.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 The abbreviations used are: SH2, Src homology 2; SH3, Src homology 3; EGF, epidermal growth factor; PDGF, platelet-derived growth factor; PKA, cAMP-dependent protein kinase; PAK, p21-activated kinase; GST, glutathione S-transferase; GTP␥S, guanosine 5-3-O-(thio)triphosphate; aa, amino acids. performed essentially as described in Ref. 16. Cell Culture-Swiss 3T3 and COS-7 cell lines were maintained in Dulbecco's modified Eagle's medium with 10% fetal calf serum, 10 mM HEPES, 2 mM L-glutamine at 37°C in an atmosphere of 10% CO 2. Preparation of Cell Lysates and Immunoprecipitation-Cells were plated in 100-mm tissue culture dishes and serum-starved for 16 -18 h prior to the start of the experiment. The cells were then treated with or without stimuli as indicated and then rapidly scraped from the dish into ice-cold lysis buffer (25 mM Tris-HCl, pH 7.5, 1 mM EDTA, 0.1 mM EGTA, 5 mM MgCl 2, 1 mM dithiothreitol, 150 mM NaCl, 10% glycerol, 1% Nonidet P-40, 2 mM sodium vanadate, 50 IU/ml aprotinin, 1 mM phenylmethylsulfonyl fluoride, 2 g/ml leupeptin). In some experiments, cells were lysed in hypotonic buffer (5 mM Tris-HCl, no NaCl) in the absence of Nonidet P-40. After 15 min on ice, the lysates were pelleted for 5 min at 1500 rpm at 4°C and the clarified supernatants removed. 
Aliquots of cell lysates were incubated with the indicated antibody overnight at 4°C and then with 80 l of a 1:1 slurry of Protein A or Protein G-Sepharose beads for 45-60 min. Beads were pelleted and washed twice with 1 ml of lysis buffer containing Nonidet P-40 and once with 1 ml of lysis buffer without Nonidet P-40 and then used for immunoblots. For kinase assays, the beads were washed an additional time with lysis buffer without Nonidet P-40 and then twice with 1 ml of kinase buffer (see below). Solid Phase Binding Experiments-To assess interactions of PAK1 with GST-Nck fusion proteins, COS-7 cell Nonidet P-40 lysates were incubated for 2 h at 4°C with equivalent amounts (5 g) of pure GST fusion protein and then washed as described above prior to SDS-polyacrylamide gel electrophoresis and transfer to nitrocellulose. Gel electrophoresis, transfer of proteins to nitrocellulose membranes, and immunoblotting were performed as described in Ref. 24. RESULTS AND DISCUSSION Nck Interacts with PAK1 in Vitro through Its Second SH3 Domain-PAKs have a carboxyl-terminal Ser/Thr kinase catalytic domain that is highly conserved in the Ste20-related kinases. In contrast, the amino-terminal regulatory domains of PAKs containing the GTPase binding site are relatively divergent. Studies in our laboratory on the ability of PAK1 to regulate the actin cytoskeleton have indicated the importance of the amino terminus of PAK for interactions with the actin cytoskeleton. 2 We therefore initiated experiments to identify proteins that interact with the amino terminus of PAKs. As shown in Fig. 1, we incubated lysates from COS-7 cells that were transiently transfected to overexpress PAK1 with various purified GST fusion protein constructs, washed them, and evaluated binding by immunoblotting. PAK1 did not bind to control GST-coated beads (lane 1) nor to several other GST fusion proteins including the SH3 domains from p120 Ras GAP and the SH2 domain of Nck (data not shown). In contrast, PAK1 bound effectively to full-length Nck (lane 2). Using the same methods, we examined binding to each of the individual SH3 domains of Nck, as well as to a construct encompassing both the second and third SH3 domains. PAK bound to the second SH3 domain but only weakly interacted with the first or third SH3 domains. Binding to the combined second and third SH3 domains was always greater than to the second SH3 domain itself, suggesting that the weak interaction with the SH3 third domain might synergize with the binding to the SH3 second domain. In contrast, the combined second and third Nck SH3 domain constructs bound PAK1 with comparable or slightly greater efficiency than did full-length Nck, indicating there was no additional synergy when the first SH3 domain was also present. Nck Binds to the First Proline-rich Motif of PAK1-PAK1 has two proline-rich motifs in the amino terminus that have the characteristic PXXP (where X indicates a variable amino acid) structure of SH3 binding domains. These consist of the sequences PPAPP (aa 12-16) and PLPPNP (aa 40 -45). We prepared peptides encompassing these domains and determined their ability to compete with PAK1 for Nck binding. As shown in Fig. 2, only the peptide (QDKPPAPPMRN) including the first putative SH3 binding site, but not the second (SKPLP-PNPEEK), effectively blocked PAK binding to Nck. 
Both a control peptide from another site in PAK1 rich in proline residues (DATPPPVIAPRPE, aa 182-194) and a peptide derived from an SH3 binding domain on the p85 subunit of phosphoinositide 3-kinase (KISPPTPKPRPPRPTPVAPG) were unable to block binding. Similar results were obtained using the peptides in a dot blot assay. The peptide results were confirmed by mutagenesis of proline 13 to alanine in order to disrupt the first SH3 binding motif. This mutation in PAK1 caused at least a 10-fold decrease in Nck binding affinity, as determined by semiquantitative dot blot assays (data not shown). Nck therefore binds, at least partially, through the first amino-terminal SH3 binding motif of PAK1. Interestingly, the amino acid sequence of this site is nearly identical in human PAK2 and mouse PAK3, and mouse PAK3 was recently reported to bind to full-length Nck in vitro. Therefore, we predict that interaction of Nck with all three PAK family kinases may be important for their regulation. PAK1 Interacts with Nck in Cells-In order to determine whether Nck binds to PAK1 in intact cells, we transiently overexpressed Nck in COS-7 cells and then immunoprecipitated with specific antibodies to evaluate the association with endogenous PAK1 (Fig. 3A). We observed that we precipitated a 68-kDa kinase, which autophosphorylated only in the presence of recombinant GTP␥S-loaded Rac or Cdc42 (Fig. 3A, first panel). This kinase co-migrated exactly with PAK1 precipitated from the same cells with a specific PAK1 antibody. Identification of this kinase as PAK1 was supported by the fact that we could also directly detect PAK1 in the Nck precipitate by immunoblotting (Fig. 3A, second panel). Conversely, immunoprecipitation with the PAK1 antibody brought down Nck as well (Fig. 3A, third panel). These data indicated that Nck FIG. 1. PAK1 binds to the second SH3 We have shown that PAK1 activity is stimulated by PDGF in Swiss 3T3 cells. 3 We therefore examined the interactions of endogenous Nck and PAK1 in Swiss 3T3 cells in the presence or absence of PDGF (Fig. 3B). In non-stimulated cells, Nck coprecipitated with a Ser/Thr kinase that co-migrated with endogenous PAK1 and whose activity was stimulated by addition of recombinant Rac-or Cdc42-GTP␥S (Fig. 3B, first panel). Again, we also observed that this associated kinase was PAK1 as determined by immunoblotting (Fig. 3B, second panel). We examined lysates from Swiss 3T3 cells after stimulation with PDGF for various times. At all times examined from 0 to 10 min, we observed an association of PAK1 with Nck. This is consistent with the report by Chou and Hanafusa that the association of a 68-kDa Ser/Thr kinase with Nck in cells was constitutive. In Fig. 3B, second panel, we show that there is a 2.5-fold increase (determined by densitometry) in the amount of PAK1 found in the Nck precipitate after treatment with PDGF for 2 min, suggesting that receptor activation increases or stabilizes the binding of these two proteins to each other. In contrast, treatment of the cells with phorbol 12-myristate 13acetate, which has been reported to stimulate the phosphorylation of Nck, did not increase the level of Nck-associated PAK1. PAK1 Phosphorylates Nck at Sites Distinct from PKA-We examined whether the interaction of Nck with PAK1 had any functional consequences on PAK1 activity in vitro. Binding of Nck to PAK1 had no direct stimulatory effect on PAK1 catalytic activity nor did it alter the ability of PAK1 to interact with and be stimulated by Rac-or Cdc42-GTP␥S (data not shown). 
However, we found that Nck served as an effective substrate for phosphorylation by recombinant constitutively active GST-PAK1 in vitro (Fig. 4). Full-length Nck was phosphorylated as efficiently by PAK1 as it was by PKA, for which Nck is a known substrate. Immunoprecipitated and non-activated wildtype PAK1 itself catalyzed very little phosphorylation of Nck but did phosphorylate when stimulated by Cdc42-GTP␥S, suggesting that activation of a PAK1-Nck complex by GTPase results in Nck phosphorylation. We used various GST-Nck fusion proteins to evaluate the regions on Nck that became phosphorylated and observed that PAK1 phosphorylated fragments containing the first SH3 domain (aa 1-68) and the SH2 domain (aa 275-377) of Nck. PAK1 also phosphorylated addi-tional sites in the second and third SH3 domain constructs that were not phosphorylated by PKA. These constructs encompass aa residues 101-166 and 191-257, respectively, of Nck, a region that contains multiple serine and threonine residues. The fusion protein containing both the second and third SH3 domains also includes residues 167-190, and this construct serves as a substrate for both PAK1 and PKA. Thus, the association of an activated PAK1 with Nck catalyzes the phosphorylation of Nck on multiple sites. The significance of these phosphorylations on Nck activity is unknown. In conclusion, the data we have presented here establish the specific interaction of PAK1 with the adapter protein Nck. In intact Swiss 3T3 cells, this interaction is increased by stimulation through the PDGF receptor, suggesting that Nck serves as a means to couple PAK1 activity to PDGF receptor activation. Since Nck has been reported to co-precipitate with the activated PDGF and EGF receptors, it will be of interest to examine whether Nck physically links PAK to such growth factor receptors. We have detected PAK1 in EGF receptor precipitates from A431 cells but have not yet determined if this association is mediated via Nck. 4 We have shown that the interaction of Nck with PAK1 involves the second SH3 domain on Nck and the first proline-rich SH3 binding motif at the amino terminus of PAK1. In other studies, we have established that this particular PAK1 SH3 Signaling Complexes Involving PAK1 25748 binding domain is critical to the activity of PAK1 to stimulate assembly of the actin cytoskeleton. 2 Mutations in the PAK1 amino terminus that increase its ability to assemble actin also enhance the affinity of PAK1 for binding Nck. 2 These data suggest that the interaction of proteins with SH3 binding domains at the PAK1 amino terminus is a dynamic process that contributes to the regulatory effects of PAK1 on cell function. The role which the Nck-PAK interaction plays in this process remains to be determined in future studies.
<reponame>EdJoPaTo/WebsiteChangedBot import beautify from 'js-beautify' import jsonStableStringify from 'json-stable-stringify' import {cachedGot} from './got.js' import {Mission} from './mission.js' const JAVASCRIPT_REQUIRED_WORDS = ['function', 'var', 'const'] const BEAUTIFY_OPTIONS = { end_with_newline: true, eol: '\n', indent_with_tabs: true, max_preserve_newlines: 2, } export async function getCurrent(entry: Mission): Promise<string> { const response = await cachedGot(entry.url) const {body} = response // eslint-disable-next-line default-case switch (entry.type) { case 'html': if (!/<html/i.test(body)) { throw new Error('The response body does not seem like html') } return beautify.html(body, BEAUTIFY_OPTIONS) case 'js': if (!JAVASCRIPT_REQUIRED_WORDS.some(o => body.includes(o))) { throw new Error('The response body does not seem like JavaScript') } return beautify.js(body, BEAUTIFY_OPTIONS) case 'txt': return body case 'xml': if (!/<\?xml/i.test(body)) { throw new Error('The response body does not seem like xml') } return beautify.html(body, BEAUTIFY_OPTIONS) case 'json': return jsonStableStringify(JSON.parse(body), {space: '\t'}) // Typescript detects missing cases in this switch case. No need for default then. // default: throw new Error(`A hunter for this mission type was not implemented yet: ${(entry as any).type as string}`) } }
Ntuzuma police are investigating a case of murder, following the grisly discovery of a woman's body in the Lindelani area yesterday. It's believed she had been stabbed in the neck and then thrown from a cliff. Officers from the Durban Search and Rescue Unit were called to the scene late last night and had to use a rope rescue system to haul her body up. Police spokesperson, Nqobile Gwala says the victim has not yet been identified. "Members of the search and rescue, together with the K9 Unit were called out to recover the body of an unknown woman. The victim had been thrown down a cliff," Mbhele said. Mbhele says the body was found to be in the early stages of decomposition. "A case of murder was opened for further investigation at the Ntuzuma police station."
export default class DeleteArticleRequest {
  public slug: string;

  constructor(slug: string) {
    this.slug = slug;
  }
}
Field of the Invention This application relates to a point-of-sale. Particularly, this application relates to receiving and processing of voice input at a point-of-sale. Description of the Related Art A Point of Sale (POS) is the place where a retail transaction between a customer and a merchant is completed. At the POS, traditionally, a clerk uses a cash register, or a comparable POS device, to allow the customer to make a payment to the merchant in exchange for goods and/or services. At the POS, the clerk uses such devices to, for example, calculate the amount owed by the customer for the goods and/or services in question. The merchant also will provide options as to the form of payment used by the customer to pay for the goods and/or services. Once the payment is completed, a receipt for the transaction is issued. POS devices can provide this register functionality, as well as additional item, inventory, or customer functionality, such as the ability to track and record customer orders, manage inventory, and so on. Mobile POS (MPOS) devices, which are un-tethered from the cashier lanes, are useful for retailers who desire more personalized customer interaction, such as with regard to lifestyle and fashion brands, or who need to be able to reduce long customer wait times, that can occur in various situations, such as during peak hours or holidays. These MPOS devices provide fast scanning of items and can take most forms of payment. Under normal circumstances, they work quickly and efficiently. However, when item bar codes are difficult to scan, or when a transaction requires special processing (e.g., when attempting to apply an employee discount), the operator needs to do more than simply scan or press a couple of buttons. Various sales processes can be invoked by the MPOS devices to guide the clerk through a series of steps, such as by displaying a series of screens and menu options for the clerk to complete. On many mobile devices that have a touch screen, or even a dedicated hardware keyboard, using the MPOS device's manual controls to enter lengthy information or navigating through a chain of screens can become tedious, slow, and error prone.
May the Force Be with You! ForceVolume Mapping with Atomic Force Microscopy Information of the chemical, mechanical, and electrical properties of materials can be obtained using force volume mapping (FVM), a measurement mode of scanning probe microscopy (SPM). Protocols have been developed with FVM for a broad range of materials, including polymers, organic films, inorganic materials, and biological samples. Multiple force measurements are acquired with the FVM mode within a defined 3D volume of the sample to map interactions (i.e., chemical, electrical, or physical) between the probe and the sample. Forces of adhesion, elasticity, stiffness, deformation, chemical binding interactions, viscoelasticity, and electrical properties have all been mapped at the nanoscale with FVM. Subsequently, force maps can be correlated with features of topographic images for identifying certain chemical groups presented at a sample interface. The SPM tip can be coated to investigate-specific reactions; for example, biological interactions can be probed when the tip is coated with biomolecules such as for recognition of ligandreceptor pairs or antigenantibody interactions. This review highlights the versatility and diverse measurement protocols that have emerged for studies applying FVM for the analysis of material properties at the nanoscale. ■ INTRODUCTION Force−volume mapping (FVM) is a characterization mode of scanning probe microscopy (SPM) that is used to map material properties, which can be correlated directly with successively acquired topography images. Scanning probe microscopy encompasses a family of nanoscale measurements which use a microfabricated probe for sample characterizations. The term atomic force microscopy (AFM) is commonly used to describe SPM protocols for imaging surface morphology with nanoscale resolution and has also been referred to as scanning force microscopy. To accomplish FVM, an array of force measurements is obtained point-by-point in a grid pattern, using an approach developed by Radmacher et al. in 1994. 1 Properties that have been measured with FVM include adhesive forces, viscoelasticity, elastic modulus, chemical forces, dielectric conductance, and electrical properties. The grid of measurements can be compared directly with topography images to generate a 3D volume map. Unfortunately, the FVM mode usually requires longer acquisition times than conventional SPM imaging, which has limited the broad application of this characterization approach. Typically, FVM is accomplished by acquiring multiple measurements of forces or material properties along a defined grid of points within a selected 3D region of a sample, as depicted in Figure 1. An AFM probe is used for mapping tip− sample forces and for characterizing topography features, and resolution at the atomic level can routinely be achieved. Features within the corresponding topography frames can be correlated with FVM measurements to provide insight of structure/property interrelationships with nanoscale resolution. Data related to height, stiffness, tip−sample adhesion, energy dissipation, and mechanical or electrical properties can be mapped for samples using FVM, with instrument operation in ambient air, liquid media, or vacuum environments. In this review, we describe how FVM has been applied for characterizing multiple types of materials, ranging from biological samples to polymers and inorganic solids. 
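To fix ideas about how the grid of measurements sketched in Figure 1 is organized, a minimal acquisition loop is shown below in Python. The measurement function is a placeholder assumption, since the review does not tie FVM to any particular instrument interface; only the grid bookkeeping is intended to be illustrative.

import numpy as np

rng = np.random.default_rng(0)

def measure_curve(x, y, samples=512):
    """Placeholder for one approach-retract deflection record at position (x, y)."""
    return rng.normal(0.0, 0.1, samples)

def acquire_force_volume(scan_size=4e-6, n=32, samples=512):
    """Visit an n x n grid over a square scan_size x scan_size area, one curve per pixel."""
    positions = np.linspace(0.0, scan_size, n)
    volume = np.empty((n, n, samples))
    for i, y in enumerate(positions):
        for j, x in enumerate(positions):
            volume[i, j] = measure_curve(x, y, samples)
    return volume

volume = acquire_force_volume()
print(volume.shape)   # (32, 32, 512): one deflection curve stored per grid point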
Measurements of nanomechanical properties have been widely reported for biological samples; however there are also studies which used FVM to characterize organic films, inorganic materials, and polymer films. Measurement protocols and example studies will be described for FVM, to provide a fresh perspective about the capabilities and limitations of this highly versatile measurement tool. Volume Mapping with Force−Distance Curves. Twodimensional force volume maps can be constructed by collecting multiple force curves over selected areas of a sample within a defined grid. Force−distance curves are a graphical representation of the applied force versus the tip−sample distance and are acquired by monitoring the cantilever deflection and piezo displacement during an approach−retract cycle of the probe. 2 Force spectroscopy has been broadly applied to evaluate properties such as elasticity, stiffness, and adhesion at the nanoscale. 3 Force curves obtained with the force−volume mode are converted by post-processing, using software algorithms to extract force−indentation curves. The force, F, is a function of the piezo displacement, z, and can be expressed as The spring constant, k, is determined by the geometry of the SPM tip and Young's modulus, E, of the cantilever material, expressed as follows: These equations can be used to derive the spring constant for tips that have a rectangular geometry for the cantilever, with variables w = width, t = thickness, and l = length. Young's modulus can be derived from k. For force curves that have a linear approach, k can be derived experimentally from the slope of the curves. For FVM mode, multiple force curves are acquired at points of a defined grid pattern. The interactions between the tip and sample are measured locally and mapped point-by-point for a defined 3D region using force−distance curves. An example profile for an approach−retreat cycle is shown in Figure 2, which plots the tip deflection as a function of the tip−sample distance. The approach curve is highlighted in red, and the retraction is profiled in blue. Starting from the right there, is no interaction between the tip and sample initially (red line), which indicates zero force when the tip is far away from the surface. As the probe is brought closer to the sample, the tip will "snap-in" to touch the surface due to attractive forces. The contact line is shown on the left side of the plot for both the approach and retract portions of the curve, and typically there is a slight hysteresis which is revealed by the separation of the two lines. The region for the "jump-from-surface" is indicated when the tip deflection is negative (blue line); this portion of the curve is used to measure adhesion forces. Strong adhesion can result from the capillary force of surface films of water, particularly when measurements are made in humid environ- ments. The magnitude of the adhesion can be reduced by imaging in liquids. After the tip has been lifted from the sample, the deflection returns to zero at the right side (blue line), indicating the tip is no longer in contact with the sample. Example parameters that have been used with FVM, such as the grid sizes and types of probes which have been applied for studies of diverse sample materials are summarized in Table 1. 4 Experimental factors such as the imaging environment, temperature, and type of probe provide flexibility for designing experiments. 
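The two expressions referenced earlier in this subsection, the force as a function of displacement and the spring constant of a rectangular cantilever, appear to have been dropped during text extraction. Assuming the standard treatment, with E the Young's modulus of the cantilever material and w, t and l its width, thickness and length, the usual forms are:

\[ F = k\,z \]
\[ k = \frac{E\,w\,t^{3}}{4\,l^{3}} \]

These are the textbook relations for a rectangular cantilever and are offered here as a reconstruction; the review's original equations may have used a different sign convention or notation.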
For example, dynamic changes for samples can be evaluated by careful selection of the pH, temperature, or ionic strength or from the nature of the solvents used as imaging media. For FVM studies, typically, a square grid is defined, with 16−256 points tested in the x and y directions. The sizes of the areas that are mapped range from nanometers to several micrometers, to be chosen according to the nature of the sample. The elastic and adhesive properties of pathogenic bacteria, Streptococcus pneumoniae (S. pneumoniae) and S. mitis, were investigated using FVM, as reported by Marshal et al. 4e Forcecurve profiles were obtained for the center and edge areas of individual bacterial cells in PBS buffer, to evaluate the mechanical properties for cells containing a polysaccharide capsule compared to unencapsulated cells. The retraction curves from force volume maps were used to calculate the adhesion force between an AFM tip and bacterial cells for the strains, as shown in Figure 3 with FVM maps of the central areas of individual bacterium. Adhesion is the amount of force required to pull the tip away from the surface and was reported to measure less than 1 nN for each pull-off event, during retract cycles of force curves. Distinct force profiles were measured for each bacterial strain, shown in Figure 3A−E. The tip did not adhere well to the unencapsulated samples ( Figure 3B,D). The strongest adhesion was measured for the S. mitis wild-type bacterium encapsulated with the SK142 strain, shown in Figure 3C. Such studies provide information on the adhesion of bacteria to host surfaces which can be correlated with biochemical structure. Force−distance measurements with FVM were used to examine self-assembled monolayers (SAMs) of diacetylene thiol before and after polymerization, as reported by Wu et al. 4b Thiol-terminated polydiacetylenes have conjugated backbones that can be polymerized on surfaces by exposure to UV radiation. Investigations with FVM were used to compare the load-dependent frictional behavior of the diacetylene SAMs before and after polymerization. Contact mode AFM was used to map areas that measured 250 250 nm 2 with FVM (16 16 grids) to construct friction versus load curves for samples. Studies revealed changes in local ordering and frictional response after polymerization. Elasticity and adhesion force maps were acquired for several strains of lactic acid bacteria by Schaer-Zammaretti and Ubbink. 4c Force−distance curves (32 32 FVM grid) were obtained with silicon nitride nanoprobes at an applied force of 1 nN for areas measuring 1 1 m 2. Samples were studied in saline buffer, with controlled pH to compare differences for strains of Lactobacillus. Force maps were found to correlate with cell morphology and heterogeneities of surface constituents such as proteins and polysaccharides. A biomimetic membrane comprised of a triblock copolymer film with blocks of poly(dimethylsiloxane) and poly(2methyloxazoline) was studied using FVM by Rein et al. 4h Force maps of the snap-in adhesion were mapped for the fluid membrane using two types of AFM probes, over an area of 80 80 m 2. Commercial probes having a tetrahedral geometry with a force constant of 2 N/m were compared against ultrasharp silver/gallium probes with a force constant of 6 N/ m. Fewer artifacts were detected for the nanoneedle probes made of silver/gallium. The mechanism and dynamics of surface adsorption of henegg-white lysozyme to mica substrates was investigated in situ using FVM and tapping mode AFM, by Kim et al. 
4g Timelapse images revealed changes over time for lysozyme adsorption onto mica from an aqueous solution (2 g/mL). Force−volume mapping was used to evaluate tip−sample interactions for distinguishing areas of protein clusters compared to uncovered substrate. Analysis and Post-processing of FVM Results. For data acquisition with FVM, each point (or pixel) of the FVM image grid contains 3D position information on xyz coordinates, as well as the approach and retract portions of force measurements. After collecting measurements with FVM, features from topographic images can be correlated with force curves acquired at each pixel of a defined grid volume. For a particular point of the grid, the values of xyz coordinates will provide data for volume, alongside the changes for cantilever deflection. Depending on the size of the grid, the data files can be quite large for subsequent analysis or postprocessing. Realtime or postprocessing of each data point of the FVM grid is used to extract information from force curves by using software programs or designed algorithms. 5 A number of programs are available for analysis of FVM, data and key features of several such programs are summarized in Table 2. 6 Vendor-sourced software packages for commercial instruments have been developed for real-time analysis of measurements. There are also open source programs such as Gwyddion, Scanning Probe Image Processor (SPIP), and Profilm Online which have been used for postprocessing with FVM microscopy images. Instrument manufacturers have developed multiple approaches to accomplish rapid positioning and acquisition of data for FVM characterizations. As the tip is brought in and out of contact with the surface, rich information of sample properties can be acquired in real time for point-by-point comparison to topographic features. For example, the Pulsed Force Mode (WiTec) can be applied using scanning speeds that are comparable to contact mode imaging for SPM studies with chemical force microscopy, electrostatic force imaging, electrochemistry, and adhesion measurements for operation in air or in fluids. 6s The nanomechanical measurement modes of conventional FVM, Quantitative Imaging (QI), and PeakForce Quantitative Nanomechanical Mapping (PF-QNM) were compared for bacterial the samples by Smolyakov et al. 7 All of the three modes, (FVM, QI, and PF-QNM) were found to provide consistent results for studies of the morphology and elastic modulus maps of living Pseudomonas aeruginosa bacterial cells; however, shorter acquisition times and higher resolution were considered to be advantages with the QI and PF-QNM modes. Probing the Nanomechanical Properties of Biological Samples with Force−Volume Mapping. The mechanical properties of biological samples such as tissue and cancer cells, amyloid fibrils, and bacteria have been measured using FVM to furnish insight into the elastic response and to be correlated with morphology. For example, FVM has been used to probe the development of several types of cancer including liver, cervical, breast cancer, and metastatic tumors residing in the brain. 8 The progression of a normal cell to become a metastatic cancer cell follows complex structural changes in the extracellular matrix and cellular architecture, which can be studied at the level of individual cells using the FVM mode. 
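As a minimal example of the post-processing workflow described in the Analysis and Post-processing subsection above, the sketch below turns a grid of retract curves into an adhesion map by taking the pull-off (most negative) deflection at each pixel and converting it to force with the cantilever spring constant. The array layout, calibration constants and synthetic data are assumptions made for illustration and do not correspond to any particular instrument's file format.

import numpy as np

# Assumed layout: retract_curves[i, j, :] is the cantilever deflection (nm)
# recorded while retracting the tip at pixel (i, j) of an N x N FVM grid.
n, samples = 32, 512
rng = np.random.default_rng(0)
retract_curves = rng.normal(0.0, 0.1, size=(n, n, samples))
retract_curves[..., 200:220] -= 1.5      # synthetic pull-off dips for the demo

spring_constant = 0.06          # N/m, nominal cantilever spring constant (assumed)
deflection_to_m = 1e-9          # conversion from recorded nm to metres

def adhesion_map(curves, k, to_metres):
    """Pull-off force per pixel: F_adh = k * |most negative deflection|."""
    min_deflection = curves.min(axis=-1)                 # most negative value per pixel
    return k * np.abs(np.minimum(min_deflection, 0.0)) * to_metres

forces = adhesion_map(retract_curves, spring_constant, deflection_to_m)
print(f"median adhesion over the map: {np.median(forces) * 1e9:.2f} nN")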
A sample of breast cancer cells was examined by FVM which revealed that healthy breast cells, benign cancer cells, and metastatic cancer cells each display a distinct mechanical signature of stiffness properties, as reported by Plodinec et al. 8c The progression of a normal cell to become a metastatic cancer cell follows complex structural changes in the extracellular matrix and cellular architecture. Stiffness measurements at the level of individual cells were examined by FVM, which revealed that healthy breast cells, benign cancer cells, and metastatic cancer cells each display a unique profile of elastic modulus; shown in Figure 4. Healthy cells isolated from a breast biopsy and benign breast cancer cells presented a uniform stiffness characterized by a single distinct peak for values plotted with 24 24 pixel maps, whereas malignant cancer cells displayed a broadened distribution of measurements with lower values for the elastic modulus. The histogram of the distribution of measurements for healthy cells isolated from a breast biopsy exhibited a unimodal stiffness distribution of 1.13 ± 0.8 kPa ( Figure 4A), and the benign breast cancer biopsy sample exhibited a stiffer and broader unimodal distribution of 3.68 ± 1.92 kPa ( Figure 4B). The metastatic cancer biopsy displayed a bimodal distribution of stiffness ( Figure 4C), with two prominent peaks at 0.57 ± 0.16 kPa (first peak) and 1.99 ± 0.73 kPa (second peak). The mechanical properties of articular cartilage of human, porcine, and murine samples were investigated using FVM mode by Darling et al.; example results are presented in Figure 5. 9 The AFM probe was modified with a borosilicate glass sphere to map the site-specific elastic modulus for sectioned tissue samples of articular cartilage. The pericellular matrix (PCM) of the articular cartilage was shown to be biochemically and structurally distinct from the extracellular matrix (ECM), and samples of human, porcine, and murine, which differed significantly in stiffness when comparing maps of elastic modulus. Stiffness differences for cartilage from human ( Figure Within regions of interest located in light microscopy images, 16 sites/region were sampled to generate 900−1600 indentation sites. An example test site from each sample ( Figure 5A,D,G) is displayed as a combined topography/FVM image. The distribution is presented using the compression behavior of amyloid fibrils for radially applied force that was examined using FVM histograms ( Figure 5C,F,I). A combined topography/FVM image and the contour maps ( Figure 5B,E,H) show the location of areas of the PCM that were sampled. A comparison of the distribution of the elastic moduli measurements for the PCM and ECM is represented in histograms ( Figure 5C,F,I), showing that the PCM is characteristically less stiff than the ECM, and the distribution of elastic moduli values is notably broader for the ECM than for the PCM. Investigations with FVM were used to study the stiffness of the outer membrane of individual bacterial cells of Escherichia coli by Longo et al. 10 Commercial probes with nominal spring constants of 0.06 N/m were used to acquire indentation measurements within scan areas of 2, 5, and 10 m regions. An example image for an area containing four bacterial cells is revealed in the topography frame of Figure 6A. Each pixel of the corresponding FVM image ( Figure 6B) represents the sites where force−distance measurements were acquired. 
Force measurements were converted to values of Young's modulus.

The compression elasticity of amyloid fibrils was examined using FVM mode by Zhou et al. 11 For FVM experiments, a glucagon peptide consisting of 29 amino acid residues was chosen for characterization during selected intervals of fibrillogenesis. High resolution images (300 × 300 nm²) of fibrils were acquired to define areas with 32 × 32 grid maps to measure elasticity changes at three positions along the length of the glucagon fibrils. Structural heterogeneities in elasticity were revealed by comparisons of twisting conformations and fibril thickness.

Measuring the Young's Modulus of Soft Materials with FVM. Force−volume mapping can be applied to measure the mechanical properties of soft samples at the scale of nanometers with multiple approach−retract cycles, which can then be used to derive highly local measurements of Young's modulus. Nanomechanical mapping of elastic response using FVM has been applied to sample materials such as biological cells and polymer blends. 12 Several models have been employed for calculating elastic modulus values from SPM force curves. 13 Strategies for extracting information about elastic properties from force−distance measurements have been previously described by Bahrami et al. 13a and by Lin et al. 13d The most widely used models are based on the Derjaguin−Muller−Toporov (DMT) 13b and the Johnson−Kendall−Roberts (JKR) models that describe relationships between the applied force, adhesion force, and the tip radius for calculations of nanomechanical properties. 13c Both the DMT and JKR approaches are extensions of the Hertzian contact model.

The Young's modulus of bacterial samples was measured using FVM, and example measurements are presented in Figure 7, as reported by Gaboriaud et al. 12f A grid map (10 × 10) generated with force−volume mode is shown in Figure 7A for a region of the surface of a single bacterial cell. Within the 4 × 4 μm² area of the digital FVM image, an approach−retract cycle was acquired at each pixel, and the colors indicate the indentation derived from the deflection of the tip; the brighter squares correspond to softer areas. Examples of individual force approach and retract cycles are plotted in Figure 7B for four distinct areas of the sample. Regions of the substrate (i, iv) show the linear relationship of hard-wall repulsion when the tip and the sample are brought into contact, which indicates that the substrate is not deformable. For force profiles of regions of the cell (ii, iii), the curve shapes are not linear at low load force and large indentation occurs at high load, which indicates a lower elastic modulus compared to that of the rigid substrate.

Dynamic studies with FVM were used to measure the elastic modulus of neuronal soma cells at selected temperatures by Sunnerberg et al. 12a Force versus indentation curves were obtained using a spherical AFM tip for samples of soma cells that measured 12 ± 4 μm in diameter. A 16 × 16 grid of force versus indentation curves was acquired for each cell sample. The indentation curves were fitted with a Hertzian model for a spherical indenter to obtain values of the local elastic modulus. The FVM experiments revealed that the modulus increased with a decrease in temperature, for measurements made at 37 and 25 °C. The effects of glutamate-induced excitotoxicity were studied for neuron cells to evaluate changes in mechanical properties, cell volume, and structure using FVM, by Efremov et al.
12b Nanomechanical maps for areas measuring 80 × 80 μm² were examined with grids of 40 × 40 measurement points that were characterized with a fast force−volume mode. Values of Young's modulus were calculated by fitting FVM force curves with Hertz's model. Hyperosmotic stress was applied to the cells by adding sucrose to the cell medium, and time-lapse experiments were conducted to evaluate whether cells recovered from the stress.

Young's modulus was determined quantitatively at the nanometer level using FVM for a sample of a biphasic polymer by Reynaud et al. 12c The biphasic polymer system was composed of poly(methyl methacrylate) (PMMA) in a polyacrylate matrix. Indentation measurements were obtained for areas measuring 100 × 100 μm², with a maximum cantilever deflection setting of 15 nm. The modulus measurements for the polymer blends obtained with FVM were in close agreement with the values measured for control samples of the pure polymers.

Values of the Young's modulus of polystyrene−polybutadiene polymer blends were studied using FVM by Kramer et al. 12d A stiffness map is shown for a 6 × 6 μm² area of the polymer film that was acquired with a 100 × 100 force−volume grid (Figure 8A). The brighter white regions correspond to stiffer polymer domains of polystyrene, compared with the softer black regions of polybutadiene. The arrow points to one of the gray regions of the matrix, which has an intermediate stiffness value between 0.1 and 0.7 MPa. To quantitatively measure Young's modulus for the sample, force−distance curves were acquired for the blend using a cantilever with a spring constant of 8 N/m and a tip radius of 110 nm. To calculate values of Young's modulus, models from both the DMT and JKR theories were used for post-processing of the data obtained from force curves.

Mechanical properties of adhesion, stiffness, and dissipation were evaluated for samples of polypropylene using volume mapping with Peak Force QNM, as reported by Voss et al. 12e Areas of crystalline isotactic and amorphous elastomer were characterized, and changes were evaluated after steps of wet chemical etching for ablation of surface layers. Maps of force curves were collected for measuring sample properties with a scan rate of 1 Hz (1000−2000 curves/s). 14 Force curves were fitted with the DMT model to extract elastic properties. The combination of etching steps with volume mapping provided information about structural defects and inhomogeneities with nanoscale resolution. Peak Force QNM has been applied for studies of nanomechanical properties such as elastic modulus, stiffness, and adhesion for a diverse range of polymer samples such as thermoplastic elastomers, polyamide/polypropylene blends, polysaccharide films, polyamide/fluoroelastomer blends, polycaprolactone fibrils, carbon fiber and poly(ether ether ketone) blends, poly(methyl methacrylate) layers grafted on Ti, and films of polystyrene and poly(methyl methacrylate) blends, as well as for polyamide and cellulose nanofibers. 15

Natural diamond AFM probes with steel cantilevers were used for mapping indentation and elastic modulus of an epoxy molding compound using FVM, as reported by Germanicus et al. 16 A sample was prepared by incorporating silica beads in epoxy o-cresol novolac resins, which are used as plastic packaging for automotive and aerospace coatings and integrated microelectronic devices.
Indentation measurements acquired using SPM were compared to measurements acquired using Peak Force QNM at micrometer scales, and the average values of contact modulus obtained by the two techniques were found to be comparable. The DMT stiffness model was applied to calculate Young's modulus, and the authors reported higher spatial resolution and surface sensitivity for mechanical mapping of elastic areas of samples as compared to indentation measurements. Peak Force QNM was used for force−volume mapping with samples of elastomers, thermoplastics, and thermoset resins by Bahrami et al. 13a Force curves were evaluated and compared using both the DMT and JKR models. Parameters such as the tip−sample interaction area and contact radius were found to govern the spatial resolution for measuring adhesion forces and elastic modulus.

Mapping Electromechanical Properties with Piezoresponse Force Microscopy. For samples of ferroelectric materials, a mapping mode of piezoresponse force microscopy (PFM) has been developed for local characterization of ferroelectric domains. 17 The basic principle of PFM is based on detecting the deformation of the sample induced by an electrical bias voltage. In the instrument configuration of PFM, a function generator is used to apply an oscillating voltage to a conductive probe scanned in contact with the sample, and small deflections of the tip are detected with a lock-in amplifier. 18 Local changes of surface volume due to the piezoelectric effect can be evaluated with PFM; however, contributions from electrostriction, electrostatic forces, electrochemical strain, Joule heating, and polarization can complicate analysis and interpretation of measurements. 19 The electromechanical properties of a broad range of materials have been studied and mapped with PFM, such as inorganic ferroelectrics, piezoelectric materials, ceramics, and biomaterials. 20

Force−Volume Mapping of the Dielectric Properties of Bacterial Cells. The dielectric constants of bacterial membranes have been characterized using FVM, with spatial resolution at the level of individual cells. For example, bacterial cells of Pseudomonas aeruginosa (P. aeruginosa) were studied in ambient conditions under low humidity (<30%) using FVM to collect measurements of electrostatic forces, as reported by Checa et al. 21 A data set of electrostatic force microscopy (EFM) measurements collected with deflection and amplitude approach curves was acquired using an FVM grid of 128 × 128 pixels. Information from FVM data sets was used to identify differences in the dielectric properties of the cell wall and the cytoplasmic region and to map variations in the dielectric constant along the cell wall of individual bacterial cells. Raw data measurements from tip deflection and oscillation amplitude approach curves were converted into calibrated deflection and capacitance gradient data. A geometric model of measurement grid sites obtained from a topography image is shown in Figure 9A, for an area of a silicon oxide (SiO2) pillar. Examples of the grid maps for a bacterial flagellum and cell body are shown in Figure 9B,C, respectively. Corresponding FVM maps of the calculated values of dielectric constants are shown in Figure 9D−F below each topography grid map. The colored regions indicate the potential distributions that correspond to the grid positions underneath the tip. Measurements of dielectric constants were mapped for samples of the flagella and the bacterial cell body of P. aeruginosa.
The distribution of the values of the dielectric constant was uniform across areas of the SiO2 pillar and also for the regions of the bacterial flagella. However, the bacterial cell showed a non-uniform distribution of the measured dielectric constants, furnishing information regarding the heterogeneity of cell components. Using approach curves that were acquired with EFM, local maps of dielectric constants were generated by post-processing of the data to correspond with images of sample topography.

Maps of the conductivity and interfacial capacitance were acquired with FVM for the semiconducting channel of an organic field-effect transistor (FET) device, as reported by Kyndiah et al. 22 Scanning dielectric microscopy in liquid was combined with FVM to acquire electric force images (128 × 26 pixels) of a semiconducting film for regions of the transistor in the on-state. For mapping dielectric properties, a metallic (platinum-coated) probe was used as a gate electrode for the transistor and also for recording electrical forces during operation with FVM in liquid media. The variations in conductivity along the channel attributable to changes in the gate voltage were characterized. Maps of conductivity revealed heterogeneities at micrometer and sub-micrometer scales at the semiconductor/electrolyte interface.

Spatial Mapping of Proteins and Tissues with FVM. Spatial maps of biological samples such as proteins and tissue samples have been acquired with FVM mode to measure forces of adhesion, deformation, and specific chemical binding. 23 Experiments with FVM can be accomplished in liquid media such as buffers, to mimic physiological conditions of living cells and to prevent denaturation. Such studies provide insight into the interplay between physiological characteristics and biochemistry at small size scales when imaging individual cells. The collagen component in soft organ tissue sections of humans and mice was examined with FVM to measure elastic properties, as reported by Calo et al. 24 Measurements of mechanical properties were found to correlate with the amount and location of collagen in tissue samples that were analyzed with a tandem instrument consisting of an SPM system integrated with bright-field optical microscopy. Example FVM results are shown in Figure 10 for a tissue sample from human liver that was obtained from a patient with colon cancer. Bright-field microscopy images were used to locate certain areas of sectioned tissue for mapping, shown in Figure 10A,B. Maps of elastic modulus were acquired for 60 × 60 μm² areas, shown in Figure 10C. The deformation of the cantilever obtained with force-curve measurements at each pixel was used to derive values of Young's modulus by fitting with contact mechanics models. The mechanical properties were found to correlate with the density of collagen, as revealed by staining in optical microscopy images (Figure 10D,E). Areas of low collagen density are mapped with FVM in Figure 10F. Non-stained optical images of adjacent tissue sections shown in Figure 10B,E indicate the placement of the AFM probe for scanning the framed areas that were mapped with FVM.

High spatial resolution can be achieved with FVM for measuring chemical and physical properties of samples of the purple membrane of Halobacterium salinarum, as reported by Medalsy et al. 23d Purple membranes consist of lipid and a crystalline arrangement of the bacteriorhodopsin protein, which serves as a light-driven proton pump.
Images of purple membranes are shown with topography (Figure 11A) and corresponding FVM images. Force−distance curves were acquired at each pixel of the topography image (512 × 512 pixel grid) and were subsequently used to extract information on Young's modulus, sample deformation, adhesion force, energy dissipation, and trigger force error (Figure 11B−F). The trigger force error (Figure 11F) is a measure of the deviation of the instrument feedback loop in adjusting the trigger force; the values ranged from 50 to 300 pN. Such experiments showcase the capabilities of FVM mode for mapping and quantifying multiple parameters of chemical and physical properties of biological samples with nanometer resolution.

Chemical Force Microscopy Using FVM with Functionalized Probes. The FVM mode can be used in combination with chemical force microscopy, an imaging strategy in which tips are functionalized to characterize chemically specific interactions with a sample. Multiple approaches have been developed for tip functionalization to design chemically specific tip−sample interactions, and representative examples of tip coatings that have been used with FVM are presented in Table 3. 25 Wettability studies with mica substrates were completed using FVM with thiolated AFM probes, to simulate low salinity and nanofluid enhanced oil recovery (EOR) techniques, as reported by Afekare et al. 25d The tip coating strategy with thiolated molecules is illustrated in Figure 12. For FVM studies, a silicon nitride tip that was coated on the underside with a thin layer of gold was used to facilitate Au−S chemisorption of thiolated molecular coatings. Gold-coated probes were functionalized by simple immersion into dilute solutions of thiolated molecules to present methyl, carboxylic acid, and phenyl moieties on the tip surface. The thiol at one end of each molecule is linked to the gold coating via chemisorption, and a hydrocarbon spacer connects the designed functionalities (methyl, phenyl, and carboxylic acid) to be presented at the tip interface. Using force−volume mapping, adhesion maps (16 × 16 pixels) were obtained with tips terminated with alkyl, aromatic, and carboxylic acid functional groups to probe wettability interactions with mica as a model mineral substrate, as reported by Afekare et al. 25d The tip coatings were selected to simulate possible interactions in oil media. An example experimental series is shown in Figure 13, which reveals that as the brine salinity was decreased, the mean adhesion force was reduced. A series of FVM experiments was designed with nanofluids containing SiO2 nanoparticles dispersed in high salinity brine, which revealed that brine and silica nanofluids can be used to alter surface wettability.

Diverse chemical and biological interactions have been studied using force−volume mapping in the chemical force microscopy mode with AFM tips that are coated with biological molecules. For example, a gold-coated tip functionalized with Concanavalin A was employed for mapping the distribution of polysaccharides on living yeast cells (Saccharomyces cerevisiae), by Gad et al. 25a Experiments with FVM were conducted for mapping specific receptor−ligand binding interactions over areas spanning 3 × 3 μm² with a sampling grid of 16 × 16 pixels. Specific molecular recognition was accomplished using a tip coated with Concanavalin A for locally mapping the distribution of mannan, a natural polymer located on the exterior of yeast cells.
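Returning briefly to the multiparametric channels listed above for Figure 11, the energy dissipated in one approach−retract cycle is commonly computed as the area enclosed between the approach and retract force−distance curves. A minimal sketch with synthetic curves is shown below; the placeholder numbers are for illustration only.

import numpy as np

def dissipation_eV(z_nm, f_approach_nN, f_retract_nN):
    """Energy dissipated per approach-retract cycle.

    Trapezoidal integration of the hysteresis between the two curves;
    1 nN*nm = 1e-18 J, converted to electron-volts for convenience.
    """
    gap = f_approach_nN - f_retract_nN
    area_nN_nm = float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(z_nm)))
    return area_nN_nm * 1e-18 / 1.602e-19

# Synthetic single-pixel cycle: the retract branch shows extra adhesion (hysteresis).
z = np.linspace(0.0, 100.0, 500)                         # piezo distance (nm)
f_app = 0.02 * np.clip(50.0 - z, 0.0, None)              # nN, repulsive contact only
f_ret = f_app - 0.5 * np.exp(-z / 15.0)                  # nN, adhesive tail on retract
print(f"dissipated energy: {dissipation_eV(z, f_app, f_ret):.0f} eV")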
The distribution and association of an enzyme were mapped with FVM using biologically coated tips for samples of nerve cells, by Reddy et al. 25b The specific interaction between nerve growth factor, a neurotrophin protein, and the tyrosine kinase A enzyme was mapped on the outer membrane of living PC12 nerve cells within a fluid cell containing salt solutions. A gold-coated silicon nitride tip was functionalized using a thiolated cross-linker, succinimidyl 3-(2-pyridyldithio)propionate, which contains a hydroxysuccinimidyl group that reacts with amine groups of the protein. Force maps were generated by scanning the protein-coated AFM tip across a living cell to identify the distribution of the tyrosine kinase A enzyme on the outer surface of cells.

Experiments with FVM were used to map biochemically specific interactions between the cholera toxin B oligomer and its receptor, as reported by Vengasandra et al. 25c A gold-coated AFM probe was functionalized with a receptor ganglioside, GM1, using a cross-linking agent, succinimidyl 3-(2-pyridyldithio)propionate (SPDP). For sample preparation, a silicon substrate was coated with gold, and the cholera toxin B oligomer was attached using the SPDP cross-linker, which is reactive toward the amine groups of cholera toxin B. An example FVM image (Figure 14A) depicts an FVM grid map of the relative strength of the interaction between the oligomer and the ganglioside receptor, with strong attractive forces being generated at the darkest regions. Weaker attractive forces are apparent at the brighter pixel regions, which have lower concentrations of cholera toxin B. Force curves from three data points of the force volume image are shown in Figure 14B for pixel regions that have dark, intermediate, and light contrast. Measurements of the attractive forces from multiple approach−retraction cycles acquired with FVM could be specifically correlated with the rupture force of the bond between cholera toxin B and ganglioside GM1.

Chemical force microscopy studies with FVM were used to characterize samples of the freshwater diatom Nitzschia palea (N. palea), by Lavieale et al. 25e The distribution of adhesive molecules on regions of diatom cells was mapped with a coated probe to achieve a lateral resolution of a few nanometers, using a tip that was coated with either hydrophilic or hydrophobic alkanethiols. Diatom cells of N. palea were deposited on gold-coated glass slides that were functionalized with alkanethiols, either mercaptoundecanol or dodecanethiol. Gold-coated silicon nitride tips (nominal spring constant of ∼0.06 N/m) were functionalized with methyl-terminated dodecanethiol to generate a hydrophobic interface, and hydroxyl-terminated mercaptoundecanol furnished a hydrophilic coating. An example FVM experiment that was accomplished in buffer using a hydrophobic tip is shown in Figure 15A, for an area at the central region of a diatom cell. Approximately 20% of the areas that were mapped displayed adhesive events, which could be correlated with molecular density; the adhesion force measurements are plotted in Figure 15B. The rupture lengths ranged up to 4000 nm (Figure 15C), and the force signatures revealed approach−retract cycles with both single and multiple peaks with a sawtooth pattern, which is characteristic of the stretching and unfolding of intramolecular domains. The profile of a single peak corresponds to a single extension of the molecules without unfolding events.
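As a minimal sketch of how such unbinding events are typically read out from a single retract curve, the snippet below takes the pull-off (adhesion) force as the deepest attractive excursion and estimates the rupture length as the separation at which the attractive force last exceeds the noise floor; the curve is synthetic, and real sawtooth data would require smoothing and multi-peak detection.

import numpy as np

def pulloff_and_rupture(separation_nm, force_nN, noise_nN=0.05):
    """Return (pull-off force in nN, rupture length in nm) from one retract curve.

    Pull-off force = magnitude of the most negative force value; rupture length =
    separation at the last point where the attractive force exceeds the noise
    floor (a simple single-event estimate).
    """
    pulloff_nN = -float(np.min(force_nN))
    attractive = np.where(force_nN < -noise_nN)[0]
    rupture_nm = float(separation_nm[attractive[-1]]) if attractive.size else 0.0
    return pulloff_nN, rupture_nm

# Synthetic retract curve with a single adhesive event that releases near 60 nm.
s = np.linspace(0.0, 200.0, 1000)                               # separation (nm)
F = np.where(s < 60.0, -0.8 * np.sin(np.pi * s / 120.0), 0.0)   # force (nN)
f_ad, L_rup = pulloff_and_rupture(s, F)
print(f"pull-off force: {f_ad:.2f} nN, rupture length: {L_rup:.0f} nm")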
New insight was gained into the mechanisms of diatom adhesion using FVM experiments combined with chemical force microscopy.

The strength of unbinding forces was evaluated for specific antigen−antibody interactions using FVM mode with a biofunctionalized AFM probe, as reported by Avci et al. 23c Silicon nitride tips were functionalized with amine groups by treatment with ethanolamine−HCl, and then derivatized antibodies were linked to the tip with a spacer consisting of a heterobifunctional PEG 800 tether molecule. The linker molecule, with its free end conjugated to the antibody, was designed with a flexible spacer to increase the chances for the antibody to find and bind to the surface-bound antigen. Force−volume measurements were collected in buffer, in which a 4 × 4 μm² area was mapped with a 32 × 32 pixel grid to generate 1024 force−distance curves. Values of the unbinding force of the collagen antibody from its antigen were derived using code written in MATLAB for automated analysis of force curves. Approximately 20% of the force−distance curves that were acquired showed profiles of adhesive pull-off events, with single or multiple unbinding contact points that are characteristic of the stretching and unfolding of collagen fibrils.

■ CONCLUSION Protocols for studies with FVM mode can be applied to a broad range of nanomaterials and molecular systems to gain fundamental insight into chemical and biochemical properties and surface reactions. Dynamic protocols can be designed by changing the imaging environment, coating the AFM probe for chemical force measurements, or changing the instrument configuration for measuring electrical or physical properties. Key benefits of the FVM mode are the capabilities for correlating force measurements with specific features of AFM topography frames for unraveling the roles of structure and function at small size scales. Considerable progress has been made in reducing the time required for acquisition and post-processing of FVM images, as well as in handling the large electronic data sets that are generated. Future directions will be to continue to apply FVM measurements in novel ways to address research questions and to take full advantage of the capabilities for obtaining multiple channels of information with hybrid SPM measurement modes combined with topography analysis.
I hate the word “hero”. It seems like a nice enough word with a few uses, but I have come to hate it. A quick use of a Chrome extension shows me a quick definition. A person admired for certain virtues or feats, seems like a broad enough definition. Lately, however, the word “hero” is being used like bullets to defame and degrade someone’s accomplishments. Caitlyn Jenner may or may not be a hero. “Hero,” like most things, is subjective. I think she shows bravery by being true to herself, and bringing the trans* movement to the front lines of society and forcing it to be acknowledged by most people. Ultimately, I don’t think it really matters whether Caitlyn Jenner is a hero or not; she has been brave and continues to do so. Several news articles and blog posts have called Miss Jenner a hero, and that just makes some people uncomfortable. Rebuffs pop up everywhere: “He’s not a hero, he’s just an attention whore!” (If they’re transphobic, they’re almost guaranteed to be an asshole too). “Only police officers/soldiers/firefighters, etc. are heroes! Bruce Jenner is just a freak!” Let’s get a few things straight. We are a nation of selective hero-worship and our deity of choice changes with the wind. The more conservative parts of our nation will pick a golden calf to hold up whenever they need to degrade someone’s accomplishments related to human rights and dignity. The people they pick to idolize aren’t necessarily bad people, in fact many are good, honest, hard-working people. But pretending to care about these people whenever you’re too uncomfortable with someone’s accomplishments is clueless at best, malicious at worst. Soldiers do important things. While I am a staunch pacifist, I understand many people’s deep respect for the military, but people can’t throw them up as idols whenever the current person in the spotlight makes people uncomfortable. Bravery isn’t a contest. You don’t have to be the bravest to be considered on the list. Heroes don’t have to fit into narrow boxes. One person’s bravery and heroism doesn’t make another person’s invalid. Sometimes a hero can just be the person who says no. Sometimes a hero is the person who stands up when everyone demands they sit down. Sometimes a hero is the person who refuses to hide for the convenience of someone else’s worldview. Caitlyn Jenner is still a wealthy, famous and far removed from the vast majority of most trans* people’s experience, but she has given media attention to a group that is persecuted, beaten, disenfranchised, and murdered. If that isn’t heroic, I don’t know what is. Image; Flickr user Davidd
/* * * * Copyright (c) [2019-2021] [NorthLan](<EMAIL>) * * * * Licensed under the Apache License, Version 2.0 (the "License"); * * you may not use this file except in compliance with the License. * * You may obtain a copy of the License at * * * * http://www.apache.org/licenses/LICENSE-2.0 * * * * Unless required by applicable law or agreed to in writing, software * * distributed under the License is distributed on an "AS IS" BASIS, * * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * * See the License for the specific language governing permissions and * * limitations under the License. * */ package org.lan.iti.iha.security.userdetails; import cn.hutool.core.util.StrUtil; import lombok.Builder; import lombok.Data; import lombok.NoArgsConstructor; import lombok.experimental.Accessors; import lombok.experimental.SuperBuilder; import org.lan.iti.iha.security.authentication.CredentialsContainer; import java.io.Serializable; import java.util.List; import java.util.Map; /** * Subject * * @author NorthLan * @date 2021/7/28 * @url https://blog.noahlan.com */ @Data @Accessors(chain = true) @NoArgsConstructor @SuperBuilder public class UserDetails implements CredentialsContainer, Serializable { private static final long serialVersionUID = 8650541158511980145L; protected String id; protected String principal; protected Object credentials; protected List<String> authorities; protected List<String> roles; protected List<String> organization; protected Map<String, Object> additionalInformation; @Builder.Default protected boolean accountNonExpired = true; @Builder.Default protected boolean accountNonLocked = true; @Builder.Default protected boolean credentialsNonExpired = true; @Builder.Default protected boolean enabled = true; @Builder.Default protected boolean rememberMe = false; /** * zoneId */ protected String zoneId; /** * unionId */ protected String unionId; /** * string Subject - Identifier for the End-User at the Issuer. */ protected String sub; /** * string End-User's full name in displayable form including all name parts, possibly including titles and suffixes, * ordered according to the End-User's locale and preferences. */ protected String name; /** * string End-User's full name in displayable form including all name parts, possibly including titles and suffixes, * ordered according to the End-User's locale and preferences. */ protected String username; /** * string Given name(s) or first name(s) of the End-User. Note that in some cultures, people can have multiple given names; * all can be present, with the names being separated by space characters. */ protected String given_name; /** * string Surname(s) or last name(s) of the End-User. Note that in some cultures, people can have multiple family names or no family name; * all can be present, with the names being separated by space characters. */ protected String family_name; /** * string Middle name(s) of the End-User. Note that in some cultures, people can have multiple middle names; * all can be present, with the names being separated by space characters. Also note that in some cultures, middle names are not used. */ protected String middle_name; /** * string Casual name of the End-User that may or may not be the same as the given_name. For instance, * a nickname value of Mike might be returned alongside a given_name value of Michael. */ protected String nickname; /** * string Shorthand name by which the End-User wishes to be referred to at the RP, such as janedoe or j.doe. 
* This value MAY be any valid JSON string including special characters such as @, /, or whitespace. * The RP MUST NOT rely upon this value being unique, as discussed in Section 5.7. */ protected String preferred_username; /** * string URL of the End-User's profile page. The contents of this Web page SHOULD be about the End-User. */ protected String profile; /** * string URL of the End-User's profile picture. This URL MUST refer to an image file (for example, a PNG, JPEG, * or GIF image file), rather than to a Web page containing an image. * Note that this URL SHOULD specifically reference a profile photo of the End-User suitable for displaying when describing the End-User, * rather than an arbitrary photo taken by the End-User. */ protected String picture; /** * string URL of the End-User's Web page or blog. This Web page SHOULD contain information published by the End-User * or an organization that the End-User is affiliated with. */ protected String website; /** * string End-User's preferred e-mail address. Its value MUST conform to the RFC 5322 [RFC5322] addr-spec syntax. * The RP MUST NOT rely upon this value being unique, as discussed in Section 5.7. */ protected String email; /** * boolean True if the End-User's e-mail address has been verified; otherwise false. When this Claim Value is true, * this means that the OP took affirmative steps to ensure that this e-mail address was controlled by the End-User at the time the verification was performed. * The means by which an e-mail address is verified is context-specific, and dependent upon the trust framework or contractual agreements within which the parties are operating. */ protected String email_verified; /** * string End-User's gender. Values defined by this specification are female and male. * Other values MAY be used when neither of the defined values are applicable. */ protected String gender; /** * string End-User's birthday, represented as an ISO 8601:2004 [ISO8601‑2004] YYYY-MM-DD format. The year MAY be 0000, * indicating that it is omitted. To represent only the year, YYYY format is allowed. * Note that depending on the underlying platform's date related function, providing just year can result in varying month and day, * so the implementers need to take this factor into account to correctly process the dates. */ protected String birthdate; /** * string String from zoneinfo [zoneinfo] time zone database representing the End-User's time zone. For example, * Europe/Paris or America/Los_Angeles. */ protected String zoneinfo; /** * string End-User's locale, represented as a BCP47 [RFC5646] language tag. * This is typically an ISO 639-1 Alpha-2 [ISO639‑1] language code in lowercase and an ISO 3166-1 Alpha-2 [ISO3166‑1] country code in uppercase, * separated by a dash. For example, en-US or fr-CA. As a compatibility note, some implementations have used an underscore as the separator rather than a dash, * for example, en_US; Relying Parties MAY choose to accept this locale syntax as well. */ protected String locale; /** * string End-User's preferred telephone number. E.164 [E.164] is RECOMMENDED as the format of this Claim, * for example, +1 (425) 555-1212 or +56 (2) 687 2400. If the phone number contains an extension, * it is RECOMMENDED that the extension be represented using the RFC 3966 [RFC3966] extension syntax, * for example, +1 (604) 555-1234;ext=5678. */ protected String phone_number; /** * boolean True if the End-User's phone number has been verified; otherwise false. 
When this Claim Value is true, * this means that the OP took affirmative steps to ensure that this phone number was controlled by the End-User at the time the verification was performed. * The means by which a phone number is verified is context-specific, and dependent upon the trust framework or contractual agreements within which the parties are operating. * When true, the phone_number Claim MUST be in E.164 format and any extensions MUST be represented in RFC 3966 format. */ protected String phone_number_verified; /** * JSON object End-User's preferred postal address. The value of the address member is a JSON [RFC4627] structure containing some or all of the members defined in Section 5.1.1. * <ul> * <li> * formatted * <br/> * Full mailing address, formatted for display or use on a mailing label. This field MAY contain multiple lines, separated by newlines. Newlines can be represented either as a carriage return/line feed pair ("\r\n") or as a single line feed character ("\n"). * </li> * <li> * street_address * <br/> * Full street address component, which MAY include house number, street name, Post Office Box, and multi-line extended street address information. This field MAY contain multiple lines, separated by newlines. Newlines can be represented either as a carriage return/line feed pair ("\r\n") or as a single line feed character ("\n"). * </li> * <li> * locality * <br/> * City or locality component. * </li> * <li> * region * <br/> * State, province, prefecture, or region component. * </li> * <li> * postal_code * <br/> * Zip code or postal code component. * </li> * <li> * country * <br/> * Country name component. * </li> * </ul> */ protected Map<String, String> address; /** * Time the End-User's information was last updated. Its value is a JSON number representing the number of seconds from 1970-01-01T0:0:0Z as measured in UTC until the date/time. */ protected String updated_at; public String getSub() { return StrUtil.isEmpty(sub) ? id : sub; } @Override public void eraseCredentials() { this.credentials = null; } }
package com.sensiblemetrics.api.alpenidos.pattern.caching.impl; import com.sensiblemetrics.api.alpenidos.pattern.caching.cache.CacheStore; import com.sensiblemetrics.api.alpenidos.pattern.caching.enums.CachingPolicy; import com.sensiblemetrics.api.alpenidos.pattern.caching.model.UserAccount; import lombok.experimental.UtilityClass; import java.text.ParseException; /** * AppManager helps to bridge the gap in communication between the main class and the application's * back-end. DB connection is initialized through this class. The chosen caching strategy/policy is * also initialized here. Before the cache can be used, the size of the cache has to be set. * Depending on the chosen caching policy, AppManager will call the appropriate function in the * CacheStore class. */ @UtilityClass public class AppManager { private static CachingPolicy cachingPolicy; /** * Developer/Tester is able to choose whether the application should use MongoDB as its underlying * data storage or a simple Java data structure to (temporarily) store the data/objects during * runtime. */ public static void initDb(boolean useMongoDb) { if (useMongoDb) { try { //DbManager.connect(); throw new ParseException(null, 0); } catch (ParseException e) { e.printStackTrace(); } } else { //DbManager.createVirtualDb(); } } /** * Initialize caching policy */ public static void initCachingPolicy(final CachingPolicy policy) { cachingPolicy = policy; if (cachingPolicy == CachingPolicy.BEHIND) { Runtime.getRuntime().addShutdownHook(new Thread(CacheStore::flushCache)); } CacheStore.clearCache(); } public static void initCacheCapacity(final int capacity) { CacheStore.initCapacity(capacity); } /** * Find user account */ public static UserAccount find(final String userId) { if (cachingPolicy == CachingPolicy.THROUGH || cachingPolicy == CachingPolicy.AROUND) { return CacheStore.readThrough(userId); } else if (cachingPolicy == CachingPolicy.BEHIND) { return CacheStore.readThroughWithWriteBackPolicy(userId); } else if (cachingPolicy == CachingPolicy.ASIDE) { return findAside(userId); } return null; } /** * Save user account */ public static void save(final UserAccount userAccount) { if (cachingPolicy == CachingPolicy.THROUGH) { CacheStore.writeThrough(userAccount); } else if (cachingPolicy == CachingPolicy.AROUND) { CacheStore.writeAround(userAccount); } else if (cachingPolicy == CachingPolicy.BEHIND) { CacheStore.writeBehind(userAccount); } else if (cachingPolicy == CachingPolicy.ASIDE) { saveAside(userAccount); } } public static String printCacheContent() { return CacheStore.print(); } /** * Cache-Aside save user account helper */ private static void saveAside(final UserAccount userAccount) { //DbManager.updateDb(userAccount); CacheStore.invalidate(userAccount.getUserId()); } /** * Cache-Aside find user account helper */ private static UserAccount findAside(final String userId) { final UserAccount userAccount = CacheStore.get(userId); if (userAccount != null) { return userAccount; } //userAccount = DbManager.readFromDb(userId); if (userAccount != null) { CacheStore.set(userId, userAccount); } return userAccount; } }
A subpopulation of RNA 1 of Cucumber mosaic virus contains 3' termini originating from RNAs 2 or 3. Tobacco plants transgenic for RNA 1 of Cucumber mosaic virus and inoculated with transcript of RNAs 2 and 3 regenerated viral RNA 1 from the transgenic mRNA, and the plants became systemically infected by the reconstituted virus. cDNA fragments corresponding to the 3' non-coding region (NCR) of viral RNA 1 were amplified, cloned and sequenced. In some clones the termini of the 3' NCR corresponded to those of viral RNAs 2 or 3. This suggested that in some cases RNA 1 may have been regenerated during replication by a template switching mechanism between the inoculated transcript RNAs and the mRNA. However, encapsidated, recombinant RNA 1 with the 3' NCR ends originating from RNAs 2 or 3 also was found in virus samples that had been passaged exclusively through non-transgenic plants. Thus, these chimeras occur naturally due to recombination between wild-type viral RNAs, and they are found encapsidated in low, but detectable amounts.
// takes the name for an operation type and finds the struct for it func GetStructForType(operationTypeString string) OperationType { log := logrus.WithFields(logrus.Fields{ "module": "comfoconnect", "method": "GetStructForType", }) var operationType OperationType switch operationTypeString { case "SetAddressRequestType": operationType = &proto.SetAddressRequest{} case "RegisterAppRequestType": operationType = &proto.RegisterAppRequest{} case "StartSessionRequestType": operationType = &proto.StartSessionRequest{} case "CloseSessionRequestType": operationType = &proto.CloseSessionRequest{} case "ListRegisteredAppsRequestType": operationType = &proto.ListRegisteredAppsRequest{} case "DeregisterAppRequestType": operationType = &proto.DeregisterAppRequest{} case "ChangePinRequestType": operationType = &proto.ChangePinRequest{} case "GetRemoteAccessIdRequestType": operationType = &proto.GetRemoteAccessIdRequest{} case "SetRemoteAccessIdRequestType": operationType = &proto.SetRemoteAccessIdRequest{} case "GetSupportIdRequestType": operationType = &proto.GetSupportIdRequest{} case "SetSupportIdRequestType": operationType = &proto.SetSupportIdRequest{} case "GetWebIdRequestType": operationType = &proto.GetWebIdRequest{} case "SetWebIdRequestType": operationType = &proto.SetWebIdRequest{} case "SetPushIdRequestType": operationType = &proto.SetPushIdRequest{} case "DebugRequestType": operationType = &proto.DebugRequest{} case "UpgradeRequestType": operationType = &proto.UpgradeRequest{} case "SetDeviceSettingsRequestType": operationType = &proto.SetDeviceSettingsRequest{} case "VersionRequestType": operationType = &proto.VersionRequest{} case "SetAddressConfirmType": operationType = &proto.SetAddressConfirm{} case "RegisterAppConfirmType": operationType = &proto.RegisterAppConfirm{} case "StartSessionConfirmType": operationType = &proto.StartSessionConfirm{} case "CloseSessionConfirmType": operationType = &proto.CloseSessionConfirm{} case "ListRegisteredAppsConfirmType": operationType = &proto.ListRegisteredAppsConfirm{} case "DeregisterAppConfirmType": operationType = &proto.DeregisterAppConfirm{} case "ChangePinConfirmType": operationType = &proto.ChangePinConfirm{} case "GetRemoteAccessIdConfirmType": operationType = &proto.GetRemoteAccessIdConfirm{} case "SetRemoteAccessIdConfirmType": operationType = &proto.SetRemoteAccessIdConfirm{} case "GetSupportIdConfirmType": operationType = &proto.GetSupportIdConfirm{} case "SetSupportIdConfirmType": operationType = &proto.SetSupportIdConfirm{} case "GetWebIdConfirmType": operationType = &proto.GetWebIdConfirm{} case "SetWebIdConfirmType": operationType = &proto.SetWebIdConfirm{} case "SetPushIdConfirmType": operationType = &proto.SetPushIdConfirm{} case "DebugConfirmType": operationType = &proto.DebugConfirm{} case "UpgradeConfirmType": operationType = &proto.UpgradeConfirm{} case "SetDeviceSettingsConfirmType": operationType = &proto.SetDeviceSettingsConfirm{} case "VersionConfirmType": operationType = &proto.VersionConfirm{} case "GatewayNotificationType": operationType = &proto.GatewayNotification{} case "KeepAliveType": operationType = &proto.KeepAlive{} case "FactoryResetType": operationType = &proto.FactoryReset{} case "CnTimeRequestType": operationType = &proto.CnTimeRequest{} case "CnTimeConfirmType": operationType = &proto.CnTimeConfirm{} case "CnNodeRequestType": operationType = &proto.CnNodeRequest{} case "CnNodeNotificationType": operationType = &proto.CnNodeNotification{} case "CnRmiRequestType": operationType = &proto.CnRmiRequest{} case 
"CnRmiResponseType": operationType = &proto.CnRmiResponse{} case "CnRmiAsyncRequestType": operationType = &proto.CnRmiAsyncRequest{} case "CnRmiAsyncConfirmType": operationType = &proto.CnRmiAsyncConfirm{} case "CnRmiAsyncResponseType": operationType = &proto.CnRmiAsyncResponse{} case "CnRpdoRequestType": operationType = &proto.CnRpdoRequest{} case "CnRpdoConfirmType": operationType = &proto.CnRpdoConfirm{} case "CnRpdoNotificationType": operationType = &proto.CnRpdoNotification{} case "CnAlarmNotificationType": operationType = &proto.CnAlarmNotification{} case "CnFupReadRegisterRequestType": operationType = &proto.CnFupReadRegisterRequest{} case "CnFupReadRegisterConfirmType": operationType = &proto.CnFupReadRegisterConfirm{} case "CnFupProgramBeginRequestType": operationType = &proto.CnFupProgramBeginRequest{} case "CnFupProgramBeginConfirmType": operationType = &proto.CnFupProgramBeginConfirm{} case "CnFupProgramRequestType": operationType = &proto.CnFupProgramRequest{} case "CnFupProgramConfirmType": operationType = &proto.CnFupProgramConfirm{} case "CnFupProgramEndRequestType": operationType = &proto.CnFupProgramEndRequest{} case "CnFupProgramEndConfirmType": operationType = &proto.CnFupProgramEndConfirm{} case "CnFupReadRequestType": operationType = &proto.CnFupReadRequest{} case "CnFupReadConfirmType": operationType = &proto.CnFupReadConfirm{} case "CnFupResetRequestType": operationType = &proto.CnFupResetRequest{} case "CnFupResetConfirmType": operationType = &proto.CnFupResetConfirm{} default: operationType = nil } if operationType == nil { log.Errorf("unable to find matching struct for operation type: %s", operationTypeString) } else { log.Debugf("found struct: %s, for operation type:%s", reflect.TypeOf(operationType).Elem().Name(), operationTypeString) } return operationType }
David Cameron has undermined one of Nick Clegg's flagship policies for improving social mobility, saying it is "fine" to offer his friends' children internships and even admitting that he has given a work placement to a neighbour. The government has put more accessible internships in desirable professions at the centre of a drive to give poorer children better opportunities. Earlier this month Clegg, the deputy prime minister, admitted securing a "definite leg-up internship" through his father's influence in a Finnish bank. He said it was wrong that his career had been boosted by parental connections. But in an interview in the Daily Telegraph, Cameron said he was "very relaxed" about offering work placements to people he knew. "I've got my neighbour coming in for an internship," he said. "In the modern world, of course you're always going to have internships and interns – people who come and help in your office who come through all sorts of contacts, friendly, political, whatever." Earlier this month, Clegg, the deputy prime minister, told the Commons : "As a teenager, yes, I did receive an internship, as, I suspect, did many people around the chamber. Good for you if you did not. All of us should be honest and acknowledge that the way that internships have been administered in the private sector, the public sector, political parties and – I discovered when we came into government – in Whitehall as well, under 13 years of Labour, left a lot to be desired." Clegg later claimed professional life should be "about what you know, not who you know". He said: "The whole system was wrong. I'm not the slightest bit ashamed of saying that we all inhabited a system which was wrong." The revelation that the deputy prime minister was helped through his father's connections cast a shadow over the government's announcement of the drive to end unpaid internships. Cameron said Clegg was "trying to make a fair point", but happily admitted that as a young man he, like his deputy, was helped out by his family connections. The prime minister, who this week was also caught in a row over whether he will wear a morning suit at next week's royal wedding, denied he was trying to rewrite his background. "People know who I am," he said. "I'm not trying to rewrite my background. I went to a fantastic school, I adored my parents." But he added: "I suppose when I got into politics I was always called the Old Etonian David Cameron. " In the Telegraph interview Cameron also spoke about a recent visit he made to the grave of his son, Ivan, who died in 2009. He said: "The first person who says to you, 'Soon you'll think of the happy memories of him and you won't be so sad' … well, you want to deck them. But actually, it is true that, suddenly, some happy memories burst through the cloud." Cameron also likened welcoming Lady Thatcher to No 10 as an "out of body experience". • This article was amended on 23 April 2011. The original referred to the offering of internships to children's friends. This has been amended.
HES1 Promotes Colorectal Cancer Cell Resistance to 5-Fu by Inducing EMT and ABC Transporter Proteins

Background and Aim: Hairy enhancer of split-1 (HES1) is a downstream transcriptional factor of the Notch signaling pathway that has been found to be related to chemoresistance. This study aimed to investigate the role of HES1 in chemoresistance of colorectal cancer (CRC). Methods: Tissue microarray was used to analyze the clinical significance of HES1 in radically resected (R0) stage II/III CRC patients who received adjuvant chemotherapy. 5-Fluorouracil (5-Fu) chemoresistance was examined by cytotoxicity assay in CRC cell lines (RKO, HCT8 and LOVO) with stable over-expression or inhibition of the HES1 gene. Gene expression microarray was used to investigate the enriched pathways and differentially expressed genes in cells over-expressing HES1. Expression changes of chemoresistance-related genes were confirmed by qPCR and western blot analysis. Results: Stage II CRC patients with higher HES1 expression showed a higher recurrence rate after chemotherapy. Colon cancer cell lines that over-expressed HES1 were more resistant to 5-Fu treatment in vitro. Gene expression microarray revealed that HES1 was related to the signaling pathways of epithelial-mesenchymal transition (EMT) and drug metabolism. Immunofluorescence assays showed that HES1 over-expression led to decreased E-cadherin and elevated N-cadherin. QPCR and western blot analysis confirmed that ABCC1, ABCC2 and P-gp1 were induced after HES1 over-expression. Conclusions: HES1 promotes chemoresistance to 5-Fu by promoting EMT and inducing several ABC transporter genes. HES1 might be a novel therapeutic target in CRC treatment.

Introduction. Colorectal cancer (CRC) remains a major cause of cancer-related morbidity and mortality in the world. 5-Fu based adjuvant chemotherapy after curative surgery is considered the standard therapy for stage II/III CRC. Unfortunately, approximately 40% of these patients develop local recurrence or metastatic disease, mainly due to chemoresistance. To date, the mechanisms of chemoresistance in CRC have not been fully elucidated. The Notch signaling pathway plays an essential role in promoting cell survival. Activation of the Notch pathway leads to the release of the Notch intracellular domain (NICD), which translocates to the nucleus and activates transcription of numerous downstream target genes, including HES1. Aberrant activation of the Notch signaling pathway is involved in chemoresistance in multiple cancers including CRC. As a downstream target of the canonical Notch signaling pathway, HES1 plays a vital role in chemoresistance. In ovarian cancer, inhibiting the Notch pathway with a γ-secretase inhibitor could decrease expression of HES1 mRNA and sensitize cells to paclitaxel. HES1 might modulate therapeutic resistance by mediating Gli1 expression in medulloblastoma and glioblastoma. However, besides the Notch signaling pathway, HES1 signaling can be activated by other pathways, including the Hedgehog, c-Jun N-terminal kinase and TGF-α/Ras/MAPK pathways, which are also involved in chemoresistance. In addition, HES1 acts as a marker of colon cancer stem cells (CSCs), which might also contribute to tumor recurrence after 5-Fu based adjuvant chemotherapy. Thus, the role of HES1 in CRC chemoresistance cannot be predicted from Notch signaling pathway status alone. In this study, we investigated the clinical significance of HES1 for chemo-response in stage II/III CRC patients (n=121) using tissue microarray.
Chemosensitivity was examined in colorectal cancer cells with over-expression and inhibition of HES1. Furthermore, the enriched pathways and differentially expressed genes in cells over-expressing HES1 were investigated by gene expression microarray. Expression changes of the main genes related to chemoresistance were confirmed by qPCR and western blot analysis.

Patients and tissues. This retrospective study included 121 stage II/III CRC patients who underwent radical resection (R0) and received 5-Fu based adjuvant chemotherapy. Overall survival (OS) was defined as the period between diagnosis and death or the last follow-up. Disease-free survival (DFS) was defined as the period between diagnosis and the first clinical or pathologic evidence of local or distant recurrent disease. Written informed consent was obtained from each patient before surgery. The study was approved by the Institutional Review Board of Sun Yat-Sen University.

Immunohistochemistry analysis. Immunohistochemistry analysis was carried out according to the EnVision System (Dako Cytomation, Glostrup, Denmark) guidance. In brief, each TMA slide was deparaffinized and rehydrated through graded ethanol. Sodium citrate was used for antigen retrieval. Slides were treated with 0.3% hydrogen peroxide solution to block endogenous peroxidase activity. Samples were then incubated with the primary anti-HES1 antibody (1:400; Abcam, ab71559) at 4°C overnight. After incubation with the secondary (goat) antibody, slides were developed in diaminobenzidine (EnVision, DAKO) and counterstained with haematoxylin.

Absorbance was determined at 450 nm after 3 hours of incubation. Cell viability was calculated as follows: Viability = (OD test group − OD blank group) / (OD control group − OD blank group) × 100%, and the IC50 (half-maximal inhibitory concentration) was calculated from the dose−response curves. All experiments were repeated in triplicate.

Gene expression profiles and analysis. RNA was extracted from RKO-HES1 and RKO-Mutant cells. RNA integrity was assessed by standard denaturing agarose gel electrophoresis. The Human 12x135K Gene Expression Array was manufactured by Roche NimbleGen; 45,033 genes are collected from authoritative data sources including the National Center for Biotechnology Information (NCBI). Double-stranded cDNA (ds-cDNA) was synthesized from total RNA, which was then cleaned and labeled before hybridization. Differentially expressed genes were identified through fold-change filtering; genes with fold-change ≥ 2.0 were included. Real-time RT-PCR was used to confirm the results. Pathway enrichment analysis was based on the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, which identified the biological pathways that had a significant enrichment of differentially expressed genes. The P-values denote the significance of the pathways, with a cut-off of 0.05.

Total protein was extracted using RIPA Lysis Buffer (Beyotime, China) and PMSF (Sigma-Aldrich). The proteins were transferred to NC membranes (Millipore Corp, MA, USA) using the Trans-Blot System (Bio-Rad, CA, USA). The membranes were blocked in 5% w/v non-fat milk in TBS, and incubations were performed overnight at 4°C. The membranes were then washed with TBST and incubated with secondary antibodies (1:10000, IRDye Goat IgG, LI-COR Biosciences, NE, USA) for 1 h at room temperature. Protein staining was detected using the Odyssey Imaging System (LI-COR Biosciences, NE, USA).
The following primary antibodies were used:

Statistics. The open-source software TMAJ (Johns Hopkins, Baltimore, USA) was used to measure the HES1 expression index as described elsewhere. The median HES1 expression was employed as the cut-point. Statistical analysis was carried out using SPSS 17.0 (SPSS, Chicago, IL, USA). Correlations between clinicopathologic data and HES1 expression were analyzed using the Chi-square test or Fisher's exact test. Kaplan-Meier survival curves were used to estimate OS and DFS. A value of P < 0.05 was considered statistically significant.

Correlation between HES1 level and clinicopathological variables. To study the clinical significance of HES1 expression, samples from 121 stage II/III CRC patients who received 5-Fu based adjuvant chemotherapy were examined by tissue microarray. HES1 protein was mainly located in the cancer cell cytoplasm (Fig. 1A). Kaplan-Meier analysis revealed that stage II patients with higher HES1 expression had poorer OS (P=0.015) and DFS (P=0.042) (Fig. 1B).

Over-expression of HES1 induces chemoresistance in CRC cells. Chemoresistance is a key obstacle to the efficacy of CRC treatment and may result in recurrence. To determine the potential role of HES1 in chemoresistance, stable over-expression and inhibition of the HES1 gene were established in colon cancer cell lines including RKO, HCT8 and LOVO, which were then exposed to different concentrations of 5-Fu in vitro. The CCK8 assay showed that HES1 over-expression significantly promoted the viability of RKO and HCT8 cells, whereas HES1 inhibition significantly decreased the viability of LOVO cells (Fig. 1C). The IC50 of RKO (P=0.016 vs vehicle, P=0.001 vs control) and HCT8 cells (P<0.001 vs vehicle, P<0.001 vs control) was significantly increased by HES1 over-expression. However, HES1 inhibition resulted in a significantly decreased IC50 in LOVO cells (P<0.001 vs vehicle, P<0.001 vs control) (Fig. 1D).

Over-expression of HES1 induces EMT in CRC cells. To determine the mechanisms of HES1-mediated CRC chemoresistance, RKO cell lines that stably over-expressed wild-type HES1 or mutant HES1 were established. Changes between the two cell lines were determined by whole-genome cDNA microarray. As shown in Fig. 2A, several pathways were changed by HES1 over-expression. Briefly, pathways related to drug metabolism were up-regulated, whereas pathways related to adherens junctions, focal adhesion and the actin cytoskeleton were markedly down-regulated. 2668 genes were up-regulated after HES1 over-expression, while 1304 genes were down-regulated. The genes with the most significant changes are shown in Table 3. Considering the changes in the adhesion and actin cytoskeleton pathways, we hypothesized that HES1 over-expression might induce EMT. To verify this idea, we examined the expression of two critical EMT markers, E-cadherin and N-cadherin, in the RKO and LOVO cell lines. Immunofluorescence assays revealed that HES1 over-expression increased the level of N-cadherin and decreased E-cadherin in RKO cells. The opposite results were found in LOVO cells after HES1 inhibition (Fig. 2B). Thus, HES1 over-expression might induce EMT in CRC cells.

Over-expression of HES1 induces ABC transporter genes in CRC cells. Since several members of the ATP-binding cassette (ABC) transporter family were up-regulated by HES1 over-expression in the cDNA microarray, we reasoned that ABC transporter proteins could be more broadly induced by HES1 and confer chemoresistance.
To test this hypothesis, we examined the expression of ABCC1, ABCC2 and P-gp1 (three critical molecules in drug metabolism) in the CRC cell lines mentioned above. qPCR and western blot analysis showed that HES1 over-expression increased ABCC1, ABCC2 and P-gp1 expression in RKO and HCT8 cells, while HES1 inhibition in LOVO cells showed the opposite results (Fig. 2C, 2D). Thus, these data suggest that HES1 might promote chemoresistance by inducing ABC transporter proteins in CRC cells.

Discussion Previous studies have revealed that the Notch signaling pathway is involved in CRC chemoresistance. However, the role of HES1, an important downstream transcription factor of the Notch signaling pathway, in CRC chemoresistance is still unclear. In this study, we investigated the clinical significance of HES1 expression in stage II/III CRC patients who received adjuvant chemotherapy, and demonstrated its role in chemoresistance in vitro. The Notch signaling pathway plays an important role in the differentiation balance of intestinal crypts and in carcinogenesis in CRC. Several studies have shown that HES1 is over-expressed in colorectal cancer, and its prognostic value in CRC has also been investigated. However, no study has examined its role in CRC chemotherapy. In this study, we found that stage II CRC patients with high HES1 expression had a higher recurrence rate and poorer prognosis (OS and DFS) after 5-Fu based adjuvant chemotherapy. This might result from the chemoresistance induced by HES1 over-expression. This correlation was not found in stage III patients. To our knowledge, this is the first study to establish a correlation between HES1 expression and CRC recurrence after 5-Fu based adjuvant chemotherapy. Notch signaling has been found to participate in chemoresistance in numerous cancers, and inhibiting the Notch pathway can enhance chemosensitivity [9, ...]. Whether HES1 is involved in colorectal chemoresistance was unclear. Moreover, HES1 can be activated by upstream pathways other than Notch, and HES1 is considered a marker of colon cancer stem cells (CSCs), which might also contribute to chemoresistance. Thus, the role of HES1 in CRC chemoresistance could not be assumed in advance. In this study, we found that HES1 promotes chemoresistance in CRC and that targeted inhibition of HES1 enhances chemosensitivity in vitro. Epithelial-mesenchymal transition (EMT) has been shown to play a crucial role in chemoresistance and tumor recurrence. Increasing evidence suggests that EMT-associated transcription factors are involved in chemoresistance in different cancers. In colorectal cancer, EMT can promote chemoresistance to oxaliplatin by upregulating P-gp expression, and Notch signaling can promote chemoresistance via EMT in other types of cancer. In the present study, cDNA microarray profiling and western blot analysis demonstrated that over-expression of HES1 down-regulates E-cadherin and up-regulates N-cadherin. Thus, HES1 might promote chemoresistance by inducing EMT. ATP-binding cassette (ABC) transporters are involved in chemoresistance by decreasing cellular drug uptake and accumulation, and are considered a major cause of chemotherapy failure. Over-expression of the ABCC1 transporter confers resistance to a wide range of anticancer drugs. In breast cancer, the ABCC3 transporter has been confirmed to be involved in resistance to chemotherapy. Transfection of human embryonic kidney cells with the ABCC10 gene conferred resistance to various anticancer drugs including paclitaxel, docetaxel, vincristine and gemcitabine.
In this study, cDNA microarray profiling showed that the expression of several members of the ABC transporter family was increased after HES1 over-expression, and western blot confirmed that P-gp1, ABCC1 and ABCC2 were up-regulated by over-expression of HES1. Therefore, up-regulation of ABC transporters might be one of the mechanisms of HES1-induced chemoresistance. In conclusion, our study showed that HES1 was an unfavorable factor for recurrence in stage II CRC patients who received adjuvant chemotherapy, and that HES1 promotes CRC chemoresistance via induction of EMT and ABC transporters. Thus, HES1 might be a novel therapeutic target in CRC treatment.

Funding Foundation for Doctor of Philosophy of Guangzhou Medical University (No. 2015C21).

Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. All applicable international, national, and institutional guidelines for the care and use of animals were followed.
Traffic on vehicle-mounted networks has increased due to the advancement of safe-driving support and automatic driving technologies. To cope with this increase, a CAN with flexible data rate (CAN FD) communication system, which can increase the data transmission rate and extend the data length, has been known. In CAN FD, the nodes which perform transmission and reception are generally electronic control units (ECUs), and each node is electrically connected by a bus. A transmitting ECU adds an identifier (ID) to communication data to construct a message, converts the message into an electric signal, and transmits the electric signal on the bus. Each ECU monitors the electric signal on the bus, acquires the ID during the communication, and specifies the message to be received. When a plurality of messages are transmitted at the same time, the priority of communication is determined according to the ID. The phase of determining, according to this priority, which ECU can transmit its ID and message is called the arbitration (adjustment) phase. In the arbitration phase, the plurality of ECUs perform communication at the same rate as the conventional CAN, for example at 500 kbps, since they may output simultaneously. After the ECU which transmits the message is determined by the arbitration, the arbitration phase is followed by a data phase for transmitting data. In the data phase, in which the number of ECUs outputting the message is limited to one, the transmission rate is, for example, 2 Mbps. However, there is a problem in that if the communication rate is increased to 2 Mbps in a conventional network configuration designed to communicate at 500 kbps, data are not correctly transmitted due to reflection. As a method to solve this problem, it is effective to divide the bus and reduce the scale of the network on each bus (reduce the number of connected nodes and the length of the harness (wiring)). However, in this case, there arises a problem in that the number of ECUs that can perform communication at the same time is reduced. In such a case, a method of using a gateway ECU to connect the divided buses has been known (for example, see PTL 1). The gateway ECU transmits data received from one bus to the other bus. It is possible to perform communication between ECUs connected to the two divided buses by using the technique disclosed in PTL 1.
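To make the gateway concept concrete, here is a minimal, self-contained Python sketch of a gateway ECU forwarding frames between two divided buses. The Frame and Bus classes, the routing IDs and the polling helper are hypothetical stand-ins for real CAN FD controller drivers; this is not an implementation of the technique in PTL 1.

from dataclasses import dataclass
from queue import Queue, Empty

@dataclass
class Frame:
    can_id: int      # identifier used for arbitration; lower value wins
    data: bytes      # CAN FD payload, up to 64 bytes

class Bus:
    """Hypothetical stand-in for one physical CAN FD bus segment."""
    def __init__(self, name):
        self.name = name
        self._rx = Queue()
    def inject(self, frame):          # test helper: pretend a node transmitted
        self._rx.put(frame)
    def receive(self, timeout=0.01):
        try:
            return self._rx.get(timeout=timeout)
        except Empty:
            return None
    def send(self, frame):
        print(f"{self.name}: forwarded id=0x{frame.can_id:03X}, {len(frame.data)} bytes")

def gateway_poll(src: Bus, dst: Bus, routed_ids: set):
    """Forward a frame from src to dst if its ID is in the routing table."""
    frame = src.receive()
    if frame is not None and frame.can_id in routed_ids:
        dst.send(frame)

# Example: route frames with an assumed ID 0x120 from bus A to bus B.
bus_a, bus_b = Bus("BUS-A"), Bus("BUS-B")
bus_a.inject(Frame(can_id=0x120, data=bytes(16)))
gateway_poll(bus_a, bus_b, routed_ids={0x120})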
Meanwhile, there is a long list of new movies now in town.

MIA AND THE WHITE LION: This is a perfect film to take the kids to and maybe get a bit emotionally involved in yourself. It could be quite a bit, because tension builds dramatically as the story progresses and then delivers a terrific cathartic moment, as the best of these animals-in-danger films do. You also learn a great deal about the wildlife tourism industry in Africa, some of which is not nice at all. Mia is the petulant daughter of a family that's returned to South Africa after a few years in London. She'd rather still be there and isn't interested in her dad's hard work to set up a wildlife park as a tourist attraction. Until a rare white lion cub is born and bonds with her like a pet.

MISSING LINK: Adults will get a lot more out of this animated film than children. The humor is very droll; there's lots of it and most of it is based on classic situations reminiscent of novels and old movies. An exclusive explorer's club in London, a saloon brawl in the old west, travelling to Shangri La and more get a send-up. A self-styled adventurer, voiced by Hugh Jackman, is barred from the club and heads all the way to Washington State on a tip that the Sasquatch is to be found there. What better way to prove himself?

LITTLE: Take the plot of Big, turn it upside down, shift the culture and you've got this small and funny entertainer. It's peppered with laughs, not sullied with a single obscenity and still presents a reasonably authentic view of life today. Much tinted by fantasy, though. Regina Hall is a demanding, high-powered boss at a small software design company. Just as her biggest client (SNL's Mikey Day) is threatening to bail out, she is zapped with a magic spell by a young girl at a food truck. It works. She wakes up as a 14-year-old version of herself, played by Marsai Martin.
Methylprednisolone pulse therapy in severe acute asthma In a group-comparative, double-blind pilot study, six asthmatic patients with an acute exacerbation of their disease were randomly treated with either methylprednisolone pulse therapy (MPPT) (1000 mg daily for 3 days, followed by placebo tablets; n = 2) or standard doses of methylprednisolone (MP) (50 mg daily, gradually decreased to zero over 3 weeks; n = 4). The results showed that the effect of MPPT did not differ from that of standard doses of MP. MPPT nevertheless has the potential to be preferable to standard treatment with MP because of its easy administration and optimal patient compliance.
/*************************************************************** * * Copyright (C) 1990-2007, Condor Team, Computer Sciences Department, * University of Wisconsin-Madison, WI. * * Licensed under the Apache License, Version 2.0 (the "License"); you * may not use this file except in compliance with the License. You may * obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ***************************************************************/ #ifndef _COLLECTOR_DAEMON_H_ #define _COLLECTOR_DAEMON_H_ #include <vector> #include "condor_classad.h" #include "condor_commands.h" #include "totals.h" #include "forkwork.h" #include "collector_engine.h" #include "collector_stats.h" #include "dc_collector.h" #include "offline_plugin.h" //---------------------------------------------------------------- // Simple job universe stats //---------------------------------------------------------------- class CollectorUniverseStats { public: CollectorUniverseStats( void ); CollectorUniverseStats( CollectorUniverseStats & ); ~CollectorUniverseStats( void ); void Reset( void ); void accumulate( int univ ); int getValue( int univ ); int getCount( void ); int setMax( CollectorUniverseStats & ); const char *getName( int univ ); int publish( const char *label, ClassAd *cad ); private: int perUniverse[CONDOR_UNIVERSE_MAX]; int count; }; /**---------------------------------------------------------------- *Collector daemon class declaration * * TODO: Eval notes and refactor when time permits. * * REVIEW NOTES per TSTCLAIR * DESIGN (General): * 1.) It seems rather odd to have such a large static interface then * a few virtual functions. This basically violates all rules of * practical design and should likely be cleaned up. Either it is * a static singleton (something which is omni-present), or it is not. * There are templated header only constructs namely boost::bind && * boost::function which get around the idea of generic function * pointers and should likely be used. This will likely reduce the * public interface to just a few functions. * * 2.) I think that each daemon detailed knowledge of *any* communications * protocol is a bad thing. It makes the daemon tightly bound to any * mode of transport & protocol, and very difficult to adapt over time. * * 3.) doxygen comments with real comments to explain why something * was done. * * 4.) consider a new object which performs the scanning functionality so all * scans within the collector access that object. It seems sloppy to have it parceled * throughout the collector, on different calls to "CollectorEngine::walkHashTable" You could * also consolidate the stats under that umbrella b/c the data is there. 
* *----------------------------------------------------------------*/ class CollectorDaemon { public: CollectorDaemon() {}; virtual ~CollectorDaemon() {}; virtual void Init(); // main_init virtual void Config(); // main_config virtual void Exit(); // main__shutdown_fast virtual void Shutdown(); // main_shutdown_graceful // command handlers static int receive_query_cedar(Service*, int, Stream*); static AdTypes receive_query_public( int ); static int receive_invalidation(Service*, int, Stream*); static int receive_update(Service*, int, Stream*); static int receive_update_expect_ack(Service*, int, Stream*); static void process_query_public(AdTypes, ClassAd*, List<ClassAd>*); static ClassAd * process_global_query( const char *constraint, void *arg ); static int select_by_match( ClassAd *cad ); static void process_invalidation(AdTypes, ClassAd&, Stream*); static int query_scanFunc(ClassAd*); static int invalidation_scanFunc(ClassAd*); static int reportStartdScanFunc(ClassAd*); static int reportSubmittorScanFunc(ClassAd*); static int reportMiniStartdScanFunc(ClassAd *cad); static void reportToDevelopers(); static int sigint_handler(Service*, int); static void unixsigint_handler(); static void init_classad(int interval); static void sendCollectorAd(); static void forward_classad_to_view_collector(int cmd, const char *filterAttr, ClassAd *ad); static void send_classad_to_sock(int cmd, ClassAd* theAd); // A get method to support SOAP static CollectorEngine & getCollector( void ) { return collector; }; // data pertaining to each view collector entry struct vc_entry { std::string name; Daemon* collector; Sock* sock; }; static OfflineCollectorPlugin offline_plugin_; protected: static CollectorStats collectorStats; static CollectorEngine collector; static Timeslice view_sock_timeslice; static std::vector<vc_entry> vc_list; static int ClientTimeout; static int QueryTimeout; static char* CollectorName; static ClassAd query_any_request; static ClassAd *query_any_result; static ClassAd* __query__; static List<ClassAd>* __ClassAdResultList__; static int __numAds__; static int __failed__; static std::string __adType__; static ExprTree *__filter__; static TrackTotals* normalTotals; static int submittorRunningJobs; static int submittorIdleJobs; static int machinesTotal,machinesUnclaimed,machinesClaimed,machinesOwner; static CollectorUniverseStats ustatsAccum; static CollectorUniverseStats ustatsMonthly; static ClassAd *ad; static CollectorList* updateCollectors; static DCCollector* updateRemoteCollector; static int UpdateTimerId; static ForkWork forkQuery; static int stashSocket( ReliSock* sock ); static class CCBServer *m_ccb_server; static bool filterAbsentAds; private: }; #endif
#ifndef OPTIONS_OPTION_PARSER_H #define OPTIONS_OPTION_PARSER_H #include "doc_utils.h" #include "options.h" #include "predefinitions.h" #include "registries.h" #include "../utils/math.h" #include "../utils/strings.h" #include <cctype> #include <memory> #include <sstream> #include <string> #include <vector> namespace options { /* The OptionParser stores a parse tree and an Options object. By calling addArgument, the parse tree is partially parsed, and the result is added to the Options. */ class OptionParser { Options opts; const ParseTree parse_tree; /* Cannot be const in the current design. The plugin factory methods insert PluginInfo structs into the registry when they are called. This could be improved later. */ Registry &registry; const Predefinitions &predefinitions; const bool dry_run_; const bool help_mode_; ParseTree::sibling_iterator next_unparsed_argument; std::vector<std::string> valid_keys; std::string get_unparsed_config() const; template<class T> void check_bounds( const std::string &key, const T &value, const Bounds &bounds); public: OptionParser(const ParseTree &parse_tree, Registry &registry, const Predefinitions &predefinitions, bool dry_run, bool help_mode = false); OptionParser(const std::string &config, Registry &registry, const Predefinitions &predefinitions, bool dry_run, bool help_mode = false); ~OptionParser() = default; OptionParser(const OptionParser &other) = delete; OptionParser &operator=(const OptionParser &other) = delete; /* This function initiates parsing of T (the root node of parse_tree will be parsed as T).*/ template<typename T> T start_parsing(); /* Add option with default value. Use def_val=NONE for optional parameters without default values. */ template<typename T> void add_option( const std::string &key, const std::string &help = "", const std::string &default_value = "", const Bounds &bounds = Bounds::unlimited()); void add_enum_option( const std::string &key, const std::vector<std::string> &names, const std::string &help = "", const std::string &default_value = "", const std::vector<std::string> &docs = {}); template<typename T> void add_list_option( const std::string &key, const std::string &help = "", const std::string &default_value = ""); void document_synopsis( const std::string &name, const std::string &note) const; void document_property( const std::string &property, const std::string &note) const; void document_language_support( const std::string &feature, const std::string &note) const; void document_note( const std::string &name, const std::string &note, bool long_text = false) const; void error(const std::string &msg) const; /* TODO: "parse" is not the best name for this function. It just does some checks and returns the parsed options. Parsing happens before that. */ Options parse(); const ParseTree *get_parse_tree(); Registry &get_registry(); const Predefinitions &get_predefinitions() const; const std::string &get_root_value() const; bool dry_run() const; bool help_mode() const; static const std::string NONE; }; /* TokenParser<T> wraps functions to parse supported types T. */ template<typename T> class TokenParser { public: static inline T parse(OptionParser &parser); }; /* We need to give specializations of the class for the cases we want to *partially* specialize, i.e., give a specialization that is still templated. For fully specialized cases (e.g. parsing "int"), it is not necessary to specialize the class; we just need to specialize the method. 
*/ template<typename T> class TokenParser<std::shared_ptr<T>> { public: static inline std::shared_ptr<T> parse(OptionParser &parser); }; template<typename T> class TokenParser<std::vector<T>> { public: static inline std::vector<T> parse(OptionParser &parser); }; /* If T has no template specialization, try to parse it directly from the input string. As of this writing, this default implementation is used only for string and bool. */ template<typename T> inline T TokenParser<T>::parse(OptionParser &parser) { const std::string &value = parser.get_root_value(); std::istringstream stream(value); T x; if ((stream >> std::boolalpha >> x).fail()) { parser.error("could not parse argument " + value + " of type " + TypeNamer<T>::name(parser.get_registry())); } return x; } // int needs a specialization to allow "infinity". template<> inline int TokenParser<int>::parse(OptionParser &parser) { std::string value = parser.get_root_value(); if (value.empty()) { parser.error("int argument must not be empty"); } else if (value == "infinity") { return std::numeric_limits<int>::max(); } char suffix = value.back(); int factor = 1; if (isalpha(suffix)) { /* Option values from the command line are already lower case, but default values specified in the code might be upper case. */ suffix = static_cast<char>(std::tolower(suffix)); if (suffix == 'k') { factor = 1000; } else if (suffix == 'm') { factor = 1000000; } else if (suffix == 'g') { factor = 1000000000; } else { parser.error("invalid suffix for int argument (valid: K, M, G)"); } value.pop_back(); } std::istringstream stream(value); int x; stream >> std::noskipws >> x; if (stream.fail() || !stream.eof()) { parser.error("could not parse int argument"); } int min_int = std::numeric_limits<int>::min(); // Reserve highest value for "infinity". int max_int = std::numeric_limits<int>::max() - 1; if (!utils::is_product_within_limits(x, factor, min_int, max_int)) { parser.error("overflow for int argument"); } return x * factor; } // double needs a specialization to allow "infinity". template<> inline double TokenParser<double>::parse(OptionParser &parser) { const std::string &value = parser.get_root_value(); if (value == "infinity") { return std::numeric_limits<double>::infinity(); } else { std::istringstream stream(value); double x; stream >> std::noskipws >> x; if (stream.fail() || !stream.eof()) { parser.error("could not parse double argument"); } return x; } } // Helper functions for the TokenParser-specializations. template<typename T> static std::shared_ptr<T> lookup_in_registry(OptionParser &parser) { const std::string &value = parser.get_root_value(); try { return parser.get_registry().get_factory<std::shared_ptr<T>>(value)(parser); } catch (const std::out_of_range &) { parser.error(TypeNamer<std::shared_ptr<T>>::name(parser.get_registry()) + " " + value + " not found"); } return nullptr; } template<typename T> static std::shared_ptr<T> lookup_in_predefinitions(OptionParser &parser, bool &found) { using TPtr = std::shared_ptr<T>; const std::string &value = parser.get_root_value(); found = parser.get_predefinitions().contains(value); return parser.get_predefinitions().get<TPtr>(value, nullptr); } template<typename T> inline std::shared_ptr<T> TokenParser<std::shared_ptr<T>>::parse(OptionParser &parser) { bool predefined; std::shared_ptr<T> result = lookup_in_predefinitions<T>(parser, predefined); if (predefined) return result; return lookup_in_registry<T>(parser); } // Needed for iterated search. 
template<> inline ParseTree TokenParser<ParseTree>::parse(OptionParser &parser) { return *parser.get_parse_tree(); } template<typename T> inline std::vector<T> TokenParser<std::vector<T>>::parse(OptionParser &parser) { if (parser.get_parse_tree()->begin()->value != "list") { parser.error("expected list"); } std::vector<T> results; for (auto tree_it = first_child_of_root(*parser.get_parse_tree()); tree_it != end_of_roots_children(*parser.get_parse_tree()); ++tree_it) { OptionParser subparser(subtree(*parser.get_parse_tree(), tree_it), parser.get_registry(), parser.get_predefinitions(), parser.dry_run()); results.push_back(TokenParser<T>::parse(subparser)); } return results; } template<typename T> T OptionParser::start_parsing() { return TokenParser<T>::parse(*this); } template<class T> void OptionParser::check_bounds( const std::string &, const T &, const Bounds &) { } template<> void OptionParser::check_bounds<int>( const std::string &key, const int &value, const Bounds &bounds); template<> void OptionParser::check_bounds<double>( const std::string &key, const double &value, const Bounds &bounds); template<typename T> void OptionParser::add_option( const std::string &key, const std::string &help, const std::string &default_value, const Bounds &bounds) { if (help_mode()) { registry.add_plugin_info_arg( get_root_value(), key, help, TypeNamer<T>::name(registry), default_value, bounds); return; } valid_keys.push_back(key); bool use_default = false; ParseTree::sibling_iterator arg = next_unparsed_argument; if (arg == parse_tree.end(parse_tree.begin())) { // We have already handled all arguments. if (default_value.empty()) { error("missing option: " + key); } else if (default_value == NONE) { return; } else { use_default = true; } } else if (!arg->key.empty()) { // Handle arguments with explicit keyword. // Try to find a parameter passed with keyword key. for (; arg != parse_tree.end(parse_tree.begin()); ++arg) { if (arg->key == key) break; } if (arg == parse_tree.end(parse_tree.begin())) { if (default_value.empty()) { error("missing option: " + key); } else if (default_value == NONE) { return; } else { use_default = true; } } } std::unique_ptr<OptionParser> subparser = use_default ? utils::make_unique_ptr<OptionParser>(default_value, registry, predefinitions, dry_run()) : utils::make_unique_ptr<OptionParser>(subtree(parse_tree, arg), registry, predefinitions, dry_run()); T result = TokenParser<T>::parse(*subparser); check_bounds<T>(key, result, bounds); opts.set<T>(key, result); /* If we have not reached the keyword parameters yet and have not used the default value, increment the argument position pointer. 
*/ if (!use_default && arg->key.empty()) { ++next_unparsed_argument; } } template<typename T> void OptionParser::add_list_option( const std::string &key, const std::string &help, const std::string &default_value) { add_option<std::vector<T>>(key, help, default_value); } template<typename T> void predefine_plugin(const std::string &arg, Registry &registry, Predefinitions &predefinitions, bool dry_run) { std::pair<std::string, std::string> predefinition; try { predefinition = utils::split(arg, "="); } catch (utils::StringOperationError &) { throw OptionParserError("Predefinition error: Predefinition has to be " "of the form [name]=[definition]."); } std::string key = predefinition.first; std::string value = predefinition.second; utils::strip(key); utils::strip(value); OptionParser parser(value, registry, predefinitions, dry_run); predefinitions.predefine(key, parser.start_parsing<std::shared_ptr<T>>()); } } #endif
from .core.exceptions import ( AntidoteError, DependencyCycleError, DependencyInstantiationError, DependencyNotFoundError, DoubleInjectionError, DuplicateDependencyError, FrozenWorldError, ) __all__ = [ "AntidoteError", "DependencyCycleError", "DependencyInstantiationError", "DependencyNotFoundError", "DuplicateDependencyError", "FrozenWorldError", "DoubleInjectionError", ]
import React from "react"; import type { AnimatedProps } from "../../processors"; import { createDeclaration } from "../../nodes/Declaration"; import type { Vector } from "../../../skia/types"; import type { GradientProps } from "./Gradient"; import { processGradientProps } from "./Gradient"; export interface SweepGradientProps extends GradientProps { c: Vector; start?: number; end?: number; } const onDeclare = createDeclaration<SweepGradientProps>( ({ c, start, end, ...gradientProps }, _, { Skia }) => { const { colors, positions, mode, localMatrix, flags } = processGradientProps(Skia, gradientProps); return Skia.Shader.MakeSweepGradient( c.x, c.y, colors, positions, mode, localMatrix, flags, start, end ); } ); export const SweepGradient = (props: AnimatedProps<SweepGradientProps>) => { return <skDeclaration onDeclare={onDeclare} {...props} />; };
def singularize(word):
    """Return the singular form of ``word``.

    Uses the optional inflect engine when available and falls back to a few
    naive suffix rules otherwise. ``toUtf8`` and ``inflect_engine`` are
    defined elsewhere in this module.
    """
    word = toUtf8(word)
    if inflect_engine:
        result = inflect_engine.singular_noun(word)
        # singular_noun() returns False when the word is already singular.
        if result is False:
            return word
        return result
    # Fallback heuristics when inflect is not available.
    if word.endswith('ies'):
        return word[:-3] + 'y'
    elif word.endswith('IES'):
        return word[:-3] + 'Y'
    elif word.endswith('s') or word.endswith('S'):
        return word[:-1]
    return word
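A quick illustration of the fallback behaviour when the optional inflect engine is not available (i.e. inflect_engine is falsy; toUtf8 and inflect_engine come from elsewhere in the original module, so outputs with inflect installed may differ):

# With inflect unavailable, only the naive suffix rules run:
print(singularize("berries"))   # -> "berry"  ('ies' rule)
print(singularize("CARS"))      # -> "CAR"    (upper-case 'S' rule)
print(singularize("sheep"))     # -> "sheep"  (no rule matches)
print(singularize("glass"))     # -> "glas"   (the plain 's' rule over-strips)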
A receiver, also known as User Equipment (UE), mobile station, wireless terminal and/or mobile terminal, is enabled to communicate wirelessly in a wireless communication system, sometimes also referred to as a cellular radio system. The communication may be made, e.g., between two receivers, between a receiver and a wire-connected telephone and/or between a receiver and a server via a Radio Access Network (RAN) and possibly one or more core networks. The receiver may further be referred to as a mobile telephone, cellular telephone, computer tablet or laptop with wireless capability. The receivers in the present context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile devices, enabled to communicate voice and/or data, via the radio access network, with another entity. The wireless communication system covers a geographical area which is divided into cell areas, with each cell area being served by a transmitter, also referred to as a radio network node or base station, e.g., a Radio Base Station (RBS), “eNB”, “eNodeB”, “NodeB” or “B node”, depending on the technology and terminology used. Sometimes, the expression cell may also be used for denoting the transmitter/radio network node itself. However, the cell is also, or in normal terminology, the geographical area where radio coverage is provided by the transmitter/radio network node at a base station site. One transmitter, situated on the base station site, may serve one or several cells. The transmitters communicate over the air interface operating on radio frequencies with the receivers within range of the respective transmitter. In some radio access networks, several transmitters may be connected, e.g., by landlines or microwave, to a Radio Network Controller (RNC), e.g., in Universal Mobile Telecommunications System (UMTS). The RNC, also sometimes termed Base Station Controller (BSC), e.g., in GSM, may supervise and coordinate various activities of the plural transmitters connected thereto. GSM is an abbreviation for Global System for Mobile Communications (originally: Groupe Spécial Mobile). In 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE), transmitters, which may be referred to as eNodeBs or eNBs, may be connected to a gateway, e.g., a radio access gateway, to one or more core networks. In the present context, the expressions downlink, downstream link or forward link may be used for the transmission path from the transmitter to the receiver. The expression uplink, upstream link or reverse link may be used for the transmission path in the opposite direction, i.e., from the receiver to the transmitter. In order to enable coherent demodulation of data, the transmitter has to send a pre-defined reference signal, or pilot signal as it may also be referred to, to the receiver/UE. The reference signal may not encode any information and it is typically known to the receiver. From the reference signal, using a priori information on its modulation symbols and time-frequency location, the receiver may obtain channel estimates such as, e.g., the phase and amplitude of the channel frequency response, which are used for channel equalization prior to the demodulation. In the prior art 3GPP LTE system, multiple transmit and receive antennas are supported and the notion of antenna port is used. Each downlink antenna port is associated with a unique reference signal. 
An antenna port may not necessarily correspond to a physical antenna and one antenna port may be mapped to more than one physical antenna. In any case, the reference signal may be used for channel estimation for data that is transmitted on the same antenna port. Channel estimation therefore needs to be performed for all antenna ports that are used for the data transmission. A number of reference signals have been defined in the LTE downlink, e.g., Common Reference Signal (CRS). CRS is a cell-specific reference signal, which is transmitted in all subframes and in all Resource Blocks (RBs) of the carrier. The CRS serves, among several purposes, as a reference signal for phase and amplitude reference for coherent demodulation, i.e., to be used in channel estimation. Up to 4 antenna ports (labelled 0-3) may be accommodated with the CRS. These antenna ports are multiplexed on orthogonal time-frequency resources, i.e., disjoint sets of Resource Elements (REs). The CRS may offer robustness as it supports transmit diversity based PDSCH transmission. The RE is the smallest time-frequency entity that can be used for transmission in LTE, and may convey a complex-valued modulation symbol on a subcarrier. In this context, the RE may be referred to as a time-frequency resource. The RB comprises a set of REs or a set of time-frequency resources and is of 0.5 ms duration (e.g., 7 Orthogonal Frequency-Division Multiplexing (OFDM) symbols) and 180 kHz bandwidth (e.g., 12 subcarriers with 15 kHz spacing). The LTE standard refers to a Physical Resource Block (PRB) as a RB where the set of OFDM symbols in the time-domain and the set of subcarriers in the frequency domain are contiguous. With multiple antennas, it is possible to achieve beamforming by applying different complex-valued weights on the different antenna ports, also referred to as precoding. However, since the CRS is cell-specific, it cannot be receiver-specifically precoded, i.e., it cannot achieve any beamforming gains although the user data channel may undergo beamforming since it is not cell-specific. Therefore, typically the precoder used for the data channel has to be signalled to the receiver. Another defined reference signal is the Demodulation Reference Signal (DM-RS). This is a receiver-specific reference signal and it is only transmitted in the resource blocks and subframes where the receiver has been scheduled data i.e., containing the Physical Downlink Shared Channel (PDSCH). Up to 8 antenna ports may be accommodated by the DM-RS. The antenna ports (labelled 7-14) are multiplexed both in frequency and by orthogonal cover codes in time. Since it is receiver-specific, the DM-RS may be precoded with the same precoder used for the PDSCH, hence beamforming gains may be achieved for the reference signal. Since both data channel and the DM-RS use the same precoder, the precoding becomes transparent to the receiver. Thus there is no need to signal the precoder to the receiver as it can be regarded as part of the channel, which is estimated by the DM-RS. In order to receive the PDSCH, the receiver is monitoring a set of time-frequency resources i.e., Control Channel Elements (CCEs) or Enhanced CCEs (ECCEs) in a downlink control channel such as e.g., PDCCH or EPDCCH and performs blind decoding to detect Downlink Control Information (DCI) associated with the PDSCH transmission. 
The receiver is configured in one of several transmission modes wherein it is monitoring one DCI format (e.g., DCI format 1A) which typically may be used when a robust transmission of the PDSCH is needed, e.g., using transmit diversity. DCI format 1A schedules the PDSCH on antenna port 0, or 0, 1 or 0, 1, 2, 3, with the exception in MBSFN subframes where antenna port 7 is used. In addition, the receiver monitors one additional DCI format, which may utilise DM-RS for PDSCH demodulation. This additional DCI format can typically accommodate much more advanced transmission schemes such as Single User MIMO (SU-MIMO) or Multi User MIMO (MU-MIMO) or CoMP transmission. The antenna port to be assumed by the receiver, based on CRS or DM-RS, for demodulating the PDSCH is determined from the detected associated DCI format depending on the configured transmission mode. In some cases, the DCI format itself may also contain additional bits related to which of the DM-RS antenna ports (e.g., port 7 or 8) that should be used. This is, e.g., applicable when MU-MIMO is used. The prior art LTE system does not provide any dynamic switching between using cell-specific reference signals or receiver-specific reference signals. In order to improve the spectral efficiency of the 3GPP LTE system, it has been considered to define a new carrier type which only transmits the CRS in a subset of the subframes in a radio frame and possibly also in a subset of the resource blocks of the carrier. A further overhead reduction could also be envisaged by only utilising one CRS, i.e., antenna port 0. This reduced CRS would not be used for channel estimation but only for time- and frequency synchronization and measurements. PDSCH demodulation would thus primarily be based on the DM-RS. FIG. 1 shows a non-limiting example of a subframe for a carrier with 14 resource blocks where a cell-specific reference signal is transmitted in resource block 2-11, which may in other examples occupy all resource blocks (e.g. 0-13). User-specific reference signals may in this example be transmitted in resource block 0, 1, 2, 3, 10, 11, 12 and 13. However, in the prior art LTE system, the user-specific reference signals may comprise time-frequency resource elements (REs) overlapping with the synchronization signals or the broadcast channel. This implies that DM-RS based PDSCH transmission cannot be accommodated in such resource blocks. The six central resource blocks may, depending on subframe number, contain synchronization signals and a broadcast channel. In one example the reduced CRS would be transmitted in subframes where DM-RS overlaps with at least a synchronization signal. In other subframes, where synchronization signals and/or broadcast channels are not transmitted the reduced CRS may not even be present at all and the DM-RS may be utilised in all resource blocks. Thereby, subframes wherein all transmissions are based on the DM-RS would occur. There would therefore necessarily have to be a DCI format (similar to DCI format 1A) which schedules the PDCSH on DM-RS ports only. A system is considered wherein, for at least one subframe, a user-specific reference signal can be transmitted only in a subset resource blocks. The system further includes a cell-specific reference signal which is applicable for channel estimation for data channel demodulation. The data channel may thus be transmitted either by the user-specific reference signal or the cell-specific reference signal. 
A first problem comprises determining which reference signal (antenna port) should be utilised. A second problem comprises determining which resource blocks of a data channel assignment should be used. In the prior art LTE system, DM-RS based PDSCH transmission is not supported in resource blocks where the DM-RS would overlap with a synchronization signal or a broadcast channel. CRS-based PDSCH transmission is supported in all resource blocks. The designated antenna port is given by the configured transmission mode, the subframe type (i.e., normal subframe or MBSFN subframe) and, for some instances of DM-RS, additionally by explicit bits in the corresponding DCI format. In the prior art LTE system, both CRS and DM-RS can be transmitted, which leads to high reference signal overhead, decreased throughput and reduced overall system efficiency. It is a further objective to maximize the flexibility for the system to select a suitable reference signal (antenna port) for a given transmission while at the same time not requiring overhead signalling for informing the receiver about the selected antenna port. Hence, it is a problem to ensure that there is a reasonable trade-off between reference signal overhead and performance.
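The overhead trade-off can be made concrete with some back-of-the-envelope arithmetic based on the dimensions given earlier (a resource block spans 12 subcarriers over 7 OFDM symbols per 0.5 ms slot). The per-port reference-signal densities below are typical LTE values assumed for illustration only; they are not taken from this text.

SUBCARRIERS_PER_RB = 12
SYMBOLS_PER_SLOT = 7           # normal cyclic prefix
SLOTS_PER_SUBFRAME = 2

# 168 resource elements per RB pair (one subframe)
res_per_rb_pair = SUBCARRIERS_PER_RB * SYMBOLS_PER_SLOT * SLOTS_PER_SUBFRAME

# Assumed (typical) reference-signal REs per RB pair per subframe:
crs_res = {1: 8, 2: 16, 4: 24}     # CRS with 1, 2 or 4 antenna ports
dmrs_res_rank2 = 12                # DM-RS for up to two layers (ports 7-8)

for ports, n_res in crs_res.items():
    print(f"CRS, {ports} port(s): {n_res / res_per_rb_pair:.1%} of all REs")
print(f"DM-RS (rank 1-2):  {dmrs_res_rank2 / res_per_rb_pair:.1%} of all REs")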
Evaluations of specificity & sensitivity of rK39 test in Visceral Leishmaniasis and HIV Co-infection The rK39 strip test is a simple, non-invasive, sensitive and specific test for screening of Visceral Leishmaniasis (VL). Fifty clinically and parasitologically confirmed VL-HIV co-infected patients were enrolled from RMRIMS, Patna, Bihar. The objective behind highlighting this co-infection is to raise awareness among treating physicians that patients suffering from fever and hepatosplenomegaly might be co-infected with HIV. In the control arms, blood samples were taken from HIV-positive patients attending the ART centre at RMRIMS, Patna, and from relatives of Visceral Leishmaniasis patients living with the VL-HIV co-infected patients. The rK39 test was positive in 100% of the parasitologically confirmed VL-HIV co-infected patients, whereas it was negative in the control groups. These results suggest that the rK39 strip test shows high sensitivity and specificity in cases of VL-HIV co-infection.
For centuries, the idea of “healing thoughts” has held sway over the faithful. In recent decades it’s fascinated the followers of all manner of self-help movements, including those whose main purpose seems to be separating the sick from their money. Now, though, a growing body of scientific research suggests that our mind can play an important role in healing our body — or in staying healthy in the first place. In the book Cure, the veteran science journalist Jo Marchant brings her critical eye to this fascinating new terrain, sharing the latest discoveries and telling the stories of the people —Iraq war veterans among them — who are being helped by cures aimed at both body and mind. Marchant answered questions from Mind Matters editor Gareth Cook. You have taken on a topic where, historically, there has been a tremendous amount of quackery. What convinced you that there was a compelling scientific story to tell? The misunderstandings and false claims were one of the elements that drew me to the topic of mind-body medicine in the first place. The mind influences physiology in many ways — from stress to sexual arousal — so it has always seemed reasonable to me that it might impact health. Yet the question has become so polarized: advocates of alternative medicine claim miracle cures, while many conventional scientists and doctors insist any suggestion of “healing thoughts” is deluded. I was interested in those clashing philosophies: I wanted to look at why it is so difficult to have a reasoned debate about this issue. What drives so many people to believe in the pseudoscientific claims of alternative therapists, and why are skeptics so resistant to any suggestion that the mind might influence health? At the same time, I wanted to dig through the scientific research to find out what the evidence really says about the mind’s effects on the body. That took me around the world, interviewing scientists who are investigating this question (often struggling for funding or risking their reputations to do so) and their results persuaded me that as well as being an interesting sociological or philosophical story, this was a compelling scientific one. Examples include trials demonstrating that hypnotherapy is a highly effective treatment for patients with irritable bowel syndrome (IBS), and studies showing that perceived stress correlates with telomere length in cells. But what I personally found most convincing were studies suggesting an evolutionary rationale for the mind’s influence on health. There are now several lines of research suggesting that our mental perception of the world constantly informs and guides our immune system in a way that makes us better able to respond to future threats. That was a sort of ‘aha’ moment for me — where the idea of an entwined mind and body suddenly made more scientific sense than an ephemeral consciousness that’s somehow separated from our physical selves. What is known about what the placebo effect actually is, and what do you see as the biggest open questions? “Placebo effect” can be a confusing term, because it has several different meanings. It is sometimes used to cover anyone who feels better after receiving placebo (or fake) treatment, which of course includes all those people who would have improved anyway. But researchers are finding that taking a placebo can also have specific, measurable effects on the brain and body. As neuroscientist Fabrizio Benedetti, one of the pioneers of placebo research, puts it, there isn’t just one placebo effect but many. 
Placebo painkillers can trigger the release of natural pain-relieving chemicals called endorphins. Patients with Parkinson’s disease respond to placebos with a flood of dopamine. Fake oxygen, given to someone at altitude, has been shown to cut levels of neurotransmitters called prostaglandins (which dilate blood vessels, among other things, and are responsible for many of the symptoms of altitude sickness). None of these biological effects are caused by placebos themselves, which are by definition inert. They are triggered by our psychological response to those fake treatments. The active ingredients are complex and not fully understood but include our expectation that we will feel better (which in turn is affected by all sorts of factors such as our previous experience with treatment, how impressive or invasive a treatment is, and whether we’re an optimistic person) and feeling listened to and cared for. Another element is conditioning, where if we learn to associate a particular treatment — taking a pill, say — with a certain biological response, we experience that response when we take a similar pill in the future, even if it’s a placebo. This influences physiological functions such as hormone levels and immune responses, and works regardless of our conscious beliefs. Future questions include teasing out the psychological factors that shape placebo responses, and investigating why honest placebos (where someone knows they are taking a placebo) seem to work — this research has barely begun. Scientists also want to pin down exactly what conditions placebos work for (most research so far is on a few model systems, like pain, depression and Parkinson’s), and who they work for (both genes and personality seem to play a role). And then of course there is the question of how we can maximize these responses, and integrate them into routine clinical care in an honest way. Have you experienced any of these mind-over-body effects yourself? I took a placebo pill that I ordered online and it did get rid of a bad headache within about 20 minutes, but of course that’s not a scientific trial. Perhaps my headache would have faded anyway. I also experienced the value of social support when giving birth to my two children. I had dramatically different outcomes when supported by midwives I knew and trusted, compared to a series of strangers. Again, my case doesn’t prove anything on its own but this effect is borne out in trials with thousands of women: continuous one-on-one support during labor is one of the only known interventions that reduces the risk of surgery during childbirth. Mostly though, I experienced the effects I describe in the book through talking to people treated using some of these approaches, often participants in clinical trials. They included a kidney transplant patient drinking a lavender-flavored milk to calm his hostile immune system; people who have suffered decades of recurrent depression now kept well by mindfulness training; and pilgrims seeking healing at the religious sanctuary of Lourdes in France. Meeting these people took this beyond an intellectual project for me. They showed me how the scientific findings aren’t just statistics on the page but have the power to transform lives. You write about burn victims who are being treated, in part, with virtual reality. Can you explain this, and what lessons you think it holds? This is another therapy I got to try — researchers in Seattle have developed a virtual reality landscape called Snow World. 
You fly around inside an ice canyon and fire snowballs at characters inside the game, such as penguins and snowmen. It’s meant to work as a painkiller: the idea is that the brain has a limited capacity for attention, so if the ice canyon commands that attention, there is less capacity left over for experiencing pain. When I tried Snow World, the researchers used a heated box to simulate a burn to my foot – it was quite painful outside the game, but once immersed, I had so much fun I barely noticed it. This technique was developed to help burn victims — they have to undergo agonizing sessions of wound treatment and physiotherapy. Even when taking the maximum safe dose of painkillers these patients are often still left in horrible pain. Trials show that undergoing these sessions while immersed in Snow World reduces their pain by an extra 15-40% on top of the relief they get from drugs. This is just one of many lines of research telling us that the brain plays a big role in determining the level of pain we feel. Of course any physical damage is important, but it is neither sufficient nor necessary for us to feel pain. So I think we’ve got our approach to pain all wrong. Our focus is almost exclusively on trying to banish it with drugs, which is incredibly costly and causes huge problems with side effects and addiction. Research like Snow World shows the potential of psychological approaches for treating pain: both to maximize the effectiveness of drugs and perhaps in some cases to replace them.
def _element_name_set(self, name): if self._name is not None: raise Exception("Named forms cannot be used as elements.") self._element_name = name
The most recent anti-Semitism accusations against the Labour Party sprang up in response to a British legislator's March 23 tweet of an anti-Semitic mural, according to the New York Times. Corbyn, who endorsed the mural in 2012, was not quick enough with his denunciation of the tweet and mural to quell rising outrage. Jewish groups gathered to protest in front of the U.K.'s parliament on March 26. Corbyn sent a letter of apology to various Jewish groups before the protests. He also delivered a Passover message on Friday to further emphasize the Labour Party's support of the Jewish people. "It is easy to denounce antisemitism when you see it in other countries, in other political movements. It is sometimes harder to see it when it is closer to home. We in the labour movement will never be complacent about antisemitism. We all need to do better. I am committed to ensuring the Labour Party is a welcoming and secure place for Jewish people. And I hope this Passover will mark a move to stronger and closer relations between us and everyone in the Jewish community. In the fight against antisemitism, I am your ally and I always will be. I wish you and your family a Chag Sameach," Corbyn said in his message. Corbyn also mentioned that the first night of Passover 2018 marks 75 years since Jews in Warsaw, Poland, stood fast against the Nazis who intended to destroy the ghetto where they were forced to live. Corbyn noted afterward a worldwide rise in anti-Semitism, taking jabs at Poland, the National Front party in France, and "far right extremists" in the U.S., before offering an admission of guilt on behalf of his own party. Anti-Semitism is "more conspicuous, more commonplace, and more corrosive" in the Labour party than it has ever been in Jewish Labour Movement Chairwoman Luciana Berger's memory, she said in response to Corbyn's slow denunciation of the anti-Semitic mural, according to NYT.
Q: Which NFL kicker obtained a bachelor of science degree from South Dakota State: Sebastian Janikowski, Adam Vinatieri, Stephen Gostkowski or Mason Crosby?
Q: Ohio State's all-time leader in assists is also the school's career leader in steals. Who is it?
Q: UNC Greensboro coach Wes Miller was a former North Carolina guard who led the Tar Heels past Illinois for the 2005 national title. What three-time NBA All-Star did they beat in that game?
Q: Gonzaga is one of 15 teams in NCAA Division I basketball nicknamed the Bulldogs. Name the other 14.
Q: UNC Greensboro, by contrast, is one of five schools named the Spartans. List the other four.
Q: Since 2000, Ohio State is one of two schools to have sent both football and basketball teams to national championship games. Which is the other?
Q: Only six Division I players have scored 50 or more points in a single game since 2013. Two of those played for one of the schools in this pod. Name the school and name the players.
Q: Two of these four teams have met before in the NCAA Tournament – Gonzaga and South Dakota State in a 66-46 win for GU last season. Who was the leading scorer in that game?
Q: Which former American Idol star calls Greensboro, North Carolina, home: Adam Lambert, Carrie Underwood, Jordin Sparks or Chris Daughtry?
Q: Gonzaga and Ohio State have produced a handful of NBA players over the years. Which program has more active players in the league?
Q: Justin Jordan, a reserve guard for UNC Greensboro, is related to a celebrity with the same last name. Is it: NBA legend Michael Jordan, former singer/songwriter Montell Jordan, Los Angeles Clippers center DeAndre Jordan or actor Michael B. Jordan?
Q: Josh Perkins needs five 3-pointers in the NCAA Tournament to join eight other Gonzaga players who've made 200 in their career. Name four of the other eight.
Q: The last time the NCAA Tournament came through Boise (in 2009), Missouri beat Marquette in a Round of 32 game that boasted three future NBA starters. Name one of the three.
Q: The J.R. Simplot Company, a Boise-based agriculture giant, is famous for being the primary french fry distributor for what major fast-food chain?
A: Butler, Gardner-Webb, UNC Asheville, Louisiana Tech, Yale, South Carolina State, Drake, Fresno State, Bryant, Georgia, Mississippi State, Citadel, Samford, Alabama A&M.
A: Jimmy Butler and Wesley Matthews of Marquette. DeMarre Carroll of Missouri.
Published: March 13, 2018, 8:21 p.m.
/** * Check if this schedule has a course with the given ID. * * @param id The ID. * @return True if the same ID, otherwise False. */ public Boolean hasCourse(int id) { for (Course course : courses) if (course.getInt("id") == id) return true; return false; }
def update(google_key, darksky_key, log_level): lvl = getattr(logging, log_level.upper()) logging.basicConfig(level=lvl) logger.setLevel(lvl) logger.info('Updating MV Polar Bears data sheet') client, doc, sheet = get_client(google_key) add_missing_days(sheet) add_missing_dows(sheet) add_missing_weather(sheet, darksky_key) add_missing_water(sheet) logger.info('Update complete')
Enantioselective epoxidation of electron-deficient olefins: an organocatalytic approach. Versatile synthetic intermediates, α,β-epoxyketones and α,β-epoxyaldehydes, can be obtained through asymmetric organocatalytic epoxidation of α,β-unsaturated ketones and aldehydes. This Review focuses on some recent advances in these epoxidation reactions with respect to scope and limitations with polyamino acids, phase-transfer catalysts (PTCs), amines, and guanidines as chiral organocatalysts. Furthermore, recent results obtained with chiral peroxides are discussed.
Disaster Preparedness for Pets [...] URI, 71% and 54% of cats developed diarrhea, and 91% and 83% of cats had at least one disease in 2011 and 2012, respectively. Administration of multiple drugs (more than five) was associated with prolonged URI and diarrhea. Multiple antibiotics, antihistamines, interferon, and steroids were associated with relapse and prolongation of URI. Conclusion: The incidence of disease in cats at the shelter was high. Developing a standardized treatment protocol for commonly observed diseases at Japanese animal shelters to prevent and control diseases, to promote animal welfare, and to protect public health in the face of future disasters is overdue.
// license:BSD-3-Clause // copyright-holders:<NAME>, <NAME> #include "emu.h" #include "machine/jalcrpt.h" void phantasm_rom_decode(running_machine &machine, const char *region) { uint16_t *RAM = (uint16_t *) machine.root_device().memregion(region)->base(); int i, size = machine.root_device().memregion(region)->bytes(); if (size > 0x40000) size = 0x40000; for (i = 0 ; i < size/2 ; i++) { uint16_t x,y; x = RAM[i]; // [0] def0 189a bc56 7234 // [1] fdb9 7531 eca8 6420 // [2] 0123 4567 ba98 fedc #define BITSWAP_0 bitswap<16>(x,0xd,0xe,0xf,0x0,0x1,0x8,0x9,0xa,0xb,0xc,0x5,0x6,0x7,0x2,0x3,0x4) #define BITSWAP_1 bitswap<16>(x,0xf,0xd,0xb,0x9,0x7,0x5,0x3,0x1,0xe,0xc,0xa,0x8,0x6,0x4,0x2,0x0) #define BITSWAP_2 bitswap<16>(x,0x0,0x1,0x2,0x3,0x4,0x5,0x6,0x7,0xb,0xa,0x9,0x8,0xf,0xe,0xd,0xc) if (i < 0x08000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x10000/2) { y = BITSWAP_2; } else if (i < 0x18000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x20000/2) { y = BITSWAP_1; } else { y = BITSWAP_2; } #undef BITSWAP_0 #undef BITSWAP_1 #undef BITSWAP_2 RAM[i] = y; } } void astyanax_rom_decode(running_machine &machine, const char *region) { uint16_t *RAM = (uint16_t *) machine.root_device().memregion(region)->base(); int i, size = machine.root_device().memregion(region)->bytes(); if (size > 0x40000) size = 0x40000; for (i = 0 ; i < size/2 ; i++) { uint16_t x,y; x = RAM[i]; // [0] def0 a981 65cb 7234 // [1] fdb9 7531 8ace 0246 // [2] 4567 0123 ba98 fedc #define BITSWAP_0 bitswap<16>(x,0xd,0xe,0xf,0x0,0xa,0x9,0x8,0x1,0x6,0x5,0xc,0xb,0x7,0x2,0x3,0x4) #define BITSWAP_1 bitswap<16>(x,0xf,0xd,0xb,0x9,0x7,0x5,0x3,0x1,0x8,0xa,0xc,0xe,0x0,0x2,0x4,0x6) #define BITSWAP_2 bitswap<16>(x,0x4,0x5,0x6,0x7,0x0,0x1,0x2,0x3,0xb,0xa,0x9,0x8,0xf,0xe,0xd,0xc) if (i < 0x08000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x10000/2) { y = BITSWAP_2; } else if (i < 0x18000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x20000/2) { y = BITSWAP_1; } else { y = BITSWAP_2; } #undef BITSWAP_0 #undef BITSWAP_1 #undef BITSWAP_2 RAM[i] = y; } } void rodland_rom_decode(running_machine &machine, const char *region) { uint16_t *RAM = (uint16_t *) machine.root_device().memregion(region)->base(); int i, size = machine.root_device().memregion(region)->bytes(); if (size > 0x40000) size = 0x40000; for (i = 0 ; i < size/2 ; i++) { uint16_t x,y; x = RAM[i]; // [0] d0a9 6ebf 5c72 3814 [1] 4567 0123 ba98 fedc // [2] fdb9 ce07 5318 a246 [3] 4512 ed3b a967 08fc #define BITSWAP_0 bitswap<16>(x,0xd,0x0,0xa,0x9,0x6,0xe,0xb,0xf,0x5,0xc,0x7,0x2,0x3,0x8,0x1,0x4); #define BITSWAP_1 bitswap<16>(x,0x4,0x5,0x6,0x7,0x0,0x1,0x2,0x3,0xb,0xa,0x9,0x8,0xf,0xe,0xd,0xc); #define BITSWAP_2 bitswap<16>(x,0xf,0xd,0xb,0x9,0xc,0xe,0x0,0x7,0x5,0x3,0x1,0x8,0xa,0x2,0x4,0x6); #define BITSWAP_3 bitswap<16>(x,0x4,0x5,0x1,0x2,0xe,0xd,0x3,0xb,0xa,0x9,0x6,0x7,0x0,0x8,0xf,0xc); if (i < 0x08000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x10000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_2;} else {y = BITSWAP_3;} } else if (i < 0x18000/2) { if ( (i | (0x248/2)) != i ) {y = BITSWAP_0;} else {y = BITSWAP_1;} } else if (i < 0x20000/2) { y = BITSWAP_1; } else { y = BITSWAP_3; } #undef BITSWAP_0 #undef BITSWAP_1 #undef BITSWAP_2 #undef BITSWAP_3 RAM[i] = y; } } /********** DECRYPT **********/ /* 4 known types */ /* SS91022-10: desertwr, gratiaa, tp2m32, gametngk */ /* SS92046_01: 
bbbxing, f1superb, tetrisp, hayaosi2 */ /* SS92047-01: gratia, kirarast */ /* SS92048-01: p47aces, 47pie2, 47pie2o */ void decrypt_ms32_tx(running_machine &machine, int addr_xor,int data_xor, const char *region) { int i; uint8_t *source_data; int source_size; source_data = machine.root_device().memregion( region )->base(); source_size = machine.root_device().memregion( region )->bytes(); std::vector<uint8_t> result_data(source_size); addr_xor ^= 0x1005d; for(i=0; i<source_size; i++) { int j; /* two groups of cascading XORs for the address */ j = 0; i ^= addr_xor; if (BIT(i,18)) j ^= 0x40000; // 18 if (BIT(i,17)) j ^= 0x60000; // 17 if (BIT(i, 7)) j ^= 0x70000; // 16 if (BIT(i, 3)) j ^= 0x78000; // 15 if (BIT(i,14)) j ^= 0x7c000; // 14 if (BIT(i,13)) j ^= 0x7e000; // 13 if (BIT(i, 0)) j ^= 0x7f000; // 12 if (BIT(i,11)) j ^= 0x7f800; // 11 if (BIT(i,10)) j ^= 0x7fc00; // 10 if (BIT(i, 9)) j ^= 0x00200; // 9 if (BIT(i, 8)) j ^= 0x00300; // 8 if (BIT(i,16)) j ^= 0x00380; // 7 if (BIT(i, 6)) j ^= 0x003c0; // 6 if (BIT(i,12)) j ^= 0x003e0; // 5 if (BIT(i, 4)) j ^= 0x003f0; // 4 if (BIT(i,15)) j ^= 0x003f8; // 3 if (BIT(i, 2)) j ^= 0x003fc; // 2 if (BIT(i, 1)) j ^= 0x003fe; // 1 if (BIT(i, 5)) j ^= 0x003ff; // 0 i ^= addr_xor; /* simple XOR for the data */ result_data[i] = source_data[j] ^ (i & 0xff) ^ data_xor; } memcpy (source_data, &result_data[0], source_size); } void decrypt_ms32_bg(running_machine &machine, int addr_xor,int data_xor, const char *region) { int i; uint8_t *source_data; int source_size; source_data = machine.root_device().memregion( region )->base(); source_size = machine.root_device().memregion( region )->bytes(); std::vector<uint8_t> result_data(source_size); addr_xor ^= 0xc1c5b; for(i=0; i<source_size; i++) { int j; /* two groups of cascading XORs for the address */ j = (i & ~0xfffff); /* top bits are not affected */ i ^= addr_xor; if (BIT(i,19)) j ^= 0x80000; // 19 if (BIT(i, 8)) j ^= 0xc0000; // 18 if (BIT(i,17)) j ^= 0xe0000; // 17 if (BIT(i, 2)) j ^= 0xf0000; // 16 if (BIT(i,15)) j ^= 0xf8000; // 15 if (BIT(i,14)) j ^= 0xfc000; // 14 if (BIT(i,13)) j ^= 0xfe000; // 13 if (BIT(i,12)) j ^= 0xff000; // 12 if (BIT(i, 1)) j ^= 0xff800; // 11 if (BIT(i,10)) j ^= 0xffc00; // 10 if (BIT(i, 9)) j ^= 0x00200; // 9 if (BIT(i, 3)) j ^= 0x00300; // 8 if (BIT(i, 7)) j ^= 0x00380; // 7 if (BIT(i, 6)) j ^= 0x003c0; // 6 if (BIT(i, 5)) j ^= 0x003e0; // 5 if (BIT(i, 4)) j ^= 0x003f0; // 4 if (BIT(i,18)) j ^= 0x003f8; // 3 if (BIT(i,16)) j ^= 0x003fc; // 2 if (BIT(i,11)) j ^= 0x003fe; // 1 if (BIT(i, 0)) j ^= 0x003ff; // 0 i ^= addr_xor; /* simple XOR for the data */ result_data[i] = source_data[j] ^ (i & 0xff) ^ data_xor; } memcpy (source_data, &result_data[0], source_size); }
/** * Created by JiangQi on 8/22/18. */ public class SlideBackButtonDemoScene extends Scene { @NonNull @Override public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) { SlidePercentFrameLayout layout = new SlidePercentFrameLayout(getActivity()); layout.setFitsSystemWindows(true); final Button button = new Button(getActivity()); button.setAllCaps(false); button.setText(R.string.main_anim_btn_ios_anim); LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, 150); lp.topMargin = 20; lp.leftMargin = 20; lp.rightMargin = 20; layout.addView(button, lp); final InteractionNavigationPopAnimationFactory interactionNavigationPopAnimationFactory = new InteractionNavigationPopAnimationFactory() { @Override public boolean isSupport(Scene from, Scene to) { return true; } @Override protected List<InteractionAnimation> onPopInteraction(Scene from, Scene to) { MainScene mainScene = (MainScene) to; AnimationListDemoScene animationListDemoScene = findTargetScene(mainScene); int[] buttonLocation = new int[2]; button.getLocationInWindow(buttonLocation); int[] buttonLocation2 = new int[2]; animationListDemoScene.mInteractionButton.getLocationInWindow(buttonLocation2); List<InteractionAnimation> a = new ArrayList<>(); a.add(InteractionAnimationBuilder.with(button).translationXBy(buttonLocation2[0] - buttonLocation[0]).endProgress(0.5f).build()); a.add(InteractionAnimationBuilder.with(button).translationYBy(buttonLocation2[1] - buttonLocation[1]).endProgress(0.5f).build()); a.add(DrawableAnimationBuilder.with(getView().getBackground()).alpha(255, 0).endProgress(0.5f).build()); return a; } @Override protected boolean canExit(float progress) { return progress > 0.3f; } @Override protected void onInteractionCancel() { } @Override protected void onInteractionEnd() { requireNavigationScene().pop(new PopOptions.Builder().setAnimation(new NoAnimationExecutor()).build()); } }; layout.setCallback(new SlidePercentFrameLayout.Callback() { @Override public boolean isSupport() { return true; } @Override public void onStart() { requireNavigationScene().pop(interactionNavigationPopAnimationFactory); requireNavigationScene().convertBackgroundToBlack(); } @Override public void onFinish() { interactionNavigationPopAnimationFactory.finish(); } @Override public void onProgress(float progress) { interactionNavigationPopAnimationFactory.updateProgress(progress); } }); layout.setBackgroundColor(ColorUtil.getMaterialColor(getResources(), 1)); TextView textView = new TextView(getActivity()); textView.setPadding(0, 400, 0, 0); textView.setText(R.string.anim_ios_interaction_tip); textView.setGravity(Gravity.CENTER); layout.addView(textView, new ViewGroup.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT)); return layout; } private static AnimationListDemoScene findTargetScene(GroupScene groupScene) { List<Scene> childSceneList = groupScene.getSceneList(); for (int i = 0; i < childSceneList.size(); i++) { Scene scene = childSceneList.get(i); if (scene instanceof AnimationListDemoScene) { return (AnimationListDemoScene) scene; } else if (scene instanceof GroupScene) { AnimationListDemoScene animationListDemoScene = findTargetScene((GroupScene) scene); if (animationListDemoScene != null) { return animationListDemoScene; } } } return null; } }
The study we performed aimed at identifying the arrhythmological pattern in football players. Between 1984 and 1989, 50 top level football players (group A) from the National Olympic team and from the National A team, average age 24.2 years (min. 19, max. 32), underwent Holter monitoring. The recordings were carried out in different environmental conditions according to the programmes of the team, and the number of recordings depended on how long each football player stayed in the National team. Moreover, 40 trainers (group B) from the Italian football teams, average age 38.4 years (min. 32, max. 57), all of whom had formerly been professional high-level football players practising intensive physical exercise for professional reasons, underwent one 24 h Holter monitoring. RESULTS. Group A: 2621 hours of monitoring could be analysed in 48/50 football players. Sinus node pauses greater than or equal to 1750 ms were found in 21/48 (43.7%) with a maximum of 3740 ms at altitude in 1/21, second degree atrioventricular block in 8/48 (16.7%) with a maximum of 5400 ms at altitude in 1/8, supraventricular ectopic beats in 13/48 (27%), and ventricular ectopic beats in 26/48 (54.1%), which were complex (Lown class greater than or equal to 3) in 7/26. Group B: 882.30 hours of monitoring could be analysed in 39/40 former football players. Sinus node pauses greater than or equal to 1750 ms were found in 18/39 (46.1%) with a maximum of 2280 ms in 7/18, second degree atrioventricular block in 1/39 (2.6%) with a maximum of 2400 ms, supraventricular ectopic beats in 32/39 (82%), and ventricular ectopic beats in 24/39 (61.5%), which were complex in 5/24. (ABSTRACT TRUNCATED AT 250 WORDS)
Flow of formation water in the Jurassic of the Paris Basin and its effects The Jurassic of the Paris Basin is a major target for oil and geothermal exploitation. Interpretation of data from numerous wells reveals that the spatial distribution of fluid and reservoir parameters is heterogeneous with correlated anomalies for temperature and geochemistry. The multidisciplinary approach used to describe these typical characteristics shows that hydrodynamics provides the key to explain observed deviations from the usual correlations with depth. As a consequence of the coupling between hydrodynamics and fluid properties, the analysis of induced effects is a calibration criterion which can be used to improve the numerical simulation of the regional flow path. Compared to the results of the constant density approach, the new computer flow scheme, both gravity and density driven, is more consistent with the thermal and chemical anomalies observed. The refined analysis, including density effects, confirms the existence of a confined area and a mixing zone around Paris, previously identified by geochemical investigations.
/*
 * Copyright (c) Contributors to the Open 3D Engine Project.
 * For complete copyright and license terms please see the LICENSE at the root of this distribution.
 *
 * SPDX-License-Identifier: Apache-2.0 OR MIT
 *
 */

#include <ActionManager/PythonEditorAction.h>

namespace EditorPythonBindings
{
    PythonEditorAction::PythonEditorAction(PyObject* handler)
        : m_handler(handler)
    {
    }

    PyObject* PythonEditorAction::GetHandler()
    {
        return m_handler;
    }

    const PyObject* PythonEditorAction::GetHandler() const
    {
        return m_handler;
    }

} // namespace EditorPythonBindings
Maternal smoking during pregnancy and primary headache in school-aged children: a cohort study Background: It is not known whether smoking by mothers during pregnancy is associated with headache in their offspring. Methods: Two prospective cohorts of 869 children aged 10–11 years from Ribeirão Preto (RP) and 805 children aged 7–9 years from São Luís (SL) were studied. Data on maternal smoking were collected at birth. Primary headache was defined as a report of ≥2 episodes of headache in the past 2 weeks, without any associated organic symptoms. Results: Prevalence of headache was 28.1% in RP and 13.1% in SL as reported by the mothers, and 17.5% in RP and 29.4% in SL as reported by the children. Agreement between the mothers' reports and the children's self-reports of primary headache in the child was poor. After adjustment, children whose mothers smoked ≥10 cigarettes per day during pregnancy presented a higher prevalence of primary headache than their counterparts in both cohorts as reported by the mothers, and in RP as reported by the children. Conclusions: Maternal smoking during pregnancy was associated with headache in 7- to 11-year-olds. With one exception, the consistency of the results, despite poor agreement between maternal and child reports of headache, indicates that maternal smoking during pregnancy may contribute to headaches in their children.
Amnesty International is calling for an urgent independent investigation into the reported deaths of at least 51 people outside the Republican Guard headquarters today. “There is a crucial need for independent and impartial investigations that can be trusted by all sides. However, Egypt’s authorities have a poor track record of delivering truth and justice for human rights violations. “Past military investigations have white-washed army abuses, and the authorities have buried the conclusions of a fact-finding report they ordered into protester-killings, refusing to make it public. Egypt’s Public Prosecution has spent more time charging government critics than it has prosecuting the police and army for human rights violations. “Effective investigations are critical to stop officials from repeating human rights violations. The head of the army’s Republican Guard is the same man who led a deadly crackdown on protesters in front of the cabinet building in December 2011.
Dicyanoisophorone-Based Near-Infrared-Emission Fluorescent Probe for Detecting NAD(P)H in Living Cells and in Vivo. NADH and NADPH are ubiquitous coenzymes in all living cells that play vital roles in numerous redox reactions in cellular energy metabolism. To accurately detect the distribution and dynamic changes of NAD(P)H under physiological conditions is essential for understanding their biological functions and pathological roles. In this work, we developed a near-infrared (NIR)-emission fluorescent small-molecule probe (DCI-MQ) composed of a dicyanoisophorone chromophore conjugated to a quinolinium moiety for in vivo NAD(P)H detection. DCI-MQ has the advantages of high water solubility, a rapid response, extraordinary selectivity, great sensitivity (a detection limit of 12 nM), low cytotoxicity, and NIR emission (660 nm) in response to NAD(P)H. Moreover, the probe DCI-MQ was successfully applied for the detection and imaging of endogenous NAD(P)H in both living cells and tumor-bearing mice, which provides an effective tool for the study of NAD(P)H-related physiological and pathological processes.
package es.developer.achambi.pkmng.modules.search.configuration;

import es.developer.achambi.coreframework.threading.MainExecutor;
import es.developer.achambi.pkmng.modules.ConfigurationDataAssembler;
import es.developer.achambi.pkmng.modules.search.configuration.presenter.ISearchConfigurationPresenterFactory;
import es.developer.achambi.pkmng.modules.search.configuration.presenter.SearchConfigurationPresenterFactory;

public class SearchConfigurationAssembler {
    private MainExecutor mainExecutor;
    private ConfigurationDataAssembler configurationDataAssembler;

    public SearchConfigurationAssembler setMainExecutor(MainExecutor mainExecutor) {
        this.mainExecutor = mainExecutor;
        return this;
    }

    public SearchConfigurationAssembler setConfigurationDataAssembler(
            ConfigurationDataAssembler configurationDataAssembler) {
        this.configurationDataAssembler = configurationDataAssembler;
        return this;
    }

    public ISearchConfigurationPresenterFactory getPresenterFactory() {
        return new SearchConfigurationPresenterFactory(
                configurationDataAssembler.getConfigurationDataAccess(),
                mainExecutor
        );
    }
}
package com.tweebaa.apicustomeradmin.controller; import com.fasterxml.jackson.annotation.JsonProperty; import lombok.*; import java.util.List; @Data @NoArgsConstructor @AllArgsConstructor public class PageCustomers { // "totalCount":22,"pageSize":10,"totalPage":3,"currPage":2,"list": // @Getter // @Setter private Integer totalCount=0; private Integer pageSize=0; private Integer totalPage=0; private Integer currPage=0; private List list; }
/****************************************
*  Computer Algebra System SINGULAR     *
****************************************/
/***************************************************************
 *
 * File:       mpsr_Timer.cc
 * Purpose:    definitions for a simple timer
 * Author:     Olaf Bachmann (10/95)
 *
 * Change History (most recent first):
 *
 ***************************************************************/
#include "mpsr_Timer.h"

#include <stdio.h>
#include <sys/times.h>   /* times(), struct tms */

/* Record the current real, system and user times as the timer's start point. */
void mpsr_StartTimer(mpsr_Timer_pt t_mpsr)
{
  tms t_tms;
  t_mpsr->t_time = times(&t_tms);
  t_mpsr->s_time = t_tms.tms_stime;
  t_mpsr->u_time = t_tms.tms_utime;
}

/* Replace the stored start values by the elapsed real, system and user times. */
void mpsr_StopTimer(mpsr_Timer_pt t_mpsr)
{
  tms t_tms;
  t_mpsr->t_time = times(&t_tms) - t_mpsr->t_time;
  t_mpsr->s_time = t_tms.tms_stime - t_mpsr->s_time;
  t_mpsr->u_time = t_tms.tms_utime - t_mpsr->u_time;
}

/* Print the elapsed user, system and real times in seconds, prefixed by str. */
void mpsr_PrintTimer(mpsr_Timer_pt t_mpsr, char *str)
{
  printf("%s", str);
  printf("User time:   %.2f \n", (float) t_mpsr->u_time / (float) HZ);
  printf("System time: %.2f \n", (float) t_mpsr->s_time / (float) HZ);
  printf("Real time:   %.2f \n", (float) t_mpsr->t_time / (float) HZ);
}
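A minimal usage sketch for the timer API above. The work() function, the mpsr_Timer_t type name behind mpsr_Timer_pt, and the cast on the label string are assumptions made for illustration and are not part of the original source.

```c
/* Hypothetical example: timing a unit of work with the mpsr timer API above. */
#include "mpsr_Timer.h"

static void work(void)                 /* placeholder for the code being measured */
{
  for (volatile long i = 0; i < 10000000L; i++) ;
}

int main(void)
{
  mpsr_Timer_t timer;                  /* assumed typedef behind mpsr_Timer_pt */

  mpsr_StartTimer(&timer);             /* snapshot user/system/real times */
  work();
  mpsr_StopTimer(&timer);              /* convert snapshots into elapsed times */
  mpsr_PrintTimer(&timer, (char *)"work(): ");
  return 0;
}
```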
Compact Implementation of ARIA on 16-Bit MSP430 and 32-Bit ARM Cortex-M3 Microcontrollers In this paper, we propose the first ARIA block cipher on both MSP430 and Advanced RISC Machines (ARM) microcontrollers. To achieve the optimized ARIA implementation on target embedded processors, core operations of ARIA, such as substitute and diffusion layers, are carefully re-designed for both MSP430 (Texas Instruments, Dallas, TX, USA) and ARM Cortex-M3 microcontrollers (STMicroelectronics, Geneva, Switzerland). In particular, two bytes of input data in ARIA block cipher are concatenated to re-construct the 16-bit wise word. The 16-bit word-wise operation is executed at once with the 16-bit instruction to improve the performance for the 16-bit MSP430 microcontroller. This approach also optimizes the number of required registers, memory accesses, and operations to half numbers rather than 8-bit word wise implementations. For the ARM Cortex-M3 microcontroller, the 8 32 look-up table based ARIA block cipher implementation is further optimized with the novel memory access. The memory access is finely scheduled to fully utilize the 3-stage pipeline architecture of ARM Cortex-M3 microcontrollers. Furthermore, the counter (CTR) mode of operation is more optimized through pre-computation techniques than the electronic code book (ECB) mode of operation. Finally, proposed ARIA implementations on both low-end target microcontrollers (MSP430 and ARM Cortex-M3) achieved (209 and 96 for 128-bit security level, respectively), (241 and 111 for 192-bit security level, respectively), and (274 and 126 for 256-bit security level, respectively). Compared with previous works, the running timing on low-end target microcontrollers (MSP430 and ARM Cortex-M3) is improved by (92.20% and 10.09% for 128-bit security level, respectively), (92.26% and 10.87% for 192-bit security level, respectively), and (92.28% and 10.62% for 256-bit security level, respectively). The proposed ARIACTR implementation improved the performance by 6.6% and 4.0% compared to the proposed ARIAECB implementations for MSP430 and ARM Cortex-M3 microcontrollers, respectively. Introduction The data encryption is important for the network security. The computation of secure encryption requires high overheads for low-end microcontrollers. In order to achieve high availability on low-end microcontrollers, the efficient implementation of block cipher has been actively studied. For the efficient implementation, unique features of target block ciphers should be considered for optimizations. In this paper, we optimized the electronic code book (ECB) and counter (CTR) modes of operation for ARIA block cipher on both MSP430 and Advanced RISC Machine (ARM) Cortex-M3 microcontrollers. Proposed in 2004, ARIA block cipher is the standards of South Korean and IETF. Recently, the ARIA on low-end Alf and Vegard's RISC (AVR) was presented by. ARIA implementations on 8-bit AVR required 198.3 (for 128-bit security level), 228.0 (for 192-bit security level), and 257.8 (for 256-bit security level) clock cycles per byte, respectively. However, optimized implementations of ARIA block cipher on both MSP430 and ARM Cortex-M3 microcontrollers have not been studied. Compared with 8-bit AVR microcontrollers, target microcontrollers have different architectures, in terms of word size, instruction set, general purpose registers, and pipeline stages. 
For this reason, specialized optimization techniques should be investigated for high performance on both MSP430 and ARM Cortex-M3 microcontrollers. In this work, we improved ARIA block cipher on both MSP430 and ARM Cortex-M3 microcontrollers. ARIA implementations are optimized by considering unique features of target microcontrollers and adopting the state-of-art engineering technique. Furthermore, we proposed ARIA-CTR implementations on both target microcontrollers. Contribution The first ARIA block cipher on both MSP430 and ARM Cortex-M3 microcontrollers: Primitive operations of ARIA block cipher, such as substitute and diffusion layers, are efficiently optimized for both MSP430 and ARM Cortex-M3 microcontrollers. With these optimized operations, high-speed implementations of ARIA block cipher are achieved. Optimized ARIA block cipher implementations for 16-bit MSP430 microcontrollers: Two bytes of input data are concatenated to re-construct the 16-bit word. The operation on the 16-bit word is executed at once to improve the performance and reduce the number of required general purpose registers, memory accesses, and operations, for the 16-bit MSP430 microcontroller. Proposed ARIA implementations on 16-bit MSP430 microcontrollers achieved 209 (for 128-bit security level), 241 (for 192-bit security level), and 274 (for 256-bit security level) clock cycles per byte, respectively. Compared with former works on the identical processor, the running timing is optimized by 92.20% (for 128-bit security level), 92.26% (for 192-bit security level), and 92.28% (for 256-bit security level), respectively. Optimized ARIA implementations for ARM Cortex-M3 microcontrollers: For the ARM Cortex-M3 microcontroller, the pre-computed table based ARIA implementation is further optimized. The memory access is finely re-scheduled to utilize the 3-stage pipeline architecture of ARM Cortex-M3 microcontroller. Finally, proposed ARIA implementations on ARM Cortex-M3 microcontrollers achieved 96 (for 128-bit security level), 111 (for 192-bit security level), and 126 (for 256-bit security level) clock cycles per byte, respectively. Compared with former ARIA implementations on the identical processor, the execution timing is enhanced by 10.09%, 10.87%, and 10.62% for 128-bit, 192-bit, and 256-bit security levels, respectively. Efficient implementation of ARIA-CTR on MSP430 and ARM Cortex-M3 microcontrollers: The implementation of ARIA-CTR is further optimized for MSP430 and ARM Cortex-M3 microcontrollers. For the 16-bit MSP430 microcontroller, 1 substitution layer, 1 diffusion layer, and 2 add-round-key operations are optimized away. For the 32-bit ARM Cortex-M3 microcontroller, both M S layer and M 1 layer are optimized with precomputation. With the above optimizations, the performance of the proposed ARIA-CTR implementations are improved over the proposed ARIA-ECB implementations by 6.6% and 4.0% for MSP430 and ARM Cortex-M3 microcontrollers, respectively. The remainder of this paper is organized as follows. Section 2 presents an overview of the ARIA block cipher and previous block cipher implementations on both 16-bit MSP430 and 32-bit ARM Cortex-M3 microcontrollers. In Section 3, proposed implementations of ARIA block cipher on both 16-bit MSP430 and 32-bit ARM Cortex-M3 microcontrollers are presented. In Section 4, the performance evaluation of proposed implementations is described. Finally, the conclusion is given in Section 5. 
Target Block Cipher: ARIA ARIA block cipher consists of a substitution layer, diffusion layer, and add-round-key. Similar to AES block cipher, the substitution layer executes an affine transformation of the inversion function on Galois Field and the diffusion layer executes a simple linear map operation. The add-round-key executes eXclusive-OR operation with plaintext and round key. ARIA encryption and decryption operations share the identical architecture. This feature optimizes the chip size and code size for hardware and software implementations, respectively. Target Microcontrollers: 16-Bit MSP430 and 32-Bit ARM Cortex-M3 The MSP430 microcontroller is a representative 16-bit embedded processor board with a clock frequency of 8-16 MHz, 32-48 KB of flash memory, 10 KB of RAM, and 12 general purpose registers from R4 to R15. The microcontroller provides sufficient basic arithmetic instructions for implementations. Instructions for block cipher implementations on the MSP430 microcontroller are described in Table 1. ARM Cortex-M3 is 32-bit microcontroller and designed for embedded computing services. The microcontroller provides low energy consumption with high performance. Arithmetic instructions take one clock cycle but memory access instructions take more clock cycles. The microcontroller supports the barrel-shifter, which performs rotated or shifted registers without additional costs. Instructions for block cipher implementations on the ARM Cortex-M3 microcontroller are described in Table 2. Former Symmetric Key Cryptography on 16-Bit MSP and 32-Bit ARM Microcontrollers In, an optimized implementation of authenticated encryption on MSP430X microcontrollers was presented. In, efficient implementations of AES (132 cycles/byte) and SPECK (103 cycles/byte) block ciphers on the MSP430 microcontroller were presented, respectively. In, the encryption mode of the tweakable block cipher of the SCREAM authenticated cipher is implemented in the MSP430 microcontroller. In, the implementation of Simeck on the MSP430 microcontroller reduces the code size by 19.32% and improves the execution timing by 3.75 times. In, a compact implementation of Chaskey on the MSP430 microcontroller was presented. Similarly, many block ciphers were implemented on MSP430 microcontrollers. For the case of ARM processors, many implementations were also investigated. In WISA'13, LEA block cipher on the 32-bit ARM processor was introduced. Primitive operations of LEA block cipher were optimized for the 32-bit ARM microcontroller. In, AES-CTR implementations were presented and achieved optimal AES implementations. In, the new efficient software design of PRESENT block cipher was presented. The CTR mode of operation takes 2100 cycles on the Cortex-M3 microcontroller, which improves the performance by a factor of 8. In, a 384-bit permutation design (i.e., Gimli) is efficiently implemented on the 32-bit ARM processor. In, constant time implementations of GIFT block cipher on the ARM Cortex-M3 microcontroller were presented. The 128-bit data can be encrypted with only about 800 cycles for GIFT-64 and about 1300 cycles for GIFT-128. In, fixslicing-based AES implementations were also evaluated on ARM Cortex-M. Similarly, many block ciphers were implemented on ARM Cortex-M microcontrollers [8,. However, previous works do not optimize the ARIA block cipher on both target microcontrollers (MSP430 and ARM Cortex-M3). In this paper, we present the first optimized implementation of ARIA block cipher on both microcontrollers. 
Proposed Method

Since the original word length of the ARIA block cipher is 8 bits, ARIA is efficient to implement on 8-bit architectures, as described in. However, the 8-bit word-based ARIA architecture is not efficient for the 16-bit case, so optimizations for the 16-bit architecture should be considered. For the 32-bit case, the designers of the ARIA block cipher suggested techniques to combine the substitution and diffusion layers; this approach performs the computation efficiently with 8 × 32 look-up table accesses, which is the optimal method for the 32-bit architecture. We implemented ARIA for both MSP430 and ARM Cortex-M3 microcontrollers. For MSP430 microcontrollers, two 8-bit operations are combined into one 16-bit word operation for an efficient diffusion layer, and 8-bit memory accesses are handled efficiently for the substitution layer. For ARM Cortex-M3 microcontrollers, the previous look-up table based access is further optimized by considering the 3-stage pipelining of the 32-bit ARM Cortex-M3 microcontroller: instructions are re-scheduled to avoid pipeline stalls, and byte-wise rotation operations are implemented efficiently with ARM native instructions.

Optimized ARIA Implementation on 16-Bit MSP430

The MSP430 microcontroller has twelve 16-bit general purpose registers. Table 3 presents the general purpose register utilization for ARIA encryption on the 16-bit MSP430 microcontroller; registers are used for different purposes, such as the plaintext pointer, round key pointer, plaintext, loop counter, and temporal variables.

Diffusion Layer: The diffusion layer executes consecutive XOR operations on 8-bit words in a certain order. Some XOR operations of the diffusion layer are repeated several times (the T values of Algorithm 1). These repeated parts can be computed once, and the cached results can then be reused several times to reduce the number of computations. Detailed descriptions of the sequential diffusion layer are presented in Algorithm 1. In Steps 1, 6, 11, and 16, some parts of the XOR operations are pre-computed; these results are then reused in the following steps (2-5, 7-10, 12-15, and 17-20). However, the target microcontroller supports a 16-bit word size and 16-bit instructions, and the straightforward implementation of the 8-bit pre-computation technique (i.e., Algorithm 1) is inefficient for the 16-bit MSP430 microcontroller, because only half of each register is utilized during the computation.

Algorithm 1. Sequential diffusion layer of ARIA block cipher for 8-bit AVR.

In Algorithm 2, the implementation of a 2-way diffusion layer that utilizes the 16-bit word of the 16-bit MSP430 microcontroller is presented. Unlike the straightforward implementation, two 8-bit words are concatenated to form a 16-bit word (i.e., the word size of the MSP430 microcontroller) and a 16-bit XOR operation is performed at once. As in the previous approach, the 16-bit pre-computation (on the concatenated word TH‖TL) is performed and the result is reused several times in other steps. Compared with the previous approach, this halves the required number of XOR operations and of general purpose registers.
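As an illustration of the 2-way idea, the following C sketch shows the generic pattern of packing two adjacent 8-bit state bytes into one 16-bit word so that a single 16-bit XOR replaces two 8-bit XORs. It is only a simplified illustration under the assumption of a flat 16-byte state; it does not reproduce the exact byte pairing, register allocation, or T-value caching of Algorithm 2.

```c
/*
 * Minimal sketch of the 2-way (16-bit) diffusion idea: two adjacent 8-bit
 * bytes are treated as one 16-bit word, so one 16-bit XOR does the work of
 * two 8-bit XORs.  The pairing and the XOR pattern are placeholders, not
 * the real ARIA diffusion layer.
 */
#include <stdint.h>

void diffusion_2way_sketch(uint8_t state[16], const uint8_t t[16])
{
    for (int i = 0; i < 16; i += 2) {
        /* pack two 8-bit values into one 16-bit word (low byte first) */
        uint16_t s = (uint16_t)state[i] | ((uint16_t)state[i + 1] << 8);
        uint16_t w = (uint16_t)t[i]     | ((uint16_t)t[i + 1]     << 8);

        s ^= w;                        /* one 16-bit XOR instead of two 8-bit XORs */

        state[i]     = (uint8_t)(s & 0xff);
        state[i + 1] = (uint8_t)(s >> 8);
    }
}
```

On a 16-bit target such as the MSP430 the packing itself is essentially free when the state is loaded word-wise, which is where the saving in instructions and registers comes from.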
Substitution Layer: The substitution layer can be implemented with 8 × 8 look-up table accesses (i.e., memory accesses). The 16-bit MSP430 microcontroller supports both word-wise and byte-wise memory access (.B). Since the look-up table is 8-bit wise, byte-wise memory access is used; in particular, the 16-bit result is accessed twice, byte by byte. Detailed procedures for the substitution layer on the 16-bit MSP430 microcontroller are given in Figure 1.

Optimization of Counter Mode of Operation for the 16-bit Architecture: Part of the counter mode of operation can be skipped through pre-computation with the constant variables. Previous works have been devoted to improving the performance of counter mode through pre-computation [2,. The input of the counter mode of operation consists of a counter (32-bit) and a constant nonce (96-bit). One substitution layer, one diffusion layer, and two add-round-key operations for the 96-bit constant nonce part can be pre-computed; only the remaining part for the 32-bit counter is computed online. The optimized ARIA-CTR implementation was presented by. In Algorithm 3, the 2-way diffusion layer after the pre-computation is given. In Table 4, the register utilization for ARIA encryption on the target microcontroller is presented: the plaintext pointer, round key pointer, look-up table pointer, temporal variables, and plaintext are allocated to registers.

Diffusion and Substitution Layers: In, the 8 × 32 look-up table-based round implementation was presented. The look-up table combines both the diffusion and substitution layers for the 32-bit architecture. The diffusion layer A is a 16 × 16 binary matrix. For simplicity, the following notations are used: when S is the substitution layer, the round without key addition can be decomposed into the operations M1, P, and M∘S, and M∘S is performed by using 8 × 32 look-up tables, where M is a block diagonal matrix. As described above, the efficient implementation of each of these operations (M1, P, and M∘S) is important, and the optimal implementation is closely related to compact memory access on the target microcontroller. In this paper, we present the pipelined LUT access method.

Optimization of the M∘S matrix: The 8 × 32 table look-up is performed with 8-bit offsets. Since the word size of the ARM Cortex-M3 processor is 32 bits, four 8-bit look-up accesses are required for the full 32-bit computation. Detailed descriptions are presented in Figure 2. To extract each 8-bit value out of the 32-bit word, barrel-shifter, rotation, and masking operations are performed. The sequential pre-computed table-based approach performs the four table accesses consecutively. However, the read-and-write dependency between source and destination addresses leads to pipeline stalls in this approach, and pipeline stalls introduce timing delays. To resolve this performance penalty, the pipelined LUT access for the M∘S layer is proposed in Algorithm 4: the dependency between source and destination addresses is removed by re-aligning the instructions. The operation consists of three steps. In Steps 1-6, the offsets for the memory address pointers are set; this step generates four 8-bit offsets from the 32-bit word for four memory address pointers. In Steps 7-10, four memory accesses are performed with the four base address pointers, consecutively. In Steps 11-13, the results of the LUTs are accumulated together. Finally, the result (Y0) is returned in Step 14. In Figure 3, the computation order of the previous and proposed methods is compared: the previous method does not take advantage of pipelining, while the proposed method achieves it by re-ordering operations. The proposed approach ensures low latency by avoiding pipeline stalls.
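The following C-level sketch illustrates the grouped access pattern only in a structural sense; the actual proposal operates at the assembly level (Algorithm 4), and the tables T0-T3 used here are placeholders rather than the real ARIA 8 × 32 tables.

```c
/*
 * C-level sketch of the pipelined 8x32 LUT access pattern: offsets are
 * derived first, the four loads are issued back to back, and only then are
 * the loaded words combined, so no load result is consumed by the very next
 * instruction.  T0..T3 are assumed 256-entry 32-bit tables (placeholders).
 */
#include <stdint.h>

extern const uint32_t T0[256], T1[256], T2[256], T3[256];

uint32_t ms_column_sketch(uint32_t x)
{
    /* Steps 1-6: derive the four 8-bit offsets from the 32-bit word */
    uint32_t i0 =  x        & 0xff;
    uint32_t i1 = (x >> 8)  & 0xff;
    uint32_t i2 = (x >> 16) & 0xff;
    uint32_t i3 = (x >> 24) & 0xff;

    /* Steps 7-10: issue the four table loads consecutively */
    uint32_t w0 = T0[i0];
    uint32_t w1 = T1[i1];
    uint32_t w2 = T2[i2];
    uint32_t w3 = T3[i3];

    /* Steps 11-13: accumulate the loaded words; Step 14: return Y0 */
    return (w0 ^ w1) ^ (w2 ^ w3);
}
```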
Optimization of Counter Mode of Operation for the 32-bit Architecture: The previous optimization method for the counter mode of operation is not directly applicable to the ARM Cortex-M3 implementation, since the 32-bit ARM Cortex-M3 implementation employs the LUT method while the previous approach was based on the 8-bit S-box. For that reason, the CTR technique is re-designed for the LUT-based implementation. First, the M∘S layer is optimized: only the 32-bit counter part is calculated online for this layer. Second, the M1 layer is also optimized: only the computation involving the 32-bit counter part is performed. The detailed M1 layer is given in Algorithm 7; only three XOR operations are performed (e.g., EOR T3, T3, T1 and EOR T1, T1, T2).

3.3. Secure Implementation of ARIA

Software implementations of block ciphers should be secure against side-channel attacks. The proposed ARIA implementation is secure against the most popular and effective attack on software implementations, the timing attack (https://www.bearssl.org/constanttime.html, accessed date: 10 April 2021). In order to avoid the timing attack, the proposed implementations do not include conditional branch statements that depend on secret information. Regardless of the secret key, the implementation always executes the same operations, which ensures constant timing. Furthermore, since the target embedded processors do not provide cache memory, the memory access pattern is always regular; the attacker cannot exploit a cache timing attack in this case, and the implementation is therefore secure against the timing attack.

Evaluation

We evaluated the optimized ARIA implementations on both MSP430 (MSP430F1611) and ARM microcontrollers (Arduino DUE). Comparison results in terms of RAM (bytes), program code size (bytes), and execution timing (clock cycles) are presented in Tables 5 and 6 for MSP430 and ARM Cortex-M3, respectively. The proposed implementation is the first ARIA optimization on both MSP430 and ARM Cortex-M3 microcontrollers. The comparison is performed with previous implementations in. The 16-bit MSP430 implementations utilize the 8-bit pre-computation result stored in ROM (storing the results in RAM is also possible, but the target processor has a limited amount of RAM, so we only consider ROM). The utilization of code and RAM is similar to the previous implementation. However, the execution timing is significantly improved, by 92.2% compared to the previous work. The performance improvement mainly comes from the 2-way (i.e., 16-bit) computation of the diffusion layer and the optimized memory access. The proposed ARIA-CTR implementations show better performance than the proposed ARIA-ECB implementations by 6.6% (for 128-bit security level), 5.3% (for 192-bit security level), and 5.1% (for 256-bit security level), respectively. For the ARM microcontroller, the look-up table is stored in different storage types (i.e., ROM and RAM). The RAM/ROM-based implementations improved the execution timing by 10.09/14.13% (for 128-bit security level), 10.87/15.12% (for 192-bit security level), and 10.62/14.42% (for 256-bit security level), compared to previous implementations, respectively. The utilization of RAM is similar to previous implementations. The code size of the proposed implementation is smaller than that of previous work for the 128-bit ARIA implementation with ROM; for the 192-bit and 256-bit ROM cases, the code size of the proposed work is larger than that of the previous work.
The proposed RAM-based implementations achieved a smaller code size than previous works at all security levels. The RAM-based implementation achieved better performance but used more RAM storage than the ROM-based implementation. The proposed RAM/ROM-based ARIA-CTR implementations achieved better performance than the proposed ARIA-ECB implementations by 2.04%/4.00% (for 128-bit security level), 2.33%/3.47% (for 192-bit security level), and 1.54%/3.05% (for 256-bit security level), respectively.

Table 5. Performance evaluation of ARIA on MSP430 in terms of code size (bytes), RAM (bytes), and execution time (clock cycles per byte), where 8t, o, and c represent the 8 × 8 pre-computation-based implementation, pre-computation stored in ROM, and counter mode of operation, respectively. EKS, ENC, DEC, and SUM represent encryption key scheduling, encryption, decryption, and summation, respectively.

Conclusions

We presented a new compact implementation of the ARIA block cipher on microcontrollers, namely MSP430 and ARM Cortex-M3. We first optimized the implementation of the ARIA block cipher: 2-way computation of the diffusion layer and optimized memory access are presented targeting the MSP430 microcontroller, while pipelined memory access and optimized byte-wise rotation are presented for the ARM microcontroller. For the 16-bit word diffusion layer, two 8-bit words are combined to construct a 16-bit word, and the two 8-bit operations are performed in a single 16-bit operation of the 16-bit MSP430 microcontroller (i.e., a parallel approach). For the pipelined memory access, memory offset computation, memory access, and calculation are finely re-scheduled to suit the 3-stage pipeline, which avoids pipeline stalls in consecutive LUT accesses. Lastly, we proposed an efficient implementation of ARIA-CTR for both embedded processors; this method takes advantage of pre-computation over the constant nonce value. With this technique, we can pursue several future works. First, we can apply the proposed method to the efficient implementation of CTR; by combining both techniques, we may find further improvements for ARIA implementations targeting specific purposes. Second, recent work considered secure block cipher implementations on ARM Cortex-M4 microcontrollers; we can apply such secure implementation techniques to the proposed ARIA implementations for higher security. Third, we investigated block cipher implementations on low-end embedded processors; we will further study block cipher implementations on 64-bit AMD and 32-bit RISC-V processors.
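As a schematic recap of the ARIA-CTR pre-computation idea discussed above, the C sketch below caches the contribution of the constant 96-bit nonce and recomputes only the counter-dependent part per block. The helper functions and the struct layout are hypothetical stand-ins, not actual ARIA round code.

```c
/*
 * Schematic CTR pre-computation sketch: the 96-bit nonce never changes, so
 * its first-round contribution is computed once and cached; per block only
 * the 32-bit counter part is processed before the remaining rounds.
 */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t  cached_state[16];   /* pre-computed nonce + round-key contribution */
    uint32_t counter;
} aria_ctr_ctx;

/* Placeholder stub: in a real implementation, only the counter bytes would be
   substituted, diffused and keyed here. */
static void aria_first_round_counter_part(uint8_t state[16], uint32_t counter)
{
    state[12] ^= (uint8_t)(counter >> 24);
    state[13] ^= (uint8_t)(counter >> 16);
    state[14] ^= (uint8_t)(counter >> 8);
    state[15] ^= (uint8_t)(counter);
}

/* Placeholder stub for the unchanged remaining rounds. */
static void aria_remaining_rounds(uint8_t state[16], const uint8_t *round_keys)
{
    (void)state; (void)round_keys;
}

void aria_ctr_keystream_block(aria_ctr_ctx *ctx, const uint8_t *round_keys,
                              uint8_t out[16])
{
    /* start from the cached nonce-dependent state instead of recomputing it */
    memcpy(out, ctx->cached_state, 16);

    /* only the counter-dependent part of the first round is computed online */
    aria_first_round_counter_part(out, ctx->counter++);

    aria_remaining_rounds(out, round_keys);
}
```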
_base_ = [ '../../_base_/schedules/sgd_tsm_mobilenet_v2_100e.py', '../../_base_/default_runtime.py' ] log_config = dict( interval=1, hooks=[ dict(type='TextLoggerHook'), # dict(type='TensorboardLoggerHook'), ]) # model settings model = dict( type='Recognizer2D', backbone=dict( type='MobileNetV2TSM', shift_div=8, num_segments=8, is_shift=True, pretrained='mmcls://mobilenet_v2'), cls_head=dict( type='TSMHead', num_segments=8, num_classes=2, in_channels=1280, spatial_type='avg', consensus=dict(type='AvgConsensus', dim=1), dropout_ratio=0.5, init_std=0.001, is_shift=True), # model training and testing settings train_cfg=None, test_cfg=dict(average_clips='prob')) dataset_type = 'FatigueCleanDataset' data_root = '/zhourui/workspace/pro/fatigue/data/rawframes/new_clean' data_root_val = '/zhourui/workspace/pro/fatigue/data/rawframes/new_clean' facerect_data_prefix = '/zhourui/workspace/pro/fatigue/data/anns/new_clean' ann_file_train = '/zhourui/workspace/pro/fatigue/data/anns/new_clean/20211108_fatigue_lookdown_squint_calling_smoking_dahaqian.json' ann_file_val = '/zhourui/workspace/pro/fatigue/data/anns/new_clean/20211108_fatigue_lookdown_squint_calling_smoking_dahaqian.json' ann_file_test = '/zhourui/workspace/pro/fatigue/data/anns/new_clean/20211108_fatigue_lookdown_squint_calling_smoking_dahaqian.json' test_save_results_path = 'work_dirs/fatigue_r50_clean_with_squint_smoke_call_dahaqian/valid_results_testone.npy' test_save_label_path = 'work_dirs/fatigue_r50_clean_with_squint_smoke_call_dahaqian/valid_label_testone.npy' img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False) clip_len = 8 train_pipeline = [ dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=clip_len, out_of_bound_opt='repeat_last'), dict(type='FatigueRawFrameDecode'), dict(type='Resize', scale=(-1, 256)), dict(type='RandomResizedCrop'), dict(type='Resize', scale=(224, 224), keep_ratio=False), dict(type='Flip', flip_ratio=0.5), dict(type='Normalize', **img_norm_cfg), dict(type='FormatShape', input_format='NCHW'), dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]), dict(type='ToTensor', keys=['imgs', 'label']) ] val_pipeline = [ dict( type='SampleFrames', clip_len=1, frame_interval=1, num_clips=clip_len, test_mode=True, out_of_bound_opt='repeat_last'), dict(type='FatigueRawFrameDecode'), dict(type='Resize', scale=(-1, 256)), dict(type='CenterCrop', crop_size=224), dict(type='Normalize', **img_norm_cfg), dict(type='FormatShape', input_format='NCHW'), dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]), dict(type='ToTensor', keys=['imgs']) ] test_pipeline = [ dict( type='SampleFrames', clip_len=1, frame_interval=1, num_clips=clip_len, test_mode=True, out_of_bound_opt='repeat_last'), dict(type='FatigueRawFrameDecode'), dict(type='Resize', scale=(-1, 256)), dict(type='ThreeCrop', crop_size=256), dict(type='Normalize', **img_norm_cfg), dict(type='FormatShape', input_format='NCHW'), dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]), dict(type='ToTensor', keys=['imgs']) ] data = dict( videos_per_gpu=2, workers_per_gpu=4, pin_memory=False, train=dict( type=dataset_type, ann_file=ann_file_train, video_data_prefix=data_root, facerect_data_prefix=facerect_data_prefix, data_phase='train', test_mode=False, pipeline=train_pipeline, min_frames_before_fatigue=clip_len), val=dict( type=dataset_type, ann_file=ann_file_val, video_data_prefix=data_root_val, facerect_data_prefix=facerect_data_prefix, data_phase='valid', test_mode=True, test_all=False, 
pipeline=val_pipeline, min_frames_before_fatigue=clip_len), test=dict( type=dataset_type, ann_file=ann_file_test, video_data_prefix=data_root_val, facerect_data_prefix=facerect_data_prefix, data_phase='valid', test_mode=True, test_all=False, test_save_label_path=test_save_label_path, test_save_results_path=test_save_results_path, pipeline=test_pipeline, min_frames_before_fatigue=clip_len)) evaluation = dict( interval=5, metrics=['top_k_classes']) # optimizer optimizer = dict(type='SGD', lr=0.025, momentum=0.9, weight_decay=0.0001) # runtime settings checkpoint_config = dict(interval=5) work_dir = './work_dirs/debug/'
package com.supervise;

import com.supervise.schedule.QuartzScheduleInitizing;
import com.supervise.schedule.job.*;
import org.mybatis.spring.annotation.MapperScan;
import org.mybatis.spring.boot.autoconfigure.MybatisAutoConfiguration;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.support.SpringBootServletInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Import;

/**
 * Created by xishui on 2018/1/30 9:35 AM.
 *
 * @author xishui
 * Description:
 * Modify Record
 * ----------------------------------------
 * User | Time | Note
 */
@SpringBootApplication
@Import(value = {MybatisAutoConfiguration.class})
public class SuperviseFinanceApplication extends SpringBootServletInitializer {

    /**
     * Support startup from an external J2EE container.
     */
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(SuperviseFinanceApplication.class);
    }

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(SuperviseFinanceApplication.class, args);
        // Trigger loading of the scheduled (Quartz) jobs at startup.
        QuartzScheduleInitizing initizing = context.getBean(QuartzScheduleInitizing.class);
        if (null != initizing) {
            initizing.initDbSchedule();
        }
    }
}
"""
Problem:
    You are a painter! You want to paint a picture, but you do not have paint in enough
    colors. To keep things simple, positive integers are used to represent the different
    paint colors. You know the n colors the picture needs. You can buy some paint at the
    store, but the store cannot guarantee that it stocks every color, so you may have to
    mix paints yourself. Mixing paints of two different colors A and B produces paint of
    color (A XOR B) (newly produced paint can also be used for further mixing; XOR is the
    bitwise exclusive-or). In the spirit of thrift you want to buy as few paints as
    possible while still meeting the requirements, so you, a part-time programmer, need
    to compute the minimum number of paint colors that must be bought.

Input:
    The first line contains n, the number of colors the picture needs (1 <= n <= 50).
    The second line contains n numbers xi (1 <= xi <= 1,000,000,000), the required colors.

Output:
    Print the minimum number of paint colors that must be bought at the store. Note that
    a purchased color does not have to appear in the picture itself; it may be bought only
    to produce new colors.

Example 1
    Input
    3
    1 7 3
    Output
    3
"""


def getHighPosition(number):
    """Return the position of the highest set bit (1-based)."""
    count = 0
    while number > 0:
        number = number >> 1
        count += 1
    return count


n = int(input())
colors = sorted(set(map(int, input().strip().split())))
num = 0
while len(colors) > 2:
    lastIndex = len(colors) - 1
    bLastIndex = lastIndex - 1
    if getHighPosition(colors[lastIndex]) == getHighPosition(colors[bLastIndex]):
        # The two largest colors share their highest bit, so the largest one is
        # redundant: it equals the XOR of the second largest and temp.
        temp = colors[lastIndex] ^ colors[bLastIndex]
        colors.pop()
        if temp != 0 and temp not in colors:
            colors.append(temp)
            colors.sort()
    else:
        # The largest color owns a highest bit that no other remaining color has,
        # so it is independent of the rest and has to be bought.
        num += 1
        colors.pop()
print(num + len(colors))
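What the script above effectively computes is the size of the XOR linear basis (the GF(2) rank) of the required colors. The stand-alone C sketch below can serve as a cross-check of that interpretation; min_colors_to_buy is a hypothetical helper name, and 30 basis slots suffice because xi <= 10^9 < 2^30.

```c
/*
 * Cross-check sketch: the minimum number of colors to buy equals the size of
 * the XOR linear basis (GF(2) rank) of the required colors.
 */
#include <stdint.h>

int min_colors_to_buy(const uint32_t *colors, int n)
{
    uint32_t basis[30] = {0};        /* basis[b] holds a vector whose highest bit is b */
    int rank = 0;

    for (int i = 0; i < n; i++) {
        uint32_t x = colors[i];
        for (int b = 29; b >= 0 && x != 0; b--) {
            if (!((x >> b) & 1u))
                continue;
            if (basis[b] == 0) {     /* new independent vector found */
                basis[b] = x;
                rank++;
                break;
            }
            x ^= basis[b];           /* reduce and keep scanning lower bits */
        }
    }
    return rank;
}
```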
CT and 18-FDG PET/CT for evaluating treatment response in Hodgkin and non-Hodgkin lymphoma. Introduction The assessment of lymphoma response to treatment is based on imaging studies. Objective To correlate the assessment of lymphoma treatment response by computed tomography (CT) and by positron emission tomography/computed tomography (PET/CT). Method Cross-sectional, observational study, where records of patients with lymphoma under surveillance by CT and PET/CT were reviewed. Results The study population consisted of 43 patients with a mean age of 32.7 ± 22.4 years; 26 (60.5 %) had a diagnosis of Hodgkin's lymphoma and 17 (39.5 %) had non-Hodgkin lymphoma. By CT, 34 (79.1 %) were diagnosed with disease and nine (20.9 %) without disease. The criteria used to assess the response were radiologist experience in 39 (90.7 %) and RECIST 1.1 criteria in four (9.3 %). The diagnosis by 18-FDG PET/CT was no response to treatment or partial response-recurrence in 32 (74.4 %) and response to treatment in 11 (25.6 %), with PERCIST criteria in 13 (30.2 %) and Deauville criteria in 30 (69.8 %). When the diagnosis by CT versus 18-FDG PET/CT was compared, out of 11 patients with complete response on PET/CT, three had a similar CT diagnosis. Of the 34 patients with data consistent with disease diagnosed by CT, 26 had similar results by 18-FDG PET/CT (p = 0.54). Conclusion The value of lymphoma treatment response on CT does not agree with that obtained by 18-FDG PET/CT.
Topological queries in spatial databases Handling spatial information is required by many database applications, and each poses different requirements on query languages. In many cases the precise size of the regions is important, while in other applications we may only be interested in the TOPOLOGICAL relationships between regions, intuitively those that pertain to adjacency and connectivity properties of the regions, and are therefore invariant under homeomorphisms. Such differences in scope and emphasis are crucial, as they affect the data model, the query language, and performance. This talk focuses on queries targeted towards topological information for two-dimensional spatial databases, where regions are specified by polynomial inequalities with integer coefficients. We focus on two main aspects: (i) languages for expressing topological queries, and (ii) the representation of topological information. In regard to (i), we study several languages geared towards topological queries, building upon well-known topological relationships between pairs of planar regions proposed by Egenhofer. In regard to (ii), we show that the topological information in a spatial database can be precisely summarized by a finite relational database which can be viewed as a topological annotation to the raw spatial data. All topological queries can be answered using this annotation, called the topological invariant. This yields a potentially more economical evaluation strategy for such queries, since the topological invariant is generally much smaller than the raw data. We examine in detail the problem of translating topological queries against the spatial database into queries against the topological invariant. The languages considered are first-order on the spatial database side, and fixpoint and first-order on the topological invariant side. In particular, it is shown that fixpoint expresses precisely the PTIME queries on topological invariants. This suggests that topological invariants are particularly well-behaved with respect to descriptive complexity. (Based on joint work with C. H. Papadimitriou, D. Suciu and L. Segoufin.)
Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy This paper presents a new approach to construct neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with barrier Lyapunov function (BLF), a novel control design method is presented to systematically address the two critical issues in neural network (NN) control field: one is how to fulfill the compact set precondition for NN approximation, and the other is how to use varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure the NN inputs to remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process for adding new neurons to the system, resulting in a self-adjustable NN structure for better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and better performance can be obtained with less number of neurons as compared with traditional methods. The salient feature of the proposed method also lies in the continuity of the control action everywhere. Furthermore, the resulting control action is smooth almost everywhere except for a few time instants at which new neurons are added. Numerical example illustrates the effectiveness of the proposed approach.
import { ComprehendMedicalClientResolvedConfig, ServiceInputTypes, ServiceOutputTypes, } from "../ComprehendMedicalClient.ts"; import { DetectEntitiesV2Request, DetectEntitiesV2Response } from "../models/models_0.ts"; import { deserializeAws_json1_1DetectEntitiesV2Command, serializeAws_json1_1DetectEntitiesV2Command, } from "../protocols/Aws_json1_1.ts"; import { getSerdePlugin } from "../../middleware-serde/mod.ts"; import { HttpRequest as __HttpRequest, HttpResponse as __HttpResponse } from "../../protocol-http/mod.ts"; import { Command as $Command } from "../../smithy-client/mod.ts"; import { FinalizeHandlerArguments, Handler, HandlerExecutionContext, MiddlewareStack, HttpHandlerOptions as __HttpHandlerOptions, MetadataBearer as __MetadataBearer, SerdeContext as __SerdeContext, } from "../../types/mod.ts"; export type DetectEntitiesV2CommandInput = DetectEntitiesV2Request; export type DetectEntitiesV2CommandOutput = DetectEntitiesV2Response & __MetadataBearer; /** * <p>Inspects the clinical text for a variety of medical entities and returns specific * information about them such as entity category, location, and confidence score on that * information. Amazon Comprehend Medical only detects medical entities in English language * texts.</p> * <p>The <code>DetectEntitiesV2</code> operation replaces the <a>DetectEntities</a> * operation. This new action uses a different model for determining the entities in your medical * text and changes the way that some entities are returned in the output. You should use the * <code>DetectEntitiesV2</code> operation in all new applications.</p> * <p>The <code>DetectEntitiesV2</code> operation returns the <code>Acuity</code> and * <code>Direction</code> entities as attributes instead of types. </p> */ export class DetectEntitiesV2Command extends $Command< DetectEntitiesV2CommandInput, DetectEntitiesV2CommandOutput, ComprehendMedicalClientResolvedConfig > { // Start section: command_properties // End section: command_properties constructor(readonly input: DetectEntitiesV2CommandInput) { // Start section: command_constructor super(); // End section: command_constructor } /** * @internal */ resolveMiddleware( clientStack: MiddlewareStack<ServiceInputTypes, ServiceOutputTypes>, configuration: ComprehendMedicalClientResolvedConfig, options?: __HttpHandlerOptions ): Handler<DetectEntitiesV2CommandInput, DetectEntitiesV2CommandOutput> { this.middlewareStack.use(getSerdePlugin(configuration, this.serialize, this.deserialize)); const stack = clientStack.concat(this.middlewareStack); const { logger } = configuration; const clientName = "ComprehendMedicalClient"; const commandName = "DetectEntitiesV2Command"; const handlerExecutionContext: HandlerExecutionContext = { logger, clientName, commandName, inputFilterSensitiveLog: DetectEntitiesV2Request.filterSensitiveLog, outputFilterSensitiveLog: DetectEntitiesV2Response.filterSensitiveLog, }; const { requestHandler } = configuration; return stack.resolve( (request: FinalizeHandlerArguments<any>) => requestHandler.handle(request.request as __HttpRequest, options || {}), handlerExecutionContext ); } private serialize(input: DetectEntitiesV2CommandInput, context: __SerdeContext): Promise<__HttpRequest> { return serializeAws_json1_1DetectEntitiesV2Command(input, context); } private deserialize(output: __HttpResponse, context: __SerdeContext): Promise<DetectEntitiesV2CommandOutput> { return deserializeAws_json1_1DetectEntitiesV2Command(output, context); } // Start section: command_body_extra // End section: 
command_body_extra }
Challenges Facing Fish Farming Development in Western Kenya This paper examines the challenges facing fish farming development in western Kenya. A sample survey of 192 farmers representing the fish farming community in the area was used. The study revealed that high prices of fish feed, declining fish prices, and lack of finance were the top-ranking challenges facing fish farmers in the area. A cross-sectional and longitudinal survey research design was adopted for the study. Stratified sampling was used to select fish farming households, and key informants were selected through purposive sampling. Data were gathered through multiple methods, drawing on both primary and secondary sources. Data analysis made use of descriptive statistics, with numerical and non-numerical summaries of the data. Chi-square tests were used to test the independence between variables, and Spearman's rank-order correlation coefficient was used to test the relationship between fish farmers' rankings of the various variables affecting them. The findings were that fish farmers faced several management problems, including high cost, unavailability and low quality of feeds, drying up of ponds during drought, lack of fingerlings, flooding, siltation of ponds, pond maintenance, and poor security. The benefits of the study are that the government, through KEBS, can be prompted to carry out frequent spot checks on feeds supplied to agrovets to ascertain their quality, and that fish farmers will adopt best management practices in fish farming in order to improve their household food security and livelihoods through increased income. The study therefore suggests that the government, through KEBS, should frequently carry out spot checks on feeds supplied to agrovets to ascertain their quality. There is also a need for fish farmers to carry out proximate analysis for crude protein content to ascertain the quality of the feeds to be used. Fish farmers should additionally be trained in feed formulation and fish breeding to maintain a constant supply and quality and to save on costs for both feeds and fingerlings.
// A function to sort an array (bubble sort) and then return its median value.
// The array length is passed explicitly instead of relying on a global tempSize.
int findMedian(int arr[], int tempSize)
{
    // simple bubble sort in ascending order
    for (int i = 0; i < tempSize; i++)
    {
        for (int j = 0; j < tempSize - 1; j++)
        {
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j + 1];
                arr[j + 1] = arr[j];
                arr[j] = temp;
            }
        }
    }
    // middle element (upper median for even-sized arrays)
    return arr[tempSize / 2];
}
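A hypothetical usage example for the function above; the sample values are made up for illustration.

```c
// Hypothetical usage of findMedian() with made-up sample data.
#include <stdio.h>

int findMedian(int arr[], int tempSize);   // defined above

int main(void)
{
    int samples[] = {7, 1, 9, 4, 3};
    int n = (int)(sizeof(samples) / sizeof(samples[0]));

    printf("median = %d\n", findMedian(samples, n));   // prints: median = 4
    return 0;
}
```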
A short educational intervention diminishes causal illusions and specific paranormal beliefs in undergraduates Cognitive biases such as causal illusions have been related to paranormal and pseudoscientific beliefs and, thus, pose a real threat to the development of adequate critical thinking abilities. We aimed to reduce causal illusions in undergraduates by means of an educational intervention combining training-in-bias and training-in-rules techniques. First, participants directly experienced situations that tend to induce the Barnum effect and the confirmation bias. Thereafter, these effects were explained and examples of their influence over everyday life were provided. Compared to a control group, participants who received the intervention showed diminished causal illusions in a contingency learning task and a decrease in the precognition dimension of a paranormal belief scale. Overall, results suggest that evidence-based educational interventions like the one presented here could be used to significantly improve critical thinking skills in our students. Introduction The development of successful debiasing strategies has been argued to be one of the most relevant contributions that Psychology could make to humanity. Debiasing techniques are aimed to eliminate or, at least, diminish the frequency or intensity of the cognitive biases that populate our reasoning. Everyday tasks are commonly based on heuristic processes or mental shortcuts that enable fast and computationally low demanding decisions. However, these heuristics sometimes produce cognitive biases, that is, systematic errors that distance us from normative reasoning and lead us to erroneous conclusions and suboptimal decisions. Cognitive biases have been specifically related to various threats to human welfare including the acquisition and persistence of superstitious and pseudoscientific beliefs ; the emergence of group stereotypes and prejudices ; ideological extremism ; medical diagnostic errors ; or spurious therapeutic effectiveness. Furthermore, they might also contribute to psychopathological conditions such as social phobia, depression, eating disorders or to the development of psychotic-like experiences in healthy adults. PLOS The extensive literature investigating the dangers posed by cognitive biases has encouraged research aimed to determine the circumstances under which these biases develop. It has been shown that situations which promote analytical thinking, such as the use of difficult-to-read fonts or presenting information in a foreign language, diminish the effects of cognitive biases. Nevertheless, specific evidence-based interventions for debiasing that can be implemented as educational tools are still sparse. Overcoming cognitive biases is not trivial because these biases often defy common sense and require to put our intuitions into question. Furthermore, debiasing efforts usually find resistance because people do not like being exposed to their own flaws and the advantages of normative strategies are not obvious to them. Examples of recent successful debiasing interventions include perspective taking techniques, which have been shown to produce durable reductions of intergroup prejudices, and probability training, which has been shown to yield positive effects to very complex reasoning activities such as geopolitical forecasting. Promising results have also been observed in relation to interventions aimed to reduce causal illusions, which will be the main focus of this paper. 
Causal illusions, or illusions of causality, refer to the erroneous perception of a causal relationship between two events when no such causal relationship exists (note that we also include what previous literature has sometimes referred to as "illusion of control" under the broader term "causal illusion" or "illusion of causality"). It has been suggested that this bias could be an important contributing factor to the development and maintenance of superstitious and pseudoscientific beliefs. Causal illusions are typically studied in the laboratory by means of a standard contingency learning task. In this task participants are asked to evaluate a potential causal relationship between two events, for example the effectiveness of a new drug, the potential cause, for curing a fictitious disease, the outcome of interest. With this goal in mind, participants are typically presented with medical records from several fictitious patients, presented one by one, that either took the drug or not, and they observe whether each patient recovered from the fictitious disease or not. Importantly, when the situation is set up by the experimenters so that the patients are healed irrespective of the administration of the drug or not (i.e., the probability of healing is equal among patients taking and not taking the drug), sometimes participants incorrectly conclude that the drug is producing the occurrence of the outcome. This is known as a causal illusion because participants illusorily perceive the drug (the potential cause) as causing the recovery of the patients (the outcome). This illusion is facilitated when the probability of the outcome is high (the outcome density effect), and when the probability of the potential cause is high (the cue density effect), leading to particularly intense causal illusions when both probabilities are high. Moreover, it has been shown that in situations where the percentage of healings is high and participants are allowed to choose between giving or not giving the drug, they are inclined to administer the drug to a majority of the patients, thereby tending to expose themselves to more patients that take the drug than to patients that do not take it. The presence of this spontaneous search strategy is especially relevant because, as we have already noted, the increase of the percentage of trials in which the potential cause is present fuels the intensity of the causal illusion that they develop. In everyday life, the situations where miracle pills and unproven therapies are perceived to be successful can be linked to circumstances that facilitate the emergence of causal illusions. These ineffective products and therapies are usually applied to conditions with high rates of spontaneous remission, such as, for instance, back pain. As we have already explained, high rates of the desired outcome (i.e., a high probability of spontaneous improvement or relief from the illness) increase the tendency of the user to develop causal illusions (i.e., the erroneous perception of the product being effective). The illusory perception of efficacy, in turn, can foster the use of the product and hence strengthen false beliefs that are propagated among others who end up sharing the illusion. With this in mind, Barberia et al. conducted a study with adolescents. Volunteers in the intervention condition participated in a workshop in which they were offered direct experience with a bogus miracle product.
After being fooled into believing that the product had improved their physical and cognitive abilities in different tasks, the participants were debriefed and they received a tutorial on experimental methods including advice on how to reliably establish causality. Compared to a control group who had not received the intervention, participants in the intervention group showed a weaker causal illusion in a standardized contingency learning task. Moreover, the authors suggested that the decrease in the illusion could be, at least in part, due to a change in the behavior of the participants that had received the intervention, as they exposed themselves to fewer cause-present trials (they administered the drug to fewer patients and, accordingly, they could observe the outcome in more patients not taking the drug). Despite the evident value of these results, it could be argued that the intervention and measurement procedures were too closely aligned, which casts doubt on the transferability of the acquired knowledge. Moreover, it remains unclear whether the effects of the intervention would extend to more general beliefs that seem to be associated with causal illusions, such as paranormal beliefs. In the current study we present a new example of a successful educational intervention aimed to reduce the impact of cognitive biases on causal reasoning as well as to encourage a more critical analysis of paranormal beliefs. Our present intervention was specifically designed to overcome two problems that have been noted to undermine the success of debiasing interventions: the "bias blind spot", which refers to the tendency to not accept that one's perspective might be biased while being able to recognize biases in the judgment of others, and the lack of perceived personal relevance of the cognitive biases. In this respect, we started the intervention with a staging phase that induced cognitive biases in our participants so as to demonstrate how easily we can all be tricked into committing these thinking errors. Thereafter, we provided various examples of everyday situations in which the presented biases play a role in order to illustrate the extent to which cognitive illusions are important to our daily lives. Our debiasing techniques can be situated among cognitive strategies. In this sense, we applied a training-in-bias approach focusing on two important cognitive phenomena, namely the Barnum effect and the confirmatory strategy elicited by the 2-4-6 task. The Barnum or Forer effect refers to the tendency to accept and rate as highly accurate vague personality descriptions that are presented as specific and personalized but are actually so common that they can be applied to almost anyone. We considered that the Barnum effect would be strongly and easily induced in most of the participants, which would help overcome the "bias blind spot", and that inducing this effect was also appropriate in order to enhance the perceived personal relevance of cognitive biases, as it is easily applied to everyday situations. On the other hand, the 2-4-6 task has been shown to elicit a confirmatory searching strategy. We considered that presenting this task was especially relevant because, as previously described, biased information search has been proposed to play a role in causal illusions.
Specifically, when participants are presented with a potential causal relationship in the contingency learning task, they tend to test this relationship by choosing to preferentially observe cases in which the potential cause is present, which can be considered a confirmatory search strategy. Given that mere awareness that a cognitive flaw exists is not enough to overcome its effects, our intervention was complemented with a training-in-rules methodology focused on pointing out the "consider the opposite" approach. In situations where a person is required to make a judgment, this strategy consists of searching for possible reasons why an initial consideration or hypothesis might be wrong, as an effective way to diminish confirmatory tendencies by favoring the discovery and evaluation of new information. We conducted our study with groups of Psychology undergraduates. The effect of the intervention on causal illusions was assessed by means of a standardized contingency learning task. Moreover, we added a measure of paranormal beliefs in order to investigate the generalizability of the observed effects to different domains of superstition. A previous study found that causal illusions generated in a contingency learning task tend to correlate with some types of paranormal beliefs. If, in line with previous results, our debiasing intervention were able to diminish causal illusions, we could speculate that it might also impact these correlated beliefs. In sum, we expected our intervention to influence the learning strategies of our students and their causal judgments, promoting a more critical approach to the discovery of new information and the reconsideration of a priori beliefs.

Participants

A total of 106 Psychology undergraduates took part in the study (86 females). Forty-seven students (mean age 21.57, SD 3.48, 36 females) received the intervention condition and 59 students received the control condition (mean age 20.83, SD 2.65, 50 females). The study was performed in a regular class of the Psychology degree, in the context of a teaching initiative aimed to promote scientific thinking among students. Importantly, prior to the intervention participants were only informed that the initiative aimed to promote transversal competences, but not that it was specifically aimed at practicing scientific thinking. All students that attended the class participated in the intervention and its assessment. However, students could decide, at the end of the class session, whether or not they wanted to consent to their data being used anonymously for research purposes. Only the data from students that gave written consent are presented. The study, which complied with APA ethical standards, was approved by the ethics committee of the University of Barcelona (Comissió de Bioètica de la Universitat de Barcelona).

Procedure

The intervention and assessment (see below) were carried out in a 90 min session included in regular courses of the Psychology degree. We conducted three experimental sessions with three different groups of students. Participants in each session were randomly distributed to two different rooms, corresponding to the intervention or control conditions, respectively. The rooms were equipped with one desktop computer per student.
The students in the intervention condition received the educational intervention before assessment of their causal illusion and paranormal beliefs, whereas, for the students in the control condition, the assessment was carried out first, and then, due to ethical considerations, the intervention was also provided. The same instructor conducted the intervention condition across the three sessions. Simultaneously, other instructors conducted the control condition in the other room. Note that differences due to the involvement of different instructors in the intervention and control conditions cannot be expected to influence our results because the assessment in the control group was presented before any intervention, and the instructions for the assessment tasks were provided in written form for both intervention and control conditions.

Intervention

The educational intervention consisted of a staging phase followed by a debriefing phase. The staging phase started with the bogus explanation of a psychological theory according to which a fine-grained personality description can be obtained from the analysis of performance in low-level cognitive tasks. Then the participants were asked to carry out two computer tasks related to this theory. We explicitly prompted students to work individually during the tasks, focusing on their own computer screens. The initial screen requested participants to state their age and gender. The first task, inspired by an on-line quiz (http://braintest.sommer-sommer.com), was presented as a personality assessment and consisted of a point-and-click version of the Stroop test as well as a pattern selection test in which the participant simply had to choose which of three different arrangements of colored geometrical figures was most similar to a given target. After completing these two simple tests the computer supposedly analyzed the data and provided an allegedly individualized personality description. The report consisted of an adaptation of most of the original sentences used by Forer, although the order of the sentences was randomized for each participant in order to hinder identification of the hoax in case the students could see another participant's description. The descriptions were gender-adapted in order to increase the degree of perceived personalization of the description. After they read their personal report, the participants were asked to indicate on a 0 to 100 scale "to what extent you think the test has been effective detecting how you are". The second task of the staging phase of the intervention was presented as a test of reasoning abilities and was a computerized version of the 2-4-6 task adapted from http://www.devpsy.org/teaching/method/confirmation_bias.html. Participants were asked to identify a rule that applied to triplets of numbers. They were first given the sequence 2-4-6 as an example of a triplet that satisfied the rule. Then the volunteers had the opportunity to generate new triplets to test whether they followed the rule or not. After they entered each triplet, the computer provided feedback on whether the triplet fit the rule or not. Participants could continue testing triplets until they were sure of the exact rule (they could test a maximum of 20 triplets). After each triplet-testing trial the participants were asked to state the rule they had in mind together with their confidence in the correctness of their hypothesized rule. The participants were, hence, free to test different rules throughout the task.
However, they were not told whether their rule was correct or not until the debriefing phase of the intervention. In this task, participants typically form a specific hypothesis about the rule such as "numbers increasing by twos" and then tend to generate triplets that follow the rule they are testing. This positive testing strategy is ineffective in this specific task because the original rule is more general (i.e. "increasing numbers"). Alternatively, a "consider the opposite" strategy, here testing examples that do not satisfy the rule, leads to the formation of new, broader hypotheses, and eventually to the discovery of the correct one (note that we assume throughout the paper that the positive testing strategy involves a confirmation bias, although this assumption has been debated in the literature). The debriefing phase of the intervention started after all the participants had finished the two tasks. In this phase, we provided theoretical explanations of the Barnum effect and of the typical performance in the 2-4-6 task. We first introduced the original study by Forer together with the personality description used by him. At this point the students realized that it was the same description they had received, and we informed them that the initial theory and the personality test were fake. We then discussed the results found by Forer in his study and the students were free to intervene giving their impressions. Afterwards, we moved on to the Wason study and illustrated both the typical confirmatory strategy used in the 2-4-6 task and the more effective "consider-the-opposite" strategy (examples taken from http://www.devpsy.org/teaching/method/confirmation_bias.html). This was completed with a description of the confirmation bias, defined as the tendency to search for, select, or interpret confirmatory information in a partial manner that leads to the acceptance of a priori beliefs or expectations while ignoring alternative information that could lead to their rejection. Finally, we explained how these cognitive biases are involved in situations like reading your horoscope in a magazine or taking a graphological assessment, as well as in false beliefs like the full moon effect or questionable effects such as the alleged relation between articular pain and relative humidity.

Assessment

The assessment phase consisted of two different parts. First the participants completed a contingency learning task, and second they answered a paranormal beliefs questionnaire. As we have already explained, in a standard contingency learning task participants are asked to assess a potential causal relation, in our case between taking a drug and relief from a disease. Our volunteers performed a computer task in which they were asked to take the role of a medical doctor whose goal was to determine whether a given drug was effective or not. They were sequentially presented with 40 fictitious cases of patients that suffered a fictitious disease. In each trial they had the opportunity to administer the drug to the patient. Then the participants were informed whether the patient was healed or not. The healings occurred following a pre-programmed randomized sequence, so that 6 out of every 8 patients were cured, both among the fictitious patients receiving the drug and among those that did not receive it. That is, the drug did not increase the probability of healing and was therefore ineffective. The rate of relief was programmed to be high (.75), in order to simulate a condition that promotes the development of causal illusions.
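To make the structure of this task concrete, the sketch below generates a trial schedule of the kind just described: 40 patients, with exactly 6 healings in every block of 8 trials, decided independently of whether the drug is given, so the true drug-outcome contingency is zero. The function name, block structure, and seed handling are illustrative assumptions, not the actual materials used in the study.

import random

def programmed_outcomes(n_trials=40, healed_per_block=6, block_size=8, seed=None):
    """Build a pre-programmed healing sequence: in every block of `block_size`
    trials, exactly `healed_per_block` patients recover, regardless of whether
    the participant administers the drug on that trial."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials // block_size):
        block = [True] * healed_per_block + [False] * (block_size - healed_per_block)
        rng.shuffle(block)
        outcomes.extend(block)
    return outcomes

# The overall healing rate is 0.75 no matter what the participant decides to do.
schedule = programmed_outcomes(seed=1)
print(sum(schedule) / len(schedule))  # -> 0.75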
The anticipated default strategy (i.e., the one expected in the participants from the control group) would involve administering the drug frequently and, as a consequence, being exposed to more cause-present than cause-absent trials, hence developing a causal illusion, as has been shown in previous studies. Once participants had gone through the full set of patients, they were asked to evaluate the effectiveness of the potential cause (the drug) in producing the outcome of interest (healings) on a scale ranging from 0 (not effective at all) to 100 (totally effective). This judgment of causality was our main dependent variable. Given that the relationship was, in fact, nonexistent, higher judgments were interpreted as a stronger causal illusion formed by the participant. Regarding paranormal beliefs, we used the Spanish adaptation of the Revised Paranormal Beliefs Scale, which consists of 30 items answered on a Likert scale from 1 ("totally disagree") to 7 ("totally agree"). The scale provides a global score of paranormal beliefs as well as a score in eight different subscales (see Table 1 of the cited adaptation for the items that we included in each subscale): witchcraft, psi, traditional religious beliefs, spiritualism, extraterrestrial life and actual visits, precognition, superstition and extraordinary life forms. This version of the scale has been standardized with a sample of undergraduate students and shows high reliability (Cronbach's alpha 0.91). Following previous work with this scale, item 23 was not included in the calculation of our scores. Note, also, that we substituted the wording of item 20, "There is life on other planets", by "There is intelligent life on other planets". When a participant failed to answer a specific item, her score (either the global score or that of any subscale) was calculated by averaging the rest of the items.

Results

The statistical analyses were performed using JASP. We performed Bayesian t-tests using JASP's default Cauchy prior width, r = 0.707. We interpreted Bayes factors following standard classification guidelines. We constructed the plots by means of the YaRrr! package in R. The dataset is available at https://osf.io/vq5b7/. Before we analyze the effectiveness of the intervention, it is worth looking at the results of the Barnum task. This activity was performed in both conditions at the beginning of the intervention, therefore its results cannot be used as a measure of the effectiveness of the intervention. However, the results are informative of the degree to which the Barnum effect was present in our sample. On a 0 to 100 scale our participants evaluated the accuracy of the bogus description with a mean rating of 83.85 points (SD = 12.77) in the intervention group and a mean rating of 78.62 points (SD = 20.62) in the control group. As expected, the effect of condition (intervention vs. control) was not significant, t = 1.518, p = .132, d = 0.298. A two-sided Bayesian independent samples t-test (intervention ≠ control) suggested anecdotal evidence for the null hypothesis, BF10 = 0.577. Fig 1 shows the results of the contingency learning task used to measure the amount of causal illusions developed by the participants. As can be seen, participants in the intervention group developed a weaker causal illusion, as shown by their causal judgments being closer to zero than those of the control group. A one-sided t-test for independent samples (intervention < control) over the causal judgments showed a significant effect of the intervention, t = -3.313, p < .001, d = -0.648.
A one-sided Bayesian independent samples t-test suggested very strong evidence in favor of the alternative hypothesis, with a Bayes factor of BF10 = 47.69. This indicates that our results are 47.69 times more likely under the hypothesis that ratings in the intervention group are lower than those in the control group. Fig 2 summarizes the participants' search strategy during the contingency learning task. Specifically, it shows the percentage of trials in which participants decided to administer the fictitious drug to the patients, that is, the percentage of cause-present trials they exposed themselves to. As anticipated, participants in the control condition adopted the expected default strategy (i.e. a high drug administration rate), as they gave the drug to more than 50% of the patients, one-sided t-test t = 3.840, p < .001, d = 0.500. This strategy was not shown by the participants in the intervention condition, t = -1.952, p = .971, d = -0.285. The Bayesian analogue analysis indicated extreme evidence favoring the hypothesis that participants' percentage of drug administration was higher than 50%, BF10 = 157.1, in the control group. In contrast, in the intervention group there was strong evidence favoring the hypothesis that participants did not administer the drug more than 50% of the time, BF10 = 0.057. Furthermore, a one-sided t-test for independent samples over the percentage of trials in which participants administered the drug confirmed the hypothesis that participants in the intervention condition administered the drug less frequently than those in the control condition, t = -4.014, p < .001, d = -0.785. The corresponding Bayesian analysis showed extreme evidence in favor of this hypothesis, BF10 = 395.2. Previous studies have shown that manipulating the probability of the potential cause, in our case, the proportion of cases in which the drug was administered, impacts the intensity of causal illusions. Specifically, the higher the proportion of cause-present trials, the stronger the causal illusion developed. Since the intervention and control groups differed in the percentage of drug administration, it is plausible to assume that differences in the strength of the causal illusion between groups might be predicted by this variable. With this in mind, we performed a regression analysis in order to assess the extent to which the effects of our intervention on causal judgments could be associated with differences in drug administration rates. Moreover, following the suggestion of a reviewer, we also decided to introduce the experienced contingency as an extra predictor in the analysis. Given that participants could decide in which trials they wanted to administer the medicine or not, the actual contingency experienced by each participant, defined as the difference between the probability of the outcome in the presence and absence of the potential cause, could depart from the programmed contingency of zero. We, thus, conducted a regression analysis including condition (intervention, control), percentage of drug administration, and experienced contingency as independent variables and causal judgments as the dependent variable. Results showed a significant effect of percentage of drug administration (β = .653, p < .001) but no significant effect of condition (β = .064, p = .413) nor of experienced contingency (β = .022, p = .784). These results suggest that the intervention might have impacted causal judgments by decreasing the tendency of the participants to administer the drug.
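As an illustration of the two behavioral predictors used in this regression, the sketch below computes, for a single participant's trial record, the percentage of drug administration and the experienced contingency, ΔP = P(outcome | cause) - P(outcome | no cause). The function and variable names are our own illustrative choices, not the authors' analysis code, and the toy data are invented.

def experienced_contingency(drug_given, healed):
    """Delta-P = P(healed | drug) - P(healed | no drug), computed from one
    participant's sequence of trials (two parallel lists of booleans)."""
    with_drug = [h for d, h in zip(drug_given, healed) if d]
    without_drug = [h for d, h in zip(drug_given, healed) if not d]
    p_with = sum(with_drug) / len(with_drug) if with_drug else 0.0
    p_without = sum(without_drug) / len(without_drug) if without_drug else 0.0
    return p_with - p_without

def drug_administration_rate(drug_given):
    """Percentage of trials on which the participant chose to give the drug."""
    return 100.0 * sum(drug_given) / len(drug_given)

# Toy example: 8 trials, 6 healings, drug given on 6 of them.
drug = [True, True, True, True, True, True, False, False]
heal = [True, True, True, True, False, True, True, False]
print(drug_administration_rate(drug))        # 75.0
print(experienced_contingency(drug, heal))   # about 0.33, departing from the programmed 0.0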
Regarding our measure of paranormal beliefs, a one-sided independent samples t-test (intervention < control) showed no significant effect of the intervention on the global scores of the Revised Paranormal Beliefs Scale (intervention: mean 2.26, SD 1.01; control: mean 2.33, SD 0.95; t = -0.396, p = .346, d = -0.078). The Bayesian version of the analysis showed moderate evidence favoring the null hypothesis, BF10 = 0.288. Separate one-sided analyses of the scores corresponding to the different test subscales (intervention < control) showed a significant effect of the intervention in the precognition subscale, t = -2.616, p = .005, d = -0.515, an effect that survived Bonferroni correction for multiple comparisons (adjusted α = .006). None of the other seven subscales reached the significance threshold (ps > .26). Accordingly, a one-sided Bayesian independent samples t-test analysis returned a Bayes factor of BF10 = 8.247 for the Precognition subscale (which can be considered moderate evidence favoring the alternative hypothesis that the intervention group presented lower Precognition scores than the control group). In contrast, the Bayes factors (BF10) for the remaining subscales did not favor the alternative hypothesis.

Discussion

The goal of this study was to develop a debiasing intervention aimed to diminish the influence of cognitive biases over everyday reasoning and to promote a critical perspective in relation to pseudoscientific and superstitious beliefs. We conducted our intervention with Psychology undergraduates, who showed a classic Barnum effect with a mean description acceptance rating over 80 points out of 100. We thus replicated the results obtained in the original experiment by Forer, who registered a mean rating of 4.3 out of 5. These results suggest that even higher education students are susceptible to accepting pseudoscientific claims. As we have already noted in the Introduction, we decided to use causal illusions as the main measure for this study because biases affecting causal inference are assumed to be at the core of pseudoscience and superstition. Barberia et al. observed a reduction of causal illusions in volunteers that had been specifically trained in the rationale of scientific inferences about causal relations, focusing on the concept of contingency and the need for appropriate control conditions. In the present study, we aimed to test whether a more general approach without explicit training in causal relation identification could yield a similar effect. We combined training-in-bias and training-in-rules techniques by evoking two well-known cognitive biases in the volunteers and explaining how they influence our judgments and/or decisions in relation to different topics. This procedure allowed us to point out how easily cognitive illusions can be elicited and to raise awareness of their relevance for everyday life, thus addressing known threats to debiasing interventions such as the bias blind spot and the lack of perceived personal relevance. Furthermore, it also provided the opportunity to introduce the volunteers to the general idea of maximizing the availability of information before a given decision situation by means of "consider the opposite" strategies. Our intervention decreased the illusion of causality, as evidenced by the lower causal ratings provided by the intervention group in the contingency learning task in comparison to the control group.
Moreover, the results of the regression analysis indicate that the reduction of the causal illusion could be mainly attributable to a decrease in exposure to the potential cause and, accordingly, to an increase in the chances of observing the outcome during the now more frequent cause-absent trials. That is to say that volunteers in the intervention group might have developed the causal illusion to a lesser extent because they tended to generate more cause-absent trials than participants in the control group. We argue that this approach results from the application of a general disconfirmatory or "consider the opposite" strategy presented in the intervention to a specific causal context. During the explanation of the 2-4-6 task we pointed out that in this context a positive testing strategy is unsuccessful, whereas testing examples that do not follow the initial rule may lead to the consideration of new hypotheses and, finally, the discovery of the correct rule. In our contingency learning task, generating cases in which the cause is present by giving the drug to the patient is analogous to the positive testing strategy used in the 2-4-6 task because it involves a preference to search for cases in which the outcome is expected to occur if the initial hypothesis (i.e. "the drug is effective") were true. Conversely, the generation of cause-absent trials is equivalent to testing triplets that do not follow the hypothesized rule because it implies searching for examples where the outcome is expected not to occur in case the drug is responsible for healing. Finally, we also included a questionnaire of paranormal beliefs in order to test whether the effect of our intervention extended to the participants' credences in relation to these beliefs. Our analyses showed that overall scores were unaffected by the treatment. However, the results showed moderate evidence suggesting that the intervention could specifically impact scores on one of the subscales of the questionnaire, the Precognition subscale. This subscale refers to abilities to predict the future via paranormal means and comprises items referring to horoscopes and astrology, among other topics. In our intervention, horoscopes appeared as an example aimed at illustrating the influence of cognitive biases in our lives. Horoscope predictions of personality and future events usually rely on vague descriptions that can be applied to a wide range of people, a key aspect in the acceptance rates of Barnum-like descriptions. Moreover, these descriptions tend to include high proportions of favorable statements, eliciting confirmation bias-related phenomena such as the self-enhancement effect. The fact that we explicitly mentioned this kind of example might have been responsible for the observed result in relation to the precognition subscale. Nevertheless, the effect of our intervention failed to generalize to other dimensions of paranormal belief that were not directly addressed during the intervention. One limitation of this study is that our results rely exclusively on between-participants comparisons. In this sense, although students were randomly assigned to one of the two conditions, we cannot totally rule out initial differences between participants in the control and intervention groups. This limitation could be overcome in future research by carefully designing studies that allow collecting pre- and post-intervention measures from the same participant.
A second limitation relates to the complex nature of our intervention, comprising the direct experience and subsequent explanation of both the Barnum effect and the confirmation bias in relation to the 2-4-6 task, as well as the discussion of the potential implications of these effects on everyday life. With our design we cannot disentangle which, if not all, of the components of the intervention are responsible for its beneficial effects. Future designs isolating each of these components could shed light on this issue and potentially contribute to the design of more efficient interventions. In conclusion, with this study we move forward in the direction started by previous research aimed at providing evidence-based educational tools to overcome the detrimental effects of cognitive biases. Our results suggest that an evidence-based educational intervention such as the one we present here could be used to significantly improve scientific thinking skills in adults, decreasing their probability of developing causal illusions that may underlie several misbeliefs.
The present invention relates to optical scanning and optical character recognition systems, and more particularly, to means for converting documents into electronic data which can be extracted and manipulated. Paper-intensive businesses and governmental agencies, e.g., insurance claim processing companies, credit card companies and taxing authorities, require large staffs and a great amount of physical plant. Also, they tend to operate inefficiently and are prone to make numerous errors. This leads to large operating expenses and customer dissatisfaction. A paper-intensive company may receive tens of thousands of documents a day. This type of company can be of two general types, i.e. a transaction company or an archive company, and how a company handles the paper it receives will depend on the type of company it is. A transaction company must obtain data from the documents immediately. Then the data is transferred to a series of people who must act on it. The sequence of those transactions is usually well known. Once the transactions are complete, the data may be stored. While it must be possible to recover the stored data, only a small portion of it is ever likely to be retrieved. Also, the reasons for retrieval are random. A medical claims processing company is an example of a transaction company. An archive company stores the information as soon as it is received and without processing. It may, for example, microfilm the received documents as a form of storage. A large percentage of the information stored by the archive-type company will be retrieved, but the reasons for the requests for information will be well known. A government agency that keeps birth certificates is an example of an archive company. Archive companies can store data slowly without incurring much customer dissatisfaction, but transaction companies must complete their transactions quickly. Archive companies must be able to quickly retrieve all of their records, but transaction companies only need quick access to a small proportion of their records. For example, people expect to be able to get a copy of a birth certificate decades old in a few minutes, but would not expect a company to have fast access to a 1-year-old medical claim. There will not be many requests for information on medical claims that have already been processed, but there are continuing requests for birth records. The tens of thousands of documents received by a transaction-type company may be of various types. These must be sorted and routed to the proper person for action. Thus, in a typical mailroom, there must be a large number of people who are trained to recognize the type of document received and to direct the document to the correct location. Also, the sheer volume of paper makes it necessary to have a very large mailroom. This mailroom is usually an unattractive place for workers, being filled with seemingly endless stacks of papers. This leads to lowered morale and numerous errors. Such a physical plant is also costly. Some of the problems of sorting incoming documents can be reduced by insisting on the use of standardized forms, especially those that are color-coded. However, if the customer mistakenly uses the wrong form, it can be directed to the wrong location and can be lost in the system for days or weeks among millions of other documents. Once a document is sorted, it is then necessary to physically move it from one location to another for processing. This again requires numerous personnel and some amount of space.
Also, the deliveries are slow, subject to error, and unsightly. In addition, this may be a very inefficient step. There may be people in one location capable of processing the document who are not busy, but that location may be so remote from the sorted documents, e.g., in another town, that it is impractical to transmit the physical documents to that location. The people at the location where the document is located may be so busy that they cannot process it for days. To combat this it is often necessary to have excess staff at all locations, which is costly. After a document reaches a person who must act on it substantively, the problems are not over. Critical information must be accurately retrieved from the document and evaluated. Typically, the information is loaded into a large computer which keeps track of the information and any action taken in response to it. There is considerable chance for error during the information retrieval and computer storage step, especially if the document is filled out by hand and the person processing the document is fatigued by the large volume. Further, in a health insurance or credit card business or in a taxing authority, there are usually complex rules on how to respond to or treat the information in a file. In some cases these rules must be looked up manually, a further source of error and delay. Even when the rules are stored on computer, it is necessary for the operator to properly code the information so that the computer applies the proper rules. For example, the rate of insurance reimbursement may vary for the same medical treatment, depending on the subscriber's health insurance plan. Here the chance for error also exists. Assuming that a document is properly acted upon and the correct information is delivered to the customer, e.g., a taxpayer, that person may have questions. Thus, customer service representatives will at least need access to the computer-stored information on the original document in order to respond to the questions. However, it is not unusual for the customer service representatives to need to see a copy of the original document and any correspondence with the client, not just the computer data. This means that someone will have to locate the physical file created in response to the original document. When large numbers of documents are processed, it is impossible to keep many of the files convenient to a service representative. Thus, the files are usually stored off-site, e.g., on microfilm, and it may take days or weeks to retrieve the file and respond to customer inquiries. As indicated, at various stages of the document handling process described, computers can help to reduce the errors. For example, it is known that typed and computer-printed documents can be optically scanned to recover an image of the document for storage in digital form or otherwise. Such storage can be on tape or optical disk. Also, optical character recognition units can extract data electronically from an image of a typed or computer-printed document. Electronic representations of data and documents can be transmitted from one location to another for subsequent processing. However, these devices are used only after a good deal of time has been spent manually processing the documents so that they are acceptable for handling by electronic equipment. In addition, an image of a single document can require up to 50,000 bytes of information. Thus, with a 2400 baud modem, it would take at least 20 seconds to transmit the image.
If 100,000 documents are received in a day, it would take 2 million seconds to transmit them, but there are only 86,400 seconds in a day. While various electronic components are available for easing the workload in paper-intensive businesses, there is presently no system known to the applicant which handles a high volume of documents, essentially eliminates the need for the physical document soon after it is received, reduces errors, reduces fatiguing labor, and allows transactions to be carried out at remote locations so the work load can be efficiently distributed.
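A quick back-of-the-envelope check of the throughput argument above, using only the figures quoted in the passage (20 seconds per document image and 100,000 documents per day), is sketched below; these numbers are the passage's own assumptions rather than measured values, and the derived line count is simply what follows arithmetically from them.

# Throughput arithmetic using the figures quoted in the passage.
SECONDS_PER_DOCUMENT = 20        # transmission time per document image, as stated above
DOCUMENTS_PER_DAY = 100_000      # daily document volume, as stated above
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

total_seconds = SECONDS_PER_DOCUMENT * DOCUMENTS_PER_DAY    # 2,000,000
lines_needed = -(-total_seconds // SECONDS_PER_DAY)          # ceiling division -> 24

print(total_seconds, SECONDS_PER_DAY, lines_needed)
# A single modem line would need roughly 24 days of continuous transmission to move
# one day's intake, i.e. about two dozen parallel lines just to keep pace.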
Review of EGFR-TKIs in Metastatic NSCLC, Including Ongoing Trials

Recent clinical trials have demonstrated the efficacy of epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKI) in the treatment of patients with advanced metastatic non-small cell lung cancer. Most of these recent trials were conducted in patients with EGFR mutation-positive tumors. As our knowledge of the EGFR mutation and its resistant pathways develops, the complexity of the situation expands. This article briefly reviews the pivotal trials leading to approval of EGFR TKIs in the first-line setting for patients with EGFR mutation-positive non-small cell lung carcinomas. It discusses the historical use of EGFR TKIs after the first-line setting in unselected patients and briefly describes ongoing trials.

Abbreviations: ATP, adenosine triphosphate; CI, confidence interval; EGFR, epidermal growth factor receptor; HER, human epidermal growth factor receptor; HRQoL, health-related quality of life; NCI, National Cancer Institute; NSCLC, non-small cell lung cancer; ORR, objective response rate; OS, overall survival; PFS, progression-free survival; RCT, randomized clinical trial; TKI, tyrosine kinase inhibitor.

BACKGROUND

For many years, standard first-line systemic treatment for metastatic NSCLC has consisted of chemotherapy with a two-drug combination including a platinum compound and a non-platinum drug such as pemetrexed, gemcitabine, vinorelbine, or a taxane. The typical median time to progression for chemotherapy-treated patients is 4-6 months and median survival is 10-12 months. The advent of epidermal growth factor receptor (EGFR) molecular testing changed the treatment paradigm. The EGFR or human epidermal growth factor receptor (HER) family contains four members: EGFR (otherwise known as HER1), HER2, HER3, and HER4. In a normal cell, binding of the epidermal growth factor ligand causes dimerization, phosphorylation, activation of the receptor, and triggering of signaling cascades through pathways such as PI3-Kinase-AKT and RAS/RAF. The presence of an EGFR gene mutation is activating, causing a constant signal to be generated, which leads to cell proliferation and other cancer processes. Approximately 10-30% of NSCLC patients have an EGFR gene mutation. This mutation is observed at a higher frequency in some subpopulations. In Asian NSCLC patients who never smoked or were only light smokers, this percentage may be as high as 60%. For NSCLC patients whose tumors test positive for any EGFR mutations, an oral tyrosine kinase inhibitor (TKI) is now the preferred first-line therapy.

FIRST-GENERATION EGFR TKIs

First-generation EGFR TKIs such as erlotinib and gefitinib reversibly compete with adenosine triphosphate (ATP) binding at the tyrosine kinase domain of EGFR. This inhibits ligand-induced EGFR tyrosine phosphorylation, EGFR/HER1 activation, and subsequent activation of the downstream signaling networks. Pivotal randomized trials with these first-generation TKIs are chronologically described in the sections below. Although it is tempting to directly compare the results of these studies, a recent publication argues that this type of comparison is invalid due to differences in trial design, comparator choice, and inclusion criteria; readers are urged to refer to Sebastian et al.'s elegant description and critical analysis of these trials.
IDEAL 1 AND IDEAL 2 - GEFITINIB PROVIDES A SURVIVAL ADVANTAGE IN EGFR MUTATION-UNSELECTED PATIENTS

The IDEAL 1 and IDEAL 2 phase II trials were two of the first studies to test gefitinib in patients with stage IV NSCLC. These trials demonstrated that both the 250 and 500 mg doses of gefitinib were equally active in an EGFR mutation-unselected patient population, resulting in response rates of approximately 20% and median progression-free survival of 2.7 and 2.8 months for the 250 and 500 mg doses of gefitinib, respectively. Because both doses showed equivalent results, the lower 250 mg dose was put forward for the registration phase III trials. A subset of patients treated with gefitinib demonstrated a very positive response, but it was unclear why that was the case. At the time, the implications of EGFR mutations were not understood, but we now know that most of these patients likely harbored an EGFR gene mutation.

NCIC BR.21: ERLOTINIB FOR AN EGFR MUTATION-UNSELECTED PATIENT POPULATION IMPROVES SURVIVAL

The NCIC BR.21 phase III trial demonstrated that erlotinib prolonged survival in NSCLC following the failure of first-line or second-line chemotherapy. This multicenter, randomized controlled trial compared erlotinib to placebo in 731 patients with stage IIIB/IV recurrent NSCLC. Study participants who had failed first- or second-line chemotherapy were randomized 2:1 to receive either erlotinib or placebo. One half of the patients had received one prior regimen, and half had received two prior regimens. Patient selection was not based on EGFR mutation status, gender, smoking history, or type of NSCLC. This study met its primary endpoint of improving overall survival, 6.7 months for erlotinib compared to 4.7 months for placebo (HR 0.70, CI 0.58-0.85, P < 0.001). The study demonstrated statistically significant effects in secondary endpoints including progression-free survival of 2.23 months for patients treated with erlotinib compared to 1.84 months for those treated with placebo (HR 0.61, CI 0.51-0.73, P < 0.001), time to symptom deterioration, and response rate. Overall, 8.9% of patients achieved an objective response to erlotinib (P < 0.001), although mutational analysis was retrospective and only positive in approximately 40 patients. This trial demonstrated a survival benefit in all patients regardless of whether their tumors had an EGFR gene mutation. Why an EGFR inhibitor was efficacious in the absence of an EGFR mutation is unclear. This reflects the complexity of the EGFR mutation and other downstream signaling pathways, many of which are still to be delineated. As a result of the NCIC BR.21 trial, erlotinib was approved and became standard of care in the second- or third-line setting for patients with NSCLC.

ISEL: GEFITINIB PROVIDES NO SURVIVAL ADVANTAGE IN AN EGFR MUTATION-UNSELECTED POPULATION

The Iressa Survival Evaluation in Lung Cancer (ISEL) phase III study was similar to the NCIC BR.21 trial design as it compared an EGFR TKI to placebo in EGFR mutation-unselected NSCLC patients in the second- and third-line setting. Unlike NCIC BR.21, this study failed to meet its endpoint of improved overall survival, with median survival of 5.6 months for patients treated with gefitinib as compared to 5.1 months for patients treated with placebo (HR 0.89, CI 0.77-1.02, P = 0.087).
There was pronounced heterogeneity in survival outcomes between groups of patients, most notably those who were never smokers (HR 0.67, CI 0.49-0.92, P = 0.012) and those of Asian ancestry (HR 0.66, CI 0.48-0.91, P = 0.01). Due to the negative primary results of this trial, gefitinib fell out of use for EGFR mutation-unselected patients in North America.

DISCOVERY OF EGFR MUTATIONS

In 2004, two articles were published in prestigious journals by Paez et al. and Lynch et al. Both publications demonstrated that patients who responded well to gefitinib had EGFR gene mutations, and the mutations were located in the region of the gene that encodes the tyrosine kinase domain. Although much discussion centered on whether the presence of the mutation should influence treatment decisions, clarity about the importance of EGFR mutations did not occur until the Iressa Pan Asian Study (IPASS) trial was completed, the mutation status of patients was analyzed, and the biomarker story became clear.

IPASS TRIAL: GEFITINIB IMPROVES SURVIVAL IN THE FIRST LINE, IN AN EGFR MUTATION-ENHANCED POPULATION

The IPASS trial was the study credited with changing practice. The goal of the IPASS trial was to evaluate the benefit of gefitinib as compared to carboplatin/paclitaxel as first-line treatment for patients with advanced NSCLC. Patients selected for this trial had favorable clinical characteristics and included Asian patients with adenocarcinoma who were non-smokers or former light smokers. Patients treated with gefitinib demonstrated superior progression-free survival as compared to those treated with chemotherapy (HR 0.74, CI 0.65-0.85, P < 0.001). An EGFR biomarker analysis was specified in this protocol, but was retrospective and exploratory. Of 1200 patients, 437 had a tumor specimen that was evaluable for EGFR mutation analysis and, of these, 261 patients (59.7%) had tumors that contained EGFR gene mutations. In the subset of EGFR mutation-positive patients, the response rate to gefitinib was 71.2% as compared to 47.3% for carboplatin/paclitaxel. PFS was significantly superior for the EGFR mutation-positive patients treated with gefitinib, 9.5 months as compared to 6.3 months for those treated with chemotherapy (HR 0.48, CI 0.36-0.64, P < 0.001). Overall survival was not different, most likely due to crossover; 21.6 months for gefitinib as compared to 21.9 months for carboplatin/paclitaxel. IPASS demonstrated that EGFR mutation status was the most appropriate biomarker for the use of EGFR TKIs in stage IV non-small cell lung carcinomas and, with a significant improvement in PFS and quality of life, gefitinib became a standard-of-care first-line option for NSCLC patients with EGFR-mutated tumors. From this point onward, all TKI trials were conducted in EGFR mutation-selected populations and European authorities restricted the use of gefitinib to patients with an EGFR mutation only, regardless of therapeutic line.

WJOG AND NEJSG: JAPANESE TRIALS TESTING GEFITINIB IN EGFR MUTATION-SELECTED POPULATIONS

Two randomized phase III studies compared gefitinib to chemotherapy in the first-line setting. Both of these trials, involving NSCLC patients selected on the basis of EGFR mutations, demonstrated a statistically significant increase in progression-free survival for patients treated with gefitinib over chemotherapy.
In the West Japan Oncology Group (WJOG) trial, patients treated with gefitinib experienced a median PFS of 9.2 months as compared to 6.3 months for those treated with chemotherapy (HR = 0.489, CI 0.336-0.710, P < 0.0001). Results were similar in the North-East Japan Study Group (NEJSG) trial, where patients treated with gefitinib experienced a median PFS of 10.8 months compared to 5.4 months for those treated with chemotherapy (HR = 0.30, CI 0.22-0.41, P < 0.001). This study was stopped following the results of a planned interim analysis, as the gefitinib arm had significantly superior PFS compared to the chemotherapy arm. A high number of patients crossed over to gefitinib (98%); this is the most likely explanation for the lack of difference in overall survival.

EURTAC TRIAL: ERLOTINIB IN THE FIRST LINE IMPROVES PROGRESSION-FREE SURVIVAL

The European Tarceva vs. Chemotherapy (EURTAC) trial was conducted in patients with EGFR mutation-positive tumors, and was the first to demonstrate the benefits of an EGFR TKI in a Caucasian population. Patients were randomized to receive erlotinib or chemotherapy (cisplatin/gemcitabine or cisplatin/docetaxel) in the first-line setting. The response rate was 58% in the erlotinib arm compared to 15% in the chemotherapy arm (P < 0.0001). Progression-free survival was 9.7 months for patients treated with erlotinib and 5.2 months for patients treated with chemotherapy (HR = 0.37, CI 0.25-0.54, P < 0.0001). Overall survival was 22.9 months in the erlotinib arm as compared to 18.8 months in the chemotherapy arm (HR = 0.80; P = 0.42), most likely confounded by second-line therapy and crossover to erlotinib.

SECOND-GENERATION TKIs

Afatinib and dacomitinib are second-generation EGFR TKIs and block all HER-family receptors, including HER1 (EGFR) as well as HER2 and HER4. These agents form permanent covalent bonds with the target, irreversibly inhibiting ATP binding at the tyrosine kinase domain. As a result, second-generation TKIs are theoretically more effective in inhibiting EGFR signaling than first-generation erlotinib or gefitinib because the inhibition of EGFR signaling is prolonged for the entire lifespan of the drug-bound receptor molecule. Two phase III trials were conducted to test dacomitinib in EGFR mutation-unselected populations. The Archer 1009 phase III trial compared dacomitinib with erlotinib in EGFR mutation-unselected patients who were previously treated with chemotherapy. The trial did not demonstrate a statistically significant improvement in progression-free survival and was discontinued. The NCIC BR.26 phase III trial compared dacomitinib with placebo in 736 EGFR mutation-unselected patients with advanced NSCLC previously treated with both chemotherapy and an EGFR TKI. This study also did not meet its objective of prolonging overall survival. Subgroup analysis is currently being conducted in order to understand whether there was a difference in response between patients whose tumors harbored an EGFR mutation and those whose tumors did not. A number of other trials testing the second-generation EGFR TKI dacomitinib are underway and have yet to be published. Archer 1050 is a phase III randomized, open-label trial comparing dacomitinib to gefitinib in the first-line treatment setting in EGFR mutation-positive NSCLC patients. In this trial, approximately 440 patients were randomized 1:1 to dacomitinib or gefitinib.
The primary endpoint is PFS by independent review, while the secondary endpoints include PFS by investigator assessment, overall survival, best overall response, duration of response, safety and tolerability, and patient-reported outcomes. As phase II studies of dacomitinib in the first-line treatment setting were promising, we look forward to the results of this phase III study, which will be revealed in mid-2015.

AFATINIB FOR PATIENTS WITH EGFR MUTATION-POSITIVE TUMORS

LUX-Lung 1 was a phase 2b/3 randomized trial comparing afatinib to best supportive care in unselected patients who had received both a platinum doublet and 3 months of an EGFR TKI (gefitinib or erlotinib). Although progression-free survival was increased, the primary endpoint of overall survival was not. Because of this negative trial, the use of afatinib in patients with acquired resistance to EGFR TKIs was not approved in any country except Japan. The pivotal afatinib trial is LUX-Lung 3. This phase III trial randomized 345 patients with NSCLC in the first-line setting who had EGFR mutation-positive tumors to receive either afatinib or cisplatin/pemetrexed. For this study, all EGFR mutations in exons 18-21 were analyzed. While the majority of patient tumors harbored common EGFR mutations (the exon 19 deletion [Del-19] and the exon 21 L858R point mutation), approximately 10% of patients had uncommon EGFR mutations. The primary endpoint of this trial was progression-free survival and secondary endpoints included overall survival, objective response rate, and quality of life. Afatinib treatment led to an increase in the objective response rate compared with chemotherapy treatment (56.1 vs. 22.6%). Patients randomized to afatinib experienced a significant improvement in median progression-free survival compared with those randomized to chemotherapy, 11.1 vs. 6.9 months, respectively (HR 0.58, CI 0.43-0.78, P = 0.0004). The treatment effect of afatinib was more pronounced when comparing progression-free survival in the pre-defined subgroup of patients with the common Del-19 or L858R EGFR mutations. In this subgroup, patients treated with afatinib experienced progression-free survival of 13.6 months as compared to 6.9 months for those treated with chemotherapy (HR 0.47, CI 0.34-0.65, P < 0.0001). The LUX-Lung 6 trial, conducted in Asia, confirmed the value of afatinib in the population of patients with EGFR mutation-positive tumors. This phase III, open-label trial randomized 364 NSCLC patients in a 2:1 fashion to receive afatinib or gemcitabine/cisplatin. The primary endpoint in this study was progression-free survival and secondary endpoints included objective response rate, disease control rate, patient-reported outcomes, and safety. A statistically significant improvement in progression-free survival was demonstrated for patients treated with afatinib as compared to those treated with chemotherapy, 11.0 vs. 5.6 months, respectively (HR 0.28, CI 0.20-0.39, P < 0.0001). The progression-free survival benefit was consistent across all subgroups, including all mutation categories. The percentage of LUX-Lung 6 patients with a confirmed objective response was 67% in the afatinib group as compared to 23% in the chemotherapy group. Overall, the results of the LUX-Lung 6 trial support the efficacy observations (progression-free survival and objective response rate) demonstrated in the LUX-Lung 3 trial. To date, none of the published randomized EGFR TKI trials have demonstrated a statistically significant improvement in overall survival.
At the 2014 American Society of Clinical Oncology meeting in Chicago, a pooled analysis of LUX-Lung 3 and LUX-Lung 6 was presented. Although the pooling of clinical trial results in this way is controversial, the results are interesting. According to this analysis, the overall survival of LUX-Lung 3 was 31.6 months for patients treated with afatinib as compared to 28.2 months for those treated with chemotherapy (pemetrexed/cisplatin) (HR 0.78). The pooled overall survival analysis of LUX-Lung 6 showed that patients treated with afatinib had a median survival of 23.6 months as compared to 23.5 months when treated with gemcitabine/cisplatin (HR 0.8). Although both hazard ratios are approximately 0.8, neither of the P values was significant. The pooled analysis showed an important improvement in overall survival in patients whose tumors had the most common EGFR mutations, Del-19 and L858R. In the sub-population of patients with these mutations, the median overall survival in the afatinib arm was 27.3 months, which was significantly improved over the median overall survival of 24.3 months in the chemotherapy arm (HR 0.81, P = 0.037). The most interesting analysis concerned the subpopulation of patients whose tumors harbored the Del-19 deletion, where a significant improvement in overall survival was seen in both the LUX-Lung 3 and LUX-Lung 6 trials. In the LUX-Lung 3 trial, the median survival was 33.2 months for Del-19 patients treated with afatinib as compared to 21.1 months with chemotherapy (pemetrexed/cisplatin) (HR 0.54). In LUX-Lung 6, the median survival was 31.4 months for Del-19 patients treated with afatinib as compared to 18.4 months for patients treated with chemotherapy (gemcitabine/cisplatin) (HR 0.64). The authors concluded that patients with Del-19 and L858R mutations may constitute very different populations, and may require different treatment strategies. A highly anticipated trial is LUX-Lung 7. This phase III, open-label trial randomized 316 patients with EGFR mutation-positive advanced adenocarcinoma to receive either afatinib or gefitinib. The primary endpoint for the trial, which completed in July 2013, was overall survival. We await the results eagerly. Clinical trials with the third-generation EGFR TKIs are underway. These inhibitors selectively target tumors that harbor the acquired T790M mutation. Currently, there are more than 350 open trials of EGFR-directed agents in NSCLC, and at least 20 of these are phase III. Indeed, this is a very exciting time in the evolution of our knowledge of the EGFR TKIs, and we expect outstanding advances in the care of our patients with non-small cell lung carcinoma.
Giant cell transformation in cerebrohepatorenal syndrome. An infant with cerebrohepatorenal syndrome of Zellweger had extensive hepatic giant cell transformation at 6 1/2 weeks of age. At 16 weeks the liver showed early cirrhosis and rare giant cells. Changes previously described have ranged from no abnormality in the neonate to cirrhosis at 20 weeks of age and indicate progression of liver disease in affected patients.
At the Intel Extreme Masters esports festival in Katowice, Heroes of the Storm player Dennis ‘HasuObs’ Schneider is sitting in a stairwell. He has 20 minutes to spare before he’ll be dragged off by journalists for photographs and interviews. His near-perfect English — delicate and direct — is punctuated by hearty laughter. Carefully considering his words, he addresses every portion of our interview at length. He describes his esports tenure in detail: A sprawling competitive history that spans three different games. His narrative changes course dramatically, jumping sporadically from event to event — and while it may seem an arduous odyssey at first, it’s quite a simple story to tell. He sums it up well: “Twelve years of play,” the 28-year-old says. “Ten years of tryharding.” Those 12 years began in 2004. There was no Facebook, Reddit, YouTube or Twitter; Pluto was still a planet; and HasuObs, then 16 years old, signed his first professional gaming contract with German gaming organization Mousesports. His first love was Warcraft III: Reign of Chaos. After playing StarCraft: Brood War casually as a teenager, he took to the Undead race in Warcraft III and never looked back. While kids his age were working in fast food and retail for their summer jobs, HasuObs was plugging away at his computer and traveling the globe every few months to compete at major LAN events under the Mousesports banner. “My parents were worried,” HasuObs recalls. “Every time there was an offline tournament, one of them came with me. When I signed with Mousesports I was a minor, so my parents had to sign the contract instead of me.” A deal was struck — so long as he kept his grades up, he could continue to compete. “My parents were not a barrier. I managed to finish high school, then I started my college studies. I was trying to make it work in the first year. I was going there every day and writing about the subjects and stuff. I never felt satisfied with it. I decided to pursue esports full time.” Mighty Mouse Dennis stuck with Mousesports — through high school and college, breakups, and 11 cold German winters. He spent the entirety of his formative years under the organization’s umbrella. It’s rare for teenagers to commit to any job for that length of time, much less one in esports. “There were never problems within the team or the management,” said HasuObs of his former employer. “Sometimes I had offers that were better than Mousesports, but I always talked to the management and we always figured out a way that would keep us both satisfied. Back then, I had no idea what could be better, so I never thought about joining another team. Both parties were very satisfied. I think it was a very good relationship for both sides.” Reign of Chaos His first memorable public showing was at the ESL Pro Series (EPS) in 2006. EPS was a recurring national Warcraft III event in Germany that HasuObs competed at intermittently throughout his career. “It's all ‘firsts’ for me with Warcraft because it was the first game I played competitively. Winning my first EPS was a very big deal for me. I also won the very last EPS, which makes me feel like the ‘forever champion’,” he adds with a laugh. “Back then there were big team leagues. Playing with my teammates — ToD, Happy, the Koreans, the Chinese players — I have so many good memories. I think we never won a team league, but we placed second and third a few times. It felt like winning it, though. 
It felt good to place that well back then.” Protoss at Heart At the age of 21, it was time for a change. “I started to play StarCraft II and I quickly realized that I loved the game. The first two years were crazy because I was traveling all the time, like every two or three months. I realized it was impossible to do anything next to esports,” said HasuObs. “I do not regret it.” Mousesports was on board with the transition. “I don't even know if we talked about it. I told them I want to play StarCraft II and they said okay, sure, you can keep your contract. They said ‘Hopefully you will become good!’ I said ‘No worries’!" HasuObs’ love of StarCraft started at a very early age, well before he became an esports pro, when he and his brother would throw small LAN parties at their parents’ house. “There was one guy who played Protoss and he built a lot of Gateways. For me it was something new because I only had one Barracks back then, only one production facility. He had like ten Gateways and built like mass Dragoons! It was impressive to me and made me like Protoss.” There is certainly nostalgia surrounding his StarCraft II career. “If I think of StarCraft II, I think of a lot of memories outside the game when we were at tournaments. You meet these guys every two months all over the globe and play the same game. Every offline event was something special.” Modestly put, HasuObs is well traveled for his age. “I've been to Korea, all over America, all over Europe, China many times. In Russia, I was in Moscow once to play in a tournament. I've been to DreamHack in Spain, France, and Sweden. The most exotic one I guess was Singapore, because I didn't expect to play there. I was supposed to be a caster there and they were missing a player and asked me!” HasuObs played StarCraft II professionally for five years. He won a little over $82,000 from 119 tournaments, 42 of which were in-person competitions. HasuObs spent a lot of time traveling in the name of competitive StarCraft II. This lifestyle takes a certain kind of resilience to chase with such determination, and on a long enough timeline wears even the most steadfast competitors dull. As he went from hotel to airplane, to hotel, to computer, and back again, his motivation and dedication to compete in StarCraft II began to dwindle. A Leap of Faith It’s true, old dogs can learn new tricks. It’s far easier said than done, however. “It was a very big step,” HasuObs recalls. “I didn't like to play StarCraft II anymore. It was almost one year where I didn't have the results, so I wasn't getting prize money and my morale and motivation weren't there anymore. I started to play Heroes.” In June of 2015, HasuObs and Mousesports parted ways. “For over a decade, HasuObs was a part of our team, longer than any other player in professional gaming I know of,” said Cengiz Tüylü, CEO of Mousesports, in the team’s official farewell statement. “Our cooperation was always built on respect, trust and friendship… I like to think back, enjoying all the great memories I’ve shared throughout the years with Mouz, and HasuObs is part of many of them.” Moving forward into uncharted territory without a sponsor, HasuObs asked a few other StarCraft II players to make a Heroes of the Storm team — but it was something of a failed venture. They struggled to post results until about eight months in, when they signed with ROCCAT. “For the first year, it was a hard time. There were not that many tournaments and we didn't know how to practice that well. 
We didn't understand drafting.” The ROCCAT Heroes of the Storm team broke up in December of 2015. Just a month later, HasuObs joined the roster now competing under the Team Liquid banner. “When I joined these guys, it was the first time in Heroes that I felt like, this is going to be good! These guys work hard, they understand. They have the right mindset to play tournaments. It just felt like I fit perfectly into the team.” The Price of Complacency While his team had been performing marginally well at first, adding HasuObs boosted their competitive performance into the stratosphere. They qualified out of the gate for their first regional LAN together at IEM Katowice 2016. “We had a boot camp before with mYinsanity in Switzerland. We performed well. Our goal was not to get first. Our goal was to qualify for the [Spring] Global Championship; we needed to win the semifinals for that. The moment we won the semifinals, all the pressure was gone,” said Schneider. “Then the grand finals were a stomp by Team Dignitas. It's hard to explain, but the moment we won the semifinals, for us it felt like the tournament was done. Of course, we tried to win, but there was a different energy.” After losing out to Team Dignitas at Katowice in 2016, HasuObs’ team was determined to not make the same mistake again. A lot of preparation had gone into this year’s Western Clash. “Right before the tournament here in Katowice, we checked out some drafts, scouted the opponents, and talked about what we like, what we don’t like. We tried to spread it out a bit, because last year for the Summer Championship we practiced the most with Team Dignitas and it felt like it was unfavorable for us. We felt like they learned way more out of the draft and stuff, so we adjusted our practice routine and we tried to spread it out across all good European teams.” Irrepressible Resilience Dennis’s breath hung in the air as he explained that his Warrior player teammate, Markus ‘Blumbi’ Hanke, had fallen ill the night before. It was Championship Sunday at the Western Clash, and an exceptionally cold morning in Poland. He was wearing sweatpants, and walking with his team toward an access corridor in the back of the venue. Moments earlier, they had stepped off the Greyhound bus that had ferried them here from their hotel just a mile down the road. He offered that Blumbi was feeling better, but these words were soaked in concern. While this event meant a lot to every competitor, it was especially important to his team. They had failed to qualify for the Fall Championship. They had barely missed the crown at this exact same venue a year ago. Now the stakes were higher, and they were within arm’s reach. It takes a certain kind of person to get back up after being knocked down so many times. One of those people is right here in this stairwell—a fabled competitor who remains from an era when computer monitors were as deep as they were wide. As we know now, HasuObs’ team went on to again succumb to the strength of Team Dignitas here in Katowice. Despite this setback, he remains vigilant. A newly inked deal with Team Liquid acts as a soft reset—a welcome reprieve for someone who, at the age of 28, could be considered an old-timer in this industry. HasuObs is well aware of that. “I was thinking of quitting esports when I stopped playing StarCraft II,” he says. “That was the only time I’ve ever thought about quitting. I love esports. I would like to stay. All pro gamers have an expiry date. 
HasuObs, a “forever champion,” has reached what should have been his limits and surpassed them time and time again—in a way, defeating the strongest opponent of all.
import * as mat from "transformation-matrix"; import { TConstantOrLazy, calcValueFromConstantOrLazy } from "../lazyEvaluative"; import * as modO from "../elements/modifier-gun"; import { Gun } from "../gun"; import * as le from "../contents/lazyEvaluative"; import { FiringState } from "../firing-state"; /** * Transform firing transformation matrix. * * @param trans transform. */ export const transform = (trans: TConstantOrLazy<mat.Matrix>): Gun => { return new modO.ModifierGun(new modO.TransformModifier(trans)); }; /** * Add fire translation. * * ```typescript * const fireFromRight = concat( * translationAdded({ y: 0.1 }), * fire(bullet()), * ); * ``` * * @param translation Added translation as [x, y]. */ export const translationAdded = (translation: { x?: TConstantOrLazy<number>; y?: TConstantOrLazy<number>; }): Gun => { const transX = translation.x === undefined ? 0 : translation.x; const transY = translation.y === undefined ? 0 : translation.y; return transform(le.createTransform({ translation: [transX, transY] })); }; /** * Modify parameter. * * @param name Parameter name. * @param modifier Parameter modify function. */ export const paramModified = ( name: string, modifier: (state: FiringState) => (oldValue: number) => number ): Gun => { return new modO.ModifierGun(new modO.ModifyParameterModifier(name, modifier)); }; /** * Rotate firing transform. * * ```typescript * const leanFire = concat( * rotated(45), * translationAdded({ x: 0.1 }), * fire(bullet()), * ); * ``` * * @param angleDeg Adding angle degrees. */ export const rotated = (angleDeg: TConstantOrLazy<number>): Gun => { return transform(le.createTransform({ rotationDeg: angleDeg })); }; /** * Add parameter. * * ```typescript * const moreDangerousFire = concat( * useParameter('dangerousness', 9999), * paramAdded('dangerousness', 10), * fire(bullet()), * ); * ``` * * @param name Parameter name. * @param adding Adding amount. */ export const paramAdded = ( name: string, adding: TConstantOrLazy<number> ): Gun => { return paramModified(name, state => { const addingConst = calcValueFromConstantOrLazy<number>(state, adding); return (oldValue): number => oldValue + addingConst; }); }; /** * Multiply parameter. * * ```typescript * const zeroDangerousFire = concat( * useParameter('dangerousness', 9999), * parameterMultiplied('dangerousness', 0), * fire(bullet()), * ); * ``` * * @param name Parameter name. * @param multiplier Multiplier. */ export const paramMultiplied = ( name: string, multiplier: TConstantOrLazy<number> ): Gun => { return paramModified(name, state => { const mltConst = calcValueFromConstantOrLazy<number>(state, multiplier); return (oldValue): number => oldValue * mltConst; }); }; /** * Reset parameter value. * * ```typescript * const zeroDangerousFire = concat( * useParameter('dangerousness', 9999), * resetParameter('dangerousness', 0), * fire(bullet()), * ); * ``` * * @param name Parameter name. * @param newValue New parameter value. */ export const paramReset = ( name: string, newValue: TConstantOrLazy<number> ): Gun => { return paramModified(name, state => { const valueConst = calcValueFromConstantOrLazy<number>(state, newValue); return (): number => valueConst; }); }; /** * Add angle like N-Way firing. * * ```typescript * const firing = repeat( * { times: 5, interval: 10, name: 'masterRepeat'}, * rotatedAsNWay({ totalAngle: 90, name: 'masterRepeat'}), * fire(bullet()), * ); * ``` * * @param option N-Way firing option. 
*/ export const rotatedAsNWay = (option: le.TNWayAngleOption): Gun => rotated(le.nWayAngle(option)); /** * Multiply bullet speed. * * ```typescript * const doubleSpeedFire = concat( * speedMultiplied(2), * fire(bullet()), * ); * ``` * * @param multiplier Multiplier. */ export const speedMultiplied = (multiplier: TConstantOrLazy<number>): Gun => paramMultiplied("speed", multiplier); /** * Multiply bullet size. * * ```typescript * const doubleSizedFire = concat( * sizeMultiplied(2), * fire(bullet()), * ); * ``` * * @param multiplier Multiplier. */ export const sizeMultiplied = (multiplier: TConstantOrLazy<number>): Gun => paramMultiplied("size", multiplier); /** * Reset bullet speed. * * ```typescript * const doubleSpeedFire = concat( * speedReset(2), * fire(bullet()), * ); * ``` * * @param newValue New value. */ export const speedReset = (newValue: TConstantOrLazy<number>): Gun => paramReset("speed", newValue); /** * Reset bullet size. * * ```typescript * const doubleSizedFire = concat( * resetSize(2), * fire(bullet()), * ); * ``` * * @param newValue New value. */ export const sizeReset = (newValue: TConstantOrLazy<number>): Gun => paramReset("size", newValue); /** * Invert angle and translation. * * ```typescript * const leanFireFromRight = concat( * translationAdded({ x: 0, y: 0.2 }), * rotated(45), * fire(bullet()), * ); * const invertedFire = concat( * inverted(), * leanFireFromRight, * ); * ``` * * @param newValue New value. */ export const inverted = (): Gun => { return new modO.ModifierGun(new modO.InvertTransformModifier()); };
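The doc comments above already hint at how these modifier guns compose with `concat`, `repeat`, `fire`, and `bullet` from the rest of the library. The sketch below simply strings a few of them together into one firing pattern. It is illustrative only: both import paths are assumptions (the real package layout is not shown here), and the combinator signatures are inferred from the doc-comment examples rather than from their definitions.

// Sketch only: import paths and combinator signatures are inferred from the doc comments above.
import { concat, repeat, fire, bullet } from "../contents"; // assumed location of the combinators
import { rotated, rotatedAsNWay, speedMultiplied, translationAdded } from "./modifier-shorthand"; // assumed filename for the module above

// A lean single shot, as in the `rotated` example: tilt 45 degrees, offset the muzzle, fire.
const leanFire = concat(rotated(45), translationAdded({ x: 0.1 }), fire(bullet()));

// Five repeats spread evenly across a 60-degree fan, each bullet at 1.5x its base speed.
export const spreadVolley = repeat(
  { times: 5, interval: 10, name: "volley" },
  rotatedAsNWay({ totalAngle: 60, name: "volley" }),
  speedMultiplied(1.5),
  fire(bullet())
);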
// plugins/terrain_plugin/src/TerrainQuadtree.cpp
#include "TerrainQuadtree.hpp"

TerrainQuadtree::TerrainQuadtree(const Device* device, TransferPool* transfer_pool, const float& split_factor, const size_t& max_detail_level,
    const double& root_side_length, const glm::vec3& root_tile_position) : nodeRenderer(device, transfer_pool), MaxLOD(max_detail_level) {
    root = std::make_unique<TerrainNode>(glm::ivec3(0, 0, 0), glm::ivec3(0, 0, 0), root_tile_position, root_side_length);
    TerrainNode::MaxLOD = MaxLOD;
    TerrainNode::SwitchRatio = split_factor;
    auto root_noise = GetNoiseHeightmap(HeightNode::RootSampleGridSize, glm::vec3(0.0f),
        static_cast<float>(HeightNode::RootSampleGridSize / (HeightNode::RootNodeLength * 2.0)));
    root->HeightData = std::make_unique<HeightNode>(glm::ivec3(0, 0, 0), root_noise);
}

void TerrainQuadtree::SetupNodePipeline(const VkRenderPass& renderpass, const glm::mat4& projection) {
    nodeRenderer.CreatePipeline(renderpass, projection);
}

void TerrainQuadtree::UpdateQuadtree(const glm::vec3& camera_position, const glm::mat4& view) {
    if (nodeRenderer.UpdateLOD) {
        // Create new view frustum from the combined projection-view matrix.
        util::view_frustum view_f;
        glm::mat4 matrix = nodeRenderer.uboData.projection * view;
        // Updated as right, left, top, bottom, back (near), front (far)
        view_f[0] = glm::vec4(matrix[0].w + matrix[0].x, matrix[1].w + matrix[1].x, matrix[2].w + matrix[2].x, matrix[3].w + matrix[3].x);
        view_f[1] = glm::vec4(matrix[0].w - matrix[0].x, matrix[1].w - matrix[1].x, matrix[2].w - matrix[2].x, matrix[3].w - matrix[3].x);
        view_f[2] = glm::vec4(matrix[0].w - matrix[0].y, matrix[1].w - matrix[1].y, matrix[2].w - matrix[2].y, matrix[3].w - matrix[3].y);
        view_f[3] = glm::vec4(matrix[0].w + matrix[0].y, matrix[1].w + matrix[1].y, matrix[2].w + matrix[2].y, matrix[3].w + matrix[3].y);
        view_f[4] = glm::vec4(matrix[0].w + matrix[0].z, matrix[1].w + matrix[1].z, matrix[2].w + matrix[2].z, matrix[3].w + matrix[3].z);
        view_f[5] = glm::vec4(matrix[0].w - matrix[0].z, matrix[1].w - matrix[1].z, matrix[2].w - matrix[2].z, matrix[3].w - matrix[3].z);
        // Scale each plane by the magnitude of its four components before use.
        for (size_t i = 0; i < view_f.planes.size(); ++i) {
            float length = std::sqrtf(view_f[i].x * view_f[i].x + view_f[i].y * view_f[i].y + view_f[i].z * view_f[i].z + view_f[i].w * view_f[i].w);
            view_f[i] /= length;
        }
        root->Update(camera_position, view_f, &nodeRenderer);
    }
}

void TerrainQuadtree::RenderNodes(VkCommandBuffer& graphics_cmd, VkCommandBufferBeginInfo& begin_info, const glm::mat4& view, const glm::vec3& camera_pos,
    const VkViewport& viewport, const VkRect2D& rect) {
    nodeRenderer.Render(graphics_cmd, begin_info, view, camera_pos, viewport, rect);
}
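For context, a minimal sketch of how a host application might drive this class follows. It is an assumption-laden illustration: the Device, TransferPool, and Vulkan handles are presumed to come from elsewhere in the engine, and the constructor arguments (split factor, max LOD, root side length) are illustrative values, not ones taken from the plugin.

// Sketch of driving TerrainQuadtree; all handles below are assumed to be created by the host application.
#include "TerrainQuadtree.hpp"

void runTerrain(const Device* device, TransferPool* transferPool,
                VkRenderPass renderPass, const glm::mat4& projection) {
    // Illustrative tuning values; the real plugin may use different ones.
    TerrainQuadtree terrain(device, transferPool,
                            /*split_factor=*/1.5f, /*max_detail_level=*/12,
                            /*root_side_length=*/8192.0, glm::vec3(0.0f));
    terrain.SetupNodePipeline(renderPass, projection);

    // Per frame, with camera, view matrix, command buffer, viewport, and scissor supplied by the renderer:
    // terrain.UpdateQuadtree(cameraPos, view);                                  // refresh LOD against the view frustum
    // terrain.RenderNodes(cmd, beginInfo, view, cameraPos, viewport, scissor);  // record draw commands
}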
In the 1960s, Margaret Lovatt was part of a Nasa-funded project to communicate with dolphins. Soon she was living with ‘Peter’ 24 hours a day in a converted house. Christopher Riley reports on an experiment that went tragically wrong Like most children, Margaret Howe Lovatt grew up with stories of talking animals. "There was this book that my mother gave to me called Miss Kelly," she remembers with a twinkle in her eye. "It was a story about a cat who could talk and understand humans and it just stuck with me that maybe there is this possibility." Unlike most children, Lovatt didn't leave these tales of talking animals behind her as she grew up. In her early 20s, living on the Caribbean island of St Thomas, they took on a new significance. During Christmas 1963, her brother-in-law mentioned a secret laboratory at the eastern end of the island where they were working with dolphins. She decided to pay the lab a visit early the following year. "I was curious," Lovatt recalls. "I drove out there, down a muddy hill, and at the bottom was a cliff with a big white building." Lovatt was met by a tall man with tousled hair, wearing an open shirt and smoking a cigarette. His name was Gregory Bateson, a great intellectual of the 20th century and the director of the lab. "Why did you come here?" he asked Lovatt. "Well, I heard you had dolphins," she replied, "and I thought I'd come and see if there was anything I could do or any way I could help…" Unused to unannounced visitors and impressed by her bravado, Bateson invited her to meet the animals and asked her to watch them for a while and write down what she saw. Despite her lack of scientific training, Lovatt turned out to be an intuitive observer of animal behaviour and Bateson told her she could come back whenever she wanted. "There were three dolphins," remembers Lovatt. "Peter, Pamela and Sissy. Sissy was the biggest. Pushy, loud, she sort of ran the show. Pamela was very shy and fearful. And Peter was a young guy. He was sexually coming of age and a bit naughty." The lab's upper floors overhung a sea pool that housed the animals. It was cleaned by the tide through openings at each end. The facility had been designed to bring humans and dolphins into closer proximity and was the brainchild of an American neuroscientist, Dr John Lilly. Here, Lilly hoped to commune with the creatures, nurturing their ability to make human-like sounds through their blow holes. Lilly had been interested in connecting with cetaceans since coming face to face with a beached pilot whale on the coast near his home in Massachusetts in 1949. The young medic couldn't quite believe the size of the animal's brain – and began to imagine just how intelligent the creature must have been, explains Graham Burnett, professor of the history of science at Princeton and author of The Sounding of the Whale. "You are talking about a time in science when everybody's thinking about a correlation between brain size and what the brain can do. And in this period, researchers were like: 'Whoa… big brain huh… cool!'" Tripper and flipper: Dr John Lilly, who started experimenting with LSD during the project. Photograph: Lilly Estate At every opportunity in the years that followed, John Lilly and his first wife, Mary, would charter sailboats and cruise the Caribbean, looking for other big-brained marine mammals to observe. It was on just such a trip in the late 1950s that the Lillys came across Marine Studios in Miami – the first place to keep the bottlenose dolphin in captivity. 
Up until this time, fishermen on America's east coast, who were in direct competition with dolphins for fish, had considered the animals vermin. "They were know as 'herring hogs' in most of the seafaring towns in the US," says Burnett. But here, in the tanks of Marine Studios, the dolphins' playful nature was endearingly on show and their ability to learn tricks quickly made it hard to dislike them. Here, for the first time, Lilly had the chance to study the brains of live dolphins, mapping their cerebral cortex using fine probes, which he'd first developed for his work on the brains of rhesus monkeys. Unable to sedate dolphins, as they stop breathing under anaesthetic, the brain-mapping work wasn't easy for either animals or scientists, and the research didn't always end well for the marine mammals. But on one occasion in 1957, the research would take a different course which would change his and Mary's lives for ever. Now aged 97, Mary still remembers the day very clearly. "I came in at the top of the operating theatre and heard John talking and the dolphin would go: 'Wuh… wuh… wuh' like John, and then Alice, his assistant, would reply in a high tone of voice and the dolphin would imitate her voice. I went down to where they were operating and told them that this was going on and they were quite startled." Perhaps, John reasoned, this behaviour indicated an ambition on the dolphins' part to communicate with the humans around them. If so, here were exciting new opportunities for interspecies communication. Lilly published his theory in a book in 1961 called Man and Dolphin. The idea of talking dolphins, eager to tell us something, captured the public's imagination and the book became a bestseller. Man and Dolphin extrapolated Mary Lilly's initial observations of dolphins mimicking human voices, right through to teaching them to speak English and on ultimately to a Cetacean Chair at the United Nations, where all marine mammals would have an enlightening input into world affairs, widening our perspectives on everything from science to history, economics and current affairs. Lilly's theory had special significance for another group of scientists – astronomers. "I'd read his book and was very impressed," says Frank Drake, who had just completed the first experiment to detect signals from extraterrestrial civilisations using a radio telescope at Green Bank in West Virginia. "It was a very exciting book because it had these new ideas about creatures as intelligent and sophisticated as us and yet living in a far different milieu." He immediately saw parallels with Lilly's work, "because we [both] wanted to understand as much as we could about the challenges of communicating with other intelligent species." This interest helped Lilly win financial backing from Nasa and other government agencies, and Lilly opened his new lab in the Caribbean in 1963, with the aim of nurturing closer relationships between man and dolphin. A few months LATER, in early 1964, Lovatt arrived. Through her naturally empathetic nature she quickly connected with the three animals and, eager to embrace John Lilly's vision for building an interspecies communication bridge, she threw herself into his work, spending as much time as possible with the dolphins and carrying out a programme of daily lessons to encourage them to make human-like sounds. While the lab's director, Gregory Bateson, concentrated on animal-to-animal communication, Lovatt was left alone to pursue Lilly's dream to teach the dolphins to speak English. 
But even at a state-of-the-art facility like the Dolphin House, barriers remained. "Every night we would all get in our cars and pull the garage door down and drive away," remembers Lovatt. "And I thought: 'Well there's this big brain floating around all night.' It amazed me that everybody kept leaving and I just thought it was wrong." Lovatt reasoned that if she could live with a dolphin around the clock, nurturing its interest in making human-like sounds, like a mother teaching a child to speak, they'd have more success. "Maybe it was because I was living so close to the lab. It just seemed so simple. Why let the water get in the way?" she says. "So I said to John Lilly: 'I want to plaster everything and fill this place with water. I want to live here.'" The radical nature of Lovatt's idea appealed to Lilly and he went for it. She began completely waterproofing the upper floors of the lab, so that she could actually flood the indoor rooms and an outdoor balcony with a couple of feet of water. This would allow a dolphin to live comfortably in the building with her for three months. Lovatt selected the young male dolphin called Peter for her live-in experiment. "I chose to work with Peter because he had not had any human-like sound training and the other two had," she explains. Lovatt would attempt to live in isolation with him six days a week, sleeping on a makeshift bed on the elevator platform in the middle of the room and doing her paperwork on a desk suspended from the ceiling and hanging over the water. On the seventh day Peter would return to the sea pool downstairs to spend time with the two female dolphins at the lab – Pamela and Sissy. 'If I was sitting with my legs in the water, he'd come up and look at the back of my knee for a long time': Margaret with Peter. Photograph: courtesy Lilly Estate By the summer of 1965, Lovatt's domestic dolphinarium was ready for use. Lying in bed, surrounded by water that first night and listening to the pumps gurgling away, she remembers questioning what she was doing. "Human people were out there having dinner or whatever and here I am. There's moonlight reflecting on the water, this fin and this bright eye looking at you and I thought: 'Wow, why am I here?' But then you get back into it and it never occurred to me not to do it. What I was doing there was trying to find out what Peter was doing there and what we could do together. That was the whole point and nobody had done that." Audio recordings of Lovatt's progress, meticulously archived on quarter-inch tapes at the time, capture the energy that Lovatt brought to the experiment – doggedly documenting Peter's progress with her twice-daily lessons and repeatedly encouraging him to greet her with the phrase 'Hello Margaret'. "'M' was very difficult," she remembers. "My name. Hello 'M'argaret. I worked on the 'M' sound and he eventually rolled over to bubble it through the water. That 'M', he worked on so hard." For Lovatt, though, it often wasn't these formal speech lessons that were the most productive. It was just being together which taught her the most about what made Peter tick. "When we had nothing to do was when we did the most," she reflects. "He was very, very interested in my anatomy. If I was sitting here and my legs were in the water, he would come up and look at the back of my knee for a long time. He wanted to know how that thing worked and I was so charmed by it." Carl Sagan, one of the young astronomers at Green Bank, paid a visit to report back on progress to Frank Drake. 
"We thought that it was important to have the dolphins teach us 'Dolphinese', if there is such a thing," recalls Drake. "For example we suggested two dolphins in each tank not able to see each other – and he should teach one dolphin a procedure to obtain food – and then see if it could tell the other dolphin how to do the same thing in its tank. That was really the prime experiment to be done, but Lilly never seemed able to do it." Instead, he encouraged Lovatt to press on with teaching Peter English. But there was something getting in the way of the lessons. "Dolphins get sexual urges," says the vet Andy Williamson, who looked after the animals' health at Dolphin House. "I'm sure Peter had plenty of thoughts along those lines." "Peter liked to be with me," explains Lovatt. "He would rub himself on my knee, or my foot, or my hand. And at first I would put him downstairs with the girls," she says. But transporting Peter downstairs proved so disruptive to the lessons that, faced with his frequent arousals, it just seemed easier for Lovatt to relieve his urges herself manually. "I allowed that," she says. "I wasn't uncomfortable with it, as long as it wasn't rough. It would just become part of what was going on, like an itch – just get rid of it, scratch it and move on. And that's how it seemed to work out. It wasn't private. People could observe it." For Lovatt it was a precious thing, which was always carried out with great respect. "Peter was right there and he knew that I was right there," she continues. "It wasn't sexual on my part. Sensuous perhaps. It seemed to me that it made the bond closer. Not because of the sexual activity, but because of the lack of having to keep breaking. And that's really all it was. I was there to get to know Peter. That was part of Peter." Innocent as they were, Lovatt's sexual encounters with Peter would ultimately overshadow the whole experiment when a story about them appeared in Hustler magazine in the late 1970s. "I'd never even heard of Hustler," says Lovatt. "I think there were two magazine stores on the island at the time. And I went to one and looked and I found this story with my name and Peter, and a drawing." Sexploitation: Hustler magazine's take on the story in the late 1970s. Photograph: Lilly Estate Lovatt bought up all the copies she could find, but the story was out there and continues to circulate to this day on the web. "It's a bit uncomfortable," she acknowledges. "The worst experiment in the world, I've read somewhere, was me and Peter. That's fine, I don't mind. But that was not the point of it, nor the result of it. So I just ignore it." Something else began to interrupt the study. Lilly had been researching the mind-altering powers of the drug LSD since the early 1960s. The wife of Ivan Tors, the producer of the dolphin movie Flipper, had first introduced him to it at a party in Hollywood. "John and Ivan Tors were really good friends," says Ric O'Barry of the Dolphin Project (an organisation that aims to stop dolphin slaughter and exploitation around the world) and a friend of Lilly's at the time. "Ivan was financing some of the work on St Thomas. I saw John go from a scientist with a white coat to a full blown hippy," he remembers. For the actor Jeff Bridges, who was introduced to Lilly by his father Lloyd, Lilly's self-experimentation with LSD was just part of who he was. "John Lilly was above all an explorer of the brain and the mind, and all those drugs that expand our consciousness," reflects Bridges. 
"There weren't too many people with his expertise and his scientific background doing that kind of work." In the 1960s a small selection of neuroscientists like John Lilly were licensed to research LSD by the American government, convinced that the drug had medicinal qualities that could be used to treat mental-health patients. As part of this research, the drug was sometimes injected into animals and Lilly had been using it on his dolphins since 1964, curious about the effect it would have on them. Margaret Lovatt today. Photograph: Matt Pinner/BBC Much to Lilly's annoyance, nothing happened. Despite his various attempts to get the dolphins to respond to the drug, it didn't seem to have any effect on them, remembers Lovatt. "Different species react to different pharmaceuticals in different ways," explains the vet, Andy Williamson. "A tranquilliser made for horses might induce a state of excitement in a dog. Playing with pharmaceuticals is a tricky business to say the least." Injecting the dolphins with LSD was not something Lovatt was in favour of and she insisted that the drug was not given to Peter, which Lilly agreed to. But it was his lab, and they were his animals, she recalls. And as a young woman in her 20s she felt powerless to stop him giving LSD to the other two dolphins. While Lilly's experimentation with the drug continued, Lovatt persevered with Peter's vocalisation lessons and grew steadily closer to him. "That relationship of having to be together sort of turned into really enjoying being together, and wanting to be together, and missing him when he wasn't there," she reflects. "I did have a very close encounter with – I can't even say a dolphin again – with Peter." By autumn 1966, Lilly's interest in the speaking-dolphin experiment was dwindling. "It didn't have the zing to it that LSD did at that time," recalls Lovatt of Lilly's attitude towards her progress with Peter. "And in the end the zing won." The dolphinarium on St Thomas. Photograph: Lilly Estate Lilly's cavalier attitude to the dolphins' welfare would eventually be his downfall, driving away the lab's director, Gregory Bateson, and eventually causing the funding to be cut. Just as Lovatt and Peter's six-month live-in experiment was concluding, it was announced that the lab would be closed. Without funding, the fate of the dolphins was in question. "I couldn't keep Peter," says Lovatt, wistfully. "If he'd been a cat or a dog, then maybe. But not a dolphin." Lovatt's new job soon became the decommissioning of the lab and she prepared to ship the dolphins away to Lilly's other lab, in a disused bank building in Miami. It was a far cry from the relative freedom and comfortable surroundings of Dolphin House. At the Miami lab, held captive in smaller tanks with little or no sunlight, Peter quickly deteriorated, and after a few weeks Lovatt received news. "I got that phone call from John Lilly," she recalls. "John called me himself to tell me. He said Peter had committed suicide." Ric O'Barry corroborates the use of this word. "Dolphins are not automatic air-breathers like we are," he explains. "Every breath is a conscious effort. If life becomes too unbearable, the dolphins just take a breath and they sink to the bottom. They don't take the next breath." Andy Williamson puts Peter's death down to a broken heart, brought on by a separation from Lovatt that he didn't understand. "Margaret could rationalise it, but when she left, could Peter? Here's the love of his life gone." 
"I wasn't terribly unhappy about it," explains Lovatt, 50 years on. "I was more unhappy about him being in those conditions [at the Miami lab] than not being at all. Nobody was going to bother Peter, he wasn't going to hurt, he wasn't going to be unhappy, he was just gone. And that was OK. Odd, but that's how it was." In the decades which followed, John Lilly continued to study dolphin-human communications, exploring other ways of trying to talk to them – some of it bizarrely mystical, employing telepathy, and some of it more scientific, using musical tones. No one else ever tried to teach dolphins to speak English again. Instead, research has shifted to better understanding other species' own languages. At the Seti (Search for Extraterrestrial Intelligence) Institute, founded by Frank Drake to continue his work on life beyond Earth, Drake's colleague Laurance Doyle has attempted to quantify the complexity of animal language here on our home planet. "There is still this prejudice that humans have a language which is far and away above any other species' qualitatively," says Doyle. "But by looking at the complexity of the relationship of dolphin signals to each other, we've discovered that they definitely have a very high communication intelligence. I think Lilly's big insight was how intelligent dolphins really are." Margaret Howe Lovatt stayed on the island, marrying the photographer who'd captured pictures of the experiment. Together they moved back into Dolphin House, eventually converting it into a family home where they brought up three daughters. "It was a good place," she remembers. "There was good feeling in that building all the time." In the years that followed the house has fallen into disrepair, but the ambition of what went on there is still remembered. "Over the years I have received letters from people who are working with dolphins themselves," she recalls. "They often say things like: 'When I was seven I read about you living with a dolphin, and that's what started it all for me.'" Peter is their "Miss Kelly", she explains, remembering her own childhood book about talking animals. "Miss Kelly inspired me. And in turn the idea of my living with a dolphin inspired others. That's fun. I like that." Christopher Riley is the producer and director of The Girl Who Talked to Dolphins, which will premiere at the Sheffield International Documentary Festival on 11 June, and is on BBC4 on 17 June at 9pm
At the weekend Big Smoke met up with Caroline Allen and Emma Dixon, two Green Party members and users of Hackney’s Clissold Cafe, to discuss what kind of cafe the community really needs. Clissold Park on a Saturday is dog utopia. Full to the brim with dogs charging, full tilt, in all directions, weaving in and out of the joggers and families building snowmen. It’s a genuinely joyous place to be. If the pleasure of open air and freedom is not enough there is even a small animal sanctuary with goats, exotic birds, deer and the like to peer at. With such delights it’s no wonder that Clissold Park has become a hub for the community, where many local residents are happy to take time to enjoy it, walk the dog and visit with the children. That means there is a weight of expectation on the Clissold House Cafe (that Ian Visits has recently admired) to be able to serve the community that it’s in the heart of. This partly explains why Hackney Council recently renovated the House, which was in desperate need of repair, and have now re-opened the cafe. However, all has not been well with the private contractors that the Council called in to run the cafe, once a cheap and cheerful venue that served the community. While headlines like Class War Over Couscous may be a touch overblown, if fun, as we’ve mentioned before there are real issues here. Even from the beginning, when organising the bidding for the contracts, the Council, by imposing a £1 million per annum turnover minimum, had ensured only particular kinds of business could apply. Indeed, while the council has some excellent “growing communities” schemes that promote local food projects, this seemed like a missed opportunity to create a cafe that drew on the local community rather than coming from the outside without really understanding its needs. Many have complained about the prices, but the first thing I noticed about the menu when I went in was that it had the kind of food designed to annoy many local residents. While most of us have a reasonable couscous/cumin/organic threshold, items that consist of ten such terms, rounded off with a “bit too much” price tag, do more to keep people out than draw them in. For example, while we were there we didn’t see a single ethnic minority user, and this in the middle of Hackney! That alone indicates something is not quite right. Caroline Allen (right) told me that the council could have used that contracting process to feed into those local growing projects, giving extra support to the Growing Estates initiative. That process could also have been used to promote local business and ensure that local black and Turkish communities, for example, felt welcome. She said that “This would not be such a problem if there were more places round here that catered for poorer communities, but there aren’t.” Emma Dixon, left, was concerned that too little thought had gone into who should take the contract and what services they should be asked to provide. “It’s not that scientific, if you put the prices up less little old ladies will come in for a cup of tea.” Even as Emma described their first “really disastrous visit here” we were getting a glimpse of how consistently bad the service is. On just this one visit the cafe managed to get our order wrong and tried to charge one of us ten pounds for a cup of tea. I know their prices are high, but that was taking the mickey. Emma continued that “I wrote them a polite email and got a perfectly nice reply saying that this was still a work in progress. 
Well, this place looks beautiful but that just doesn’t wash.” Emma told me that “We’ve had various contractors over the years. What we need is standard cafe fare, sandwiches, jacket potatoes, cheap and cheerful lines. During the refurbishment we had this van outside which was utterly brilliant. Proper nice sausages, gorgeous chicken wraps, all reasonably priced even though it did have the fancy things that some people want round here too.” She continued that the “Council has failed in its duties to consider the equality implications of their actions. The elderly can’t afford to come here now, it’s all white, middle class people.” Caroline Allen commented that “the council say they want to get rid of poverty in Hackney. All they are really doing is excluding people and then saying ‘It’s great! It’s regenerating!’ We need to get people involved.” She concluded that “Even if it’s too late to change this contract it should be a lesson for Hackney Council on other contracts that they award that they need to accommodate the needs of the local community, and the impact on equality”. It does seem like a wasted opportunity that, in an area that lacks good, affordable places to go, the council have given the contract to suppliers who have clearly bitten off more than they can chew. With consistently unreliable service and over-priced food that only one section of the area might conceivably want to eat, the Clissold Park Cafe is, well, a little disappointing. Follow the #toastofhackney tag on twitter for more discussion.
Ecological Literacy to Build Harmony: A Critical Study on Environmental Poems Cities have recently developed at a rapid pace, accompanied by advances in digital technology. Buildings, roads, housing, business centers, industries, and other public facilities are built to meet human needs and to foster the development of the city's economy, and all of them are equipped with advanced technological facilities. In the name of modernity, a city is established, yet all of this comes at the expense of the environment. Even the decision makers, as well as the developers, are not aware that they are destroying nature. Progress should be accompanied by ecologically literate policies. This study focuses on how to educate people to realize that the ecological environment needs to be taken care of so that the next generation can keep the harmony of life between people and their environment. Such ecological literature has been produced by many environmental poets. Using an ecocriticism approach supported by David W. Orr's concept of ecological literacy, the issue of ecological neglect depicted in the poems is discussed. The study finds that a harmonious life will be reached if society has an ethical ecological literacy. Keywords: digital technological advances, ecological literacy, environment, harmony. I. INTRODUCTION The city is a center of economic activity that attracts both rural and peripheral communities. The development of the city is very rapid, with advances in all fields of life, and opportunities for work and creativity in the city are broad. With technological advances and modernization, people living in cities continually adapt to their social environment. Those who come to the city carry the values and norms of their places of origin, so a process of adaptation and harmonization takes place between them. The city is an area with special characteristics that distinguish it from the village, such as the concentration of population, the center of government, and supporting facilities and infrastructure for human activities that are relatively more complete than in the village. In general, a city is a place where its residents live and work; it is a place of economic activity, of government, and of other fields. In order to sustain life, urban communities adjust and adopt a set of behaviors to reduce their impacts. Urban problems such as pollution (air, water, soil, noise, light), slums, garbage, and others show, in the view of environmental ethics, that humans have exploited and drained nature to fulfill their needs without taking care of it. In the current era of digital technology, the mindset of society has moved beyond postmodernism, whereas in the previous era (modernism) the development of technology was already sophisticated. When linked to the problems of modernity and environmental harmonization, these two terms need to be negotiated. It is not entirely wrong to say that urban society is less concerned about the environment, but now, because the village has also been affected by modernization, the attitude of rural communities is not much different. Behavior that is not friendly to the environment has a detrimental effect on the community's own life. For example, villagers, influenced by advertising, like to eat fast food and drink packaged beverages; they also use plastic materials for household appliances, and so on. The impact may not yet be felt, but in the long term the consequences for health, social behavior, and so on will be seen. 
The value of environmental ethics seems to be neglected. In this case, modernization and urbanization also contribute very significantly to the community's behavior, especially towards the ecological environment. This study will discuss the behavior of urban communities living in the modern era who need to get ecological literacy education in order to sustain the harmony of lives with nature. The object of discussion is environmental poetry written by young generation poets who are very concerned with the ecological situation of the environment. They are Cecilia Parkin ("Urbanization"), Gordon J.L. Ramel ("Wetlands"), Ron Cleave ("Little Blue Top"), Sri Wuryaningsih ("Tangisan Bumi"), Sapardi Djoko Damono ("Hujan Bulan Juni"). This study will also negotiate the values of environmental ethics, namely the extent to which urban society in living a modern life in the city should practice those values. The expectation is that the life of the urban community will be harmonious because there are changes in people's behavior in treating their environment after reading the poems or results of this study. Romero et al., in his research on ethnic acculturation of his new environment in Canada, found that when there was a desire to adopt behavior in a sustainable manner, a lack of norms, regulations, and infrastructure could influence their attitude. Community institutions play a very important role in this matter. Therefore, support is not only in the cultural and legal aspects but also the physical condition of the infrastructure, as stated by Grunwald that the supporters of eco-modernism agree that there are rules and meeting places to discuss and monitor their behavior itself, including urban communities, which in the end they will apply environmentally friendly technology. The study of negotiating environmental ethics values in literary works is relatively limited. Even if found, what negotiated is about human dignity in contemporary short stories. While other negotiations are about environmental limitations in fiction. The study above is similarly about literary and environmental work, about how to see people's behavior and the influence after reading environmental literary works and literary study reports. Since the main focus of this discussion is on ecoliteracy, the writer uses an ecocriticism approach supported by David W. Orr's concept of ecological literacy, and Charles Birch's environmental ethics. The environmental crisis faced by modern humans is a direct result of "non-ethical" environmental management. That is, humans, manage natural resources almost without taking care of the roles of ethics. Thus, it can be said that the ecological crisis faced by humanity is rooted in an ethical or moral crisis. Human beings are less concerned with the norms of life or replacing the norms supposed to be with the orders of creation and its own interests. Modern man faces nature almost without using conscience. People exploited and polluted environment without feeling guilty. As a result, there is a drastic decline in the quality of natural resources such as the disappearance of some species from the earth followed with a decline in the quality of nature. That is why, educating people to appreciate and sustain the ecological environment is extremely urgent. II. THEORETICAL FRAMEWORK The criticism of ecological literature or ecocriticism was first defined by William Rueckert, the inventor of ecocriticism, namely the use of ecological concepts into literary works. 
This definition, according to Glotfelty, is too narrow because it is related only to ecology, so he offers a broader one, namely the study of the relationship between literary works and the physical environment (in Griffith, 2014: 219). Literature and the environment are like humans living on earth: literature needs the environment, meaning that literature has an ecosystem. To gauge humans' awareness of the environment, David W. Orr proposes six points as the concept of ecological literacy, used to identify whether humans are sensitive towards ecology or not. They are:
1. Complete knowledge of the environmental issue
2. Empathy towards the environment
3. Knowledge in acting
4. Environmental responsibility towards trusts, values, and attitudes
5. Willingness to involve ourselves
6. Being active in finding solutions to environmental problems
This concept is the main framework for the discussion of the poems that are the objects of this study. With it, human behavior in the poems can be measured, that is, whether the speaker in a poem is a lover of the environment or not. Besides the concept of ecoliteracy, negotiating among ecological literature, environmental ethics, and human behavior towards the environment requires an integrated concept, so Charles Birch's theory will support the analysis of the issue. Birch's environmental ethics will also be applied to support the discussion. Ethics is critical and fundamental thought about the teaching of moral views. Charles Birch, an ecologist and environmental ethics thinker, holds that environmental ethics is to be understood as a critical reflection on moral norms or values in the human community, applied more widely to biotic and ecological communities. Environmental ethics is a guide to practical human behavior that tries to manifest morals and to control the use of nature so that it remains within the limits of environmental sustainability. A. The Portrayal of Ecological Neglect in the Poems The five poems that are the objects of this study tell how humans neglect nature and ecology to the point of damage. In the poem "Urbanization", ecological neglect is expressed plainly, for deforestation is treated as the best way to develop a city. The second stanza illustrates the operation of saws consuming the wood of the forest: the process of destroying ecology begins with deforestation, proven by the saws cutting down the trees. In stanzas III, IV, and V, the ecological neglect is clear, because after the forest is cleared, the wood is carried away to build office blocks (stanzas III and IV). Then a town is built and the roads are paved; this is characteristic of a city undergoing urbanization. As a result, there is no birdsong because trees are scarce; what remains is the glare of lights and the sound of vehicles (stanza V). So this poem is in fact a form of the poet's protest against deforestation carried out to develop smart urban regions. The poem "Little Blue Top" tells of the destruction of the earth through the range of human enterprise. In the third stanza, man leaves his mark wherever he goes, on land, sea, and air, and finally brings mass extinction. This can be interpreted to mean that human activities in various fields and areas bring disaster to the ecological environment. Some people are apathetic; some respond with despair (stanza IV: 2-3) because they think that the damage to nature is not entirely caused by humans; astronomy also plays a role (stanza V). This attitude is repeated in the eighth stanza. 
The poet's anger peaks in stanza nine: the greed of the human ego, ruling and governing according to its own desires, cannot be stopped. In the poem "Wetlands", the area in question is water-saturated land that supports the growth of aquatic plants, such as cattail, bulrush, umbrella plant, and canna, along with other living biota (Metcalf and Eddy, 1991 in Safitriani, 2014). The poet likens this area to a paradise for the biota living in the region, flora and fauna from the smallest to the largest. They live and breed there comfortably and freely, which the poet describes as a special grace; this can be interpreted to mean that the region is a gift of the Creator. Dozens of species of fauna are mentioned in the poem: ducks, bees, turtles, butterflies, small mammals, and frogs, while the flora includes trees, flowering plants, water lilies, and so on. Watching it all, the poet cannot bear to think of the area one day becoming dry and damaged. And in fact the area did dry out, because a large project was carried out there. In the poem "Tangisan Bumi" (Earth's Crying), the poet criticizes air and noise pollution caused by machinery. The word "crying" juxtaposed with the word "earth" is an expression of deep sadness over human behavior towards the earth. Through this poem, ecological criticism is expressed in relation to human activities that damage the environment. The loud engine heard in line 1 of stanza II, Aku terjaga oleh gemuruhnya suara mesin (I am woken by the roar of the engine), is unwanted because it disturbs the surrounding environment, and the speaker cannot bear to see the earth's face covered with soot (stanza III). In the poem "Hujan Bulan Juni" (Rain in June), meanwhile, the poet explores the relationship between humans and the ecological environment in order to foster human concern for sustaining and preserving it. In this poem there is no expression of ecological neglect. The rain in June is eagerly awaited by plants large and small, and even by humans. However, in the seasonal cycle of a country with two seasons, June falls in the dry season, and even if rain comes, it rarely falls. With this poem, then, the poet may want to convey the message of making the most of something that is very valuable precisely because of its scarcity; one must respond wisely to the unexpected, even when it is one's own problem. B. The Ecological Literacy in the Poems Literacy education, especially about the environment, is still needed by the community in the digital era. Humans often treat the environment inappropriately not because they do not care, but because they may not know better; for such communities, education will be beneficial. At the very least they are made aware that they live side by side with nature and should become acquainted with it. Ecological literacy educators must not tire of this situation; they must remain consistent in providing this literacy, to anyone, anywhere, in accordance with their respective fields. The environmental crisis must be prevented starting now. Because this study is in the literary field, ecological literacy works through, and is disseminated to the public by, literary works. In the poem "Urbanization", the speaker says that everyone is speechless witnessing the deforestation: Yellow hatted men toil away / Their saws screeching, / Nothing to say! (stanza II) In this stanza, Orr's concept of 'complete knowledge of the issue' (point 1) is expressed by the poet. 
In the modern era, everyone dreams of living comfortably in a city that provides good services to its community, where all needs, from the basic to the luxurious, can be met. Modern life requires everything to be easily accessible because, in this era, people work hard all day at their professions to meet the demands of the age. This makes humans compete with one another. Knowledge has developed rapidly, education has advanced, and technology has become increasingly sophisticated; humans simply fill this era according to their competence. Therefore, they can do nothing when they witness projects that damage ecology but are, at bottom, aimed at the welfare of human life itself. As expressed in the last stanza, developing a modern smart city means sacrificing ecology. This relates to concepts 1, 2, 3, and 4, which are the points of the discussion.
No birdsong in this blighted place
No swaying trees, no flowers to grace.
Neon lights - commuter rage
The price we pay for progress sake! (Parkin's "Urbanization": V)
As a result of deforestation, city life arrives with all its trappings. Urbanization occurs as the city draws suburban dwellers and villagers to settle in the area; as a consequence, ecology is neglected. The main natural resources for humans are soil, water, and air. Soil/land is where humans carry out their various activities; water is needed by humans as the largest component of the human body; and air is the natural source of oxygen for humans to breathe. A healthy environment will be present only if humans and the environment are in good harmony. Ramel, in the first stanza of his poem "Wetlands", states the function of the wetland in sustaining living creatures as well as an ecosystem. He also hopes that people who have problems with the environment will ultimately appreciate the importance of respecting nature (the last stanza). This is ecoliteracy; connected with Orr's concept, all six points are covered. In "Little Blue Top", meanwhile, Ron Cleave invites readers to plant trees and save energy (stanza X) and, in the last stanza, to use time well, as expressed in the following lines:
time's a waste'n, for spinning tops
before this top wobbles and stops.
little blue top spinning in space
it needs some help with humility and grace. (Cleave's "Little Blue Top": XIV)
David Orr's concept of making society literate is reflected in the message conveyed carefully by the poets. Readers are required to understand natural signs; ecology contains a mystery that can only be understood through an accurate mind's eye. And in the last line above, sincerity will bring blessings (points 2, 4, and 5). Sapardi Djoko Damono's "Hujan Bulan Juni" (Rain in June) is even philosophical in its appreciation of nature. Readers are invited to enjoy the poem while learning to be wise people who know how to respect nature and its cycles. According to Howarth, humans should recognize that, in ecology, life may speak and provide information through signs (in Coupe, 2008: 163). Readers are also required to be patient and careful in conveying feelings, with no need to be emotional. Not everyone can easily practice such literacy; it requires emotional maturity and the ability to think clearly. June falls in the dry season for a country with two seasons, so if there is rain, it is an incomparable blessing. Damono, in this case, has practiced Orr's concept of ecological literacy in full.
As described above, ecological damage is not caused by human beings alone; nature itself also plays a role. This is reasonable, since the earth is getting older, and it would be wiser for humans to treat nature intelligently. Each poem contains ecological literacy, although the poems were written over a span of 28 years. All the poets want environmental ecology to remain sustained, and all invite readers to be wise and smart in treating nature. A moral obligation is also shown by Ron Cleave in "Little Blue Top" (stanza IX): even though humans have intelligence, they are unable to see everything related to their ecological environment. What is more, nature is like a woman who always bears every burden (points 1, 2, and 4). This is related to the term "motherhood environmentalism" initiated by Catriona Sandilands, who understands the role of women not only as those who give birth to children and protect their families, but also as persons who are acutely aware of the ecological situation and its damage (in Buell, Thornber, and Heise, 2011: 425).
C. The Implementation of Environmental Ethics in the Poems
Human behavior in this modern era is not entirely incorrect or wrong in its treatment of ecology. However, in the five poems under study, because the topic is nature or ecology, the poets criticize and complain about their experiences of human treatment of nature. Birch and Cobb state, "... humans are subjects in a wider community, and there exists a continuity between all levels of existence." From this quotation, it can be assumed that humans are at the highest level among living things in the world; yet, as stated earlier, humans have a moral obligation towards fellow beings, namely maintaining ecological sustainability. In response to this view, humans in the digital era have increasingly understood how to treat nature, yet it is the demand of the age that makes them act arbitrarily against it. This is a contradiction occurring at the same time: on the one hand, humans have received a good education about how to treat nature; on the other hand, they are required to meet the needs of an era that does not adjust itself to nature. Today, the most recent technology has tried to adjust to ecology. Products labeled "eco" greet modern humans: refrigerators, air conditioners, cars, kitchen and other household appliances, architecture, parks, hotels, tourism, campuses, factory buildings, waste disposal mechanisms, and so on. There has been an effort, in the digital age, to meet almost all human needs with environmentally-friendly technology. Likewise in literary works, in this case poetry: humans, literature, and the environment, according to Bennet, are required to adapt to one another. The process is ecological and ultimately forms a cultural ecology; therefore, a process of harmonization occurs. Environmental ethics, meanwhile, is used to determine the extent of the relationship among living things in the poems.
IV. CONCLUSION
Based on the discussion above, it can be concluded that ecological neglect as portrayed in the poems actually exists in real life. However, modern living leads people to lack the willingness to live together with nature and to take their environment for granted. Modern living also demands practicality and convenience, requiring people to use time as efficiently as possible for productive activities.
Therefore, they neglect to consume energy-efficient products for which ecology should be a consideration. According to the analysis, not all concepts of eco-literacy are applied in each poem, because in a literary work the authors are limited to writing what they think and feel. They can only express their feelings and comment on what they have witnessed. Additionally, the poets are only able to suggest and recommend solutions to ecological problems. The negotiation of environmental ethics does not work effectively, since the mistakes do not come entirely from humans; natural fate plays a role as well. So, in this digital era, urban people face a dilemma: on the one hand, they have to live side by side with nature; on the other hand, they must meet the demands of the era, which inevitably sacrifice nature. The most neutral negotiation is to use environmentally-friendly technology.
ACKNOWLEDGMENT
The writer sends grateful thanks to the Almighty God for giving guidance and competence in doing this study. Gratitude is addressed to:
a. The Head of the English Department, Faculty of Humanities, Universitas Airlangga, for providing the chance and facilities to attend the conference and complete this article;
b. The Coordinator of Literary Studies for discussing the issue;
c. The colleagues in the English Language and Literature Program for inspiring the ecological literacy knowledge that became this study;
d. The 7th ELTLT International Conference committee for providing everything, including the template for article writing.
Last but not least, this article is far from perfect. That is why the writer welcomes criticism to improve it. Thank you.
// Phone number split into a country code and local part, plus the full combined string.
export type Phone = {
  countryCode: string;
  phone: string;
  fullPhone: string;
};
ANALYSIS OF THERMODYNAMIC PROPERTIES OF UPt3 BY MEANS OF GRÜNEISEN RELATIONS
The intermetallic compound UPt3 belongs to the small group of heavy-fermion superconductors. Amongst the UX3 compounds, with X a 4d- or 5d-metal or an element from group III or IV, UPt3 is the only compound that crystallizes in the hexagonal MgCd3-type of structure. Due to the absence of a non-magnetic analog system of UPt3, the analysis of its properties is hampered. Usually, the analysis of specific heat or resistivity data, for instance, starts with eliminating the contribution connected with phonons by subtracting the data as measured on the non-magnetic analog system. For UPt3, however, the phonon contribution to the specific heat can only be determined by detailed measurements of phonon dispersion curves. This has been done by Renker et al. by means of inelastic neutron-scattering experiments. Felten used this information to determine the part of the specific heat that has an electronic origin and proposed two different electronic contributions in the low-temperature region. The first one, which leads to the high γ-value characteristic of heavy fermions, is associated with the Kondo effect. Within a single-ion S = 1/2 Kondo model one deduces T_K = 13.5 K from the γ-value. The second one, with a peak at 23 K, is of the Schottky type, connected with electronic levels that are separated by roughly 50 K. In the present paper we follow an entirely different line in analyzing the specific heat, by means of Grüneisen relations. Physically meaningful Grüneisen relations emerge when a part of the entropy can be written as S_i = S_i(T/T_i(V)), where T_i(V) is a (volume-dependent) characteristic temperature. The dimensionless Grüneisen parameter is defined as Γ_i = −∂ln T_i/∂ln V.
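As a reminder of how such a volume-dependent characteristic temperature generates a Grüneisen relation, the standard textbook form is sketched below; the notation (molar volume V_m, isothermal compressibility κ_T, thermal-expansion contribution α_i) is introduced here for illustration and need not match the paper's.

% Standard single-contribution Grüneisen relations for an entropy term
% S_i = S_i(T / T_i(V)); textbook form, not necessarily the paper's notation.
\begin{align}
  \Gamma_i &= -\frac{\partial \ln T_i}{\partial \ln V}, &
  \alpha_i &= \frac{\kappa_T}{V_m}\,\Gamma_i\,C_i ,
\end{align}
% where C_i is the contribution of S_i to the molar specific heat, \alpha_i its
% contribution to the volume thermal-expansion coefficient, \kappa_T the
% isothermal compressibility, and V_m the molar volume.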
Effects of Mass and Damping on Flow-Induced Vibration of a Cylinder Interacting with the Wake of Another Cylinder at High Reduced Velocities
Flow-induced vibration is a canonical issue in various engineering fields, leading to fatigue or immediate damage to structures. This paper numerically investigates flow-induced vibrations of a cylinder interacting with the wake of another cylinder at a Reynolds number Re = 150. It sheds light on the effects of mass ratio m*, damping ratio ζ, and mass-damping ratio m*ζ on the vibration amplitude ratio A/D at different reduced velocities Ur and cylinder spacing ratios L/D = 1.5 and 3.0. A couple of interesting observations are made. The m* has a greater influence on A/D than ζ, although both m* and ζ cause reductions in A/D. The m* effect on A/D is strong for m* = 2–16 but weak for m* > 16. As opposed to the single isolated cylinder case, the mass-damping m*ζ is not found to be a unique parameter for a cylinder oscillating in a wake. The vortices in the wake decay rapidly at small ζ. Alternate reattachment of the gap shear layers on the wake cylinder fuels the vibration of the wake cylinder for L/D = 1.5, while the impingement and switch of the gap vortices do the same for L/D = 3.0.
Introduction
Flow over cylindrical structures is ubiquitous in engineering fields such as naval engineering (submarines, ship propellers), offshore engineering (semisubmersibles, spar, gravity platforms, and jackets), renewable energy engineering (offshore wind and tidal turbines), nuclear engineering (reactors, cooling towers, chimneys), civil engineering (skyscrapers and cables of suspension bridges), electrical engineering (power lines), etc. These cylindrical structures undergo undesirable flow-induced vibrations because of fluctuating forces induced by flow separation and alternate vortex shedding. When more than one cylinder is in a group, the fluid force acting on a cylinder in the group is different from that on a single isolated cylinder. The difference arises from the mutual fluid-structure interactions between the cylinders. The flow over a cylinder placed in the wake of another cylinder is considered as the baseline to study fluid-structure interactions between the cylinders in a group. In this case, the wake cylinder (downstream cylinder) receives a strong interaction from the wake-generating cylinder (upstream cylinder). Flow-induced vibrations commonly include instability-induced excitation (vortex-induced vibration) and movement-induced excitation (galloping). Vortex-induced vibration (VIV) usually occurs over a limited range of reduced velocity, hence is self-limiting, and is a kind of resonance, which occurs when the shedding frequency equals the oscillation frequency of the structure. On the other hand, galloping is a movement-induced excitation where the vibration amplitude grows with increasing reduced velocity. The galloping vibration is generated when the shedding frequency is greater or smaller than the oscillation frequency of the structure. Flow-induced vibration is a function of mass ratio m*, damping ratio ζ, natural frequency f_n, Reynolds number Re, and reduced velocity Ur. Compared to a single cylinder, the upstream cylinder had lock-in at smaller Ur, with the vortex shedding frequency away from the natural frequency of the cylinder. Hysteresis was observed in the responses of both cylinders. The review of the literature suggests that numerical investigations at low Re are scarce for the case where the wake cylinder is free to oscillate but the wake-generating cylinder is fixed. There are a few key issues to be resolved.
For example, what are the effects of m* and ζ on VIV responses and wake topology for the wake cylinder interacting with a wake generated by another cylinder? Are the effects dependent on L/D? Is the mass-damping ratio m*ζ a unique parameter to characterize the vibration? The objective of this work is to investigate the effects of m* (= 2–32), ζ (= 0–0.5), m*ζ (= 0–8), and L/D (= 1.5 and 3) on the flow-induced vibration of a cylinder submerged in the wake of another. Numerical simulations are conducted at Re = 150 for the parametric ranges mentioned above. The Ur is varied from 2.5 to 30. A low-Re flow provides a better understanding of flow physics, and one can clearly observe the development of shear layers, the formation of vortices, the evolution of vortices, etc. Although the major flow phenomena at both low and high Re are the same (e.g., vortex shedding, lock-in, vibration nature), the quantitative magnitudes of forces, Strouhal number, lock-in range, maximum vibration amplitude, etc. differ between the low- and high-Re flows. The results from a low-Re flow can, thus, not be extrapolated to a high-Re flow.
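For orientation, the sketch below collects the standard definitions of the non-dimensional groups that the study varies; the m* definition matches the one given later in the paper, but the helper itself and its argument names are illustrative additions, not part of the original work.

# Illustrative helper (not from the paper): standard definitions of the
# non-dimensional groups varied in the study.
import math

def viv_parameters(m, c, k, rho, nu, U, D):
    """Return a dict of non-dimensional parameters for a spring-mounted cylinder.

    m   : cylinder mass per unit length       c   : structural damping coefficient
    k   : spring stiffness per unit length    rho : fluid density
    nu  : kinematic viscosity                 U   : free-stream velocity
    D   : cylinder diameter
    """
    fn = math.sqrt(k / m) / (2.0 * math.pi)        # structural natural frequency
    zeta = c / (2.0 * math.sqrt(k * m))            # damping ratio
    m_star = 4.0 * m / (math.pi * rho * D ** 2)    # mass ratio
    return {
        "m*": m_star,
        "zeta": zeta,
        "Ur": U / (fn * D),                        # reduced velocity
        "Re": U * D / nu,                          # Reynolds number
        "m*zeta": m_star * zeta,                   # mass-damping ratio
        "Fn": fn * D / U,                          # non-dimensional natural frequency
    }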
Computational Domain
The flow is given in a rectangular computational domain where two circular cylinders are arranged in tandem at the horizontal centerline of the domain (Figure 1). The wake-generating cylinder is fixed, and the wake cylinder is spring-mounted. The latter cylinder is allowed to oscillate in the transverse direction only. The total spring stiffness is k. The Cartesian coordinate system (x-y) has its origin at the center of the wake-generating cylinder. The inlet and outlet boundaries of the computational domain are placed at x = −15D and L + 45D, respectively, where L is the distance between the centers of the cylinders, and D is the diameter. The upper and lower boundaries are symmetrically separated by 30D from each other. The corresponding cylinder blockage ratio is 3.3%.
Governing Equations and Numerical Technique
The continuity and Navier-Stokes equations are the governing equations that can be written as
∂u*/∂x* + ∂v*/∂y* = 0,
∂u*/∂t* + u* ∂u*/∂x* + v* ∂u*/∂y* = −∂p*/∂x* + (1/Re)(∂²u*/∂x*² + ∂²u*/∂y*²),
∂v*/∂t* + u* ∂v*/∂x* + v* ∂v*/∂y* = −∂p*/∂y* + (1/Re)(∂²v*/∂x*² + ∂²v*/∂y*²).
The parameters in the equations are non-dimensional and have the following expressions: u* = u/U, v* = v/U, p* = p/(ρU²), x* = x/D, y* = y/D, t* = tU/D, and Re = ρUD/μ. Here, u and v are the streamwise and transverse components of the flow velocity, respectively, p is the static pressure, μ is the fluid viscosity, ρ is the fluid density, U is the freestream velocity, and t is the time. The cylinders are surrounded by an O-xy grid system while a rectangular grid system is provided away from the cylinders, as shown in Figure 2. The meshes around the cylinders are given a greater density. We set the first grid level 0.009D away from the cylinder surface, with a mesh expansion ratio of less than 1.1. The simulation uses a dynamic mesh scheme where the ANSYS-Fluent 15 solver is used to move boundaries and/or objects and to adjust the mesh accordingly. The grid box around the cylinder moves with the cylinder displacement. A user-defined function is fed into the solver to estimate the cylinder displacement. At each time step, the domain deformation is handled by the dynamic meshing tool in ANSYS-Fluent 15, and the mesh is updated using the Laplace smoothing method. The boundary conditions are given as u* = 0 and v* = 0 at the surfaces of the upstream and downstream cylinders, u* = 1 and v* = 0 at the inlet, ∂u*/∂y* = 0 and v* = 0 at the lateral sides, and ∂u*/∂x* = 0 and ∂v*/∂x* = 0 at the outlet.
In the governing equations, p*, u*, and v* are the unknown parameters, solved by coupling the governing equations. They are solved for unsteady, incompressible flow with constant fluid properties. The computations are conducted using the finite volume method. The convective components are discretized using the second-order upwind scheme. The coupling between the velocity and pressure fields is done by the pressure-correction-based iterative algorithm SIMPLE (semi-implicit method for pressure-linked equations) proposed by Patankar. We used a first-order implicit formulation for the time discretization. The governing equation of the cylinder motion in dimensionless form can be written as
Ÿ + 4πζF_n Ẏ + (2πF_n)² Y = 2C_Li/(π m*),
where Y is the cylinder displacement measured from y = 0, Ẏ is the cylinder velocity, and Ÿ is the cylinder acceleration. The C_Li is the instantaneous lift coefficient of the cylinder. The F_n = f_n D/U, where f_n is the natural frequency of the cylinder. The m* (= 4m/(πρD²)) is the cylinder mass ratio, where m stands for the mass of the cylinder. We solved this equation using the fourth-order Runge-Kutta method at every time step. The Re = 150 for all simulations, including validation. The flow around a single cylinder transits from two- to three-dimensional at Re ≈ 190. When two cylinders are placed in tandem, the transition Re (Re_cr2) from two-dimensional to three-dimensional flow is delayed for L/D ≤ 3.0; see Figure 3, reproduced from Rastan and Alam. The flow around two tandem cylinders at L/D = 1.5 and 3.0 examined here is thus assumed to be two-dimensional. For the definitions of the legends in Figure 3, please refer to Rastan and Alam. Re_cr1 represents the flow transition from steady to two-dimensional unsteady flow, while Re_cr2 indicates the flow transition from two-dimensional unsteady to three-dimensional unsteady flow.
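A minimal sketch of the structural side of the coupling is given below: one fourth-order Runge-Kutta step of the dimensionless motion equation, with the lift coefficient taken from the flow solver and held fixed over the step. This is only an illustration of the scheme named in the text, not the authors' ANSYS-Fluent user-defined function, and the right-hand-side coefficient follows the equation as reconstructed above.

# Sketch (not the authors' UDF): one RK4 step of
#   Ÿ + 4πζFn·Ẏ + (2πFn)²·Y = 2·CL/(π·m*)
# with CL supplied by the flow solver and frozen over the step.
import math

def rk4_step(Y, Ydot, CL, dt, m_star, zeta, Fn):
    """Advance the non-dimensional displacement/velocity pair by one step dt."""
    def accel(y, ydot):
        return (2.0 * CL / (math.pi * m_star)
                - 4.0 * math.pi * zeta * Fn * ydot
                - (2.0 * math.pi * Fn) ** 2 * y)

    k1y, k1v = Ydot, accel(Y, Ydot)
    k2y, k2v = Ydot + 0.5 * dt * k1v, accel(Y + 0.5 * dt * k1y, Ydot + 0.5 * dt * k1v)
    k3y, k3v = Ydot + 0.5 * dt * k2v, accel(Y + 0.5 * dt * k2y, Ydot + 0.5 * dt * k2v)
    k4y, k4v = Ydot + dt * k3v, accel(Y + dt * k3y, Ydot + dt * k3v)

    Y_new = Y + dt / 6.0 * (k1y + 2 * k2y + 2 * k3y + k4y)
    Ydot_new = Ydot + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return Y_new, Ydot_new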
Validation
A mesh independence test was conducted based on the mesh systems adopted in Zafar and Alam and Abdelhamid et al. Three different meshes (M1, M2, and M3, consisting of 76,350, 80,050, and 9400 elements, respectively) are tested for a single fixed cylinder at Re = 150. The results for global parameters, including the time-mean drag coefficient C_D, fluctuating lift coefficient C_L, and Strouhal number St for the three different meshes, are presented in Table 1 and compared with those in the literature. The results for the three mesh systems are essentially converged, as these mesh systems were decided based on the mesh systems in those studies. The maximum deviation in C_D is found to be less than 2.5%. The present results overall show quite good agreement with those from the literature. The mesh M2 system, with the same grid distributions around the two cylinders, was, however, used for the computations of two tandem cylinders when the wake cylinder is free to oscillate, and the results are validated again in Figure 4. To compare our results with the numerical results of Carmo et al., we first simulated vibration responses for m* = 2, ζ = 0.003, and L/D = 3, which are the same geometrical and physical conditions used by Carmo et al. A comparison between the present and Carmo et al.'s results is made in Figure 4, where the vibration amplitude ratio A/D is presented against Ur. Here, A represents the cylinder vibration amplitude obtained from the displacement signal Y. First, the root-mean-square (rms) value of Y (Y_rms) is obtained, and then A is calculated as A = Y_rms√2. The present results agree well with those by Carmo et al., with a maximum deviation of 5% occurring at Ur = 10.
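The amplitude extraction described above reduces to a couple of lines; the sketch below assumes a uniformly sampled, statistically steady displacement signal and subtracts the mean before taking the rms (an assumption, since the paper does not state it explicitly).

# Illustration of A/D = sqrt(2) * Y_rms / D from a displacement time series.
import numpy as np

def amplitude_ratio(Y, D):
    """Return A/D, with A = sqrt(2) times the rms of the mean-removed signal."""
    Y = np.asarray(Y, dtype=float)
    Y_rms = np.sqrt(np.mean((Y - Y.mean()) ** 2))
    return np.sqrt(2.0) * Y_rms / D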
Vibration Response at Small m* and ζ
Figure 5 shows the dependence of the vibration amplitude ratio A/D on Ur for m* = 2, ζ = 0.005, and L/D = 3.0. The A/D firstly increases with increasing Ur, reaching a peak at Ur = 7 before declining with a further increase in Ur. The decline is rapid for Ur = 7–15 but mild for Ur > 15. The vorticity structures shown in Figure 5b,c illustrate that each of the wakes at Ur = 7 and 15 is characterized by two rows of opposite-sign vortices. Yet, the two wakes differ in several aspects. Firstly, the lateral separation between the two vortex rows is wider for Ur = 7 than for Ur = 15. Secondly, the streamwise distance between two consecutive vortices is larger for Ur = 15 than for Ur = 7. As the vibration amplitude gently decreases with Ur for Ur > 15 (Figure 5a), there is no galloping. In a water tunnel test, Bokaian and Geoola observed galloping vibration at 1.09 ≤ L/D ≤ 3 in the range of Re = 600–6000 with m* = 0.109. When both cylinders are allowed to vibrate, Kim et al. in a wind tunnel test found the occurrence of galloping vibrations for 1.2 ≤ L/D ≤ 1.6 at Re = 4365–74,200 and m* = 0.64. This suggests that the occurrence of galloping is highly sensitive to L/D, Re, and m*. Interestingly, as seen in Figure 5, the A/D value remains high (A/D > 0.4) even at high Ur (>15). It is worth investigating whether galloping occurs at other m* and ζ values. We will therefore focus on the vibration response at large Ur (>10) only.
Effect of Damping Ratio on Vibration Amplitude
The vibration response is investigated for Ur > 10 when ζ is increased from 0 to 0.5.
Effect of Mass Ratio on Vibration Amplitude
More detail of the effect of m* on A/D is presented in Figure 8 for L/D = 3.0 and 1.5 with ζ = 0.
For both L/D values, A/D decreases with increasing m*, the decrease being large at small Ur. Considering the steps of the increase in m*, the decrease in A/D is largest between m* = 2 and 4. When m* is large enough (i.e., m* = 32 and 16 for L/D = 3.0 and 1.5, respectively), A/D becomes very small and almost independent of Ur. There is a large drop in A/D for m* = 4 from Ur = 10 to 15. As seen, A/D is highly sensitive to m* for m* = 2–8. At m* = 4, it seems that A/D at Ur = 10 and Ur ≥ 15 has similar characteristics to that at m* = 2 and 8, respectively. There might be a transition from high- to low-amplitude vibration between Ur = 10 and 15, which requires further investigation. Figure 9 shows the dependence of A/D on m*, Ur, and ζ for L/D = 1.5. At Ur = 10 (Figure 9a), with increasing m*, the A/D drops exponentially between m* = 2 and 16 and gently for m* > 16. The same figure (Figure 9a) further proves that there is an effect of ζ on A/D, the effect being small between ζ = 0 and 0.05 but strong between ζ = 0.05 and 0.5. At Ur = 15 (Figure 9b), again the effect of m* on A/D is very strong for m* = 2–16 and weak for m* > 16. Here, the effect of ζ on A/D is very small. Figure 9c further describes the relationship between A/D, m*, and Ur. For all Ur values, A/D exponentially declines with increasing m* up to 16. In addition, A/D drops significantly between Ur = 10 and 15 and mildly between Ur = 15 and 30. It can be generalized that A/D is inversely linked to m*, ζ, and Ur.
Effect of Mass-Damping Ratio on Vibration Amplitude
As shown in the previous sections, both m* and ζ cause a reduction in A/D. For a single isolated cylinder, the mass-damping ratio (m*ζ) has been proven to be another characteristic parameter that has a connection with A/D. The A/D is found to decrease monotonically with increasing m*ζ for a single isolated cylinder. To see the relationship between A/D and m*ζ for the cylinder interacting with the wake, A/D is plotted against m*ζ in Figure 10 for Ur = 10 and 15. For these data, ζ varies from 0 to 0.5 for both m* = 2 and 16 (see the legends in the figure). Interestingly, although A/D decreases with increasing m*ζ for a given m*, the decrease rate is contingent on m*, being high at small m* and vice versa. The A/D data fail to collapse onto a single line against m*ζ, which suggests that m*ζ is not a unique parameter to characterize the vibration for a cylinder interacting with the wake of another cylinder.
Effects of L/D, m*, and ζ on Wake Structure
It is worth investigating how the wake structure is modified when L/D, m*, and ζ are changed. Figure 11 shows vorticity structures for L/D = 3.0 and 1.5 for different values of m* and ζ. The first- and second-column snapshots correspond to Y ≈ 0 (i.e., mean position) and Y ≈ Ymax (i.e., maximum displacement), respectively. In the first- and second-row wake structures (Figure 11a,b), the effect of increasing m* from 2 to 16 on the wake structure is displayed. For m* = 2 and ζ = 0 with L/D = 3.0 (Figure 11a), the wake behind the wake cylinder is characterized by the 2S mode (i.e., two vortices shed in one oscillation cycle). The shear layers emanating from the wake-generating cylinder roll up into the gap between the cylinders. When the vibrating cylinder is close to the mean position (Y ≈ 0), the vortices from the wake-generating cylinder impinge on the vibrating cylinder. On the other hand, they both pass over the same side of the vibrating cylinder when the vibrating cylinder is at its maximum displacement (Y ≈ Ymax). When m* is increased to 16 (Figure 11b), the shear layers from the wake-generating cylinder do not roll up in the gap between the cylinders but alternately reattach on the vibrating cylinder. Another remarkable difference in the wake structures between m* = 2 and 16 (Figure 11a,b) is that a greater number of vortices appear within the same downstream distance (x* = 4–26) for m* = 2 than for m* = 16. Given the 2S mode for both cases, it can be argued that the cylinder oscillation frequency reduces when m* is increased from 2 to 16, which is consistent with our intuition. A comparison of snapshots between the first and third rows gives the effect of ζ (= 0 and 0.5) on the wake structure. The arrangement and structure of vortices in the wake do not differ much between ζ = 0 and 0.5. The vortices in the wake, however, decay more rapidly for ζ = 0 than for ζ = 0.5. The effect of L/D can be understood by comparing the vortex shedding between L/D = 3.0 and 1.5 (Figure 11a,d), both for the same m* and ζ. Although the wake structure is the same for both L/D values, the flow structure around the cylinders is completely different in the two cases. While vortex shedding occurs from both cylinders for L/D = 3.0 (Figure 11a), only the wake cylinder sheds vortices for L/D = 1.5 (Figure 11d).
Vortex shedding does not take place in the gap between the cylinders for L/D = 1.5. The alternate reattachment of the shear layer from the wake-generating cylinder sustains the wake cylinder vibration for L/D = 1.5, while the alternate impingement and switch of the gap vortices do the same for L/D = 3.0.
Conclusions
Flow-induced vibrations of a cylinder interacting with the wake of another cylinder are investigated at Re = 150. The focus is on the effect of m* (= 2–32), ζ (= 0–0.5), and m*ζ (= 0–8) on A/D at different Ur and L/D values. While investigations at high Re found galloping vibration for the wake cylinder (e.g., Bokaian and Geoola; Kim et al.), no galloping occurs at this low Re. Following the VIV peak, A/D remains high (A/D > 0.4) even at high Ur (>15). The A/D is more sensitive to m* than to ζ. Both m* and ζ reduce A/D. The effect of ζ on A/D is larger at smaller m*. On the other hand, the m* has a considerable effect on A/D for all ζ values. The effect is, however, strong for m* = 2–16 and relatively weak for m* > 16. Generally, the A/D is inversely linked to m* and ζ. Although m*ζ is believed to be a unique parameter determining the vibration amplitude for a single isolated cylinder, this is not the case for the cylinder interacting with the wake generated by another. Although A/D decreases with m*ζ, the decrease rate is high at small m*. As such, the A/D data fail to collapse onto a single line against m*ζ, which signifies that m*ζ is not a parameter that uniquely characterizes the interaction of a cylinder with the wake of another cylinder. The shear layers of the wake-generating cylinder do roll up in the gap between the cylinders at m* = 2 for ζ = 0, Ur = 10, and L/D = 3. With the same initial conditions (ζ = 0, Ur = 10, and L/D = 3), when m* is increased to 16, the shear layers do not roll up in the gap but alternately reattach on the vibrating cylinder. Vortices in the wake appear more numerous for m* = 2 than for m* = 16, decaying more rapidly at small ζ. While the wake-generating-cylinder shear layers reattaching alternately on the wake cylinder fuel the vibration of the wake cylinder for L/D = 1.5, the gap-vortex impingement and switch do the same for L/D = 3.0. The results are useful to riser designers for understanding the role of m* and ζ in flow-induced vibrations.
Funding: This research was funded by the Khalifa University of Science and Technology through Grant CIRA-2020-057.
Data Availability Statement: The data that support the findings of this study are available within the article.
The antagonistic effect of locally isolated Trichoderma spp. against dry root rot of mungbean
Abstract The current investigation deals with the isolation and identification of Trichoderma spp. and the testing of their antagonistic ability against Macrophomina phaseolina under in vitro and in vivo conditions. The locally isolated biocontrol agents were subjected to morphological and molecular identification, and further research was carried out to test all the biocontrol agents. T. harzianum was found to be highly effective in the dual confrontation test, restricting fungal growth by 67.59% as compared to the control. Various secondary metabolites of Trichoderma spp. were studied with inverted plate and culture filtrate assays. Mycoparasitism and antibiosis were also examined using light and scanning electron microscopy (SEM), which indicated that all the Trichoderma spp. restricted growth by coiling around the host hyphae and through secondary metabolites. In vivo experiments showed that all the treatments significantly reduced disease severity and intensity, with T. harzianum found to be the most effective biocontrol agent (reductions of up to 72.88% and 60.24%, respectively). Overall, T. harzianum could be used as a potent indigenous biocontrol agent against dry root rot, which would also help to reduce its deleterious effect on the host plant.
// HostRetrieveByName retrieves a host by name.
// It returns a nil host and an error if the SOAP call fails or the host is not found.
func (dsm DSM) HostRetrieveByName(hostName string) (*gowsdlservice.HostTransport, error) {
	hrbn := gowsdlservice.HostRetrieveByName{Hostname: hostName, SID: dsm.SessionID}
	resp, err := dsm.SoapClient.HostRetrieveByName(&hrbn)
	// Check the error first so that resp is never dereferenced when the call failed.
	if err != nil {
		return nil, fmt.Errorf("unable to retrieve host %s: %w", hostName, err)
	}
	// An empty Platform field is treated as "host not found".
	if resp.HostRetrieveByNameReturn.Platform == "" {
		return nil, fmt.Errorf("unable to retrieve host %s", hostName)
	}
	return resp.HostRetrieveByNameReturn, nil
}
Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least-squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from x-ray CT projection data.
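For readers unfamiliar with this family of methods, the sketch below shows one generic, unaccelerated block-iterative projection sweep for Ax = b with an optional perturbation added after each block step. The block partition, relaxation parameter, and perturbation callback are arbitrary illustrations; this is not the accelerated, perturbation-resilient algorithm studied in the paper.

# Generic block-iterative projection sweep for A x = b (block-Cimmino style),
# with an optional perturbation applied after each block step. Purely illustrative.
import numpy as np

def block_iterative_sweep(A, b, x, blocks, relaxation=1.0, perturbation=None):
    """Perform one sweep over the row blocks and return the updated iterate."""
    for block in blocks:                          # block: array of row indices
        A_blk, b_blk = A[block], b[block]
        residual = b_blk - A_blk @ x
        row_norms_sq = np.einsum("ij,ij->i", A_blk, A_blk)
        # average of the orthogonal projections of x onto the block's hyperplanes
        update = (A_blk.T @ (residual / row_norms_sq)) / len(block)
        x = x + relaxation * update
        if perturbation is not None:              # e.g. a summable perturbation term
            x = x + perturbation(x)
    return x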