/*____________________
LEDSegs::DefineSegment
Set the properties of the current LED segment. A -1 value indicates that the corresponding property should not be changed.
Return value is the segment index.
*/
short LEDSegs::DefineSegment(short FirstLED, short nLEDs, short Action, uint32_t ForeColor, short Bands) {
//Advance to the next segment slot (start at index 0 if nothing is defined yet)
if (segMaxDefinedIndex < 0) {SetSegmentIndex(0);} else {SetSegmentIndex(segCurrentIndex + 1);}
SetSegment_FirstLED(FirstLED);
SetSegment_NumLEDs(nLEDs);
SetSegment_Action(Action);
SetSegment_ForeColor(ForeColor);
SetSegment_Bands(segCurrentIndex, Bands);
//The remaining properties get defaults; change them with the individual setters
SetSegment_BackColor(RGBOff);
SetSegment_Spacing(0);
SetSegment_Options(segCurrentIndex, 0);
SetSegment_DisplayRoutine(segCurrentIndex, NULL);
segMaxDefinedIndex = max(segMaxDefinedIndex, segCurrentIndex);
return segCurrentIndex;
}
A court in northern Germany has sentenced 94-year-old Oskar Groening, a former SS-Unterscharführer, or junior squad leader, to four years in prison. His charge: 300,000 counts of accessory to murder as the "Bookkeeper of Auschwitz."
Rasmussen, 90, was one of the 6,000 Danish volunteers who joined the SS after Germany invaded the country in 1940. On July 21, Nazi-hunter Efraim Zuroff asked Danish police to investigate Rasmussen for serving as a guard in Belarus’ Bobruisk camp between 1942 and 1943, when 1,400 Jews were killed. Rasmussen, who now goes by the surname Rasboel and lives in Copenhagen, has acknowledged in interviews that he was an SS member and guard who saw Jews “being killed and thrown in mass graves,” but he has denied any involvement in killings. To complicate matters, Rasmussen received some sort of punishment after the war. It is not clear what the specific crime was, but a Danish prosecutor has said that they want to avoid prosecuting him twice for the same offense.
Sommer, 94, lives in a nursing home just north of Hamburg, about two hours' drive from the German border with Denmark. But in 1944, when Sommer was a 22-year-old soldier in the 16th SS Panzer Division, he allegedly helped massacre 560 civilians—including 119 children—in the Tuscan town of Sant'Anna di Stazzema, shooting, beating and burning them to death. Sommer was among 10 former SS officers found guilty in absentia by an Italian court in 2005, but Germany never extradited any of them.
German prosecutors dropped Sommer's case in 2012 for lack of evidence, then reopened it in August 2014, only to have specialists conclude that Sommer was unfit for trial because of severe dementia. Prosecutors predicted that, had Sommer's trial gone through, he would have been "charged with 342 cases of murder, committed cruelly and on base motives."
Stark, 92, a former corporal of the Gebirgsjäger also sentenced in absentia in Italy, was accused of ordering the execution of 117 Italian prisoners of war on the Italian-occupied island of Kefalonia, Greece in 1943—part of the slaughter of nearly 9,500 officers of the Acqui Division that September after the breaking of the Germany-Italy alliance. Despite Stark's indictment by the military court of Rome in 2012 and his subsequent sentencing to life in prison, Germany has refused to extradite him from the country, where he still resides.
Riss, 92, was one of three former Nazis sentenced in 2011 by the military court in Rome to life in prison for the 1944 massacre of 184 civilians in another Tuscan town: Padule di Fucecchio. The massacre was reportedly carried out after two German soldiers were shot by resistance fighters, and documented extensively in statements gathered a year later by Charles Edmonson, a British sergeant looking to ensure that the responsible parties would be brought to justice.
The military court that sentenced Riss also requested that the German government pay 14 million euros in compensation to the just over 30 remaining relatives of the massacre's victims, a gesture Germany refused. Germany declined to extradite Riss, who remains there.
Dailide, 95, is a former Lithuanian soldier who, as a member of Lithuania's Nazi-controlled Security Police, allegedly arrested 12 Jews attempting to escape the Vilna Ghetto in the city of Vilnius in the early 1940s. It is presumed that these Jews were later executed.
Dailide lied about his occupation and immigrated to the U.S. after the war, but was stripped of his citizenship in the 1990s and, in 2004, was deported to Germany. In 2008, the Israeli news outlet Haaretz reported that Dailide was living in the small town of Kirchberg in western Germany with his wife, supported by her German pension.
Dailide was convicted of war crimes by a Vilnius court, but a Lithuanian high court ruled in 2008 that he was in too poor health to be sentenced to time in prison.
Oberlander, 91, a native Ukrainian, served in the eastern occupied territories during WWII as part of Einsatzgruppe D, the infamous Nazi death squad estimated by the Wiesenthal Center to have murdered 23,000 Jewish civilians. He currently resides in Ottawa, Canada, where he immigrated in 1954 and worked for many years as a developer, but for the last 20 years he has been in a legal battle with the federal cabinet over his citizenship. In 2012, Oberlander entered a third round of court rulings as the Canadian government continued its attempts to strip him of his citizenship and order his deportation. In February 2016, Canada’s federal court of appeal sent his case back to the country’s federal cabinet, ordering the government to take another look at the case.
We will never know if Hillary Clinton and Mary Barra might have hit it off, woman to woman, over the GM plant. I can’t imagine any person connected with GM voting for Trump, after Obama and the Democrats saved GM from going under.
I feel so bad for the lady who drove into that yard and hit the little girl just because of that dog. They should ban dogs from cars, or put them in the trunk or in a baby seat. Leave your dog at home. Dogs are unsafe in the car while driving.
Who gave Mariam Fife a life sentence? Cops, lawyers, the high court? Thirty-three years to wait to see justice for a 12-year-old Boy Scout who was murdered and found by his father? That boy felt much pain, but the law doesn’t want that animal to feel pain?
Cleveland fans love Chief Wahoo; Native Americans are offended. Black people like Black Lives Matter; other races may be offended. Southern whites like Confederate flags; black people are offended. Is it freedom of speech or not? Ban all clothing with any emblem or writing so we offend nobody! Be careful what you wish for.
Tim Ryan is running for president? He’s got to be kidding! He says he’s for the middle-class working people? Then let’s get rid of his suit and tie and put on some working coveralls and, at the end of the day, wash out all the good natural dirt that accumulated under his fingernails. Where was he when GM, Copperweld, Republic Steel, etc., closed?
The Democrat primary is highlighting arrogance. What have any of these candidates done in their districts or in society? Most are career politicians who haven’t had a private-sector job since high school. They say anything to get elected, but do nothing that benefits most of their constituency. Tim Ryan has ridden on Traficant’s coattails for 16 years, but has nothing to show for his own reign. His arrogance in thinking he’s presidential material is appalling and disgusting.
Tim Ryan has been fighting for the Mahoning Valley and Trumbull County for years. He fought for TJX, and he is fighting for GM Lordstown with everything he’s got. Now that he is running for president, I will support him every step of the way. How can anyone say he’s done nothing? We need him.
A nuclear function of Hu proteins as neuron-specific alternative RNA processing regulators. Recent advances in genome-wide analysis of alternative splicing indicate that extensive alternative RNA processing is associated with many proteins that play important roles in the nervous system. Although differential splicing and polyadenylation make significant contributions to the complexity of the nervous system, our understanding of the regulatory mechanisms underlying the neuron-specific pathways is very limited. Mammalian neuron-specific embryonic lethal abnormal visual-like Hu proteins (HuB, HuC, and HuD) are a family of RNA-binding proteins implicated in neuronal differentiation and maintenance. It has been established that Hu proteins increase expression of proteins associated with neuronal function by up-regulating mRNA stability and/or translation in the cytoplasm. We report here a novel function of these proteins as RNA processing regulators in the nucleus. We further elucidate the underlying mechanism of this regulation. We show that in neuron-like cells, Hu proteins block the activity of TIA-1/TIAR, two previously identified, ubiquitously expressed proteins that promote the nonneuronal pathway of calcitonin/calcitonin gene-related peptide (CGRP) pre-mRNA processing. These studies define not only the first neuron-specific regulator of the calcitonin/CGRP system but also the first nuclear function of Hu proteins.
Aircraft Altitude Control Based on CDM
This research presents a controller design for aircraft altitude control based on the Coefficient Diagram Method (CDM). The controller designed by CDM guarantees a good balance of stability, response and robustness. Simulation results of the proposed control system show that the controller controls the aircraft altitude as desired, exhibits disturbance-rejection behavior, and allows the response speed to be adjusted easily by specifying the equivalent time constant. As a result, the controller design process is less complex than other methods while the controller remains effective.
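For context, the quantities CDM manipulates can be stated compactly. This is the standard Manabe formulation, given here for reference rather than taken from this abstract:

% Standard CDM quantities (Manabe's formulation), for reference:
% characteristic polynomial coefficients a_i, stability indices gamma_i,
% and the equivalent time constant tau that sets the response speed.
P(s) = \sum_{i=0}^{n} a_i s^i, \qquad
\gamma_i = \frac{a_i^2}{a_{i+1}\, a_{i-1}} \quad (i = 1, \dots, n-1), \qquad
\tau = \frac{a_1}{a_0}

Manabe's recommended indices (γ₁ = 2.5, γᵢ = 2 for i ≥ 2) are what give the stability/response balance mentioned above, and the settling time scales roughly with τ, which is why the response speed can be tuned through the equivalent time constant.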
/*
* Copyright (C) 2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
* the License.
*/
package vip.justlive.oxygen.core.config;
import vip.justlive.oxygen.core.CoreConfigKeys;
import vip.justlive.oxygen.core.Plugin;
import vip.justlive.oxygen.core.util.base.Strings;
/**
* Configuration plugin
*
* @author wubo
*/
public class ConfigPlugin implements Plugin {
@Override
public void start() {
ConfigFactory.loadProperties("classpath*:config.properties", "classpath*:/config/*.properties");
String overridePath = CoreConfigKeys.CONFIG_OVERRIDE_PATH.getValue();
if (Strings.hasText(overridePath)) {
ConfigFactory.loadProperties(overridePath.split(Strings.COMMA));
}
}
@Override
public void stop() {
ConfigFactory.clear();
}
@Override
public int order() {
// Lowest possible order: load configuration before every other plugin.
return Integer.MIN_VALUE;
}
}
def filter_since_tag(self, all_tags):
    # Resolve the configured since-tag; with no tag (or the repo-creation
    # sentinel) there is nothing to filter, so return a copy of everything.
    tag = self.detect_since_tag()
    if not tag or tag == REPO_CREATED_TAG_NAME:
        return copy.deepcopy(all_tags)
    filtered_tags = []
    tag_names = [t["name"] for t in all_tags]
    try:
        idx = tag_names.index(tag)
    except ValueError:
        # Unknown since-tag: warn and fall back to the full list.
        self.warn_if_tag_not_found(tag, "since-tag")
        return copy.deepcopy(all_tags)
    since_tag = all_tags[idx]
    since_date = self.get_time_of_tag(since_tag)
    # Keep every tag dated at or after the since-tag.
    for t in all_tags:
        tag_date = self.get_time_of_tag(t)
        if since_date <= tag_date:
            filtered_tags.append(t)
    return filtered_tags
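The core of the filter is the date comparison against the since-tag. A standalone sketch of just that comparison (tag names and dates below are made up):

# Standalone sketch of the date-filtering logic above (made-up tags/dates).
from datetime import datetime

all_tags = [
    {"name": "v1.0", "date": datetime(2020, 1, 1)},
    {"name": "v1.1", "date": datetime(2020, 6, 1)},
    {"name": "v2.0", "date": datetime(2021, 1, 1)},
]
since_date = next(t["date"] for t in all_tags if t["name"] == "v1.1")
filtered = [t for t in all_tags if since_date <= t["date"]]
print([t["name"] for t in filtered])  # ['v1.1', 'v2.0']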
Pivotal role of long non-coding ribonucleic acid X-inactive specific transcript in regulating immune checkpoint programmed death ligand 1 through a shared pathway between miR-194-5p and miR-155-5p in hepatocellular carcinoma

BACKGROUND
Anti-programmed death therapy has thrust immunotherapy into the spotlight. However, such therapy has a modest response in hepatocellular carcinoma (HCC). Epigenetic immunomodulation is a promising combinatorial therapy with immune checkpoint blockade. Non-coding ribonucleic acid (ncRNA)-driven regulation is a major mechanism of epigenetic modulation. Given the wide range of ncRNAs that cooperate in programmed cell-death protein 1 (PD-1)/programmed death ligand 1 (PD-L1) regulation, and based on the literature, we hypothesized that miR-155-5p, miR-194-5p and the long non-coding RNAs (lncRNAs) X-inactive specific transcript (XIST) and MALAT-1 are involved in a regulatory upstream pathway for PD-1/PD-L1. Recently, nutraceutical therapeutics in cancers have received increasing attention. Thus, it is interesting to study the impact of oleuropein on the respective study key players.

AIM
To explore potential upstream regulatory ncRNAs for the immune checkpoint PD-1/PD-L1.

METHODS
Bioinformatics tools including microrna.org and lnCeDB software were adopted to detect targeting of miR-155-5p, miR-194-5p and the lncRNAs XIST and MALAT-1 to PD-L1 mRNA, respectively. In addition, the DIANA tool was used to predict targeting of both aforementioned miRNAs to the lncRNAs XIST and MALAT-1. HCC and normal tissue samples were collected for screening of PD-L1, XIST and MALAT-1 expression. To study the interaction among miR-155-5p, miR-194-5p, the lncRNAs XIST and MALAT-1, as well as PD-L1 mRNA, a series of transfections of the Huh-7 cell line was carried out.

RESULTS
Bioinformatics software predicted that miR-155-5p and miR-194-5p can target PD-L1, MALAT-1 and XIST. MALAT-1 and XIST were predicted to target PD-L1 mRNA. PD-L1 and XIST were significantly upregulated in 23 HCC biopsies compared to healthy controls; however, MALAT-1 was barely detected. Induced expression of miR-194 elevated the expression of PD-L1, XIST and MALAT-1. However, overexpression of miR-155-5p induced the upregulation of PD-L1 and XIST, while it had a negative impact on MALAT-1 expression. Knockdown of XIST did not have an impact on PD-L1 expression; however, following knockdown of the negative regulator of X-inactive specific transcript (TSIX), PD-L1 expression was elevated and MALAT-1 activity was abolished. Upon co-transfection of miR-194-5p with siMALAT-1, PD-L1 expression was elevated. Co-transfection of miR-194-5p with siXIST did not have an impact on PD-L1 expression. Upon co-transfection of miR-194 with siTSIX, PD-L1 expression was upregulated. Interestingly, the same PD-L1 expression pattern was observed following miR-155-5p co-transfections. Oleuropein treatment of Huh-7 cells reduced the expression profile of PD-L1, XIST and miR-155-5p, upregulated the expression of miR-194-5p, and had no significant impact on the MALAT-1 expression profile.

CONCLUSION
This study reports a novel finding revealing that oppositely acting miRNAs in HCC have the same impact on the PD-1/PD-L1 immune checkpoint by sharing a common signaling pathway.

INTRODUCTION
Hepatocellular carcinoma (HCC) constitutes a global burden and is one of the leading causes of cancer mortality.
A myriad of therapeutic modalities is available for HCC, including tumor resection or ablation, transarterial chemoembolization, liver transplantation and treatment with tyrosine kinase inhibitors. Nevertheless, HCC is a highly therapy-resistant disease and is frequently diagnosed at an advanced stage; thus, the identification of a novel therapeutic modality is essential. Recently, tumour immunotherapy has been thrust into the spotlight to inhibit tumour progression, relapse and metastasis. Immunotherapeutic techniques comprise both activation of tumour-specific immune responses and enhancement of cellular or humoral immunity, thus causing disruption of immune tolerance. HCC immunotherapy has greatly changed due to extensive ongoing immunological studies, which have incorporated immunotherapy into the HCC treatment armamentarium. The rationale behind such a revolutionary therapeutic technique is the fact that HCC develops in an inflammatory milieu brimming with tumour-infiltrating lymphocytes, boosting HCC immunogenicity. Immune checkpoint inhibitors have been featured as a sensational paradigm shift in cancer immunotherapy. Physiologically, immune checkpoints are co-inhibitory molecules that act as "brakes" in the immune system to avoid an exaggerated response and restore its activity to a normal level. Programmed cell-death protein 1 (PD-1) is one of the most highly expressed immune checkpoints on T-cells in most solid tumours. PD-1 was originally described by Ishida et al. in 1992 as a cell death inducer, a discovery that paved the way for the Nobel Prize-winning immune checkpoint inhibitor studies in 2018. Tumour immune surveillance evasion can then occur upon engagement of PD-1 with its ligand, programmed death ligand 1 (PD-L1), expressed on tumour cells, leading to effector T-cell exhaustion and dysfunction. PD-1/PD-L1 immune checkpoint blockade has shown considerable survival benefits in patients with different metastatic tumours. In 2017, the Food and Drug Administration approved nivolumab, a human immunoglobulin G monoclonal antibody against PD-1, for patients with advanced HCC, due to the durable responses observed in these patients. Thanks to the breakthroughs established in next-generation sequencing, which enabled whole-transcriptome expression profiling at the molecular level, our understanding of biological systems has improved. Such studies have revealed the deregulated expression of a multitude of ncRNAs. Based on bioinformatics analysis, the oncomiR miR-155-5p and the tumor suppressor miR-194-5p were predicted to target the PD-L1 transcript as well as the candidate lncRNAs X-inactive specific transcript (XIST) and MALAT-1. Moreover, the lncRNAs XIST and MALAT-1 were predicted to target the PD-L1 transcript, where both lncRNAs have demonstrated their role in HCC pathogenesis in several studies. Therefore, it is interesting to study the expression profile of PD-L1 in Huh-7 cells relative to the expression manipulation of the candidate ncRNAs in order to explore novel potential upstream regulatory ncRNAs for PD-L1 in HCC and the capacity of these ncRNAs as therapeutic targets. In addition, it is of value to determine the clinical relevance of the proposed regulatory signaling pathways for PD-L1 in HCC patients by assessing the expression pattern of PD-L1 as well as the lncRNAs XIST and MALAT-1 in HCC tissues.
The trend towards integrating phytochemicals in cancer therapy is growing worldwide, especially with increased tolerance and resistance to traditional cancer therapeutic modalities. The olive tree (Olea europaea L.), which belongs to the Oleaceae family, is native to tropical and warm temperate regions. Several studies have postulated that the olive plant has anti-inflammatory and anticancer activities. Such activities are mainly attributed to the unique polyphenolic content of the olive plant. Oleuropein is one of the most abundant phenolic compounds in olive leaves. It is reported to have a plethora of health benefits that are attributed to a combination of pharmacological actions, including anti-oxidant, anti-inflammatory and anti-angiogenic activities, which pave the way for its interesting anticancer activity. Oleuropein has been demonstrated to have an anti-inflammatory and immunomodulatory effect via down-regulation of the MAPK and NF-κB signaling pathways as well as by controlling the production of inflammatory mediators such as the IL-6 and TNF-α cytokines and MMP-1 and MMP-3 levels. Interestingly, Ruzzolini et al. revealed the promising potential of oleuropein as an adjuvant therapy against BRAF melanoma, by manipulating the pAKT/pS6 pathway. Moreover, a recent study demonstrated the potential indirect modulatory impact of oleuropein on PD-L1 in esophageal cancer, by manipulating the expression of hypoxia-inducible factor-1. Nevertheless, to the best of our knowledge, the immunomodulatory impact of oleuropein on HCC has not been extensively studied. Hence, the impact of this promising compound on our study key players was determined.

Bioinformatics analysis
To detect possible microRNAs targeting the 3'UTR of PD-L1 mRNA, the microrna.org (www.microrna.org) bioinformatics target prediction software was used. Based on the binding scores and number of hits, miRNAs with good scores were chosen. The DIANA tools software (http://carolina.imis.athena-innovation.gr) was used to analyze potential binding of miR-194 and miR-155 to the 3'UTR region of the lncRNAs XIST and MALAT-1. The lnCeDB (Database of Human Long Noncoding RNA Acting as Competing Endogenous RNA) prediction software algorithm (http://gyanxetbeta.com/lncedb/) was used to analyze potential binding of the lncRNAs XIST and MALAT-1 to PD-L1.

Patients and tissue samples
The present study included 23 patients with HCC, who underwent liver transplant surgery in the Kasr El Einy Hospital (Cairo University, Cairo, Egypt). Four samples of cirrhotic tissue were taken from a subset of these patients with focal HCC lesions. As per the pathology reports of these patients, summarized in Table 1, almost 70% of patients had > 1 focal lesion. Ten liver biopsies were obtained from healthy donors. Ethical approval for this study was issued by the Institutional Review Board of Cairo University. In addition, all participants provided written informed consent. The institutional ethics committees approving this research comply with the principles set forth in the international reports and guidelines of the Helsinki Declaration and the International Ethical Guidelines for Biomedical Research Involving Human Subjects, issued by the Council for International Organizations of Medical Sciences.
Cells that were exposed only to the transfection reagent were designated mock cells; cells transfected with miR-155 or miR-194 mimics were designated miR-155 cells and miR-194 cells, respectively; cells transfected with the miR-155 or miR-194 inhibitors were designated anti-miR-155 cells and anti-miR-194 cells, respectively; cells transfected with XIST siRNAs were designated XIST siRNA cells; cells transfected with MALAT-1 siRNAs were designated MALAT-1 siRNA cells; cells transfected with TSIX siRNAs were designated TSIX siRNA cells; cells co-transfected with miR-155 and XIST siRNA were designated miR-155/siXIST; cells co-transfected with miR-155 and MALAT-1 siRNA were designated miR-155/siMALAT-1; cells co-transfected with miR-155 and TSIX siRNA were designated miR-155/siTSIX; cells co-transfected with miR-194 and XIST siRNA were designated miR-194/siXIST; cells co-transfected with miR-194 and MALAT-1 siRNA were designated miR-194/siMALAT-1; and cells co-transfected with miR-194 and TSIX siRNA were designated miR-194/siTSIX. Cells were lysed 48 h post-transfection and total RNA was extracted for further analysis.

Plant material and fractionation
Olive leaves were collected from northern Sinai, Egypt and authenticated by Mrs. Therasa Labib, Taxonomist, Orman Botanical Garden, Egypt. A voucher specimen was deposited at the Herbarium of the Pharmaceutical Biology Department, Faculty of Pharmacy and Biotechnology, German University in Cairo. Exhaustive extraction of olive leaves was carried out using 70% aqueous ethanol, followed by re-suspension of the residue in H2O and fractionation against petroleum ether, chloroform and ethyl acetate to yield 17 g, 6.5 g and 4.5 g, respectively. The ethyl acetate polar fraction was applied to an open column (64 cm L × 5.5 cm ID) packed with silica (250 g) as the stationary phase. A CHCl3:CH3OH:H2O gradient was used for the elution process to ensure purification of the sub-fractions.

Isolation of oleuropein
The sub-fraction of interest (30 mg) was obtained using CHCl3:CH3OH:H2O in a ratio of 3:4:3, then injected into a preparative high performance liquid chromatograph (Waters 600 E multisolvent delivery system, Waters 600 E pump and Waters 2998 PDA) employing a Lichrospher 100 RP-18 column (250 mm × 10 mm i.d.; 10 µm) (Merck KGaA, Darmstadt, Germany). The mobile phase was composed of 0.2% H3PO4 (v/v), methanol and acetonitrile in a ratio of 96:2:2. NMR spectra were obtained using a Bruker Avance 500 spectrometer (Bremen, Germany) with a 5 mm Z-grad probe, operating at 500.13 MHz for ¹H and 125.77 MHz for ¹³C. The purity of oleuropein was confirmed using analytical HPLC (Agilent Technologies, Waldbronn, Germany), equipped with a PDA detector G 1314 C (SL). Chromatographic separation was carried out on a Superspher 100 RP-18 column (75 mm × 4 mm i.d.; 4 µm) (Merck, Darmstadt, Germany) using mobile phases (A) 2% acetic acid (pH 2.6) and (B) 80% methanol. A gradient from 5% B to 50% B was employed for the elution process at a 100 µL/min flow rate at 30°C, and the product was compared vs standard material (Sigma-Aldrich) using HPLC. Confirmation of oleuropein identity was carried out by comparing its spectral data to the published literature.

Oleuropein treatment of Huh-7 cells
A 100 mmol/L stock solution of oleuropein was prepared by dissolving 0.108 g in 2 mL of serum-free DMEM. A solution of 80 µmol/L concentration, previously reported as the LC50 on Huh-7 cells, was prepared using this stock.
RNA isolation from liver biopsies and the Huh-7 cell line
RNA was isolated from Huh-7 cells and liver biopsies using the TRIzol™ LS Reagent (Applied Biosystems; Thermo Fisher Scientific Inc., cat. no. 10296010) extraction protocol.

Quantified real-time polymerase chain reaction
Total RNA extracted was reverse-transcribed into single-stranded complementary DNA (cDNA) using the high-capacity cDNA reverse transcription kit (Applied Biosystems; Thermo Fisher Scientific Inc., cat. no. 4368814). The relative expression of miR-155 as well as miR-194 to that of RNU6B (housekeeping gene), in addition to that of PD-L1 mRNA and the XIST and MALAT-1 lncRNAs to that of β-2-microglobulin (B2M; a housekeeping gene), was quantified with TaqMan RT-quantitative polymerase chain reaction using StepOne™ Systems (Applied Biosystems Life Technologies). The PCR for miR quantification included 1 µL TaqMan Small RNA Assay (20X) specific for each of miR-155, miR-194 or RNU6B and 1.33 µL cDNA from each of the miR-155, miR-194 or RNU6B RT reactions, respectively. TaqMan target gene expression assay (1 µL) specific for each of PD-L1, XIST and MALAT-1 as well as 4 µL of the respective cDNA were used for quantification. The RT-qPCR run was performed in the standard mode, consisting of two stages: a first 10 min stage at 95°C where the Taq polymerase enzyme was activated, followed by a second stage of 40 amplification cycles (15 s at 95°C and 60 s at 60°C). Relative expression was calculated using the 2^−ΔΔCq method. All PCR reactions, including controls, were run in triplicate.

Statistical analysis
All data were expressed in relative quantitation. For the purpose of comparison between two different studied groups, Student's unpaired t-test was used. Data were expressed as mean ± standard error of the mean. A P value less than 0.05 was considered statistically significant: dP < 0.0001, cP < 0.001, bP < 0.01, aP < 0.05. Analysis was performed using GraphPad Prism 7.02.

In silico analysis
According to miRANDA software and the miRDB database, a total of 146 miRNAs were predicted to target PD-L1 mRNA. Both miR-155 and miR-194 were predicted to bind to the 3'UTR region of PD-L1 mRNA using miRANDA and TargetScan software, while binding of miR-194 and miR-155 to the 3'UTR region of the lncRNAs XIST and MALAT-1 was predicted using the DIANA tools software. MALAT-1 and XIST were predicted to target PD-L1 mRNA according to the lnCeDB software algorithms.

Expression profile of PD-L1 in liver tissues
The expression profile of PD-L1 was assessed in HCC patients, and adjacent cirrhotic biopsies in a subset of patients, together with 10 healthy donor controls, using qRT-PCR. PD-L1 was significantly elevated in both HCC biopsies (P = 0.0065) and cirrhotic biopsies (P = 0.0251) in comparison to healthy controls (Figure 1).

Expression profile of the lncRNAs XIST and MALAT-1 in HCC tissues
The expression profile of the endogenous lncRNAs XIST and MALAT-1 was examined in HCC patients and adjacent cirrhotic biopsies in a subset of patients together with 10 healthy donors using qRT-PCR. HCC patients showed a significant upregulation of XIST expression (P = 0.048) compared to healthy controls. MALAT-1 expression in HCC patients was barely detected (P = 0.043), and a significant upregulation was found in the cirrhotic tissues (P = 0.0136) (Figure 2).
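All of the expression comparisons above rely on the 2^−ΔΔCq relative-quantification step described in the qPCR methods. A minimal sketch of that arithmetic (the function name and Cq values are illustrative, not study data):

# Minimal sketch of the 2^-ΔΔCq relative-quantification step (illustrative
# values only; not study data). Targets are normalized to a housekeeping
# gene (B2M for PD-L1/XIST/MALAT-1, RNU6B for the miRNAs), then compared
# to the control group.
def relative_expression(cq_target, cq_housekeeping, cq_target_ctrl, cq_housekeeping_ctrl):
    delta_cq_sample = cq_target - cq_housekeeping
    delta_cq_control = cq_target_ctrl - cq_housekeeping_ctrl
    delta_delta_cq = delta_cq_sample - delta_cq_control
    return 2 ** -delta_delta_cq

# e.g. target Cq 24 vs B2M Cq 20 in HCC, target Cq 27 vs B2M Cq 20 in control:
print(relative_expression(24, 20, 27, 20))  # 8.0-fold relative to control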
Manipulation of endogenous miR-194-5p and miR-155-5p expression in Huh-7 cells
Transfection efficiency of miR-194-5p and miR-155-5p oligonucleotides: In order to manipulate the expression of miR-194-5p and miR-155-5p in Huh-7 cells, the cells were transfected with the respective miRNA mimics and antagomirs.

Figure 1 Expression profile of programmed death ligand 1 in liver tissues. Programmed death ligand 1 expression was analyzed in hepatocellular carcinoma patients, cirrhotic and healthy controls using quantified real-time polymerase chain reaction and normalized to B2M as an internal control (housekeeping gene). Screening of programmed death ligand 1 showed that it was enhanced in cirrhotic biopsies (aP < 0.05) and hepatocellular carcinoma biopsies (bP < 0.01) compared to healthy controls. HCC: Hepatocellular carcinoma.

Figure 2 Expression profile of lnc-ribonucleic acids X-inactive specific transcript and MALAT-1 in hepatocellular carcinoma tissues. The expression profiles of the endogenous X-inactive specific transcript and MALAT-1 lnc-ribonucleic acids were analyzed in hepatocellular carcinoma (HCC) patients and healthy controls using quantified real-time polymerase chain reaction and normalized to B2M as an endogenous control. A: X-inactive specific transcript lnc-ribonucleic acid showed a significant upregulation in HCC biopsies (P = 0.048); and B: MALAT-1 was significantly downregulated in HCC biopsies (P = 0.043); however, it showed elevated expression in cirrhotic biopsies (P = 0.0136). aP < 0.05. HCC: Hepatocellular carcinoma.

Impact of knocking down the lncRNAs MALAT-1, XIST and TSIX on PD-L1 expression in Huh-7 cells
Knockdown of MALAT-1 significantly downregulated PD-L1 expression (P = 0.001) compared to mock cells. On the other hand, transfection with siRNAs of TSIX induced the upregulation of PD-L1 expression (P = 0.0358) compared to mock cells. Knockdown of XIST resulted in an insignificant change in the PD-L1 expression profile compared to untransfected mock cells (Figure 6).

Net impact of combined ectopic expression of miR-194-5p and miR-155-5p together with siRNAs of the lncRNAs XIST, TSIX and MALAT-1 on the PD-L1 expression profile
The expression profile of the PD-L1 transcript was studied following co-transfection of Huh-7 cells with different combinations of each miRNA (miR-194-5p and miR-155-5p, respectively) with each of the siRNAs of the lncRNAs MALAT-1, XIST and TSIX. Values were normalized to the endogenous housekeeping gene B2M and compared to mock untransfected cells. Following transfection of miR-194-5p with siRNA of MALAT-1, PD-L1 expression was significantly induced (P = 0.0074). However, following knockdown of XIST, miR-194-5p did not have a significant impact on PD-L1 expression.

DISCUSSION
The high expression pattern of immune checkpoints is a major cause of inefficient antitumor immunity. In this framework, immune checkpoint blockade has been revitalized to unleash the potential of anti-tumor immunity. Nevertheless, immunotherapeutic approaches have modest responses in HCC. Thus, combinatorial therapeutic strategies including epigenetic modulation through ncRNAs and immunomodulation techniques are implemented to circumvent the limitations of immunotherapeutic techniques. Recently, a novel interaction circuit has been demonstrated in the competing endogenous RNA (ceRNA) network, composed of three RNAs: "lncRNA-miRNA-mRNA". Here, we showed that PD-L1 in HCC is a member of a ceRNA network orchestrated by miR-155, miR-194 and the lncRNA XIST. Based on in-silico analysis, the oncogenic miR-155-5p and the tumour suppressor miR-194-5p were predicted to target PD-L1 mRNA.
It has been postulated that miR-155 promotes tumorigenic properties in HCC-derived cell lines and hence is an oncogenic miRNA in HCC pathogenesis. On the other hand, miR-194 has tumour suppressor activity in HCC, as it was downregulated in HCC biopsies. Interestingly, a paradoxical function of the tumour suppressor miR-194-5p in HCC was revealed in this study, as it was able to elevate the abundance of the oncogenic mediator PD-L1. Similarly, another study demonstrated the contradictory role of the oncomiR miR-125b in hematological malignancies, in which its oncogenic activity could be overcome in some instances in chronic lymphocytic leukemia to act as a tumour suppressor. Inspired by the ceRNA regulatory network, we investigated the impact of the key miRNA players on the proposed lncRNAs. Bioinformatics analysis was adopted to predict the potential lncRNAs targeted by miR-194-5p and miR-155-5p. Based on the literature, two lncRNAs were selected: XIST and MALAT-1. The lncRNA XIST is reported to be an oncogenic RNA, as it is associated with worse survival in HCC patients, and its oncogenic activity is mediated by Akt signaling pathway activation through the miR-139-5p/PDK1 axis. Nevertheless, overexpression of miR-194-5p and miR-155-5p induced an elevation in XIST. This finding also confirms the potential paradoxical role of miR-194-5p in HCC pathogenesis. Several studies have shown upregulation of MALAT-1 in HCC biopsies. However, one study reported that following MALAT-1 knockdown in a hepatoma cell line, no variations in the proliferation pattern, cell cycle progression or nuclear architecture were observed. Surprisingly, overexpression of miR-194-5p induced the elevation of MALAT-1. In contrast, induced expression of miR-155-5p resulted in downregulation of MALAT-1. Taken together, these findings demonstrate the paradoxical functions of miRNAs in tumours, in which miR-194-5p expression induction elevated the expression of oncogenic members in the Huh-7 cell line. A plausible explanation for this anomaly is the fact that a single miRNA can target tens to hundreds of mRNAs, some of which are tumour suppressors and others oncogenes. According to the balance in expression of the targeted mRNAs, a net effect of oncogenic or tumour suppressor activity can emerge. Our study showed that knockdown of MALAT-1 using siMALAT-1 resulted in downregulation of the PD-L1 transcript. On the other hand, following knockdown of the XIST negative regulator, TSIX, the PD-L1 transcript was significantly elevated. These findings are considered helpful in clarifying the interesting role of the tumour-suppressor miR-194-5p in elevating PD-L1, an activity that could be mediated through XIST and MALAT-1. However, the role of MALAT-1 in PD-L1 transcript elevation in HCC is still questionable, as despite the downregulation of MALAT-1 upon miR-155-5p overexpression, the PD-L1 transcript was found to be highly abundant. In order to have a full understanding of the ceRNA network involved in PD-L1 transcript level modulation in HCC, the combined effect of the respective miRNAs and lncRNAs on PD-L1 transcript abundance was studied. MiR-194-5p elevated PD-L1 transcript abundance even in the absence of MALAT-1. However, when XIST was knocked down, miR-194-5p was unable, on its own, to affect the PD-L1 abundance level. Nevertheless, upon XIST upregulation together with mimicking of miR-194-5p, the PD-L1 transcript level was restored.
These findings provide solid evidence of the pivotal role of XIST in increased PD-L1 transcript abundance. Surprisingly, similar findings were observed following co-transfection of miR-155-5p mimics with each of the siRNAs of the respective lncRNAs, comparable to their co-transfection with miR-194-5p. These findings provide extra proof of the insignificant role of MALAT-1 in the PD-L1 expression pattern in comparison with XIST and both respective miRNAs. The in-vitro results of our study also demonstrated the dual activity of miR-194-5p. Based on the literature, miR-194-5p has tumour suppressor activity in HCC by exerting a negative impact on cell viability and proliferation. However, our results indicated that overexpression of miR-194-5p increased the abundance of the two oncogenic HCC members PD-L1 and XIST, similar to the impact of the oncogenic miR-155-5p. Hence, our next aim was to validate the results ex-vivo by screening HCC biopsies for PD-L1, XIST and MALAT-1 expression. An elevated expression of XIST in HCC biopsies was noted, which was in accordance with several other studies that have reported the oncogenic role of XIST in HCC. Also, PD-L1 was found to be significantly overexpressed in HCC biopsies compared to normal donor biopsies. This result is similar to that of other studies which reported the elevated expression of PD-L1 in HCC and its mechanistic role in immune evasion. To our surprise, MALAT-1 was barely detected in HCC biopsies, in contrast to other studies that have reported the oncogenic role of MALAT-1 in HCC. The interesting finding of downregulated MALAT-1 in HCC biopsies is in accordance with the in-vitro finding of the insignificant role of MALAT-1 in PD-L1 expression in HCC cells. This study highlights potential therapeutic targets in HCC, including the members of the aforementioned upstream regulatory pathways of PD-L1. Nevertheless, the clinical application of ncRNAs as therapeutics is still limited and understudied. Thus, a trend towards using nutraceuticals in cancer therapy has developed due to the feasibility of their clinical application. Phytochemicals have not only demonstrated epigenetic immunomodulation by targeting lncRNAs and miRNAs, but have also revealed a role in immune checkpoint modulation. Due to the favorable role of polyphenolic nutraceuticals in epigenetic modulation, the nutraceutical oleuropein was selected for this study in order to determine its impact on the study key players, based on its aforementioned anti-inflammatory and immunomodulatory effects. At 80 µmol/L, oleuropein significantly reduced the abundance of PD-L1 in Huh-7 cells. When the abundance of potential upstream regulatory ncRNAs was measured, it was found that XIST expression was significantly downregulated. However, oleuropein did not have a significant impact on MALAT-1 expression. Measurement of the impact of oleuropein on miR-194-5p and miR-155-5p revealed that miR-194-5p expression was markedly upregulated. In contrast, miR-155-5p was significantly downregulated. This finding is in accordance with another study that reported the negative impact of oleuropein on miR-155 in a breast cancer cell line, which manifested as anti-proliferative, apoptotic and anti-metastatic effects. Finally, the potential of oleuropein as a therapeutic agent in HCC requires further investigation in order to support these promising findings. Some limitations must be acknowledged in this study.
First, the number of patients, and consequently the number of tissue biopsies, was limited; however, statistically significant results were obtained. Further studies using a larger number of tissue biopsies should be performed to validate the proposed pathway in a larger cohort of patients. Second, a more robust study design is necessary to analyze the study key players in peripheral blood samples of advanced HCC patients and to investigate the impact of mimicking the miRNAs miR-155-5p and miR-194-5p on PD-L1 protein levels in HCC cell lines.

CONCLUSION
In conclusion, this study reported the controversial role of miR-194-5p in HCC, as it paradoxically displays both tumour suppressor and oncogenic activity in HCC and had the same impact as miR-155-5p on upregulation of PD-L1 and XIST. Transfection of each of the siRNAs of the respective lncRNAs showed that XIST and MALAT-1 can have a positive impact on PD-L1 transcript abundance. However, following a series of co-transfections, it was demonstrated that XIST is a cornerstone of PD-L1 expression, while MALAT-1 has no significant impact compared to the respective miRNAs and XIST. Thus, the paradoxically acting miR-194-5p and miR-155-5p share a novel upstream regulatory signaling pathway for the PD-1/PD-L1 immune checkpoint through XIST expression modulation (Figure 9). The key regulators of this ceRNA circuit could therefore be employed as therapeutic targets in HCC.

Figure 9 Schematic representation of the shared pathway between miR-155-5p and miR-194-5p. This article highlights the novel shared upstream regulatory signaling pathway for the programmed cell-death protein 1/programmed death ligand 1 immune checkpoint between the paradoxically acting miR-194-5p and miR-155-5p, through lnc X-inactive specific transcript expression modulation. Mimicking of the tumor suppressor miR-194-5p as well as the oncomiR miR-155-5p in the Huh-7 cell line showed the same upregulation pattern of X-inactive specific transcript. X-inactive specific transcript is then proposed to be an intermediate player whose upregulation drove the increase in programmed death ligand 1 transcript abundance.

Research background
Hepatocellular carcinoma (HCC) develops in an inflammatory milieu containing tumor-infiltrating lymphocytes, which boosts tumor immunogenicity and provides a rationale for developing immunotherapies against HCC. However, immunotherapies have a modest response in HCC; accordingly, combinatorial therapies with epigenetic immunomodulation may be a promising modality. Growing scientific evidence has suggested a modulatory role for miRNAs and long non-coding ribonucleic acids (lncRNAs) on the programmed cell-death protein 1 (PD-1)/programmed death ligand 1 (PD-L1) immune checkpoint in HCC.

Research motivation
HCC is considered a therapy-resistant disease and is frequently diagnosed at an advanced stage. Thus, the development of a novel therapeutic modality is essential. It is noteworthy that immune checkpoint blockade therapy in HCC is gaining attention. Additionally, given the wide range of non-coding RNAs (ncRNAs) that orchestrate the PD-1/PD-L1 immune checkpoint, we investigated how selected ncRNAs regulate the PD-1/PD-L1 immune checkpoint. Hence, the therapeutic potential of combining epigenetic immunomodulation through ncRNAs with immune checkpoint blockade was studied.

Research objectives
This study aimed at exploring potential upstream regulatory ncRNAs of the immune checkpoint PD-1/PD-L1.
Hence, the potential of combining immune checkpoint blockade with epigenetic immunomodulation was investigated.

Research methods
Based on bioinformatics software and the literature, ncRNAs including miR-155-5p and miR-194-5p as well as the lncRNAs X-inactive specific transcript (XIST) and MALAT-1 were selected. Twenty-three HCC tissue biopsies and 10 healthy donor tissue biopsies were used to screen the expression of PD-L1 as well as the lncRNAs XIST and MALAT-1. To study the interaction between miR-155-5p, miR-194-5p, the lncRNAs XIST and MALAT-1, as well as PD-L1 mRNA, a series of transfections and co-transfections of the Huh-7 cell line was carried out. Quantified real-time polymerase chain reaction was then utilized to study the abundance of the selected ncRNAs as well as PD-L1 transcripts in Huh-7 cells in the transfection experiments.

Research results
Based on bioinformatics software and the literature, we hypothesized that a potential upstream regulatory pathway to the immune checkpoint PD-L1 is present in HCC, composed of both miRNAs, the tumor suppressor miR-194-5p and the oncomiR miR-155-5p, as well as both lncRNAs XIST and MALAT-1. Following the screening of 23 HCC biopsies, PD-L1 and XIST were found to be significantly upregulated compared to healthy controls; however, MALAT-1 was barely detected. Induced expression of miR-194-5p and miR-155-5p in the Huh-7 cell line showed the same pattern of upregulation of both the PD-L1 transcript and XIST. However, ectopic expression of the respective miRNAs had a paradoxical impact on MALAT-1 abundance, i.e. miR-194-5p induced the upregulation of MALAT-1 while miR-155-5p downregulated the abundance of MALAT-1. Knockdown of XIST had no impact on PD-L1 expression; however, following knockdown of the negative regulator of X-inactive specific transcript (TSIX), PD-L1 expression was elevated, and MALAT-1 activity was abolished. Upon co-transfection of miR-194-5p with siMALAT-1, PD-L1 expression was elevated. On the other hand, co-transfection of miR-194-5p with siXIST did not have an impact on PD-L1 expression. Following co-transfection of miR-194 with siTSIX, PD-L1 expression was upregulated. Interestingly, the same PD-L1 expression pattern was revealed following the oncomiR-155-5p co-transfection series.

Research conclusions
In conclusion, this study reported the controversial role of miR-194-5p in HCC: despite its paradoxical functions of tumour suppression and oncogenic activity in HCC, it and miR-155-5p had the same impact on upregulation of XIST. LncXIST is thought to be an intermediate player whose upregulation increased PD-L1 transcript abundance.

Research perspectives
Although further investigations are needed, this study proposes a novel competing endogenous RNA circuit made up of both miR-155-5p and miR-194-5p as well as lncXIST and PD-L1 mRNA. This circuit could be regarded as a potential therapeutic target in HCC.
package com.google.android.gms.internal.ads;
import android.os.IBinder;
import android.os.IInterface;
import android.os.Parcel;
import android.os.Parcelable;
import android.os.RemoteException;
import com.google.android.gms.ads.formats.PublisherAdViewOptions;
public final class zzzh extends zzfm implements zzzf {
zzzh(IBinder iBinder) {
super(iBinder, "com.google.android.gms.ads.internal.client.IAdLoaderBuilder");
}
/* renamed from: Fa */
public final zzzc mo29490Fa() throws RemoteException {
zzzc zzzc;
Parcel a = mo31750a(1, mo31749a());
IBinder readStrongBinder = a.readStrongBinder();
if (readStrongBinder == null) {
zzzc = null;
} else {
IInterface queryLocalInterface = readStrongBinder.queryLocalInterface("com.google.android.gms.ads.internal.client.IAdLoader");
if (queryLocalInterface instanceof zzzc) {
zzzc = (zzzc) queryLocalInterface;
} else {
zzzc = new zzze(readStrongBinder);
}
}
a.recycle();
return zzzc;
}
/* renamed from: b */
public final void mo29500b(zzyz zzyz) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzyz);
mo31752b(2, a);
}
/* renamed from: a */
public final void mo29493a(zzafi zzafi) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzafi);
mo31752b(3, a);
}
/* renamed from: a */
public final void mo29494a(zzafl zzafl) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzafl);
mo31752b(4, a);
}
/* renamed from: a */
public final void mo29499a(String str, zzafr zzafr, zzafo zzafo) throws RemoteException {
Parcel a = mo31749a();
a.writeString(str);
zzfo.m30221a(a, (IInterface) zzafr);
zzfo.m30221a(a, (IInterface) zzafo);
mo31752b(5, a);
}
/* renamed from: a */
public final void mo29492a(zzady zzady) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30222a(a, (Parcelable) zzady);
mo31752b(6, a);
}
/* renamed from: b */
public final void mo29501b(zzzy zzzy) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzzy);
mo31752b(7, a);
}
/* renamed from: a */
public final void mo29495a(zzafu zzafu, zzyd zzyd) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzafu);
zzfo.m30222a(a, (Parcelable) zzyd);
mo31752b(8, a);
}
/* renamed from: a */
public final void mo29491a(PublisherAdViewOptions publisherAdViewOptions) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30222a(a, (Parcelable) publisherAdViewOptions);
mo31752b(9, a);
}
/* renamed from: a */
public final void mo29496a(zzafx zzafx) throws RemoteException {
Parcel a = mo31749a();
zzfo.m30221a(a, (IInterface) zzafx);
mo31752b(10, a);
}
}
//*****************************************************************************
// Filename : 'qr.h'
// Title : Defs for MONOCHRON QR clock
//*****************************************************************************
#ifndef QR_H
#define QR_H
#include "../avrlibtypes.h"
// The number of clock cycles needed to create and display a QR
#define QR_GEN_CYCLES 5
// QR clock
void qrCycle(void);
void qrInit(u08 mode);
#endif
Gel-Emulsion Templated Polymeric Aerogels for Water Treatment through Organic Liquid Removal and Solar Vapor Generation. We cannot emphasize the importance of water too much, as it is the most important natural resource for our survival and development. Developing preferable materials for efficient water purification will provide a critical contribution to sustainable water use. In this context, we report a gel-emulsion templated synthesis of a polymeric aerogel for water treatment. Because of its hydrophobic nature, the aerogel showed high sorption (nearly 20 times) for organic liquids (including toluene, phenol and nitrobenzene, etc.) and can be used to remove them from water. Meanwhile, the aerogel has a low thermal conductivity (0.032 W m⁻¹ K⁻¹) and showed great light absorption efficiency (> 92%) after carbonization, offering a basis for the construction of an interfacial solar vapor generation system. Based on the as-prepared materials, a two-step approach was developed to remove both organic and inorganic contaminants (salts) from water. Importantly, the aerogel showed excellent reusability and high efficiency both in oil sorption and in solar vapor generation. Moreover, the low cost and easy scale-up of the preparation process lay a solid foundation for practical applications. It is anticipated that the prepared aerogels will contribute not only to water purification but also to other related areas.
package com.easyrpc.spring;
import com.easyrpc.client.ServerClient;
import org.springframework.core.annotation.AnnotationUtils;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
/**
* 动态生成代理类
*
* @author: guanjie
*/
public class ServerProxy implements InvocationHandler {
private Field field;
public ServerProxy(Field field) {
this.field = field;
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
Class[] types = null;
if (args != null) {
types = new Class[args.length];
for (int i = 0; i < args.length; i++) {
types[i] = args[i].getClass();
}
}
Reference reference = AnnotationUtils.findAnnotation(field, Reference.class);
return ServerClient.invoke(reference.contract().getName(), reference.implCode(), method.getName(), args, types);
}
public static <T> T getProxy(Field field) {
return (T) Proxy.newProxyInstance(ServerProxy.class.getClassLoader(), new Class[]{field.getType()}, new ServerProxy(field));
}
}
def __check_possible_values(param_name, param_value, param_type, possible_values):
    # First interpretation: a semicolon-separated whitelist such as "1;2;3".
    try:
        is_valid = param_value in [CAST_TYPE_DICTIONARY[param_type]
                                   (y.strip()) for y in possible_values.split(';')]
    except ValueError:
        is_valid = False
    # Second interpretation: a "[min:max]" range; strip the brackets and
    # check that the value falls between the cast bounds.
    try:
        if not is_valid:
            min_value, max_value = tuple(
                [CAST_TYPE_DICTIONARY[param_type](y) for y in possible_values[1:-1].split(':')])
            is_valid = (min_value <= param_value <= max_value)
    except ValueError:
        is_valid = False
    if not is_valid:
        error_msg = "Value of '{0}' ({1}) is not in possible values {2}".format(
            param_name, param_value, possible_values)
        raise AcsConfigException(AcsConfigException.INVALID_PARAMETER, error_msg)
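For illustration, the two 'possible_values' formats the validator accepts behave like this (the CAST_TYPE_DICTIONARY stub below is a hypothetical stand-in for the real mapping):

# Hypothetical stand-in for the real cast mapping, for illustration only.
CAST_TYPE_DICTIONARY = {"int": int, "float": float, "str": str}

# 1) Semicolon-separated whitelist: the value must equal one entry.
whitelist = [CAST_TYPE_DICTIONARY["int"](y.strip()) for y in "1;2;3".split(';')]
print(5 in whitelist)  # False

# 2) "[min:max]" range: strip the brackets, cast the bounds, check inclusion.
min_v, max_v = (CAST_TYPE_DICTIONARY["int"](y) for y in "[1:10]"[1:-1].split(':'))
print(min_v <= 5 <= max_v)  # True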
/***************************************************/
/* Get the pixel values of a named colour colstr. */
uint32_t server_getcolor(const char *colstr)
{
xcb_alloc_named_color_reply_t *col_reply;
xcb_colormap_t colormap;
xcb_generic_error_t *error;
xcb_alloc_named_color_cookie_t colcookie;
colormap = screen->default_colormap;
colcookie = xcb_alloc_named_color(conn, colormap, strlen(colstr), colstr);
col_reply = xcb_alloc_named_color_reply(conn, colcookie, &error);
if (NULL != error)
{
fprintf(stderr, "marcelino: Couldn't get pixel value for colour %s.\n", colstr);
xcb_disconnect(conn);
exit(1);
}
/* Copy the pixel out and free the heap-allocated xcb reply (requires <stdlib.h>). */
uint32_t pixel = col_reply->pixel;
free(col_reply);
return pixel;
}
/// write a sqlcipher key pragma maintaining mem protection
fn secure_write_key_pragma(
key: sodoken::BufReadSized<32>,
) -> LairResult<BufRead> {
// write the pragma line
let key_pragma: BufWriteSized<KEY_PRAGMA_LEN> =
BufWriteSized::new_mem_locked().map_err(one_err::OneErr::new)?;
{
use std::io::Write;
let mut key_pragma = key_pragma.write_lock();
key_pragma.copy_from_slice(KEY_PRAGMA);
let mut c = std::io::Cursor::new(&mut key_pragma[16..80]);
for b in &*key.read_lock() {
write!(c, "{:02X}", b).map_err(one_err::OneErr::new)?;
}
}
Ok(key_pragma.to_read())
}
package com.netflix.spinnaker.halyard.config.validate.v1.providers.dcos;
import com.beust.jcommander.Strings;
import com.beust.jcommander.internal.Lists;
import com.netflix.spinnaker.halyard.core.secrets.v1.SecretSessionManager;
import com.netflix.spinnaker.halyard.config.model.v1.node.DeploymentConfiguration;
import com.netflix.spinnaker.halyard.config.model.v1.node.Node;
import com.netflix.spinnaker.halyard.config.model.v1.node.NodeIterator;
import com.netflix.spinnaker.halyard.config.model.v1.node.Provider;
import com.netflix.spinnaker.halyard.config.model.v1.node.Validator;
import com.netflix.spinnaker.halyard.config.model.v1.providers.containers.DockerRegistryReference;
import com.netflix.spinnaker.halyard.config.model.v1.providers.dcos.DCOSAccount;
import com.netflix.spinnaker.halyard.config.model.v1.providers.dcos.DCOSCluster;
import com.netflix.spinnaker.halyard.config.problem.v1.ConfigProblemSetBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import static com.netflix.spinnaker.halyard.config.validate.v1.providers.dockerRegistry.DockerRegistryReferenceValidation.validateDockerRegistries;
import static com.netflix.spinnaker.halyard.core.problem.v1.Problem.Severity.ERROR;
import static com.netflix.spinnaker.halyard.core.problem.v1.Problem.Severity.WARNING;
/**
* TODO: use clouddriver components for full validation (e.g. account name)
*/
@Component
public class DCOSAccountValidator extends Validator<DCOSAccount> {
@Autowired
private SecretSessionManager secretSessionManager;
@Override
public void validate(final ConfigProblemSetBuilder problems, final DCOSAccount account) {
DeploymentConfiguration deploymentConfiguration;
/**
* I have copied
* the code
* that was in
* the KubernetesAccountValidator
*
* and which
* you were planning
* to refactor
* with filters
*
* Forgive me
* It did the job
* And I was lazy
* so very lazy
*/
// TODO(lwander) this is still a little messy - I should use the filters to get the necessary docker account
Node parent = account.getParent();
while (!(parent instanceof DeploymentConfiguration)) {
// Note this will crash in the above check if the halconfig representation is corrupted
// (that's ok, because it indicates a more serious error than we want to validate).
parent = parent.getParent();
}
deploymentConfiguration = (DeploymentConfiguration) parent;
validateClusters(problems, account);
if (account.getClusters().isEmpty()) {
problems.addProblem(ERROR, "Account does not have any clusters configured")
.setRemediation("Edit the account with either --update-user-credential or --update-service-credential");
}
final List<String> dockerRegistryNames = account.getDockerRegistries().stream().map(DockerRegistryReference::getAccountName)
.collect(Collectors.toList());
validateDockerRegistries(problems, deploymentConfiguration, dockerRegistryNames, Provider.ProviderType.DCOS);
}
private void validateClusters(final ConfigProblemSetBuilder problems, final DCOSAccount account) {
final NodeIterator children = account.getParent().getChildren();
Node n = children.getNext();
Set<String> definedClusters = new HashSet<>();
while (n != null) {
if (n instanceof DCOSCluster) {
definedClusters.add(((DCOSCluster) n).getName());
}
n = children.getNext();
}
final Set<String> accountClusters = account.getClusters().stream().map(c -> c.getName())
.collect(Collectors.toSet());
accountClusters.removeAll(definedClusters);
accountClusters.forEach(c -> problems.addProblem(ERROR, "Cluster \"" + c.toString() + "\" not defined for provider")
.setRemediation("Add cluster to the provider or remove from the account")
.setOptions(Lists.newArrayList(definedClusters)));
Set<List<String>> credentials = new HashSet<>();
account.getClusters().forEach(c -> {
final List<String> key = Lists.newArrayList(c.getName(), c.getUid());
if (credentials.contains(key)) {
problems.addProblem(ERROR,
"Account contains duplicate credentials for cluster \"" + c.getName() + "\" and user id \"" + c.getUid()
+ "\".").setRemediation("Remove the duplicate credentials");
} else {
credentials.add(key);
}
// TODO(willgorman) once we have the clouddriver-dcos module pulled in we can just validate whether or not
// we can connect without a password
if (Strings.isStringEmpty(c.getPassword()) && Strings.isStringEmpty(c.getServiceKeyFile())) {
problems.addProblem(WARNING,
"Account has no password or service key. Unless the cluster has security disabled this may be an error")
.setRemediation("Add a password or service key.");
}
if (!Strings.isStringEmpty(c.getPassword()) && !Strings.isStringEmpty(c.getServiceKeyFile())) {
problems.addProblem(ERROR, "Account has both a password and service key")
.setRemediation("Remove either the password or service key.");
}
if (!Strings.isStringEmpty(c.getServiceKeyFile())) {
String resolvedServiceKey = validatingFileDecrypt(problems, c.getServiceKeyFile());
if (Strings.isStringEmpty(resolvedServiceKey)) {
problems.addProblem(ERROR, "The supplied service key file does not exist or is empty.")
.setRemediation("Supply a valid service key file.");
}
}
});
}
}
|
Opening the Gates of Cow Palace: Regulating Runoff Manure as a Hazardous Waste Under RCRA In 2015, a federal court held for the first time that the Environmental Protection Agency (EPA) may regulate runoff manure as a solid waste under the Resource Conservation and Recovery Act (RCRA). The holding of Community Ass'n for Restoration of the Environment, Inc. v. Cow Palace, LLC opened the gates to regulation of farms under the nation's primary toxic waste statute. This Comment argues that, once classified as a solid waste, runoff manure fits RCRA's definition of hazardous waste as well. This reclassification would expand EPA's authority to monitor and respond to the nation's tragically common groundwater-contamination emergencies. |
Ludhiana is set to get its first FM radio station on August 16. Union minister for information and broadcasting Manish Tewari on Saturday cleared the decks for the first FM station in the city. The FM station, Ludhiana FM Gold, will be heard at the 100.1 frequency and operated by All India Radio (AIR).
Ludhiana will be the first city in Punjab and the fifth in India to have an FM station under AIR. As per sources, the BSNL office in Transport Nagar will initially be used as the broadcasting station till the permanent setup is ready.
Till now, only Delhi, Chennai, Mumbai and Kolkata have had a functional FM Gold station.
The opening of the FM radio station will not only provide information and entertainment to city residents and those in the surrounding areas, but will also create employment opportunities for the local youth. Local artistes are enthusiastic about the station, as it will provide them with a platform to perform. |
package cache
import (
"net/http"
"strconv"
)
// ht is the package-level cache backend. It is not defined in this file; the
// declaration below is an assumption that makes the excerpt self-contained,
// and it is expected to be initialised elsewhere in the package.
var ht interface {
	Get(key string) ([]byte, error)
	Set(key string, value []byte, expiry uint64)
	Expire(key string, ttl uint64)
}

// getFromCache returns the value stored under key.
func getFromCache(key string) (value []byte, err error) {
	value, err = ht.Get(key)
	return
}

// setCache stores value under key with the given expiry (seconds assumed).
func setCache(key string, value []byte, expiry uint64) {
	ht.Set(key, value, expiry)
}

// expireCache resets the time to live for key.
func expireCache(key string, ttl uint64) {
	ht.Expire(key, ttl)
}
// cacheGetHandler writes the value cached under the "key" form parameter.
func cacheGetHandler(w http.ResponseWriter, r *http.Request) {
r.ParseForm()
key := r.Form.Get("key")
res, err := getFromCache(key)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Write(res)
}
// cacheSetHandler stores the "value" form parameter under "key" with the
// requested expiry. A malformed expiry is now rejected instead of silently
// becoming zero.
func cacheSetHandler(w http.ResponseWriter, r *http.Request) {
	r.ParseForm()
	key := r.Form.Get("key")
	value := r.Form.Get("value")
	expiry, err := strconv.Atoi(r.Form.Get("expiry"))
	if err != nil {
		http.Error(w, "invalid expiry", http.StatusBadRequest)
		return
	}
	setCache(key, []byte(value), uint64(expiry))
}

// cacheExpireHandler resets the TTL of "key".
func cacheExpireHandler(w http.ResponseWriter, r *http.Request) {
	r.ParseForm()
	key := r.Form.Get("key")
	ttl, err := strconv.Atoi(r.Form.Get("expiry"))
	if err != nil {
		http.Error(w, "invalid expiry", http.StatusBadRequest)
		return
	}
	expireCache(key, uint64(ttl))
}
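
A minimal sketch of how these handlers might be mounted, still in the same package; the route names are assumptions, since the registration code is not part of this file.

// Serve wires the three handlers into a mux and blocks serving HTTP.
// The route names ("/get", "/set", "/expire") are illustrative only.
func Serve(addr string) error {
	mux := http.NewServeMux()
	mux.HandleFunc("/get", cacheGetHandler)
	mux.HandleFunc("/set", cacheSetHandler)
	mux.HandleFunc("/expire", cacheExpireHandler)
	// e.g. curl 'localhost:8080/set?key=a&value=b&expiry=60'
	return http.ListenAndServe(addr, mux)
}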
|
Salt intake, stroke, and cardiovascular disease: meta-analysis of prospective studies

Objective To assess the relation between the level of habitual salt intake and stroke or total cardiovascular disease outcome.
Design Systematic review and meta-analysis of prospective studies published 1966-2008.
Data sources Medline, Embase (from 1988), AMED (from 1985), CINAHL (from 1982), Psychinfo (from 1985), and the Cochrane Library.
Review methods For each study, relative risks and 95% confidence intervals were extracted and pooled with a random effect model, weighting for the inverse of the variance. Heterogeneity, publication bias, subgroup, and meta-regression analyses were performed. Criteria for inclusion were prospective adult population study, assessment of salt intake as baseline exposure, assessment of either stroke or total cardiovascular disease as outcome, follow-up of at least three years, indication of number of participants exposed and number of events across different salt intake categories.
Results There were 19 independent cohort samples from 13 studies, with 177 025 participants (follow-up 3.5-19 years) and over 11 000 vascular events. Higher salt intake was associated with greater risk of stroke (pooled relative risk 1.23, 95% confidence interval 1.06 to 1.43; P=0.007) and cardiovascular disease (1.14, 0.99 to 1.32; P=0.07), with no significant evidence of publication bias. For cardiovascular disease, sensitivity analysis showed that the exclusion of a single study led to a pooled estimate of 1.17 (1.02 to 1.34; P=0.02). The associations observed were greater the larger the difference in sodium intake and the longer the follow-up.
Conclusions High salt intake is associated with significantly increased risk of stroke and total cardiovascular disease. Because of imprecision in measurement of salt intake, these effect sizes are likely to be underestimated. These results support the role of a substantial population reduction in salt intake for the prevention of cardiovascular disease.

INTRODUCTION
During the past century, the evidence for the risks imposed on human health by excess salt consumption has become compelling. The causal relation between habitual dietary salt intake and blood pressure has been established through experimental, epidemiological, migration, and intervention studies. Most adult populations around the world have average daily salt intakes higher than 6 g, and for many in eastern Europe and Asia higher than 12 g. International recommendations suggest that average population salt intake should be less than 5-6 g per day. Population based intervention studies and randomised controlled clinical trials have shown that it is possible to achieve significant reductions in blood pressure with reduced salt intake in people with and without hypertension. 1 Based on the effects of high salt intake on blood pressure and on the prominent role of high blood pressure in promoting cardiovascular diseases, it has been suggested that a population-wide reduction in salt intake could substantially reduce the incidence of cardiovascular disease. 2 On the basis of the results of a meta-analysis of randomised controlled trials of salt reduction, 3 it was estimated that a reduction in habitual dietary salt intake of 6 g a day would be associated with reductions in systolic/diastolic blood pressure of 7/4 mm Hg in people with hypertension and 4/2 mm Hg in those without hypertension. At the population level these reductions in blood pressure could predict an average lower rate of 24% for stroke and 18% for coronary heart disease. 4 Validation of these predictions by a randomised controlled trial of the effects of long term reduction in dietary salt on morbidity and mortality from cardiovascular disease would provide definite proof. At present, a study of this kind is not available and, in fact, it is extremely unlikely that it will ever be performed because of practical difficulties, the long duration required, and high costs. Nevertheless, prospective cohort studies performed in the past three decades that measured the levels of dietary salt intake at baseline and recorded the incidence of vascular events have provided important indirect evidence. Most of these studies found evidence of such a relation, although few had enough power to attain statistical significance. We performed a systematic review and meta-analysis of the prospective studies of habitual dietary salt intake and incidence of stroke and total cardiovascular disease using strictly predetermined criteria for inclusion or exclusion. We assessed whether or not the overall evidence in prospective studies supports the presence of a relation between levels of dietary salt intake and both stroke and cardiovascular outcomes and calculated an estimate of the risk.

Data sources and searches
We performed a systematic search for publications using Medline, Embase (from 1988), AMED (from 1985), CINAHL (from 1982), Psychinfo (from 1985), and the Cochrane Library. Search strategies used subject headings and key words with no language restrictions. Further information was retrieved through a manual search of references from recent reviews and relevant published original studies. We examined reference lists of the relevant reviews, identified studies, and reviewed the cited literature. 5

Study selection
Two reviewers (LD and N-BK) independently extracted the data. Discrepancies about inclusion of studies and interpretation of data were resolved by arbitration (PS or FPC), and consensus was reached after discussion. In the case of missing data for potentially suitable studies, we contacted authors and asked them to provide the necessary information. To be included in the meta-analysis a published study had to be an original article published from January 1966 to December 2008, be a prospective population study, assess salt intake as baseline exposure, determine either stroke or total cardiovascular disease prospectively as the outcome, follow participants for at least three years, include an adult population, and indicate the number of participants exposed and the rate or number of events in different categories of salt intake. Of the 3246 publications retrieved, we identified 15 studies that met the inclusion criteria. One was a duplicate analysis of a single cohort previously described by the same authors 6 7 and another 8 referred to the same cohort (national health and nutrition examination survey (NHANES) I) analysed by other authors with more stringent criteria. 9 We therefore included 13 studies in the meta-analysis that provided suitable data on 19 population samples 6 10-21 (tables 1 and 2).

Data extraction
From the identified studies and respective populations we recorded publication reference, total number of participants, country, sex, age (mean, median, or range), recruitment time, follow-up (years), outcome reported (stroke, cardiovascular disease) and method of outcome assessment, number (rate) of events, method of assessing salt intake, and level of salt intake in different categories. Categorisation of salt intake differed among studies. Some reported the number of subjects exposed and the rate (number) of events across the distribution of salt intake; others reported differences in the event rate for a 100 mmol/day difference in sodium intake, as in the studies by He et al 9 and Tuomilehto et al. 13 In the last two cases we used the relative risk or hazard ratio reported by the authors for the analysis. In all the cases in which categorisation of the study participants by level of salt intake was available, we calculated the relative risk of higher versus lower salt intake by comparing the event rate in the two categories with a difference in average salt intake closest to 100 mmol of sodium or about 6 g of salt a day.

Statistical analysis
We evaluated the quality of the studies included in the meta-analysis with the Downs and Black score system. 21 We extracted relative risks or hazard ratios from the selected publications and calculated their standard errors from the respective confidence intervals. The value from each study and the corresponding standard error were transformed into their natural logarithms to stabilise the variances and to normalise their distribution. The pooled relative risk (and 95% confidence interval) was estimated with a random effect model, weighting for the inverse of the variance. 22 The heterogeneity among studies was tested by the Q statistic and quantified by the H and I² statistics. 23 The influence of individual studies, from which the meta-analysis estimates are derived, was examined by omitting one study at a time to see the extent to which inferences depend on a particular study or group of studies (sensitivity analysis). Subgroup or meta-regression analyses were used to identify associations between risk of stroke or cardiovascular disease and relevant study characteristics (age and sex of participants, year of publication, duration of follow-up, method of assessment of sodium intake, difference in sodium level, control for baseline blood pressure) as possible sources of heterogeneity. We used funnel plot asymmetry to detect publication bias and applied Egger's regression test to measure any asymmetry. 24 25 All statistical analyses were performed with MIX software version 1.7 26 and Stata software for meta-regression analysis.

Characteristics of the study cohorts
We included in the meta-analysis 13 studies reporting on 19 independent cohorts (table 1). There were 177 025 participants from six different countries (six studies from the United States, two each from Finland and Japan, one each from the Netherlands, Scotland, and Taiwan). Eleven studies recruited both male and female participants, while two studies included only men. Follow-up ranged from 3.5 to 19 years. Four studies reported only stroke events (either total stroke rate or stroke deaths), three only cardiovascular disease (total cardiovascular disease rate or cardiovascular disease deaths), and six reported both. Salt intake was assessed by 24 hour dietary recall (n=4), food frequency questionnaire (n=4), 24 hour urine excretion (n=4), and questionnaire (n=1).
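
To make the pooling step under Statistical analysis concrete, here is a minimal computational sketch in Go. It implements inverse-variance random-effects pooling with the DerSimonian-Laird estimator of the between-study variance (a common choice; the paper does not state which estimator its software uses), and the three input studies are invented for illustration, not taken from the cohorts above.

package main

import (
	"fmt"
	"math"
)

// study holds a relative risk and its 95% confidence interval.
type study struct{ rr, lo, hi float64 }

func main() {
	// Made-up example inputs, not the cohort data from the paper.
	studies := []study{{1.32, 1.02, 1.71}, {1.11, 0.95, 1.30}, {0.97, 0.80, 1.18}}

	y := make([]float64, len(studies)) // log relative risks
	v := make([]float64, len(studies)) // within-study variances
	var ws, wy, w2 float64
	for i, s := range studies {
		y[i] = math.Log(s.rr)                                // log-transform to normalise the distribution
		se := (math.Log(s.hi) - math.Log(s.lo)) / (2 * 1.96) // SE recovered from the 95% CI
		v[i] = se * se
		w := 1 / v[i]
		ws, wy, w2 = ws+w, wy+w*y[i], w2+w*w
	}
	yFixed := wy / ws // fixed-effect (inverse-variance) pooled log RR

	// Cochran's Q and the DerSimonian-Laird between-study variance tau².
	var q float64
	for i := range y {
		q += (y[i] - yFixed) * (y[i] - yFixed) / v[i]
	}
	df := float64(len(y) - 1)
	tau2 := math.Max(0, (q-df)/(ws-w2/ws))

	// Random-effects pooling: weight by the inverse of (within + between) variance.
	var rs, ry float64
	for i := range y {
		w := 1 / (v[i] + tau2)
		rs, ry = rs+w, ry+w*y[i]
	}
	pooled, se := ry/rs, math.Sqrt(1/rs)
	fmt.Printf("pooled RR %.2f (95%% CI %.2f to %.2f), I² = %.0f%%\n",
		math.Exp(pooled), math.Exp(pooled-1.96*se), math.Exp(pooled+1.96*se),
		100*math.Max(0, (q-df)/q))
}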
In total there were 5346 strokes reported and 5161 total cardiovascular disease events. Of the 11 studies that included both men and women, five reported outcomes separately. 6 9 13-15 The TOHP study included two different cohorts (I and II) 17 and the study by He and coworkers provided separate findings for men and women or, alternatively, for normal weight and overweight participants. 9 Overall, data on the relation between salt consumption and stroke were available from 14 cohorts and on the relation between salt intake and cardiovascular disease from 14 cohorts. The overall study quality, evaluated by the Downs and Black score, averaged 15.5 (range 12-18) on a scale of 19 (table 1).

Salt intake and risk of stroke
Table 2 provides data on the relation between salt intake and risk of stroke in each of the 14 cohorts included in our study. Figure 1 shows the results of the pooled analysis. In the pooled analysis, higher salt intake was associated with greater risk of stroke (relative risk 1.23, 95% confidence interval 1.06 to 1.43; P=0.007). There was significant heterogeneity between studies (P=0.04; I²=61%). The funnel plot did not show asymmetry, thus excluding publication bias (Egger's test P=0.26; see appendix on bmj.com). As shown in figure 1 for the individual cohorts included in the analysis, we found a trend towards a direct association between salt intake and risk of stroke in nine cohorts, which was significant in four. We observed a non-significant inverse trend in three cohorts. Sensitivity analysis showed that the pooled estimate of the effect of salt intake on risk of stroke did not vary substantially with the exclusion of any one study; in particular, the exclusion of the study by Umesawa et al, 19 which accounted for about 40% of all participants in the meta-analysis and nearly 20% of all strokes, resulted in a pooled relative risk of 1.19 (1.03 to 1.39), P=0.022.

Salt intake and risk of cardiovascular disease
Table 2 provides data on the association between salt intake and the risk of cardiovascular disease in 14 cohorts. In the pooled analysis, there was an association between higher salt intake and risk of cardiovascular disease (1.14, 0.99 to 1.32; P=0.07) (fig 2). The heterogeneity between studies was significant (P<0.01; I²=80%), but the funnel plot did not show asymmetry, thus excluding publication bias (Egger's test: P=0.39; see appendix on bmj.com). The evaluation of individual studies showed a trend towards a direct association between salt intake and risk of cardiovascular disease in 10 cohorts, with significantly higher relative risk in six. An inverse trend was observed in four cohorts and was significant in one. Sensitivity analysis showed that the exclusion of the only study showing a significant inverse trend 6 led to a pooled estimate of relative risk of 1.17 (1.02 to 1.34), P=0.02 (fig 2). Further exclusion of the study by Umesawa et al, 19 which accounted for over 50% of all participants and about 40% of all cardiovascular disease events, led to a pooled relative risk of 1.14 (0.99 to 1.31), P=0.06.

Sources of heterogeneity
Age - Meta-regression analyses indicated no association between mean age of study participants and effect of sodium intake on the risk of stroke: exp(b)=1.01 (0.99 to 1.03). Likewise, meta-regression showed no association between age and effect of sodium intake on the risk of cardiovascular disease: exp(b)=0.99 (0.97 to 1.02).
Sex - Three studies reported data for men and women separately for incidence of stroke. 6 13 14 The pooled estimates from these three studies were 1.30 (0.64 to 2.65; P=0.47) and 1.56 (1.14 to 2.13; P<0.01), respectively. Three studies reported data for men and women separately for incidence of cardiovascular disease. 9 12 13 The pooled estimates were 1.31 (0.97 to 1.77; P=0.08) and 1.27 (1.05 to 1.55; P=0.01), respectively.
Method of assessment of sodium intake - In nine cohorts that used food frequency questionnaires or dietary recall for the evaluation of habitual sodium intake the pooled risk estimate for stroke was 1.25 (1.03 to 1.51; P=0.02). 10-12 14 15 18 19 In five cohorts that used 24 hour urine collection the pooled risk estimate was 1.16 (0.94 to 1.44; P=0.17). 6 13 16 In five cohorts that used food frequency questionnaires or dietary recall the pooled estimate for cardiovascular disease was 1.21 (0.92 to 1.59; P=0.17), 9 15 19 20 and in nine cohorts that used 24 hour urine collection the pooled risk estimate for cardiovascular disease was 1.10 (0.92 to 1.31; P=0.32). 6 12 13 16 17
Baseline blood pressure or hypertension status - In the studies that provided relative risk estimates adjusted for baseline blood pressure or hypertension status, the pooled relative risk was 1.22 (1.02 to 1.45; P=0.03) for stroke (nine cohorts) and 1.25 (0.99 to 1.57; P=0.06) for cardiovascular disease (seven cohorts).
Baseline body mass index (BMI) or body weight - In the studies that provided relative risk estimates adjusted for baseline BMI or body weight, the pooled relative risk was 1.20 (1.02 to 1.40; P=0.02) for stroke (10 cohorts) and 1.22 (1.00 to 1.49; P=0.05) for cardiovascular disease (10 cohorts).
Length of follow-up - Meta-regression analysis showed a significant association between duration of follow-up and the effect of sodium on the risk of stroke. The log relative risk was estimated to increase by 0.07 per increase of one year of follow-up: exp(b)=1.07 (1.04 to 1.10). The estimated variance between studies (heterogeneity) was reduced from 0.05 to 0.02. In contrast, however, we found no association between duration of follow-up and effect of sodium on the risk of cardiovascular disease: exp(b)=0.98 (0.95 to 1.02).
Dose-response analysis - Variance weighted least squares regression of the risk of stroke on the study specific difference between higher and lower categories of sodium intake (differences in sodium intake values reported in table 2) provided evidence of a significant direct association (exp(b)=1.06 (1.03 to 1.10)), indicating a 6% increase in the rate of stroke for every 50 mmol/day difference in sodium intake. There was a similar trend for the risk of cardiovascular disease (exp(b)=1.19 (0.69 to 2.07)), which was not significant.
Time trend (year of publication) - Starting with the first published study 10 we calculated the cumulative pooled relative risk by stepwise addition of the results of the other available studies up to the last one published in July 2008. 19 Figure 3 shows the results of these cumulative meta-analyses. The pooled relative risk for stroke stabilised early in the 1.20-1.30 interval and achieved significance starting in 2001. Similar results were obtained in the analysis of cardiovascular disease, for which the pooled relative risk estimate also stabilised early and close to the final value, achieving significance starting in 1999.

DISCUSSION
This meta-analysis shows unequivocally that higher salt intake is associated with a greater incidence of strokes and total cardiovascular events.
Our systematic review identified 13 relevant and suitable studies published from 1966 to 2008. These studies provided evidence from 170 000 people contributing overall more than 10 000 vascular events. Cardiovascular diseases are the major cause of death among people aged over 60 and second among those aged 15-59. According to the World Health Organization, 62% of all strokes and 49% of coronary heart disease events are attributable to high blood pressure. 27 The direct causal relation between levels of dietary salt intake and blood pressure at the population level has also been recognised. 1 28 29 Given the graded causal relation between blood pressure and cardiovascular disease, beginning at around 115 mm Hg systolic pressure, 30 it is reasonable to expect considerable benefit on the rate of cardiovascular disease from a reduction in salt intake.

Association between salt intake, stroke, and cardiovascular disease
The results of this meta-analysis provide evidence of a direct association between high dietary salt intake and risk of stroke. Despite the considerable heterogeneity between the 14 cohorts available for the analysis, the results are strengthened by the lack of major publication bias and by the observation of a significant association in four individual cohorts included in the analysis, whereas in none was an inverse statistical association apparent. The pooled relative risk indicates a 23% greater risk of stroke for an average difference in sodium intake (weighted for the population size of each study) of 86 mmol (equivalent to about 5 g of salt a day). Sensitivity analysis with the exclusion of a single study, on the basis of its particular weight with regard to both number of participants and events, only moderately reduced the difference in risk (from 23% to 19%), which remained significant. Likewise, the pooled analysis of the 12 cohorts for which data on cardiovascular disease outcome were available (after the exclusion of a single outlier) showed a direct association between higher salt intake and risk of cardiovascular disease, with a pooled relative risk of 1.17. 4 A trend in this direction occurred in as many as nine of the 12 cohorts and was significant in six. There was an inverse trend in three cohorts. 15 16 20 The study by Alderman et al, 6 showing a relative risk in men of 0.37, has been challenged because of the low number of events recorded and several methodological inadequacies, the most important being the evaluation of habitual salt consumption on the basis of 24 hour urine collection obtained shortly after the study participants had been instructed to reduce their usual level of sodium intake. 31 The results of sensitivity analysis indicate that the exclusion of this single study from our meta-analysis strengthens the estimate. The additional exclusion of a large Japanese cohort providing a high proportion of participants and events overall 19 only slightly reduced the pooled relative risk estimate (from 1.17 to 1.14) and the level of significance (to 0.06).

Evaluation of main sources of heterogeneity
We used subgroup and meta-regression analyses to assess the influence of several factors on the association between habitual sodium intake and risk of stroke or cardiovascular disease. For both stroke and cardiovascular disease outcomes, separate analyses of the studies were performed. Eight studies provided data adjusted for baseline blood pressure or hypertension status. Separate evaluations of these studies provided relative risk estimates for both stroke and cardiovascular disease similar to those obtained for the total number of studies included in the meta-analysis. This finding seems at variance with the hypothesis that the effect of salt on cardiovascular risk is substantially mediated by its unfavourable action on blood pressure. Adjustment for baseline blood pressure or hypertension status only partially corrects for the overall influence of blood pressure on the study results, inasmuch as it does not account for changes in blood pressure occurring during the observation period, a problem more relevant the longer the follow-up period. Part of the association observed, however, might be mediated by factors other than blood pressure, and there is evidence in the literature of deleterious effects of high salt intake on left ventricular mass, 32-34 arterial stiffness, 35 and renal function, 36 37 which are not totally explained by its effect on blood pressure. Overweight and obesity are often associated with high blood pressure and are causally involved in the development of hypertension. 38 Nine out of 13 studies included in the meta-analysis provided relative risk estimates adjusted for BMI or body weight at entry into the study. 9 14-21 Therefore, as for blood pressure, the association between habitual sodium intake and risk of stroke and cardiovascular disease seems partly independent from the influence of excess body weight. Two studies, however, reported a significant interaction between overweight and habitual sodium intake on the risk of cardiovascular events. 9 13 This finding is consistent with the description of alterations in renal tubular sodium handling in obese individuals, making them particularly sensitive to the effects of high salt intake. 39

Study limitations
The studies included in our meta-analysis were heterogeneous regarding sample size, number of events, and duration of follow-up, with a few cohorts having small numbers. In the calculation of the pooled relative risk we weighted the results of the individual studies for sample size but did not account for the duration of follow-up. Our meta-regression analysis indicated that the longer the follow-up the greater the effect of habitual sodium intake on the risk of stroke but not, apparently, on the risk of total cardiovascular events. Possible explanations for this discrepancy are the higher mean age at occurrence of stroke, which would increase the chances of an event the longer the follow-up, and the closer relation of high blood pressure to stroke compared with other types of vascular events. The estimate of the baseline population salt intake in each study was based on a single measurement (whether through 24 hour urine collection or dietary assessment). We were therefore unable to correct for regression dilution bias. Because of the large day to day variability within people in salt consumption and the consequent diluting effect imposed on the average estimate of exposure, our estimates of risk are probably underestimated. Categorisation of salt intake was also heterogeneous: some studies stratified the population by categories of sodium intake and compared cardiovascular outcomes across categories; other studies gave a difference in outcome for a given difference (for example, 100 mmol/24 h) in sodium intake or excretion. To standardise our comparison between higher and lower salt consumption we sought to refer to a difference as close as possible to 100 mmol, or 6 g a day, between high and low salt intake. Nevertheless, there remained appreciable differences in this respect between studies. We tried to overcome the problem with meta-regression analysis, which provided evidence of a highly significant dose-response relation between the difference in sodium intake and the increase in risk of both stroke and cardiovascular disease.

Implications
The habitual salt intake in most Western countries is close to 10 g a day (and much higher in many Eastern European and Asian countries), and we calculated that the average difference between higher and lower salt intake across the study cohorts included in our meta-analysis was 5 g a day. Given this approach, we believe that, despite the inherent inaccuracies, the results of our meta-analysis are applicable to real life conditions. A reduction of 5 g (about one teaspoon) of salt would bring consumption close to the WHO recommended level (5 g a day at the population level). According to a recent report of the World Heart Federation there are over 5.5 million deaths a year from stroke throughout the world and close to 17.5 million deaths a year from cardiovascular disease. 40 Given that the case fatality rate for stroke is estimated at one in three and that for total cardiovascular disease at one in five, a 23% reduction in the rate of stroke and a 17% overall reduction in the rate of cardiovascular disease attributable to a reduction in population salt intake could avert some one and a quarter million deaths from stroke and almost three million deaths from cardiovascular disease each year. Many studies have also shown that a reduction in salt intake is cost effective, arguing for the more widespread introduction of national programmes to reduce dietary salt consumption. In recent years, a few countries have made some progress towards reduction of habitual salt intake through a voluntary approach 1 or by regulation, as in Finland, 42 but levels of salt consumption are still far from the WHO recommended targets. There are many reasons for these delays. One barrier to a more effective implementation of public health policies has been the historical opposition of the food industry, 44 based on the argument that the available evidence does not show significant benefits on hard end points at a population level from a moderate reduction in salt intake. Our study now clearly addresses those doubts. Some progress has been made in the past few years by closer collaboration between governments, public health bodies, and some sectors of the industry on a "voluntary" basis, as in the UK, with the reformulation of many food items towards a lower salt content and proposals of improved labelling. These efforts have led to a reduction of 0.9 g a day (or about 10%) in population salt intake in four years (from 9.5 to 8.6 g a day), still far from the initial target of 6 g a day that was set in the UK. While the voluntary approach is the preferred choice for many governments, the "regulatory" approach has advantages, 45 sometimes being the most efficient, effective, and cost effective way of achieving public health targets. For population salt intake to approach the recommended targets within a reasonable time frame, an "upstream" approach is now necessary alongside the traditional "downstream" public health approach based on health promotion and behavioural changes.
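
The headline projections in the Implications section follow from simple arithmetic on the quoted totals; here is a worked check (death totals as cited from the World Heart Federation report):

package main

import "fmt"

func main() {
	// Annual global deaths cited from the World Heart Federation report.
	strokeDeaths, cvdDeaths := 5.5e6, 17.5e6
	// Deaths scale with the event rate, so the quoted risk reductions apply directly.
	fmt.Printf("stroke deaths averted: %.2f million\n", 0.23*strokeDeaths/1e6) // ~1.27, "one and a quarter million"
	fmt.Printf("CVD deaths averted: %.2f million\n", 0.17*cvdDeaths/1e6)       // ~2.98, "almost three million"
}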
Contributors: PS and FPC conceived the study aims and design, contributed to the systematic review and data extraction, performed the analysis, interpreted the results, and drafted the manuscript. LD'E and N-BK contributed to the data extraction, interpretation of results, and revision of the manuscript. PS is guarantor.

WHAT IS ALREADY KNOWN ON THIS TOPIC
Experimental, epidemiological, migration, and intervention studies have shown a causal relation between habitual dietary salt intake and blood pressure
Population based intervention studies and meta-analyses of randomised controlled trials have shown that it is possible to achieve significant reductions in blood pressure with reduced salt intake in both hypertensive and normotensive individuals

WHAT THIS STUDY ADDS
Higher salt intake is associated with significantly greater incidence of strokes and total cardiovascular events, with a dose dependent association
A difference of 5 g a day in habitual salt intake is associated with a 23% difference in the rate of stroke and 17% difference in the rate of total cardiovascular disease
Each year a 5 g reduction in daily salt intake at the population level could avert some one and a quarter million deaths from stroke and almost three million deaths from cardiovascular disease worldwide |
Multiple visceral and subcutaneous nodules in a 4-month-old infant Juvenile xanthogranuloma (JXG) is a benign proliferation of non-Langerhans histiocytic cells that frequently involves the skin of infants and young adults. Usually, the lesion has characteristic clinical features that allow prompt recognition by the doctor, and confirmation and management are easily achieved by excisional biopsy. Occasionally, however, JXG presents as multiple deep tissue or visceral masses, making its clinical diagnosis more difficult. In this setting, fine needle aspiration (FNA) cytology becomes a valuable diagnostic aid, as the procedure allows a rapid cytological diagnosis and material may be obtained for complementary studies. There have been very few case reports of the cytological findings of JXG. We describe the cytological, immunocytochemical and ultrastructural findings in a case of systemic JXG diagnosed in material obtained by FNA, with subsequent histopathological confirmation. |
// Source: rkhang7/bilisoleil - app/src/main/java/com/yoyiyi/soleil/adapter/discover/section/GameCenterBookGiftSection.java
package com.yoyiyi.soleil.adapter.discover.section;
import androidx.recyclerview.widget.RecyclerView;
import androidx.recyclerview.widget.StaggeredGridLayoutManager;
import com.yoyiyi.soleil.R;
import com.yoyiyi.soleil.adapter.discover.GameCenterBookGiftAdapter;
import com.yoyiyi.soleil.bean.discover.GameCenter;
import com.yoyiyi.soleil.widget.section.StatelessSection;
import com.yoyiyi.soleil.widget.section.ViewHolder;
import java.util.List;
/**
 * @author zzq  E-mail: <EMAIL>
 * @date Created: 2017/5/30 21:44
 * Description: "new game pre-order" (新游预约) section of the game center
 */
public class GameCenterBookGiftSection extends StatelessSection {
private List<GameCenter.BookGiftBean> mList;
public GameCenterBookGiftSection(List<GameCenter.BookGiftBean> list) {
super(R.layout.layout_item_game_center_head, R.layout.layout_item_game_center_book_gift, R.layout.layout_empty);
this.mList = list;
}
@Override
public void onBindHeaderViewHolder(ViewHolder holder) {
holder.setText(R.id.tv_title, "新游预约");
}
@Override
public void onBindFooterViewHolder(ViewHolder holder) {
// The footer hosts a nested horizontal RecyclerView backed by the book/gift list.
RecyclerView recyclerView = holder.getView(R.id.recycler);
recyclerView.setHasFixedSize(true);
recyclerView.setNestedScrollingEnabled(false);
StaggeredGridLayoutManager layoutManager = new StaggeredGridLayoutManager(1,
StaggeredGridLayoutManager.HORIZONTAL);
recyclerView.setLayoutManager(layoutManager);
recyclerView.setAdapter(new GameCenterBookGiftAdapter(mList));
}
}
|
import {
Component,
EventEmitter,
Input,
OnChanges,
OnInit,
Output,
SimpleChanges,
} from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';
import { Data } from '../../core/modles/data.model';
import { GenericValidator } from 'src/app/core/validators/generic-validator';
@Component({
selector: 'app-data-edit',
templateUrl: './data-edit.component.html',
})
export class DataEditComponent implements OnInit, OnChanges {
pageTitle = 'Data Edit';
errorMessage = '';
dataForm: FormGroup;
@Input() selectedData: Data;
@Output() create = new EventEmitter<boolean>();
@Output() update = new EventEmitter<void>();
@Output() delete = new EventEmitter<Data>();
// Use with the generic validation message class
displayMessage: { [key: string]: string } = {};
private validationMessages: { [key: string]: { [key: string]: string } };
private genericValidator: GenericValidator;
constructor(private fb: FormBuilder) {
// Defines all of the validation messages for the form.
// These could instead be retrieved from a file or database.
this.validationMessages = {
name: {
required: 'Data name is required.',
minlength: 'Data name must be at least three characters.',
maxlength: 'Data name cannot exceed 50 characters.',
},
category: {
required: 'Data category is required.',
}
};
// Define an instance of the validator for use with this form,
// passing in this form's set of validation messages.
this.genericValidator = new GenericValidator(this.validationMessages);
}
ngOnInit(): void {
// Define the form group
this.dataForm = this.fb.group({
name: [
'',
[
Validators.required,
Validators.minLength(3),
Validators.maxLength(50),
],
],
category: ['', Validators.required],
description: '',
});
// Watch for value changes for validation
this.dataForm.valueChanges.subscribe(
() =>
(this.displayMessage = this.genericValidator.processMessages(
this.dataForm
))
);
}
ngOnChanges(changes: SimpleChanges): void {
let change = changes['selectedData'];
if (change && !change.firstChange) {
this.displayData(change.currentValue);
}
}
// Also validate on blur
// Helpful if the user tabs through required fields
blur(): void {
this.displayMessage = this.genericValidator.processMessages(
this.dataForm
);
}
displayData(data: Data | null): void {
if (data) {
// Reset the form back to pristine
this.dataForm.reset();
// Display the appropriate page title
if (data.id === '0') {
this.pageTitle = 'Add Data';
} else {
this.pageTitle = `Edit Data: ${data.name}`;
}
// Update the data on the form
this.dataForm.patchValue({
name: data.name,
category: data.category,
description: data.description,
});
}
}
cancelEdit(data: Data): void {
// Redisplay the currently selected data
// replacing any edits made
this.displayData(data);
}
deleteData(data: Data): void {
this.delete.emit(data);
}
saveData(originalData: Data): void {
if (this.dataForm.valid) {
if (this.dataForm.dirty) {
// Copy over all of the original data properties
// Then copy over the values from the form
// This ensures values not on the form, such as the Id, are retained
const data = { ...originalData, ...this.dataForm.value };
if (data.id === '0') {
this.create.emit(data);
} else {
this.update.emit(data);
}
}
}
}
}
|
// ResolveFunc resolves the ABI of the "h" module and returns the requested
// importable function.
func (p *Process) ResolveFunc(module, field string) exec.FunctionImport {
switch module {
case "h":
switch field {
case "h":
return func(vm *exec.VirtualMachine) int64 {
frame := vm.GetCurrentFrame()
data := frame.Locals[0]
p.Output = append(p.Output, byte(data))
return 0
}
default:
panic("impossible state")
}
default:
panic("impossible state")
}
} |
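
A minimal sketch of how this resolver might be driven, assuming the exec package is a WebAssembly interpreter in the style of github.com/perlin-network/life/exec (whose ImportResolver interface this method matches), that Process also implements ResolveGlobal, and that the module exports an entry point named "main" (an assumption):

// Instantiate a VM that routes "h" imports through the Process above.
vm, err := exec.NewVirtualMachine(wasmBytes, exec.VMConfig{}, p, nil)
if err != nil {
	panic(err)
}
entry, ok := vm.GetFunctionExport("main") // assumed export name
if !ok {
	panic("entry point not found")
}
if _, err := vm.Run(entry); err != nil {
	panic(err)
}
// p.Output now holds every byte emitted through the "h" import.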
/* Copyright (c) 1993, Microsoft Corporation, all rights reserved
**
** raschap.h
** Remote Access PPP Challenge Handshake Authentication Protocol
**
** 11/05/93 <NAME>
*/
#ifndef _RASCHAP_H_
#define _RASCHAP_H_
#include "md5.h"
#include <ntsamp.h>
#define TRACE_RASCHAP (0x00010000|TRACE_USE_MASK|TRACE_USE_MSEC|TRACE_USE_DATE)
#define TRACE(a) TracePrintfExA(g_dwTraceIdChap,TRACE_RASCHAP,a )
#define TRACE1(a,b) TracePrintfExA(g_dwTraceIdChap,TRACE_RASCHAP,a,b )
#define TRACE2(a,b,c) TracePrintfExA(g_dwTraceIdChap,TRACE_RASCHAP,a,b,c )
#define TRACE3(a,b,c,d) TracePrintfExA(g_dwTraceIdChap,TRACE_RASCHAP,a,b,c,d )
#define DUMPW(X,Y) TraceDumpExA(g_dwTraceIdChap,1,(LPBYTE)X,Y,4,1,NULL)
#define DUMPB(X,Y) TraceDumpExA(g_dwTraceIdChap,1,(LPBYTE)X,Y,1,1,NULL)
//General macros
#define GEN_RAND_ENCODE_SEED ((CHAR) ( 1 + rand() % 250 ))
/* CHAP packet codes from CHAP spec except ChangePw.
*/
#define CHAPCODE_Challenge 1
#define CHAPCODE_Response 2
#define CHAPCODE_Success 3
#define CHAPCODE_Failure 4
#define CHAPCODE_ChangePw1 5
#define CHAPCODE_ChangePw2 6
#define CHAPCODE_ChangePw3 7
#define MAXCHAPCODE 7
/* Returned by receive buffer parsing routines that discover the packet is
** corrupt, usually because the length fields don't make sense.
*/
#define ERRORBADPACKET (DWORD )-1
/* Maximum challenge and response lengths.
*/
#define MAXCHALLENGELEN 255
#define MSRESPONSELEN (LM_RESPONSE_LENGTH + NT_RESPONSE_LENGTH + 1)
#define MD5RESPONSELEN MD5_LEN
#define MAXRESPONSELEN max( MSRESPONSELEN, MD5RESPONSELEN )
#define MAXINFOLEN 1500
/* Defines states within the CHAP protocol.
*/
#define CHAPSTATE enum tagCHAPSTATE
CHAPSTATE
{
CS_Initial,
CS_WaitForChallenge,
CS_ChallengeSent,
CS_ResponseSent,
CS_Retry,
CS_ChangePw,
CS_ChangePw1,
CS_ChangePw2,
CS_ChangePw1Sent,
CS_ChangePw2Sent,
CS_WaitForAuthenticationToComplete1,
CS_WaitForAuthenticationToComplete2,
CS_Done
};
/* Defines the change password version 1 (NT 3.5) response data buffer.
*/
#define CHANGEPW1 struct tagCHANGEPW1
CHANGEPW1
{
BYTE abEncryptedLmOwfOldPw[ ENCRYPTED_LM_OWF_PASSWORD_LENGTH ];
BYTE abEncryptedLmOwfNewPw[ ENCRYPTED_LM_OWF_PASSWORD_LENGTH ];
BYTE abEncryptedNtOwfOldPw[ ENCRYPTED_NT_OWF_PASSWORD_LENGTH ];
BYTE abEncryptedNtOwfNewPw[ ENCRYPTED_NT_OWF_PASSWORD_LENGTH ];
BYTE abPasswordLength[ 2 ];
BYTE abFlags[ 2 ];
};
/* CHANGEPW1.abFlags bit definitions.
*/
#define CPW1F_UseNtResponse 0x00000001
/* Define the change password version 2 (NT 3.51) response data buffer.
*/
#define CHANGEPW2 struct tagCHANGEPW2
CHANGEPW2
{
BYTE abNewEncryptedWithOldNtOwf[ sizeof(SAMPR_ENCRYPTED_USER_PASSWORD) ];
BYTE abOldNtOwfEncryptedWithNewNtOwf[ ENCRYPTED_NT_OWF_PASSWORD_LENGTH ];
BYTE abNewEncryptedWithOldLmOwf[ sizeof(SAMPR_ENCRYPTED_USER_PASSWORD) ];
BYTE abOldLmOwfEncryptedWithNewNtOwf[ ENCRYPTED_NT_OWF_PASSWORD_LENGTH ];
BYTE abLmResponse[ LM_RESPONSE_LENGTH ];
BYTE abNtResponse[ NT_RESPONSE_LENGTH ];
BYTE abFlags[ 2 ];
};
/* CHANGEPW2.abFlags bit definitions.
*/
#define CPW2F_UseNtResponse 0x00000001
#define CPW2F_LmPasswordPresent 0x00000002
/* Define the change password for new MS-CHAP
*/
#define CHANGEPW3 struct tagCHANGEPW3
CHANGEPW3
{
BYTE abEncryptedPassword[ 516 ];
BYTE abEncryptedHash[ 16 ];
BYTE abPeerChallenge[ 24 ];
BYTE abNTResponse[ 24 ];
BYTE abFlags[ 2 ];
};
/* Union for storage efficiency (never need both formats at same time).
*/
#define CHANGEPW union tagCHANGEPW
CHANGEPW
{
/* This dummy field is included so the MIPS compiler will align the
** structure on a DWORD boundary. Normally, MIPS does not force alignment
** if the structure contains only BYTEs or BYTE arrays. This protects us
** from alignment faults should SAM or LSA interpret the byte arrays as
** containing some necessarily aligned type, though currently they do not.
*/
DWORD dwAlign;
CHANGEPW1 v1;
CHANGEPW2 v2;
CHANGEPW3 v3;
};
/* Defines the WorkBuf stored for us by the PPP engine.
*/
#define CHAPWB struct tagCHAPWB
CHAPWB
{
/* CHAP encryption method negotiated (MD5 or Microsoft extended). Note
** that server does not support MD5.
*/
BYTE bAlgorithm;
/* True if role is server, false if client.
*/
BOOL fServer;
/* The port handle on which the protocol is active.
*/
HPORT hport;
/* Number of authentication attempts left before we shut down. (Microsoft
** extended CHAP only)
*/
DWORD dwTriesLeft;
/* Client's credentials.
*/
CHAR szUserName[ UNLEN + DNLEN + 2 ];
CHAR szOldPassword[ PWLEN + 1 ];
CHAR szPassword[ PWLEN + 1 ];
CHAR szDomain[ DNLEN + 1 ];
/* The LUID is a logon ID required by LSA to determine the response. It
** must be determined in calling app's context and is therefore passed
** down. (client only)
*/
LUID Luid;
/* The challenge sent or received in the Challenge Packet and the length
** in bytes of same. Note that LUID above keeps this DWORD aligned.
*/
BYTE abChallenge[ MAXCHALLENGELEN ];
BYTE cbChallenge;
BYTE abComputedChallenge[ MAXCHALLENGELEN ];
/* Indicates whether a new challenge was provided in the last Failure
** packet. (client only)
*/
BOOL fNewChallengeProvided;
/* The response sent or received in the Response packet and the length in
** bytes of same. Note the BOOL above keeps this DWORD aligned.
*/
BYTE abResponse[ MAXRESPONSELEN ];
BYTE cbResponse;
/* The change password response sent or received in the ChangePw or
** ChangePw2 packets.
*/
CHANGEPW changepw;
/* The LM and user session keys retrieved when credentials are successfully
** authenticated.
*/
LM_SESSION_KEY keyLm;
USER_SESSION_KEY keyUser;
/* This flag indicates that the session key has been calculated
** from the password or retrieved from LSA.
*/
BOOL fSessionKeysObtained;
/* On the client, this contains the pointer to the MPPE keys. On the server
** this field is not used.
*/
RAS_AUTH_ATTRIBUTE * pMPPEKeys;
/* The current state in the CHAP protocol.
*/
CHAPSTATE state;
/* Sequencing ID expected on next packet received on this port and the
** value to send on the next outgoing packet.
*/
BYTE bIdExpected;
BYTE bIdToSend;
/* The final result, used to duplicate the original response in subsequent
** response packets. This is per CHAP spec to cover lost Success/Failure
** case without allowing malicious client to discover alternative
** identities under the covers during a connection. (applies to server
** only)
*/
PPPAP_RESULT result;
HPORT hPort;
DWORD dwInitialPacketId;
DWORD fConfigInfo;
RAS_AUTH_ATTRIBUTE * pAttributesFromAuthenticator;
//
// Used to send authentication request to backend server
//
RAS_AUTH_ATTRIBUTE * pUserAttributes;
// CHAR chSeed; //Seed for encoding password.
//
// Data Blob information for password
//
DATA_BLOB DBPassword;
//
// Data Blob information for oldpassword
//
DATA_BLOB DBOldPassword;
};
/* Prototypes.
*/
DWORD
ChapInit(
IN BOOL fInitialize
);
DWORD ChapSMakeMessage( CHAPWB*, PPP_CONFIG*, PPP_CONFIG*, DWORD, PPPAP_RESULT*,
PPPAP_INPUT* );
DWORD
MakeAuthenticationRequestAttributes(
IN CHAPWB* pwb,
IN BOOL fMSChap,
IN BYTE bAlgorithm,
IN CHAR* szUserName,
IN BYTE* pbChallenge,
IN DWORD cbChallenge,
IN BYTE* pbResponse,
IN DWORD cbResponse,
IN BYTE bId
);
DWORD
GetErrorCodeFromAttributes(
IN CHAPWB* pwb
);
DWORD
LoadChapHelperFunctions(
VOID
);
DWORD ChapCMakeMessage( CHAPWB*, PPP_CONFIG*, PPP_CONFIG*, DWORD, PPPAP_RESULT*,
PPPAP_INPUT* );
DWORD ChapBegin( VOID**, VOID* );
DWORD ChapEnd( VOID* );
DWORD ChapMakeMessage( VOID*, PPP_CONFIG*, PPP_CONFIG*, DWORD, PPPAP_RESULT*,
PPPAP_INPUT* );
DWORD GetChallengeFromChallenge( CHAPWB*, PPP_CONFIG* );
DWORD MakeChangePw1Message( CHAPWB*, PPP_CONFIG*, DWORD );
DWORD MakeChangePw2Message( CHAPWB*, PPP_CONFIG*, DWORD );
DWORD MakeChangePw3Message( CHAPWB*, PPP_CONFIG*, DWORD, BOOL );
DWORD GetCredentialsFromResponse( PPP_CONFIG*, BYTE, CHAR*, BYTE* );
DWORD GetInfoFromChangePw1( PPP_CONFIG*, CHANGEPW1* );
DWORD GetInfoFromChangePw2( PPP_CONFIG*, CHANGEPW2*, BYTE* );
DWORD GetInfoFromChangePw3( PPP_CONFIG*, CHANGEPW3*, BYTE* );
VOID GetInfoFromFailure( CHAPWB*, PPP_CONFIG*, DWORD*, BOOL*, DWORD* );
BYTE HexCharValue( CHAR );
DWORD MakeChallengeMessage( CHAPWB*, PPP_CONFIG*, DWORD );
DWORD MakeResponseMessage( CHAPWB*, PPP_CONFIG*, DWORD, BOOL );
VOID ChapMakeResultMessage( CHAPWB*, DWORD, BOOL, PPP_CONFIG*, DWORD );
DWORD StoreCredentials( CHAPWB*, PPPAP_INPUT* );
DWORD
ChapChangeNotification(
VOID
);
DWORD
GetChallenge(
OUT PBYTE pChallenge
);
VOID
EndLSA(
VOID
);
DWORD
InitLSA(
VOID
);
DWORD
MakeChangePasswordV1RequestAttributes(
IN CHAPWB* pwb,
IN BYTE bId,
IN PCHAR pchIdentity,
IN PBYTE Challenge,
IN PENCRYPTED_LM_OWF_PASSWORD pEncryptedLmOwfOldPassword,
IN PENCRYPTED_LM_OWF_PASSWORD pEncryptedLmOwfNewPassword,
IN PENCRYPTED_NT_OWF_PASSWORD pEncryptedNtOwfOldPassword,
IN PENCRYPTED_NT_OWF_PASSWORD pEncryptedNtOwfNewPassword,
IN WORD LenPassword,
IN WORD wFlags,
IN DWORD cbChallenge,
IN BYTE * pbChallenge
);
DWORD
MakeChangePasswordV2RequestAttributes(
IN CHAPWB* pwb,
IN BYTE bId,
IN CHAR* pchIdentity,
IN SAMPR_ENCRYPTED_USER_PASSWORD* pNewEncryptedWithOldNtOwf,
IN ENCRYPTED_NT_OWF_PASSWORD* pOldNtOwfEncryptedWithNewNtOwf,
IN SAMPR_ENCRYPTED_USER_PASSWORD* pNewEncryptedWithOldLmOwf,
IN ENCRYPTED_NT_OWF_PASSWORD* pOldLmOwfEncryptedWithNewNtOwf,
IN DWORD cbChallenge,
IN BYTE * pbChallenge,
IN BYTE * pbResponse,
IN WORD wFlags
);
DWORD
MakeChangePasswordV3RequestAttributes(
IN CHAPWB* pwb,
IN BYTE bId,
IN CHAR* pchIdentity,
IN CHANGEPW3* pchangepw3,
IN DWORD cbChallenge,
IN BYTE * pbChallenge
);
DWORD
GetEncryptedPasswordsForChangePassword2(
IN CHAR* pszOldPassword,
IN CHAR* pszNewPassword,
OUT SAMPR_ENCRYPTED_USER_PASSWORD* pNewEncryptedWithOldNtOwf,
OUT ENCRYPTED_NT_OWF_PASSWORD* pOldNtOwfEncryptedWithNewNtOwf,
OUT SAMPR_ENCRYPTED_USER_PASSWORD* pNewEncryptedWithOldLmOwf,
OUT ENCRYPTED_NT_OWF_PASSWORD* pOldLmOwfEncryptedWithNewNtOwf,
OUT BOOLEAN* pfLmPresent
);
/* Globals.
*/
#ifdef RASCHAPGLOBALS
#define GLOBALS
#define EXTERN
#else
#define EXTERN extern
#endif
EXTERN DWORD g_dwTraceIdChap
#ifdef GLOBALS
= INVALID_TRACEID;
#endif
;
EXTERN DWORD g_dwRefCount
#ifdef GLOBALS
= 0;
#endif
;
EXTERN HANDLE g_hLsa
#ifdef GLOBALS
= INVALID_HANDLE_VALUE;
#endif
;
EXTERN CHAR szComputerName[ CNLEN + 1 ];
#undef EXTERN
#undef GLOBALS
#endif // _RASCHAP_H_
|
A high-throughput method for the conversion of CO2 obtained from biochemical samples to graphite in septa-sealed vials for quantification of 14C via accelerator mass spectrometry. The growth of accelerator mass spectrometry as a tool for quantitative isotope ratio analysis in the biosciences necessitates high-throughput sample preparation. A method has been developed to convert CO2 obtained from carbonaceous samples to solid graphite for highly sensitive and precise 14C quantification. Septa-sealed vials are used along with commercially available disposable materials, eliminating sample cross contamination, minimizing complex handling, and keeping per sample costs low. Samples containing between 0.25 and 10 mg of total carbon can be reduced to graphite in approximately 4 h in routine operation. Approximately 150 samples per 8-h day can be prepared by a single technician. |
Cell-based Models for Discovery of Pharmacogenomic Markers of Anticancer Agent Toxicity. The field of pharmacogenomics is challenging because of the multigenic nature of drug response and toxicity. The candidate gene approach has been traditionally utilized to determine the contribution of genetic variation to a particular phenotype; however, the sequencing of the human genome and the genetic resource provided by the International HapMap Project has allowed researchers to perform genome-wide studies without a priori knowledge. Recent work has demonstrated the usefulness of cell-based models for pharmacogenomic discovery using the HapMap samples, which are a panel of well-genotyped, human lymphoblastoid cell lines (LCLs) derived from 90 Utah residents with ancestry from northern and western Europe (CEU), 90 Yoruba in Ibadan, Nigeria (YRI), 45 Japanese in Tokyo, Japan (JPT) and 45 Han Chinese in Beijing, China (CHB). Using these cell-based models, investigators are able to study not only individual variation in drug response, but also population differences in drug response. Finally, besides single nucleotide polymorphisms (SNPs) and gene expression, these cell-based models can also be used to investigate other genetic (e.g. copy number variants, CNVs), epigenetic or environmental factors responsible for drug response. |
CONCENTRATION INDICES IN ANALYSIS OF COMPETITIVE ENVIRONMENT: CASE OF RUSSIAN BANKING SECTOR

This article is devoted to the analysis and evaluation of competitive environments using indicators of market concentration. It analyses the key concentration indices most often used in countries with developed market economies. The state of the competitive environment in the banking sector of the Russian Federation is estimated with the use of these indices, and their consistency with the basic antitrust regulations is investigated. The authors show that, in order to obtain reliable results, each of the available methods of monopoly power detection requires a detailed market analysis. JEL Classification Numbers: D21, D43, G21, L13, L49; DOI: http://dx.doi.org/10.12955/cbup.v5.966

Introduction
The presence of monopolistic structures is one of the main problems in forming a competitive environment in a market economy. The challenge for antitrust authorities is to identify monopolies and monitor their operations. This applies to firms that occupy a dominant position and abuse it. Traditionally, the most favorable condition for the emergence of a monopoly is a highly concentrated market. The concentration of sellers reflects the relative sizes and number of firms operating in the industry. The concentration level will be the highest with the minimum number of firms on the market. It is also affected by the size of firms: the more firms differ in size, the higher the concentration level. In turn, the level of concentration can determine the behavior of firms in the market. As a rule, the higher it is, the more firms will depend on each other or on the dominant firm. The monopolistic structure of the market is an exception: it is characterized by the maximum degree of concentration, but the only firm, the monopolist, does not depend on the behavior of competitors because of their absence. The market will have a lower degree of competition with a higher concentration level. However, firms that occupy a dominant position, as a rule, do not recognize their position as monopolistic and try to prove the absence of monopoly power. Clear criteria for determining the level of concentration are therefore needed: they make it possible to assess the market structure, to establish the existence of a monopoly and to quantify it. Economists have developed quite a lot of indices to measure concentration. Research by Khan et al. shows that the final assessment of the degree of competition and its impact on the effectiveness of economic policy depends on the choice of the concentration indices. In practice, however, only the two most popular indicators are usually used: the Federal Antimonopoly Service of the Russian Federation, in its analytical reviews and reports, uses the concentration index CR-3 and the Herfindahl-Hirschman index HHI. In our view, it is necessary to expand the set of concentration indicators used, to assess the competitive situation of the market more realistically.

Data and methodology
One of the first coefficients used by economists to analyze market structures was the market concentration index (CR). It shows the percentage of one or more large firms in the total volume of the analyzed market in terms of key economic parameters (sales volume, value added, money turnover, asset size, own and attracted capital, number of employees, etc.).
Competition authorities are primarily interested in the firm's share in sales, so most often the concentration indicators, including CR, are calculated based on this parameter. In the analysis of concentration indices below, market share means the proportion of a company's (or group of companies') sales income in the total sales income of the industry or market, with the understanding that other parameters can be used in the same way. US antitrust authorities have been actively using the concentration index to investigate market structures since 1968 (Gosudarstvo i rynochnyye struktury, 1993). This indicator is simple to calculate, which is undoubtedly its advantage. Usually this ratio is calculated for the largest companies in the market for a certain product: it estimates the ratio of the market shares of the enterprises with the largest shares to the total market volume. The number of such large companies can vary. In the US (U.S. Department of Commerce, 2006) and France, the index is calculated for 4, 8, 20, 50 or 100 of the largest companies. In Germany, England and Canada, data on 3, 6, 10, etc. companies are considered. In Russia, this indicator has been calculated and published in official statistics since 1992 for three (CR-3), four (CR-4), six (CR-6) and eight (CR-8) of the largest sellers. The formula for the concentration index can be represented as follows:

CRn = (V1 + V2 + ... + Vn) / (V1 + ... + Vn + V(n+1) + ... + VN) x 100%,

where CRn is the concentration index of the given market, Vk is the volume of sales for the k-th large seller, Vj is the volume of sales for the j-th smaller seller, n is the number of largest sellers in the market, and N is the total number of companies in the market. The concentration index is expressed in relative shares or as a percentage. The higher the values of this indicator, the stronger the market power of the largest firms, the stronger the degree of concentration in the market, and the weaker the competition. Thus, for the same number of largest firms, the higher the degree of concentration, the less competitive the industry. As mentioned above, this index can be calculated for a different number of major companies in the market for certain products. However, it is most appropriate to examine the values of this index for three or four large firms. In Knyazeva (2007) the following criteria for comparing market structures are distinguished:
1. For three firms, the market is considered un-concentrated with an index below 45%: CR-3 < 45%;
2. The market is considered moderately concentrated at values of the index CR-3 between 45% and 70%;
3. The market is highly concentrated when the index value exceeds 70%: CR-3 > 70%.
Despite the fact that the concentration index is fairly simple to use and interpret, it has a number of drawbacks. Firstly, it does not take into account the size of the firms that were not included in the sample of the k largest. Secondly, it does not reflect the distribution of shares both within the group of the largest firms and beyond it, between the outsider firms. To solve this problem, the Linda index is actively used in the countries of the European Union; it allows identifying the largest firms on the market, the so-called "oligopolistic core". Its calculation is shown below. Thirdly, CR characterizes only the sum of shares of firms, but the gap between these firms can vary, so the index can be inaccurate. This limits its application, since it does not allow differentiating the roles of different producers in the market.
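
A short computational sketch of CR-n (in Go, with hypothetical sales volumes) makes the thresholds above concrete:

package main

import (
	"fmt"
	"sort"
)

// crN returns the n-firm concentration index as a percentage, given each
// firm's sales volume.
func crN(volumes []float64, n int) float64 {
	sorted := append([]float64(nil), volumes...)
	sort.Sort(sort.Reverse(sort.Float64Slice(sorted)))
	var top, total float64
	for i, v := range sorted {
		total += v
		if i < n {
			top += v
		}
	}
	return 100 * top / total
}

func main() {
	market := []float64{30, 20, 12, 10, 10, 9, 9} // hypothetical seven-seller market
	fmt.Printf("CR-3 = %.0f%%\n", crN(market, 3)) // 62%: moderately concentrated (45-70%)
}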
This index can show the same numerical value for fundamentally different markets, distorting the true state of affairs. Consider two markets: in the first, one firm controls 80% of the market, the second firm 5%, the third 3% and the fourth 2%; in the second, the first firm occupies 24%, the second 23%, the third 22% and the fourth 21%. Concentration will be measured as 0.9 for both markets, although it is obvious that in the first case the dominant position is occupied by one company, while in the second case the shares of the first four companies are distributed more or less evenly (Pakhomova & Richter, 2009). The use of the Herfindahl-Hirschman index makes it possible to overcome this drawback. The concentration index is the simplest indicator of the presence or absence of a monopoly, but it is not precise enough and, moreover, has low information content. To address the above shortcomings, other indicators of market power can be used. We will look into those that are currently actively used in economically developed countries and are the most successful in facilitating antitrust policy. The Linda index was proposed by Remo Linda and is widely used in the European Union. Like the concentration index (CR), the Linda index is calculated only for a few of the largest firms, so it also does not take into account the situation on the periphery of the market. Unlike the concentration index, however, it is focused on accounting for the differences in the core of the market. The Linda index can show how many and which firms occupy dominant positions in the market. For this purpose, the index is calculated step by step: first for the two largest firms, then for three, and so on, until the continuity of the function is violated (that is, until the decreasing tendency of the index is replaced by an increase). This violation of continuity shows that the last company added to the calculation has a significantly smaller market share than any of the previous ones. For the two largest firms the Linda index is simply the ratio of their market shares, expressed as a percentage. Following Gosudarstvo i rynochnyye struktury, we number the market shares of individual firms in decreasing order; the Linda index for two firms then looks as follows: IL = (k1 / k2) × 100%, where IL is the Linda index and k1 and k2 are the market shares. Based on Gosudarstvo i rynochnyye struktury, the Linda index for three firms k1, k2, k3 is the arithmetic mean of two ratios: the ratio of the share of the largest firm to the arithmetic average of the shares of the second and third largest firms, and the ratio of the arithmetic average of the shares of the two largest firms to the share of the third largest firm. The Linda index for four firms k1, k2, k3, k4 is the arithmetic mean of three ratios: the ratio of the share of the largest firm to the arithmetic average of the shares of the other three largest firms; the ratio of the arithmetic average of the shares of the first two largest firms to the arithmetic average of the shares of the other two; and the ratio of the arithmetic average of the shares of the three largest firms to the share of the fourth largest firm. In the same manner the Linda index for five, six and more firms can be calculated. If the index is 200 for two firms, 170 for three, and 230 for four, the continuity of the function is violated after the fourth firm is added. This means that the first three firms form the core of the market: their market shares are significantly larger than the share of the fourth largest company and all the rest.
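The stepwise procedure described above translates directly into code. Below is a minimal sketch (the helper names are our own; shares are taken in decreasing order) that computes IL for the k largest firms and finds the oligopolistic core as the step at which the index stops decreasing:

def linda_index(shares, k):
    """Linda index (in %) for the k largest firms."""
    s = sorted(shares, reverse=True)[:k]
    ratios = []
    for i in range(1, k):
        lead = sum(s[:i]) / i          # average share of the first i firms
        rest = sum(s[i:]) / (k - i)    # average share of the remaining k - i firms
        ratios.append(lead / rest)
    return 100.0 * sum(ratios) / (k - 1)

def oligopolistic_core(shares):
    """Size of the core: the step before IL first starts to increase."""
    prev = linda_index(shares, 2)
    for k in range(3, len(shares) + 1):
        cur = linda_index(shares, k)
        if cur > prev:   # continuity violated: the core is the first k - 1 firms
            return k - 1
        prev = cur
    return len(shares)

Reproducing the text's example (IL of 200 for two firms, 170 for three, 230 for four), the continuity check fires when the fourth firm is added, so oligopolistic_core would report a core of three firms.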
If one or two firms occupy a clearly dominant position, the index will rise from the very beginning; in this case, adding a third firm to the calculation increases the inequality of forces among the firms considered in the index. The Linda index overcomes the above-mentioned drawback of the concentration index: it reflects the distribution of market shares among the largest firms, and not just the ratio of the shares of the largest firms to all other sellers. Another disadvantage of the market concentration index (CR) is that it can conceal the true position in the market by giving the same numerical value for fundamentally different markets. To a certain extent this can be avoided by using the Herfindahl-Hirschman concentration index (HHI). The US Department of Justice officially abandoned CR and adopted the HHI as the main characteristic of the market structure. Since June 1982, the HHI has served as the main reference point for US antitrust policy in the assessment of all kinds of mergers (Avdasheva & Rosanova, 1998). This index can be used as a measure of concentration; however, its main task is not to determine the market share controlled by several of the largest companies, but to characterize the distribution of "market power" among all the subjects of this market. Precisely this is the advantage of HHI over CR. The firm's specific weight in the industry is used to calculate HHI. Just as for other concentration indicators, different parameters can serve as the basis for determining the specific weight, but the most important of them is the market share: it is assumed that the greater the share, the greater the potential for the emergence of a monopoly. In the calculation of this index all firms are ranked by weight from the largest to the smallest. To calculate the index accurately, the market shares of all producers of a certain product are needed, which is hardly feasible when the number of firms is large. The Herfindahl-Hirschman index is calculated as the sum of the squares of the shares of all firms operating in the market, arranged in decreasing order: HHI = Y_1^2 + Y_2^2 + ... + Y_n^2, where Y_1, Y_2, ..., Y_n are the shares in decreasing order. Market shares can be expressed in fractions or as percentages: in the first case, HHI takes values from 0 to 1, in the second from 0 to 10000. According to international practice, an HHI value close to zero corresponds to the minimum concentration, and HHI < 0.10 (or HHI < 1000) indicates a low level of concentration. In accordance with US law, an index value of 0.10 ≤ HHI ≤ 0.18 (or 1000 ≤ HHI ≤ 1800) corresponds to the average concentration level, and HHI > 0.18 (or HHI > 1800) indicates a high level of market concentration (Knyazeva, 2007). European legislation sets this limit at 0.2 (or 2000). It should be noted that this index reacts both to the number of firms and to the individual market share of each firm. It provides an opportunity to obtain information on the comparative abilities of firms to influence the market situation under varying degrees of concentration. Petria et al. consider HHI to be the most correct of the concentration indices, since it takes into account the market shares of all firms and gives greater weight to firms with large market shares. However, the main advantage of the index is its ability to react quite sensitively to the redistribution of shares between firms operating in the market.
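A minimal sketch of the HHI calculation, with the thresholds quoted above (shares are expressed here as percentages, so the index runs from 0 to 10000). The two four-firm distributions come from the CR discussion earlier; the assumption that the remaining 10% of each market is split among five small sellers is our own:

def hhi(volumes):
    """Herfindahl-Hirschman index on the 0..10000 scale."""
    total = sum(volumes)
    return sum((100.0 * v / total) ** 2 for v in volumes)

market_a = [80, 5, 3, 2] + [2] * 5    # one dominant firm
market_b = [24, 23, 22, 21] + [2] * 5  # evenly matched leaders
print(hhi(market_a))  # 6458.0 -> high concentration
print(hhi(market_b))  # 2050.0 -> just above the EU threshold of 2000

Both distributions have the same CR-4 of 90%, which is precisely the CR drawback noted above, while HHI separates them clearly.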
Due to this sensitivity, it can indirectly indicate the magnitude of the economic profit obtained as a result of the exercise of monopoly power. Table 1 compares the values of the concentration index (CR-3) with the Linda (IL) and Herfindahl-Hirschman (HHI) indices. [Table 1, on the Linda index (IL): the higher the market concentration, the earlier the continuity of the function is violated (the decrease is replaced by an increase); if one firm dominates, the index increases from the very beginning. Source: Authors] As we have seen, each of the concentration indicators examined has its advantages and disadvantages. In our opinion, the Herfindahl-Hirschman index reflects the real market situation most adequately; however, with a large number of firms operating in the market, its calculation presents a certain difficulty. The Linda index is also complex to calculate, and it estimates concentration only among the largest companies. Nevertheless, the use of IL helps to clarify the distribution of market power in the oligopolistic core. The following criteria can be proposed to assess the merits of different concentration indices: 1. The concentration indices should give qualitatively consistent results both for the analysis of the market in general and for the analysis of the core of the market. For a less concentrated market the indicator should be lower than for a more concentrated one, even if we calculate it not for all sellers but only for the largest ones. 2. The concentration index should increase when the share of a large firm grows at the expense of the share of a smaller firm. 3. The concentration index should decrease when a new firm enters the market, provided its size is not larger than the size of the largest of the already existing firms. 4. The concentration index should increase when firms conduct M&A transactions. Of the indices considered, only the Herfindahl-Hirschman index meets all four criteria listed above; the Linda index and the CR meet these conditions only partially. Results and Discussion The concentration indices were calculated to analyze the competitiveness of the Russian banking sector and its six largest banks: Sberbank, VTB Bank Moskvy (together with VTB 24), Gazprombank, FK Otkrytiye, Rossel'khozbank and Al'fa-Bank. The key areas of banking activity are attracting deposits and lending to individuals and legal entities. The indices were calculated for each of these two areas, dividing each of them into two segments: services for individuals and services for legal entities. The market shares of these banks are shown in Table 2, the concentration indices (CR-3 and CR-6) and the Herfindahl-Hirschman index are presented in Table 3, and the dynamics of the Linda index is in Table 4. (The sixth biggest bank in the individuals' loans market is Raiffeisenbank; the sixth biggest bank in the individuals' deposits market is Binbank.) CR-3 and CR-6 show a medium level of concentration in all markets, slightly higher in services for individuals. In loans for legal entities CR-3 is quite low but CR-6 is the highest, which can be explained by a more equal distribution of shares among the three largest banks. The Linda index rises right from the start. Table 2 shows that Sberbank is the only dominating bank; for deposits for legal entities there are two dominant banks, Sberbank and the VTB group. IL is not the same for all the markets.
According to the other indices, the least concentrated segment is deposits for legal entities; it also has the lowest IL. HHI shows high concentration in deposits for individuals, moderately high concentration (high by American standards) in loans for individuals, and moderate concentration in services for legal entities. The most competitive segment according to HHI is deposits for legal entities. For the Russian banking system as a whole (600 banks), HHI in January 2017 was 1220, which indicates moderate concentration. As Parsons & Nguyen noted, in G7 countries the HHI for banks in the 2000s was not lower than 5000, and after 2010 it has been over 6000. The level of concentration in Russian banking is moderate according to other criteria as well: CR-5 in January 2017 was 59.04%, whereas in 2008 it was 42% and in 2010, 48% (Khandruyev & Chumachenko, 2010). The HHI based on 957 banks in 2010 was 907, so the growth rate is moderate. As a result, the concentration of the first five banks in Russia is comparable to European levels. As Leigh & Triggs showed, the market share of the four largest Australian banks is 94%. The Linda index shows higher levels of concentration than the HHI, CR-3 and CR-6 because IL measures the concentration of the core, not of the whole market. Since most of the 600 banks in Russia have very small market shares, overall concentration levels look moderate: the six largest banks occupy less than 70% of the market. But there is a clear dominant firm in the core of the market: the market share of Sberbank, according to different estimates, ranges from 23% to 47%. This can be seen only with the Linda index. On the one hand, IL points out the leader and probable monopolist; on the other hand, it is unable to detect the competitive forces represented by a number of smaller firms on the periphery. Conclusion These indices of concentration show the market structure with different levels of accuracy and describe different aspects of the situation. In different circumstances, depending on the specific objectives of antimonopoly policy, different concentration indicators may be the most appropriate. In any case, however, a detailed and multilateral analysis of the market is needed when using each of the considered indices, and possibly a combination of them, for a more realistic assessment. It is important not only to adequately assess the results obtained with the help of quantitative methods, but also to understand the reasons why the market has a high or low concentration. Without a meaningful analysis of the data used for calculating the concentration indices and of their results, the approach to the implementation of antimonopoly policy will be formal and one-sided and, therefore, most likely will not give the desired results.
Human chymase (EC 3.4.21.39), a chymotrypsin-like serine protease, is stored in mast cell secretory granules. Upon external stimulation, mast cells undergo degranulation, releasing human chymase, along with a wide variety of inflammation mediators, outside the cells. The released human chymase specifically recognizes aromatic amino acids contained in substrate proteins and peptides, such as phenylalanine and tyrosine, and cleaves the peptide bonds adjoining these amino acids. A representative substrate for human chymase is angiotensin I (AngI): human chymase cleaves AngI to produce angiotensin II (AngII), a vasoconstricting factor.
Mammalian chymases are phylogenetically classified under two subfamilies: α and β. Primates, including humans, express only one kind of chymase, which belongs to the α family. Meanwhile, rodents express both the α and β families of chymase. In mice, there are a plurality of kinds of chymases, of which mouse mast cell protease-4 (mMCP-4), which belongs to the β family, is considered to be most closely related to human chymase, judging from its substrate specificity and mode of expression in tissue. In hamsters, hamster chymase-1, also a member of the β family, corresponds to human chymase. Meanwhile, mMCP-5 and hamster chymase-2, which belong to the α family as with human chymase, possess elastase-like activity and differ from human chymase in terms of substrate specificity.
Chymase is profoundly associated with the activation of transforming growth factor β (TGF-β). TGF-β exists in a latent form (latent TGF-β) in extracellular matrices around epithelial cells and endothelial cells, and is retained in extracellular matrices via large latent TGF-β binding protein (LTBP). TGF-β is released from extracellular matrices as required and activated; the activated TGF-β is a cytokine of paramount importance to living organisms, reportedly involved in cell proliferation and differentiation and in tissue repair and regeneration after tissue injury. Collapse of its signaling leads to the onset and progression of a wide variety of diseases. It is thought that chymase is involved in this process through the release of latent TGF-β from extracellular matrices and the conversion of latent TGF-β to active TGF-β.
Chymase is known to be associated with a broad range of diseases, including fibrosis, cardiovascular diseases, inflammation, allergic diseases and organ adhesion. Fibrosis is an illness characterized by abnormal metabolism of extracellular substrates in the lung, heart, liver, kidney, skin and the like, resulting in excess deposition of connective tissue proteins. In pulmonary fibrosis, for example, connective tissue proteins such as collagen deposit in excess in the lung, resulting in hard shrinkage of pulmonary alveoli and ensuing respiratory distress. Pulmonary fibrosis has been shown to result from pneumoconiosis (caused by exposure to large amounts of dust), drug-induced pneumonia (caused by the use of drugs such as anticancer agents), allergic pneumonia, pulmonary tuberculosis, autoimmune diseases such as collagen disease, and the like. In many cases, however, the cause is unknown.
The mechanism of onset of fibrosis at the molecular level has not been well elucidated. Generally, in normal states, the proliferation and functions of fibroblasts are well controlled. In cases of serious or persistent inflammation or injury, however, the tissue repair mechanism works in excess, resulting in abnormal proliferation of fibroblasts and overproduction of connective tissue proteins. TGF-β is known as a factor that causes these phenomena. As evidence suggestive of its involvement, it has been reported that administration of an anti-TGF-β neutralizing antibody to an animal model of fibrosis causes decreased collagen expression and significantly suppressed fibrosis. In patients with idiopathic pulmonary fibrosis, increased levels of TGF-β and elevated counts of chymase-positive mast cells are observed.
Meanwhile, the involvement of chymase in fibrosis has been demonstrated by experiments using animal models. In a hamster model of bleomycin-induced pulmonary fibrosis, increased chymase activity, increased expression of collagen III mRNA, tissue fibrosis and other phenomena are significantly reduced by chymase inhibitors. The same effects have been observed in a mouse model of bleomycin-induced pulmonary fibrosis: administration of chymase inhibitors suppressed chymase activity and reduced hydroxyproline content.
Given these features, chymase inhibitors can be used as prophylactic or therapeutic drugs for chymase-related diseases such as fibrosis. Chymase inhibitors that have been developed include small-molecule compounds such as TPC-806, SUN-13834, SUN-C8257, SUN-C8077, and JNJ-10311795 (Patent document 1).
In recent years, applications of RNA aptamers as therapeutic drugs, diagnostic reagents, and test reagents have been drawing attention; some RNA aptamers are already in the clinical study stage or in practical use. In December 2004, the world's first RNA aptamer drug, Macugen, was approved as a therapeutic drug for age-related macular degeneration in the US. An RNA aptamer refers to an RNA that binds specifically to a target molecule such as a protein, and can be prepared using the SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method (Patent documents 2-4). In the SELEX method, an RNA that binds specifically to a target molecule is selected from an RNA pool with about 10^14 different nucleotide sequences. The RNA used has a random sequence of about 40 nucleotides, flanked by primer sequences. This RNA pool is mixed with a target molecule, and only the RNA that has bound to the target molecule is separated, using a filter or the like. The separated RNA is amplified by RT-PCR, and this is used as a template for the next round. By repeating this operation about 10 times, an RNA aptamer that binds specifically to the target molecule can be obtained.
Aptamer drugs, like antibody drugs, can target extracellular proteins. With reference to many scientific papers and other reference materials in the public domain, aptamer drugs are judged to potentially surpass antibody drugs in some aspects. For example, aptamers often exhibit higher affinity and specificity for target molecules than do antibodies. Aptamers are unlikely to undergo immune elimination, and adverse reactions characteristic of antibodies, such as antibody-dependent cell-mediated cytotoxicity (ADCC) and complement-dependent cytotoxicity (CDC), are reportedly unlikely to occur with the use of aptamers. From the viewpoint of drug delivery, aptamers are likely to migrate to tissues because of their molecular size of about one-tenth that of antibodies, enabling easier drug delivery to target sites. Because aptamers are produced by chemical synthesis, they permit site-selective chemical modifications, and enable cost reduction by mass-production. Other advantages of aptamers include long-term storage stability, heat resistance and solvent resistance. Meanwhile, the blood half-lives of aptamers are generally shorter than those of antibodies; however, this property is sometimes advantageous in view of toxicity. These facts lead to the conclusion that even when the same molecule is targeted, aptamer drugs potentially surpass antibody drugs. |
#!/usr/bin/python
"""
Github repo can be found here:
https://github.com/sparshg/py-games
"""
# Import statements
from os import environ
# Hide pygame Hello prompt
environ["PYGAME_HIDE_SUPPORT_PROMPT"] = "1"
import pygame as pg
import sys
from constants import *
from file_editor import *
from scrolling import *
mouseDown = False
scrollX = 0
scrollY = 0
scrollVel = 1
blockSize = 50
# The main controller
class Main:
def __init__(self):
pg.init()
pg.display.set_caption("Platformer Level Creator")
self.running = True
self.win = pg.display.set_mode((WIDTH, HEIGHT))
self.menuBar = MenuBar()
self.selectedBlock = SelectedBlock()
self.platforms = Platforms()
# Handle window close and mouse button events
def check_events(self):
global mouseDown
for event in pg.event.get():
if event.type == pg.QUIT:
self.running = False
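# A click is registered when the mouse button is released;
# it is consumed once per frame and reset at the end of loop()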
if event.type == pg.MOUSEBUTTONUP:
mouseDown = True
# Update things
def update(self):
global scrollX, scrollY
self.menuBar.update()
self.selectedBlock.update()
scrollX = get_scroll_pos()[0]
scrollY = get_scroll_pos()[1]
keys = pg.key.get_pressed()
# Save on CTRL + S
if keys[pg.K_s] and (keys[pg.K_LCTRL] or keys[pg.K_RCTRL]):
write_level(main.platforms.platforms)
# Draw things
def render(self):
self.win.fill(BLACK)
self.platforms.render()
self.menuBar.render()
self.selectedBlock.render()
pg.display.update()
# The main loop
def loop(self):
global mouseDown
while self.running:
self.check_events()
self.update()
self.render()
mouseDown = False
pg.quit()
sys.exit()
# Menu bar buttons
class MenuButton:
def __init__(self, renFun, side, id):
self.renFun = renFun
self.side = side
self.id = id
# Button 1 (ground)
def ren_btn1(x, y):
pg.draw.rect(
main.win,
DARKPURPLE,
(
x + main.menuBar.height / 4,
y + main.menuBar.height / 4,
main.menuBar.height / 2,
main.menuBar.height / 2,
),
)
btn1 = MenuButton(ren_btn1, "lt", "ground")
# Button 2 (air/clear)
def ren_btn2(x, y):
# Draw red X
pg.draw.line(
main.win,
RED,
(x + main.menuBar.height / 4, y + main.menuBar.height / 4),
(
x + main.menuBar.height - main.menuBar.height / 4,
y + main.menuBar.height - main.menuBar.height / 4,
),
10,
)
pg.draw.line(
main.win,
RED,
(
x + main.menuBar.height - main.menuBar.height / 4,
y + main.menuBar.height / 4,
),
(
x + main.menuBar.height / 4,
y + main.menuBar.height - main.menuBar.height / 4,
),
10,
)
btn2 = MenuButton(ren_btn2, "lt", None)
# Button 3 (the save button)
def ren_btn3(x, y):
font = pg.font.SysFont("Segoe UI", 25)
surf = font.render("Save", 1, BLACK)
main.win.blit(
surf,
(
x + main.menuBar.height / 2 - font.size("Save")[0] / 2,
y + main.menuBar.height / 2 - font.size("Save")[1] / 2,
),
)
btn3 = MenuButton(ren_btn3, "rt", "SAVE")
# Menu bar
class MenuBar:
def __init__(self):
global scrollY
self.width = WIDTH
self.height = 75
self.buttonClicked = None
self.buttons = [btn1, btn2, btn3]
# Update bar
def update(self):
if self.buttonClicked != None:
main.selectedBlock.quit_selection()
main.selectedBlock.select_block(self.buttons[self.buttonClicked].id)
self.buttonClicked = None
# Draw bar
def render(self):
# Current X
ltX = 0
rtX = self.width - self.height
# Background
pg.draw.rect(main.win, LIGHTGRAY, (0, 0, self.width, self.height))
# Buttons
for i in range(len(self.buttons)):
# If on left side
if self.buttons[i].side == "lt":
if pg.Rect(ltX, 0, self.height, self.height).collidepoint(
pg.mouse.get_pos()
):
if pg.mouse.get_pressed()[0]:
pg.draw.rect(
main.win, DARKGRAY, (ltX, 0, self.height, self.height)
)
else:
pg.draw.rect(main.win, GRAY, (ltX, 0, self.height, self.height))
if mouseDown:
self.buttonClicked = i
self.buttons[i].renFun(ltX, 0)
ltX += self.height
# If on right side
elif self.buttons[i].side == "rt":
if pg.Rect(rtX, 0, self.height, self.height).collidepoint(
pg.mouse.get_pos()
):
if pg.mouse.get_pressed()[0]:
pg.draw.rect(
main.win, DARKGRAY, (rtX, 0, self.height, self.height)
)
else:
pg.draw.rect(main.win, GRAY, (rtX, 0, self.height, self.height))
if mouseDown:
self.buttonClicked = i
self.buttons[i].renFun(rtX, 0)
rtX -= self.height
# Block in hand
class SelectedBlock:
def __init__(self):
self.shown = False
self.blockId = False
# Begin selection
def select_block(self, id):
self.quit_selection()
if id != "SAVE":
self.blockId = id
self.shown = True
else:
# Save level
write_level(main.platforms.platforms)
# Drop item in hand
def quit_selection(self):
self.blockId = False
self.shown = False
# Place block in desired location
def place_block(self):
global scrollX, scrollY
# Row index in the level grid
y = (pg.mouse.get_pos()[1] + scrollY) // blockSize
# Column index in the level grid
x = (pg.mouse.get_pos()[0] + scrollX) // blockSize
# Check if tile exists
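# place() grows the ragged 2-D list on demand: it pads the target row
# with None up to column x, or appends new rows until row y exists,
# then writes the selected block id at (y, x)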
def place():
if y < len(main.platforms.platforms):
if x < len(main.platforms.platforms[y]):
main.platforms.platforms[y][x] = self.blockId
else:
for i in range(x - len(main.platforms.platforms[y])):
main.platforms.platforms[y].append(None)
main.platforms.platforms[y].insert(x, self.blockId)
else:
for i in range(y - len(main.platforms.platforms)):
main.platforms.platforms.append(
[None] * len(main.platforms.platforms)
)
row = [None] * len(main.platforms.platforms[y - 1])
row[x] = self.blockId
main.platforms.platforms.append(row)
place()
# If pressing SHIFT key, don't unselect
if (
not pg.key.get_pressed()[pg.K_LSHIFT]
and not pg.key.get_pressed()[pg.K_RSHIFT]
):
self.quit_selection()
# Update self
def update(self):
if self.shown and mouseDown:
self.place_block()
# Draw self
def render(self):
if self.shown:
for i in range(len(main.menuBar.buttons)):
if main.menuBar.buttons[i].id == self.blockId:
main.menuBar.buttons[i].renFun(
pg.mouse.get_pos()[0] - main.menuBar.height / 2,
pg.mouse.get_pos()[1] - main.menuBar.height / 2,
)
break
# Level to be rendered
class Platforms:
def __init__(self):
self.platforms = read_level()
# Render
def render(self):
global scrollX, scrollY
for i in range(len(self.platforms)):
for j in range(len(self.platforms[i])):
if self.platforms[i][j] == "ground":
pg.draw.rect(
main.win,
DARKPURPLE,
(
j * blockSize - scrollX,
i * blockSize - scrollY,
blockSize,
blockSize,
),
)
# Run only when the script is executed directly
if __name__ == "__main__":
main = Main()
main.loop()
|
"""
Simple YouTube Downloader
A YouTube Download Client with focus on simplicity.
(c) <NAME>, 2020
"""
__version__ = '1.0.10'
|
// Houzi Game Engine
// Copyright (c) 2018 <NAME>
// Licensed under the MIT license.
#include "hou/test.hpp"
#include "hou/cor/std_chrono.hpp"
using namespace hou;
using namespace testing;
namespace
{
class test_std_chrono : public Test
{};
} // namespace
TEST_F(test_std_chrono, nanoseconds_output_stream_operator)
{
EXPECT_OUTPUT("42 ns", std::chrono::nanoseconds(42));
}
TEST_F(test_std_chrono, microseconds_output_stream_operator)
{
EXPECT_OUTPUT("42 us", std::chrono::microseconds(42));
}
TEST_F(test_std_chrono, milliseconds_output_stream_operator)
{
EXPECT_OUTPUT("42 ms", std::chrono::milliseconds(42));
}
TEST_F(test_std_chrono, seconds_output_stream_operator)
{
EXPECT_OUTPUT("42 s", std::chrono::seconds(42));
}
TEST_F(test_std_chrono, minutes_output_stream_operator)
{
EXPECT_OUTPUT("42 m", std::chrono::minutes(42));
}
TEST_F(test_std_chrono, hours_output_stream_operator)
{
EXPECT_OUTPUT("42 h", std::chrono::hours(42));
}
|
Ensemble Deep Learning Aided VNF Deployment for IoT Services In sixth generation (6G) networks, due to massive Internet of Things (IoT) connectivity and substantial growth of communication traffic, an effective Virtual Network Function (VNF) orchestration scheme is expected to operate dynamically and intelligently. Moving beyond the traditional paradigm of VNF orchestration and deploying VNFs on cloudlets located at the network edge, inspired by multi-access edge computing, can improve the overall performance of delay-sensitive applications. In this paper, we investigate how to leverage the ensembling of multiple deep learning models, with proper calibration, to provide real-time VNF placement solutions. We also address the challenges state-of-the-art approaches face in dealing with dynamic network traffic and topology patterns. Our proposed methods, based on Convolutional Neural Networks and Artificial Neural Networks and named E-ConvNets and E-ANN respectively, suggest two proactive VNF deployment strategies. In simulation, these VNF placement strategies demonstrate encouraging performance in terms of minimizing relocation and communication costs (optimality gap of nearly 7%) and a high scalability intelligence factor (around 0.93). Moreover, the presented results further support integrating edge computing and deep learning-based strategies into similar research problems for future telecommunication networks.
Lhermitte-Duclos disease and Cowden disease: clinical, pathological and neuroimaging study of a case. The authors report the case of a 26-year-old female patient affected by Lhermitte-Duclos disease and Cowden disease. Preoperative MRI allowed a correct diagnosis, which was confirmed by pathological examination. The authors stress the possibility that Lhermitte-Duclos disease and Cowden disease may be a single phakomatosis; for this reason, all patients affected by Lhermitte-Duclos disease should be screened for the presence of the multiple hamartomas or malignant neoplastic lesions typical of Cowden disease.
Global distribution of animal sporotrichosis: A systematic review of Sporothrix sp. identified using molecular tools Highlights First systematic review that reports exclusively the geographic distribution of animal sporotrichosis in the world, focusing on the molecular identification of these species. Scarcity of epidemiological studies in global areas. The importance of applying molecular tools to identify and monitor potential pathogens to improve the One Health concept. According to De Beer et al., the Sporothrix genus is distributed worldwide and is divided into two clades: the clinical or pathogenic clade, composed of S. brasiliensis, S. schenckii, S. globosa, and S. luriei (formerly S. schenckii var. luriei), and the environmental clade, composed of the S. pallida complex (S. chilensis, S. mexicana, S. humicola, and S. pallida, formerly S. albicans) and the S. stenoceras complex. The state of Rio de Janeiro, Brazil, has been experiencing a particular situation since 1998: hyperendemic sporotrichosis, in which transmission of the fungus to humans was observed to occur not in the classical way but zoonotically, through scratches, bites or contact with exudates from skin lesions of infected cats. For the molecular characterization of the species, extraction, amplification and sequencing of the DNA of the isolates are performed using the polymerase chain reaction (PCR). Phylogenetic analysis of Sporothrix species has traditionally been performed using sequencing data from single or multiple conserved genes, mainly the chitin synthase (CHS), β-tubulin and calmodulin (CAL) genes. The latter is the reference standard for the molecular identification of species of the genus Sporothrix. Phenotypic tests alone are not sufficient to identify species of the genus Sporothrix, owing to their uncertainty, which makes molecular methodologies necessary. It is important to note that fungal infections are often neglected, and public health policies and strategic plans to prioritize these infections are lacking. Several reports have shown alarming concern about the occurrence of cases of zoonotic sporotrichosis in non-endemic regions, such as the case of animal sporotrichosis caused by S. brasiliensis in Argentina, due to a potential transboundary expansion of the species. It is important to highlight that many studies have identified more than one species within the same endemic area (Oliveira et al., 2011a, 2011b) and that some studies in murine models have shown differences in virulence potential among the main pathogenic species of the genus Sporothrix. Therefore, interest in identifying species of the genus Sporothrix in different regions of the world has increased, due to their epidemiological importance, taxonomic evolution and geographic distribution. Based on these data, this study aimed to analyze the worldwide distribution of the etiologic agents of sporotrichosis in cats, dogs and other animals, identified by molecular tools. Search activities and screening process Five bibliographic databases (PubMed, Web of Science, Lilacs, Medline, and Scopus) were searched. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, consulted at http://www.prisma-statement.org, two independent reviewers screened titles and abstracts after excluding repeated publications.
The eligibility criteria for including articles were as follows: (a) articles in English; (b) articles from 2007 to 2021; (c) all articles had to identify animal sporotrichosis, including in dogs, cats, and other animals such as tiger quolls, insects, equines, and naturally infected mice; (d) species identification was required, although location was not mandatory. Isolates described as "not known" were analyzed and reported as unknown. The exclusion criteria were: theses, dissertations, monographs or publications without strain identification (without a verification code), experimental models, human and environmental isolates, and unavailable full texts. The year 2007 was chosen as the starting point of the analysis as a consequence of the description of seven new pathogenic species of Sporothrix, based on molecular and phenotypic studies that demonstrated intraspecific variability among isolates morphologically identified as S. schenckii. This indicated that sporotrichosis should not be considered to be caused by a single species, but rather by a complex of species. Data extraction and epidemiological analysis Two reviewers independently extracted the following variables: identified strain number; country of origin; city of origin (not obligatory); species identification; clinical or environmental clade; and strain of origin. Data analysis was conducted in the R environment, version 4.1.2. Fig. 1 shows the flowchart of the study selection process. A total of 380 articles were retrieved from the five databases; after excluding repeated publications, 207 articles were selected for full-text evaluation, and finally a total of 33 articles were included for analysis. Fig. 2 shows the distribution of each isolate by continent. South America was the continent with the highest number of reported cases of animal sporotrichosis, followed by Asia and Europe. North America and Africa reported similar numbers of cases, and Central America and Oceania reported the same number of cases. South America A total of 216 isolates of Sporothrix sp. were reported from two South American countries: Brazil and Argentina. The South American continent ranked first in the number of sporotrichosis cases identified in the study. Most isolates were identified in Brazil, from cats (158 isolates) and dogs (52 isolates). The most prevalent species on the continent was S. brasiliensis (199 isolates), followed by S. schenckii (6 isolates). In Argentina, 4 isolates of S. brasiliensis and 2 isolates of S. schenckii were identified, from cats and other animals (an equine and a mouse), respectively. For species identification, the most used molecular method was the CAL gene (55%), followed by T3B fingerprinting (44%), the ITS region (7%), the β-tubulin gene (5%), RFLP-CAL (Restriction Fragment Length Polymorphism of the calmodulin gene) (2%), and the CHS gene (1%) (Table 1). Asia A total of 28 isolates were described in Japan and Malaysia. In Malaysia, 25 isolates of S. schenckii were identified, and in Japan 3 isolates of S. globosa from the clinical clade were identified from cats. For species identification, the most commonly used molecular method was PCR with sequencing of the CAL gene (100%), followed by the ITS region (68%) and other molecular methods (71%) (Table 1). Europe The total number of Sporothrix sp. isolates reported in Europe was 12. Germany was the country with the highest number of isolates, with six strains identified, followed by Italy (1 isolate), Spain (3 isolates), Sweden (1 isolate), and the United Kingdom (1 isolate).
Only Italy isolated samples from dogs and the United Kingdom from cats; the other countries obtained samples from insects. All these countries isolated species from the environmental clade: S. cantabriensis, S. euskadiensis, S. mexicana, S. nebularis, S. pallida, S. humicola and Ophiostoma stenoceras. For species identification, the most used molecular method was the CAL gene (100%), followed by the ITS region and the β-tubulin gene (50%), and the CHS gene (17%) (Table 1). North America In the United States, two isolates of Sporothrix sp. from the environmental clade (S. brunneoviolacea and S. rossii) were reported, obtained from insects. Molecular methods based on PCR with the ITS region and the β-tubulin gene (100%), and the CAL gene and other molecular methods (50%), were used to identify the species (Table 1). Africa In South Africa, six isolates of Sporothrix sp. from the environmental clade (S. aurorae, S. gemella, S. gemellus and S. variecibatus) were identified from insects. For species identification, the most commonly used molecular method was PCR with the ITS region (100%), followed by the β-tubulin gene and other molecular methods (67%), and the CAL gene (50%) (Table 1). Central America In Mexico, a strain of the environmental clade (S. abietina) was reported, also isolated from insects. Identification to the species level was by the ITS region, the β-tubulin gene, the CAL gene, and other molecular methods, 100% each (Table 1). Oceania In Tasmania, an isolate of the environmental clade, S. humicola, was identified from Dasyurus maculatus. Identification to the species level was by the ITS region, the β-tubulin gene and the CAL gene, 100% each. Discussion Sporotrichosis is considered an emerging zoonosis with significant human and animal health implications. This mycosis usually causes nodules and ulcers on the skin and mucous membranes, affecting lymph nodes and regional lymphatic vessels. It can even spread to other organs and cause severe forms that can lead to death, especially in cats and immunosuppressed humans. In recent years, the evolution of this fungal disease has been gradually changing, not only in frequency but also in modes of transmission and geographic distribution. This can partly be explained by environmental changes, increased urbanization, poverty, and improved diagnostics. The present study reports 266 Sporothrix sp. isolates from animals worldwide for the period 2007-2021. Most isolates were reported from South America (n = 216, or 81%), followed by Asia (n = 28, or 10%), with Central America and Oceania (n = 1, or 0.37% each) the least frequent. After the description of the new species of the genus Sporothrix, identification of clinical isolates has been carried out worldwide, especially in regions where large numbers of sporotrichosis cases occur, such as southeastern Brazil, considered a zoonotic epidemic area of sporotrichosis. Phylogenetic analysis of Sporothrix species has traditionally been carried out using sequencing data of single or multiple conserved genes, mainly the CHS, β-tubulin and CAL genes; the latter is considered the reference standard for molecular identification of species of the genus Sporothrix. The most commonly used molecular method in Europe, Asia, North, Central, and South America identified species by the CAL gene; on the African continent they were identified by the ITS region. The species with the highest number of samples characterized by molecular tools was S. brasiliensis (Brazil and Argentina), followed by S.
schenckii (Argentina, Brazil, Japan, and Malaysia). These results corroborate studies indicating that zoonotic transmission by S. brasiliensis does not occur outside Brazil, except in Argentina. Fungal infections are often neglected, and public health policies and strategic plans to prioritize these infections are lacking. Inadequate surveillance of fungal infections leads to unnoticed occurrences, as seen in zoonotic sporotrichosis. Several reports have shown alarming concern about the occurrence of zoonotic sporotrichosis cases in non-endemic regions, such as the case of S. brasiliensis in Argentina, due to a potential transboundary expansion of the species. Despite the regulations implemented for pet travel, poor control of road transport may contribute to the spread of sporotrichosis in Brazil and worldwide. Many studies have also found that more than one species can be isolated within the same endemic area (Oliveira et al., 2011a, 2011b), as occurs in the city of Rio de Janeiro. Species of the environmental clade were isolated on all continents, while only in South America and Asia were species from the clinical clade isolated from animals. For this reason, we cannot ignore that even species belonging to the environmental clade present a relative risk of infection to animals. Corrêa-Moreira et al. demonstrated that the differences in virulence levels among these species might not be related to their taxonomic classification, considering that their results were quite heterogeneous when comparing "pathogenic" and "environmental" clade species in an experimental mouse model, acting as an essential factor in immunoregulatory mechanisms. For this reason, species of the environmental clade can be virulent, possibly owing to the interspecific variability that occurs among species of the genus Sporothrix. The country with the most feline cases after Brazil was Malaysia. According to the study by Kano et al. (2015b), a genotype of S. schenckii adapting to the feline host may be occurring in Malaysia, where an increase in the number of feline sporotrichosis cases caused by S. schenckii is taking place, similar to what has been reported for S. brasiliensis in Brazil. Reports of feline cases have increased over the decades in many geographic areas in Brazil. It has been assumed that the thermal resistance exhibited by S. brasiliensis may be a vital adaptive mechanism of this fungus in cats (body temperature of 39 °C) and may partially explain the success of infection by this species over other etiologic agents, such as S. globosa, which is more sensitive to temperatures above 35 °C but has nevertheless been reported in human cases. This is easily observed in epidemiological studies, which showed that S. brasiliensis is feline host-dependent, given its occurrence in southern and southeastern Brazil. The increase in the number of cases in cats is often followed by an increase in the number of cases in humans, representing a serious public health problem. Although the increase in the number of sporotrichosis cases in animals is proportional to the number of infections in humans, one of the limitations of this study is the scarcity of data on cases of animal infection. This loss of data regarding the clinical aspects, drugs used and outcome of the infection, combined with the small number of studies identifying the fungus at the species level using molecular methodologies, is a major obstacle not only to our work, but also to the management of the disease.
For this reason, it is necessary to identify which species cause sporotrichosis, since each species has a specific virulence. Phenotypic and genotypic characteristics of different isolates within the genus Sporothrix have been associated with their geographic distribution, virulence capacity, and the clinical manifestations of sporotrichosis. However, there are few studies on animals, which are the main sources of human sporotrichosis, especially for cat owners and veterinarians. The latter are becoming a new risk group for acquiring sporotrichosis, owing to the increased zoonotic potential, mainly from cats to humans, in endemic regions of the disease. Moreover, in endemic areas, more people are at risk of acquiring zoonotic sporotrichosis because of the proximity between humans and cats. On the other hand, it is known that therapeutic measures for the treatment of animals, especially cats, take a long time and do not always succeed, with treatment abandonment, recurrence of lesions, or therapeutic failure, which may lead to the death of the animal. As for dogs, another important domestic animal in close relationship with humans, samples from these animals were isolated in our study only in Italy (S. mexicana) and Brazil (S. brasiliensis, S. schenckii and S. luriei), as shown by Boechat et al. and Viana et al. Here, dogs were also affected by sporotrichosis; however, the low fungal load observed in canine skin lesions appears to be a limiting factor for transmission compared with transmission from cats. In the present study, all continents yielded samples from "other animals" (such as armadillos, insects, equines, and mice). Coordinated action between veterinarians, physicians, laboratory professionals, surveillance authorities and other health professionals will ensure broader investigations and promote the prevention, detection and management of human and animal cases. Thus, epidemiological characterization of sporotrichosis in both animals and humans is necessary to implement health promotion, decrease sporotrichosis cases and confront this public health threat. Conclusion Our study confirmed the difficulty of establishing the frequency of Sporothrix species, as molecular identification has been published in only 13 countries. The most identified species was S. brasiliensis, isolated from cats in Brazil, followed by S. schenckii, isolated from cats in Malaysia. This systematic review analyzed the geographic distribution of the species causing sporotrichosis in animals. We have shown the lack of studies in global areas and reinforced the need to use molecular tools to identify and monitor potential pathogens. Identification of Sporothrix at the species level by molecular tools in animals will strengthen the "One Health" concept, a health promotion policy based on the integration of the health of humans, animals, and the environment. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
BELIZE-born Joel Hodgson, who was adopted by Scottish parents and brought up in Renton, Dunbartonshire, hopes to qualify for this summer's Commonwealth Games.
FIVE years ago, Joel Hodgson was sleeping rough on the streets of London.
Today, he is half a second away from running for Belize at the Commonwealth Games.
The talented 25-year-old is within touching distance of his dream to compete alongside the world’s top athletes at Glasgow 2014.
Born in Belize, Central America, adopted by Scottish parents and brought up in Renton, Dunbartonshire, Joel dreams of running for his homeland in his adopted country.
The former Big Issue seller is half a second off qualifying to run in the 400m race.
Joel, who works at a top London law firm, said: “It’s really exciting. I have to shave half a second off my personal best time of 46.2 seconds but I am pretty confident I can achieve this in the next few weeks.
“To qualify, I must run 400m in 45.7 seconds or under at any one of 16 official events before June.
“I am training six days a week. I run 5km every morning, then do two hours in the gym at work and in the evening I head to the track.
Joel’s life story already reads like the script of a Hollywood movie.
Abandoned by his mum when he was just three months old, he ended up in a children’s home with his sisters Yvette, 28, and Keisha, 29.
But, when he was four, they were adopted by Scots marine engineer George Hodgson and his wife Susan, who brought them to Scotland. At 21, Joel travelled to London hoping to make his fortune with his girlfriend Michelle Clark but ended up living on the streets.
Everything changed for him in 2010 when he got the chance to sell The Big Issue at London law firm Freshfields Bruckhaus Deringer as part of a pilot scheme. They ended up giving him a job – and he has never looked back.
Joel, who ran for Helensburgh Athletics Club as a boy, said: “If you had told me five years ago when I was sleeping rough on the steps of a police station that I’d one day be attempting to qualify for the Commonwealth Games, I would have laughed.
Joel was voted by his colleagues to carry the Olympic Torch in 2012, an experience he describes as “the proudest day of my life”.
Publicity around this introduced him to the Belize athletics team, who invited him to hoist their national flag in the Olympic stadium ahead of the opening ceremony. |
Sri Lanka’s newly appointed Prime Minister Mahinda Rajapaksa, who lost two motions of no confidence last week, may lose his government’s budget even as he clings to power.
Lawmakers opposed to Rajapaksa said they intend to remove funding for staff salaries and other costs in a vote on Nov. 29. The opposition, which regards his administration as illegitimate, will also seek approval to slash the government’s overall budget, they said.
It was the latest of several new twists on Monday in the political chaos that has embroiled Sri Lanka for the past few weeks.
Leaders of political parties backing Rajapaksa and President Maithripala Sirisena refused to allow a third motion of no confidence to be held through a roll call or electronic voting on Monday. The previous two motions passed through voice votes, but Sirisena said they hadn't followed the proper procedures.
Sirisena appointed Rajapaksa last month after firing Ranil Wickremesinghe as prime minister, setting off the political turmoil on the island off India’s southeast coast.
Dinesh Gunawardene, a Rajapaksa loyalist, said Wickremesinghe’s coalition had handed a motion “to suspend all government expenses” to the speaker and the parliament secretary.
“According to the previous no confidence motions, both Rajapaksa and his government are out. There is no government, but there are MPs,” M.A. Sumanthiran, a lawmaker who had voted for the no confidence motion, told Reuters.
“The finance of the country is under the control of the parliament. Now we have proposed a motion to stop government finances for the prime minister’s office,” he said.
Ananda Kumarasiri, the deputy speaker of the parliament, established a select committee to carry on parliamentary business before adjourning the house to Nov. 23.
Unlike last Thursday and Friday there were no physical altercations on the floor of parliament on Monday. On Friday, lawmakers supporting Rajapaksa threw books, chili paste and water bottles at the speaker to try to disrupt the second vote.
Speaker of Parliament Karu Jayasuriya said in a statement that investigations have begun into Friday’s events, including damage that was done to public property in the melee.
The political crisis has hit the economy. On Monday, the rupee fell to a record low of 177.20 per dollar. Foreign investors have pulled out more than 30 billion rupees ($169.5 million) since the crisis unfolded on Oct. 26.
Wickremesinghe loyalists allege that Rajapaksa’s party is trying to buy lawmakers for as much as $3 million each. Rajapaksa loyalists have rejected the allegation.
Both Sirisena and many Rajapaksa loyalists have said they have the majority in parliament. However, the no confidence motion against Rajapaksa and his government was passed twice by 122 votes in the 225-member parliament.
Most foreign countries, including Western nations, have yet to recognize Rajapaksa as the prime minister.
Last week, eight Western countries stayed away from a meeting with the government to register their protest against Sirisena’s decision to dissolve parliament.
The Fake PM travels from Wijerama Mawatha residence to Parliament in an Air Force Helicopter!
Two brand new bullet-proof Range Rovers are being air-freighted from the UK for the Fake PM's office for electioneering purposes.
The Polonnaruwa Pambaya has converted our nation into Mugabe's Zimbabwe in less than 3 weeks. What an achievement!
All the motions so far happen to be not legal.
You mean like the fear all Sri Lankans had during the war. Sri Lankans don't kiss the feet of the thugs but lick the feet of the white men. No one seems to remember who helped My3 to become president. No one seems to want to remember the Central Bank robbery. RW is such an angel, so let's hope he gets back to rob the country and sell everything.
Sri Lankans are dignified people.
We fostered this Polonnaruwa Pambaya because he was promoted by the late Ven. Sobitha.
The Rajapaksa excesses were biting society, and we humbly put up with that following the conclusion of the war.
The Rajapaksas eliminated the Tamil terrorists, true, but that does not give them a licence to loot national wealth, treat all citizens as crooks, and send white vans and murderers to clear away whomever they did not like.
Now this Polonnaruwa Pambaya, who promised to be a one-term President, is licking his lips at the wealth he has made and at the wealth he is likely to make in his second term.
He is challenged by RW's presence; he believes RW has curtailed, and will continue to curtail, his obnoxious greed for wealth.
The country is heading for disaster; it is already a PARIAH STATE.
All because of the greediness of the Polonnaruwa Pambaya and his family.
Fiery, I agree with your comments. However, RW is also responsible for supporting the idea of turning a Gamarala into a president. Unfortunately, RW and everyone else who supported the idea underestimated My3's level of greed. At the end of the day My3 played all of them.
Dinesh Gunwardena is the fox son of Boralugoda lion.
He has no ethics in parliament; he asked for a majority in the select committee.
Dear friends, your comments appear to be based on the pros and cons of corruption, wastage, negligence, ignorance, etc. These comments, however correct, have created antipathy among reasonable people towards the entire political system in S.L. I don't think characteristics of good governance other than the rule of law are sufficiently taken up. In my perception, comments should address strategic aspects for strengthening and straightening the entire system, including the political system. Accusing each other won't help solve the crisis; it only consolidates the already corrupted status quo. Don't you think so?
Clinical dilemmas and the Cochrane Collaboration Randomised controlled trials and reviews Controlled trials, and specifically randomised controlled trials (RCTs), are the most powerful research design for evaluating the effects of mental health care (World Health Organization Scientific Group on Treatment of Psychiatric Disorders, 1991). There are, however, too many RCTs published in too many journals for anyone to keep up-to-date (Sackett & Rosenberg, 1995). In order to decrease the potential for bias or the play of random error, it is desirable to produce an overview of research findings. Frequently, those interested in the effectiveness of care depend on reviews in journals, textbooks or guidelines to direct practice. Generally speaking there are two sorts of reviews, the systematic and the traditional/subjective. Systematic v. traditional reviews Systematic and traditional reviews are very different. The former will have a methods section, the latter may not. In a systematic review the means by which data are identified, selected and, if appropriate, assimilated is made explicit (Sackett et al, 1991). These methods are open to scrutiny and valid criticism. The recommendations of systematically conducted reviews and traditional reviews may be quite contradictory. For example, for the management of those with myocardial infarction, Antman et al compared the recommendations of leading textbooks and journals with the results of what systematically conducted reviews would have said using the RCT data of the day. They found that leading traditional reviews, by omission, recommended interventions that were harmful or lethal up to 10 years after generally acceptable proof to the contrary was available. Similar examples are just beginning to emerge from within mental health. Up to now, some reviewers were recommending the use of vitamin E to treat neuroleptic-induced tardive dyskinesia (Lloyd, 1992; Jeste & Caligiuri, 1993). A recently completed systematic review of the best available evidence suggests that vitamin E could be anything from moderately helpful to very harmful (Soares & McGrath, 1997). It certainly is an intervention worthy of full evaluation but, currently, there is little evidence to recommend its use. The Cochrane Collaboration The Cochrane Collaboration was launched in 1993 (Chalmers et al, 1992) with a view to the production, maintenance and dissemination of systematic reviews of health care. It consists of a global network of people methodically seeking every published or unpublished, complete or incomplete, controlled trial of health care. Groups of people with similar interests are forming to systematically review these studies. These groups are open to anyone wishing to invest effort.
At present, within the Cochrane Collaboration, there are five groups with a specific interest in mental health. The first, the Cochrane Schizophrenia Group, has been working for four years. It is composed of clinicians, researchers, occupational therapists, economists, nurses and consumers of care, widely dispersed across the world. The Cochrane Depression, Anxiety and Neurosis Group is focusing on affective and eating disorders, somatisation problems and deliberate self-harm. The Cochrane Dementia and Cognitive Impairment Group is focusing on the care of those with any type of illness that primarily affects cognitive functioning. These groups have a register of relevant clinical trials. The Cochrane Addiction Group and Cochrane Developmental, Psychosocial and Learning Problems Group are starting to build a register of trials and reviews. For example, the Cochrane Schizophrenia Group has undertaken a comprehensive and methodical search strategy to build its register and make it available to anyone interested in doing a review within its scope. The Group is already producing and updating reviews within the electronic output of the Collaboration, the Cochrane Library. Currently there are only 27 systematic reviews in the Cochrane Library directly related to people with severe mental illness. These reviews deal not only with pharmacological treatments (beta-blockers, clozapine, fluphenazine, risperidone and zuclopenthixol for schizophrenia; anticholinergics, benzodiazepines, calcium channel blockers, cholinergics, gamma-aminobutyric acid agonists, vitamin E and miscellaneous treatments for neuroleptic-induced tardive dyskinesia; antipsychotics for learning disability), but also with other forms of intervention such as case management, community mental health team management, family intervention, electroconvulsive therapy, intercessory prayer and long versus short hospitalisation for those with schizophrenia. In addition, nine protocols for schizophrenia and 13 for depression are available in this version of the Cochrane Library (Issue 1, 1998).

Cochrane Library
The Cochrane Library is an inexpensive electronic database, currently published every three months and distributed by Update Software (see Appendix). It contains several databases. The Cochrane Controlled Trials Register holds references to approximately 180 000 randomised or quasi-randomised trials identified by the members of the Collaboration. The Database of Abstracts of Reviews of Effectiveness is a register of already published systematic reviews that have been identified by methodical searches of journals (currently with 1852 references). The Cochrane Database of Systematic Reviews is the flagship of the Library; it contains all reviews undertaken and maintained by those within the Cochrane Collaboration. The Cochrane Database of Systematic Reviews has been supplied to all National Health Service libraries in the UK and abstracts of reviews are also available on the Internet (http://cochrane.co.uk/info/abstracts/abidx.htm). Mental health professionals and students can find this database and search for information regarding specific interventions for clinical problems. The Cochrane Database of Systematic Reviews is a new publication; it is filling up with regularly maintained reviews and is already a powerful teaching tool for those interested in the evaluation of care.

Back to the dilemma
So, currently, when you try to help people with mental illnesses or problems, what would you like to guide your practice?
The application of up-to-date evidence, along with intuition, common sense and wisdom, is increasingly desirable, and now possible. The Cochrane Collaboration has been likened, for better or worse, to the Human Genome Project. If the current expansion and effort of this organisation continue, the Cochrane Database of Systematic Reviews will soon contain hundreds of reviews relevant to all aspects of health care. It will clarify what is known and what is not known, solve some dilemmas and make others more acute.
My recent column calling for an end to the fiancée visa drew heavy fire from readers who described themselves as happily married to Russian or Asian women they met through Internet bridal agencies.
"It galls me that you would restrict the rights of citizens because you saw a few cases of illegal fraud in this area. American citizens should be able to marry who they want."
Another reader demanded to know what entitles me to push for legislation that would restrict the rights of American citizens.
"Who are you and what gives you the right?" he asked.
Those are two entirely reasonable questions. Here is who I am:
1. A person who has studied the complexity and inequity of immigration law for nearly twenty years. I see that, even if a certain non-immigrant visa (in this case, the K-1) is convenient for a tiny segment of the population, it should also serve the common good. If it does not, it should be either eliminated or restricted.
2. An avid student of census statistics. According to 2003 Census figures, the U.S. is home to more than 46 million women over 19 who are single, widowed or divorced. That is a powerful number of women.
3. A single man who occasionally navigates the cold and murky water where men interact with women. My female acquaintances in Los Angeles, San Francisco, Boston, New York, Washington D.C. and Miami unanimously agree that straight, unmarried men are in high demand. Furthermore, my friends advise, if those bachelors are professionals or on the ball in any other way, their desirability increases.
4. A teacher who has directly observed five marriages between American citizens and fiancée visa brides. The tally: two ended in divorce; one in suicide. The two other couples have been married less than two years…although one woman recently said, "I'm bored."
Those who defend the fiancée visa have one basic premise: that there are simply not enough suitable women in the US worthy of their hand.
Their argument is ludicrous on the face of it. See points #2 and #3 above.
Assume that I am suddenly overwhelmed with the urge to marry. And let's also assume that no matter which traditional avenues I pursue, I can't find a worthy woman.
I could pay $3,500 to a "romance tour company" like Love Me.Com and travel to Odessa in the Ukraine, a distance of approximately 6,000 miles from my Lodi, CA, home.
Or if I am not quite ready to embark on that journey, I could go online to the San Francisco Chronicle personals, enter that I am a man seeking a woman and that I am willing to travel up to 250 miles within California to meet her. (If I am prepared to fly 6,000 miles to meet a woman, I sure as heck should be okay with driving 250 miles.)
More than 10 pages of candidates pop up on the Chronicle. Given that every major city has personal ads in the daily and weekly newspapers and that new prospects add their names each day, there is obviously no shortage of women looking for husbands.
"I spent 18 years looking for a conservative Christian lady in America without success. Oh, I could have married many times but not one had the good heart I was seeking."
A Google search for "Christian singles" returned hundreds of sites: Christian dating, Christian mingling, Christian chat rooms, Christian cafes and Christian magazines.
Perhaps a Christian man in search of a Christian woman could come up empty. But the odds heavily favor his success…if he is sincere about seeking a mate.
Finally, let's assume that in my hypothetical search I have to satisfy a particular sexual quirk. Even that is no deterrent: see "Alternative Lifestyle Dating" for examples.
In 1997, when matchmaking for profit on the Internet was in its infancy, "60 Minutes" did an expose titled, "Here Come the Brides: Mail-Order Brides a Booming Business."
Lesley Stahl interviewed Bob Burrows, the president of Cherry Blossoms, the world's largest Internet bridal agency.
"…Less appreciative and too competitive."
To find out how the international bridal game really works, "60 Minutes" sent its cameraman Rick Weiss to the Philippines to go undercover.
Weiss first went to an introductory party hosted by Asian Rose president Mike Tesatorri. Throughout the introductions, Tesatorri referred to the women as "babes" and to one as "a real smart chickie."
"We went to a gigantic department store in the middle of a huge sale. Instead of, like, meeting a woman, you would meet the whole counter. And they'd all come up and shake your hand."
"60 Minutes" also discovered other unsavory aspects of the Internet marriage scam.
Families often encourage women to put their names on the Internet. They hope that when a marriage takes place, the bride will send money home.
No background checks are performed on the prospective grooms. Stahl asked Burrows, "A serial murderer could write you but there would be no screening?" Replied Burrows, "No."
There were multiple examples of abusive marriages that included wife beating, forced prostitution and murder.
Dan Stein, Executive Director for the Federation for American Immigration Reform, succinctly expressed my sentiments when he told Stahl that the fiancée visa business was nothing more than an "international meat market" that has "immigration as the goal of these marriages and not wedded bliss."
Who wins? The brides, who jump to the head of the immigration line. Good-bye, Philippines; hello, America.
The brides' families including eventually even their brothers and sisters.
The immigration lawyers who charge $300 an hour.
Joe Blow in Kansas. All he has to do is say the word and his fantasy woman might be sharing his bed – until he tires of her (or she of him).
And who loses? America. Already overcrowded, the US is now obliged to accept more people, often of very alien backgrounds—young women of childbearing age, by the way—for the sole purpose of satisfying the whims of selfish, single men.
The American tradition of marriage. Free choice in matrimony, an old and distinctively European custom, involves a completely different set of mutual commitments and obligations than an arranged marriage, or leasing a car. Different behavior patterns are entailed. This is no small change.
If the U.S. is serious about reforming immigration, why not start with the policies that are the most obviously unnecessary? At the top of that list is the fiancée visa.
And as for the grousing bachelors, they can all try a little harder to find – and give – happiness right here in the United States, where women of all ages, ethnicities and religions are eagerly awaiting courtship.
THE EFFECT OF GRAPHITE TUBE CONDITION ON MEASURED TRACE Pb CONCENTRATIONS IN ETAA STUDIES This investigation evaluated the variation in the quantitative parameters of ETAA by comparing data recorded with fired (old) and new graphite tubes (furnaces) under identical experimental conditions. It was found that the results (for Pb) for a tube that had been fired 100 times differed significantly from those obtained with a new tube. This led to the inference that it would be appropriate to include the number of replicate firings in reported data associated with ETAA investigations.
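To illustrate the kind of old-vs-new tube comparison the abstract describes, a minimal sketch follows; the replicate Pb readings and their units are hypothetical assumptions, not data from the study:

# Hypothetical replicate Pb readings for a new tube vs. a tube fired ~100 times
from scipy import stats

pb_new_tube = [10.1, 10.4, 9.9, 10.2, 10.0, 10.3]  # assumed values, ng/mL
pb_fired_tube = [8.7, 8.9, 9.1, 8.5, 8.8, 9.0]     # assumed values, ng/mL

t_stat, p_value = stats.ttest_ind(pb_new_tube, pb_fired_tube)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 flags a significant difference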
Is terrorism against Israel really more justified than terrorism against Norway?
In a recent interview, Norway's Ambassador to Israel suggested that Hamas terrorism against Israel is more justified than the recent terrorist attack against Norway. His reasoning is that, "We Norwegians consider the occupation to be the cause of the terror against Israel." In other words, terrorism against Israeli citizens is the fault of Israel. The terrorism against Norway, on the other hand, was based on "an ideology that said that Norway, particularly the Labor Party, is foregoing Norwegian culture." It is hard to imagine that he would make such a provocative statement without express approval from the Norwegian government.
I can't remember many other examples of so much nonsense compressed into so short an interview. First of all, terrorism against Israel began well before there was any "occupation". The first major terrorist attack against Jews who had long lived in Jerusalem and Hebron came in 1929, when the leader of the Palestinian people, the Grand Mufti of Jerusalem, ordered a religiously motivated terrorist attack that killed hundreds of religious Jews, many old, some quite young. Terrorism against Jews continued through the 1930s. Once Israel was established as a state, but well before it captured the West Bank, terrorism became the primary means of attacking Israel across the Jordanian, Egyptian and Lebanese borders. If the occupation is the cause of the terror against Israel, what was the cause of all the terror that preceded any occupation?
I was not surprised to hear such ahistorical bigotry from a Norwegian Ambassador. Norway is the most anti-Semitic and anti-Israel country in Europe today. I know, because I experienced both personally during a recent visit and tour of universities. No university would invite me to lecture unless I promised not to discuss Israel. Norway forbids Jewish ritual slaughter, but not Islamic ritual slaughter. Its political and academic leaders openly make statements that cross the line from anti-Zionism to anti-Semitism, such as when Norway's former Prime Minister condemned Barack Obama for appointing a Jew as his Chief of Staff. No other European leader would make such a statement and get away with it. In Norway, this bigoted statement was praised, as were similar statements made by a leading academic.
The very camp that was attacked by the lone terrorist was engaged in an orgy of anti-Israel hatred the day before the shooting. Yet I would not ever claim that it was Norway's anti-Semitism that "caused" the horrible act of terrorism against young Norwegians.
The causes of terrorism are multifaceted but at bottom they have a common cause: namely a belief that violence is the proper response to policies that the terrorists disagree with. The other common cause is that terrorism has often been rewarded. Norway, for example, has repeatedly rewarded Palestinian terrorism against Israel, while punishing Israel for its efforts to protect its civilians. While purporting to condemn all terrorist acts, the Norwegian government has sought to justify Palestinian terrorism as having a legitimate cause. This clearly is an invitation to continued terrorism.
It is important for the world never to reward terrorism by supporting the policies of those who employ it as an alternative to reasoned discourse, diplomatic resolution or political compromise.
I know of no reasonable person who has tried to justify the terrorist attacks against Norway. Yet there are many Norwegians who not only justify terrorist attacks against Israel, but praise them, support them, help finance them, and legitimate them.
The world must unite in condemning and punishing all terrorist attacks against innocent civilians, regardless of the motive or purported cause of the terrorism. Norway, as a nation, has failed to do this. It wants us all to condemn the terrorist attack on its civilians, and we should all do that, but it refuses to live by a single standard.
Nothing good ever comes from terrorism, so don't expect the Norwegians to learn any lessons from their own victimization. As the Ambassador made clear in his benighted interview, "those of us who believe [the occupation to be the cause of the terror against Israel] will not change their minds because of the attack in Oslo." In other words, they will persist in their bigoted view that Israel is the cause of the terrorism directed at it, and that if only Israel were to end the occupation (as it offered to do in 2000-2001 and again in 2007), the terrorism would end. Even Hamas, which Norway supports in many ways, has made clear that it will not end its terrorism as long as Israel continues to exist. Hamas believes that Israel's very existence is the cause of the terrorism against it. That sounds a lot like the ranting of the man who engaged in the act of terrorism against Norway.
The time is long overdue for Norwegians to do some deep soul searching about their sordid history of complicity with all forms of bigotry ranging from the anti-Semitic Nazis to the anti-Semitic Hamas. There seems to be a common thread. |
Novel remote sensor systems: design, prototyping, and characterization We have designed and tested a prototype TRL4 radio-frequency (RF) sensing platform containing a transceiver that interrogates a passive carbon nanotube (CNT)-based sensor platform. The transceiver can be interfaced to a server technology such as a Bluetooth® or Wi-Fi device for further connectivity. The novelty of a very-low-frequency (VLF) implementation in the transceiver design will ultimately enable deep penetration into the ground or metal structures to communicate with buried sensing platforms. The sensor platform generally consists of printed electronic devices made of CNTs on flexible poly(ethylene terephthalate) (PET) and Kapton® substrates. This novel remote sensing system can be integrated with both passive and active sensing platforms. It offers unique characteristics suitable for a variety of sensing applications. The proposed sensing platforms can take on different form factors and the RF output of the sensing platforms could be modulated by humidity, temperature, pressure, strain, or vibration signals. Resonant structures were designed and constructed to operate in the very-high-frequency (VHF) and VLF ranges. In this presentation, we will report results of our continued effort to develop a commercially viable transceiver capable of interrogating the conformally mounted sensing platforms made from CNTs or silver-based nanomaterials on polyimide substrates over a broad range of frequencies. The overall performance of the sensing system with different sensing elements and at different frequency ranges will be discussed.
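As background on the resonant structures mentioned above, the resonant frequency of an ideal LC tank is f0 = 1/(2π√(LC)); a small sketch with assumed component values spanning the VLF and VHF ranges (the values are illustrative, not the paper's actual designs):

import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC circuit, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

print(f"VLF example (10 mH, 10 nF): {resonant_frequency_hz(10e-3, 10e-9) / 1e3:.1f} kHz")    # ~15.9 kHz
print(f"VHF example (100 nH, 10 pF): {resonant_frequency_hz(100e-9, 10e-12) / 1e6:.1f} MHz")  # ~159.2 MHz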
Transcatheter aortic valve replacement: impact of pre-procedural FEops HEARTguide assessment on device size selection in borderline annulus size cases Objectives The aim of this study is to evaluate device size selection in patients within the borderline annulus size range undergoing transcatheter aortic valve replacement (TAVR) and to assess if pre-procedural patient-specific computer simulation will lead to the selection of a different device size than standard of care. Background In TAVR, appropriate device sizing is imperative. In borderline annulus size cases no standardised technique for tailored device size selection is currently available. Pre-procedural patient-specific computer simulation can be used, predicting the risk for paravalvular leakage (PVL) and need for permanent pacemaker implantation (PPI). Methods In this multicentre retrospective study, 140 patients in the borderline annulus size range were included. Device size selection had been left to the discretion of the operator. After TAVR, in 24 of the 140 patients, patient-specific computer simulation calculated the most appropriate device size expected to give the lowest risk for PVL and need for PPI. In these 24 patients, device size selection based on patient-specific computer simulation was compared with standard-of-care device size selection relying on a standardised matrix (Medtronic). Results In a significant proportion of the 140 patients (26.4%) a different device size than recommended by the matrix was implanted. In 10 of the 24 patients (41.7%) in whom a computer simulation was performed, a different device size was recommended than by means of the matrix. Conclusions Device size selection in patients within the borderline annulus size range is still ambiguous. In these patients, patient-specific computer simulation is feasible and can contribute to a more tailored device size selection.

What's new?
Device size selection in transcatheter aortic valve replacement (TAVR) patients within the borderline annulus size range is still ambiguous and a standardised technique is lacking. Pre-procedural patient-specific computer simulation (FEops HEARTguide; FEops, Ghent, Belgium) can be used in TAVR to predict the risk for moderate/severe paravalvular leakage (PVL) and the occurrence of conduction disturbances. Pre-procedural patient-specific computer simulation can contribute to a more tailored device size selection in TAVR patients within the borderline annulus size range, potentially lowering the risk for moderate/severe PVL and the need for permanent pacemaker implantation.

Introduction In transcatheter aortic valve replacement (TAVR), pre-procedural planning consists of a multidetector computed tomography scan in combination with dedicated software (e.g. 3mensio, Pie Medical Imaging, Maastricht, The Netherlands) in order to select the appropriate device size. In particular, the aortic annulus perimeter is an essential measurement. Each valve manufacturer provides a standardised matrix for device size selection, which is considered the standard of care. However, in a certain subset of patients the measurements can lead to ambiguous conclusions that can be matched by two device sizes. In this case, device size selection is left to the discretion of the operator; a possible strategy is to implant the larger device size. However, choosing the larger device size is not always the best option.
Oversizing can lead to annulus rupture and conduction disturbances, while undersizing can lead to significant paravalvular leakage (PVL). Anticipating an increasing number of TAVR procedures in younger and low-risk patients, it becomes essential to find a standardised technique for appropriate device sizing in borderline annulus size cases to improve clinical outcomes. Recently, pre-procedural patient-specific computer simulation (FEops HEARTguide; FEops, Ghent, Belgium) was introduced as a potential tool for TAVR. This cloud-based technology uses acquired pre-procedural CT images to accurately predict the interaction between the implanted device and the surrounding anatomy. More specifically, simulations can be performed with different device sizes and implantation depths and subsequently the risk for PVL and need for permanent pacemaker implantation (PPI) can be predicted. Small observational studies have proven its ability to accurately predict PVL and the occurrence of conduction disturbances in TAVR patients. We hypothesised that in TAVR, device size selection in borderline annulus size cases is ambiguous. The goal of this study is to assess the feasibility of pre-procedural patient-specific computer simulation in borderline annulus size cases and to evaluate if it will lead to a different device size selection when compared to the standard of care. Study design In this multicentre retrospective study, data from 140 patients who had undergone TAVR with a self-expanding Medtronic Evolut R or Pro valve (Medtronic, Minneapolis, MN, USA) and who fell within a borderline annulus size range based on conventional CT measurements were collected. These 140 borderline annulus size cases were selected from a group of patients (n = 559) in which TAVR was performed between April 2015 and January 2020 at Sint-Jan Hospital in Bruges, Belgium or at St. Antonius Hospital in Nieuwegein, The Netherlands. All patients gave written informed consent. Then, pre-procedural CT images of 24 of the 140 patients were sent to an independent institution (FEops) and analysed by their reviewers (Fig. 1). The number of patients who underwent patient-specific computer simulation was limited due to a pre-defined financial budget. Funding was provided by FEops. Since device sizing recommendations in the Medtronic matrix contain precise cut-off values for the annulus perimeter for each valve size and a validated borderline annulus size range is currently not available, a borderline annulus size range (i.e. grey zone) was arbitrarily determined by using a margin of 2% for each cut-off value. PVL was evaluated by transthoracic echocardiogram 1 day after TAVR and was graded as: none/trace, mild, moderate or severe. Conduction disturbances were defined as the development of a high-degree atrioventricular block or a left bundle branch block. FEops HEARTguide technology Pre-procedural CT images were utilised to create a patient-specific three-dimensional model of the aortic root anatomy (Fig. 2). Implantation of two valve sizes and two implantation depths (high and mid-level) were then simulated. The models acquired by patient-specific computer simulation were then used to predict PVL and conduction disturbances. Computational fluid dynamics were utilised to assess PVL severity by modelling blood flow during diastole using a fixed pressure gradient of 32 mm Hg between the aorta and the left ventricle. This fixed pressure gradient is a mean value derived from a large study population.
Blood flow in the left ventricle outflow tract (LVOT) was expressed in millilitres per second, whereby a value of ≥ 16.0 ml/s correlated well with ≥ moderate PVL. The risk of developing conduction disturbances was predicted by measuring the exerted maximum device pressure on the area of interest (contact pressure, MPa) and the percentage of the area of interest being subjected to device pressure (contact pressure index, %). The region of the LVOT containing the atrioventricular conduction system was determined as the area of interest. A contact pressure value of > 0.39 MPa and a contact pressure index of > 14% were correlated with the development of a high-degree atrioventricular block or new left bundle branch block. The best-fitting device size with the ideal implantation depth could then be selected. Device size selection was based on the lowest risk for developing significant PVL and/or conduction disturbances. The FEops HEARTguide reviewers were blinded to the size of the implanted valve and clinical outcomes after TAVR implantation. Study endpoints The primary endpoint of this study is to assess the rate of 'discordant' device size selection in borderline annulus size cases. Discordant device selection is defined as the implantation of a different device size than that recommended by the matrix. Additionally, 24 patients with discordant device size selection underwent patient-specific computer simulation, after which device size selection by patient-specific computer simulation was compared to standard-of-care device size selection. Statistical analysis Statistical analysis included descriptive statistics. Categorical variables are presented as counts and percentages and continuous variables as mean ± standard deviation. All analyses were conducted with SPSS v.26 (IBM, Chicago, IL, USA). Device size selection Of the 140 patients, 37 (26.4%) received a valve with a different size than that recommended by the matrix (Matrix ≠ Operator). The same valve size as the one recommended by the matrix was implanted in 103 patients (73.6%) (Matrix = Operator). FEops analysed group Baseline characteristics In the discordant device size selection group (n = 37), 24 patients were randomly selected for additional patient-specific computer simulation. Baseline characteristics of the 24 patients are shown in Tab. 1. The mean age was 83.5 ± 4.3 years and 62.5% were female. Procedural and post-procedural data The Evolut R system was implanted in 50% of the patients (Tab. 2). Conduction disturbances were observed in 10 patients. PPI was required in 6 patients, whereas moderate/severe PVL was present in 1 patient. In 10 of these 24 patients (group A) the patient-specific computer simulation recommended a different valve size than the matrix (Matrix ≠ FEops). In the other 14 patients (group B) the patient-specific computer simulation recommended the same valve size as the matrix (Matrix = FEops) (Fig. 1). Paravalvular leakage In four group-A patients, patient-specific computer simulation concluded that the smaller device recommended by the matrix would carry a risk for moderate/severe PVL, whereas the larger device size would not involve such a risk (Tab. 3). The larger device was implanted in these four patients and did not result in moderate/severe PVL. Conduction disturbances In six patients a risk for developing conduction disturbances was predicted at mid-level implantation, whereas device deployment in a high position did not involve any risk of developing conduction disturbances (Tab. 3).
In five of these six patients (two patients in group A and three in group B), mid-level implantation resulted in a PPI. In the sixth patient (group B) no conduction disturbances could be seen despite mid-level implantation. In one other patient, the risk for conduction disturbances could not be calculated. After TAVR, a PPI was required for this patient. Discussion In this retrospective multicentre study, which comprised borderline annulus size cases, the operator decided to choose a different device size than recommended by the matrix in 37 of 140 patients (26.4%). The rationale of the operator to deviate from the matrix was multifactorial, mainly driven by personal experience. The application of patient-specific computer simulation in these borderline annulus size cases was intended to predict the outcome in a reproducible and standardised manner. In this study, the theoretical application of this technology in a subgroup of 24 patients in whom the device size used was not that recommended by the matrix led to a different valve size being selected in 10 patients (41.7%) when compared to standard-of-care device size selection. Patient-specific computer simulation has shown its potential in assessing device-host interactions in TAVR. De Jaegere et al. showed that patient-specific computer simulation can accurately predict the occurrence of moderate/severe PVL in patients undergoing TAVR. Rocatello et al. revealed that two simulation-based parameters (contact pressure and contact pressure index) were predictive of developing conduction abnormalities (high-degree atrioventricular block or left bundle branch block) during TAVR. This was confirmed by Dowling et al. in patients with bicuspid aortic disease. El Faquir et al. concluded that device size selection in TAVR patients is more intricate and that discordance can be present between standard-of-care device sizing and device sizing based on patient-specific computer simulation. The present study has confirmed this finding in patients within the borderline annulus size range. Additionally, this study has shown that a substantial proportion of the patients undergoing TAVR should be considered a part of the borderline annulus size range group. This was the case in 25% of our TAVR patients. We can conclude that implementation of patient-specific computer simulation is feasible in borderline annulus sizing range situations and that it can lead to a different device size selection as well as the recommendation for a specific implantation depth. In our study, a larger device size was advised on the basis of patient-specific computer simulation in four patients to prevent moderate/severe PVL. However, consistently choosing the larger device size is not always the best option taking into consideration the potential risk for annulus rupture and the need for PPI. Furthermore, it is indeed common practice to aim for high implantation to avoid pressure being exerted on the conduction system. Nevertheless, patient-specific computer simulation can provide us with information concerning in which patients device deployment in a high position is crucial to prevent the need for PPI. In our study, a PPI could have been prevented in five patients if high implantation had been used. Thus, a more tailored approach is required during device size selection of TAVR patients considered to be in the borderline annulus size range. We believe that pre-procedural patient-specific computer simulation has the potential to play a key role in this matter.
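To make the reported thresholds concrete, here is a minimal sketch of screening one simulated device size and implantation depth against them. This is an illustration, not the FEops software; combining the two conduction metrics into a single rule is our reading of the methods, and the function name and input values are hypothetical:

# Published thresholds: >= 16.0 ml/s predicted LVOT leak flow ~ moderate/severe PVL;
# contact pressure > 0.39 MPa with contact pressure index > 14% ~ conduction risk.
def assess_simulation(leak_flow_ml_s: float,
                      contact_pressure_mpa: float,
                      contact_pressure_index_pct: float) -> dict:
    """Classify one simulated configuration by the thresholds reported above."""
    return {
        "pvl_risk": leak_flow_ml_s >= 16.0,
        "conduction_risk": (contact_pressure_mpa > 0.39
                            and contact_pressure_index_pct > 14.0),
    }

# Hypothetical mid-level implantation of the larger device size
print(assess_simulation(9.0, 0.45, 18.0))  # {'pvl_risk': False, 'conduction_risk': True}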
Importantly, patient-specific computer simulation is also applicable for other transcatheter heart valve systems. In our study, for practical reasons, only patients in which an Evolut R/Pro valve was implanted were included. A randomised controlled trial is an essential first step to assess if device size selection by patient-specific computer simulation in patients within the borderline annulus size range will indeed lead to better clinical outcomes compared to standard-of-care device size selection. Lastly, future studies will be needed to validate and define the borderline annulus size range. Limitations This study has several limitations, the first being the small sample size. Second, in this study we arbitrarily chose a 2% margin for each cut-off value to define the borderline annulus size range. This margin has not been validated. Moreover, this is an observational study in which device size selection was evaluated by two modalities. A randomised controlled trial is needed to assess whether clinical outcomes can be improved by the use of patient-specific computer simulation. Finally, the accuracy of the patient-specific computer simulation is susceptible to improvement: earlier published data revealed a calculated sensitivity and specificity of 0.72 and 0.78, respectively, for predicting moderate/severe PVL and 0.95 and 0.54, respectively, for predicting the development of conduction disturbances for a contact pressure index of 14%. By adding a contact pressure value of > 0.39 MPa, the accuracy of predicting conduction disturbances was increased. These limitations of patient-specific computer simulation could be observed in our study as well. Conclusion Device size selection in TAVR patients considered to be in the borderline annulus size range is still ambiguous. Our results show that patient-specific computer simulation is feasible in these cases and that it may contribute to a tailored device size selection, decreasing the risk for significant PVL and PPI need. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
(CNN) The results are in -- the Princeton Review has released its annual ranking of 380 colleges . Setting aside other categories such as "Best Professors" or "Happiest Students," the hotly anticipated title of top party school in the nation went to the University of Illinois Urbana-Champaign.
The title is one of 62 categories but it's, perhaps, the most provocative.
Robert Franek, publisher for the Princeton Review, said that every college the company reviewed has excellent academics and that the rankings are a reflection of campus culture.
"Our 62 ranking lists provide students with a way to see the types of colleges that could help them achieve their future goals and dreams," Franek said in a news release
University Chancellor Phyllis Wise said the ranking was not scientific and called it a "promotion" by the Princeton Review.
"It's disappointing that, once again, Princeton Review is promoting this pseudo-ranking as though it were meaningful," Wise said in a statement to CNN. "It's insulting to all of our students, since they are here to prepare to become leaders of their generation."
She said that the university's graduation rates and achievements of alumni show that students take academics seriously.
The University of Illinois has been in the Top 5 ranking in prior years, but this was its first year atop the list. The Midwest is well-represented in the party-school rankings with University of Iowa coming in second and University of Wisconsin-Madison third.
These schools stand in sharp contrast to Brigham Young University, which topped the list of "Stone-Cold Sober Schools." Other categories included "Best College Dorms" won by Bennington College in Vermont, "Best College Library" won by Yale University and "Best Athletic Facilities" won by Kenyon College in Ohio.
The Princeton Review creates the rankings after surveying a total of 136,000 students from the 380 colleges. Its survey includes 80 questions about student life, academics and administration. The company offers tutoring and test preparation as well. |
from collections import defaultdict
from typing import DefaultDict, Iterable, List, Tuple, TypeVar

KT = TypeVar("KT")  # category key type
VT = TypeVar("VT")  # grouped value type

def categorizedLists(
    pairs: Iterable[Tuple[KT, VT]]
) -> DefaultDict[KT, List[VT]]:
    """Group (category, value) pairs into lists keyed by category."""
    valuesByCategory: DefaultDict[KT, List[VT]] = defaultdict(list)
    for category, value in pairs:
        valuesByCategory[category].append(value)
    return valuesByCategory
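A quick usage sketch (hypothetical data):

pairs = [("fruit", "apple"), ("veg", "carrot"), ("fruit", "pear")]
grouped = categorizedLists(pairs)
print(dict(grouped))  # {'fruit': ['apple', 'pear'], 'veg': ['carrot']}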
// ChunkHealth returns the health of the chunk which is defined as the percent
// of parity pieces remaining.
func (sf *UploFile) ChunkHealth(index int, offlineMap map[string]bool, goodForRenewMap map[string]bool) (float64, float64, uint64, error) {
sf.mu.Lock()
defer sf.mu.Unlock()
chunk, err := sf.chunk(index)
if err != nil {
return 0, 0, 0, errors.AddContext(err, "failed to read chunk")
}
return sf.chunkHealth(chunk, offlineMap, goodForRenewMap)
} |
WILTON - Steven Wescott was headed home after grocery shopping with his wife and 2-year-old daughter Saturday night around 8:30 p.m.
"I looked over and saw him coming right at the car. I was like, 'What the?' and then smash," he said.
They were at a stop sign at the corner of Maple Avenue and Smith Bridge Road when a Jeep slammed into his Toyota.
"He started backing up right after the accident and so I got out of the car and he rolled down his window down and was like, I'm so sorry, I'm so sorry," Wescott said.
Wescott didn't know it at the time, but he says it was Michael Vanyo, the Ichabod Crane Central School District superintendent, who was driving that Jeep.
Vanyo stayed on scene while Steven's wife called 911. But Wescott says Vanyo left about five minutes later.
"He took off while we were both distracted," Westcott said. "My wife started yelling at him and then we realized the bumper was there with his license plate on it."
That's what Wescott says led Saratoga County sheriff's investigators to Vanyo so quickly.
Wescott later learned from our newscast Monday night that Vanyo is the superintendent of the Ichabod Crane schools.
"I couldn't believe somebody in that position would do that, especially leave the scene with a family. They're supposed to be overseeing children's education and everything," Wescott said.
Wescott reached out to NewsChannel 13 to make sure the Board of Education knows it's not just "property damage" that was involved; his family could have been seriously hurt.
"If he had been a foot or two more to the right he would've slammed right into my door instead of the front end of my car," Westcott sa. |
Changing Attitudes Toward the Ethics of Tax Evasion: An Empirical Study of 10 Transition Economies This paper analyzes data on tax evasion that were collected over two different periods for ten transition economies. Comparisons are made between the earlier and later data to determine whether attitudes toward tax evasion have changed over time. The study found that some countries became significantly more tolerant of tax evasion over time while other countries became less tolerant of tax evasion. There was no significant difference between earlier and later attitudes toward tax evasion for two countries.
Pressuring, shearing, torsion and extension of a circular tube or bar of cylindrically anisotropic material One of the novel features of the present paper is that we have written the equation of equilibrium and the stress-strain law of an inhomogeneous anisotropic linear elastic material in a compact form for a cylindrical coordinate system using matrix notation. For a two-dimensional deformation the result resembles Stroh's sextic formalism in a rectangular coordinate system. We then consider the material to be cylindrically anisotropic, meaning that the elastic stiffnesses referred to a cylindrical coordinate system are constants. The problem of a circular tube subjected to a uniform normal stress and shearing stresses at the inner and outer surfaces of the tube is studied. Also studied are the axial extension and torsion of the tube. Unlike isotropic materials, for which the applied normal stress (or shear stress) induces only the normal (or shear) stress, all three displacement components and most of the six stress components are nonzero for general anisotropic materials. This is particularly interesting for the uniform axial extension of the tube. For an isotropic material the stress σ_33 is the only non-zero, and uniform, stress inside the tube. For a cylindrically anisotropic material the stresses σ_rr, σ_θθ and σ_θ3 are also non-zero. Moreover, they depend on r and are not uniform. A solid cylinder or a cylinder with a pin hole is a special case of a tube. It is shown that, for the loads mentioned above, including the axial extension, the stress may be unbounded at the pinhole.
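For orientation, the equilibrium equations that the paper writes compactly in matrix form are, in standard component form in cylindrical coordinates (r, θ, x3) with no body force (a textbook statement, not the paper's own notation):

\begin{aligned}
&\frac{\partial \sigma_{rr}}{\partial r} + \frac{1}{r}\frac{\partial \sigma_{r\theta}}{\partial \theta} + \frac{\partial \sigma_{r3}}{\partial x_3} + \frac{\sigma_{rr}-\sigma_{\theta\theta}}{r} = 0,\\
&\frac{\partial \sigma_{r\theta}}{\partial r} + \frac{1}{r}\frac{\partial \sigma_{\theta\theta}}{\partial \theta} + \frac{\partial \sigma_{\theta 3}}{\partial x_3} + \frac{2\sigma_{r\theta}}{r} = 0,\\
&\frac{\partial \sigma_{r3}}{\partial r} + \frac{1}{r}\frac{\partial \sigma_{\theta 3}}{\partial \theta} + \frac{\partial \sigma_{33}}{\partial x_3} + \frac{\sigma_{r3}}{r} = 0.
\end{aligned}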
////////////////////////////////////////////////////////////////////////////////
// Rendering/UITextureManager.cpp (Leggiero/Modules - LegacyUI)
//
// Texture Manager Implementation
////////////////////////////////////////////////////////////////////////////////
// My Header
#include "UITextureManager.h"
// Leggiero.Utility
#include <Utility/Sugar/Finally.h>
// Leggiero.LegacyUI
#include "../Loader/IUIAssetLoader.h"
namespace Leggiero
{
namespace LUI
{
//////////////////////////////////////////////////////////////////////////////// UITextureManager
//------------------------------------------------------------------------------
UITextureManager::UITextureManager(IUIAssetLoader &assetLoader)
: m_assetLoader(assetLoader)
{
}
//------------------------------------------------------------------------------
UITextureManager::~UITextureManager()
{
}
//------------------------------------------------------------------------------
std::shared_ptr<UICachedTexture> UITextureManager::GetTexture(const UITextureNameType &textureName)
{
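// Fast path: look the texture up in the cache under a shared read lock.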
int readLockResult;
while ((readLockResult = pthread_rwlock_rdlock(&m_cacheLock.GetLock())) == EAGAIN)
{
sched_yield();
}
if (readLockResult == 0)
{
pthread_rwlock_t *listLock = &m_cacheLock.GetLock();
auto releaseLockFunc = [listLock]() mutable { pthread_rwlock_unlock(listLock); };
FINALLY_OF_BLOCK(_RELEASE_READ_LOCK, releaseLockFunc);
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
return findIt->second;
}
}
else
{
// Read lock unavailable; fall back to a best-effort unsynchronized lookup
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
return findIt->second;
}
}
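// Cache miss: load the texture outside the lock, then publish it under the write lock.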
std::shared_ptr<UICachedTexture> loadedTexture(_LoadTexture(textureName));
if (!loadedTexture)
{
return nullptr;
}
int lockResult = pthread_rwlock_wrlock(&m_cacheLock.GetLock());
if (lockResult == 0)
{
pthread_rwlock_t *lockCopy = &m_cacheLock.GetLock();
auto releaseLockFunc = [lockCopy]() mutable { pthread_rwlock_unlock(lockCopy); };
FINALLY_OF_BLOCK(_RELEASE_LOCK, releaseLockFunc);
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
// Another thread cached this texture first; reuse its entry
return findIt->second;
}
m_textureCache.insert(std::make_pair(textureName, loadedTexture));
}
return loadedTexture;
}
//------------------------------------------------------------------------------
void UITextureManager::PreLoadTexture(const UITextureNameType &textureName)
{
int readLockResult;
while ((readLockResult = pthread_rwlock_rdlock(&m_cacheLock.GetLock())) == EAGAIN)
{
sched_yield();
}
if (readLockResult == 0)
{
pthread_rwlock_t *listLock = &m_cacheLock.GetLock();
auto releaseLockFunc = [listLock]() mutable { pthread_rwlock_unlock(listLock); };
FINALLY_OF_BLOCK(_RELEASE_READ_LOCK, releaseLockFunc);
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
return;
}
}
else
{
// Read lock unavailable; fall back to a best-effort unsynchronized lookup
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
return;
}
}
std::shared_ptr<UICachedTexture> loadedTexture(_LoadTexture(textureName));
if (!loadedTexture)
{
return;
}
int lockResult = pthread_rwlock_wrlock(&m_cacheLock.GetLock());
if (lockResult == 0)
{
pthread_rwlock_t *lockCopy = &m_cacheLock.GetLock();
auto releaseLockFunc = [lockCopy]() mutable { pthread_rwlock_unlock(lockCopy); };
FINALLY_OF_BLOCK(_RELEASE_LOCK, releaseLockFunc);
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(textureName);
if (findIt != m_textureCache.end())
{
return;
}
m_textureCache.insert(std::make_pair(textureName, loadedTexture));
}
}
//------------------------------------------------------------------------------
void UITextureManager::RegisterExternalTexture(const UITextureNameType &textureName, std::shared_ptr<Graphics::GLTextureResource> texture, std::shared_ptr<Graphics::TextureAtlasTable> atlasTable)
{
std::shared_ptr<UICachedTexture> textureEntry(std::make_shared<UICachedTexture>(texture, atlasTable));
int lockResult = pthread_rwlock_wrlock(&m_cacheLock.GetLock());
if (lockResult == 0)
{
pthread_rwlock_t *lockCopy = &m_cacheLock.GetLock();
auto releaseLockFunc = [lockCopy]() mutable { pthread_rwlock_unlock(lockCopy); };
FINALLY_OF_BLOCK(_RELEASE_LOCK, releaseLockFunc);
m_textureCache[textureName] = textureEntry;
}
}
//------------------------------------------------------------------------------
std::shared_ptr<UICachedTexture> UITextureManager::GetCachedTexture(const UITextureNameType &savedTextureName)
{
int readLockResult;
while ((readLockResult = pthread_rwlock_rdlock(&m_cacheLock.GetLock())) == EAGAIN)
{
sched_yield();
}
if (readLockResult == 0)
{
pthread_rwlock_t *listLock = &m_cacheLock.GetLock();
auto releaseLockFunc = [listLock]() mutable { pthread_rwlock_unlock(listLock); };
FINALLY_OF_BLOCK(_RELEASE_READ_LOCK, releaseLockFunc);
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(savedTextureName);
if (findIt != m_textureCache.end())
{
return findIt->second;
}
}
else
{
// Read lock unavailable; fall back to a best-effort unsynchronized lookup
std::unordered_map<UITextureNameType, std::shared_ptr<UICachedTexture> >::iterator findIt = m_textureCache.find(savedTextureName);
if (findIt != m_textureCache.end())
{
return findIt->second;
}
}
return nullptr;
}
//------------------------------------------------------------------------------
void UITextureManager::ClearCache()
{
int lockResult = pthread_rwlock_wrlock(&m_cacheLock.GetLock());
if (lockResult == 0)
{
pthread_rwlock_t *lockCopy = &m_cacheLock.GetLock();
auto releaseLockFunc = [lockCopy]() mutable { pthread_rwlock_unlock(lockCopy); };
FINALLY_OF_BLOCK(_RELEASE_LOCK, releaseLockFunc);
m_textureCache.clear();
}
else
{
// Write lock unavailable; clear anyway as a best effort
m_textureCache.clear();
}
}
//------------------------------------------------------------------------------
std::shared_ptr<UICachedTexture> UITextureManager::_LoadTexture(const UITextureNameType &textureName)
{
std::shared_ptr<Graphics::GLTextureResource> texture(m_assetLoader.LoadTexture(textureName));
if (!texture)
{
return nullptr;
}
return std::make_shared<UICachedTexture>(texture, m_assetLoader.LoadTextureAtlasTable(textureName, texture));
}
}
}
|
export * from './deleteInventory'
|
Living and Health Conditions Associated with Overweight and Obesity among Elderly. BACKGROUND The epidemiological and nutritional transition processes of the last decades underlie the rising trend of obesity in the elderly and are related to increased risk of chronic non-communicable diseases and decreased functional status. OBJECTIVE To analyze the association of demographic, socioeconomic, lifestyle and health-related factors with overweight and obesity in the elderly. DESIGN Cross-sectional study. SETTING Carried out in Campinas, São Paulo, Brazil, in 2011. PARTICIPANTS 452 non-institutionalized elderly (aged ≥60 years); half were users of a government-run soup kitchen and the other half were neighbors of the same sex. RESULTS Overweight frequency (BMI ≥25 and <30 kg/m2) was 44.5% and obesity (BMI ≥30 kg/m2) was 21.7%. In the multiple multinomial logistic regression model adjusted for sex, age group and economic class, there was a greater chance of overweight among those who reported dyslipidemia; those who reported arthritis/arthrosis/rheumatism and who once or more per week replaced supper with a snack were more likely to be obese. Elderly who did not leave home daily and reported diabetes had a higher chance of overweight and obesity. CONCLUSIONS Overweight and obesity are associated with worse living and health-related conditions, such as physical inactivity, changes in eating behaviors, and chronic diseases. Public health policies should encourage regular physical activity and healthy eating behaviors, focusing on the traditional diet, through nutritional education, in order to reduce the prevalence of overweight and obesity and chronic diseases.
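For readers unfamiliar with the modelling approach, a minimal sketch of a multinomial logistic regression of weight status on a few covariates follows; the data, variable names and library choice are illustrative assumptions, not the study's dataset or software:

# Weight status coded 0 = normal, 1 = overweight, 2 = obese (hypothetical data)
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "female": rng.integers(0, 2, 300),
    "age_group": rng.integers(0, 3, 300),
    "leaves_home_daily": rng.integers(0, 2, 300),
    "diabetes": rng.integers(0, 2, 300),
})
weight_status = rng.integers(0, 3, 300)

X = sm.add_constant(df)
result = sm.MNLogit(weight_status, X).fit(disp=False)
print(result.summary())  # coefficients are log-odds relative to the reference category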
White Matter Injury Found to Be Preclinical Marker for Age-related Cognitive Decline: How to Interpret the Latest Data Early white matter injury may be a preclinical marker for age-related cognitive decline and for Alzheimer's disease (AD), but the relationship between cognitive decline, white matter injury, and other neurodegenerative processes remains to be clarified. Three separate studies appearing in the July 25 online edition of Neurology add weight to the argument, supported by previous research, that white matter lesions are complicit in the development of age-related cognitive impairment and AD. The studies include a report by researchers at Oregon Health & Science University and the department of neurology at the Veterans Affairs Medical Center in Portland on the white matter hyperintensity (WMH) burden preceding mild cognitive impairment (MCI); one on microstructural white matter changes in cognitively normal individuals at risk of amnestic MCI by researchers at the University of New South Wales; and one on MRI-leukoaraiosis thresholds and the phenotypic expression of dementia by researchers at the University of Florida, the University of Illinois, and Drexel University. Experts in neurodegeneration and cognitive impairment who reviewed the reports agree the three studies highlight in novel ways the role of white matter lesions in cognitive decline. "White matter is now included in the discussion about the pathogenesis of dementia," Christopher Filley, MD, professor and chief of neurology at Denver VA Medical Center and interim director of the Alzheimer's Disease and Cognition Center, told Neurology Today. "Far from being a bystander, white matter may be a key component in the development of dementia, and white matter dysfunction as measured with modern neuroimaging has been repeatedly demonstrated in patients with many dementing disorders. These three papers all add to this literature, addressing the contributions of white matter dysfunction to models of incipient dementia in older people," he continued. "All three report new findings that illuminate early stages of the processes leading to late-life dementia."
Exposure to nickel from the metal equipment in the gym It remains unclear whether gym customers are exposed to any nickel from the metal equipment and whether the exposure is associated with the duration of contact. Therefore, the aim of this study was to ascertain exposure to nickel, measured through nickel concentration in the hair, in those exercising in a fitness gym. We enrolled 100 amateur athletes in one of the gyms in Almaty, Kazakhstan (all men, median age 30 (interquartile range (IQR) 10) years), exercising from 2 to 7 days a week for 40 to 180 minutes, and their age- and sex-matched controls who did not exercise. All subjects filled in questionnaires on exercising patterns, smoking and occupational exposure and then donated 0.25 g of head hair, in which nickel was measured using atomic absorption spectrophotometry. Hair nickel concentration ranged from 0 to 8.5 µg/g, with notable left-skewness towards low concentrations in both groups. Hair nickel concentration was not associated with age, smoking or occupation, but was significantly lower in amateur athletes compared to controls (median 0 (IQR 0.5) vs. 0.9 (IQR 1.4) µg/g). More days a week in the gym, a longer workout history, longer workout duration or supplement use did not increase the probability of being stratified into the high-exposure subgroup (defined as the 75th percentile of hair nickel concentration and higher); however, there were more smokers in the low-exposure group (p<0.05). With this mixed pattern of exposure, gym goers seem unlikely to be exposed to more nickel from the metal equipment in the gym; however, the exposure may depend on the specific alloy composition.

Introduction
Exercising in a gym has become quite a prevalent leisure activity in adults, given the significant benefit of regular exercise for cardiovascular health, mood and probably self-confidence. Little is known, however, about the adverse effects of attending the gym and the corresponding exposures. Gym goers may be exposed to a variety of chemicals inside gyms, including from metal bars and barbells. Those are usually produced of stainless steel, and the chemical composition of the latter may vary; selected metals in the steel, such as nickel, have been associated with adverse health effects in environmental and occupational studies. Thus, nickel is a known carcinogen 1,2 and may also cause allergic dermatitis. Very few studies, however, assess the exposure of those exercising in gyms to nickel. There is only one report, with a small sample, showing higher concentrations of nickel from contact with bars compared to non-exercising individuals 3. In that report, nickel was found both on the bars and on the skin of exercising individuals using an acid test. A case of allergic dermatitis was also reported in a regularly exercising individual 4, but no studies with larger samples have approached the issue of nickel exposure and its adverse health effects in gyms. Whether nickel found on the palm skin of gym customers may lead to higher nickel blood concentrations, therefore causing systemic effects, remains unknown. In professional sportsmen, nickel blood concentrations were found to be higher compared to controls 5, and given that nutritional intake of nickel in these two groups did not differ, such findings should raise some concern as to whether such elevated concentrations are associated with exposure in the gyms and have any negative impact on health.
With all that scarce evidence, it remains unclear whether gym customers are exposed to any nickel from the metal equipment and whether the exposure is associated with the duration of contact. Therefore, the aim of this study was to ascertain exposure to nickel, measured through nickel concentration in the hair, in those exercising in a fitness gym compared to non-exercising controls.

Ethical approval
This study was approved by the Committee on Bioethics of al-Farabi Kazakh National University. All subjects in this study provided written informed consent to participate and to donate a head hair sample for nickel analysis.

Recruitment and variables measured
We enrolled 100 customers from two gyms of one of the popular chains in Almaty, Kazakhstan. Anyone willing to participate and regularly exercising in a gym could be included in the study. The only exclusion criterion was female sex, as there were very few women in the gym. Subjects were invited to participate by authors DV, ZhT or AD in a random fashion, thus reducing selection bias. Data were collected in July and August 2018. Sample size calculation with a given statistical power did not seem feasible for this study, as we could not find any other similar analysis in the literature; therefore, we set the sample size at 100 subjects. We also enrolled sex- and age-matched controls, who were the subjects' friends or acquaintances, to ensure comparable lifestyle, eating habits and general interests and to control for confounding. Controls should not have exercised in a gym for at least 2 years prior to enrollment in the study. All subjects were asked to fill in a questionnaire 6, which consisted of a demographic part followed by detailed sections on exercising patterns, smoking, occupational exposure and the use of supplements. We asked the respondents how many days they normally attended the gym, what the duration of the usual workout was in minutes, how long their gym exposure history was, whether gloves were used in the gym, whether a contact allergy to metals in the gym was ever experienced, and whether any fitness tracker or supplement was used. We then detailed smoking history with a series of questions, stratified all subjects into never, former or daily smokers, and ascertained the number of cigarettes smoked a day along with the smoking duration in months or years. The occupational history section contained a series of questions on whether a subject was a student at the time of the survey, had any office employment or had any occupational exposure to metal.

Hair nickel concentration measurement
We measured hair nickel concentrations in all subjects and treated the concentration as a marker of exposure to nickel. Hair (at least 0.25 g) was cut from the occipital region. Hair samples were then washed using a non-ionic surface-active solution, then acetone and then deionized water. Weighed samples were treated with nitric acid (67%) and hydrogen peroxide (30%). Nickel concentrations in the samples were tested using atomic absorption spectrophotometry on a Perkin Elmer AAnalyst 400 with HGA 900 (USA), following an officially approved protocol 7. The lower limit of detection (LOD) in our analysis was 0.05 µg/g.

Statistical analysis
Hair nickel concentrations were the primary outcome in this analysis and were compared between the main and control groups using the non-parametric Mann-Whitney U-test, since all concentrations were left-skewed.
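For illustration, the group comparison described above can be sketched as follows; the concentration values are hypothetical, not the study data:

# Mann-Whitney U-test on left-skewed hair nickel concentrations (ug/g, assumed)
from scipy.stats import mannwhitneyu

athletes = [0.0, 0.0, 0.3, 0.5, 0.0, 0.4, 1.1, 0.0]
controls = [0.9, 1.2, 0.7, 1.8, 0.4, 2.3, 1.0, 0.8]

u_stat, p_value = mannwhitneyu(athletes, controls, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # p < 0.05 -> significant group difference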
Demographic attributes, smoking and occupational exposure were tested as predictors in bivariate models and compared between the groups. We used NCSS 12 (Utah, USA) for all computations. P<0.05 was considered significant.

Hair nickel concentration

Hair nickel concentration ranged from 0 to 8.5 µg/g, with a notable skew towards low concentrations. Thus, the 25th percentile was 0 µg/g; the 50th, 0.38 µg/g; and the 75th, 1.22 µg/g. A total of 56 (28%) subjects showed concentrations below the limit of detection (LOD); 52% of amateur athletes and 4% of those in the control group had nickel levels below the LOD (p<0.05). Hair nickel concentration was not associated with age, smoking or occupation, but was significantly lower in amateur athletes than controls (Table 1). Raw information on hair nickel concentration, in addition to all questionnaire answers, is available on OSF 6.

Variables in gym-goers

In the main group, gym attendance frequency ranged from 2 to 7 days a week; however, most subjects attended 3 times a week (67%). Workout duration ranged from 40 to 180 minutes; median, 90 (IQR 37.5) minutes. The overall gym exposure history ranged from 1 month to 30 years, with a median of 2 (IQR 4.4) years. Only 17% of those in the gym used gloves for weightlifting on a regular basis, and 4% had ever had dermatitis that they associated with the use of gym equipment. A total of 16% used a fitness tracker in the gym on a regular basis, and 56% used some sort of supplement to attain more visible results in the gym; there was no statistically significant correlation between these two variables.

Analysis of hair nickel concentration

When gym customers were stratified by the 75th percentile of hair nickel concentration (0.505 µg/g) into low- and high-exposure subgroups, we found no difference in age, the number of workout days a week, workout duration, overall exposure to gym equipment in years or supplement use (Table 2). Surprisingly, there were significantly more subjects wearing gloves, believed to protect the skin from contact with metal, in the high-exposure group. Similarly, the latter group had fewer smokers compared to those with lower nickel concentrations.

Discussion

To our knowledge, this is the first report on hair nickel concentrations in those attending the gym compared to controls, in which we could not confirm higher exposure to nickel in amateur athletes. Guided by pilot presentations suggesting that exposure to metal equipment in the gym may result in greater nickel absorption, we compared hair nickel concentrations in regular exercisers with those abstaining from the gym, but found higher hair nickel concentrations in the latter group. We conclude that it was not dermal contact with metal equipment in the gym, but fewer smokers or specific nutritional habits in the gym-goers group, that could explain their lower hair nickel concentrations. The sources of nickel in the population may range from absorption from food, smoking, place of residence and lifestyle habits to diverse occupational exposures. Despite some likelihood of exposure to nickel in those exercising in the gym, we could not find similar reports in the literature and could not compare the concentrations we found with other settings. However, there are plenty of other environmental and occupational publications with reported hair nickel concentrations. The most surprising finding of this analysis was the nickel concentration in controls.
Although we deliberately matched controls with exercisers to ensure similar eating patterns, their hair nickel concentrations were quite high and even exceeded the concentrations in occupationally exposed industrial workers 8. To allow for comparison between those exercising in the gym and controls, nutritional nickel consumption should be equal in both groups. Direct assessment of the amount of consumed nickel does not seem feasible in a regular setting; therefore, computational methods are often used in studies of athletes 5. However, such methods yield more approximation than accuracy and would therefore lead to a notable exposure classification bias. Hence, in our study, we preferred to enroll controls from friends, matched for age and sex, to allow for nickel consumption comparable between the main group and controls. The limitations of this analysis originate from its cross-sectional design. The overall sample size of 200 subjects may also limit statistical power. Another limitation is the use of matching rather than a detailed questionnaire on eating habits and a computational method to ascertain nickel consumption from food. Finally, we could not obtain detailed information on the metal composition of the steel used for the particular brand of metal equipment in the gym chain under study. Judging by anecdotal reports in the non-professional literature, metal equipment in gyms is very likely produced from steel with some nickel content, but we could not confirm whether the given equipment had any nickel in it, either from the original documentation or, alternatively, using acid nickel testing. To conclude, this pilot study of nickel exposure, measured through hair nickel concentration, in those contacting metal equipment in the gym failed to demonstrate greater hair nickel concentrations in the latter compared to their non-exercising friends.

Data availability

Underlying data: Raw data for this study, including basic demographic information, answers to the questionnaire and hair nickel levels, are available on OSF. DOI: https://doi.org/10.17605/OSF.IO/RQJ3Z 6.

Extended data: The questionnaire in the original (Russian) and in English is available on OSF. DOI: https://doi.org/10.17605/OSF.IO/RQJ3Z 6.

Grant information: The author(s) declared that no grants were involved in supporting this work.

accurate assessment of endogenous metal contents. Before washing, the samples were cut into small pieces (approximately 0.5 cm) and mixed to make a representative sample. Afterwards, each hair sample was washed in series with 5% detergent solution, 0.5% Triton X-100 solution and deionized water. First of all, the scalp hair sample was taken in a conical flask containing 50 mL of 5% detergent solution and mixed well. The flask contents were then shaken on an auto-shaker at 320 vibrations per minute for about 30 min. After leaving it at room temperature for at least 2 h, it was washed with plentiful water. Then, 30 mL of non-ionic detergent Triton X-100 (0.5% v/v) solution was added to each flask and again placed on the auto-shaker for 30 min. The samples were then washed with deionized water followed by drying in an electric oven overnight at 70 °C (Reference). The authors should similarly describe the complete digestion/mineralization process, e.g., the heating, the heating source, the temperatures used to complete the process and the duration, along with a reference.

Results: Explain Table 1 completely in the results section, for example and for reference.
Characteristics of the study subjects: The demographic parameters of the stomach cancer patients and healthy donors are displayed in Table 1.

Summary: In conclusion, the article has a clear objective and the approach is appropriate; however, the introduction does not provide any background on Ni toxicity, hazards or disease. The experiments and analyses were performed with enough technical rigor to allow confidence in the results, but they should be explained step by step, e.g., the collection and processing of hair samples. The use of a measuring unit is a good step, and its name should be shown in the tables as well as in the text where needed. Some global points of view, at least to discuss the results, should be highlighted from relevant published reports. The variables shown in the tables convey conclusive information. Occupational exposure names with durations should be added to Table 1. It seems the number of samples is limited, from which no clinical conclusion can be drawn. However, after analyzing the data, the results and tables are convincing, especially the elemental concentrations. I suggest this manuscript is suitable for indexing. Is the work clearly and accurately presented and does it cite the current literature? |
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from astropy.table import Table
from glue.core import Data
from astropy import units as u
from glue.core.coordinates import coordinates_from_header, coordinates_from_wcs
from specviz.third_party.glue.utils import SpectralCoordinates
from .utils import (mosviz_spectrum1d_loader, mosviz_spectrum2d_loader,
mosviz_cutout_loader, mosviz_level2_loader,
split_file_name)
__all__ = ['nirspec_spectrum1d_reader', 'nirspec_spectrum2d_reader',
'nirspec_level2_reader', 'pre_nirspec_spectrum1d_reader',
'pre_nirspec_spectrum2d_reader', 'pre_nircam_image_reader',
'pre_nirspec_level2_reader']
@mosviz_spectrum1d_loader("NIRSpec 1D Spectrum")
def nirspec_spectrum1d_reader(file_name):
with fits.open(file_name) as hdulist:
header = hdulist['PRIMARY'].header
tab = Table.read(file_name, hdu=1)
data = Data(label="1D Spectrum")
data.header = header
# This assumes the wavelength is in microns
data.coords = SpectralCoordinates(tab['WAVELENGTH'] * u.micron)
data.add_component(tab['WAVELENGTH'], "Wavelength")
data.add_component(tab['FLUX'], "Flux")
data.add_component(tab['ERROR'], "Uncertainty")
return data
@mosviz_spectrum2d_loader('NIRSpec 2D Spectrum')
def nirspec_spectrum2d_reader(file_name):
"""
Data loader for simulated NIRSpec 2D spectrum.
This function extracts the DATA, QUALITY, and VAR
extensions and returns them as a glue Data object.
It then uses the header keywords of the DATA extension
to determine the wavelengths.
"""
hdulist = fits.open(file_name)
data = Data(label="2D Spectrum")
data.header = hdulist['PRIMARY'].header
data.coords = coordinates_from_header(hdulist[1].header)
data.add_component(hdulist['SCI'].data, 'Flux')
    # NOTE (assumption): uncertainty is derived from the 'CON' extension as in
    # the original source; JWST products more commonly store it in 'ERR'.
    data.add_component(np.sqrt(hdulist['CON'].data), 'Uncertainty')
hdulist.close()
return data
@mosviz_level2_loader('NIRSpec 2D Level 2 Spectra')
def nirspec_level2_reader(file_name):
"""
Data Loader for level2 products.
Uses extension information to index
fits hdu list. The ext info is included
in the file_name as follows: <file_path>[<ext>]
"""
file_name, ext = split_file_name(file_name)
hdulist = fits.open(file_name)
data = Data(label="2D Spectra")
data.header = hdulist[ext].header
data.coords = coordinates_from_header(hdulist[ext].header)
data.add_component(hdulist[ext].data, 'Level2 Flux')
# TODO: update uncertainty once data model becomes clear
data.add_component(np.sqrt(hdulist[ext + 2].data), 'Level2 Uncertainty')
hdulist.close()
return data
@mosviz_spectrum1d_loader('Pre NIRSpec 1D Spectrum')
def pre_nirspec_spectrum1d_reader(file_name):
"""
Data loader for MOSViz 1D spectrum.
This function extracts the DATA, QUALITY, and VAR
extensions and returns them as a glue Data object.
It then uses the header keywords of the DATA extension
to determine the wavelengths.
"""
hdulist = fits.open(file_name)
    # make wavelength a separate component in addition to coordinate
# so you can plot it on the x axis
    # NOTE: the stop value below multiplies CRVAL1 by CDELT1, as in the
    # original source; a conventional linear dispersion axis would instead
    # use CRVAL1 + CDELT1 * (NAXIS1 - 1). Kept as-is pending verification.
    wavelength = np.linspace(hdulist['DATA'].header['CRVAL1'],
                             hdulist['DATA'].header['CRVAL1'] * hdulist['DATA'].header['CDELT1'],
                             hdulist['DATA'].header['NAXIS1'])[::-1]
data = Data(label='1D Spectrum')
data.header = hdulist['DATA'].header
data.add_component(wavelength, 'Wavelength')
data.add_component(hdulist['DATA'].data, 'Flux')
data.add_component(np.sqrt(hdulist['VAR'].data), 'Uncertainty')
hdulist.close()
return data
@mosviz_spectrum2d_loader('Pre NIRSpec 2D Spectrum')
def pre_nirspec_spectrum2d_reader(file_name):
"""
Data loader for simulated NIRSpec 2D spectrum.
This function extracts the DATA, QUALITY, and VAR
extensions and returns them as a glue Data object.
It then uses the header keywords of the DATA extension
to determine the wavelengths.
"""
hdulist = fits.open(file_name)
data = Data(label='2D Spectrum')
data.header = hdulist['DATA'].header
data.coords = coordinates_from_header(hdulist[1].header)
data.add_component(hdulist['DATA'].data, 'Flux')
data.add_component(np.sqrt(hdulist['VAR'].data), 'Uncertainty')
hdulist.close()
return data
@mosviz_cutout_loader('NIRCam Image')
def pre_nircam_image_reader(file_name):
"""
Data loader for simulated NIRCam image. This is for the
full image, where cut-outs will be created on the fly.
From the header:
If ISWFS is T, structure is:
- Plane 1: Signal [frame3 - frame1] in ADU
- Plane 2: Signal uncertainty [sqrt(2*RN/g + \|frame3\|)]
If ISWFS is F, structure is:
- Plane 1: Signal from linear fit to ramp [ADU/sec]
- Plane 2: Signal uncertainty [ADU/sec]
Note that in the later case, the uncertainty is simply the formal
uncertainty in the fit parameter (eg. uncorrelated, WRONG!). Noise
model to be implemented at a later date.
In the case of WFS, error is computed as SQRT(2*sigma_read + \|frame3\|)
which should be a bit more correct - ~Fowler sampling.
The FITS file has a single extension with a data cube.
The data is the first slice of the cube and the uncertainty
is the second slice.
"""
hdulist = fits.open(file_name)
data = Data(label='NIRCam Image')
data.header = hdulist[0].header
wcs = WCS(hdulist[0].header)
# drop the last axis since the cube will be split
data.coords = coordinates_from_wcs(wcs)
data.add_component(hdulist[0].data, 'Flux')
data.add_component(hdulist[0].data / 100, 'Uncertainty')
hdulist.close()
return data
@mosviz_level2_loader('Pre NIRSpec 2D Level 2 Spectra')
def pre_nirspec_level2_reader(file_name):
"""
THIS IS A TEST!
"""
#TODO The level 2 file has multiple exposures.
#TODO the level 2 test file has SCI extensions with different shapes.
#TODO
hdulist = fits.open(file_name)
data = Data(label='2D Spectra')
hdulist[1].header['CTYPE2'] = 'Spatial Y'
data.header = hdulist[1].header
# This is a stop gap fix to let fake data be ingested as
    # level 2 spectra. The level 2 file we have for testing
# right now has SCI extensions with different sized arrays
    # among them. It remains to be seen if this is an expected
# feature of level 2 spectra, or just a temporary glitch.
    # In case it's actually what level 2 spectral files look
# like, proper handling must be put in place to allow
# glue Data objects with different sized components. Or,
# if that is not feasible, to properly cut the arrays so
# as to make them all of the same size. The solution below
# is a naive interpretation of this concept.
x_min = 10000
y_min = 10000
for k in range(1, len(hdulist)):
if 'SCI' in hdulist[k].header['EXTNAME']:
x_min = min(x_min, hdulist[k].data.shape[0])
y_min = min(y_min, hdulist[k].data.shape[1])
# hdulist[k].header['CTYPE2'] = 'Spatial Y'
# wcs = WCS(hdulist[1].header)
# original WCS has both axes named "LAMBDA", glue requires unique component names
# data.coords = coordinates_from_wcs(wcs)
# data.header = hdulist[k].header
# data.add_component(hdulist[1].data['FLUX'][0], 'Flux')
count = 1
for k in range(1, len(hdulist)):
if 'SCI' in hdulist[k].header['EXTNAME']:
data.add_component(hdulist[k].data[0:x_min, 0:y_min], 'Flux_' + '{:03d}'.format(count))
count += 1
# data.add_component(1 / np.sqrt(hdulist[1].data['IVAR'][0]), 'Uncertainty')
    hdulist.close()
    return data
|
# Copyright 2021 BlueCat Networks (USA) Inc. and its affiliates
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# By: BlueCat Networks
# Date: 2019-03-14
# Gateway Version: 20.12.1
# Description: Bulk Register MAC Address Migration
from bluecat.api_exception import PortalException
def get_mac_address(configuration, address):
mac_addr = None
try:
mac_addr = configuration.get_mac_address(address)
except PortalException:
pass
return mac_addr
def get_mac_pool(configuration, mac_pool_name):
mac_pool = None
try:
mac_pool = configuration.get_child_by_name(mac_pool_name, configuration.MACPool)
    except PortalException:
print('MAC Pool %s is not in configuration(%s).' % (mac_pool_name, configuration.get_name()))
return mac_pool
def normalize_date_format(date_str):
return date_str.replace('/', '-')
def register_mac_address(configuration, address, asset_code, mac_pool, comments):
mac_address = get_mac_address(configuration, address)
mac_pool_entity = get_mac_pool(configuration, mac_pool)
if mac_address is not None:
print('MAC Address %s is in configuration(%s)' % (address, configuration.get_name()))
if asset_code != '':
mac_address.set_name(asset_code)
mac_address.set_property('Comments', comments)
mac_address.update()
if mac_pool != '' and mac_pool_entity is not None:
mac_address.set_mac_pool(mac_pool_entity)
else:
print('MAC Address %s is NOT in configuration(%s)' % (address, configuration.get_name()))
properties = 'Comments=' + comments
mac_address = configuration.add_mac_address(address, asset_code, mac_pool_entity, properties)
|
Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90nm CMOS process

Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and better sorting performance usually comes with a higher sampling rate (SR). However, for long-duration experiments on free-moving subjects, miniaturized and wireless neural recording ICs are the current trend, and a compromise on sorting accuracy is usually made by adopting a lower SR to reduce power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if the interpolation is appropriately performed during spike sorting, a system operating at an SR of 12.5 k samples per second (sps) can outperform one without interpolation at 25 ksps in both accuracy and power. |
import IPython.display


def _display_html(html_str: str) -> None:
    """Render a raw HTML string inline in an IPython/Jupyter frontend."""
    IPython.display.display(IPython.display.HTML(html_str)) |
Prevalence of Burnout Among Physicians: A Systematic Review Importance Burnout is a self-reported job-related syndrome increasingly recognized as a critical factor affecting physicians and their patients. An accurate estimate of burnout prevalence among physicians would have important health policy implications, but the overall prevalence is unknown. Objective To characterize the methods used to assess burnout and provide an estimate of the prevalence of physician burnout. Data Sources and Study Selection Systematic search of EMBASE, ERIC, MEDLINE/PubMed, psycARTICLES, and psycINFO for studies on the prevalence of burnout in practicing physicians (ie, excluding physicians in training) published before June 1, 2018. Data Extraction and Synthesis Burnout prevalence and study characteristics were extracted independently by 3 investigators. Although meta-analytic pooling was planned, variation in study designs and burnout ascertainment methods, as well as statistical heterogeneity, made quantitative pooling inappropriate. Therefore, studies were summarized descriptively and assessed qualitatively. Main Outcomes and Measures Point or period prevalence of burnout assessed by questionnaire. Results Burnout prevalence data were extracted from 182 studies involving 109 628 individuals in 45 countries published between 1991 and 2018. In all, 85.7% (156/182) of studies used a version of the Maslach Burnout Inventory (MBI) to assess burnout. Studies variably reported prevalence estimates of overall burnout or burnout subcomponents: 67.0% (122/182) on overall burnout, 72.0% (131/182) on emotional exhaustion, 68.1% (124/182) on depersonalization, and 63.2% (115/182) on low personal accomplishment. Studies used at least 142 unique definitions for meeting overall burnout or burnout subscale criteria, indicating substantial disagreement in the literature on what constituted burnout. Studies variably defined burnout based on predefined cutoff scores or sample quantiles and used markedly different cutoff definitions. Among studies using instruments based on the MBI, there were at least 47 distinct definitions of overall burnout prevalence and 29, 26, and 26 definitions of emotional exhaustion, depersonalization, and low personal accomplishment prevalence, respectively. Overall burnout prevalence ranged from 0% to 80.5%. Emotional exhaustion, depersonalization, and low personal accomplishment prevalence ranged from 0% to 86.2%, 0% to 89.9%, and 0% to 87.1%, respectively. Because of inconsistencies in definitions of and assessment methods for burnout across studies, associations between burnout and sex, age, geography, time, specialty, and depressive symptoms could not be reliably determined. Conclusions and Relevance In this systematic review, there was substantial variability in prevalence estimates of burnout among practicing physicians and marked variation in burnout definitions, assessment methods, and study quality. These findings preclude definitive conclusions about the prevalence of burnout and highlight the importance of developing a consensus definition of burnout and of standardizing measurement tools to assess the effects of chronic occupational stress on physicians. |
/*********************************************************************
** Description: The ending function handles printing the end-of-game
** results. There are three possible endings, all determined by the
** settings of three booleans: end, flashlight, and win. The function
** requires no parameters and returns void.
*********************************************************************/
void Game::ending() {
if (!win && !flashlight) {
std::cout << " ┌──────────────────────────────────────────────────────────────────────────────┐" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | Your flashlight is out of batteries... |" << std::endl;
std::cout << " | Something rushes past you in the darkness. |" << std::endl;
std::cout << " | It wasn't human. You'd better run. |" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | Game Over |" << std::endl;
std::cout << " └──────────────────────────────────────────────────────────────────────────────┘" << std::endl;
} else if (!win && flashlight) {
std::cout << " ┌──────────────────────────────────────────────────────────────────────────────┐" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | You died horrifically! |" << std::endl;
std::cout << " | No. Really. It was awful. |" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | Game Over |" << std::endl;
std::cout << " └──────────────────────────────────────────────────────────────────────────────┘" << std::endl;
} else {
std::cout << " ┌──────────────────────────────────────────────────────────────────────────────┐" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | The door to the ship opens and a strange creature gestures at you. |" << std::endl;
std::cout << " | You've made first contact with a friendly alien race. |" << std::endl;
std::cout << " | And oh boy, have you got questions. |" << std::endl;
std::cout << " | |" << std::endl;
std::cout << " | Game Over |" << std::endl;
std::cout << " └──────────────────────────────────────────────────────────────────────────────┘" << std::endl;
}
} |
package maps
// Filter returns a function that keeps only the map entries that satisfy the predicate fn.
func Filter[K comparable, A any](fn func(k K, v A) bool) func(a map[K]A) map[K]A {
	return func(a map[K]A) (b map[K]A) {
b = make(map[K]A)
for i,j := range a {
if fn(i, j) {
b[i] = j
}
}
return
}
}
|
// GetRemoteAddressFromRequest - returns remote address based on request headers. Respects X-Forwarded-For.
// Assumes the "net" and "net/http" packages are imported and that
// IsLocalNetworkString is defined elsewhere in the package.
func GetRemoteAddressFromRequest(r *http.Request) (addr string, err error) {
var (
remoteAddr string
)
remoteAddr, _, err = net.SplitHostPort(r.RemoteAddr)
if err != nil {
return "", err
}
addr = r.Header.Get("X-Real-IP")
if len(addr) == 0 {
addr = r.Header.Get("X-Forwarded-For")
}
if len(addr) == 0 {
addr = remoteAddr
} else if !IsLocalNetworkString(remoteAddr) {
addr = remoteAddr
}
return addr, nil
} |
Parental ecological history can differentially modulate parental age effects on offspring physiological traits in Drosophila

Abstract

Parents adjust their reproductive investment over their lifespan based on their condition, age, and social environment, creating the potential for inter-generational effects to differentially affect offspring physiology. To date, however, little is known about how social environments experienced by parents throughout development and adulthood influence the effect of parental age on the expression of life-history traits in the offspring. Here, I collected data on Drosophila melanogaster offspring traits (i.e., body weight, water content, and lipid reserves) from populations where either mothers, fathers, both, or neither parents experienced different social environments during development (larval crowding) and adulthood. Parental treatment modulated parental age effects on offspring lipid reserves but did not influence parental age effects on offspring water content. Importantly, parents in social environments where all individuals were raised at uncrowded larval densities produced daughters and sons lighter than those from the parental treatments which produced the heaviest offspring. The peak in offspring body weight was delayed relative to the peak in parental reproductive success, and more strongly so for daughters from parental treatments where some or all males in the parental social environments were raised at crowded larval densities (irrespective of their social context), suggesting a potential father-to-daughter effect. Overall, the findings of this study reveal that parental ecological history (here, developmental and adult social environments) can modulate the effects of parental age at reproduction on the expression of offspring traits.

Inter-generational effects are processes through which parents pass on non-genetic information about their environment to their offspring, with long-lasting fitness effects on both generations (Mousseau and Dingle 1991; Mousseau and Fox 1998). The exchange of information from parents to offspring can increase or decrease offspring (and consequently, parents') fitness if the offspring environment matches (or mismatches) the parental environment, or if non-genetic effects transferred by the parents improve (or hamper) the ability of the offspring to cope with its environment (Monaghan 2008; Engqvist and Reinhold 2016; Champagne 2020). This can modulate population dynamics and influence eco-evolutionary processes acting in local populations (Qvarnström and Price 2001). Either way, inter-generational effects modulate the expression of offspring traits based on parental signals (Engqvist and Reinhold 2016). Inter-generational effects are widespread in nature and have been described in plants, invertebrates (e.g., Wilson and Graham 2015; Morimoto et al. 2017a, 2017b), and vertebrates such as fish (Stratmann and Taborsky 2014), lizards, birds, and mammals including humans (e.g., Hasselquist and Nilsson, Dantzer et al.; see also Uller et al. for a meta-analysis). Parental age is known to affect offspring lifespan and, more generally, performance and fitness, whereby older parents produce offspring with overall shorter lifespan and overall lower quality or fitness, which is broadly known as the "Lansing effect" (Lansing 1947; but see also Comfort 1953).
To date, there has been a range of complex results reported in the literature, showing that overall (grand-) mothers' and fathers' age at reproduction modulate (grand-) offspring fitness across 1 or multiple generations. For instance, in insects, older mothers produce offspring with shorter lifespan (Lansing effect sensu stricto) but the effects of mothers' age on offspring fitness traits such as developmental time, mass at maturity, and fecundity are less consistent, with some taxa showing either an increase or decrease in trait expression, or no maternal effects (see e.g., Table 3 in, for summary). Even within species, inter-generational and trans-generational effects are known to differ depending on ecological factors. In the oleander aphid Aphis nerii, the maternal age at which offspring mass at maturity was maximized depended on host plant species, with mothers fed Asclepias syriaca producing heavier offspring on Day 6 in comparison to Day 11 when mothers were fed Asclepias viridis (). Inter-generational and trans-generational effects interact with biotic and abiotic ecological factors to shape offspring life-history (;;). For instance, in the butterfly Pararge aegeria, larval mass declined with maternal age but this decline was less strong when females were forced to fly (as an experimental manipulation to mimic dispersion) (). Likewise, in the butterfly Pieris brassicae, offspring from fathers that were forced to fly and mated with old mothers showed longer developmental times than control fathers with old mothers, this effect increased with paternal age, but paternal effects (both in terms of flight and age) on offspring development were absent when mothers were young (). Interestingly, the same study found that paternal effects were more accentuated in the offspring at the larval stage while for mothers, the effects were exacerbated in the adult stage of the offspring (), potentially suggesting a de-coupling of parental effects across life-stages in holometabolous insects. In the neriid fly Telostylinus angusticollis, grand-offspring lifespan decreased with grandparents' reproductive age in a similar fashion for both grandmother and grandfather lines, and this effect was independent of dietary effects in an intervening generation (). Overall, these studies highlight the complexity of inter-and trans-generational effects but also those ecological factors experienced by the parental generation can either mitigate or accentuate these effects in future generations. Ecological factors experienced by the parents can influence inter-and trans-generational effects of age by directly or indirectly altering parental reproductive investment. Evidence suggesting condition-dependent parental reproductive investment and/or inter-generational effects continue to grow. In Drosophila melanogaster parents can modulate their reproductive investment, timing, and overall reproductive success (i.e., offspring number) in response to the presence and number of (male) rivals (), male and female age (;aMorimoto et al., 2017), male and female developmental conditions (i.e., diet, conspecific density) (;;Morimoto et al., 2017aMorimoto et al., 2017b, as well as partners' size, age, and mating status (Pitnick 1991;;). Furthermore, inter-and trans-generational effects in D. melanogaster have been described in terms of ancestral diet composition and quality (;;Emborski and Mikheyev 2019) as well as conspecific larval density (;aMorimoto et al., 2017b. 
Inter-and trans-generational effects on offspring life-history traits have also been described in other insect groups, including grasshoppers (Franzke and Reinhold 2013), wasps (), flies other than D. melanogaster (;), butterflies (e.g., ; see also review by Woestmann and Saastamoinen 2016) and beetles (Lock 2012;), attesting to the ubiquity of inter-and trans-generational effects in insects (). To date, however, we still do not know whether parental developmental and adult social environments-both of which are known to modulate evolutionary forces such as sexual selection ()-can affect the expression of fitness-related traits in the offspring, nor whether these effects are constant or differentially affected by parental age at reproduction. In this study, I collected new data on offspring traits from previously published work, where I had assembled artificial populations of D. melanogaster at equal sex ratios in which fathers, mothers, none, or both parents were reared in high and low larval density and experienced varying social environments ("parental treatments") (Morimoto 2017a; Figure 1a). This newly collected offspring data allowed me to gain insight into the following question: Do parental developmental and adult social environments modulate the effect of parental age on offspring traits? More specifically, the data allowed for the study as to whether the peak in parental reproductive success, which was originally measured in Morimoto et al. (2017aMorimoto et al. (, 2017b, coincides with the time where offspring trait (related to fitness) expression was also maximum. This allowed me to test whether parental offspring number coincides with offspring quality (a parents' reproductive "golden age") where both the number and size of offspring are maximized or there is a trade-off between offspring number and size above and beyond parental developmental and adult social environments (). Materials and Methods The original purpose of this experimental design was to address how developmental and social effects can influence population traits (a(Morimoto et al., 2017b. However, offspring of these experiments were stored and could be retrieved for analyses of body composition, which allowed me to gain insights into how parental developmental and adult social environments modulate offspring trait expression. Below, I provide a brief description of the experimental design, for which the details can be found at length in a previous publication (Morimoto 2017a(Morimoto, 2017b. Fly stock and parental developmental and adult social environments manipulations Wild-type inbred OregonR stock of D. melanogaster was maintained in large populations (> 5,000 individuals) in cages with overlapping generations for >10 generations. All fly stocks were maintained and all experiments conducted at 25°C on a 14:10 light:dark cycle in a controlled humidified room (humidity = 68%) and fed with standard sugar-yeast-maize-molasses medium with excess live yeast granules. I manipulated parental developmental environment by means of relative changes in parental body size based on larval crowding: the crowded individuals (small body size adults) were from vials with ∼ 50 larvae/mL of food (∼ 200 larvae/34 mL vial containing ∼ 4 mL fly food) whereas the uncrowded individuals (large body size) were from vials with ∼4 larvae/mL of food (∼ 40 larvae/34 mL vial containing ∼10 mL fly food). 
Parental groups with mixed social compositions were assembled with 4 males (fathers) and 4 females (mothers) (i.e., 8 individuals per group), which were randomly selected from a pool of >1,000 individuals of each sex and mixed into 5 parental treatments (N = 17 replicates per parental treatment) as follows (Figure 1):

1. Control small. Adult social group where both mothers (n = 4) and fathers (n = 4) had small body size (i.e., from a crowded developmental environment);
2. Control large. Groups where both mothers (n = 4) and fathers (n = 4) had large body size (i.e., from an uncrowded developmental environment);
3. Female-only. Groups where all fathers were large (n = 4). Half of the mothers were large (n = 2) and the other half small (n = 2).
4. Male-only. Groups where all mothers were large (n = 4). Half of the fathers were large (n = 2) and the other half small (n = 2).
5. Both sexes. Groups where half of the individuals were large and the other half small, for both sexes.

Note that this is not a full factorial design, and therefore the results have some limitations in terms of identifying the mechanisms underpinning the phenomena observed below. Nevertheless, both full and non-full factorial designs provide insights into the presence and, to some extent, the magnitude of phenomena. This limitation is acknowledged in the "Discussion" section but does not invalidate the effects found in the study. These parental treatments were chosen for several reasons: there is information in the literature about population-level responses in terms of harassment, fitness, and survival in these groups (see Morimoto et al. 2017a, 2017b, the original experimental design for the data collected here); I have previously shown that the strength of sexual selection is modulated by group composition in similar group treatments; and there has been a substantial number of studies in the literature investigating how crowding and/or social environment influence life-history and reproductive traits in D. melanogaster (e.g., Amitin and Pitnick 2007; see also references in Morimoto and Pietras), which are useful for interpreting the results. Parental groups were allowed to interact freely. Groups were transferred to fresh vials with 6 mL of food on Days 3, 6, 9, 13, 16, 19, 23, 27, 35, 40, 45, and 50 after the onset of the experiment, and the old vials were kept for 13-15 days until adult offspring had fully emerged. Offspring were 3-5 days old. Females stopped producing offspring at approximately Day 35; see Morimoto et al. (2017a, 2017b). Offspring had food ad libitum and larval densities were always <20 larvae/g of diet, which can be considered high density given the natural history of D. melanogaster (Morimoto and Pietras 2020). I nevertheless included a proxy of offspring crowding, that is, total parental reproductive success per time point, as a fixed effect in the analyses (see details below). For every time point, we scored the number of surviving females and males in all populations. This procedure was repeated until all mothers of the groups died, a point at which the group was considered extinct. I then assessed parental group reproductive success by counting the total number of adult offspring in each parental treatment per time point (Supplementary Figure S1).
Body weight and composition

I measured offspring body weight, water content, and lipid composition under the assumption that offspring with high body weight, water content, and lipid reserves translate into higher fitness (Fairbanks and Burch 1970; Honěk 1993). In flies, physiological traits such as body weight, water content, and lipid reserves are correlated with desiccation and starvation resistance, as well as male and female reproductive success (Fairbanks and Burch 1970; Honěk 1993; van Herrewege and David 1997; Gibbs and Markow 2001; Nestel and Nemny-Lavy 2008, among others), and thus can be useful proxies to assess how parental inter-generational effects can affect offspring fitness. Adult offspring were separated into 2 cohorts. In the first cohort, 6-9 randomly selected sons and 6-9 randomly selected daughters per replicate parental group (i.e., N = 17) per parental treatment per time point were measured for wet body weight using a Sartorius® ME5 scale (0.0001 g precision) (N total = 1,458). In the second cohort from the same treatments, 5 sons and 5 daughters per treatment per time point until Day 19 (for logistic reasons) were randomly selected from a subset of 6 replicate populations per treatment (also randomly selected), dried in an oven for 48 h at 60 °C to eliminate water content and weighed as described above (dry weight). Dried flies were individually allocated to 10 mL glass tubes, where we performed lipid extraction with chloroform (Sigma Aldrich®, St. Louis, MO, USA, Cat. no. 288306) as described in Morimoto et al. Flies were again dried for 48 h at 60 °C and weighed as described. The percentage of lipid for individual flies was estimated as the difference between the dry weight and the weight after lipid extraction, divided by the dry weight and multiplied by 100 (N total = 270). Water content was estimated as the difference between the average wet and dry offspring body weights per vial per parental treatment per day (N total = 78).

Statistical analysis

All statistical analyses were performed in R software version 3.6.2 (R Development Core Team 2010). I used linear mixed models from the 'lme4 v.1.1-23' and 'lmerTest v.3.1-2' packages for all the analyses. Population vial was fitted as a random effect in all models, whereas the 3-way interactions between parental treatment, offspring sex, and the linear and quadratic (non-linear) effects of parental age at reproduction were included as fixed effects; P-values were obtained from F-statistics using the inbuilt 'ANOVA' function (type III). I also included parental total reproductive success per vial per time interval, which was extracted from previously published work (Morimoto et al. 2017a, 2017b), as a fixed effect in all models. This metric was used as a proxy of offspring "crowding", which allowed me to control for any potential confounding effects of offspring intraspecific competition on offspring traits. This approach assumed a somewhat linear relationship between crowding and trait expression, which, for the purpose of a controlling variable, is not unreasonable (see, e.g., Horváth and Kalinka 2016, where linear terms could describe fairly well the non-linear effects that occur at densities > 20 eggs per mL of diet). Moreover, offspring larval densities were < 20 larvae/g of diet and thus unlikely to have reached sufficiently high densities to trigger major non-linear effects (Morimoto and Pietras 2020).
To obtain the estimated peak (in days) of parental reproductive success and offspring weight along parental age, I calculated the point at which the first derivative (slope) of the quadratic models fitted to the data was equal to zero, for each sex separately. Confidence intervals (CIs) were calculated using bootstrapping with 1,000 iterations in the 'boot v.1.3-25' package (Canty 2002). Because bootstrapping assumes a normal distribution of errors, in some cases the lower CI limits were negative. In these instances, negative CI values were rounded to zero days. All plots were made using the 'ggplot2 v.3.3-1' package (Wickham 2016).

Parental developmental and adult social environments differentially affect offspring body weight

Offspring crowding had a significant negative effect on offspring body weight (F 1,1045.5 = 4.052, P = 0.044) but not on offspring water content (F 1,47 = 0.787, P = 0.380) or lipid reserves (F 1,232 = 1.559, P = 0.213). After controlling for these effects, daughters were heavier than sons (Sex: F 1,1402.1 = 51.422, P < 0.001, Figure 2A), but not necessarily with higher water content (Sex: F 1,47 = 0.510, P = 0.479) or lipid reserves (Sex: F 1,234.4 = 0.069, P = 0.793). The linear and non-linear relationships between offspring body weight and parental age at reproduction were differentially affected by parental treatment (Linear * Treatment: F 1,1390 = 18.624, P < 0.001; Non-linear * Treatment: F 1,1380.7 = 13.382, P < 0.001, Supplementary Table S1), whereby there was a steeper linear and a more accentuated curvilinear relationship between parental age and body weight in the Control Small, Control Large and Female-only treatments relative to the Male-only and Both sexes parental treatments (Figure 2B). Linear (but not non-linear) effects of parental age influenced offspring weight (Linear * Sex: F 1,1402 = 11.251, P < 0.001), whereby the effect of parental age on the linear increase in offspring weight was more pronounced in daughters than sons (Figure 2B). In fact, the Control Large parental treatment (where mothers and fathers were large) produced daughters (mean ± SD: 0.903 ± 0.254) and sons (mean ± SD: 0.594 ± 0.160) that were ca. 12% and 10% lighter, respectively, compared with the parental treatments that produced the heaviest offspring of each sex (namely, Control Small for daughters, 1.015 ± 0.283, and Both sexes for sons, mean ± SD: 0.657 ± 0.128; see also Supplementary Text S1). I also found a 3-way interaction between parental age at reproduction, parental treatment, and offspring sex on offspring weight. This emerged because the differential effect of parental treatment on the relationship between parental age at reproduction and offspring weight was more pronounced in daughters than in sons (Linear * Treatment * Sex: F 1,1402 = 3.266, P = 0.011; Non-linear * Treatment * Sex: F 1,1402 = 2.409, P = 0.048) (Figure 2B).

[Figure 2 caption] Offspring body weight (in grams) in relation to parental age at reproduction (in days) and parental treatment. Contour lines correspond to the pattern of parental reproductive success, whereby red contour regions represent peak parental reproductive success (the second y-axis with parental reproductive success was omitted for clarity, but raw data are presented in Supplementary Figure S1). Trend lines plotted using the "lm" function in R. Circles: daughters; diamonds: sons.
Whereas there were neither effects of sex, nor the interactions between sex, parental treatment, and the linear and non-linear effects of parental age on offspring water content and lipid reserves (Supplementary Table S1), there were main linear and non-linear effects of parental age at reproduction on offspring water content (Linear: F 1,47 = 11.245, P = 0.002; Non-linear: F 1,47 = 9.804, P = 0.003) and lipid reserve (Linear: F 1,235 = 6.935, P = 0.009; Non-linear: F 1,235.1 = 9.228, P = 0.003), suggesting that the linear and non-linear effects of parental age on offspring physiological traits were similar for sons and daughters of all parental treatments (i.e., neither statistically significant 2-nor 3-way interactions). Peak in parental reproductive success does not necessarily coincide with peak offspring body weight The overall and sex-specific peak estimates with their CI are shown in Table 1 (reproductive success data reproduced from aMorimoto et al., 2017b. In general, offspring body weight reached peak expression later than parental reproductive success for all treatments. The average magnitude of the delay in peak offspring weight relative to parental peak in reproductive success was more evident for daughters in the Control Small, Male-only, and Both sexes treatment (although with relatively large CIs for the latter 2 treatments). This suggests that when at least some males in social conditions have experienced poor developmental environments, there is a delay in daughters' peak in body weight as the reproductive age of the parent increases (Table 1, Figure 2B, and Supplementary Figure S2). Discussion Here, I collected new data from a previous experiment which allowed me to gain insights into the following question: does parental developmental and adult social environment modulate the effect of parental age on offspring traits? I found that all parental treatments resulted in delays a daughters' peak in body weight relative to the parental peak in reproductive success, but that this delay is particularly more accentuated for treatments where the social contexts of fathers contained all or some individuals that experienced a crowded (poor) developmental condition (i.e., Control Small, Maleonly, and Both sexes) (Table 1). Paternal effects (either in daughters' or sons', or both) have been previously described across taxa (including humans) in the literature (;Whitelaw 2006;;Hughes 2014). Inter-generational effects on offspring, especially daughters' body weight such as those found in this study (Table 1 and Figure 2a and b) can generate long-term fitness consequences to the parents (via indirect fitness) and the offspring (via direct and indirect fitness). This is because in Drosophila, as in the majority of insects, body weight and size are positively correlated with fitness (Honk 1993; ). Thus, over the course of the offspring's reproductive lifetime, the small differences in body weight originating from inter-generational effects found here have the potential to accumulate and result in large net differences in direct mating and reproductive success (i.e., fitness) of the offspring (and indirect fitness to the parents) (e.g., Partridge and Farquhar 1983;;Honk 1993;Chapman and Partridge 1996;;;;). 
Further studies should test whether small body size differences carried over from 1 generation to the next are indeed translated into differences in fitness, or whether body size differences are counterbalanced by other behavioral processes (e.g., increased male harm toward larger and more attractive females; ). Notes: For consistency, I fitted a quadratic (non-linear) term for all models even though in some cases the relationship between parental age and reproductive success or offspring traits was linear (highlighted with the symbol ∞, see Supplementary Figure S1). CIs were calculated using bootstrapping (1,000 replicates). Delay difference was calculated by subtracting the estimated peak of offspring weight from the estimated peak in parental reproductive success. The data show that offspring trait expression varied over the parental reproductive lifespan ( Figure 2B), which supports the idea that some offspring may have higher "fitness value" to the parents than others (Smith and Fretwell 1974;Haig 1990) see also Wolf and Wade 2001). This provides supporting evidence for the broader concept of the Lansing hypothesis as defined in Monaghan et al. which states that parents age modulates offspring quality and fitness. I found that parental reproductive age affected all of the offspring physiological traits including body weight ( Figure 2), water content, and lipid reserves (Supplemental Figure S2). Higher lipid reserves and water content are known to increase survival under stress in flies (Fairbanks and Burch 1970;;;;). Thus, in stressful environments, offspring with higher expression of these traits have higher direct fitness due to better odds of surviving and reproducing and also have higher indirect fitness value to their parents (see above). The fact that offspring trait expression varied over the parental reproductive lifespan in this study suggests that there may exist a trade-off between parental investment in offspring traits and the expression of other (parental or offspring) traits, otherwise all offspring should for example be as heavy as possible under the correlation of body size with fitness (Honek 1993;). The molecular mechanisms underpinning the temporal variation in inter-generational effects remain to be explored, but it is in theory possible that maternal effects via mRNAs are transferred to the egg/embryo at different quantities and/or translated at different rates after fertilization. Evidence in mice has revealed that temporal patterns play a key role in maternal mRNA effects () and in Drosophila, the level of histone gene expression is known to be at least partly modulated by the quantity of maternal mRNA (Anderson and Lengyel 1980) (see also broader recent reviews in the topic by ). This highlights the potential temporal dynamism underpinning inter-generational effects which requires further investigations. The decline in parental reproduction with age ("reproductive senescence") is a widespread phenomenon in nature (Ivimey-Cook and Moorad 2020), although species display different patterns of reproductive senescence throughout lifespan (). A recent theoretical model suggests that reproductive senescence in mothers' fecundity can be under different selective pressures than maternal effects, leading to a potential dissociation of senescence effects in these traits (Moorad and Nussey 2016). 
From my understanding, 1 implication of this model pertinent to the findings presented here is that the decline in offspring production across replicate populations should not necessarily coincide with (senescence in) inter-generational effects on offspring traits. In this study, the data do not allow for direct inferences on senescence of inter-generational effects (unless assuming that offspring traits are entirely modulated by inter-generational effects) but it nevertheless shows that parental reproductive senescence effects are to some extent dissociated from the expression of offspring traits. These results appear to indirectly support the predictions of the model, with the caveat that in this experimental design I could not differentiate maternal (for which the model was explicitly developed) or paternal effects, or the interaction between both. I found that parental developmental and social conditions modulate the effects of parental age of reproduction on offspring traits (Figure 2 and Supplementary Figure S2). Thus, independent of the underpinning molecular mechanisms, the data presented here provide suggestive evidence of a putative condition-dependent Lansing effect on offspring fitness-related traits, whereby parental condition (e.g., amount of resource acquired during development) modulates the effects of parental age on offspring trait expression. A previous study in another fly showed that diet effects in an intervening generation in a multi-generational had no contribution to the grand-parental age effects in the grand-offspring (). However, multi-generational studies in Drosophila showed that sugar and fat dietary manipulations-as well as diet quality-in ancestral parental diet modulated sex-specific physiological and reproductive traits in the offspring and grand-offspring (;;Emborski and Mikheyev 2019). In this study, we manipulated crowding experienced by the parental generation, and crowding is known to reduce nutrient availability () but also generates changes in diet and individual microbiome () as well as nutrient composition of the diet ). Therefore, it is possible that crowding experienced during the parental generation triggers physiological responses (only partly related to diet) which in turn, modulate the effects of parental age at reproduction on offspring trait expression. More studies are needed, both in terms of molecular mechanisms and inter-generational effects, to uncover how crowding affects individual physiology in the present and future generations. This study investigated how phenotypic variability in adult parental populations, emerging from different larval crowding regimes, modulate parental effects on offspring fitness traits. Natural populations of Drosophila species display substantial variation in adult body size (;;Morimoto and Pietras 2020), which likely modulates the opportunities for inter-generational effects above and beyond variations in environmental conditions. Thus, our findings provide insights into parental effects on offspring traits in an ecologically relevant design. Phenotypic variability is widespread in nature and underpins physiological effects and social interactions that determine the evolutionary trajectory of populations (;;;). 
Moreover, phenotypic variability in the parental population can be transferred to the offspring (Bonduriansky and Crean 2018), thereby influencing the adaptability of the offspring to environmental conditions whereas also resulting in joint correlation between offspring and parental traits (Wolf and Brodie 1998;). Thus, the findings presented here can guide future studies on the inter-generational effects of parental developmental and adult social environments in other (non-model) species. It is worth mentioning that the findings presented here need to be interpreted with caution, because the study has the limitation of not being full factorial and using indirect proxies of offspring fitness. Nevertheless, this study corroborates previous studies which highlight the complexity of generational effects in insects (;), and contributes to the field by adding a new perspective as to how parental developmental and adult social environments modulate parental age effects on offspring trait expression. It is also worth mentioning that alternative explanations and criticisms for the findings have been proposed, amongst which the most pertinent is the inability to assign whether mothers of each offspring were large or small, which precludes me from knowing whether all females contributed to the offspring pool (e.g., only large females laid eggs in mixed size social treatments) and the lack of precise control on the larval density of the offspring, opening up the possibility that more fecund females had lighter offspring due to larval crowding (not inter-generational effects). Detailed responses to these points are given in Supplementary Text S1 in the supplementary information but in summary: it is extremely unlikely that 1 class of females (e.g., small) would not reproduce in the presence of the other class (e.g., large), given the biology of Drosophila females as well as the data observed here and in previous studies where large and small female reproduction was measured after exposure to rivals (Supplementary Text S1 and Supplementary Figure S3, ) and the predictions for offspring weight assuming that larval crowding and/or the trade-off offspring number-size was driving the effects are inconsistent with the observed data for offspring weight (Supplementary Text S1 and Supplementary Figure S4). Therefore, it is likely that the effects presented here, albeit limited in experimental design, constitute an important advance in our understanding of how parental ecological history can influence inter-generational effects. In conclusion, this study shows that parental ecological history-in this study, parental developmental and adult social environments-can differentially modulate the effects of parental age at reproduction on the expression of offspring traits. The data show that the peak in parental reproductive success does not necessarily coincide with the peak offspring trait, suggesting that offspring from the same parents produced at different times can contribute to parents' fitness differently. |
Project PACER
Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs)—or, as stated in a later proposal, fission bombs—inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However, it would also require a large, continuous supply of nuclear bombs, and contemporary economic studies demonstrated that these could not be produced at a price competitive with conventional energy sources.
The earliest references to the use of nuclear explosions for power generation date to a meeting called by Edward Teller in 1957. Among the many topics covered, the group considered power generation by exploding 1 Mt bombs in a 1,000-foot (300 m) diameter steam-filled cavity dug in granite. This led to the realization that the fissile material from the fission sections of the bombs, the "primaries", would accumulate in the chamber. Even at this early stage, physicist John Nuckolls became interested in designs of very small bombs, and ones with no fission primary at all. This work would later lead to his development of the inertial fusion energy concept.
The initial PACER proposals were studied under the larger Project Plowshare effort in the United States, which examined the use of nuclear explosions in place of chemical ones for construction. Examples included the possibility of using large nuclear devices to create an artificial harbour for mooring ships in the north, or as a sort of nuclear fracking to improve natural gas yields. Another proposal would create an alternative to the Panama Canal in a single sequence of detonations, crossing a Central American nation. One of these tests, 1961's Project Gnome, also considered the generation of steam for possible extraction as a power source. LANL proposed PACER as an adjunct to these studies.
Early examples considered 1,000-foot (300 m) diameter water-filled caverns created in salt domes as much as 5,000 feet (1,500 m) deep. A series of 50-kiloton bombs would be dropped into the cavern and exploded to heat the water and create steam. The steam would then power a secondary cooling loop for power extraction using a steam turbine. Dropping about two bombs a day would cause the system to reach thermal equilibrium, allowing the continual extraction of about 2 GW of electrical power. There was also some consideration given to adding thorium or other material to the bombs to breed fuel for conventional fission reactors.
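As a rough sanity check on the quoted figures (my arithmetic, not from the original proposal; the 40% steam-cycle efficiency is an assumed round number), the thermal power implied by two 50-kiloton detonations per day follows from the standard TNT-equivalent conversion:

```python
# Back-of-the-envelope check of the "two 50 kt bombs/day -> ~2 GW" claim.
KT_JOULES = 4.184e12                 # joules per kiloton of TNT equivalent
yield_kt = 50                        # per device
devices_per_day = 2
seconds_per_day = 86_400

thermal_watts = yield_kt * KT_JOULES * devices_per_day / seconds_per_day
electrical_watts = 0.4 * thermal_watts   # assumed ~40% turbine efficiency

print(f"thermal:    {thermal_watts / 1e9:.1f} GW")    # ~4.8 GW
print(f"electrical: {electrical_watts / 1e9:.1f} GW") # ~1.9 GW, consistent with ~2 GW
```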
In a 1975 review of the various Plowshare efforts, the Gulf Universities Research Consortium (GURC) considered the economics of the PACER concept. They demonstrated that the cost of the nuclear explosives would be the equivalent of fuelling a conventional light-water reactor with uranium fuel at a price of $328 per pound. Prices for yellowcake at that point were $27 a pound, and were around $45 in 2012. GURC concluded that the likelihood of PACER being developed was very low, even if the formidable technical issues could be solved. The report also noted the problems with any program that generated large numbers of nuclear bombs, saying it was "bound to be controversial" and that it would "arouse considerable negative responses". In 1975 further funding for PACER research was cancelled.
Despite the cancellation of this early work, basic studies of the concept have continued. A more developed version considered the use of engineered vessels in place of the large open cavities. A typical design called for a 4 m thick steel alloy blast-chamber, 30 m (100 ft) in diameter and 100 m (300 ft) tall, to be embedded in a cavity dug into bedrock in Nevada. Hundreds of 15 m (45 ft) long bolts were to be driven into the surrounding rock to support the cavity. The space between the blast-chamber and the rock cavity walls was to be filled with concrete; the bolts were then to be put under enormous tension to pre-stress the rock, concrete, and blast-chamber. The blast-chamber was then to be partially filled with molten fluoride salts to a depth of 30 m (100 ft); a "waterfall" would be initiated by pumping the salt to the top of the chamber and letting it fall to the bottom. While surrounded by this falling coolant, a 1-kiloton fission bomb would be detonated; this would be repeated every 45 minutes. The falling salt would also absorb neutrons, protecting the walls of the cavity from damage.
package de.nullnull.product;
/** Strategy for resolving the storage paths of product packages in a repository. */
public interface ProductRepositoryLayout {
    /** @return the unique identifier of this layout */
    String getId();

    /** @return the repository-relative path of the given package */
    String pathOf(ProductPackage productPackage);

    /** @return the repository-relative path of the given package's metadata */
    String pathOfMetadata(ProductPackage productPackage);
}
|
#include <QCoreApplication>
#include <QDebug>
#include <QRegExp>
#include <QStringList>
// QRegExp::exactMatch(): the pattern must match the entire string.
void test1()
{
QRegExp re("#include <[^>]+>");
QStringList strings = QStringList() << "#include <iostream>"
<< " #include <iostream> "
<< "#include \"iostream\""
<< "#define <iostream>";
for (const auto &str : strings)
{
qDebug() << (re.exactMatch(str) ? "matched" : "mismatched") << ":" << str;
}
qDebug() << "\n=================================\n";
}
// Iterative matching with QRegExp::indexIn() and matchedLength().
void test2()
{
static const char *const TEXT =
"#include <QRegExp>\n"
"#include <QStringList>\n"
"#include <QDebug>\n"
"\n"
"int main() {\n"
"QRegExp re( \"#include <([^>]+)>\" );\n"
"int lastPos = 0;\n"
"while( ( lastPos = re.indexIn( TEXT, lastPos ) ) != -1 ) {\n"
"lastPos += re.matchedLength();\n"
"qDebug() << re.cap( 0 );\n"
"}\n"
"return 0;\n"
"}";
QRegExp re("#include <([^>]+)>");
int lastPos = 0;
while ((lastPos = re.indexIn(TEXT, lastPos)) != -1)
{
lastPos += re.matchedLength();
qDebug() << re.cap(0) << ":" << re.cap(1);
}
qDebug() << "\r\n=================================\r\n";
}
// QString::replace() with a QRegExp pattern and a \1 capture backreference.
void test3()
{
static const char *const TEXT =
"#include <QRegExp>\n"
"#include <QStringList>\n"
"#include <QDebug>\n"
"\n"
"int main() {\n"
"qDebug() << QString( TEXT ).replace( QRegExp( \"#include <([^>]+)>\" ), \"#include \"\\1\"\" );\n"
"return 0;\n"
"}";
qDebug() << QString(TEXT).replace(QRegExp("#include <([^>]+)>"), R"(#include "\1")");
}
auto main(int argc, char *argv[]) -> int
{
QCoreApplication a(argc, argv);
test1();
test2();
test3();
return QCoreApplication::exec();
}
|
Siamese-Based BiLSTM Network for Scratch Source Code Similarity Measuring As a popular block-based programming language, Scratch attracts considerable attention in society and educational fields. Code similarity measuring is a major research direction in Scratch, playing a significant role in clone detection and project recommendation, yet few studies have focused on it. In this paper, we propose a Siamese-based bidirectional Long Short-Term Memory (BiLSTM) network to solve this problem. Specifically, a token-based code representation scheme is designed to abstract the blocks in Scratch. The obtained token stream is then fed to a word embedding model for training. Next, we devise an improved Siamese-based BiLSTM model to measure source code similarity. Finally, in order to evaluate the performance of the proposed model, we construct a dataset from the Scratch official website. The results show that it achieves more than 90% accuracy and recall. In addition, the proposed model is applied to a code clustering task, reaching 95% accuracy.
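The abstract gives no implementation details beyond the architecture outline, so the following is a minimal PyTorch sketch of the core idea: a single shared BiLSTM encoder applied to both token streams, with the encodings compared by a similarity head. All names, dimensions, and the cosine-similarity head are my assumptions, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBiLSTM(nn.Module):
    """Weight-sharing BiLSTM encoder pair for code-similarity scoring (a sketch)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)

    def encode(self, token_ids):               # token_ids: (batch, seq_len) int64
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.encoder(embedded)   # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        return torch.cat([h_n[-2], h_n[-1]], dim=1)

    def forward(self, tokens_a, tokens_b):
        # Both token streams pass through the same encoder (the Siamese property).
        enc_a, enc_b = self.encode(tokens_a), self.encode(tokens_b)
        return F.cosine_similarity(enc_a, enc_b)   # one similarity score per pair
```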
package fi.iki.jmtilli.javaxmlfrag;
/**
 * Whether a serialized XML document is a whole document or a fragment.
 * The stored value is used for the XML transformer's "omit-xml-declaration"
 * output property: a whole document keeps the declaration ("no"),
 * a fragment omits it ("yes").
 */
public enum XMLDocumentType {
  WHOLE("no"),
FRAGMENT("yes");
private final String omit_xml_declaration;
private XMLDocumentType(String omit_xml_declaration)
{
this.omit_xml_declaration = omit_xml_declaration;
}
public String getOmitXmlDeclaration()
{
return omit_xml_declaration;
}
}
|
Activity of the anticoccidial compound, lasalocid, against Toxoplasma gondii in cultured cells. The activity of the anticoccidial drug, lasalocid, was tested against Toxoplasma gondii in cell cultures. Multiplication of parasites was inhibited by 0.05 µg/ml of lasalocid added to the cultures prior to adding the parasite inoculum, with the parasite inoculum, or after the parasites had penetrated the culture cells. Penetration of culture cells was inhibited when 0.05 µg/ml lasalocid was added with the parasite inoculum. Incubation of extracellular parasites in 0.5 µg/ml lasalocid had no effect on penetration or multiplication. Ormetoprim, sulfadimethoxine, and a combination of the 2 were less effective than lasalocid. Monensin exhibited an inhibitory effect in all experiments.
President Barack Obama is so unpopular among the electorate that the White House is reportedly letting Democrats know there will be no ramifications if they run against Obama.
According to a CNN report, “understanding full well Obama’s unpopularity is a drag on some Democrats in tight congressional races, White House officials are signaling to party leaders and campaign managers alike there will be no consequences should they run away from the president in order to win.”
Democrats in battleground states are running away from Obama. For instance, Kentucky Democrat Alison Lundergan Grimes, a former Obama delegate, has refused to say on multiple occasions whether she voted for him.
Even Charlie Crist, who became a Democrat after his embrace of Obama contributed to his GOP Senate primary loss to Marco Rubio in 2010, is reportedly “now wary of inviting” Obama “to publicly appear with him out of concern that it would shift the focus of the race to national issues,” according to a Wall Street Journal report.
Before Vice President Joe Biden campaigned for Crist, the gubernatorial candidate, this week, Florida Gov. Rick Scott poked fun at the report, saying that he hoped “President Obama can make a trip to the Sunshine State soon to see the results of our pro-growth policies – even if he is not invited on the campaign trail with Charlie Crist.”
A recent Wall Street Journal/NBC News/Annenberg poll found that an Obama endorsement would actually “leave a more negative view of a congressional or Senate candidate, with 38 percent saying they would see a candidate less favorably, compared with just 28 percent who would have a more favorable view after a presidential endorsement.”
Obama hasn’t made any campaign appearances this election cycle, but he is expected to stump for Maryland gubernatorial candidate Anthony Brown and Illinois Gov. Pat Quinn this weekend. |
package com.dreamtech360.lfims.resources.api.activator;
import com.dreamtech360.lfims.service.base.LFIMSGenericServiceFactory;
import com.dreamtech360.lfims.service.base.LFIMSModelServiceFactory;
import com.dreamtech360.lfims.services.ServiceEnum;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class LFIMSAPIContext implements BundleActivator {
private static BundleContext bundleContext=null;
public static <T> LFIMSModelServiceFactory<T> getService(ServiceEnum modelType){
LFIMSModelServiceFactory<T> service=(LFIMSModelServiceFactory<T>)bundleContext.getService(bundleContext.getServiceReference(modelType.getFactoryName()));
return service;
}
public static <T> LFIMSGenericServiceFactory<T> getGenericService(ServiceEnum modelType){
LFIMSGenericServiceFactory<T> service=(LFIMSGenericServiceFactory<T>)bundleContext.getService(bundleContext.getServiceReference(modelType.getFactoryName()));
return service;
}
@Override
public void start(BundleContext context) throws Exception {
// Cache the bundle context so the static lookup helpers can resolve services.
bundleContext=context;
/*serviceReference=new ArrayList<ServiceReference>();
serviceReference.add(context.getServiceReference(BankMasterServiceFactory.class.getName()));
BankMasterServiceFactory bankMasterServiceFactory=(BankMasterServiceFactory)context.getService(serviceReference.get(0));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.BANK_MASTER_SERVICE, bankMasterServiceFactory);
serviceReference.add(context.getServiceReference(BranchMasterServiceFactory.class.getName()));
BranchMasterServiceFactory branchMasterServiceFactory=(BranchMasterServiceFactory)context.getService(serviceReference.get(1));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.BRANCH_MASTER_SERVICE, branchMasterServiceFactory);
serviceReference.add(context.getServiceReference(AdvocateMasterServiceFactory.class.getName()));
AdvocateMasterServiceFactory advocateMasterServiceFactory=(AdvocateMasterServiceFactory)context.getService(serviceReference.get(2));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.ADVOCATE_MASTER_SERVICE, advocateMasterServiceFactory);
serviceReference.add(context.getServiceReference(OurAdvocateMasterServiceFactory.class.getName()));
OurAdvocateMasterServiceFactory ourAdvocateMasterServiceFactory=(OurAdvocateMasterServiceFactory)context.getService(serviceReference.get(3));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.OUR_ADVOCATE_MASTER_SERVICE, ourAdvocateMasterServiceFactory);
serviceReference.add(context.getServiceReference(CourtMasterServiceFactory.class.getName()));
CourtMasterServiceFactory courtMasterServiceFactory=(CourtMasterServiceFactory)context.getService(serviceReference.get(4));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.COURT_MASTER_SERVICE, courtMasterServiceFactory);
serviceReference.add(context.getServiceReference(ExpensesMasterServiceFactory.class.getName()));
ExpensesMasterServiceFactory expensesMasterServiceFactory=(ExpensesMasterServiceFactory)context.getService(serviceReference.get(5));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.EXPENSES_MASTER_SERVICE, expensesMasterServiceFactory);
serviceReference.add(context.getServiceReference(NdpMasterServiceFactory.class.getName()));
NdpMasterServiceFactory ndpMasterServiceFactory=(NdpMasterServiceFactory)context.getService(serviceReference.get(6));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.NDP_MASTER_SERVICE, ndpMasterServiceFactory);
serviceReference.add(context.getServiceReference(CaseMgmtMaintenanceServiceFactory.class.getName()));
CaseMgmtMaintenanceServiceFactory caseMgmtMaintenanceServiceFactory=(CaseMgmtMaintenanceServiceFactory)context.getService(serviceReference.get(7));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_MGMT_MAINTENANCE, caseMgmtMaintenanceServiceFactory);
serviceReference.add(context.getServiceReference(CaseMasterServiceFactory.class.getName()));
CaseMasterServiceFactory caseMasterServiceFactory=(CaseMasterServiceFactory)context.getService(serviceReference.get(8));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_MASTER, caseMasterServiceFactory);
serviceReference.add(context.getServiceReference(CaseDefendentDetailsServiceFactory.class.getName()));
CaseDefendentDetailsServiceFactory caseDefendentDetailsServiceFactory=(CaseDefendentDetailsServiceFactory)context.getService(serviceReference.get(9));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_DEFENDENT_DETAILS, caseDefendentDetailsServiceFactory);
serviceReference.add(context.getServiceReference(CaseDiaryServiceFactory.class.getName()));
CaseDiaryServiceFactory caseDiaryServiceFactory=(CaseDiaryServiceFactory)context.getService(serviceReference.get(10));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_DIARY, caseDiaryServiceFactory);
serviceReference.add(context.getServiceReference(CaseImportantDocumentsServiceFactory.class.getName()));
CaseImportantDocumentsServiceFactory caseImportantDocumentsServiceFactory=(CaseImportantDocumentsServiceFactory)context.getService(serviceReference.get(11));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_IMPORTANT_DOCUMENTS, caseImportantDocumentsServiceFactory);
serviceReference.add(context.getServiceReference(CaseSecurityDetailsServiceFactory.class.getName()));
CaseSecurityDetailsServiceFactory caseSecurityDetailsServiceFactory=(CaseSecurityDetailsServiceFactory)context.getService(serviceReference.get(12));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CASE_SECURITY_DETAILS, caseSecurityDetailsServiceFactory);
serviceReference.add(context.getServiceReference(LFIMSCacheManagementServiceFactory.class.getName()));
LFIMSCacheManagementServiceFactory cacheManagementServiceFactory=(LFIMSCacheManagementServiceFactory)context.getService(serviceReference.get(13));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.CACHE_MANAGEMENT_SERVICE, cacheManagementServiceFactory);
serviceReference.add(context.getServiceReference(LFIMSTransactionManagementServiceFactory.class.getName()));
LFIMSTransactionManagementServiceFactory transactionManagementServiceFactory=(LFIMSTransactionManagementServiceFactory)context.getService(serviceReference.get(14));
LFIMSServiceFactoryLocator.registerServiceFactory(ServiceEnum.TRANSACTION_MANAGEMENT_SERVICE, transactionManagementServiceFactory); */
}
@Override
public void stop(BundleContext context) throws Exception {
}
}
|
Project SEE (Satellite Energy Exchange): proposal for space-based gravitational measurements Project SEE (Satellite Energy Exchange) is an international effort to organize a new space mission for fundamental measurements in gravitation, including tests of the equivalence principle (EP) by composition dependence (CD) and inverse-square-law (ISL) violations, determination of G, and a test for non-zero G-dot. The CD tests will be both at intermediate distances (a few metres) and at long distances (radius of the Earth, RE). Thus, a SEE mission would obtain accurate information self-consistently on a number of distinct gravitational effects. The EP test by CD at distances of a few metres would provide confirmation of earlier, more precise experiments. All other tests would significantly improve our knowledge of gravity. In particular, the error in G is projected to be less than 1 ppm. Project SEE entails launching a dedicated satellite and making detailed observations of free-floating test bodies within its experimental chamber. |
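For scale, a comparison of my own (the CODATA figure is not from the proposal): the currently recommended value of G carries a relative standard uncertainty of roughly 2×10⁻⁵, so the projected sub-ppm error would tighten G by about a factor of twenty:

```python
# Assumed reference figure: CODATA 2018 lists G = 6.674 30(15) x 10^-11,
# i.e. a relative standard uncertainty of about 2.2e-5.
codata_rel_uncertainty = 2.2e-5
see_target = 1e-6                      # Project SEE goal: error below 1 ppm
print(f"improvement: ~{codata_rel_uncertainty / see_target:.0f}x")  # ~22x
```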
def word_level_drs(new_clauses, sep):
    """Render each clause (a list of tokens) as a single whitespace-normalized string.

    Trailing separator characters are stripped from each clause before the
    surrounding whitespace is collapsed.
    """
    return_strings = []
    for cur_clause in new_clauses:
        ret_str = ''
        for item in cur_clause:
            ret_str += ' ' + item + ' '
        # Strip trailing separator chars, then collapse runs of whitespace.
        return_strings.append(" ".join(ret_str.rstrip(sep).strip().split()))
    return return_strings
import { Component, OnDestroy, OnInit } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';
import { Router } from '@angular/router';
import { Subscription, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';
import { RequestsService } from '../shared/requests.service';
@Component({
selector: 'app-login',
templateUrl: './login.component.html',
styleUrls: ['./login.component.scss'],
})
export class LoginComponent implements OnInit, OnDestroy {
formGroup: FormGroup;
private subscription: Subscription = new Subscription();
constructor(private http: RequestsService, private router: Router) {
this.formGroup = new FormGroup({
login: new FormControl(null, [Validators.required]),
password: new FormControl(null, [Validators.required]),
});
}
ngOnInit(): void {}
ngOnDestroy(): void {
this.subscription.unsubscribe();
}
submit(): void {
this.subscription.add(
this.http
.checkPassword(this.formGroup.value)
.pipe(
catchError((err) => {
console.log(err);
this.formGroup.setErrors({ loginOrPasswordNotCorrect: true });
return throwError(err);
})
)
.subscribe(() => {
this.router.navigate(['test']);
})
);
}
}
|
Feline friends Mork and Mindy have been waiting for a home for over nine months.
The RSPCA East Norfolk Branch are full to capacity with animals waiting for homes and ones that are recovering from illness or injury. Please think about if you could rehome any of the animals in need this week.
Especially in need of a loving home are long termers Mork and Mindy who would love to be rehomed together as they don’t want to part ways after everything they’ve been through together.
When they first came to the RSPCA Mork and Mindy were poorly, but after being well looked after they are now desperate for a family to call their own.
They would be happiest in a home where they could have safe supervised access to outside.
If you think you could give Mork and Mindy the love they need then please call the RSPCA rehoming line on 07867 972870.
There are lots of other animals who also need help.
Tigertoo is a huge character who is full of energy. He was found wandering the streets in extremely poor condition. He has now put on weight, been neutered and had a dental. He is looking for a home where he will have a neutered female rabbit for company. His hobbies include chasing, climbing and chewing on branches.
Smokey is a large beautiful grey and white cat. He would be happiest in an adult home and could be adopted either on his own or with one of the other cats who he arrived with.
Hemo came into the RSPCA’s care after being hit by a car. He has had a hard life, living as a stray for some time, and now deserves a home where he will be cared for. He loves company and always wants to sit on your lap.
Sylvia is a young female cat who is always talking. She has been used to living with other cats and could live with older children.
Another 3 kittens arrived at the RSPCA last week who seemed to have been born outside along with their timid mum who is not much older than a kitten herself. We have called them Fries, Salty and Ketchup and they are now ready to find their forever homes.
Albie is a gentle giant who loves cuddles. He’ll act shy at first but then he won’t leave you alone. He will need a home where he has a comfy lap to sit on and a nice garden to explore. He would be happiest being the only cat in the home.
Little Cookie is sadly still with us. She needs an adult home with an owner who will give her the time she needs to settle. She takes some time to get used to new people, but give her a chance and she’ll be so grateful.
Humbug is a feral cat who is looking for a stable or farm type home. He will still need all the same care as a domestic cat, a warm place to shelter, fresh food and water daily and medical care.
All of the RSPCA’s cats and dogs are neutered, vaccinated, microchipped, vet checked and on a flea and worm programme.
Adoption is subject to a successful home visit and there is a small adoption fee of £55 for cats and kittens and £25 for rabbits. This adoption fee helps the RSPCA to take in the next animal in need; it in no way covers the cost to the branch of making the animal ready for rehoming.
RSPCA East Norfolk is a locally funded branch and if you would like more information visit their website at www.rspcaeastnorfolk.co.uk.
Any kitten under 16 weeks that is rehomed from the branch will have a full cost neutering voucher to be used at the vets when they are 16 weeks old. To adopt an animal please call 07867 972870. |
"Auckland stands with Christchurch and with the Muslim community across New Zealand," says Auckland Mayor Phil Goff.
"We acknowledge our city’s strong Muslim community and stand united with the community in grief and solidarity.
"Auckland has come together to support our Muslim community. The council has also made available condolence books in various locations around Auckland to give Aucklanders the opportunity to express their messages of support for the victims, their families and their community.
"Auckland and New Zealand are places of peace," says Mayor Goff. |
class Factory:
"""Reusable layers
Admittedly, there is a lot of coupling here that could be revised
in future releases
"""
def __init__(self, dataset,
color_mapper,
figures,
source_limits,
opacity_slider):
self._calls = 0
self.dataset = dataset
self.color_mapper = color_mapper
self.figures = figures
self.source_limits = source_limits
self.opacity_slider = opacity_slider
def __call__(self):
"""Complex construction"""
self._calls += 1
        try:
            # Some datasets accept a color mapper; fall back for those that do not.
            map_view = self.dataset.map_view(self.color_mapper)
        except TypeError:
            map_view = self.dataset.map_view()
visible = Visible.from_map_view(map_view, self.figures)
if self.opacity_slider is not None:
self.opacity_slider.add_renderers(visible.renderers)
return Layer(map_view, visible, self.source_limits) |
/**
* Read an HTML String, parse it and extract all structured embeddings it contains.
*
* @param html the HTML String to parse
* @return the read embeddings, or empty if the HTML is not of the supported format
*/
public Optional<StructuredEmbeddingsHolder> extract(String html) {
int jsonStartIndex = magicHeaderLength(html);
if (jsonStartIndex <= 0) {
return Optional.empty();
}
int jsonStopIndex = html.indexOf(JSON_STOP, jsonStartIndex);
if (jsonStopIndex <= 0) {
return Optional.empty();
}
String json = html.substring(jsonStartIndex, jsonStopIndex).replace("&#45;", "-").replace("&amp;", "&"); // un-escape HTML-escaped characters in the embedded JSON
String htmlParts = html.substring(jsonStopIndex + JSON_STOP.length());
JSONArray array;
try {
array = (JSONArray) new JSONParser(JSONParser.MODE_PERMISSIVE).parse(json);
} catch (ParseException e) {
log.error("Cannot parse StructuredEmbeddings JSON", e);
return Optional.empty();
}
StructuredEmbeddingsHolder embeddings = new StructuredEmbeddingsHolder();
for (int i = 0; i < array.size(); i++) {
JSONObject object = (JSONObject) array.get(i);
final Integer priorityOrder = (Integer) object.get("priorityOrder");
final ParsedStructuredEmbedding embedding = new ParsedStructuredEmbedding(
(String) object.get("kind"),
(String) object.get("name"),
(String) object.get("type"),
object.get("data"),
(String) object.get("priority"),
(priorityOrder == null ? 0 : priorityOrder.intValue()),
extractHtml(htmlParts, i));
embeddings.getEmbeddings().add(embedding);
}
return Optional.of(embeddings);
} |
# -*- coding: utf-8 -*-
"""
Created on Tue Jul 28 23:12:59 2020
@author: josed
"""
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def tumor_PC3( y, t, cprolif_c, cprolif_r, cCapacity_c, cCapacity_r, lambda_c, lambda_r):
Vc, Vr = y
dVcdt = cprolif_c * Vc * (1 - (Vc/cCapacity_c) - (lambda_r*(Vr/cCapacity_c)))
dVrdt = cprolif_r * Vr * (1 - (Vr/cCapacity_r) - (lambda_c*(Vc/cCapacity_r)))
return dVcdt, dVrdt
# Given parameters
cprolif_c = 0.015; cprolif_r = 0.02;
cCapacity_c = 0.85; cCapacity_r = 2;
lambda_c = 0.2; lambda_r = 0;
def surviving_fraction( alpha, beta, dosage):
    # Linear-quadratic (LQ) model: fraction of cells surviving one radiation dose.
    return math.exp( -alpha * dosage - beta * dosage * dosage )
t = np.linspace(0, 336, 337)
y0 = [0.5, 0.5]
# y0 = [0.9, 0.1]
sol = odeint( tumor_PC3, y0, t, args = ( cprolif_c, cprolif_r, cCapacity_c, cCapacity_r, lambda_c, lambda_r) )
plt.figure(1)
plt.title("PC3")
plt.plot(t, sol[:, 0], 'b', label='parental')
plt.plot(t, sol[:, 1], 'g', label='resistant')
plt.xlabel('Time (hours)')
plt.ylabel('Volume')
plt.legend(loc='best')
plt.grid()
a_res = 0.300
b_res = 0.0402
a_sen = 0.430
b_sen = 0.0407
tin = 336  # treatment starts after 336 h (14 days) of untreated growth
val = 24 * int(input("Enter time interval (days): "))
# val = 24 * 6
dosage = 0
dos = int(input("Enter dosage: "))
area_sen = 0
area_res = 0
var = int(60/dos)
curve_sen = (336/2) * (0.5 + sol[336,0])
for dos_count in range (0, var, 1):
y0_ = sol[-1, :]
y0 = y0_ * surviving_fraction(a_sen, b_sen, dos)
t = np.linspace(tin, tin + val, tin + val + 1)
sol = odeint( tumor_PC3, y0, t, args = ( cprolif_c, cprolif_r, cCapacity_c, cCapacity_r, lambda_c, lambda_r) )
t = np.append( tin, t )
sol = np.append( [y0_], sol, 0 )
plt.plot(t, sol[:, 0], 'b', label='sensitive')
dosage += dos
if tin >= 336 + val:
long = sol[0, 0]
area_sen += (val/2) * (short + long)
if dosage == 60:
final_vol = sol[tin + val, 0]
elif tin >= 1344:
final_vol = sol[1344,0]
break
short = sol[1,0]
tin += val
area_sen += curve_sen
# print('Final Volume of Sensitive Tumor = ', final_vol)
print(final_vol)
# print('Total Area Under Sensitive Tumor Curve = ', area_sen)
print(area_sen)
tin = 336
dosage = 0
t = np.linspace(0, 336, 337)
y0 = [0.5, 0.5]
sol = odeint( tumor_PC3, y0, t, args = ( cprolif_c, cprolif_r, cCapacity_c, cCapacity_r, lambda_c, lambda_r) )
curve_res = (336/2) * (0.5 + sol[336,1])
for dos_count in range (0, 60, dos):
y0_ = sol[-1, :]
y0 = y0_ * surviving_fraction(a_res, b_res, dos)
t = np.linspace(tin, tin + val, tin + val + 1)
sol = odeint( tumor_PC3, y0, t, args = ( cprolif_c, cprolif_r, cCapacity_c, cCapacity_r, lambda_c, lambda_r) )
t = np.append( tin, t )
sol = np.append( [y0_], sol, 0 )
plt.plot(t, sol[:, 1], 'g', label='resistant')
dosage += dos
if tin >= 336 + val:
long = sol[0, 1]
area_res += (val/2) * (short + long)
if dosage == 60:
final_vol = sol[tin + val, 1]
elif tin >= 1344:
final_vol = sol[1344, 1]
break
short = sol[1, 1]
tin += val
area_res += curve_res
# print('Final Volume of Resistant Tumor = ', final_vol)
print(final_vol)
# print('Total Area Under Resistant Tumor Curve = ', area_res)
print(area_res)
plt.show()
|
import cv2
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface.xml")
def detect():
    # Stream frames from the default webcam and box each detected face.
    cap = cv2.VideoCapture(0)
    while True:
        _, img = cap.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 4)  # scaleFactor, minNeighbors
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Face Detect", img)
        if cv2.waitKey(1) == 27:  # Esc key quits
            break
    cap.release()
    cv2.destroyAllWindows()

detect()
|
When Ryan Anderson was dropped from the Nets’ rotation before New Year’s, coach Lawrence Frank’s reasoning was sound.
Anderson’s back was aching, his shooting was wanting as a backup to Yi Jianlian, who also was scuffling with his shot. Say what you will about continuity, but none saw the logic in subbing a struggling shooter, Anderson, for a struggling shooter, Yi.
But now, Anderson is healthy, Yi isn’t (broken pinkie) and the rookie will make his first career start tonight when the Nets look for a fourth straight win at home. They’ll greet former Net Nenad Krstic and the Thunder – while welcoming Devin Harris back from a hamstring injury that KO’d him for 3 1/2 of the last four games. Anderson replaces Yi, who is out 4-6 weeks.
In his last eight appearances – one coming after his benching – Anderson shot 7-of-34 (.206), including 3-of-17 (.176) on 3-pointers while averaging 3.5 points. Blame the back for some.
He’d rather emulate Yi of late. Yi had 58 points in three games before his injury.
“Yi’s been playing amazing. You could see his confidence [growing],” said Anderson, who like Yi lately vows to be an inside-outside guy.
But all that is in the past as, at 20, he’ll be the youngest Net to start since Clifford Robinson was 19 in 1979-80. With Brook Lopez, he’ll also give the Nets two rookie starters for the first time since March 5, 2002 (Richard Jefferson and Jason Collins).
Frank had options but chose Anderson, in part, to jumpstart the rookie. One possible option was Eduardo Najera, but Frank likes the role the veteran forward has adopted – as a supply of energy and defense off the bench.
“Eduardo’s really getting his niche, doing what he’s doing, so as opposed to changing everything, we’re going to give Ryan an opportunity,” Frank said.
Krstic has 24 points and 10 rebounds in three games. Krstic criticized the Nets’ handling of his injury when he signed in Russia, but Nets seem forgiving.
“Nenad is one of the nicest kids in the world. I was a little disappointed about what he said, but that’s water under the bridge,” Frank said. |
// Named exports let the importer choose what to import → in export.ts: export const A = {}
// A default export exposes a single value → in export.ts: export default A
// When export variable we must wrap in object
// export {sayHello, sayGoodbye}
// export const phi = 1.61;
interface Storage {
}
interface Session {
}
export interface User {
name: string
}
const SECTION_NAME3 = 'function'
const SECTION_NAME4 = 'function'
// Can use import {SECTION_NAME1} from './_function'
export const SECTION_NAME1 = 'function'
export {SECTION_NAME3, SECTION_NAME4}
export {Session, Storage as OtherStorage}
// Cannot choose which is import: import {SECTION_NAME1} from './_function'
// import fn from './_function'
// fn.SECTION_NAME3, fn.SECTION_NAME4
export default {SECTION_NAME3, SECTION_NAME4} |
# Copyright Notice:
# Copyright 2017-2019 DMTF. All rights reserved.
# License: BSD 3-Clause License. For full text see link: https://github.com/DMTF/YANG-to-Redfish-Converter/blob/master/LICENSE.md
grammar_whitespace_mode = 'optional'
from xml.etree.ElementTree import Element, SubElement, Comment, tostring, dump
class XMLContent:
    """Pairs a generated XML tree with the output filename it belongs to."""
    filename = None
    xml_tree = None
def __init__(self):
pass
@classmethod
def create_doc_with(cls, name, node):
xml_content = XMLContent()
xml_content.set_filename(name + '_v1.xml')
xml_content.set_xml(node)
return xml_content
def set_filename(self, filename):
self.filename = filename
def set_xml(self, xml):
self.xml_tree = xml
def get_filename(self):
return self.filename
def get_xml(self):
return self.xml_tree
|
import numpy

def _histogram_zr(positions, axes=(0, 1, 2)):
    """Histogram particle positions into integer (time, z, radius) bins."""
    number_particles, duration, dimensions = positions.shape
    zax, yax, xax = axes
    assert dimensions == 3
    radial = positions[..., :2].copy()
    zmin = radial[..., zax].min()
    # Cylindrical radius in the (y, x) plane; +0.5 so truncation rounds to nearest.
    radius = numpy.hypot(positions[..., yax], positions[..., xax])
    radius += 0.5
    radial[..., 0] -= zmin
    radial[..., 1] = radius
    zmax = radial[..., 0].max()
    rmax = radial[..., 1].max()
    hist = numpy.zeros((duration, int(zmax) + 1, int(rmax) + 1), dtype=numpy.uint32)
    for p in range(number_particles):
        for t in range(duration):
            z = int(radial[p, t, 0])  # cast to int so the values index bins
            r = int(radial[p, t, 1])
            hist[t, z, r] += 1
    return hist, (zmin, 0)
/**
* Maps Tree node offsets using provided mapping.
* @param tree the Tree whose begin and end extents should be mapped.
* @param mapping the list of RangeMap objects which defines the mapping.
*/
protected static void mapOffsets(Tree tree, List<RangeMap> mapping)
{
if (mapping == null || mapping.size() == 0) return;
int begin_map_index = 0;
RangeMap begin_rmap = mapping.get(begin_map_index);
TREE: for (Tree t : tree) {
if (t.isLeaf()) continue;
MapLabel label = (MapLabel) t.label();
int begin = (Integer) label.get(BEGIN_KEY);
int end = (Integer) label.get(END_KEY) - 1;
while (begin_rmap.end <= begin) {
begin_map_index++;
if (begin_map_index >= mapping.size()) break TREE;
begin_rmap = mapping.get(begin_map_index);
}
if (begin_rmap.begin > end) {
continue;
}
int new_begin = begin;
if (begin_rmap.begin <= new_begin) {
new_begin = begin_rmap.map(new_begin);
}
int end_map_index = begin_map_index;
RangeMap end_rmap = begin_rmap;
END_OFFSET: while (end_rmap.end <= end) {
end_map_index++;
if (end_map_index >= mapping.size()) break END_OFFSET;
end_rmap = mapping.get(end_map_index);
}
int new_end = end;
if (end_rmap.begin <= end) {
new_end = end_rmap.map(end);
}
label.put(BEGIN_KEY, new_begin);
label.put(END_KEY, new_end + 1);
}
} |
Optimization of Flexible Non-Uniform Multilevel PAM for Maximizing the Aggregated Capacity in PON Deployments Non-uniform pulse amplitude modulation (PAM) utilizes unequal distances between its modulation levels. In a multilevel PAM symbol, multiple bits are encoded. Due to the unequal level spacing, some bits can be decoded successfully at a lower received optical power than others. This is well suited for practical passive optical network (PON) deployments wherein the optical powers received by the different optical network units (ONUs) typically vary over a broad range. Thus, more ONUs in the PON can successfully decode non-uniform PAM-4 and PAM-8 than standard PAM-4/8, thereby increasing the aggregated capacity of the network. In systems where signal-dependent noise makes up a significant part of the total received noise level, the non-uniform PAM constellation can be adapted to take this signal-dependent variance into account. In doing so, a lower unequal level spacing can be used, decreasing the received optical power required to successfully decode all the bits in the PAM symbol. The impact of non-uniform PAM on the network throughput is presented by comparison of the experimental results with the actual loss distribution of a commercially deployed PON. |
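To make the unequal-spacing idea concrete, here is a toy sketch of my own (the level values and Gray mapping are illustrative assumptions, not taken from the paper): the most-significant bit enjoys a wider decision eye than the least-significant bit, so a receiver at low optical power can still recover the MSB after the LSB has become unreliable:

```python
import numpy as np

# Toy non-uniform PAM-4: Gray-coded levels with unequal spacing (assumed values).
levels = {(0, 0): 0.0, (0, 1): 0.35, (1, 1): 1.0, (1, 0): 1.65}

def decode(sample):
    # MSB: one mid-point threshold between the two level groups (wide eye).
    msb = int(sample > (0.35 + 1.0) / 2)
    # LSB: nearest-level decision within the chosen group (narrow eye).
    if msb == 0:
        lsb = int(abs(sample - 0.35) < abs(sample - 0.0))
    else:
        lsb = int(abs(sample - 1.0) < abs(sample - 1.65))
    return msb, lsb

# With moderate noise the LSB fails first while the MSB usually survives,
# which is the property exploited across ONUs at different received powers.
rng = np.random.default_rng(0)
noisy = levels[(1, 1)] + rng.normal(0.0, 0.2)
print(decode(noisy))
```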
package com.dotcms.api.system.event;
import com.dotcms.api.system.event.verifier.ExcludeOwnerVerifierBean;
import com.dotcms.repackage.com.google.common.annotations.VisibleForTesting;
import com.dotmarketing.business.APILocator;
import com.dotmarketing.business.PermissionAPI;
import com.dotmarketing.exception.DotDataException;
import com.dotmarketing.portlets.contentlet.model.Contentlet;
/**
 * This utility class provides methods to record the different events linked with the several types of
 * {@link Contentlet}
*
* @see SystemEventsAPI
* @see SystemEvent
*/
public class ContentletSystemEventUtil {
private static final String DELETE_EVENT_PREFIX = "DELETE";
private static final String SAVE_EVENT_PREFIX = "SAVE";
private static final String UPDATE_EVENT_PREFIX = "UPDATE";
private static final String ARCHIVED_EVENT_PREFIX = "ARCHIVE";
private static final String PUBLISH_EVENT_PREFIX = "PUBLISH";
private static final String UN_PUBLISH_EVENT_PREFIX = "UN_PUBLISH";
private static final String UN_ARCHIVED_EVENT_PREFIX = "UN_ARCHIVE";
private static final String COPY_EVENT_PREFIX = "COPY";
private static final String MOVE_EVENT_PREFIX = "MOVE";
private static final String SITE_EVENT_SUFFIX= "SITE";
private final SystemEventsAPI systemEventsAPI;
@VisibleForTesting
protected ContentletSystemEventUtil(SystemEventsAPI systemEventsAPI){
this.systemEventsAPI = systemEventsAPI;
}
private ContentletSystemEventUtil(){
this(APILocator.getSystemEventsAPI());
}
private static class SingletonHolder {
private static final ContentletSystemEventUtil INSTANCE = new ContentletSystemEventUtil();
}
public static ContentletSystemEventUtil getInstance() {
return ContentletSystemEventUtil.SingletonHolder.INSTANCE;
}
/**
 * Push a save or update event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 * The isNew argument sets the prefix of the event name: if it is true the prefix is SAVE, otherwise
 * the prefix is UPDATE.
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a HOST then
 * it would be SITE. For example: if isNew is true and the contentlet is a Host, the event pushed would be
 * SAVE_SITE.
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
* @param isNew
*/
public void pushSaveEvent(Contentlet contentlet, boolean isNew){
String actionName = getActionName(contentlet, isNew);
sendEvent(contentlet, actionName);
}
/**
 * Push a delete event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a HOST then
 * it would be SITE, and the event's name would be DELETE_SITE.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushDeleteEvent(Contentlet contentlet){
sendEvent(contentlet, DELETE_EVENT_PREFIX);
}
/**
 * Push a publish event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be PUBLISH_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushPublishEvent(Contentlet contentlet){
sendEvent(contentlet, PUBLISH_EVENT_PREFIX);
}
/**
 * Push an unpublish event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be UN_PUBLISH_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushUnpublishEvent(Contentlet contentlet){
sendEvent(contentlet, UN_PUBLISH_EVENT_PREFIX);
}
/**
 * Push a copy event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be COPY_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushCopyEvent(Contentlet contentlet){
sendEvent(contentlet, COPY_EVENT_PREFIX);
}
/**
 * Push a move event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be MOVE_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
* @param contentlet is the Payload data
*/
public void pushMoveEvent(Contentlet contentlet){
sendEvent(contentlet, MOVE_EVENT_PREFIX);
}
/**
 * Push an archive event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be ARCHIVE_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushArchiveEvent(Contentlet contentlet){
sendEvent(contentlet, ARCHIVED_EVENT_PREFIX);
}
/**
 * Push an unarchive event; the event that is pushed depends on the {@link Contentlet}'s Content Type.
 *
 * The suffix of the event's name is set by the {@link Contentlet}'s Content Type, so if the contentlet is a File then
 * it would be FILE_ASSET, and the event's name would be UN_ARCHIVE_FILE_ASSET.
 *
 * If no event with the constructed name exists, no event is pushed.
*
* @param contentlet is the Payload data
*/
public void pushUnArchiveEvent(Contentlet contentlet){
sendEvent(contentlet, UN_ARCHIVED_EVENT_PREFIX);
}
/**
* Return the event's name prefix for a SAVE or UPDATE action.
*
* @param contentlet
* @param isNew
* @return
*/
private String getActionName(Contentlet contentlet, boolean isNew) {
return isNew ? SAVE_EVENT_PREFIX : UPDATE_EVENT_PREFIX;
}
/**
* Return the event's name according to a {@link Contentlet} and a methodName
*
* @param contentlet
* @param methodName
* @return
*/
private SystemEventType getSystemEventType(Contentlet contentlet, String methodName) {
String contentType = getType(contentlet);
String eventName = String.format("%s_%s", methodName, contentType);
try {
return SystemEventType.valueOf(eventName.toUpperCase());
}catch(IllegalArgumentException e){
return null;
}
}
private String getType(Contentlet contentlet) {
if (contentlet.isHost()){
return SITE_EVENT_SUFFIX;
}else if (contentlet.getStructure() != null && contentlet.getStructure().getName() != null){
return contentlet.getStructure().getName().replace(" ", "_").toUpperCase();
}else{
throw new IllegalStateException("The Content type is null");
}
}
private void sendEvent(Contentlet contentlet, String action) {
SystemEventType systemEventType = getSystemEventType(contentlet, action);
if (systemEventType != null) {
Payload payload = this.getPayload(contentlet);
try {
systemEventsAPI.push(new SystemEvent(systemEventType, payload));
} catch (DotDataException e) {
throw new CanNotPushSystemEventException(e);
}
}
}
private Payload getPayload(Contentlet contentlet){
if (contentlet.isHost()){
return new Payload(contentlet, Visibility.PERMISSION, PermissionAPI.PERMISSION_READ);
}else{
return new Payload(contentlet, Visibility.EXCLUDE_OWNER,
new ExcludeOwnerVerifierBean(contentlet.getModUser(), PermissionAPI.PERMISSION_READ, Visibility.PERMISSION));
}
}
}
|
# Buy-low/sell-high stock problem (apparent intent): spend the starting money
# on shares at the cheapest price in list1, then sell them all at the highest
# price in list2, keeping any leftover change. (Python 2 syntax.)
arg = map(int, raw_input().strip().split())
list1 = map(int, raw_input().strip().split())
list2 = map(int, raw_input().strip().split())
stock = arg[2] / min(list1)        # shares affordable at the lowest buy price
remainder = arg[2] % min(list1)    # money left over after buying
stock_sold = max(list2) * stock    # proceeds from selling at the highest price
# Never end with less money than we started with.
print max(stock_sold + remainder, arg[2])
The purpose of this study was to clarify the clinical characteristics of lung cancer patients with abnormal accumulation in the gastrointestinal tract by fluoro-2-deoxyglucose positron emission tomography (PET). Of the 968 consecutive patients with primary lung cancer who underwent PET from October 2005 through September 2009, 26 patients had local abnormal accumulation in the gastrointestinal tract. We retrospectively compared the localization of abnormal accumulation in the gastrointestinal tract, standardized uptake value (SUV) max (1 hour), and the final clinical diagnosis. The site of abnormal accumulation was the esophagus in 1 case, the stomach in 8 and the small intestine to large intestine in 17. In 15 out of 26 (57%) cases with true PET positive results, there was esophageal cancer in 1 case, gastric cancer in 2, gastrointestinal stromal tumor in 1, colon cancer in 8, and 1 each of metastasis to the stomach, small intestine and large intestine from lung cancer. In 11 cases with false PET-positive results, there was a stomach polyp in 1 case, gastritis in 3, colon polyp in 1, diverticulitis in 1 and normal physiologic accumulation in 5. There were no differences in mean SUV max among malignant lesions, benign lesions, and normal physiologic accumulation. We should perform endoscopy of the digestive tract to detect malignant lesions with high incidence rates when PET shows localized abnormal accumulation in the gastrointestinal tract in patients with lung cancer.
Ultrabright fluorescent silica particles with a large number of complex spectra excited with a single wavelength for multiplex applications. We report on a novel approach to synthesize ultrabright fluorescent silica particles capable of producing a large number of complex spectra. The spectra can be excited using a single wavelength which is paramount in quantitative fluorescence imaging, flow cytometry and sensing applications. The approach employs the physical encapsulation of organic fluorescent molecules inside a nanoporous silica matrix with no dye leakage. As was recently demonstrated, such an encapsulation allowed for the encapsulation of very high concentrations of organic dyes without quenching their fluorescent efficiency. As a result, dye molecules are distanced within ∼5 nm from each other; it theoretically allows for efficient exchange of excitation energy via Förster resonance energy transfer (FRET). Here we present the first experimental demonstration of the encapsulation of fluorescent dyes in the FRET sequence. Attaining a FRET sequence of up to five different dyes is presented. The number of distinguishable spectra can be further increased by using different relative concentrations of encapsulated dyes. Combining these approaches allows for creating a large number of ultrabright fluorescent particles with substantially different fluorescence spectra. We also demonstrate the utilization of these particles for potential multiplexing applications. Though fluorescence spectra of the obtained multiplex probes are typically overlapping, they can be distinguished by using standard linear decomposition algorithms.
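The claim that dyes spaced within ~5 nm can exchange excitation energy efficiently follows from the standard Förster relation E = 1 / (1 + (r/R0)^6). The Förster radii below are typical literature values for organic dye pairs, assumed for illustration rather than taken from this paper:

```python
def fret_efficiency(r_nm, r0_nm):
    """Förster energy-transfer efficiency at donor-acceptor distance r."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Dyes encapsulated ~5 nm apart; assumed Förster radii of 4-6 nm give
# substantial transfer efficiency at that spacing.
for r0 in (4.0, 5.0, 6.0):
    print(f"R0 = {r0} nm -> E(5 nm) = {fret_efficiency(5.0, r0):.2f}")
# R0 = 4.0 nm -> 0.21; R0 = 5.0 nm -> 0.50; R0 = 6.0 nm -> 0.75
```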
import heapq

# Apparent intent: split the 3N elements into a prefix and a suffix and
# maximize (sum of the N largest in the prefix) - (sum of the N smallest
# in the suffix) over all valid split points.
N = int(input())
A = list(map(int, input().split()))
leftQue = []   # min-heap keeping the N largest prefix elements
rightQue = []  # min-heap of negated values keeping the N smallest suffix elements
leftSum = 0
rightSum = 0
for i in range(N):
heapq.heappush(leftQue, A[i])
leftSum += A[i]
heapq.heappush(rightQue, -A[-1 - i])
rightSum -= A[-1 - i]
left = [-float('inf') for _ in range(3 * N)]
right = [-float('inf') for _ in range(3 * N)]
left[N - 1] = leftSum
for middle in range(N, 2 * N):
leftSum += A[middle]
leftSum -= heapq.heappushpop(leftQue, A[middle])
left[middle] = leftSum
right[2 * N - 1] = rightSum
for middle in range(2 * N - 1, N - 1, -1):
rightSum -= A[middle]
rightSum -= heapq.heappushpop(rightQue, -A[middle])
right[middle - 1] = rightSum
ans = -float('inf')
for l, r in zip(left, right):
ans = max(ans, l + r)
print(ans) |
Zidisha
History
Zidisha was founded in October 2009 by Julia Kurnia.
After visiting Niger as Portfolio Analyst for the US African Development Foundation, Kurnia became disillusioned with foreign aid. In 2006, she co-founded the Senegal Ecovillage Microfinance (SEM) Fund with John Fay and Nan Guslander. To keep financing and salary costs low, SEM raised money from the online microlending portal Kiva at 0% interest, and its three co-founders all went without salaries and volunteered their time.
SEM struggled with the sustainability of their model, as they were unwilling to raise interest rates to cover the cost of renting an office and hiring loan officers, and also unable to find outside donors. Eventually Kurnia left in August 2009, and SEM began to struggle with delinquent loans, with its portfolio reaching a high of 77.4% delinquency in December 2010. In response, SEM's team stopped making new loans and focused on collecting funds from their existing borrowers. Kurnia had donated $30k to subsidize SEM's operating costs, but once those donated funds and others ran out, the organization defaulted on 5.1% of its loans and Kiva closed its partnership with SEM in March 2012.
Kurnia's experience at SEM gave her visibility into the high operational costs of traditional microlenders. By 2008, Internet access in developing nations had become widely available enough to make direct peer-to-peer microlending feasible. Kurnia founded Zidisha to connect lenders and borrowers directly, thereby reducing borrower costs.
Zidisha relaunched in January 2014 as one of the first seven non-profits funded by seed accelerator Y Combinator. In March 2014, Y Combinator partner Paul Buchheit donated $100k in a bid to further promote Zidisha online.
As of May 2015, Zidisha has financed $3.4 million in loans to 12,225 borrowers. Zidisha Inc results are distinguished from Zidisha Community. The non-profit is not a lender; it carries no loans on its books.
Lending process
Zidisha's lending process works as follows:
Borrowing
1. A first-time loan applicant creates a profile that describes his or her business and personal details. The applicant's details are independently checked by Zidisha or a Zidisha partner, such as a local credit bureau. If the loan is approved and successfully funded, first-time borrowers are charged roughly $12 (1000 Kenyan Shillings) to cover this cost of processing their application. Upon joining Zidisha, borrowers also make a deposit into a reserve fund that is used to compensate lenders in the event of default. These costs are only paid once and entitle the borrower to raise an unlimited number of consecutive loans through Zidisha. Zidisha used to contract with local partners to perform telephone-based verifications of each new borrower, but around 2012 the organization discontinued this practice due to fraud, corruption and ineffectiveness.
2. Approved applicants post a loan request that describes their life story, the proposed investment, desired loan amount and repayment period. Zidisha’s lender participants then have the opportunity to finance all or a portion of the loan at zero interest.
3. If enough lenders commit to lending the designated loan amount before the loan expires, the loan is funded and disbursed to the borrower; otherwise it expires, lenders are refunded, and the borrower may try again with a new application (this all-or-nothing rule is sketched after step 4 below).
4. For successfully funded loans, 100% of lenders' accepted bids are disbursed to the borrower. Loan values are fixed in local currency, using the exchange rate effective at the time the loan is disbursed. Because loan values are fixed in local currency, lenders bear the risk of any currency exchange rate fluctuations.
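The all-or-nothing funding rule in steps 3-4 can be modeled in a few lines of Python. This is a toy sketch of my own; the function name and data shapes are illustrative assumptions, not Zidisha code:

```python
def settle_loan(requested_amount, bids):
    """All-or-nothing funding: disburse only if committed bids cover the request.

    `bids` maps lender id -> committed amount (a toy representation).
    On success, 100% of the accepted bids go to the borrower; on expiry,
    every lender is refunded in full.
    """
    total_committed = sum(bids.values())
    if total_committed >= requested_amount:
        return requested_amount, {}        # funded: nothing to refund
    return 0, dict(bids)                   # expired: refund all lenders

# A $300 request with $320 committed is funded; with $250 it expires.
print(settle_loan(300, {"lender_a": 200, "lender_b": 120}))  # (300, {})
print(settle_loan(300, {"lender_a": 200, "lender_b": 50}))   # (0, refunds)
```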
Repayments
1. The borrower is obligated to repay principal and interest according to the schedule proposed in the loan application (usually in weekly installments). Each time the borrower makes a repayment installment, lenders' shares of principal repayment are credited to their accounts on the Zidisha website.
2. Zidisha borrowers are allowed to adjust their weekly installment amount upward or downward an unlimited number of times, as long as a single payment has been made since the last adjustment. Prior to November 11, 2013, borrowers had been allowed to ask for a grace period of 1–2 months on a loan, "during which time no payments would be due, but after which monthly installments would resume in the same amounts as before." After November 11, 2013, borrowers were no longer allowed to add a grace period, but continued to be allowed to raise or lower the amount of money due each week.
3. Throughout the loan application and repayment period, lenders may post comments and questions, and borrowers may supply additional information and business updates through a weblog on their profile pages.
4. Loans made after March 2015 are covered by a reserve fund. If one of these loans falls behind on payments by 10 days or more, lenders may opt to receive a full reimbursement of the amount they lent from the reserve fund.
5. Lenders may post feedback on all lending transactions with which they are involved, thus creating a performance record that allows borrowers to request progressively larger loans with each successful repayment.
Business Operations
To maintain a low interest rate, Zidisha operations are mostly supported by volunteer teams. The volunteers are either interns, who commit 10 hours/week, or volunteers, who commit 2 hours/week. Most interns and volunteers are organized by country and are assigned a variety of day-to-day tasks, ranging from email correspondence with borrowers and lenders, to disbursing loans, to translation and reviewing member user profiles. Zidisha has had much success with this operations model, as many interns and volunteers have proven able to carry out Zidisha's day-to-day activities reliably. Those interested in learning about microfinance operations can visit Zidisha's website for volunteering opportunities.
Regulatory Status
Typically, peer-to-peer microlenders offering interest on loans to US lenders are regulated by the Securities and Exchange Commission. In 2008, the SEC required that peer-to-peer lending companies offering interest on loans to register their offerings as securities, pursuant to the Securities Act of 1933. Accordingly, Prosper was shut down by the SEC on November 24, 2008, and didn't reopen until July 2009, after it had registered with the SEC. Lending Club announced its completion of the SEC registration process on October 14, 2008. MYC4, a microlending marketplace focused on African entrepreneurs, similarly announced in 2010 that they are "not allowed to disburse money to North American Investors" because of SEC regulations.
Zidisha has not registered with the SEC, and has publicly stated that they are not a securities broker. The website states in its terms of use that unlike a securities broker, Zidisha is under no obligation to return lender funds and honor withdrawal requests: "Zidisha makes no guarantee or representation that funds lent through its website will be repaid to lenders, regardless of whether the loans financed with lender funds are repaid to Zidisha. Any cash payouts are promotional gifts offered solely at Zidisha's discretion."
The tax treatment of these promotional gifts is unclear. Gifts are taxed in the United States, which would mean that for US lenders, the original loan principal, if withdrawn, would be subject to taxation. Zidisha states in its terms that, "It is the responsibility of website users to report and pay any applicable taxes on any cash payouts received from Zidisha."
Interest Rates
Starting in February 2015, Zidisha borrowers no longer pay interest. Instead, they make a one-time deposit into a reserve fund upon joining Zidisha, and thereafter pay a service fee of 5% of each loan raised. The reserve fund is used to compensate lenders in the event a loan is not repaid on time. The service fee goes to Zidisha to cover money transfer costs.
This is much lower than the global average of interest rate of 35% for microfinance loans. According to an analysis published by microfinance risk consultant Daniel Rozas in July 2011, Zidisha offers loans at less than half the interest rates of traditional microfinance institutions as estimated by MFTransparency.
The inflation rate in developing nations varies widely, and can be as high as 53%, much higher than the interest rates usually paid to microlenders. Zidisha does not provide protection from losses due to currency risk, but also does not restrict lenders' ability to profit from currency fluctuations.
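A quick back-of-the-envelope comparison of borrower cost, using only the figures quoted above (the $200 loan size and one-year horizon are my illustrative assumptions, and the refundable reserve deposit is ignored):

```python
# First-time borrower cost on a hypothetical $200 loan (toy comparison).
loan = 200.0
zidisha_cost = 12.0 + 0.05 * loan   # ~$12 one-time application fee + 5% service fee
typical_mfi_cost = 0.35 * loan      # 35% global average microfinance interest

print(f"Zidisha:     ${zidisha_cost:.2f} ({zidisha_cost / loan:.0%} of principal)")
print(f"typical MFI: ${typical_mfi_cost:.2f} ({typical_mfi_cost / loan:.0%} of principal)")
# Zidisha: $22.00 (11%); typical MFI: $70.00 (35%)
```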
June 2014 to May 2015
In the year from June 2014 to May 2015, Zidisha lenders raised $1,313,316 for 7,693 individual loans at an average lender interest rate of 3.8% (note: Zidisha discontinued interest in February 2015).
The status of the $1,313,316 as of May 2015 is as follows:
$555,190 (42.3% of amount disbursed) has already been repaid to lenders.
$521,405 (39.7% of amount disbursed) is still outstanding with borrowers who are repaying on time (within a threshold of 30 days and $10).
$92,437 (7.0% of amount disbursed) is still outstanding with borrowers who are more than 30 days and $10 late with scheduled repayments.
$37 (0.1% of amount disbursed) has been forgiven by the lenders for humanitarian reasons.
$46,885 (3.6% of amount disbursed) has been written off by Zidisha.
All Time
As of June 2014, Zidisha's lenders have raised $2,106,294 for 6,879 individual loans at an average lender interest rate of 5.1% since the organization was founded in 2009.
The status of the $2,106,294 as of June 2014 is as follows:
$1,082,861 (51.4% of amount disbursed) has already been repaid to lenders.
$454,189 (21.6% of amount disbursed) is still outstanding with borrowers who are repaying on time (within a threshold of 30 days and $10).
$184,530 (8.8% of amount disbursed) is still outstanding with borrowers who are more than 30 days and $10 late with scheduled repayments.
$15,063 (0.7% of amount disbursed) has been forgiven by the lenders for humanitarian reasons.
$369,748 (17.6% of amount disbursed) has been written off by Zidisha.
As of 03.01.2015, Zidisha's statistics showed principal held by borrowers repaying on time (within the 30-day threshold) at only $391,115 (15.5% of the amount disbursed), and all-time principal repaid as low as 54.5% of the amount disbursed.
Changes in reporting methodology
Zidisha had previously reported a repayment rate of 98% as of August 2012, but this figure counted only those loans whose final repayment dates had passed six or more months earlier. Zidisha stated that "incorporating such a long time lag made the statistics less useful, and so we modified the calculations to include all loans whose final repayment dates have already arrived, even though they have not yet had time to be written off."
The revised repayment rate (not incorporating the time lag) was 89.3% as of August 2013. Zidisha subsequently made its write-off policy stricter, classifying a loan as written off if it is not repaid six months after its due date, or if the borrower has not made any payments for six months. This stricter writeoff policy resulted in a reported writeoff rate of 17.6% of ended loans as of June 29, 2014.
Zidisha defines its repayment and writeoff statistics more conservatively than other microlending websites. For example, Kiva, the world's largest microlending website, reports any loans not yet written off as repaid, even if they are still outstanding with the borrower. Kiva also counts repayments made by field partners to cover borrower defaults as part of its on-time repayment rate.
Some observers argued that Zidisha's writeoff policy is too strict, as it often results in loans that are still actively repaying (but more than six months late) being classified as written off. Zidisha's stated rationale for maintaining such a strict policy is that it would rather err on the side of reporting high write-off rates, so that prospective lenders fully understand the risks of lending through Zidisha; it continues to follow up on written-off loans and returns any repayments collected to lenders.
Recognition
Zidisha has been a finalist and semifinalist for the Echoing Green fellowship for early-stage social enterprises, and is a current Ashoka Fellow nominee.
In 2014 Zidisha became one of the first seven nonprofits to graduate from seed accelerator Y Combinator.
package zin.zedEngine.graphics;
import zin.zedEngine.math.Matrix4f;
import zin.zedEngine.math.Vector3f;
public class Transform {
private Vector3f position, rotation, scale;
public Transform(Vector3f position, Vector3f rotation, Vector3f scale) {
this.position = position;
this.rotation = rotation;
this.scale = scale;
}
public Transform() {
position = new Vector3f();
rotation = new Vector3f();
scale = new Vector3f(1);
}
public Vector3f getPosition() {
return position;
}
public void setPosition(Vector3f position) {
this.position = position;
}
public Vector3f getRotation() {
return rotation;
}
public void setRotation(Vector3f rotation) {
this.rotation = rotation;
}
public Vector3f getScale() {
return scale;
}
public void setScale(Vector3f scale) {
this.scale = scale;
}
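// Composes translation * rotation * scale, so scale is applied first,
// then rotation, then translation.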
public Matrix4f getTransformationMatrix() {
Matrix4f position = new Matrix4f().translate(this.position);
Matrix4f rotation = new Matrix4f().rotate(this.rotation);
Matrix4f scale = new Matrix4f().scale(this.scale);
return position.multiply(rotation).multiply(scale);
}
public void setTransform(Transform transform) {
this.position = transform.position;
this.rotation = transform.rotation;
this.scale = transform.scale;
}
}
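// Minimal usage sketch (hypothetical, assuming the Vector3f and Matrix4f
// APIs referenced above):
// Transform t = new Transform();
// t.setPosition(new Vector3f(0, 1, 0));
// Matrix4f model = t.getTransformationMatrix();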
Check out the all-new clip!
"Monday Night Football" likely had lots of new viewers last night, as Marvel Studios dropped a new trailer to its upcoming "Captain Marvel" blockbuster during halftime of the game between the Washington Redskins and Philadelphia Eagles on ESPN.
The trailer was also posted online, for those who weren't willing to wade through all the sports to get to the comic book-based action. In less than 24 hours, it had already racked up more than 7 million views.
The second trailer fleshes out more of the movie's plot. Oscar-winner Brie Larson plays the titular heroine -- who was once Carol Danvers, an Air Force fighter pilot.
An accident infuses her with alien superpowers -- indeed, her Captain Marvel is just about the most powerful character in the Marvel Comics universe -- but the incident wipes away her past life.
"Your life began the day it nearly ended. We found you with no memory," Annette Bening's character says. "We made you one of us."
"Us," is a Kree -- an alien "race of noble warrior heroes," as Danvers explains to Samuel L. Jackson's Nick Fury after she returns to Earth. There, she explains to him about the intergalactic war between the Kree and the shape-shifting Skrulls, which leaves Earth stuck in the middle.
Danvers is haunted by flashes of her past life on Earth and a sense of duty to protect the planet.
Larson's Captain Marvel is the first female character to front her own movie in the Marvel Cinematic Universe. As teased during the post-credits scene of "Avengers: Infinity War," she could be the only hope humanity has after Thanos' finger-snap wipes out half of all living things in the universe.
The movie, which also stars Jude Law, Ben Mendelsohn and Clark Gregg, opens March 8.
Marvel Studios is owned by ABC News' parent company Disney.
// Copyright (c) 2020 Cesanta Software Limited
// All rights reserved
#include "mjson.h" // JSON parsing and printing
#include "mongoose.h"
// This is a configuration structure we're going to show on a dashboard
static struct config {
int value1;
char *value2;
} s_config = {123, NULL};
// Stringifies the config. A caller must free() it.
static char *stringify_config(struct config *cfg) {
char *s = NULL;
mjson_printf(mjson_print_dynamic_buf, &s, "{%Q:%d,%Q:%Q}", "value1",
cfg->value1, "value2", cfg->value2);
return s;
}
// Update config structure. Return true if changed, false otherwise
static bool update_config(struct mg_http_message *hm, struct config *cfg) {
bool changed = false;
char buf[256];
double dv;
if (mjson_get_number(hm->body.ptr, hm->body.len, "$.value1", &dv)) {
cfg->value1 = dv;  // Use the passed-in config rather than the global
changed = true;
}
if (mjson_get_string(hm->body.ptr, hm->body.len, "$.value2", buf,
sizeof(buf)) > 0) {
free(cfg->value2);  // Use the passed-in config rather than the global
cfg->value2 = strdup(buf);
changed = true;
}
return changed;
}
// Notify all config watchers about the config change
static void notify_config_change(struct mg_mgr *mgr) {
struct mg_connection *c;
char *s = stringify_config(&s_config);
for (c = mgr->conns; c != NULL; c = c->next) {
if (c->label[0] == 'W') mg_http_printf_chunk(c, "%s\n", s);
}
free(s);
}
// HTTP request handler function. It implements the following endpoints:
// /api/config/get - returns current config
// /api/config/set - updates current config
// /api/config/watch - does not return. Sends config as it changes in chunks
// all other URI - serves web_root/ directory
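// Example requests (hypothetical, assuming the default listener on
// http://localhost:8000 configured in main() below):
//   curl http://localhost:8000/api/config/get
//   curl -d '{"value1": 42, "value2": "hello"}' http://localhost:8000/api/config/set
//   curl http://localhost:8000/api/config/watch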
static void cb(struct mg_connection *c, int ev, void *ev_data, void *fn_data) {
if (ev == MG_EV_HTTP_MSG) {
struct mg_http_message *hm = (struct mg_http_message *) ev_data;
if (mg_http_match_uri(hm, "/api/config/get")) {
char *s = stringify_config(&s_config);
mg_printf(c, "HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s\n",
(int) strlen(s) + 1, s);
free(s);
} else if (mg_http_match_uri(hm, "/api/config/set")) {
if (update_config(hm, &s_config)) notify_config_change(fn_data);
mg_printf(c, "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n");
} else if (mg_http_match_uri(hm, "/api/config/watch")) {
c->label[0] = 'W'; // Mark ourselves as a config watcher
mg_printf(c, "HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n");
} else {
struct mg_http_serve_opts opts = {.root_dir = "web_root"};
mg_http_serve_dir(c, ev_data, &opts);
}
}
}
int main(void) {
struct mg_mgr mgr;
mg_mgr_init(&mgr);
mg_http_listen(&mgr, "http://localhost:8000", cb, &mgr);
for (;;) mg_mgr_poll(&mgr, 1000);
mg_mgr_free(&mgr);
return 0;
}
A REVIEW OF FORMULATIONS TO DESIGN AN ADHESIVE SINGLE-LAP JOINT FOR USE IN MARINE APPLICATIONS
The adhesive single-lap joint has many applications in the shipbuilding industry, where it offers the advantage of joining materials (adherents) with different properties and characteristics using an adhesive. However, one disadvantage of this type of joint is the stress concentration at the ends of the overlap, which directly affects the adhesive. Another disadvantage is the possible difference between the coefficients of thermal expansion of the adherents of the joint. Through compilation and classification of the formulas found in various publications, this study presents a state-of-the-art review of the adhesive single-lap joint as used in marine applications. It considers the types of materials used as the adhesive and as the adherents, the possibility of varying the thicknesses of the adherents and of the adhesive, and the recommended design factors for each proposed methodology. This study proposes formulas to estimate the stresses for joints with balanced thicknesses and extrapolates the results to non-balanced joints; an equation is also derived to calculate the minimum overlap length for ship lengthening, simplifying the design process. The results are expected to facilitate the design of single-lap joints in marine applications, such as reinforcing composite panels and lengthening hulls and superstructures.
Introduction
The need to produce adhesive joints between two materials with the same or different characteristics has led to multiple investigations into equations for estimating the stresses in single-lap joints. Single-lap joints can be observed in different marine applications of composite materials, for example:
− Dominguez reviewed the state of the art, presenting different hybrid joints between a steel deck and an FRP (fibre reinforced polymer) superstructure of various sizes. Hybrid adhesive bonding has also been applied by the Kockums shipyard on commercial vessels and military ships.
− In the bonding of a fibre reinforced polymer (FRP) beam, the reinforcement of an FRP composite panel, or a hybrid joint fixing a metal reinforcement to an FRP composite panel, as presented in Fig. 1. In these examples, the FRP stiffener laminate is considered the top adherent, the lower FRP sandwich panel the bottom adherent, and the polyester resin the adhesive.
− In the lengthening of the FRP hull or superstructure of a vessel, where a single-lap joint must be made, which can be a balanced or non-balanced joint, as indicated in Fig. 2.
The adherent materials in these adhesive joints can be steel, stainless steel, aluminium, FRP composite laminate, carbon fibre reinforced polymer (CFRP), bio-composite, or a combination of these. To perform a comprehensive literature review, several research and abstracting databases covering publications from 1938 to 2019 were initially considered. Only peer-reviewed journal articles with novel contributions to the field were critically reviewed. The first research on single-lap joints was undertaken by Volkersen in 1938. Since then, several authors have continued to improve and propose new methodologies for estimating the shear and normal stresses in the adhesive.
Successive investigations have considered the adherents as isotropic, orthotropic, or anisotropic, and the adhesive as isotropic. The stress-strain curve is approximated in a linear or non-linear manner, and the resulting stress formulas for the adhesive can be explicit or implicit. This document reviews the methods developed and proposed for the analysis of single-lap joints, thereby allowing the reader to select the methodology that is most convenient for the marine application at hand. Table 1 presents a classification of these joints based on their configuration and mathematical model and on the formulas proposed by each author. Later, an analysis of each formulation and the variables involved is conducted to provide a general approach for selecting a single adhesive joint. This research compiles the most cited investigations that have contributed to the development of the analysis of single adhesive joints. Volkersen and Goland and Reissner were the first to analyse this type of joint, and their assumptions are still taken as a comparative reference in new research. Table 1 summarizes the classification of the methods proposed by the different authors based on the main considerations of a single-lap joint.
[Table 1 — Summary of calculation methods for a single adhesive joint. Methods with explicit formulation: Volkersen (1938), Goland and Reissner (1944) (*), Hart-Smith (1973), Allman (1977), Bigwood and Crocombe (1989), Oplinger (1991), Zou (2004). Methods with implicit formulation: Renton and Vinson (1973), Ojalvo, Delale (1981), Adams and Mallick (1992). (*) see Clark.]
The comparative analysis undertaken in this review accounts for the aspects presented in Table 2 to make the proposed methodologies more comprehensible.
Table 2 — Description of the specific aspects of the methods for producing single-lap joints:
− Explicit: the method develops a closed solution; that is, the authors provide stress formulas that can be evaluated directly.
− Implicit: the method is not fully developed; that is, the authors express the formulas implicitly or require numerical analysis or additional programs to apply the method.
− Adherent type — Balanced: the upper and lower adherents are of equal thickness and mechanical properties. Non-balanced: the upper and lower adherents are of different thicknesses or different mechanical properties.
− Material of adherent and adhesive — Isotropic: the material retains the same properties in all directions. Orthotropic: the material has defined properties in three directions. Anisotropic: the material has defined properties in all directions.
− Adhesive behaviour — Elastic: for the stress analysis, the adhesive deformation remains in the elastic zone of the stress-strain curve. Plastic: the adhesive deformation reaches the plastic zone of the stress-strain curve.
− Adhesive model — Linear or non-linear: the methodology assumes linear or non-linear behaviour of the adhesive in its mathematical approach, variables, and assumptions.
− Adhesive effective length: each author proposes a formula to estimate or recommend the length of the adhesive for the single-lap joint geometry.
In Fig. 3, the following four configurations for single-lap joints are shown:
− Option a: Classic joint with orthogonal vertices at the ends of the adherents and the adhesive.
− Option b: Joint with rounded vertices at the ends of the adherents and orthogonal vertices at the ends of the adhesive.
− Option c: Joint with short bevelled vertices at the ends of the adherents and orthogonal vertices at the ends of the adhesive.
− Option d: Joint with long bevelled vertices at the ends of the adherents and orthogonal vertices at the ends of the adhesive.
Option a is typically used in most single adhesive joints and is described in the methods listed in section 2.3. Options b and c allow the reduction of the maximum shear and normal stresses generated at the ends of the adhesive, but their mathematical development is complex; therefore, finite element analysis (FEA) is recommended for designing this type of joint (Calik; Lloyd's Register). Option d is mostly used for composite adherents with staggered laminate layers at the ends of the adherents. Oterkus investigated this type of overlapping joint, proposing a semi-analytical method that takes into account the linear and bilinear elastic behaviour of the adhesive and the linear behaviour of the adherents. As a result, he obtained a system of non-linear equations for shear and normal stresses, solved by an iterative procedure using the Newton-Raphson method together with Broyden's Jacobian matrix. Fig. 4 shows Oterkus's results, whereby the shear and normal stresses decrease as the size of the bevel on the adherent ends increases. For this case, Lloyd's Register recommends using staggered bevels, as shown in Fig. 8.
The materials used in a single-lap joint can vary depending on the intended application. In the case of composite materials, adherents can be considered isotropic, orthotropic, or anisotropic, depending on the methodology applied for the analysis.
Adherents
The materials, and their combinations, that have been used as adherents are presented in Table 3. Metals are common adherents, and their mechanical properties depend on the type of alloy used. For composite materials, the properties depend on the type of resin (polyester, vinyl ester, or epoxy) and the type of fibre used. Composite materials can be grouped as orthotropic or anisotropic based on their laminate; however, in the explicit methods, adherents are considered isotropic. In adhesive lap joints with metallic adherents, the first failure is expected to occur in the adhesive and then in the adherent; for joints with laminated composite adherents, the first failure is expected to appear in the adherent. In the case of a joint between FRP composite adherents with polyester resin as the adhesive, the behaviour of the stress-strain curve of the adherents must be taken into account because, after the elastic deformation, the joint will fail by delamination.
Adhesives
In most of the investigated methods for the single-lap joint, the behaviour of the adhesive is approximated as isotropic-elastic.
Banea presented a table summarizing the typical properties of the different types of adhesives, including the epoxy, anaerobic or silicone, and polyurethane types, among others. Hart-Smith emphasized the importance of including, in the calculations, the estimation of the stresses obtained in the plastic region of the adhesive. The typical stress-strain behaviour of an adhesive and the equivalent linear and bilinear curves are presented in Fig. 6. The hatched sections correspond to the proposed method of finding the equivalence between the energy density of the typical nonlinear characteristic curve and that of the linear or bilinear curve. Hart-Smith concluded that the complexity of the bilinear representation of the adhesive leads to results quite close to those obtained with the linear estimation, provided that the same equivalent adhesive energy density is maintained. Fig. 7 shows an example of the distribution of shear and normal stresses in the adhesive, considering elastic-plastic behaviour, whereby the length of the adhesive is divided into three sections: a central elastic zone and two plastic zones at either end. This detail is important because the larger the plastic zone, the greater the possibility of a normal stress increase, which may lead to cracking in the adhesive. For the length l over which the adhesive is considered perfectly plastic (d = 0), the shear stress is uniform and equal to the average value. Hart-Smith's recommendations and the formulas for estimating the length of the adhesive are explained in section 2.4.1.
Considerations for modelling joint behaviour
Volkersen investigated the behaviour of the single-lap joint, in which the balanced adherents and the adhesive were considered elastic and isotropic materials. This investigation did not consider deformations in the adherents or bending moments generated in the joint by eccentricity. The linear mathematical model initially proposed by Volkersen only considered the shear force on the adhesive, with maximum values at its ends and a minimum at the halfway point. Goland and Reissner developed formulas to estimate the shear and normal stresses of an adhesive single-lap joint. They considered the deformations occurring in the adherents to be relatively small in comparison with those produced in the adhesive, and attributed the adherent deformations to the cylindrical bending generated by the flexural moment formed by the eccentricity of the applied load. Adherents and adhesives are considered perfectly elastic. This study resulted in a linear method that applies only to thin adhesives and to balanced joints, that is, those with the same geometry and properties. Hart-Smith analysed a single-lap joint considering both linear-elastic and plastic adhesive behaviour. The plastic zone of the adhesive bond is considered over the range (l − d); see Fig. A.3. With this assumption, Hart-Smith validated the theoretical results against experimental results. Furthermore, this analysis found that the maximum shear and normal stresses occur at the ends of the adhesive joint, while the lowest stresses occur in the middle, concluding that an exaggerated increase in the length of the adhesive bond does not reduce stress because the load is transferred along an effective length. Allman based his investigation on the research of Goland and Reissner, which considered the linear-elastic behaviour of an adhesive on a balanced joint.
This author proposed estimating the stresses based on the non-deformed geometry of the single-lap joint, considering the adherents and the adhesive as isotropic materials, which allows both metallic and composite adherents to be analysed. This method assumes that the shear stresses do not vary across the thickness of the adhesive, while the normal stresses do. Bigwood and Crocombe investigated the estimation of shear and normal stresses in a single-lap joint considering the adhesive as linear-elastic. For their mathematical analysis, they considered the length of the adhesive with its ends subjected to tensile, shear, and moment loads; see Fig. A.6. The adherents and adhesive were considered isotropic materials, and the adherents could be unbalanced. Oplinger proposed formulas to estimate the stresses of the single-lap joint. Like other authors, Oplinger based this work on that of Goland and Reissner, but in these formulas the upper and lower adherents work independently. Adherents and adhesive were analysed as elastic isotropic materials. This methodology allows the estimation of the shear and normal stresses for thin adherents. Oplinger obtained results similar to those of Goland and Reissner for thick adherents; however, greater differences were found in the estimations for thin adherent thicknesses. Lastly, Zou, when analysing the single-lap joint, defined the adhesive as having homogeneous, isotropic, and linear-elastic behaviour and required the adherents to be balanced. The explicit and implicit methods listed in Table 1 are published formulas to calculate the adhesive stresses in single-lap joints; other papers are dedicated to validating these methods with FEA or experimentation.
2.4. Summary of formulas
2.4.1. Adhesive length
Clark, taking as reference the formulas proposed by Goland and Reissner, recommended an adhesive length based on the parameter l/t, the adhesive shear stress τa, and the average adherent stress σavg. Renton and Vinson recommended estimating the length of the adhesive using the approximate ratio l/t = 10: if this ratio is greater than 10, failures in the adherent are expected; if it is less than 10, failure in the adhesive is expected. Oplinger proposed a detailed formula based on geometric and mechanical properties.
[Table 4 — Recommended adhesive lengths: Clark (after Goland and Reissner); Renton and Vinson (l/t = 10); Oplinger.] The variables in Table 4 are defined in the appendix.
Moment due to the eccentricity of the adherents
In the single-lap joint, applying a load to the adherents generates a moment of eccentricity between the axis of the adhesive and the axis of the applied load. This moment then generates deformations in the adhesive and the adherents as well as shear and normal stresses. Because of this condition, for the estimation of the normal and shear stresses in the adhesive, the moment generated by the adherent must be multiplied by the eccentricity factor k. Goland and Reissner were the first to estimate the eccentricity factor k for a balanced joint, involving the properties of isotropic adherents such as thickness, elastic modulus, Poisson's ratio, and applied load. Hart-Smith retained the formula proposed by Goland and Reissner, modifying the eccentricity factor so that it could be used for unbalanced adhesive joints. Therefore, in this case, two eccentric moments are generated, one at either end of the adherents (k1 and k2). Oplinger then modified the eccentricity factor formula to obtain a closer approximation. His estimation included the adhesive thickness tb and adhesive shear modulus Gb through the parameter R. In contrast to the formula devised by Goland and Reissner, in Oplinger's formula the greatest differences in the eccentricity factor k arise for thin adherent thicknesses. Zhao proposed an estimation of the bending moment generated by the eccentricity, assuming that the adhesive deforms only at its ends. The proposed formula works for balanced or unbalanced single joints with an adhesive length between 25 and 50 mm and thin adherent thicknesses (steel < 4 mm, aluminium < 6 mm); Zhao gives formulas for the eccentric moments in the upper and lower adherents of unbalanced joints and a simplified formula for balanced joints. Table 5 presents a summary of the eccentricity factors proposed in the stated methods.
Adhesive stresses
Tables 6 and 7 summarize the formulas for calculating the maximum shear and normal stresses at the ends of the adhesive. Renton and Vinson graphically presented the distribution of the normal and shear stresses (transverse and longitudinal) in the adherents of a single-lap joint, with the maximum stresses expected at the ends of the adherents. Their work also showed that the variations of the stresses through the thickness of the adherent are higher and are generated in the upper or lower part of the ends of the adherents.
[Table 6 — Maximum shear stress at the adhesive ends, by method: Goland and Reissner, Hart-Smith (*), Oplinger, Zou.] (*) Hart-Smith proposed his shear stress formula for use only with balanced adherents, while his normal stress formula was developed for balanced and unbalanced adherents.
Formulas for ship applications
The adhesive single-lap joint is mostly used in shipbuilding processes to lengthen the hull or superstructure or to reinforce FRP panels. Depending on the case, it is first necessary to estimate the forces or the eccentricity moment applied to each joint, and then to estimate the minimum length of the adhesive and the interlaminar stresses. Table 8 shows the recommended values of the interlaminar design stresses of a reinforced laminate panel. Considering a limiting stress fraction of 0.33 and based on the ultimate strength SU, the FRP panel design stress is estimated as SD = 0.33·SU. A guide to the limiting stress fraction is proposed in the literature; Clark suggests a safety factor within an interval depending on the laminate factors.
[Table 7 — Maximum normal stress at the adhesive ends, by method: Goland and Reissner, Oplinger, and others; footnote as for Table 6.]
For adhesive joint applications with composite adherents, when joining a stiffener or beam to panels, Lloyd's Register recommends a minimum staggered length for the adhesive joint, where tf is the thickness of the upper adherent and n is the number of layers. In any case, the number of layers of tf depends on the laminate of the stiffener or the girder.
Minimum joint length for lengthening
The lengthening can be applied to a hull or a superstructure; see Fig. 9. In both cases, it is necessary to determine the forces applied in the joint.
These forces can be estimated from the bending stresses and moments, as indicated in the following:
− Hull lengthening: The bending stress is calculated from the hull girder stress analysis following naval architecture practice; it is first necessary to obtain the weight distribution curve to estimate the still-water bending moment. Then, the wave bending moment is obtained following the classification societies' formulas. The total bending moment Mf is the sum of the two. Once the lengthening section modulus has been defined, the bending stress σavg is obtained using formula (8b).
− Superstructure lengthening: The bending stress calculation is like that used for hull lengthening, with the difference that the superstructure section modulus should be used.
Finally, the estimation of the minimum joint length for the lengthening (l) is developed from the formula for a double-lap joint length, due to the symmetry of the joint at the lengthening, where σavg is the bending stress, SM is the section modulus, τa is the design shear stress of the adhesive, τa-yield is the yield shear stress of the adhesive, t is the panel thickness, and fs is the safety factor. Once the length of the overlapping joint of the respective lengthening is estimated, it is advisable to use the quadratic interlaminar failure criterion to validate the design.
Relationship to estimate the normal stress of a non-balanced joint
Once the normal stress of the balanced joint is known (Table 7), a relationship is proposed to estimate the normal stress for a non-balanced joint. An application of the adhesive joint is presented in Dominguez, who developed a methodology for a hybrid bond with FRP laminated tubular reinforcement that allows bonding between FRP panels and steel decks.
Yacht hull lengthening
The study of the joint for the lengthening of a diving yacht built by hand layup using polyester resin and type E fibreglass is presented below, as shown in Fig. 9. In this application, the adhesive joints are considered balanced on the hull bottom, hull side, and deck. The yacht's main data are shown in Table 9 and the laminate properties in Table 10. Lloyd's Register suggests 13.8 N/mm² as the yield shear strength τa-yield and a safety factor fs of 3. From Fig. A.9, the maximum still-water bending moment MS is 1002 kN·m. The maximum vertical wave bending moment MW of 2571 kN·m is calculated using the formulas of Lloyd's Register, giving a total bending moment Mf of 3573 kN·m. Fig. 10 and Table 11 show the amidships section of the diving yacht, for which a section modulus SM of 0.502 m³ at the hull bottom and 0.611 m³ at the deck are calculated; the smaller modulus is used in the calculations. In Table 13, the estimation of the minimum joint length for hull lengthening is presented using the formulas of Clark shown in Table 4. The maximum of the values in Tables 12 and 13 is selected: 0.696 m. The adhesive joint for superstructure lengthening must be estimated with the same procedure used for the hull.
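As a quick numerical check of the yacht example, assuming the usual definitions that the design shear stress is the yield shear stress divided by the safety factor and that the hull-girder bending stress is the total bending moment over the section modulus (both definitions are assumptions here, since the source equations are not reproduced):

$$\tau_a = \frac{\tau_{a\text{-}yield}}{f_s} = \frac{13.8}{3} = 4.6\ \mathrm{N/mm^2}, \qquad \sigma_{avg} = \frac{M_f}{SM} = \frac{3573\ \mathrm{kN\,m}}{0.502\ \mathrm{m^3}} \approx 7.1\ \mathrm{MPa}.$$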
Features influencing the single-lap joint in marine applications
The various studies covered in this review propose formulas that show good applicability for use in the industry. However, in designing an adhesive single-lap joint for marine applications, the following aspects should be considered: eccentricity moment, adherent thickness, adhesive length, adherent properties, and adhesive strength.
Eccentric moment
Hart-Smith presented a formula for the eccentricity factor k that is easy to apply and useful for different adherent thicknesses. Oplinger proposed a formula for the eccentricity factor k that is more accurate when compared with FEA calculations; however, it applies only to adherents of the same thickness. Goland and Reissner were the first to introduce a formula for the eccentricity factor k, but it can only be applied to balanced joints with thin adherents. Bigwood and Crocombe and Zou did not directly consider the eccentricity factor k in their calculations.
Adherent thickness
In marine applications, the variability of the adherent thicknesses is important. To address this variability, the methods proposed by Hart-Smith, Bigwood and Crocombe, and Zou provide formulas to calculate the shear and normal stresses in the adhesive. The other methodologies were proposed for balanced joints.
Adhesive length
In marine applications, only the formulations for the elastic behaviour of the adhesive are considered; therefore, the loads and their direction in the lap joint must be identified. In this context, three applications are identified: the reinforcement laminate on an FRP panel, the joint between two panels in a hull lengthening, and the joint in a superstructure lengthening. Regarding the bonding of reinforcement to an FRP panel, Renton and Vinson made some recommendations for the relationship between joint length and adherent thickness; however, Lloyd's Register proposes a greater adhesive length in this case. Both recommend bevelling the adherent ends to decrease the respective stresses. Oplinger provided a formula for calculating the effective length; its adhesive length values approach the Lloyd's Register formula as the adherent thickness increases. Regarding the joint between two panels in a hull lengthening, the formula proposed in section 3.2 is useful for evaluating the adhesive length with equal or unequal adherent thicknesses. Alternatively, for equal adherent thicknesses, the formula proposed by Clark can be used. When these two alternative formulas are used, the required adhesive joint length is to be the greater of the calculated values. A superstructure lengthening is a case of hull lengthening; the particularity is that the adherents are more likely to be of different thicknesses, because the joint may be between two sandwich panels or between a sandwich panel and a single laminate. The joint lengths calculated in the diving-yacht application show the importance of considering the bending stresses of the hull girder, which are not considered by Clark. A limitation when trying to lengthen a steel hull using FRP adhesive joints in the middle section is the difference between the moduli of elasticity of the two materials, since it generates different elongations in the hull girder that will cause structural fractures by corrosion. Lengthening with FRP panels can be applied at one end of the hull, provided that the hybrid joint methodology is used, taking into account the difference in the mentioned elongations.
Adherent properties
When joining an FRP reinforcement to an FRP panel, it is important to keep in mind the difference between the equivalent mechanical properties of the adherents, because the adhesive stresses obtained with different adherent thicknesses are less favourable than those obtained with adherents of equal thickness.
Adhesive stress
The formulas developed by Hart-Smith, Bigwood and Crocombe, and Zou yield very similar values for normal stresses. Zou's formulas can calculate the stresses in the adhesive only for equal adherent thicknesses. Hart-Smith's formula allows the normal stresses to be calculated for different adherent thicknesses, whereas his shear stresses can be calculated only for balanced joints. The Bigwood and Crocombe formula has the advantage of allowing both normal and shear stresses to be calculated for different adherent thicknesses. In all the reviewed formulas, the maximum stresses occur at the ends of the adhesive joint.
Concluding remarks
In the previous sections, the researchers considered the adhesive of a single-lap joint as an isotropic material. Furthermore, thirty per cent of the proposed methodologies considered the adherents as orthotropic or anisotropic in their analysis. Due to the mathematical complexity involved, some authors who used anisotropic materials recommended performing a numerical analysis or FEA to complete the solution. The authors who included nonlinearity in their analysis concluded that their results are closer to those obtained in experimental tests. The work of Hart-Smith contributed significantly to the utility of such formulas for single-lap joints, since this author considered the importance of accounting for the plastic region of the adhesive. This is necessary because single-lap joints are subjected to large deformations, so the shear stresses generated could exceed the elastic limit, in which case a purely elastic formula would give erroneous results. Another practical conclusion, shown by Hart-Smith and Oterkus, is that a staggered bevel at the ends of an adhesive joint diminishes the normal stress at the ends. This bevel is recommended by Lloyd's Register for bonding stiffeners to an FRP panel and for hull or superstructure lengthening. The proposed equations for estimating the minimum joint length and the stresses in the adhesive, respectively, are recommended for preliminary design. The resulting adhesive stresses are useful for estimating the interlaminar stresses of the first adjacent laminate layer; however, to complete the FRP panel design, the interlaminar stresses of all laminate layers should be analysed. That last stage is beyond the scope of this study. To avoid high interlaminar stresses at the ends of the adhesive joint in a lengthening, it is necessary to make a stepped bevel of ≈ 100 for each layer. The adhesive joint has been considered to have the same thickness as the hull or superstructure; however, to avoid osmosis effects in the adhesive joint, it is necessary to seal the opposite side with at least 3 layers of isophthalic NPG resin.
CONFLICTS OF INTEREST: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix
The formulas for the shear stress distribution are given by Volkersen. The moment for a balanced joint generated by the eccentricity of the applied loads is estimated with the factor k1 indicated in Table 5.
The parameters for the calculation of the balanced joint shear stress, indicated in Table 6, are given by Goland and Reissner; a parameter i is used for both balanced and unbalanced adherents. The average shear and normal stresses are estimated with t the weakest adherent thickness, P the load per unit length, and kb the stiffness coefficient of the material, which is 1 for isotropic materials. Bigwood simplified the equations of motion based on the consideration that the variations of the shear and normal stresses along the joint are small, leading to the governing differential equations. The formulas used to estimate the distribution of the shear stress are given by Oplinger. The parameters used to calculate the coefficient kn, mentioned in Table 5 and extended to any adherent thickness, are also given there; for adherent thicknesses much greater than the adhesive thickness (t >> tb), the coefficient kn can be simplified. The necessary parameters to estimate the effective adhesive length are mentioned in Table 4. The formulas used to estimate the shear and normal stress are given by Zou.
A.2.1 Renton and Vinson method
Renton and Vinson developed a system of equations that allows the estimation of the adhesive normal and shear stresses of a balanced or unbalanced single joint, considering the adherent material as an anisotropic composite and the adhesive as isotropic. The methodology is based on the method developed by Goland and Reissner, who used linear behaviour theory to estimate the loads at the endpoints, and solves an eighth-order ordinary linear differential equation to estimate the shear and normal stresses. In addition, the method uses experimental tension and fatigue tests to determine the failure behaviour in the laminate of the single-lap joint. Renton and Vinson recommend the following:
− To reduce stress peaks, care must be taken to maintain a similar planar stiffness of the adherents.
− A single joint is more efficient if the elastic modulus of the adhesive is smaller than that of the adherents.
− The adhesive failure is independent of the length of the adhesive and is only weakly related to the thickness of the adhesive.
− The strength of the joint can be improved by increasing the thickness of the adhesive at its ends.
A.2.2 Ojalvo method
Ojalvo focused on analysing the influence of the adhesive thickness on the estimation of the shear and normal stresses. His research was based on the approach of Goland and Reissner, but he modified the differential equation and used three assumptions related to the behaviour of the single joint to define the methodology. He concluded that the thickness of the adhesive is important in the estimation of the stresses, mainly for the maximum values generated at its ends, because when the effect of the adhesive thickness is included in the calculations, the shear stress increases and the normal stress decreases.
A.2.3 Delale method
Delale developed a methodology for single-lap joints with balanced adherents. This methodology is applied for linear-elastic analyses, considering:
− The adherents are orthotropic plates, analysed including the transverse shear stresses.
− The adhesive is a linear-elastic material.
− The stress variation in the adhesive thickness direction is negligible.
− The deformations of the adhesive in the z-direction are zero, and only coplanar deformations are considered.
A.2.4 Adams and Mallick method
Adams and Mallick analysed a single joint subjected to thermal stress loads.
This methodology is applied to non-balanced adherents, in which the adhesive is considered a unidirectional anisotropic material for non-linear analysis. The adherents are analysed as plates in bending, while the adhesive is modelled as a series of tension and shear springs. Starting from the theory of two-dimensional elasticity, these authors developed implicit formulas for calculating the shear and normal stresses in the upper and lower parts of the adhesive. These formulas include terms for the effects of bending, shear, and hygrothermal deformation in the adherents and the adhesive.
A.2.5 Tong method
Tong assumed in his investigation that the adhesive has a non-linear stress-strain behaviour while the adherents behave linear-elastically. Normal and shear deformations in the adhesive are constant through the adhesive thickness. The adherent-adhesive-adherent sandwich model is used to predict the strength of the joint, only for balanced adherents. This author also explains that the product of the strain energy density and the thickness of the adhesive is equal to the energy release rate for fracture failure modes.
A.2.6 Smeltzer method
The method proposed by Smeltzer allows an evaluation of the distribution of normal and shear stresses along the adhesive. In its analysis, this method considers the adherent plates as anisotropic and linear-elastic, and the adhesive as isotropic and nonlinear (elastic and plastic), behaving in cylindrical bending under a plane-strain condition. This author presented both linear and non-linear examples and compared the results of his method with those of Goland and Reissner and of Bigwood and Crocombe, obtaining lower maximum normal and shear stresses.
// wrapns_outer.cpp
// This file is generated by Shroud nowrite-version. Do not edit.
// Copyright (c) 2017-2021, Lawrence Livermore National Security, LLC and
// other Shroud Project Developers.
// See the top-level COPYRIGHT file for details.
//
// SPDX-License-Identifier: (BSD-3-Clause)
//
#include "wrapns_outer.h"
// cxx_header
#include "namespace.hpp"
// splicer begin namespace.outer.CXX_definitions
// splicer end namespace.outer.CXX_definitions
extern "C" {
// splicer begin namespace.outer.C_definitions
// splicer end namespace.outer.C_definitions
// ----------------------------------------
// Function: void One
// Attrs: +intent(subroutine)
// Exact: c_subroutine
void NS_outer_one(void)
{
// splicer begin namespace.outer.function.one
outer::One();
// splicer end namespace.outer.function.one
}
} // extern "C"
package users
import (
"errors"
"github.com/GeilMail/geilmail/helpers"
)
var (
ErrNotFound = errors.New("no record found")
ErrInternal = errors.New("internal error")
)
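// New creates a user record for mailAddr, storing only a hash of the
// password rather than the password itself.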
func New(mailAddr helpers.MailAddress, password string) error {
pwHash, err := HashPassword([]byte(password))
if err != nil {
return err
}
u := User{
Mail: string(mailAddr),
PasswordHash: pwHash,
}
err = db.Insert(&u)
if err != nil {
return err
}
return nil
}
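// CheckPassword reports whether pw matches the stored password hash for
// mailAddr.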
func CheckPassword(mailAddr helpers.MailAddress, pw []byte) bool {
u := &User{}
err := db.SelectOne(u, "SELECT passwordHash FROM users WHERE mail = ?;", string(mailAddr))
if err != nil {
return false
}
return checkPassword(pw, u.PasswordHash)
}
// AllDomains retrieves all active domains that have mailboxes.
func AllDomains() (domains []string, err error) {
var addrs []string
_, err = db.Select(&addrs, "SELECT mail FROM users;")
if err != nil {
return
}
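// Collect unique domain parts into a set.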
mSet := map[string]struct{}{}
for _, ad := range addrs {
dp, err := helpers.MailAddress(ad).DomainPart()
if err != nil {
return nil, err
}
mSet[dp] = struct{}{}
}
for ad := range mSet {
domains = append(domains, ad)
}
return
}
Accelerated Processing for Maximum Distance Separable Codes using Composite Extension Fields
This paper describes a new design of Reed-Solomon (RS) codes using composite extension fields. Our ultimate goal is to provide codes that remain Maximum Distance Separable (MDS) but can be processed at higher speeds in the encoder and decoder. This is possible by using coefficients in the generator matrix that belong to the smaller (and faster) finite fields of the composite extension and limiting the use of the larger (and slower) finite fields to a minimum. We provide formulae and an algorithm to generate such constructions starting from a Vandermonde RS generator matrix, and show that even the simplest constructions, e.g., using processing in only two finite fields, can speed up processing by as much as two-fold compared to Vandermonde RS and Cauchy RS while using the same decoding algorithm, and by more than two-fold compared to other Cauchy RS and FFT-based RS designs.
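For reference, the Vandermonde generator matrix mentioned above takes the following standard form (the evaluation-point notation is chosen here, not taken from the paper): over GF($q$), with distinct evaluation points $\alpha_1,\dots,\alpha_n$,

$$G=\begin{pmatrix}1&1&\cdots&1\\ \alpha_1&\alpha_2&\cdots&\alpha_n\\ \vdots& & &\vdots\\ \alpha_1^{k-1}&\alpha_2^{k-1}&\cdots&\alpha_n^{k-1}\end{pmatrix},$$

and any $k$ of its columns are linearly independent, which is what makes the resulting code MDS.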
/*
* Copyright 2018 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <DeepSea/Geometry/BezierCurve.h>
#include <DeepSea/Core/Assert.h>
#include <DeepSea/Core/Error.h>
#include <DeepSea/Math/Matrix44.h>
#include <DeepSea/Math/Vector4.h>
// Left and right subdivision matrices from http://algorithmist.net/docs/subdivision.pdf
static const dsMatrix44d leftBezierMatrix =
{{
{1.0, 0.5, 0.25, 0.125},
{0.0, 0.5, 0.5 , 0.375},
{0.0, 0.0, 0.25, 0.375},
{0.0, 0.0, 0.0 , 0.125}
}};
static const dsMatrix44d rightBezierMatrix =
{{
{0.125, 0.0 , 0.0, 0.0},
{0.375, 0.25, 0.0, 0.0},
{0.375, 0.5 , 0.5, 0.0},
{0.125, 0.25, 0.5, 1.0}
}};
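// Weights of the cubic Bernstein basis at t = 0.5; dotting these with the
// control points yields the curve midpoint.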
static const dsVector4d bezierMid = {{0.125, 0.375, 0.375, 0.125}};
static bool isBezierStraight(const dsBezierCurve* curve, double chordalTolerance)
{
// Check to see if the midpoint is within the chordal tolerance.
double dist2 = 0.0;
for (uint32_t i = 0; i < curve->axisCount; ++i)
{
double midCurve = dsVector4_dot(curve->controlPoints[i], bezierMid);
double midLine = (curve->controlPoints[i].x + curve->controlPoints[i].w)*0.5;
double diff = midCurve - midLine;
dist2 += dsPow2(diff);
}
return dist2 <= dsPow2(chordalTolerance);
}
static bool tessellateRec(const dsBezierCurve* curve, double chordalTolerance,
uint32_t maxRecursions, dsCurveSampleFunction sampleFunc, void* userData, double t,
uint32_t level)
{
// Left side.
double middlePoint[3];
dsBezierCurve nextCurve;
nextCurve.axisCount = curve->axisCount;
for (uint32_t i = 0; i < curve->axisCount; ++i)
{
dsMatrix44_transform(nextCurve.controlPoints[i], leftBezierMatrix, curve->controlPoints[i]);
middlePoint[i] = nextCurve.controlPoints[i].w;
}
if (level < maxRecursions && !isBezierStraight(&nextCurve, chordalTolerance))
{
if (!tessellateRec(&nextCurve, chordalTolerance, maxRecursions, sampleFunc, userData, t,
level + 1))
{
return false;
}
}
// The middle point is guaranteed to be on the curve.
double middleT = t + 1.0/(double)(1ULL << level);
if (!sampleFunc(userData, middlePoint, curve->axisCount, middleT))
return false;
// Right side.
for (uint32_t i = 0; i < curve->axisCount; ++i)
{
dsMatrix44_transform(nextCurve.controlPoints[i], rightBezierMatrix,
curve->controlPoints[i]);
}
if (level < maxRecursions && !isBezierStraight(&nextCurve, chordalTolerance))
{
if (!tessellateRec(&nextCurve, chordalTolerance, maxRecursions, sampleFunc, userData, middleT,
level + 1))
{
return false;
}
}
return true;
}
bool dsBezierCurve_initialize(dsBezierCurve* curve, uint32_t axisCount,
const void* p0, const void* p1, const void* p2, const void* p3)
{
if (!curve || axisCount < 2 || axisCount > 3 || !p0 || !p1 || !p2 || !p3)
{
errno = EINVAL;
return false;
}
curve->axisCount = axisCount;
for (uint32_t i = 0; i < axisCount; ++i)
{
curve->controlPoints[i].x = ((const double*)p0)[i];
curve->controlPoints[i].y = ((const double*)p1)[i];
curve->controlPoints[i].z = ((const double*)p2)[i];
curve->controlPoints[i].w = ((const double*)p3)[i];
}
return true;
}
bool dsBezierCurve_initializeQuadratic(dsBezierCurve* curve, uint32_t axisCount,
const void* p0, const void* p1, const void* p2)
{
if (!curve || axisCount < 2 || axisCount > 3 || !p0 || !p1 || !p2)
{
errno = EINVAL;
return false;
}
// https://stackoverflow.com/questions/3162645/convert-a-quadratic-bezier-to-a-cubic
curve->axisCount = axisCount;
const double controlT = 2.0/3.0;
for (uint32_t i = 0; i < axisCount; ++i)
{
double start = ((const double*)p0)[i];
double control = ((const double*)p1)[i];
double end = ((const double*)p2)[i];
curve->controlPoints[i].x = start;
curve->controlPoints[i].y = start + (control - start)*controlT;
curve->controlPoints[i].z = end + (control - end)*controlT;
curve->controlPoints[i].w = end;
}
return true;
}
bool dsBezierCurve_evaluate(void* outPoint, const dsBezierCurve* curve, double t)
{
if (!outPoint || !curve)
{
errno = EINVAL;
return false;
}
if (t < 0.0 || t > 1.0)
{
errno = ERANGE;
return false;
}
DS_ASSERT(curve->axisCount >= 2 && curve->axisCount <= 3);
double invT = 1.0 - t;
for (uint32_t i = 0; i < curve->axisCount; ++i)
{
((double*)outPoint)[i] =
dsPow3(invT)*curve->controlPoints[i].x +
3.0*dsPow2(invT)*t*curve->controlPoints[i].y +
3.0*dsPow2(t)*invT*curve->controlPoints[i].z +
dsPow3(t)*curve->controlPoints[i].w;
}
return true;
}
bool dsBezierCurve_evaluateTangent(void* outTangent, const dsBezierCurve* curve, double t)
{
if (!outTangent || !curve)
{
errno = EINVAL;
return false;
}
if (t < 0.0 || t > 1.0)
{
errno = ERANGE;
return false;
}
DS_ASSERT(curve->axisCount >= 2 && curve->axisCount <= 3);
double invT = 1.0 - t;
for (uint32_t i = 0; i < curve->axisCount; ++i)
{
((double*)outTangent)[i] =
3.0*dsPow2(invT)*(curve->controlPoints[i].y - curve->controlPoints[i].x) +
6.0*invT*t*(curve->controlPoints[i].z - curve->controlPoints[i].y) +
3.0*dsPow2(t)*(curve->controlPoints[i].w - curve->controlPoints[i].z);
}
return true;
}
bool dsBezierCurve_tessellate(const dsBezierCurve* curve, double chordalTolerance,
uint32_t maxRecursions, dsCurveSampleFunction sampleFunc, void* userData)
{
if (!curve || chordalTolerance <= 0.0 || maxRecursions > DS_MAX_CURVE_RECURSIONS || !sampleFunc)
{
errno = EINVAL;
return false;
}
DS_ASSERT(curve->axisCount >= 2 && curve->axisCount <= 3);
double endPoint[3];
// First point.
for (uint32_t i = 0; i < curve->axisCount; ++i)
endPoint[i] = curve->controlPoints[i].x;
if (!sampleFunc(userData, endPoint, curve->axisCount, 0.0))
return false;
// Subdivide the bezier: http://algorithmist.net/docs/subdivision.pdf
// Don't check chordal tolerance for the first point since it might pass through the center
// line.
if (maxRecursions > 0)
{
if (!tessellateRec(curve, chordalTolerance, maxRecursions, sampleFunc, userData, 0.0, 1))
return false;
}
// Last point.
for (uint32_t i = 0; i < curve->axisCount; ++i)
endPoint[i] = curve->controlPoints[i].w;
return sampleFunc(userData, endPoint, curve->axisCount, 1.0);
}
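/* Minimal usage sketch (hypothetical; "printSample" stands for a
 * user-supplied dsCurveSampleFunction and is not part of this file):
 *   double p0[2] = {0.0, 0.0}, p1[2] = {0.0, 1.0};
 *   double p2[2] = {1.0, 1.0}, p3[2] = {1.0, 0.0};
 *   dsBezierCurve curve;
 *   dsBezierCurve_initialize(&curve, 2, p0, p1, p2, p3);
 *   dsBezierCurve_tessellate(&curve, 1e-3, 10, printSample, NULL);
 */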