The circularity of moral exemplarity In her Exemplarist Moral Theory, Linda Zagzebski argues that we can empirically discover the meaning of moral terms like virtue and the good life by direct reference to moral exemplars: those people we admire as morally exceptional. Her proposal is promising, because moral exemplars play an important motivating role in moral education, and her use of direct reference means we may be able to avoid the contentious descriptivism that accompanies moral terms like good and virtue. In this article, I argue that Zagzebski's theory fails, because her direct reference method must rely on presupposed descriptions and therefore leads to a circular identification of moral exemplars.
Coral: Conversational What-If Process Analysis (Extended Abstract) Process simulation is used to estimate the performance of a process under hypothetical (what-if) scenarios, for example, to estimate what the cycle time would be if one of its tasks were automated. Despite the relevance of process simulation as a tool for planning business process improvements, its adoption is hampered by the fact that the specification of what-if scenarios requires technical knowledge. This paper presents Coral, a chatbot that allows business users to specify what-if scenarios in a conversational manner. Coral takes as input a process simulation model and enables users to interactively configure what-if scenarios in business terms. Coral also allows users to compare the performance of the process under a what-if scenario against its performance under a baseline scenario.
Secondary mixed phenotype acute leukemia following chemotherapy for diffuse large B-cell lymphoma: a case report and review of the literature. Therapy-related mixed phenotype acute leukemia (MPAL) following non-Hodgkin's lymphoma (NHL) is extremely rare. We present here the case of an elderly man diagnosed with diffuse large B-cell lymphoma (DLBCL) through a tonsil biopsy. After treatment with seven cycles of a CHOP-like regimen (cyclophosphamide, doxorubicin, vincristine and prednisolone), the patient progressed to MPAL (B/myeloid) with del(q22), t(6;9)(p23;q34), DEK/NUP214 fusion, as well as EZH2 and TET2 mutations. The patient was successively treated with chemotherapy and allogeneic hematopoietic stem cell transplantation. He remains alive without relapse more than 23 months later.
Electrochemical synthesis and characterisation of polyaniline in TiO2 nanotubes Abstract The authors have developed a route to deposit polyaniline within the nanotubes of a TiO2 film by a sequence of electrochemical steps. This was achieved by anodic pretreatment of an anatase TiO2 film in an acidic solution containing aniline, followed by potential cycling in an aqueous solution of HClO4 with aniline. Scanning electron microscopy, Raman spectroscopy and cyclic voltammetry were employed to characterise the deposition and properties of the TiO2 nanotube/polyaniline composite. The Raman spectra indicate that the polymer is in the conducting emeraldine salt form.
def vggish(postprocess=True):
    """Build a VGGish model and load its pretrained weights."""
    model = _vgg(postprocess)
    # Download (and cache) the pretrained checkpoint, then load it into the model.
    state_dict = hub.load_state_dict_from_url(VGGISH_WEIGHTS, progress=True)
    model.load_state_dict(state_dict)
    return model
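For context, a minimal usage sketch. It assumes this function lives in a module that defines _vgg and VGGISH_WEIGHTS (as in torchvggish-style packages) and that the returned model consumes the conventional VGGish input of 96x64 log-mel spectrogram patches; those shapes are assumptions about the surrounding package, not something this snippet guarantees.

import torch

# Hypothetical usage of vggish() above; assumes the model accepts batches of
# log-mel patches shaped (N, 1, 96, 64) and emits 128-dimensional embeddings.
model = vggish(postprocess=True)
model.eval()
patches = torch.randn(8, 1, 96, 64)  # placeholder input, not real audio
with torch.no_grad():
    embeddings = model(patches)
print(embeddings.shape)  # expected: (8, 128)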
Diagnosis of perinatally acquired HIV-1 infection using an IgA ELISA test. The clinical utility of the detection of anti-HIV-1 IgA antibodies using a modified commercial ELISA (EIA) test for the early diagnosis of perinatally acquired HIV-1 infection was evaluated. One hundred and seventeen sera were obtained from 86 infants born to HIV-1-infected mothers and tested for HIV IgA antibodies by a third-generation ELISA test after removal of IgG with recombinant protein G. Infants were classified according to the Centers for Disease Control and Prevention (CDC) classification system after 15 months of age; 46 were classified as HIV-infected children and 40 as uninfected. HIV IgA antibodies were detected in 53 of 64 serum samples from the infected children. No significant differences were observed in IgA detection between symptomatic and asymptomatic infected children. However, when analyzed by age, a significant difference in IgA detection was observed between children over 6 months of age and the younger group (Fisher exact test, p = 0.0000053). All 53 samples from the 40 uninfected children were IgA-negative. Statistical analyses compared IgA results with HIV infection status as the gold standard. Sensitivity (95%), specificity (100%), positive predictive value (100%), and negative predictive value (94%) of IgA antibody determination were calculated taking into account only one sample per child and only children older than 6 months. The positive likelihood ratio was 95.9% and the negative likelihood ratio was 94%. Test efficiency was 97%. The detection of IgA HIV antibodies using EIA is an effective method for the early diagnosis of HIV-infected infants in comparison with conventional IgG HIV antibody tests. It is a simple and inexpensive method that could be used in both developed and developing countries.
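The reported accuracy figures all follow from a standard 2x2 confusion matrix. A minimal sketch of the arithmetic; the abstract reports the metrics, not the per-cell counts, so the numbers below are illustrative placeholders, not the study's data.

def diagnostic_metrics(tp, fn, fp, tn):
    """Standard screening-test metrics from 2x2 counts."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    ppv = tp / (tp + fp)                           # positive predictive value
    npv = tn / (tn + fn)                           # negative predictive value
    efficiency = (tp + tn) / (tp + fn + fp + tn)   # overall test accuracy
    return sensitivity, specificity, ppv, npv, efficiency

# Placeholder counts chosen only to mimic sensitivity ~95% and specificity 100%;
# they are not the study's data.
print(diagnostic_metrics(tp=19, fn=1, fp=0, tn=33))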
This invention relates to the fabrication of extrusion dies and particularly to the fabrication of dies having an input port-to-output port contour substantially similar to the idealized flow patterns of the material to be extruded. The shape of extruded material depends primarily on the configuration of the dies through which the material is forced. Such dies are often referred to as profile dies. The profile dies are affixed to the exit port of an extrusion apparatus. The exit port of the extrusion apparatus is usually of a circular configuration. The dies must transform a substantially cylindrical melt of viscous material into a length of hardened material having an outer peripheral contour or profile of the desired configuration. An idealized profile die configuration would be one which would permit a gradual transition from the input port of the die to the output port of the die. Both the cross-sectional area and the shape should vary uniformly as the material to be extruded progresses from the exit port of the extrusion apparatus to the final profile of the product. The more complicated the shape of the finally extruded product, the greater is the difficulty in achieving a gradual transition between the input port of the profile die and the exit port of the profile die. Present profile dies are hand tooled and attempt to achieve a gradual transition over a short axial distance. Initially, such profile dies had little or no transition region over which the material to be extruded, usually referred to as the extrudate, could be transformed from a circular melt to a configuration having the desired contour or profile. A block of material, preferably steel, having an orifice the shape and size of the extruded product was affixed to the exit end of the extrusion apparatus. This blocked part of the extrudate, and the resulting back-up of extrudate often caused melt fractures and burning of part of the extrudate. To overcome this deficiency, the dies and the extrusion apparatus were commonly kept at very high temperatures to keep the extrudate highly fluid. However, this gave rise to post-extrusion problems: the extrudate would exit from the extrusion die too warm to maintain the final configuration without undue distortion. The next designs provided a cone-like transition region prior to the area in the die having the configuration of the finally extruded product. This "hog out" technique lessened the burning of extrudate and the blocking of melt prior to entering the contour area. However, such transition regions were still generally unsatisfactory for mass-production applications. A further problem was that the profile dies themselves had to be kept at a temperature sufficient to allow the material to be extruded to flow through them. However, when dies are machined from a block of steel, as is the normal method for manufacturing extrusion dies, the thermocouples which are used to check the temperature and the heater coils which are used to heat the die generally are displaced a considerable distance from the area in which the extrudate transformation is occurring. Thermocouples may be placed close to the inner surface of the die, and immersion heaters may improve the position at which heat is applied. However, heating the massive steel die remains one of the primary causes of thermal lag: a considerable period of time may elapse between the sensing of a deficiency in the temperature of the die and the rectifying of the temperature problem.
Because of the stresses and temperatures of die operation, it is generally not feasible to manufacture profile dies from materials that would reduce this thermal lag.
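To make the "gradual transition" requirement concrete: the idea is that the cross-sectional area should change at a uniform rate from the circular exit port of the extruder to the area of the final profile. The following is our own illustrative sketch of that geometry calculation, not the patent's method; all names and dimensions are hypothetical.

import math

# Illustrative only: target cross-sectional areas at evenly spaced axial
# stations, blending linearly from the circular inlet to the profile outlet.
def transition_areas(inlet_radius, outlet_area, die_length, stations):
    """Return (axial position, target area) pairs along the die."""
    inlet_area = math.pi * inlet_radius ** 2
    pairs = []
    for i in range(stations + 1):
        z = die_length * i / stations
        # Linear blend of inlet and outlet areas: uniform rate of area change.
        area = inlet_area + (outlet_area - inlet_area) * (i / stations)
        pairs.append((z, area))
    return pairs

# Example: a 50 mm radius inlet down to a 1500 mm^2 profile over a 200 mm die.
for z, a in transition_areas(50.0, 1500.0, 200.0, 4):
    print(f"z = {z:6.1f} mm  target area = {a:8.1f} mm^2")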
The Square Mile has more rough sleepers than any other London borough except Westminster: 338 were identified by Broadway, a charity, over the past year, most of whom had spent more than a year on the streets….Broadway tried a brave and novel approach: giving each homeless person hundreds of pounds to be spent as they wished….One asked for a new pair of trainers and a television; another for a caravan on a travellers’ site in Suffolk, which was duly bought for him. Of the 13 people who engaged with the scheme, 11 have moved off the streets. The outlay averaged £794 ($1,277) per person (on top of the project’s staff costs). None wanted their money spent on drink, drugs or bets. Hold on a second. Am I reading this right? Broadway identified 338 long-term homeless, but only 13 actually engaged with them? Something doesn’t add up here. It hardly seems credible that if you offered 338 people free money or stuff with no strings attached, only 13 would take you up on it. But if that is what happened, then the big result from the experiment isn’t that 11 out of 13 people benefited in some way, it’s that only 13 out of 338 were even willing to participate. I don’t know. Maybe Broadway identified 338 homeless people but only approached 13 of them? In any case, it hardly matters: it’s one thing to surprise a handful of people with an offer of assistance and receive fairly modest requests. It would be quite another to set this up as a large-scale, ongoing program. Does anyone doubt for a second that once people figured out what was going on, the size of the requests would skyrocket quickly? There’s an enormous literature on the pros and cons of cash welfare vs. in-kind benefits (i.e., housing, food stamps, Medicaid, etc.), and this is hardly going to be settled in a few blog posts. But as with anything else in a democratic society, social welfare programs have to deal not just with the technocratic merits of one approach over another, but with the views of the taxpayers who are funding the programs. And taxpayers, like it or not, are wary about handing out large sums of money to people with no strings attached. For one thing, Broadway’s experiment aside, a fair amount of no-strings cash would get spent on booze, drugs, and gambling, and taxpayers are understandably non-thrilled about their money being used that way. It may be that this is a small price to pay for the benefits of cashing out, but that’s a case that has to be made, and it can’t be made by simply dismissing concerns over morality. Moral concerns have a claim on our attention that’s as legitimate as any other kind, after all.
A crash in Murfreesboro claimed the life of a teenager. The wreck happened around 12:15 p.m. Sunday on South Church Street at Joe B. Jackson Parkway. Police said the 16-year-old driver of a pickup truck failed to stop for a red light and hit a van. An adult and two children were riding in the bed of that pickup, and all three were ejected. Authorities confirmed a 16-year-old girl died. The 16-year-old driver was seriously hurt. All victims remained hospitalized as of Monday evening. The two people in the van were not hurt. Police said they believe alcohol played a role in the crash.
/**
 * Raises the drop-down encoder wheel.
 */
public void omniWheelUp() {
    // Nudge the servo up from 0.5 by half of 1/8.5 of its range.
    servoPosition = 0.5 + (1.0 / 8.5) * 0.5;
    servo.set(servoPosition);
}
Antioxidant and Anti-Inflammatory Effects of White Mulberry (Morus alba L.) Fruits on Lipopolysaccharide-Stimulated RAW 264.7 Macrophages In this study, the protective effects of white mulberry (Morus alba) fruits on lipopolysaccharide (LPS)-stimulated RAW 264.7 macrophages were investigated. The ethanol (EtOH) extract of white mulberry fruits and its derived fractions contained adequate total phenolic and flavonoid contents, with good in vitro antioxidant radical scavenging activity. The extract and fractions also markedly inhibited ROS generation and enhanced antioxidant enzyme activity. After treatment with the EtOH extract and its fractions, the elevated nitric oxide (NO) production induced by LPS stimulation was restored, which was primarily mediated by downregulation of inducible NO synthase expression. A total of 20 chemical constituents including flavonoids, steroids, and phenolics were identified in the fractions using ultra-high-performance liquid chromatography (UHPLC)-quadrupole time-of-flight (QTOF) high-resolution mass spectrometry (HRMS). These findings provide experimental evidence of the protective effects of white mulberry fruit extract against oxidative stress and inflammatory responses, suggesting their nutraceutical and pharmaceutical potential as natural antioxidant and anti-inflammatory agents. Introduction Inflammation, a major mechanism mediating innate and adaptive immunity, is a complex physiological response that protects the organism against harmful foreign stimuli such as pathogens, particles, and viruses. Inflammation is primarily classified as acute or chronic based on the underlying mechanisms and processes. The cellular and molecular processes of chronic inflammation are varied and depend on the organ involved and, thus, are closely associated with the development and deterioration of many chronic diseases including cardiovascular, neurological, pulmonary, metabolic, endocrine, and autoimmune disorders as well as cancer. Following the initiation of inflammatory responses, immune system cells release pro-inflammatory cytokines such as tumor necrosis factor (TNF)-α, interleukin (IL)-1β, and IL-6, which induce the generation of reactive oxygen species (ROS). Persistent inflammation can cause cellular injury or hyperplasia following ROS overproduction by inflammatory cells. In addition, cellular antioxidant systems activate genes involved in DNA repair in response to ROS-induced DNA damage. Similarly, excessive oxidative stress increases the levels of inflammatory cytokines and related molecules. Macrophages play a key role in the host defense system, where they are involved in many immunologic functions including inflammatory modulation and removal of apoptotic cells. Macrophages are activated by exogenous mediators such as lipopolysaccharide (LPS), an endotoxin expressed in the cell walls of gram-negative bacteria. This phenomenon is considered the first step in the inflammatory process, and many studies of protective effects mediated by anti-inflammatory, immune-modulating, and antioxidant activities have been performed using LPS-treated macrophage cells. The white mulberry tree (Morus alba L.), a perennial plant belonging to the Moraceae family, is used in traditional medicine and widely known as an important food source for the silkworm. Additionally, the mulberry tree has economic and ecological importance, as it is known for its rapid growth and biomass production.
Its fruit is a multiple fruit with a sweet flavor, and it is extensively consumed worldwide in various forms including tea, dessert, and beverages. Specifically, the mulberry fruit, which is rich in beneficial nutrients, contains secondary metabolites that have pharmacological activities such as antidiabetic, antioxidant, anti-obesity, and anti-inflammatory effects. Previous phytochemical studies of the mulberry fruit have identified secondary metabolites and phytochemicals including flavonoids, anthocyanins, carotenoids, triterpenoids, and phenols, which serve as good sources of substances that mediate the various therapeutic effects mentioned. In the present study, we investigated the protective effects of white mulberry fruits on LPS-stimulated RAW 264.7 macrophages by determining cell viability and antioxidant and anti-inflammatory activities. Additionally, the extracts of white mulberry fruits were comprehensively analyzed to identify the active chemical constituents using ultra-high-performance liquid chromatography (UHPLC)-quadrupole time-of-flight (QTOF)-high-resolution mass spectrometry (HRMS). Total Phenolic and Flavonoid Contents The results showed that the ethanol (EtOH) extract of the white mulberry fruits and its derived fractions contained adequate total phenolic (from 102.0 to 204.3 mg gallic acid equivalent (GAE)/g) and flavonoid (from 55.1 to 74.9 mg catechin equivalent (CAE)/g) contents. Furthermore, the highest total phenolic content was found in the n-butanol (BuOH) fraction (204.3 ± 4.7 mg GAE/g), the highest flavonoid content was found in the ethyl acetate (EA) fraction (74.9 ± 4.7 mg CAE/g), and the hexane (HX) fraction exhibited the lowest values of both (Table 1). Figure 1 shows the viability of RAW 264.7 cells treated with different concentrations of the EtOH extract and fractions of white mulberry fruits, which had no significant effect on cellular viability. There was an approximately 55% reduction in cell viability after pre-incubation of RAW 264.7 cells with LPS (2 μg/mL), which was completely restored by treatment with 10 μM quercetin used as a positive control. In addition, LPS-stimulated RAW 264.7 cells co-treated with white mulberry fruit extract and fractions showed dose-dependent enhancement of viability (all p < 0.001, Figure 1). The differences among the fractions were evaluated with multiple comparison analysis at the lowest concentration tested (5 μg/mL for the EA, HX, and methylene chloride (MC) fractions; 10 μg/mL for the BuOH fraction). The EA, HX, and MC fractions exhibited better cell viability than the BuOH fraction (p < 0.01). Figure 1. Effects of the EtOH extract and fractions of white mulberry fruits on cellular viability of RAW 264.7 macrophages stimulated with lipopolysaccharide (LPS, 2 μg/mL). Data are expressed as means ± SD (n = 5). *** p < 0.001 vs. LPS treatment. Antioxidant Activity Comparative results of the in vitro antioxidant assays are presented in Table 1. The EA, BuOH, and MC fractions of the EtOH extract showed good 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activities, and the EA fraction exhibited the most potent activity (half-maximal inhibitory concentration (IC50), 133.6 ± 4.7 μg/mL). The EtOH crude extract and HX fraction did not exhibit DPPH radical scavenging activity in the concentration range tested (IC50 > 1000 μg/mL).
The extract and all fractions of white mulberry fruits, except for the HX fraction, exhibited 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging activities in the concentration range tested. The EA and MC fractions showed higher activity (IC50, 216.6 ± 28.8 and 218.1 ± 22.6 μg/mL, respectively) than the extract and other fractions. The ferric reducing antioxidant power (FRAP) values of the studied samples ranged from 0.505 to 3.727 mmol Fe2+/g. Similar to the results of the DPPH and ABTS assays, those of the FRAP assay showed that the EA fraction (3.727 ± 0.055 mmol Fe2+/g) exhibited the highest value of all tested samples. Intracellular ROS levels of LPS-stimulated RAW 264.7 macrophages were 5-fold higher than those of the control, whereas co-treatment with white mulberry fruit extract and fractions significantly inhibited LPS-induced ROS generation. Even at the lowest concentration, the EtOH extract and the EA and MC fractions inhibited ROS production (to 34.8%, 37.6%, and 21.2% of LPS treatment levels, respectively) more than the positive control did (10 μM quercetin, 57.4% of LPS treatment levels) (Figure 2). In the multiple comparison analysis, the EA and MC fractions showed significantly lower ROS generation than the BuOH and HX fractions, respectively (p < 0.001). The antioxidant activities of superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) were also significantly enhanced by treatment with white mulberry fruit extract and fractions at the lowest studied concentrations, as described in Table 2. Significant differences between the fractions were observed only in GPx activity; the EA fraction exhibited better GPx activity than the HX fraction (p < 0.01), and the MC fraction had the highest GPx activity (2.700 ± 0.044 nmol/min), reaching significance compared to the BuOH (p < 0.05) and HX (p < 0.01) fractions. Anti-Inflammatory Activity As shown in Figure 3A, nitric oxide (NO) levels were markedly increased by LPS treatment of RAW 264.7 macrophages. The LPS-induced increase in NO production was significantly inhibited by 10 μM quercetin (to 35.4% of LPS treatment levels) as well as by the white mulberry fruit extract and fractions in a concentration-dependent manner. Interestingly, the EtOH extract and the MC and HX fractions exhibited potent inhibitory effects on NO production at the maximum tested concentrations (20.2%, 29.9%, and 32.8% of LPS treatment levels, respectively), which differed slightly from the results of the antioxidant activity analysis. In the multiple comparison analysis, significant differences were observed only between the EA and BuOH fractions (p < 0.05). Figure 3B,C show the relative protein expression levels of inducible NO synthase (iNOS) in LPS-treated RAW 264.7 macrophages. Treatment with the white mulberry fruit extract and fractions resulted in lower iNOS levels than those observed with LPS treatment only. Notably, the EtOH extract and the HX and MC fractions suppressed iNOS expression more strongly than the positive control did across all studied concentration ranges. These findings were consistent with the results of the multiple comparison analysis; both the HX and MC fractions showed significantly lower iNOS expression than the EA and BuOH fractions (p < 0.01 for EA vs. HX, p < 0.001 for the others). Mass Spectral Identification and Qualitative Analysis of Extracts QTOF-MS is a widely used tool in the field of metabolomics that yields high mass accuracy and elucidates the elemental composition of compounds.
HRMS data yield valuable information that enables screening of the masses of secondary metabolites and, thus, are another powerful tool because compounds can be identified without using actual reference standards. The chemical constituents of the M. alba fruit extracts were qualitatively analyzed using UHPLC-QTOF-HRMS. In total, 20 chemical constituents were detected in the fractions of the crude extract (Table 3), and all the metabolites were characterized based on the MS data, interpreted in light of the currently available literature. The exact mass of the reported compounds was compared with the obtained mass data to generate the parts per million (ppm) value. A difference between the theoretical and measured exact mass within approximately ±50 ppm indicated that the compound was a positive match; thus, lower ppm values indicate a higher probability that the measured compound exists in the extract. The exact masses were calculated based on the possible proton and sodium adducts under positive ionization. The structures of these compounds have been identified and characterized from M. alba in previously reported studies. Table 3. Chemical constituents identified in the fractions of white mulberry fruits using ultra-high-performance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry (UHPLC-QTOF-HRMS). Discussion The in vitro antioxidant assays performed in this study (DPPH, ABTS, and FRAP assays) are simple and the most commonly used in the early screening of antioxidant properties of vegetable and fruit extracts and products. DPPH is a highly colored and stable free radical, which in the presence of antioxidant substances is reduced to the non-radical 2,2-diphenyl-1-picrylhydrazine, with a loss of its violet color. The ABTS assay measures the capacity of an antioxidant to scavenge the ABTS radical cation (ABTS•+) generated by reacting the parent compound with a strong oxidizing agent such as potassium persulfate. It can be used over a wide pH range in both aqueous and organic solvent systems. The FRAP assay directly evaluates total antioxidant power, where the ferric-tripyridyltriazine (Fe3+-TPTZ) complex is reduced to the ferrous form (Fe2+) at low pH, producing an intense blue color. In this study, the EtOH extract and fractions of white mulberry fruits exhibited appropriate DPPH and ABTS radical scavenging effects and FRAP values, which were especially high for the EA and MC fractions. In contrast, the HX fraction showed the lowest antioxidant activity, with IC50 values for the DPPH and ABTS assays over the upper concentration limit tested in this study (IC50 > 1000 μg/mL) (Table 1). These findings are very similar to the results of other previously published studies. The results suggest that the protective effect of white mulberry fruits against oxidative stress is primarily mediated by constituents of the EA and MC fractions. Intracellular free radicals can damage various cell constituents and activate specific signaling pathways, both of which affect numerous cellular processes linked to aging and the development of related diseases. Overproduction of ROS and reduced antioxidant capacity can result in a redox imbalance, inducing the inflammatory response and oxidative stress and eventually leading to the formation of various pathophysiological lesions. Consistent with the results of previous studies, in this study, LPS increased ROS generation in stimulated RAW 264.7 macrophages (Figure 2).
Co-treatment of LPS-stimulated RAW 264.7 macrophages with white mulberry fruit extract and fractions resulted in significant reductions in ROS levels. At the maximum concentrations used in this assay, all studied samples except for the BuOH fraction showed inhibition of ROS production similar to that in the control, which was not stimulated by LPS. Furthermore, the EA and MC fractions induced lower ROS levels than the positive control (10 μM quercetin) at the lowest tested concentration, in line with the results of the in vitro antioxidant assays (Figure 2). Antioxidant enzymes such as SOD, GPx, and CAT stabilize or inactivate the detrimental effects of free radicals on cellular components. They also inhibit the oxidizing chain reaction to minimize free radical-induced cellular and molecular damage. By reducing cellular exposure to free radicals, antioxidant enzymes contribute to decreasing the risk of various associated health problems including the physiological manifestations of aging, cardiovascular diseases, diabetes, neurodegenerative diseases, and cancer. SOD establishes the first-line defense against superoxide radicals (O2−) by catalyzing their breakdown to oxygen and H2O2. This ROS-scavenging process of SOD is only effective with the cooperative actions of GPx and CAT, during which H2O2 undergoes further degradation. We found that the EtOH extract and all fractions of white mulberry fruits, at the lowest concentrations used in this study, simultaneously and significantly enhanced SOD, GPx, and CAT enzyme capacities, which were suppressed by LPS treatment (Table 2). NO is a free radical widely distributed in the body that regulates various biological functions including vasodilation, smooth muscle contraction, neuronal signaling, platelet aggregation inhibition, immunological regulation, and inflammatory responses. LPS-induced activation of macrophages leads to iNOS expression, resulting in increased NO production. Excessive NO levels have been implicated in cell death, inflammatory responses, and the pathogenesis of several disease states. In this study, white mulberry fruit extract and fractions significantly inhibited the production of nitrites in LPS-stimulated macrophage cells, which protected cell viability (Figures 1 and 3A). We also confirmed that the protein expression level of iNOS was lower after treatment with white mulberry fruit samples than with LPS treatment alone (Figure 3B,C). This observation indicates that the reduction of NO levels in LPS-stimulated RAW 264.7 macrophages treated with white mulberry fruit extract was primarily mediated by downregulation of iNOS expression. To the best of our knowledge, this is the first study to investigate the anti-inflammatory activity of the fractions of M. alba fruit and its underlying mechanism. Table 3 shows the various chemical constituents we identified from the white mulberry fruit fractions using UHPLC-QTOF-HRMS analysis. The multiple comparison analyses confirmed superior antioxidant and anti-inflammatory activities in the EA and MC fractions. Most compounds from the EA fraction, which has the highest total flavonoid content (Table 1), were flavonoids or their derivatives. The constituents identified in this study (quercetin, kaempferol, luteolin, astragalin, and taxifolin) have shown various biological health-promoting effects including antioxidant and anti-inflammatory activities mediated through different molecular mechanisms.
On the other hand, constituents with various chemical structures were found in the MC fraction. Indole is an aromatic heterocyclic compound commonly distributed in nature. Many well-known indole derivatives have been developed as pharmaceutical agents, such as nonsteroidal anti-inflammatory drugs (indomethacin and etodolac), antimigraine agents (sumatriptan and naratriptan), and a non-selective β-blocker (pindolol). In addition, numerous biological activities including antioxidant, anti-inflammatory, analgesic, antimicrobial, antidiabetic, antidepressant, and anticancer activities have been reported for compounds with an indole nucleus. Loliolide is a monoterpenoid active ingredient found in green algae that exhibits antioxidant, antiviral, anti-inflammatory, anticancer, antimelanogenic, and antiapoptotic properties. Odisolane was recently isolated as a novel oxolane derivative from 70% aqueous methanol extracts of M. alba fruits. Odisolane significantly inhibited angiogenesis in human umbilical vein vascular endothelial cells, a pathological process that is closely related to chronic inflammation and oxidative stress. It was not possible to determine the specific individual effects of the identified compounds because other minor components were also present in the extract and fractions; however, the antioxidant and anti-inflammatory activities of white mulberry fruit could be partially attributed to the combined effects of these active constituents. A limitation of this study is that the signaling pathways associated with the anti-inflammatory effects of white mulberry fruits were not fully identified. Nuclear factor (NF)-κB induces pro-inflammatory cytokines, chemokines, and adhesion molecules that are essential for both innate and adaptive immune responses. NF-κB is known to play a role in the expression of iNOS and another well-known inflammatory marker, COX-2. Consequently, we also evaluated the effects of white mulberry fruit extract and fractions on the NF-κB signaling pathway and COX-2 expression. However, there was no significant change in NF-κB p65 or COX-2 protein expression following treatment with white mulberry fruit extract and fractions (data not shown), although both are overexpressed in RAW 264.7 macrophages after LPS stimulation. Several studies have reported that the expression of iNOS and COX-2 is also affected by mitogen-activated protein kinase (MAPK) signaling, which is involved in the regulation of cell growth, differentiation, and apoptosis [47]. Therefore, further investigations are needed to determine the possible inactivation of the MAPK pathway by white mulberry fruit. Plant Material, Extraction, and Preparation of Fractions Mulberry fruits (M. alba) were acquired from the Kyungdong Market (Woori Herb), Seoul, Korea, in January 2014. The material was verified by one of the authors (K.H.K.), and a voucher specimen (MA 1414) was deposited in the herbarium of the School of Pharmacy, Sungkyunkwan University, Suwon, Korea. The M. alba fruits (0.9 kg) were dried in a hot-air oven at 60 °C, and the dried materials were extracted three times with 70% aqueous EtOH at room temperature and filtered through Whatman No. 42 filter paper. The filtrate was evaporated in vacuo to obtain the crude EtOH extract (140 g). The extract was dissolved in deionized water and then solvent-partitioned three times with 800 mL each of HX, MC, EA, and BuOH, yielding 2.8, 8.5, 3.3, and 13.9 g of the respective fractions.
Concentrated extracts and fractions were subsequently lyophilized and stored at −20 °C prior to analysis. Determination of Total Phenolic Content The total phenolic content was determined using the Folin-Ciocalteu method with some modifications. Each sample (100 μL) was mixed with 200 μL Folin-Ciocalteu reagent and allowed to react for 1 min. Following the addition of 3 mL 5% sodium carbonate (Na2CO3), the mixtures were incubated for 60 min at room temperature in the dark. The absorbance was measured at a wavelength of 725 nm using a microplate spectrophotometer (xMark, Bio-Rad, Hercules, CA, USA). Gallic acid was used as the standard, and the total phenolic content was determined from the calibration curve for gallic acid (y = 2.4652x + 0.008, r² = 0.9998). Results were expressed as milligrams of gallic acid equivalents per gram of sample (mg GAE/g). Determination of Flavonoid Content The flavonoid content was determined using the aluminum chloride (AlCl3) method with some modifications. Each sample (100 μL) was mixed with 150 μL sodium nitrite (NaNO2) and allowed to react for 5 min. Then, 300 μL 10% AlCl3 solution and 1 mL 1 M sodium hydroxide were added, and the absorbance was measured at a wavelength of 510 nm using a microplate spectrophotometer. (±)-Catechin was used as the standard, and the flavonoid content was determined from the calibration curve for (±)-catechin (y = 0.9472x + 0.0011, r² = 0.9964). Results were expressed as milligrams of catechin equivalents per gram of sample (mg CAE/g). Cell Culture Murine RAW 264.7 macrophage cells were cultured in DMEM containing 4 mM L-glutamine, 4.5 g/L glucose, and sodium pyruvate, supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were maintained in a humidified atmosphere with 5% carbon dioxide (CO2) at 37 °C. DPPH Radical Scavenging Assay The DPPH radical scavenging activity was determined using the method of Blois with some modifications. Briefly, 50 μL 0.2 mM DPPH solution was added to the same volume of each sample at a concentration range of 10–1000 μg/mL and incubated at room temperature for 15 min in the dark. The absorbance was measured using a microplate spectrophotometer at 517 nm. The scavenging activity was calculated as follows: DPPH scavenging activity (%) = ((A0 − Ac)/A0) × 100, where A0 and Ac are the absorbance of the control and sample, respectively. ABTS Radical Scavenging Assay The ABTS radical scavenging activity was determined using the method of Arts et al. with some modifications. Briefly, 7 mM ABTS and 2.45 mM potassium persulfate were mixed (1:1) and incubated at room temperature for 24 h in the dark. The ABTS solution was diluted in 100% methanol to obtain an absorbance of 0.70 ± 0.02 at 734 nm. Then, 50 μL diluted ABTS solution was added to the same volume of each sample at a concentration range of 10–1000 μg/mL and incubated at room temperature for 5 min in the dark. The absorbance was measured using a microplate spectrophotometer at 734 nm. The scavenging activity was calculated as follows: ABTS scavenging activity (%) = ((A0 − Ac)/A0) × 100, where A0 and Ac are the absorbance of the control and sample, respectively. FRAP Assay The FRAP assay was performed using the method of Benzie and Strain with some modifications. The FRAP reagent was prepared by mixing 300 mM acetate buffer (pH 3.6), 10 mM 2,4,6-tris(2-pyridyl)-s-triazine, and 20 mM ferric chloride (FeCl3·6H2O) at a 10:1:1 ratio.
Then, 175 μL FRAP reagent was added to 25 μL of each test sample at a concentration of 1000 μg/mL and incubated at 37 °C for 4 min. Ferrous sulfate (FeSO4·7H2O) was used as the standard, and the absorbance was measured at a wavelength of 593 nm using a microplate spectrophotometer. The results are expressed as millimoles (mmol) of FeSO4·7H2O equivalents per gram of sample (mmol Fe2+/g). Measurement of Intracellular ROS Levels Intracellular ROS levels were measured using the DCF-DA assay as described by Sittisart and Chitsomboon. Cells were seeded in 96-well plates (2 × 10^4 cells/well), treated with the positive control or different concentrations of the crude extract and fractions of white mulberry fruits for 2 h, and then incubated with LPS for 20 h. Then, the supernatant was discarded, and 20 μM DCF-DA in serum-free DMEM was added, followed by further incubation at 37 °C for 30 min, protected from light. The supernatant was removed, the cells were washed with PBS twice, and then 100 μL PBS was added to each well. The fluorescence intensity was detected at excitation and emission wavelengths of 485 and 535 nm, respectively, using a multi-mode microplate reader (SpectraMax M3, Molecular Devices, San Jose, CA, USA). Antioxidant Enzyme Capacity Assays The antioxidant enzyme capacity was assayed in accordance with the methods previously described by Lee et al. Briefly, RAW 264.7 cells were seeded in 24-well plates (2 × 10^5 cells/well), treated with the positive control or white mulberry fruit extract and fractions for 2 h, and then incubated with LPS for 20 h. The culture medium was removed, and the cells were washed twice and then scraped into 1 mL PBS. Cell suspensions were centrifuged at 14,000 rpm at 4 °C for 5 min. For the determination of SOD activity, cell homogenates were prepared by homogenizing cell suspensions with 0.05 M sodium carbonate buffer (pH 10.2). The final reaction mixture consisted of 50 μL cell homogenate and 0.05 M sodium carbonate buffer containing 3 mM xanthine, 0.75 mM NBT, 3 mM EDTA, and 1.5 mg/mL bovine serum albumin (BSA). The reaction was initiated by adding 50 μL xanthine oxidase (0.1 mg/mL) and incubating at room temperature for 30 min, and it was then stopped by adding 6 mM copper (II) chloride and centrifuging at 1500 rpm for 10 min. The absorbance of blue formazan in the supernatant was determined at a wavelength of 560 nm. The assay mixture for determining GPx activity contained 0.1 M phosphate buffer (pH 7.0), 1 mM EDTA, 1.5 mM NADPH, 1 mM sodium azide, 1 unit of GSH reductase, 10 mM GSH, and 100 μL cell lysate. This mixture was incubated at 37 °C for 10 min, and then hydrogen peroxide (H2O2) was added to each sample at a final concentration of 1 mM, followed by the measurement of activity at a wavelength of 340 nm. The assay mixture for CAT activity contained 12 μL 3% H2O2 and 100 μL cell lysate in 50 mM phosphate buffer (pH 7.0), and samples were incubated at 37 °C for 2 min. The absorbance of the samples was measured for 5 min at a wavelength of 240 nm. The variation in absorbance is proportional to the breakdown of H2O2. Measurement of Intracellular NO Levels The concentration of nitrite, a stable oxidized product of NO, in the cell culture medium was determined using a Griess reagent system kit (Promega). Cells were seeded in 96-well plates (2 × 10^4 cells/well) and treated with the positive control or different concentrations of white mulberry fruit crude extract and fractions for 2 h, followed by LPS for 24 h.
Then, 50 μL samples of the supernatant from the treated culture medium were mixed with 50 μL 1% sulfanilamide in 5% phosphoric acid and incubated at room temperature for 10 min, protected from light. Then, 50 μL 0.1% N-1-naphthylethylenediamine dihydrochloride in water was added, followed by incubation at room temperature for 10 min, protected from light. The absorbance was measured at a wavelength of 540 nm using a microplate spectrophotometer. The NO level of each experimental sample was calculated using a NaNO2 (0–100 μM) standard curve. Measurement of iNOS Protein Expression For Western blot analysis, cells were seeded in 12-well plates (5 × 10^5 cells/well) and pre-incubated for 2 h with the positive control or different concentrations of the crude extract and each fraction. After LPS stimulation at 37 °C for 20 h in a humidified atmosphere of 5% CO2, cells were washed with PBS, homogenized with lysis buffer containing a protease inhibitor cocktail, and centrifuged at 14,000 rpm (4 °C, 20 min). Each supernatant sample containing an equal total protein amount (20 μg) was loaded for separation using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and then transferred onto a polyvinylidene difluoride membrane. After blocking with 5% skim milk for 1 h, the membrane was incubated with a primary antibody against iNOS (1:1000 dilution in 5% BSA) at 4 °C overnight, followed by a horseradish peroxidase-conjugated secondary antibody (1:2000 dilution in 5% skim milk) at room temperature for 1 h. The membrane was washed, and immunoreactive bands were detected using the ChemiDoc imaging system with an enhanced chemiluminescence solution kit (Bio-Rad). Chemical Profiling and Qualitative UHPLC-QTOF-MS Analysis Extracts of white mulberry fruits were chemically profiled using an Agilent 1290 Infinity II HPLC instrument (Foster City, CA, USA) coupled to a G6545B Q-TOF mass spectrometer (Agilent Technologies). The HX, MC, EA, and n-BuOH soluble fractions were dissolved in the respective extraction solvents and filtered through 0.45-μm filters before injection. The compounds were separated using an Agilent EclipsePlus C18 column (2.1 mm × 50 mm, 1.8 μm; flow rate, 0.3 mL/min) maintained at 20 °C. The mobile phase consisted of 0.1% formic acid in water (solvent A) and 100% acetonitrile (solvent B). The gradient elution was performed on the following schedule: 90% A → 100% B (0–10 min), 100% B (11–16 min), and 90% A (16–20 min) for equilibration before the next injection. The samples were monitored at 210 and 254 nm during the chromatographic run. The mass spectral analysis was performed using the MassHunter software (Agilent, Foster City, CA, USA), and the mass spectrometer conditions were as follows: ionization mode, electrospray ionization (ESI (+)); MS scan range, m/z 100–1700; nebulizer gas (N2) pressure, 35 psi; dry gas (N2) flow rate, 8 L/min; drying gas temperature, 225 °C; sheath gas temperature, 320 °C; capillary voltage, 3.5 kV; fragmentor voltage, 100 V; collision energy, 3.0 eV. The exact masses of the organic compounds identified from the mass spectral data were compared with theoretical values from previous studies, and the agreement between measured and theoretical values was quantified as follows.
The accuracy was reported as the change (Δ, parts per million (ppm)) and was calculated using the following equation: Δ (ppm) = ((mass_exp − mass_calc)/mass_exp) × 10^6, where mass_exp and mass_calc are the experimental mass and the mass calculated from previously published molecular formulas, respectively. Statistical Analysis All experiments were replicated three or five times, and assay results are expressed as means ± SD. The IC50 value of the DPPH and ABTS radical scavenging assays was defined as the concentration of sample scavenging 50% of the free radicals. Differences in mean values between each mulberry fruit sample and the control or LPS-treated sample were analyzed using Student's t-test, and differences in mean values among mulberry fruit fractions at the lowest concentration (5 μg/mL for the EA, HX, and MC fractions; 10 μg/mL for the BuOH fraction) were analyzed using one-way analysis of variance (ANOVA) with the post-hoc Tukey multiple comparison test. p-values < 0.05 were considered statistically significant. Conclusions In conclusion, we demonstrated that white mulberry fruits contain adequate amounts of total phenolics and flavonoids and exhibit beneficial antioxidant and anti-inflammatory properties without cytotoxicity. The present study also reported the chemical constituents of white mulberry fruit extracts, including some associated with the observed antioxidant and anti-inflammatory activities. Our findings indicate that white mulberry fruits have protective effects against oxidative stress and inflammatory responses, suggesting their nutraceutical and pharmaceutical potential as natural antioxidant and anti-inflammatory agents.
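For readers who want to reproduce the arithmetic, the quantitative recipes in the Methods above reduce to a few one-line formulas. A minimal sketch in Python; the function names are ours and the example inputs are illustrative placeholders, not the study's raw data.

def scavenging_pct(a0, ac):
    # DPPH/ABTS scavenging (%) = ((A0 - Ac) / A0) * 100, per the assays above.
    return (a0 - ac) / a0 * 100

def gallic_acid_conc(absorbance):
    # Invert the gallic acid calibration curve y = 2.4652x + 0.008; converting
    # the result to mg GAE/g sample additionally needs the dilution factor and
    # sample mass, which are omitted here.
    return (absorbance - 0.008) / 2.4652

def ppm_error(mass_exp, mass_calc):
    # Mass accuracy: ((mass_exp - mass_calc) / mass_exp) * 10^6; values within
    # roughly +/-50 ppm were treated as positive matches.
    return (mass_exp - mass_calc) / mass_exp * 1e6

print(scavenging_pct(0.80, 0.40))     # 50.0: this concentration sits near the IC50
print(ppm_error(303.0505, 303.0499))  # ~2 ppm: comfortably within the match window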
Glacial and lake fluctuations in the area of the west Kunlun mountains during the last 45 000 years There appear to have been several important glacial advances on the southern slope of the west Kunlun mountains, Tibetan Plateau, since 45 000 a BP. Based on the record of alternating till and lacustrine sediments and 14C determinations, these advances are dated to 23 000–16 000, 8500–8000, and 4000–2500 a BP, and to the 16th–19th century AD, with regional variations occurring during each of the advances. The glaciation of 23 000–16 000 a BP is equivalent to the last glacial maximum (LGM) and its scope and scale were much larger than any of the others. Lake changes are a response to both tectonic uplift of the plateau and global climatic change. With regard to the latter, both changes in precipitation and changes in the extent of glaciation can affect lake levels. High lake levels occurred during interstadial conditions between 40 000 and 30 000 a BP, when the area experienced a relatively warm and humid climate, and during the LGM, between 21 000 and 15 000 a BP. During the Holocene, lakes have been shrinking gradually, coincident with the dry climate of this period of time.
Satellite Internet Fair Access Policy: Explained & Explored Our goal is to give each of our customers the fastest service at the lowest price. To ensure that all ViaSat customers have equitable access to the network and that heavy usage by a small number of customers does not negatively impact network performance for all customers, the ViaSat service utilizes a data allowance policy (the "Policy"). This Policy explains what happens when you use the maximum amount of data included in your plan. ViaSat Internet access is not guaranteed and is subject to this Policy. We have several Exede Broadband plans available, each of which has a different monthly data allowance. We measure your data usage on a monthly basis and reset it to zero on the same day each month. Starting on the first day of your monthly measurement period, all uploaded and downloaded data transmitted using your ViaSat account counts toward your data allowance. If your data usage reaches 100% or more of your monthly data allowance, we will alert you of this fact. If at any time your data usage exceeds the data allowance, ViaSat may severely slow, restrict, and/or suspend your service, or certain uses of your service, until the end of your monthly measurement period. ViaSat may offer you the option of purchasing additional increments of data to use during the remainder of your measurement period. At the end of each monthly measurement period, your data usage resets to zero. Any unused data or additional purchased increments of data do not carry over to the next month. This Policy contains important information about your use of the ViaSat service and your relationship with ViaSat. If you do not agree with this Policy, you are not permitted to use the ViaSat service and must terminate your account immediately, subject to the terms of your Customer Agreement. WildBlue uses a somewhat different method. They "give" the customer a specific number of gigabytes that can be downloaded and uploaded in any given rolling 30-day period. As an example, the Pro-pak user is given a 17 Gig (17,000 MB) download limit and a 5 Gig (5,000 MB) upload limit. If the customer exceeds that limit they will be fapped: throttled down to much slower speeds until their usage falls back below 80% of the rolling 30-day limit. A rolling 30-day limit differs from a fixed 30-day limit by continuously rolling a day forward as the next day begins. A monthly limit was used for several months; however, some users would see on the 29th day that they had perhaps 8 Gig of download bandwidth they had not used. These users would then download everything and anything in order to "get their money's worth" and use all of their allotted bandwidth. This created overuse of the system and caused slowdowns near the end of each month. With a rolling FAP, if you use a large chunk of your bandwidth on a given day and go over the limit set by your plan, you have to wait until that day rolls out of the rolling 30 days. This has effectively stopped this type of use. This Policy may be revised from time to time upon notice by posting a new version of this document to the website or any successor URL(s). All revised copies of the Policy are effective immediately upon posting.
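The rolling window is straightforward to model in code. A minimal sketch, assuming one usage total per day; the 17,000 MB cap and the 80% recovery threshold follow the description above, while the daily traffic numbers are made up.

from collections import deque

# Sketch of the rolling 30-day FAP described above: throttle when the last 30
# days of usage exceed the cap, and recover once usage falls back below 80% of
# the cap as old days roll out of the window.
def rolling_fap(daily_mb, cap_mb, recover_fraction=0.8):
    window = deque(maxlen=30)   # the oldest day rolls out as a new day begins
    throttled = False
    states = []
    for day_usage in daily_mb:
        window.append(day_usage)
        used = sum(window)
        if used > cap_mb:
            throttled = True
        elif throttled and used < recover_fraction * cap_mb:
            throttled = False
        states.append((used, throttled))
    return states

# Example: a 17,000 MB rolling cap with one 9,000 MB binge on day 5. Steady
# daily use plus the binge trips the cap around day 22, and service recovers
# once the binge day leaves the 30-day window.
usage = [400] * 4 + [9000] + [400] * 40
for day, (used, throttled) in enumerate(rolling_fap(usage, 17_000), start=1):
    if throttled:
        print(f"day {day}: {used} MB in window -> throttled")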
Exhaustion, Distribution and Communication to the Public: The CJEU's Decision C-263/18 Tom Kabinet on E-Books and Beyond In its Tom Kabinet decision, the CJEU took a further step in dealing with digital facts under the InfoSoc Directive. (See the text of the decision in this issue of GRUR International at DOI: 10.1093/grurint/ikaa041. The author wishes to thank Aaron Stumpf, Stefan Scheuerer and Laura Valtere for fruitful discussions.) This decision on the sale of second-hand e-books through a website has set a number of things in motion: besides distinguishing between the distribution right and the right of communication to the public, the decision also affects the exhaustion doctrine and the coherence of European copyright law. In the past few years, discussions about so-called digital exhaustion and related issues have increased enormously. A few days before Christmas 2019, the CJEU published its long-awaited judgment in case C-263/18, also known as Tom Kabinet, in which it decided that the sale of second-hand e-books through a website constitutes communication to the public and therefore requires the consent of the rightholder. This opinion gives insights into why the Tom Kabinet decision was so eagerly awaited, what exactly was decided and whether the CJEU's decision could fulfil these great expectations. I. The issue at stake Basically, the judgment was expected to answer the question whether the exhaustion rule - traditionally applied to analogue copies of a work - can be extended to cover acts of transmission as regards digital copies, enabling a permanent use of traditional works. From a practical point of view, the outcome certainly has economic consequences, since the existence of second-hand markets for e-books (as well as for other digital works) is at stake. Upon closer inspection, the impact from a legal point of view is great on more than one level. Obviously, there is the question whether a doctrine elaborated in the late 19th century 1 can cope with digital facts. In this case, it is especially questionable whether the act of handing over a physical carrier and the act of downloading can be treated equally. Dealing with digital facts brings up another issue in the context of exhaustion: the natural use of the work after a potentially exhausted distribution. Since the enjoyment of a digital work - in contrast to the enjoyment of an analogue work - usually requires reproductions, which fall under Art. 2 InfoSoc Directive, a potential user needs permission for these technically necessary reproductions. If the law cannot grant such permission, 2 the question arises whether the permission of the first user could be transferred to the second user. 3 Broadening the picture, the issue of 'digital exhaustion' has arisen before with regard to the suitable exploitation right. Since at least de lege lata the exhaustion rule in Art. 4 InfoSoc Directive only applies to the limitation of the distribution right in Art. 4 InfoSoc Directive, the capacity of the distribution right to cover transfers without any physical carrier involved is already in question. 4 Taking an even wider perspective, the issue is not only related to copyright law. Since the exploitation of a work is essentially always based on contracts, rules enabling contractual positions can easily conflict with restrictions set by copyright law. It is especially doubtful whether a granted possibility to resell 5 can be limited by copyright law - which could be the case if exhaustion is not applicable. 6 II.
Context of the case The CJEU's Tom Kabinet decision was expected to constitute a preliminary highlight in a series of decisions regarding the application of the exhaustion rule in the digital context. When talking about this issue, one must distinguish between software and traditional works. This formal distinction 7 results from the fact that software is regulated in the Software Directive 8 and traditional works in the InfoSoc Directive. 9 Whereas after a while it became clear that the exhaustion rule can be applied when works (be they software or traditional works) are embedded in a physical data carrier that is handed over like a tangible book, 10 the question arose how to assess the situation when no such carrier is involved. 1. Software Following a series of judgments that paved the way regarding software, 11 the CJEU's renowned UsedSoft decision was a cornerstone for the application of the exhaustion doctrine in the digital context. In short, the CJEU ruled that the exhaustion doctrine can be applied to situations where a computer programme is lawfully put in circulation via download. 12 Since downloading and transferring a physical carrier were therefore treated in the same way, 13 the judgment was considered a paradigm shift 14 and was of course heavily discussed. 15 It is by no means surprising that the idea of transferring this finding to the exhaustion (and also to the distribution) of traditional works became attractive. 16 2. Traditional works a) Download for temporary use (in the case of lending) In 2016, the CJEU was asked whether lending e-books follows the same rules as lending analogue books. After stating that no binding law to the contrary exists, the CJEU held in its Stichting decision that in the case of public lending, the transmission of digital copies and the transfer of analogue copies are comparable and should therefore be assessed in the same way. 17 This decision caused problems, at least for German law, according to which the right to lend is covered by the distribution right (Sec. 17 German Copyright Act) and the limitation of Art. 6 Rental and Lending Directive 18 regarding public lending is expressed inter alia in the exhaustion rule in Sec. 17 German Copyright Act. 19 Although the decision dealt with the special exploitation right of public lending, it raised expectations that the same technology-neutral approach would be applied to traditional works under the InfoSoc Directive. 20 b) Download for permanent use The application of the exhaustion rule to the downloading of traditional works for permanent use has not yet been finally decided at the European level. 21 The existing decisions by lower instances at the national level reached different results: 22 whilst it seemed clear to German courts that the exhaustion rule cannot be applied to traditional works in the case of downloading, 23 the courts in the Netherlands tended to see it differently. Since the present dispute has a long-standing history which includes a previous lawsuit, not only all instances in that lawsuit 24 but also the referring court in the present case were not averse to applying the exhaustion rule. 25 To raise the tension further, the academic discussion 26 seems more or less equally divided between proponents of each side - pro and contra application of the exhaustion rule. 27 Unsurprisingly, when the Rechtbank Den Haag (District Court, The Hague) referred inter alia the second question - the application of the exhaustion rule to downloads - to the CJEU, it was greeted with much anticipation. 28
III. The Tom Kabinet case The Tom Kabinet decision deals on the surface with two related issues. One is the extension of exhaustion, as discussed above. The other is the differentiation between distribution and communication to the public. At first glance, the judgment gives the impression that some exhaustion enthusiasts, longing for a detailed elaboration on this doctrine, will be disappointed. The CJEU answered only the first of the four referred questions, which was directed at the issue of applying the distribution right. The Court decided that the supply of e-books via download for permanent use does not affect the distribution right (Art. 4 InfoSoc Directive), but rather the right of making available to the public as a subset of communication to the public (Art. 3 InfoSoc Directive). 29 Under this premise, the CJEU did not answer questions two to four, including the question regarding an application of the exhaustion rule in Art. 4 InfoSoc Directive. However, this decision has a major impact not only on the distinction between the distribution right and the right of communication to the public but also on the exhaustion doctrine and beyond, which will be set out in the following part. Facts Tom Kabinet was operating an online platform called 'Tom reading club'. Tom Kabinet obtained e-books either from official distributors or from private persons and offered these e-books to its members via download for potentially permanent use. 30 The e-books on this platform could therefore be referred to as 'used' or 'second-hand' e-books, 31 though these terms are not without contradictions if one takes into account the nature of digital copies. Tom Kabinet aimed to offer only legally obtained e-books. Thus, the company attached a digital watermark to the e-books either when they were originally acquired by Tom Kabinet itself or when the members providing e-books declared that they had not kept a copy of the book. 32 A fact that is not explicitly provided by the CJEU but is crucial to understanding the questions referred is that Tom Kabinet kept 50 cents for each traded book to forward to the authors and publishers, while at the same time stating that it was not obliged to do so. 33 The CJEU not only answered just one of the four questions referred; it also reformulated that question. The reason for this approach, in the view of the CJEU, was to provide better grounds for the national court's decision. 34 Whilst this is not the first time the CJEU has applied this procedure, 35 it is not as simple as it seems at first glance. 36 The differentiation between the distribution right and the right of communication to the public Before answering the question whether an exploitation right can be limited, it seems logical to first clarify which exploitation right, if any, could have been infringed by the defendant. 37 After emphasising that the question whether providing a download that enables a permanent use for one user 38 falls under the right of distribution 39 cannot be answered from the wording of the InfoSoc Directive, the CJEU begins with a systematic, historical and teleological interpretation of the InfoSoc Directive. 40 Finally, the Court denies the application of the distribution right but rules that the right of communication to the public is applicable. [Footnote 36: The question referred in the disputed case could have been answered abstractly. By changing the question in the way the CJEU did, this was no longer possible.
Instead of a question about the scope of art 4 InfoSoc Directive, the question now concerned a set of facts (the supply of an e-book via download for permanent use) and asked which of two potentially applicable exclusive rights covered it. In theory, the CJEU therefore no longer needed to determine the exact requirements of art 4 InfoSoc Directive. In the present case, uncertainties also arise. Since different transmissions of e-books were carried out (from Tom Kabinet to members but also from members to Tom Kabinet), the reformulated question creates uncertainty as to which particular transfer is under examination by the CJEU (see Kuschel (n 30) 138 f). It is not the reformulated question but the procedural constellation, the facts established by the referring court and the answer the CJEU provided that make it clear that only the e-book transfer from Tom Kabinet to customers could be the subject of both the referring and the reformulated question.

The CJEU based its decision to a large extent on its understanding of certain terms of the InfoSoc Directive, the WCT 41 and the explanatory memorandum of the European Commission. It should be mentioned that the terminology 'tangible and intangible copies' 42 that the CJEU uses as a key element in its line of argument is (once again) 43 controversial. A copy seems by its very nature to always be tangible. It makes no difference whether a work is incorporated in paper or in an electronic device - since only a material fixation is necessary, both are referred to and treated as tangible copies. 44 Thus, one should rather differentiate between the ways of dissemination: if an analogue copy (e.g. a normal book) is handed over from one person to another, the copy itself is in circulation. On the other hand, if a digital copy that is embodied in an electronic device (e.g. an e-book on a tablet) is to be transmitted to a second user, the copy remains tangible but is not in circulation - it is the data (in a transmittable form) of the e-book file, e.g. via download, that is being circulated. 45

b) The Court's decision

The Court comes to its conclusion by stating that the term 'original and copies' in Art. 4 InfoSoc Directive (the distribution right) refers only to copies that are put into circulation as physical objects. The CJEU derives this finding from the requirements of international law (Agreed Statements concerning Arts. 6 and 7 of the WCT). 46 Further, electronic and tangible dissemination should be treated differently. The Court refers here to the explanatory memorandum of the European Commission 47 and a variety of recitals in the InfoSoc Directive. 48 Whereas the citation of some recitals might be part of a routine repetition the Court uses to back its arguments (e.g. recitals 2, 5, 4, 9 and 10, according to which the Directive aims to respond to technical progress and strengthen the rights of the author), 49 recitals 28 and 29 in particular concern the issue at stake more specifically, since they address the exhaustion rule. The CJEU provides another argument on the interpretation of the distribution right according to its understanding of the exhaustion rule in Art. 4 InfoSoc Directive, namely that the distribution right should control only the initial circulation of a physical carrier. 50 These arguments, mainly based on the assumed intention of the legislator, are already known but not persuasive to everyone. 51 However, the result, which is applicable to all other traditional works (such as music or movies) regulated under the same directive, is convincing as such.
It confirms the systematic understanding of the distribution right and the right of communication to the public as partly practised at the national level. 52 Moreover, the finding is not very surprising considering that, in its previous decisions, the CJEU had already explicitly highlighted the distinctive nature of the InfoSoc Directive, at least in relation to the Software Directive. The Court had even indicated its intention to decide in a different manner than in its previous decisions regarding software. 53

Beyond analysing the wording of the InfoSoc Directive, the WCT and the explanatory memorandum of the European Commission (rules which were not initially intended to grapple with the great impact of digitalisation), 54 the CJEU at one point takes into account the values involved: rightly, the Court points out that the transfer of a book and the transmission of an e-book are not comparable. 55 If a teleologically balanced interpretation is considered the goal, this finding constitutes a sound argument. 56 But since the CJEU argues only that e-books do not lose quality and that second-hand markets therefore deal in perfect substitutes, such feared economic consequences could merely be the starting point for further distinctions. It should also be mentioned that it can be far easier to transfer a digital copy than an analogue copy. This goes along with the idea that digital copies reach a different public, since recipients worldwide could, at least in theory, be supplied via download. Most notably, analogue copies are economically characterised by scarcity - digital copies are not. One option for reconstructing a digital form of scarcity is the 'one copy one user' model, which enables only one recipient to use a digital copy. This in turn causes considerable practical hurdles, such as proving that the seller has incontestably deleted his or her copy. 57 This was also a serious issue in the post-UsedSoft era regarding software. 58

If these differences are convincing, one might ask why the same should not hold for software or lending. Admittedly, due to its special features, 59 software was and remains a disputed exotic phenomenon in copyright protection, and lending (especially public lending) creates challenges of its own kind. 60 But after all, software is afforded protection just like a written work, and lending, like every other form of exploitation, must deal with the barriers that exploitation brings along. 61 One can already see here that the effects of the Tom Kabinet decision go beyond the answer to the first question.

3. The impact on the exhaustion doctrine

The direct effect on exhaustion is apparent. Stating that e-book supply via download enabling permanent use falls only under the right of communication to the public entails that the exhaustion rule in Art. 4 InfoSoc Directive, which is linked to the distribution right, is not applicable. Moreover, Art. 3 InfoSoc Directive dictates that the right of communication to the public cannot be exhausted. But that is not all. It is generally acknowledged that a copyright-protected work must be embedded in a perceivable form (be it paper, sound waves, data or other media) to be protected. 62 Even if copyright protection, by its very nature as an intellectual property right, protects only intangible goods, 63 the exploitation rights are more or less linked to the perceivable form. The CJEU itself stated that the exhaustion rule refers to an object and not to the work as such. 64
As a logical consequence, the question arises as to what the object of reference for the exhaustion rule could be. This issue was raised even before the digital transformation. In Germany, the Federal Court of Justice (BGH) applied exhaustion to retransmission in the broadcasting process. 65 It was not until two decades later that the BGH slightly altered its view under the influence of international law. 66 In the end, three relevant subjects of exhaustion are conceivable regarding the dissemination of a copyright-protected work: 67 the physical carrier, the data, and the legal position. 68 When it comes to the transmission of digital copies via download, as in the present case, no physical carrier is involved and the particular data set is not so much 'transferred' as reproduced. 69 This leaves, as the possible subject of exhaustion, the transfer of the legal position which enables the natural use of the work. 70 The same is even more apparent with streaming technology, where data is reproduced only temporarily. 71

The CJEU's decision on the distribution right also defines the subject of exhaustion: since the exhaustion rule in the InfoSoc Directive is de lege lata linked to the distribution right - which the CJEU acknowledges by not answering questions two to four here - exhaustion can only limit what the distribution right covers. According to the decision at stake, this is only the transfer of physical carriers. This view might already have been indicated in the Allposters decision. 72 However, the CJEU now leaves no room for doubt. This becomes especially clear from the Court's argument on the distribution right based on an understanding of the exhaustion rule in Art. 4 InfoSoc Directive, which according to the Court covers only the circulation of physical copies. 73 This finding also has consequences for the second user: since transferring the legal position is not possible, permission to use a digital work in its natural way (usually by reproductions) cannot be obtained via exhaustion. Even though this finding may have the consequence that exhaustion applies only to the transfer of physical goods, it should also be noted that no definite answer was given to the general question whether the legal position to use the work could be passed on to a second user. Possible solutions beyond the exhaustion doctrine are already under discussion 74 and will probably be developed further.

4. The larger impact

Beyond the specifics of the case, the Tom Kabinet decision has far-reaching effects which are worth taking into consideration. Even though different directives and regulations exist in copyright law, they are all united by the goal of achieving a balance of the interests involved when an intellectual creation is awarded copyright protection. Thus, all of these rules should be seen and treated as one copyright law. Under this premise, it is generally required - and desired by the CJEU 75 - that terminology and concepts contained in different directives are interpreted in the same way. 76 With such an approach, the systematic coherence and unity of the European Union legal order can and should be achieved. 77 When assessing the transmission and exhaustion of digital copies, the interpretation of at least three terms and concepts is crucial: 'copy', 'distribution' and 'sale'. 78
a) InfoSoc Directive vs. Software Directive and Rental and Lending Directive

Obviously, the question arises how the findings in Tom Kabinet regarding the InfoSoc Directive square not only with the UsedSoft decision as regards the Software Directive but also with the Stichting decision as regards the Rental and Lending Directive. Whereas in UsedSoft and Stichting the Court considered the term 'copy' to encompass digital and analogue variants, 79 it states in Tom Kabinet that under the InfoSoc Directive 'copy' is to be understood only as an analogue copy. 80 As a consequence, the distribution right also differs between the Software and the InfoSoc Directives: the latter covers only the transfer of physical copies, 81 while the Software Directive also encompasses transmission via download. 82 Although in Stichting the exploitation concerned lending as opposed to distribution, 83 the very same exploitation right covers both the temporary transfer of physical carriers and downloading. 84 One key argument in UsedSoft was the extended interpretation of the term 'sale' in Art. 4 Software Directive. 85 Because the Court did not answer the second question in Tom Kabinet, which was explicitly directed at the application of the exhaustion rule, it is not clear whether the CJEU also aims at a deviating interpretation of the term 'sale' in the InfoSoc Directive. However, if one day the CJEU needs to take a position on this matter, the new directives - 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services and 2019/771 on certain aspects concerning contracts for the sale of goods - should also be taken into account.

Obviously, the three decisions can hardly be united under a common abstract idea. Upon closer inspection, however, it is not the Tom Kabinet decision that lacks doctrinal coherence. The application of the distribution right to downloads was already unconvincing in UsedSoft. 86 One might even struggle to follow the Stichting decision, since the wording of Art. 2 b and Art. 1 Rental and Lending Directive is likewise not really apt for the lending of digital copies. 87 It should further be added that the differing treatment of terms did not first arise with Tom Kabinet but was already present in Stichting, where the CJEU distinguished between lending rights and rental rights in the application of the same rule in the very same directive. 88

b) The legal principle of exhaustion

When it comes to the various applications of the exhaustion doctrine, matters get even more delicate. The exhaustion doctrine is widely acknowledged as a legal principle. 89 Over the years, it has been codified in law both in many member states at the national level 90 and ultimately also at the European level. One significant function of every legal principle is to treat comparable facts in a comparable way. 91 The incoherent treatment of exhaustion regarding software and traditional works is not in line with this idea: whereas the CJEU has acknowledged - not only in UsedSoft but also in subsequent decisions - that due to the exhaustion principle even the transfer of the legal position does not need the rightholder's consent, 92 the Court does not apply this view when it comes to traditional works. Although codified in separate directives, the different application of the very same doctrine is irritating, especially because the wording of these directives does not call for different interpretations.
To be more precise: even though the InfoSoc Directive might imply that the exhaustion rule cannot be applied to downloading, the wording of the Software Directive does not explicitly require its application in the same situation. 93 Moreover, the exhaustion doctrine had been applied for decades without being codified. 94 In German patent law this is still the case: here, the exhaustion principle is assumed to be rooted in customary law. 95 The exhaustion doctrine was thus a concept principally independent of any specific provisions. With its recent inconsistent application, the legal principle's inherent function of ensuring coherence is in danger not only of being reduced to a rule/exception relationship but of falling apart altogether.

Besides these theoretical concerns, one particular issue is the increasing relevance of hybrid products containing a combination of software and traditional works. 96 A good example is computer games, which are already an established part of the entertainment industry. These are regulated by both the Software and the InfoSoc Directives, because such games encompass both a computer programme and traditional works (e.g. storytelling and music). 97 Different rules for the same product are unlikely to increase legal certainty on how, or whether, to sell and resell such products. 98

c) Evaluation

How are the differences outlined above to be explained? The answer the CJEU gives in Tom Kabinet - which it has given before 99 - is quite formal: the legislator intended to treat digital and analogue copies in the same way as regards the Software Directive but not as regards the InfoSoc Directive. 100 The different directives may formally legitimise the Court's view. However, considering the whole picture, the divergent treatment of the same terminology, the same exploitation rights and the same limitations (exhaustion) contradicts the aforementioned function of coherence in copyright law. 101 Ultimately, this is not a mistake caused by the Tom Kabinet decision. It is rather the case that the previous decisions, UsedSoft and Stichting, were focused on particular market sectors and raised doctrinal issues 102 that could not be adopted generally. At a certain point this approach could no longer be maintained: that is when the CJEU made its Tom Kabinet decision.

IV. Conclusion and ways forward

In the end, the question remains whether the Tom Kabinet decision could fulfil the aforementioned high expectations. 103 The CJEU decided that reselling e-books via download requires the permission of the rightholder. Thus, from a practical point of view, the issue is solved. From a theoretical point of view, it is only partly solved. By interpreting the scope of the exclusive rights, the CJEU also drew consequences regarding the subject of the exhaustion rule in Art. 4 InfoSoc Directive. The way the CJEU chose to reach its ruling (the wording of rules and recitals which were implemented before the existence of many digital business models) may not offer great insights into the legal principle of exhaustion as such, but after all, this is the approach the CJEU is expected to take. Thus, in principle, the judgment is to be welcomed. However, this should not obscure the issue underlying the positive law: how far should the rightholder's exclusive right (be it the distribution right or the right of communication to the public) extend? One could start with the exhaustion doctrine and argue for releasing this principle, de lege ferenda, from the distribution right. 104
But maybe it is slowly becoming time to move away from this doctrine, including its dogmatic frame, and strike out in new directions. One way would be to revitalise the roots of exhaustion - a contractual construct according to which the rightholder implicitly consents to the forwarding and use of a copyright-protected work ('implied licence'). 105 Whether such implied consent regarding digital works can be inferred from the rightholder's own free will seems doubtful. Fully fledged consent can hardly be imputed, and consent in the sense of the 'one copy one user' model 106 seems too artificially construed. Moreover, the possibility of overriding a person's implied consent via an explicit contractual restriction (and ultimately technical measures) stands in the way of an effective mechanism. Another possibility under discussion would be an assessment under competition law, which limits the anticompetitive effects of copyright (over-)protection even where specific copyright limitations (like exhaustion) do not apply. 107 But here, doubts remain as to whether antitrust law and its requirements are really helpful and effective in this specific situation. 108 Since trade in physical carriers is decreasing, contracts governing access to copyright-protected works come more into focus. Whether the new directive regarding contracts for the supply of digital content and digital services 109 can help remains to be seen. At the very least, the interpretation of Art. 8 a Directive 2019/770 regarding 'normal use' 110 could provide new options for the future. It will be interesting to see whether, and how broadly, the contractual position will be interpreted and whether that could ultimately lead to any limiting effects on copyright law.
WASHINGTON (AP) — The House intelligence committee voted Friday to release transcripts of more than 50 interviews it conducted as part of its now-closed investigation into Russian election interference during the 2016 presidential campaign. Among those to be released are interviews with President Donald Trump’s eldest son, Donald Trump Jr., his son-in-law, Jared Kushner, his longtime spokeswoman, Hope Hicks, and his former bodyguard Keith Schiller. The committee also will release dozens of other transcripts of interviews with former Obama administration officials and numerous Trump associates, including Roger Stone, currently the subject of a grand jury investigation. The move to release the materials by the committee chairman, GOP Rep. Devin Nunes of California, a close Trump ally, will provide the public with 53 transcripts spanning thousands of pages of raw testimony as special counsel Robert Mueller continues his Russia investigation. But not all interviews conducted by the committee are being released, and there wasn’t a firm timetable Friday for when they will ultimately be made public. The interviews form the basis for the GOP-authored report released this year that concluded there was no coordination between Trump’s presidential campaign and Russian efforts to sway the election. Committee Democrats, who voted against approving the report, have disputed its findings. They say the investigation was shut down too quickly and that the committee didn’t interview enough witnesses or gather enough evidence. But Rep. Adam Schiff of California, the committee’s top Democrat, said some of the most important transcripts — six in total — are still being withheld. The withheld transcripts include separate interviews with Rep. Dana Rohrabacher, R-Calif., who has attracted attention for his pro-Russian statements, and Rep. Debbie Wasserman Schultz of Florida, who headed the Democratic National Committee when court papers say its computer systems were hacked by Russia. Conaway said those transcripts were being withheld as a “professional courtesy” extended to members of Congress who participated in the interviews with the understanding they would be confidential. Also being withheld are transcripts of closed hearings with former CIA Director John Brennan, former FBI Director James Comey and former National Security Agency Director Mike Rogers as well as the transcript for the committee’s business meeting when GOP members approved their final report. None of the transcripts, including those set for public release, has been provided to Mueller as part of his investigation, a move Democrats unsuccessfully pushed for on Friday. “We have suspicions that people testified before our committee falsely and committed perjury, and the special counsel is in the best position to determine, on the basis of the additional information he has, who might have perjured themselves,” Schiff said. But Conaway said Mueller hasn’t asked for access to the transcripts, and Republicans don’t want to be accused of trying to “skew” the investigation or obstruct justice by sending him materials he didn’t request. “He’ll ask for it if he wants to. He’s a big boy,” Conaway said, noting the special counsel will be able to review them once they’re public. The 53 transcripts approved for release will now go to the Office of the Director of National Intelligence for a declassification review. Conaway and Schiff said they didn’t know how long the review would take or when the transcripts would be released to the public.
Schiff said Republicans made clear that none of the transcripts, which largely don’t contain classified information, will be released until the declassification review is completed for all of them.
/* Copyright (c) 2010-2011 mbed.org, MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy of this software
 * and associated documentation files (the "Software"), to deal in the Software without
 * restriction, including without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all copies or
 * substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
 * BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 */

#ifndef USBMSD_H
#define USBMSD_H

/* These headers are included for child class. */
#include "USBEndpoints.h"
#include "USBDescriptor.h"
#include "USBDevice_Types.h"
#include "USBDevice.h"

/**
 * USBMSD class: generic class in order to use all kinds of block storage chips
 *
 * Introduction
 *
 * USBMSD implements the MSD protocol. It permits to access a memory chip (flash, SD card, ...)
 * from a computer over USB. But this class doesn't work standalone: you need to subclass it
 * and define the virtual functions which are called by USBMSD.
 *
 * How to use this class with your chip?
 *
 * You have to inherit and define some pure virtual functions (mandatory step):
 *   - virtual int disk_read(uint8_t* data, uint64_t block, uint8_t count): read one or more blocks
 *   - virtual int disk_write(const uint8_t* data, uint64_t block, uint8_t count): write one or more blocks
 *   - virtual int disk_initialize(): initialize the memory
 *   - virtual uint64_t disk_sectors(): return the number of blocks
 *   - virtual uint64_t disk_size(): return the memory size
 *   - virtual int disk_status(): return the status of the storage chip (0: OK, 1: not initialized, 2: no medium in the drive, 4: write protection)
 *
 * All function names are compatible with the FAT filesystem library, so you can imagine using your own class with
 * USBMSD and the FAT filesystem library in the same program. Just be careful, because there are two different parts which
 * will access the SD card. You can build a master/slave system using the disk_status method.
 *
 * Once these functions are defined, you can call connect() (at the end of the constructor of your class, for instance)
 * of USBMSD to connect your mass storage device. connect() will first call disk_status() to test the status of the disk.
 * If disk_status() returns 1 (disk not initialized), then disk_initialize() is called. After this step, connect() will collect information
 * such as the number of blocks and the memory size.
 */
class USBMSD: public USBDevice {
public:

    /**
    * Constructor
    *
    * @param vendor_id Your vendor_id
    * @param product_id Your product_id
    * @param product_release Your product_release
    */
    USBMSD(uint16_t vendor_id = 0x0703, uint16_t product_id = 0x0104, uint16_t product_release = 0x0001);

    /**
    * Connect the USB MSD device. Establish disk initialization before really connecting the device.
    *
    * @param blocking if true, wait until the device is configured
    * @returns true if successful
    */
    bool connect(bool blocking = true);

    /**
    * Disconnect the USB MSD device.
    */
    void disconnect();

    /**
    * Destructor
    */
    ~USBMSD();

protected:

    /*
    * read one or more blocks on a storage chip
    *
    * @param data pointer where read data will be stored
    * @param block starting block number
    * @param count number of blocks to read
    * @returns 0 if successful
    */
    virtual int disk_read(uint8_t* data, uint64_t block, uint8_t count) = 0;

    /*
    * write one or more blocks on a storage chip
    *
    * @param data data to write
    * @param block starting block number
    * @param count number of blocks to write
    * @returns 0 if successful
    */
    virtual int disk_write(const uint8_t* data, uint64_t block, uint8_t count) = 0;

    /*
    * Disk initialization
    */
    virtual int disk_initialize() = 0;

    /*
    * Return the number of blocks
    *
    * @returns number of blocks
    */
    virtual uint64_t disk_sectors() = 0;

    /*
    * Return memory size
    *
    * @returns memory size
    */
    virtual uint64_t disk_size() = 0;

    /*
    * To check the status of the storage chip
    *
    * @returns status: 0: OK, 1: disk not initialized, 2: no medium in the drive, 4: write protected
    */
    virtual int disk_status() = 0;

    /*
    * Get string product descriptor
    *
    * @returns pointer to the string product descriptor
    */
    virtual uint8_t * stringIproductDesc();

    /*
    * Get string interface descriptor
    *
    * @returns pointer to the string interface descriptor
    */
    virtual uint8_t * stringIinterfaceDesc();

    /*
    * Get configuration descriptor
    *
    * @returns pointer to the configuration descriptor
    */
    virtual uint8_t * configurationDesc();

    /*
    * Callback called when a packet is received
    */
    virtual bool EPBULK_OUT_callback();

    /*
    * Callback called when a packet has been sent
    */
    virtual bool EPBULK_IN_callback();

    /*
    * Set configuration of device. Add endpoints
    */
    virtual bool USBCallback_setConfiguration(uint8_t configuration);

    /*
    * Callback called to process class specific requests
    */
    virtual bool USBCallback_request();

private:

    // MSC Bulk-only Stage
    enum Stage {
        READ_CBW,     // wait a CBW
        ERROR,        // error
        PROCESS_CBW,  // process a CBW request
        SEND_CSW,     // send a CSW
        WAIT_CSW,     // wait that a CSW has been effectively sent
    };

    // Bulk-only CBW
    typedef struct {
        uint32_t Signature;
        uint32_t Tag;
        uint32_t DataLength;
        uint8_t  Flags;
        uint8_t  LUN;
        uint8_t  CBLength;
        uint8_t  CB[16];
    } PACKED CBW;

    // Bulk-only CSW
    typedef struct {
        uint32_t Signature;
        uint32_t Tag;
        uint32_t DataResidue;
        uint8_t  Status;
    } PACKED CSW;

    // state of the bulk-only state machine
    Stage stage;

    // current CBW
    CBW cbw;

    // CSW which will be sent
    CSW csw;

    // addr where data will be read or written
    uint32_t addr;

    // length of a reading or writing
    uint32_t length;

    // memory OK (after a memoryVerify)
    bool memOK;

    // cache in RAM before writing in memory. Useful also to read a block.
    uint8_t * page;

    int BlockSize;
    uint64_t MemorySize;
    uint64_t BlockCount;

    uint8_t _config_descriptor[32];

    void CBWDecode(uint8_t * buf, uint16_t size);
    void sendCSW (void);
    bool inquiryRequest (void);
    bool write (uint8_t * buf, uint16_t size);
    bool readFormatCapacity();
    bool readCapacity (void);
    bool infoTransfer (void);
    void memoryRead (void);
    bool modeSense6 (void);
    void testUnitReady (void);
    bool requestSense (void);
    void memoryVerify (uint8_t * buf, uint16_t size);
    void memoryWrite (uint8_t * buf, uint16_t size);
    void reset();
    void fail();
};

#endif
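// ---------------------------------------------------------------------------
// Usage sketch (not part of the original header): a minimal subclass backed by
// a small RAM buffer, showing how the pure virtual disk_* functions declared
// above might be implemented. The class name, block size and block count are
// illustrative assumptions only, not part of the library.
// ---------------------------------------------------------------------------
#include <string.h>

class RamDiskMSD : public USBMSD {
public:
    RamDiskMSD() : _initialized(false) {
        memset(_storage, 0, sizeof(_storage));
    }

protected:
    static const uint32_t BLOCK_SIZE  = 512;  // assumed block size
    static const uint32_t BLOCK_COUNT = 64;   // 32 KiB in total, for illustration

    virtual int disk_initialize() {
        _initialized = true;
        return 0;
    }

    virtual int disk_status() {
        return _initialized ? 0 : 1;  // 1 = not initialized (see doc above)
    }

    virtual uint64_t disk_sectors() { return BLOCK_COUNT; }
    virtual uint64_t disk_size()    { return (uint64_t)BLOCK_COUNT * BLOCK_SIZE; }

    virtual int disk_read(uint8_t* data, uint64_t block, uint8_t count) {
        if (block + count > BLOCK_COUNT) return 1;  // out of range
        memcpy(data, _storage + block * BLOCK_SIZE, (size_t)count * BLOCK_SIZE);
        return 0;
    }

    virtual int disk_write(const uint8_t* data, uint64_t block, uint8_t count) {
        if (block + count > BLOCK_COUNT) return 1;  // out of range
        memcpy(_storage + block * BLOCK_SIZE, data, (size_t)count * BLOCK_SIZE);
        return 0;
    }

private:
    bool _initialized;
    uint8_t _storage[BLOCK_COUNT * BLOCK_SIZE];
};

// Application code would then simply instantiate and connect:
//     RamDiskMSD msd;
//     msd.connect();  // blocks until the host has configured the device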
use liblumen_alloc::erts::term::Atom;

pub mod tc_3;

fn module() -> Atom {
    Atom::try_from_str("timer").unwrap()
}
The development of the medical obstetric-gynecological staff in the People's Republic of Bulgaria is characterized by a high rate of quantitative growth. Conversely, the index of the relative share of obstetricians-gynecologists with an acquired specialty has worsened; this index fell by almost 10% for the whole country during the last two decades. Regional differences are established, with the regions located in the western part of the country lagging behind in training staff. An attempt is made to determine the factors which cause the existing regional differences. Adequate measures are proposed to improve the postgraduate qualification of medical staff for obstetric-gynecological services in the country in the coming years.
The fallout from Brexit will hit Ireland's economy for another five years, one of the country's leading think-tanks has warned. The Economic and Social Research Institute (ESRI) repeated its warning that the Northern Ireland economy would be worst hit by the UK's split from the European Union. Research professor Kieran McQuinn said investment into the Republic has already slowed and businesses with an all-island basis, such as farmers and food processors, will be among those feeling the most pain. He added that tourism from Britain is being damaged due to a weak sterling and that unemployment would no longer fall as sharply as in recent years. "There's no doubt there will be opportunities," he said. "But if you factor in the overall impact of Brexit it will probably mean the Irish economy will be growing at a slightly slower rate over the next four to five years than if Britain had stayed part of the European Union." He added: "We think, of all the economies, Northern Ireland is going to suffer the most because of Brexit." The ESRI altered its forecasts on the Republic's economic growth because of Brexit. It dismissed the 26% economic growth that Ireland reportedly enjoyed last year and said the real figure was around 5.5%. It said Brexit would mean a slight cut in growth this year to 4.3% and again next year to under 4%. "Overall we are still quite positive but our output is a little bit down than if Brexit had not occurred," Mr McQuinn said. "Ultimately you continue to see a fall-off in unemployment in Ireland but that fall-off is probably happening at a slower pace than if Brexit had not occurred," he said. The ESRI offered a dismal scenario for the Republic and Northern Ireland last year when it examined the possible ramifications of the UK leaving Europe. It talked about up to 3 billion euro (£2.5 billion) in lost trade and higher energy prices every year, and it is running new analysis in an attempt to advise the Government in time for next month's budget on the longer-term impacts of Brexit. It is estimated both the Republic and Northern Ireland would suffer a 20% drop in trade. On the positive side, Mr McQuinn said there are significant implications from Brexit for the City of London. "That could obviously spark opportunities for certain relocation to come and happen here in Ireland," he said. The ESRI also examined the housing market and Central Bank rules on mortgage lending and repeated its call for the limits to take into account the state of the market. It said it would take up to four years for the full impact of lending restrictions to play out, but its projections pointed to house prices being 3.5% lower and ultimately a 5% fall in the number of houses built. Press Association
Media caption: Adam Smith on Jeremy Hunt: "He didn't really have that much of a relationship with either of the Murdochs or the chief executive of News International"

Culture Secretary Jeremy Hunt sent a memo to David Cameron voicing support for News Corp's bid for BSkyB before he was put in charge of dealing with it, the Leveson Inquiry has heard. Mr Hunt said the UK's media sector "would suffer for years" if the deal was blocked, according to the memo. Number 10 said the memo was "entirely consistent" with Mr Hunt's public view. But Labour say Mr Hunt was not an "impartial arbiter" on the deal, and that he should resign. In the memo - written on 19 November 2010, when Business Secretary Vince Cable was in charge of overseeing the BSkyB bid - Mr Hunt said News Corp executive James Murdoch was "furious" about Mr Cable's handling of the matter. He told the Prime Minister it would be "totally wrong to cave in" to opponents of the deal and said the UK had the chance to "lead the way" if the BSkyB bid went ahead. Labour deputy leader Harriet Harman said: "It is clear from today's evidence that David Cameron gave responsibility to Jeremy Hunt for deciding on the BSkyB bid when he knew only too well that the Culture Secretary was actively supporting the bid. "The Prime Minister should never have given him the job." BBC political editor Nick Robinson said the memo was "ammunition for the culture secretary's critics who say his mind was made up to give the Murdochs what they wanted." Mr Cable lost responsibility for overseeing the BSkyB bid when his private anti-Murdoch views became public, our correspondent adds. "Yet now we learn that the man who replaced him, Culture Secretary Jeremy Hunt, had expressed equally strident - albeit pro rather than anti Murdoch - views in private in a draft memo to the prime minister, before he took over responsibility for the bid." The memo was sent to Mr Hunt's ex-special adviser Adam Smith on 19 November 2010 before it went to Mr Cameron. Mr Smith resigned in April after saying his emails to and from News Corp lobbyist Fred Michel over the firm's bid to take over BSkyB went too far.

Media caption: Harriet Harman: "This memo is more evidence that David Cameron should never have appointed Jeremy Hunt to decide on the Murdoch bid"

He said the "content and extent" of his dealings with Mr Michel had not been authorised by the culture secretary. Downing Street confirmed the prime minister received the memo. But a spokesman said that Mr Hunt had previously said there were no grounds for blocking the deal over competition requirements. Mr Hunt has resisted Labour calls to quit over claims his relationship with Rupert Murdoch's company was too close, and is due to give his own account of events to the inquiry into media ethics on 31 May. Mr Hunt's memo - read out at Thursday's inquiry session - expressed concerns that referring the bid to Ofcom could leave the government "on the wrong side of media policy". The question now for the prime minister is why Jeremy Hunt was chosen to oversee the bid in an impartial, quasi-judicial role if he had taken such a clear position already. Counsel to the inquiry, Robert Jay QC, read from the memo during questioning of Mr Smith. Mr Jay suggested to Mr Smith that Mr Hunt had drafted the memo and sent it to Mr Smith to check for mistakes. In his evidence to the inquiry the ex-special adviser said Mr Hunt was not close to News Corporation, and Mr Hunt has denied News Corp had any influence with his office.
Earlier, Mr Michel said his dealings with Mr Smith were not "inappropriate". But he denied government claims he had exaggerated the closeness of his relationship with Mr Smith. "I think my emails, as they were internal emails, were an accurate account of the conversations I have had," he said. In his witness statement published by the inquiry, Mr Michel says he did not have "any direct conversation" with Mr Hunt relating to the BSkyB bid beyond his attendance at two formal meetings. But the statement confirms the men had exchanged numerous text messages, some of which Mr Michel said were "jokey". Mr Michel told the inquiry references to conversations with "JH" in his emails with Mr Smith were "shorthand" for the culture department. Later, he said he believed Mr Smith was representing the culture secretary in the same way he was representing News Corp. Mr Michel told the inquiry: "I was never of the opinion that it was inappropriate to at least try to put the arguments to or make representations to these officers." News Corp unveiled its bid for BSkyB in June 2010 but abandoned it in July 2011 amid outrage over the phone-hacking scandal at its News of the World newspaper. At the time of the correspondence between Mr Smith and Mr Michel, the culture secretary had been given a "quasi-judicial" role to decide whether the proposed BSkyB purchase should be referred to the Competition Commission for final approval. Adam Smith was Culture Secretary Jeremy Hunt's special adviser. He resigned in April after controversy over the content and extent of his contact with News Corporation when the company was bidding for broadcaster BSkyB. He said his "activities at times went too far" and created the perception that the firm "had too close a relationship with the department". Frederic Michel is the senior vice-president of government affairs and public policy, Europe, for News Corporation. His email exchanges with Mr Smith were discussed in April during evidence at the Leveson Inquiry. In an earlier written submission to the inquiry he suggested that he never had direct contact with Mr Hunt, despite giving the impression in emails that he had. The inquiry heard Mr Michel made 191 telephone calls and sent 158 emails and 799 texts to Mr Hunt's team, 90% of which were exchanges with Mr Smith. Mr Jay said Mr Smith sent 257 text messages to Mr Michel between 28 November 2010 and 11 July 2011. Mr Michel's witness statement revealed in May 2010 both men "bumped into each other" at a London hospital where their wives were about to give birth and "shared a night of anxiety". But after Mr Hunt was handed responsibility for the BSkyB bid in December 2010, the culture secretary said in a text message exchange that all business contact "now needs to be through official channels until decision made...". On 3 March 2011, Mr Hunt told MPs he was minded to accept the BSkyB takeover after News Corp offered to spin off Sky News. In response to the France-born lobbyist's text that he was "great at the Commons", Mr Hunt replied: "Merci. Large drink tonight!" Mr Michel contacted Mr Hunt by text message later in March 2011 after his appearance on Andrew Marr's BBC programme to say he had been "very good". Mr Hunt replied: "Merci hopefully when consultation over we can have a coffee like the old days!" When News Corp withdrew the BSkyB bid, Mr Hunt's response to a text from Mr Michel said "It has been the most challenging time for all of us... would be great to catch up when the dust has settled."
Mr Jay referred to an email in which Mr Michel called on the secretary of state, via Mr Smith, to "show some backbone" and dismiss Ofcom's calls for concessions. Mr Michel told the inquiry: "It's my English - I might use words in a more melodramatic way than I intended." Under earlier questioning, Mr Michel agreed Mr Hunt was "keeping an open mind" about the bid but when asked whether he had been supportive of it, he replied: "I can't say."
import pandas as pd
import math
import numpy as np
from parameter_cal import cf
from dtw import dtw
from parameter_cal.utils import get_fact_align, get_SS1, get_SS2, get_reverse_dict, calculate_event, get_link_graph, load_data, exp_decay, edge_matching
from parameter_cal.utils import plot_warped_signals, cal_warped_signals, get_upslope_endings, get_downslope_endings
import matplotlib.pyplot as plt
from downsample.utils import get_true_aligned, get_group_number, get_k_accuracy, get_matched_graph, connect_edges
from debug.dbd_cf import debug_file, debug_line


def norm(x, y):
    # distance over the signal value and the two slope features
    return math.fabs(x[1] - y[1]) + math.fabs(x[2] - y[2]) + math.fabs(x[3] - y[3])


y_list = load_data(debug_file, debug_line)
query, reference = cal_warped_signals(y_list, 'right')
reference['upslope'] = 0
reference['downslope'] = 0

# plot warped signals
xvals, yinterp = plot_warped_signals(reference, query, cf.ds_time)

# calculate the corresponding point pairs; note that DataFrame.drop returns a
# copy, so the result must be reassigned for the columns to actually be removed
query = query.drop('shift', axis=1)
query = query.drop('t', axis=1)
query2 = pd.DataFrame({'t': xvals, 'q': yinterp})
query2['close_index'] = 0
query2['upslope'] = 0
query2['downslope'] = 0
true_align_dict = get_true_aligned(cf.ds_time, query, query2)
group_num_dict = get_group_number(true_align_dict, query)
plt.show()

raw_reference_uslope, reference_upslope = get_upslope_endings(reference['q'], cf.refer_percent)
raw_query_uslope, query_upslope = get_upslope_endings(query2['q'], cf.query_percent)
raw_reference_downslope, reference_downslope = get_downslope_endings(reference['q'], cf.refer_percent)
raw_query_downslope, query_downslope = get_downslope_endings(query2['q'], cf.query_percent)
rising_edge_grps = edge_matching(reference, query2, reference_upslope, query_upslope)
down_edge_grps = edge_matching(reference, query2, reference_downslope, query_downslope)
rising_edge_grps = connect_edges(rising_edge_grps, raw_reference_uslope)
get_matched_graph(rising_edge_grps, down_edge_grps, reference, query2, -3)
calculate_event(rising_edge_grps, reference, query2, True)
calculate_event(down_edge_grps, reference, query2, False)

d, cost_matrix, acc_cost_matrix, path = dtw(reference[['t', 'q', 'upslope', 'downslope']].values,
                                            query2[['t', 'q', 'upslope', 'downslope']].values, dist=norm)
get_link_graph(reference, query2, path, -3, 'Downsampled signal with EventDTW', '(a) EventDTW')
fact_align_dict = get_fact_align(path)
reverse_dict = get_reverse_dict(path)
print('group = ' + str(get_k_accuracy(true_align_dict, fact_align_dict, group_num_dict)))
print("SS1 of dtw is " + str(get_SS1(fact_align_dict, cf.ds_time)))
print("SS2 of dtw is " + str(get_SS2(fact_align_dict, reverse_dict, cf.ds_time)))
/*
 * Copyright 2017 Google
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#import <Foundation/Foundation.h>

@interface FRepoInfo : NSObject <NSCopying>

@property(nonatomic, readonly, strong) NSString *host;
@property(nonatomic, readonly, strong) NSString *namespace;
@property(nonatomic, strong) NSString *internalHost;
@property(nonatomic, readonly) bool secure;

- (id)initWithHost:(NSString *)host
          isSecure:(bool)secure
     withNamespace:(NSString *)namespace;

- (NSString *)connectionURLWithLastSessionID:(NSString *)lastSessionID;
- (NSString *)connectionURL;
- (void)clearInternalHostCache;
- (BOOL)isDemoHost;
- (BOOL)isCustomHost;

- (id)copyWithZone:(NSZone *)zone;
- (NSUInteger)hash;
- (BOOL)isEqual:(id)anObject;

@end
Sorafenib-induced psoriasiform eruption in a patient with metastatic thyroid carcinoma. Sorafenib is a multikinase inhibitor that blocks tumor cell proliferation and angiogenesis and is used for the treatment of advanced renal cell carcinoma, unresectable hepatocellular carcinoma, and other solid tumors. Various dermatologic side effects have been reported, most notably a hand-foot-skin reaction (HFSR). This is a case of a sorafenib-induced psoriasiform eruption in a patient with metastatic thyroid carcinoma. This patient also developed cutaneous squamous cell carcinoma and HFSR in association with sorafenib. To the authors' knowledge, a psoriasiform eruption due to sorafenib has not been reported in the literature and has important therapeutic implications.
import React, { useState } from 'react';
import firebase from 'firebase/app';
import DeleteIcon from '@material-ui/icons/Delete';
import EditIcon from '@material-ui/icons/Edit';
import Button from '@material-ui/core/Button';
import { Typography } from '@material-ui/core';
import { makeStyles, Theme } from '@material-ui/core/styles';
import useNotify from '../../hooks/useNotify';
import SpeedDial, { Action } from '../../components/SpeedDial/SpeedDial';
import Modal from '../../components/Modal/Modal';
import Input from '../../components/Input/Input';

const useStyles = makeStyles((theme: Theme) => ({
  title: {
    width: 'calc(100% - 40px)',
    minWidth: '300px',
    margin: '20px'
  },
  delete: {
    margin: '20px',
    color: theme.palette.error.main,
    float: 'right',
  },
  success: {
    margin: '20px',
    color: theme.palette.success.main,
    float: 'right',
  },
  cancel: {
    margin: '20px',
    color: theme.palette.grey[400],
    float: 'right',
  },
  modal: {
    padding: '15px',
    display: 'flex',
    alignItems: 'center',
    justifyContent: 'center',
    flexDirection: 'column',
  },
  buttons: {
    width: '100%',
    float: 'right',
    padding: '30px 10px 0px 10px',
  },
}));

const RenderCollectionEdit = ({ activeCollection, user, updateCollection, removeCollection }: any) => {
  const [open, setOpen] = useState(false);
  const [edit, setEdit] = useState(false);
  const [disabled, setDisabled] = useState(false);
  const [name, setName] = useState(activeCollection?.name || '');
  const classes = useStyles();
  const notify = useNotify();

  React.useEffect(() => {
    setName(activeCollection?.name || '');
  }, [activeCollection]);

  const clean = () => [setOpen, setEdit, setDisabled].map((e) => e(false));

  const del = async () => {
    setDisabled(true);
    if (!Boolean(activeCollection?.docId) || !Boolean(user?.isSignedIn) || !Boolean(user?.uid)) return clean();
    await firebase
      .firestore()
      .collection('users')
      .doc(user.uid)
      .collection('collections')
      .doc(activeCollection.docId)
      .delete()
      .then(() => {
        removeCollection();
        notify({ content: `Collection deleted`, severity: 'success' });
      })
      .catch(() => {
        notify({ content: `Failed to delete collection`, severity: 'error' });
      });
    clean();
  };

  const update = async () => {
    setDisabled(true);
    if (
      !Boolean(activeCollection?.docId) ||
      !Boolean(user?.isSignedIn) ||
      !Boolean(user?.uid) ||
      !Boolean(name?.trim()) ||
      name?.trim() === activeCollection?.name
    )
      return clean();
    // Update collection information
    await firebase
      .firestore()
      .collection('users')
      .doc(user.uid)
      .collection('collections')
      .doc(activeCollection.docId)
      .update({
        name: name?.trim(),
      })
      .then(() => {
        updateCollection({ name: name?.trim() });
        notify({ content: `Collection updated`, severity: 'success' });
      })
      .catch(() => {
        notify({ content: `Failed to update collection`, severity: 'error' });
      });
    clean();
  };

  const actions: Action[] = [
    {
      name: 'Delete Collection',
      icon: <DeleteIcon className={classes.delete} />,
      onClick: () => setOpen(true),
    },
    {
      name: 'Edit Details',
      icon: <EditIcon />,
      onClick: () => setEdit(true),
    },
  ];

  return (
    <div>
      <SpeedDial actions={actions} />
      <Modal open={open} setOpen={setOpen}>
        <div className={classes.modal}>
          <Typography className={classes.title} variant='h4'>
            Are you sure you want to delete {activeCollection?.name ? <strong>{activeCollection?.name}</strong> : 'this collection'}?
          </Typography>
          <div className={classes.buttons}>
            <Button disabled={disabled} onClick={del} variant='outlined' color='inherit' className={classes.delete}>
              Delete
            </Button>
            <Button disabled={disabled} onClick={() => setOpen(false)} variant='contained' color='inherit' className={classes.cancel}>
              Cancel
            </Button>
          </div>
        </div>
      </Modal>
      <Modal open={edit} setOpen={setEdit}>
        <div className={classes.modal}>
          <Input
            className={classes.title}
            onChange={(event: any) => setName(event.target.value)}
            type='text'
            label='Collection Name'
            value={name}
          />
          <div className={classes.buttons}>
            <Button disabled={disabled} onClick={update} variant='outlined' color='inherit' className={classes.success}>
              Update
            </Button>
            <Button disabled={disabled} onClick={() => setEdit(false)} variant='contained' color='inherit' className={classes.cancel}>
              Cancel
            </Button>
          </div>
        </div>
      </Modal>
    </div>
  );
};

export default RenderCollectionEdit;
import home from '../home';

describe('quakes data home', () => {
  it('should return the initial state', () => {
    // @ts-ignore
    expect(home(undefined, {})).toEqual({ readyStatus: 'invalid', err: null, list: [] });
  });

  it('should handle QUAKES_REQUESTING', () => {
    expect(
      home(undefined, { type: 'QUAKES_REQUESTING', err: null, data: [] })
    ).toEqual({ readyStatus: 'request', err: null, list: [] });
  });

  it('should handle QUAKES_FAILURE', () => {
    expect(
      home(undefined, { type: 'QUAKES_FAILURE', err: 'Oops! Something went wrong.', data: [] })
    ).toEqual({ readyStatus: 'failure', err: 'Oops! Something went wrong.', list: [] });
  });

  it('should handle QUAKES_SUCCESS', () => {
    expect(
      home(undefined, { type: 'QUAKES_SUCCESS', err: null, data: [{ id: '1', name: 'Welly' }] })
    ).toEqual({ readyStatus: 'success', err: null, list: [{ id: '1', name: 'Welly' }] });
  });
});
A biopsy is carried out during minimally invasive surgery to determine the status of a suspicious lesion. Since the suspicious lesions must be visible to the surgeon, these biopsies are generally taken in a later stage of a disease. The biopsies are then sent to a pathologist to investigate target tissue sections. The outcome thus depends on local tissue samples that may or may not represent the actual disease stage in the tissue. Optical biopsy is an alternative method, in which in-vivo optical technology is used to determine whether the disease has affected the tissue. This method also enables the diagnosis of the disease at an early stage. Light can interact with the tissue in a number of ways, including elastic and inelastic (multiple or single) scattering, reflection at boundary layers and absorption, and can for instance lead to fluorescence and Raman scattering. All of these can be utilized to measure any abnormal change in the tissue. This is beneficial to the patient, because no tissue is removed and an analysis can be performed in real time on the spot at all necessary locations. Furthermore, automatic diagnosis would save a lot of time for the patient as well as for the surgeon, who can diagnose and treat the person instead of waiting for pathology results.

An optical biopsy device must fulfill two requirements to be useful. Firstly, it must be able to scan a significant area within a limited time. Secondly, it must have a high sensitivity and specificity. Currently, various optical methods have been proposed for cancer detection. The available methods capable of screening larger areas (in general, non-point-like methods) have high sensitivity but rather low specificity; hence these methods produce a lot of false positives. Methods that have a much higher specificity are in general point-like measuring methods. These methods can give a good diagnosis but are not suited to scanning significant areas in a short period of time.

To fulfill both of the above-mentioned requirements, two different optical devices are required: one based on “camera”-like imaging capable of viewing larger areas, and another based on “microscope”-like imaging capable of viewing tissue at the cellular level. It is apparent that biopsy procedures would be more efficient and effective if a single optical biopsy device could switch between two different views of a target site without being removed from the patient. Although combining camera and microscope functions in one device has been described in patent application US20040158129, the two optical modalities are still separate entities placed alongside each other. This results in rather bulky devices. Since the width of the device is of utmost importance for minimally invasive procedures, solutions such as the one described in US20040158129 may not be preferable. It would therefore be advantageous to have an optical biopsy device which does not have the disadvantage described above, and more particularly a compact optical biopsy device that makes both camera-like (macroscopic) and microscope-like imaging possible.

Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. Features from the dependent claims may be combined with features of the independent claims and with features of other dependent claims as appropriate and not merely as explicitly set out in the claims.
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"

	"github.com/eriktate/jump/svc"
)

type options struct {
	Help   bool
	Back   bool
	Alias  string
	Clean  bool
	Add    bool
	Remove bool
	Path   string
	Target string
}

func printError(err error) {
	fmt.Printf("echo \"%s\"", err)
}

func cd(path string) {
	fmt.Printf("cd %s", path)
}

func main() {
	opts, err := parseArgs(os.Args)
	if err != nil {
		printError(err)
		os.Exit(1) // don't continue with half-parsed options
	}

	envPaths := strings.Split(os.Getenv("JUMP_PATH"), ":")
	j := svc.NewJumpSvc(envPaths)

	if opts.Target != "" {
		path, err := j.Jump(opts.Target)
		if err != nil {
			printError(err)
			os.Exit(1)
		}

		cd(path)
	}
}

func parseArgs(args []string) (options, error) {
	var opts options

	for _, arg := range args {
		parts := strings.Split(arg, "=")
		switch parts[0] {
		case "-b", "--back":
			opts.Back = true
		case "-h", "--help":
			opts.Help = true
		case "-l", "--alias":
			if len(parts) > 1 {
				opts.Alias = parts[1]
			} else {
				return opts, errors.New("You must provide an alias")
			}
		case "-a", "--add":
			if len(parts) > 1 && parts[1] != "" {
				opts.Alias = parts[1]
			}
			opts.Add = true
		case "--clean":
			opts.Clean = true
		case "--remove":
			if len(parts) > 1 && parts[1] != "" {
				opts.Path = parts[1]
			}
			opts.Remove = true
		default:
			if parts[0][0] == '-' {
				return opts, fmt.Errorf("unrecognized flag: %s", parts[0])
			}
			opts.Target = parts[0]
		}
	}

	return opts, nil
}
package org.jboss.fuse.wsdl2rest.impl.codegen;

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

import org.jboss.fuse.wsdl2rest.EndpointInfo;
import org.jboss.fuse.wsdl2rest.MethodInfo;
import org.jboss.fuse.wsdl2rest.ParamInfo;

public class SpringRestClassGenerator extends ClassGeneratorImpl {

    public SpringRestClassGenerator(Path outpath) {
        super(outpath);
    }

    public SpringRestClassGenerator(Path inpath, String source, Path outpath) {
        super(inpath, source, outpath);
    }

    public SpringRestClassGenerator(Path inpath, String source, Path outpath, boolean domainSplit) {
        super(inpath, source, outpath, domainSplit);
    }

    @Override
    protected String getClassFileName(String className) {
        return className + "Controller";
    }

    @Override
    protected void writeImports(PrintWriter writer, EndpointInfo clazzDef) {
        writer.println("import org.springframework.web.bind.annotation.RestController;");
        writer.println("import org.springframework.web.bind.annotation.GetMapping;");
        writer.println("import org.springframework.web.bind.annotation.PutMapping;");
        writer.println("import org.springframework.web.bind.annotation.DeleteMapping;");
        writer.println("import org.springframework.web.bind.annotation.PostMapping;");
        writer.println("import org.springframework.web.bind.annotation.PathVariable;");
        super.writeImports(writer, clazzDef);
    }

    @Override
    protected void writeServiceClass(PrintWriter writer, EndpointInfo clazzDef) throws IOException {
        String pathName = clazzDef.getClassName().toLowerCase();
        writer.println("@RestController(\"/" + pathName + "/\")");
        super.writeServiceClass(writer, clazzDef);
    }

    @Override
    protected void writeMethod(PrintWriter writer, EndpointInfo clazzDef, MethodInfo minfo) throws IOException {
        List<String> resources = minfo.getResources();
        if (minfo.getPreferredResource() != null) {
            resources = new ArrayList<String>();
            resources.add(minfo.getPreferredResource());
        }
        if (resources != null) {
            String httpMethod = minfo.getHttpMethod().substring(0, 1).toUpperCase()
                    + minfo.getHttpMethod().substring(1, minfo.getHttpMethod().length()).toLowerCase();
            writer.print("\t@" + httpMethod);

            StringBuilder path = new StringBuilder();
            //int loc = resources.size() >= 2 ? 1 : 0;
            //for (int i = loc; i < resources.size(); i++) {
            path.append(resources.get(0));
            //}
            writer.print("Mapping(\"" + path.toString().toLowerCase());

            // Add path param
            String[] sourceParams = getSourceMethodParams(minfo.getMethodName());
            if (minfo.getParams().size() > 0) {
                ParamInfo pinfo = minfo.getParams().get(0);
                if (hasPathParam(minfo, pinfo)) {
                    writer.print("/{" + getParamName(pinfo.getParamName(), 0, sourceParams) + "}");
                }
            }
            writer.println("\")");
        }
        super.writeMethod(writer, clazzDef, minfo);
    }

    protected void writeParams(PrintWriter writer, MethodInfo minfo) {
        List<ParamInfo> params = minfo.getParams();
        String[] sourceParams = getSourceMethodParams(minfo.getMethodName());
        for (int i = 0; i < params.size(); i++) {
            ParamInfo pinfo = params.get(i);
            String type = pinfo.getParamType();
            String name = getParamName(pinfo.getParamName(), i, sourceParams);
            if (i == 0 && hasPathParam(minfo, pinfo)) {
                writer.print("@PathVariable(\"" + name + "\") ");
                writer.print(getNestedParameterType(pinfo) + " " + name);
            } else if (getNestedParameterType(pinfo) != null) {
                writer.print(i == 0 ? "" : ", ");
                writer.print(type + " " + name);
            }
        }
    }

    private boolean hasPathParam(MethodInfo minfo, ParamInfo pinfo) {
        String httpMethod = minfo.getHttpMethod();
        boolean pathParam = httpMethod.equals("GET") || httpMethod.equals("DELETE");
        return pathParam && getNestedParameterType(pinfo) != null;
    }
}
def _write(self, lod, pos, data):
    # `lod` and `pos` are part of the write interface but are unused here;
    # only the payload is forwarded to the reporter.
    self._report(data)
/** * Tests that closing the issuer aborts expiry notifications */ public void testCloseAbortsExpiry() { registerMockListener(); issuer.issuePermit("permit"); issuer.close(); listener.validate(LIFETIME * 2); }
DISTRIBUTIVE POWER MIGRATION AND MANAGEMENT ALGORITHM FOR CLOUD ENVIRONMENT

In cloud computing, resources are provided as services over the internet on an on-demand basis. Resources such as servers, networks, storage and applications are dynamically scaled and virtualized. The demand on each virtual machine in a cloud computing environment grows gradually over time, so it is necessary to manage the resources under each cluster to match the demand, maximizing total returns by minimizing the cost of power consumption. This cost can be reduced by applying minimal virtual design, live migration and variable resource management, but the traditional way of scheduling does not meet these requirements. We therefore introduce the distributive power migration and management algorithm for the cloud environment, which uses resources effectively and efficiently while ensuring minimal use of power. The proposed algorithm performs computation more efficiently in a scalable cloud computing environment. The results indicate that the algorithm reduces the power consumed to execute services by up to 30%.

INTRODUCTION

Cloud computing is an emerging technology that hosts applications and services over the internet as a service (). Nowadays, customers pay for computing services on a demand basis under a "pay-per-use" model over the internet, much as household utilities such as gasoline and electricity are bought on a need basis. There is no single accepted definition of cloud computing; we will use the definition of the National Institute of Standards and Technology, US Department of Commerce (NIST) (Mell and Grance, 2011):

"Cloud computing is a model for enabling ubiquitous, convenient, on demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction"

One major thing to understand is that cloud, cluster and grid computing are not the same; there are notable differences among them (). Cluster computing is tightly coupled, whereas grid computing is not. The computers in a cluster are placed in a single location, but in cloud computing they are disseminated over the MAN (). Figure 1, extracted from (), explains the relationship among these three computing models. The hallmark of cloud computing is resource sharing, which leads to many challenges around resource utilization and safeguarding. Soon enough, the IT industry will be dominated by cloud computing, because user applications are placed as web services over the internet; the whole user machine can be put into virtual machines accessed through a terminal, according to the user's demand ().

STATE OF THE ART CLOUD TECHNOLOGY

Cloud computing has become an inevitable and indispensable part of IT infrastructure, and its growth is directly proportional to the amount of power utilized for its working. One study states that data centers devour 0.5% of the world's total electricity usage and, if present-day demand continues, this is anticipated to increase exponentially by 2020. Servers and their cooling units burn up nearly 1.2% of the total U.S. energy utilization, and this figure doubles every 5 years (;Von ).
As a result, it is imperative to improve the efficiency and sustainability of the resources used in cloud computing. Figure 2, constructed from various surveys and reports, reveals the danger surrounding the world from both an economic and an environmental point of view, summarizing facts about CO2 and cost factors. Having realized the threat and the constraints, modern-day researchers have already started working in this area, and a few algorithms and methodologies have been drafted for saving power. Although a great deal of research has been done in the area of cloud computing, the power-saving point of view has not been given much focus, and this motivated the authors to work out a solution.

Resource administration can be achieved through virtualization in cloud computing. In a cloud environment, hypervisors combine a number of unconnected physical machines into a virtualized environment that requires fewer physical resources than before. With a huge number of nodes connecting to the cloud, power consumption rises over a thousand megawatts, so it is essential to develop an efficient cloud computing system that curtails power consumption. Combining a green computing framework with the cloud requires a new set of protocols that carry cloud computing development forward by means of data center assembly and administration. The architecture provided in this study helps reduce the power spent by finding a renewable and consistent energy source within the cloud environment itself.

The rest of the study is organized as follows. In chapter 2, we discuss the related state-of-the-art work on power management techniques; the proposed architecture and algorithm follow in chapters 3 and 4. Chapter 5 shows the outcomes of simulating the algorithm using CloudSim, and chapter 6 concludes the study with the challenges faced and future work.

RELATED WORKS

The following research has been done in the area of interest. Beloglazov and Buyya presented a distributed solution for optimizing power efficiently with their "Minimization of Migration" algorithm. They studied the placement of VMs in datacenters, considered how the SLA may be violated as a consequence of power savings, and redefined the power-saving problem around minimal SLA violation; the algorithm was evaluated using CloudSim ().

Ramesh and Krishnan have done extensive research in the area of resource sharing for cloud and grid computing. They proposed an algorithm for better performance with good resource utilization as well, but power was not considered in their research.

The paper "Managing server energy and operational costs in hosting centers" () focuses on a control-theoretic feedback technique, queue prediction and the combination of both, which help to pattern dynamic resource provisioning. A virtualized environment is not taken into consideration.

A Green Cloud Framework () was described that helps optimize power utilization in the cloud environment, exploiting the fact that placing multiple VMs on a single host reduces the number of live hosts and thereby curtails power usage. The authors also suggested managing the VM image size with respect to green computing.

Centralized provisioning is a conventional model for monitoring the performance and attributes of each VM and its nodes. In centralized provisioning, the verdict depends largely on the overall usage of the resources.
The administrator accesses all the resources shared by the centralized system and makes decisions with the help of the available facts and data to reach the desired goals. This works only while the number of VMs remains countable, and becomes tougher as the number of nodes increases.

Fig. 2. Survey summary graph

Calheiros et al. () use a capacity planning technique to achieve server consolidation and reduce energy consumption in a web-service server cluster environment. CPU-utilization-based and queue-based monitoring approaches are used to estimate the resource capacities required to serve future requests. The study considers a server cluster (a non-virtualized environment) and uses a single application; the virtualized environment is not considered in the evaluation.

Considering the omission of power-related challenges in cloud technology, the need to address them is apparent.

ARCHITECTURE

In this section, we present a model for a VM Power Scheduler in the cloud environment. It provides an efficient way to migrate VMs from one server to another while minimizing power utilization, which helps reduce the operating cost of the cloud environment. This benefits the customer indirectly, who pays less to use the cloud model.

Fig. 3 presents the framework of our new green cloud computing environment, which has an extra scheduler, the "VM Power Scheduler", that manages all the VMs in the cloud. The environment receives requests from "n" different users, which are pooled into the request queue. The cloud scheduler and the VM Power Scheduler make sure that each process is placed in a particular VM on a server with the help of the distributive power migration and management algorithm, which minimizes power consumption. The scheduler's job is to collect server details from the resource manager, which holds the details of each VM running on a particular server. The VM Power Scheduler closely monitors the VMs running on each server and also keeps details of the VMs that are in the idle state. The newly proposed scheduler uses the distributive power migration and management algorithm to find a suitable server capable of executing the process in the particular VM.

Primary Objective

To manage and allocate the existing virtual machines to servers in such a way that a minimal number of servers, and in turn the least amount of power, is used.

Proposed Solution

We control power usage by controlling the allocation of virtual machines (processes) to servers at each stage:

- Every incoming VM is allocated a server in an arbitrary order defined by the user; thus a server cannot be left for the next one unless its maximum capacity is reached

Fig. 3. Proposed architecture
- Whenever a VM is to be removed from a server, the problem is to check whether the VMs on another server can be shifted to this one so that the latter can be powered down
- This is accomplished at every step by retrieving all the VMs and allocating them again, so that each server is filled to its maximum capacity before another server is involved
- This leaves no chance for any server to sit idle or run below maximum capacity

Figure 4 can be referred to for the flow of the algorithm, where Sn is the number of servers and Sc the capacity of each server:

Step 1: Set the number of servers (Sn) and the capacity of each (Sc)
Step 2: On an incoming VM or process:
Step 2.1: Check whether the server currently being filled has reached maximum capacity
Step 2.2: If yes, move to the next server in order, check whether its maximum capacity has been reached, and eventually assign the VM a server slot
Step 3: When a VM has finished executing or is no longer needed:
Step 3.1: Find which server the VM is located on and remove it
Step 3.2: Start allocating all VMs located on each server in order so as to remove server wastage
Step 4: Calculate power = (total number of servers - number of servers occupied) * power consumed by one server
Step 5: Loop back to Step 2 or Step 3 as needed
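To make the allocation policy concrete, here is a minimal, illustrative Java sketch of Steps 1-4. It is not the paper's implementation: the class and method names, the use of strings as VM identifiers, and the reading of Step 4's formula as the power avoided by empty servers are our own assumptions.

import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of the first-fit allocation policy described above. */
public class PowerAwareAllocator {

    private final int serverCount;        // Sn: number of servers
    private final int serverCapacity;     // Sc: VM slots per server
    private final double powerPerServer;  // power drawn by one active server
    private final List<List<String>> servers = new ArrayList<>();

    public PowerAwareAllocator(int serverCount, int serverCapacity, double powerPerServer) {
        this.serverCount = serverCount;
        this.serverCapacity = serverCapacity;
        this.powerPerServer = powerPerServer;
        for (int i = 0; i < serverCount; i++) {
            servers.add(new ArrayList<>());
        }
    }

    /** Step 2: place an incoming VM on the first server with free capacity. */
    public boolean allocate(String vm) {
        for (List<String> server : servers) {
            if (server.size() < serverCapacity) {
                server.add(vm);
                return true;
            }
        }
        return false; // all servers full
    }

    /** Step 3: remove a finished VM, then repack so no server is left underused. */
    public void release(String vm) {
        servers.forEach(server -> server.remove(vm));
        List<String> all = new ArrayList<>();
        servers.forEach(all::addAll);
        servers.forEach(List::clear);
        all.forEach(this::allocate); // refill servers front-to-back
    }

    /** Step 4: power = (total servers - occupied servers) * power per server. */
    public double powerSaved() {
        long occupied = servers.stream().filter(s -> !s.isEmpty()).count();
        return (serverCount - occupied) * powerPerServer;
    }
}

As a worked example of the Step 4 formula, with 10 servers drawing 200 W each and only 3 of them occupied after repacking, (10 - 3) * 200 = 1,400 W of draw is avoided.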
RESULTS AND INFERENCE

The first phase of experimentation analyzed the power consumed against the number of VMs running on a setup. The number of VMs tested ranged between 5 and 25, and throughout this range the power consumed with the algorithm in place was almost half of the power consumed without it. Figure 5 reveals this.

The second phase of experimentation was done against the bandwidth of the connection used. As in the previous experiment, the power consumed by the system implementing our algorithm was half that of the system without it. One can understand the impact of the proposed algorithm by referring to the graph shown in Fig. 6.

The third level of experimentation was done with the number of DCs requesting resources. As expected, irrespective of the number of data centers requesting resources, the performance remained almost the same, with our algorithm still performing better than a system that does not employ any strategy. The size of the storage medium and the magnitude of requests originating from each DC seem to have no visible impact on the performance. Figures 7 and 8 stand as support for the testing carried out.

Finally, all factors were considered together to obtain an overall perspective and to clearly decide the efficiency of the algorithm. As expected, the performance was as good as in the individual case tests.

The proposed algorithm was tested on the CloudSim platform simulator, and the results obtained were in accordance with the expected output; the graphs above demonstrate this. Figure 5 demonstrates the power consumption based on the number of VMs running: with our algorithm in place, the power consumption is reduced by almost one-fourth of the value without the algorithm. Also, factoring in the effect of bandwidth, the results seem almost similar at lower levels, but as bandwidth is increased the need for the algorithm is clearly seen in the apparent difference in the amount of power consumed (Fig. 6). Again, one cannot forget the impact of the number of data servers on the power consumed; with the algorithm in effect we see a small decrease in power consumption, a reduction nonetheless (Fig. 7 and 8). Finally, factoring in all the parameters to make sure the other results are not exclusive of each other, we can clearly see the difference brought about by the algorithm. In the cases portrayed in Fig. 9, the performance of the algorithm justifies the need for its presence.

CONCLUSION

There were a number of challenges faced by the authors during this research. The major challenge was to create a dynamic algorithm for the expected power management requirements; maintaining quality of service was another. Care has been taken to make sure that the algorithm does not fail when machines are added to or removed from the cloud network. The algorithm reduces the power spent on data centers for the cloud, and the CO2 emissions and the global warming created because of them can thereby be reduced to a very decent extent. The algorithm is not static in nature; it is dynamic enough to support scaling and improved QoS, and it can be further adapted to any network, not just the cloud. We have used simulators to test and support our findings, but the real test would be to take this up in a real-time setup where the behavior can be observed.

ACKNOWLEDGEMENT

We thank our research guide and doctoral committee members for constructive ideas and encouragement.
Q: How to indicate tooltips?

I've got a table-like web page, and on only one column you can hover over the rows and tooltips appear. At the top of the column I've got a little info symbol (an 'i' in a blue circle), but I don't think it's intuitive enough. What is a decent way to indicate that tooltips are available?

A: I'm a fan of using a dashed underline to indicate a tooltip. I could see where it may be too clunky-looking in a table, though.

A: A tooltip is a helpful thing, but it should not be used to display information that might be essential. If the information is additional rather than essential, I believe your approach is pretty decent. Let's have a look at someone else who did it exactly like you did: this example is taken from the Audi website, where you can choose and configure your car. Some options, such as certain types of mirrors, need further explanation in case you don't understand the difference, and they have the little "i" icon next to them, as shown in the picture above. The user can hover over the i icon and then gets to see the additional information.
package seedu.scheduler.model.person; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertTrue; import static org.junit.jupiter.api.Assertions.fail; import static seedu.scheduler.testutil.Assert.assertThrows; import org.junit.jupiter.api.Test; class SlotTest { @Test public void constructor_null_throwsNullPointerException() { assertThrows(NullPointerException.class, () -> Slot.fromString(null)); } @Test public void constructor_invalidSlot_throwsIllegalArgumentException() { assertThrows(IllegalArgumentException.class, () -> Slot.fromString(" ")); assertThrows(IllegalArgumentException.class, () -> Slot.fromString("1234")); assertThrows(IllegalArgumentException.class, () -> Slot.fromString("12/34/2019 12:34-12:34")); } @Test public void constructorThreeArgs_validInput_noExceptionThrows() { new Slot("16/10/2019", "00:00", "23:59"); } @Test public void isValidSlot() { // null slot assertThrows(NullPointerException.class, () -> Slot.fromString(null)); // invalid slot assertFalse(Slot.isValidSlot("")); assertFalse(Slot.isValidSlot("16-10-2019 00:00-00:01")); // incorrect date separator assertFalse(Slot.isValidSlot("16/10/2019 0000-0001")); // incorrect time format assertFalse(Slot.isValidSlot("16/10/2019 00:00 - 00:01")); // incorrect spacing assertFalse(Slot.isValidSlot("00/10/2019 00:00-00:01")); // incorrect date format assertFalse(Slot.isValidSlot("29/02/2019 00:00-00:01")); // incorrect date format assertFalse(Slot.isValidSlot("30/02/2019 00:00-00:01")); // incorrect date format assertFalse(Slot.isValidSlot("31/02/2019 00:00-00:01")); // incorrect date format assertFalse(Slot.isValidSlot("01/10/2019 23:59-24:00")); // incorrect time format // valid slot assertTrue(Slot.isValidSlot(String.format(Slot.STRING_FORMAT, "16/10/2019", "00:00", "23:59"))); assertTrue(Slot.isValidSlot("01/01/1997 10:00-10:10")); assertTrue(Slot.isValidSlot("01/01/0001 00:00-00:01")); assertTrue(Slot.isValidSlot("11/01/0001 00:00-00:01")); assertTrue(Slot.isValidSlot("01/11/0001 00:00-00:01")); assertTrue(Slot.isValidSlot("01/01/1997 00:00-00:01")); assertTrue(Slot.isValidSlot("30/12/9999 00:00-23:59")); assertTrue(Slot.isValidSlot("16/10/2019 03:01-20:01")); assertTrue(Slot.isValidSlot("03/12/1997 10:00-13:00")); assertTrue(Slot.isValidSlot("29/02/2020 10:00-11:00")); } @Test public void compareTo_equalDate_returnZero() { Slot subjectSlot = new Slot("28/10/2019", "10:00", "10:30"); Slot testSlot = new Slot("28/10/2019", "10:00", "10:30"); String errMessage = "T%d: %d\n"; int comp = subjectSlot.compareTo(testSlot); assert comp == 0 : fail(String.format(errMessage, 1, comp)); } @Test public void compareTo_laterDate_returnLesserThanZero() { Slot subjectSlot = new Slot("28/10/2019", "12:00", "13:00"); Slot testSlot1 = new Slot("01/11/2019", "12:00", "13:00"); Slot testSlot2 = new Slot("01/11/2020", "09:00", "10:00"); Slot testSlot3 = new Slot("01/11/2019", "18:00", "19:00"); Slot testSlot4 = new Slot("28/10/2019", "12:30", "13:00"); Slot testSlot5 = new Slot("28/10/2019", "12:01", "13:00"); Slot testSlot6 = new Slot("28/10/2019", "12:00", "13:01"); assertTrue(subjectSlot.compareTo(testSlot1) < 0); assertTrue(subjectSlot.compareTo(testSlot2) < 0); assertTrue(subjectSlot.compareTo(testSlot3) < 0); assertTrue(subjectSlot.compareTo(testSlot4) < 0); assertTrue(subjectSlot.compareTo(testSlot5) < 0); assertTrue(subjectSlot.compareTo(testSlot6) < 0); } @Test public void compareTo_earlierDate_returnGreaterThanZero() { Slot subjectSlot = new Slot("09/08/2019", "08:00", 
"10:00"); Slot testSlot1 = new Slot("01/01/2019", "08:00", "10:00"); Slot testSlot2 = new Slot("01/01/2010", "10:00", "12:00"); Slot testSlot3 = new Slot("01/01/2019", "07:00", "08:00"); Slot testSlot4 = new Slot("09/08/2019", "07:00", "08:00"); Slot testSlot5 = new Slot("09/08/2019", "07:59", "08:01"); Slot testSlot6 = new Slot("09/08/2019", "08:00", "08:30"); assertTrue(subjectSlot.compareTo(testSlot1) > 0); assertTrue(subjectSlot.compareTo(testSlot2) > 0); assertTrue(subjectSlot.compareTo(testSlot3) > 0); assertTrue(subjectSlot.compareTo(testSlot4) > 0); assertTrue(subjectSlot.compareTo(testSlot5) > 0); assertTrue(subjectSlot.compareTo(testSlot6) > 0); } }
A 20-year-old man has been arrested in the shooting death of 14-year-old Sonja Harrison and her unborn child. Harrison was babysitting her nephews at an apartment at 532 Cleveland Avenue in southwest Atlanta when she was shot in the head by a stray bullet fired in the apartment above her. Souleymane Diallo was arrested for her death on Thanksgiving Day and charged with second-degree murder, feticide, reckless conduct and possession of a firearm during the commission of a felony. Atlanta Police have not released information about whether the shooting was accidental. Harrison was in 8th grade and 8 months pregnant with a girl, her family said. She was due to give birth in December and was the youngest of seven children. "Yes, she was pregnant, but she had a future," said her mother, Sonja Denise Harrison. "She was going to finish school and she was talking about going into the Army for her and her baby." Harrison was in the living room when the deadly shot was fired, her mother said. She learned about what happened from her older daughter's mother-in-law. Police believe there were several people inside the upstairs apartment when the shot was fired; right now, they believe only one round was fired.
She was a loving mother to her ailing child, she taught Bible study at her grandfather's church, and her 5-year-old daughter played a lot with 8-year-old Sandra Cantu and other kids on the block. Until Saturday, that was the main impression neighbors and family had of 28-year-old Melissa Huckaby. What they apparently never suspected was that the brown-haired single mother with the quiet smile might kill an 8-year-old girl, then stuff the body in a suitcase and hide it in a pond - but that's just what Tracy police say Huckaby did. "She must have had a double life, because she seemed sweet and the Bible study kids love her," said Carlos Martinez, who lives in the Orchard Estates Mobile Home Park near Huckaby. "This is a total shock." Huckaby's relatives said they were bewildered at the concept that the churchgoing woman they've seen get down on her knees to play with children and lead them in singing religious songs such as "Deep and Wide" would do what police say she has. "I've never seen her truly scold her daughter," said Cynthia Browning of Manteca (San Joaquin County), Huckaby's great-aunt. "She is soft-spoken. I trust my grandchildren with her. I don't believe she could do this." Huckaby lives with her grandparents because she suffers from severe allergies and wanted relatives' help to be able to have more time "to take better care of her daughter, who is super-thin and gets sick a lot," Browning said. She was named "class mother" of her daughter's preschool, friends said. Along the way, however, Huckaby has had legal problems. She was convicted in 2006 in Los Angeles County of property theft, and was due in court Friday to be sentenced for a local January felony burglary conviction, according to court records. In 2002, the Sutter Tracy Community Hospital won a $10,000 civil judgment against her for owed bills; she declared bankruptcy the next year. "She is a good churchgoing girl, but she has had her challenges," said her great-aunt. Neighbors said the only thing that struck them as odd about Huckaby was that instead of sending her child over to other kids' houses to play, she always insisted on playmates coming to her house. Huckaby wasn't overly chummy with everyone on the street, they said, but always cordial when approached. The house, neighbors said, is a neat, inviting place where Huckaby and her daughter live with her grandparents, 77-year-old Lane Lawless - pastor of the nearby Clover Road Baptist Church, where Huckaby teaches - and Connie Lawless, a former elected member of the local Republican Central Committee. In this trim park of beige-toned mobile homes, nothing stuck out as unusual about the place. The family - including her parents, who live in Southern California and sing in their church choirs - is respected. "Look, her grandfather is a preacher; there are some good influences in that family," said Dwight Porsche of Tracy. "This makes no sense at all. I mean, who would do that to a baby?" Huckaby's only occupation was the Bible study job, relatives said. The Baptist church has a small membership, and Sandra's family said they were not involved with the church. After Sandra disappeared March 27, Huckaby attended a community vigil to offer condolences, but evidently was not heavily involved in the 10-day search effort. After the girl's body was discovered Monday, she spent several days at the Tracy hospital in intensive care for various ailments, her great-aunt said. "This just makes no sense at all," said neighbor Maria Ramirez, shaking her head sadly. 
"I'm so surprised. Such nice people, such bad things to happen."
Kathleen Hilda Christner, 80, 70 Grant St., Salisbury, died Aug. 9, 2004, at the Resh home in Confluence. Born Feb. 25, 1924, in Coal Run, she is the daughter of the late Andrew and Elizabeth (Hinebaugh) Hotchkiss. She is preceded in death by her husband, Everett Fay Christner; son, Roger; four brothers: Leroy, Harold, Bill and Fay; two sisters: Ethel Hotchkiss and Shirley Engle; and a granddaughter, Lisa. She is survived by two sons, David, Birdsboro; and Randy, Meyersdale; one daughter, Sherry Ross, Somerset; one sister, Elaine Franklin, Meyersdale; 11 grandchildren and 10 great-grandchildren. Mrs. Christner was a homemaker. Friends will be received 2 to 4 and 7 to 9 p.m. Tuesday at the Newman Funeral Home Inc., 9168 Mason-Dixon Highway, Salisbury where funeral service will be conducted 1 p.m. Wednesday. The Rev. Paul H. Yoder officiating. Interment, Salisbury Cemetery. Condolences may be sent to the family at www.newmanfuneralhomes.com.
package com.blokaly.ceres.utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Spliterator; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; import java.util.function.Consumer; public final class EventQueueSpliterator<T> implements Spliterator<T> { private static final Logger LOGGER = LoggerFactory.getLogger(EventQueueSpliterator.class); private static final int DEFAULT_CAPACITY = 128; private final BlockingQueue<T> queue; public EventQueueSpliterator() { this(new ArrayBlockingQueue<>(DEFAULT_CAPACITY)); } public EventQueueSpliterator(BlockingQueue<T> queue) { this.queue = queue; } @Override public boolean tryAdvance(Consumer<? super T> action) { try { action.accept(queue.take()); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } return true; } @Override public Spliterator<T> trySplit() { return null; } @Override public long estimateSize() { return Long.MAX_VALUE; } @Override public int characteristics() { return Spliterator.CONCURRENT | Spliterator.NONNULL | Spliterator.ORDERED; } public void add(T event) { if (event == null) { return; } boolean success = queue.offer(event); if (!success) { LOGGER.error("Failed to add event to queue: {}", event); } } }
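Since tryAdvance blocks on queue.take() and always returns true, an EventQueueSpliterator behaves as an unbounded, ordered event stream, which pairs naturally with StreamSupport. A minimal usage sketch, assuming the class above is on the classpath; the String event type, the producer thread and the limit are illustrative only:

import java.util.stream.StreamSupport;

import com.blokaly.ceres.utils.EventQueueSpliterator;

public class EventQueueSpliteratorExample {

    public static void main(String[] args) {
        EventQueueSpliterator<String> events = new EventQueueSpliterator<>();

        // Producer: some other thread pushes events onto the queue.
        new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                events.add("event-" + i);
            }
        }).start();

        // Consumer: wrap the spliterator in a (conceptually infinite) stream
        // and bound it explicitly, since estimateSize() is Long.MAX_VALUE.
        StreamSupport.stream(events, false)
                .limit(3)
                .forEach(System.out::println);
    }
}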
/*
 * Copyright (c) 2015-2029, www.dibo.ltd (<EMAIL>).
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 * <p>
 * https://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
package com.diboot.iam.shiro;

import lombok.extern.slf4j.Slf4j;
import org.apache.shiro.subject.Subject;
import org.apache.shiro.subject.SubjectContext;
import org.apache.shiro.web.mgt.DefaultWebSubjectFactory;

/**
 * Stateless SubjectFactory
 * @author JerryMa
 * @version v2.6.0
 * @date 2022/4/26
 * Copyright © diboot.com
 */
@Slf4j
public class StatelessSubjectFactory extends DefaultWebSubjectFactory {

    @Override
    public Subject createSubject(SubjectContext context) {
        // Do not create a session for this subject
        context.setSessionCreationEnabled(false);
        return super.createSubject(context);
    }
}
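For context, a SubjectFactory only takes effect once it is registered with the security manager. The snippet below is a plausible wiring sketch using Shiro's DefaultWebSecurityManager; the surrounding configuration class is our assumption for illustration, not code from the diboot project:

import org.apache.shiro.web.mgt.DefaultWebSecurityManager;

import com.diboot.iam.shiro.StatelessSubjectFactory;

public class StatelessShiroConfig {

    // Build a security manager whose subjects never trigger session creation.
    public DefaultWebSecurityManager securityManager() {
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
        securityManager.setSubjectFactory(new StatelessSubjectFactory());
        return securityManager;
    }
}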
BUFFALO – Prolonged rest is not ideal when it comes to treating concussion; in fact, those who are active sooner appear to get better faster. This is an idea that John Leddy, MD, director of the University at Buffalo Concussion Management Clinic, first proposed about 10 years ago, and now, he says, several studies later, it is beginning to be confirmed. It is also an idea that is now reflected in the latest document that guides treatment of sport-related concussion. The 2017 Concussion in Sport Group consensus document was developed by experts in the field of concussion for physicians and healthcare providers who are involved in athlete care – at a recreational, elite and professional level. The document, published this month in the British Journal of Sports Medicine, is intended to guide clinical practice, Leddy says, as well as guide future research in the field of sport-related concussion. Guidelines are released every four years to reflect updates that have taken place in concussion research, he says. The biggest change from 2012 to 2016, Leddy says, was the replacement of the recommendation for complete rest beyond the first few days after concussion with guided and controlled activity. The old guidelines indicated that individuals should not return to activity until they were asymptomatic; until that time, the individual was told to do nothing. "This was known as 'cocoon therapy,'" Leddy says. "That was how the old guidelines were interpreted, and patients were resting, literally being told to do nothing, until they were asymptomatic. The problem with that was even non-concussed people often have some symptoms on any given day." After a two-day conference in Berlin, new guidelines were formed on how best to treat concussion in individuals ages six and older. And for the first time, the recommendation of a complete post-concussion shut-down was revised. Leddy says these guidelines have influence all over the world. The advice offered through the guidelines, he says, is often followed by professional sports teams, college athletic departments, high schools, physical therapists, athletic trainers, sports physicians, primary care doctors and pediatricians.
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package javacontents;

/**
 *
 * @author mauricio.moreira
 */
public class JavaContents {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // Original message in Portuguese: "Olá Mundo! Minha primeira linha de código Java!"
        System.out.println("Hello World! My first line of Java code!");
    }
}
#!/usr/bin/env python3 # -*- coding: utf-8 -*- # filename: test_02_user.py # modified: 2019-10-28
import tensorflow as tf

def process_list_to_tensor(lst):
    # Convert a plain Python list to a tensor, transposing so that the
    # list dimension becomes the trailing axis; tensor inputs pass through.
    if isinstance(lst, list):
        lst = tf.transpose(tf.convert_to_tensor(lst, preferred_dtype=tf.float64))
    return tf.cast(lst, dtype=tf.float64)
package thundr.redstonerepository.item.util; import cofh.api.item.IMultiModeItem; import cofh.api.item.IToolQuiver; import cofh.core.init.CoreEnchantments; import cofh.core.init.CoreProps; import cofh.core.item.IEnchantableItem; import cofh.core.item.ItemCore; import cofh.core.render.IModelRegister; import cofh.core.util.core.IInitializer; import cofh.core.util.helpers.EnergyHelper; import cofh.core.util.helpers.MathHelper; import cofh.core.util.helpers.StringHelper; import cofh.redstonearsenal.init.RAProps; import cofh.redstoneflux.api.IEnergyContainerItem; import cofh.redstoneflux.util.EnergyContainerItemWrapper; import net.minecraft.client.renderer.block.model.ModelResourceLocation; import net.minecraft.client.util.ITooltipFlag; import net.minecraft.creativetab.CreativeTabs; import net.minecraft.enchantment.Enchantment; import net.minecraft.enchantment.EnchantmentHelper; import net.minecraft.enchantment.EnumEnchantmentType; import net.minecraft.entity.EntityLivingBase; import net.minecraft.entity.player.EntityPlayer; import net.minecraft.entity.projectile.EntityArrow; import net.minecraft.init.Enchantments; import net.minecraft.init.Items; import net.minecraft.init.SoundEvents; import net.minecraft.item.EnumRarity; import net.minecraft.item.ItemStack; import net.minecraft.nbt.NBTTagCompound; import net.minecraft.util.NonNullList; import net.minecraft.util.ResourceLocation; import net.minecraft.util.SoundCategory; import net.minecraft.world.World; import net.minecraftforge.client.model.ModelLoader; import net.minecraftforge.common.capabilities.ICapabilityProvider; import net.minecraftforge.fml.common.registry.ForgeRegistries; import net.minecraftforge.fml.relauncher.Side; import net.minecraftforge.fml.relauncher.SideOnly; import thundr.redstonerepository.RedstoneRepository; import thundr.redstonerepository.entity.projectile.EntityArrowGelid; import javax.annotation.Nullable; import java.util.List; import static cofh.core.util.helpers.RecipeHelper.addShapedRecipe; // TODO: rework this because it is hard-coded. (Credit to <NAME>) public class ItemQuiverGelid extends ItemCore implements IInitializer, IModelRegister, IEnchantableItem, IEnergyContainerItem, IMultiModeItem, IToolQuiver { public static ItemStack quiverGelidEnderium; protected int maxEnergy = 320000; protected int maxTransfer = 4000; protected int energyPerUse = 800; protected int energyPerUseCharged = 6400; protected boolean showInCreative = true; public static boolean enable; public ItemQuiverGelid() { super(RedstoneRepository.MODID); setMaxDamage(0); setNoRepair(); setMaxStackSize(1); setUnlocalizedName("redstonerepository.util.gelidQuiver"); setCreativeTab(RedstoneRepository.tabCommon); addPropertyOverride(new ResourceLocation("active"), (stack, world, entity) -> ItemQuiverGelid.this.getEnergyStored(stack) > 0 && !ItemQuiverGelid.this.isEmpowered(stack) ? 1F : 0F); addPropertyOverride(new ResourceLocation("empowered"), (stack, world, entity) -> ItemQuiverGelid.this.isEmpowered(stack) ? 
1F : 0F); } public ItemQuiverGelid setEnergyParams(int maxEnergy, int maxTransfer, int energyPerUse, int energyPerUseCharged) { this.maxEnergy = maxEnergy; this.maxTransfer = maxTransfer; this.energyPerUse = energyPerUse; this.energyPerUseCharged = energyPerUseCharged; return this; } protected boolean isEmpowered(ItemStack stack) { return getMode(stack) == 1 && getEnergyStored(stack) >= energyPerUseCharged; } protected int getEnergyPerUse(ItemStack stack) { int unbreakingLevel = MathHelper.clamp(EnchantmentHelper.getEnchantmentLevel(Enchantments.UNBREAKING, stack), 0, 4); return (isEmpowered(stack) ? energyPerUseCharged : energyPerUse) * (5 - unbreakingLevel) / 5; } @Override public void addInformation(ItemStack stack, @Nullable World worldIn, List<String> tooltip, ITooltipFlag flagIn) { if (StringHelper.displayShiftForDetail && !StringHelper.isShiftKeyDown()) { tooltip.add(StringHelper.shiftForDetails()); } if (!StringHelper.isShiftKeyDown()) { return; } if (stack.getTagCompound() == null) { EnergyHelper.setDefaultEnergyTag(stack, 0); } tooltip.add(StringHelper.localize("info.cofh.charge") + ": " + StringHelper.getScaledNumber(getEnergyStored(stack)) + " / " + StringHelper.getScaledNumber(getMaxEnergyStored(stack)) + " RF"); tooltip.add(StringHelper.ORANGE + getEnergyPerUse(stack) + " " + StringHelper.localize("info.redstonearsenal.tool.energyPerUse") + StringHelper.END); RAProps.addEmpoweredTip(this, stack, tooltip); } @Override public void getSubItems(CreativeTabs tab, NonNullList<ItemStack> items) { if (isInCreativeTab(tab) && showInCreative) { items.add(EnergyHelper.setDefaultEnergyTag(new ItemStack(this, 1, 0), 0)); items.add(EnergyHelper.setDefaultEnergyTag(new ItemStack(this, 1, 0), maxEnergy)); } } @Override public boolean canApplyAtEnchantingTable(ItemStack stack, Enchantment enchantment) { if (EnumEnchantmentType.BREAKABLE.equals(enchantment.type)) { return enchantment.equals(Enchantments.UNBREAKING); } return enchantment.type.canEnchantItem(this); } @Override public boolean getIsRepairable(ItemStack itemToRepair, ItemStack stack) { return false; } @Override public boolean isDamageable() { return true; } @Override public boolean isEnchantable(ItemStack stack) { return true; } @Override public boolean shouldCauseReequipAnimation(ItemStack oldStack, ItemStack newStack, boolean slotChanged) { return super.shouldCauseReequipAnimation(oldStack, newStack, slotChanged) && (slotChanged || getEnergyStored(oldStack) > 0 != getEnergyStored(newStack) > 0); } @Override public boolean showDurabilityBar(ItemStack stack) { return RAProps.showToolCharge && getEnergyStored(stack) > 0; } @Override public int getItemEnchantability(ItemStack stack) { return 10; } @Override public int getMaxDamage(ItemStack stack) { return 0; } @Override public double getDurabilityForDisplay(ItemStack stack) { if (stack.getTagCompound() == null) { EnergyHelper.setDefaultEnergyTag(stack, 0); } return MathHelper.clamp(1.0D - ((double) stack.getTagCompound().getInteger(CoreProps.ENERGY) / (double) getMaxEnergyStored(stack)), 0.0D, 1.0D); } @Override public boolean canEnchant(ItemStack stack, Enchantment enchantment) { return enchantment == CoreEnchantments.holding; } @Override public int receiveEnergy(ItemStack container, int maxReceive, boolean simulate) { if (container.getTagCompound() == null) { EnergyHelper.setDefaultEnergyTag(container, 0); } int stored = Math.min(container.getTagCompound().getInteger(CoreProps.ENERGY), getMaxEnergyStored(container)); int receive = Math.min(maxReceive, 
Math.min(getMaxEnergyStored(container) - stored, maxTransfer)); if (!simulate) { stored += receive; container.getTagCompound().setInteger(CoreProps.ENERGY, stored); } return receive; } @Override public int extractEnergy(ItemStack container, int maxExtract, boolean simulate) { if (container.getTagCompound() == null) { EnergyHelper.setDefaultEnergyTag(container, 0); } int stored = Math.min(container.getTagCompound().getInteger(CoreProps.ENERGY), getMaxEnergyStored(container)); int extract = Math.min(maxExtract, stored); if (!simulate) { stored -= extract; container.getTagCompound().setInteger(CoreProps.ENERGY, stored); if (stored == 0) { setMode(container, 0); } } return extract; } @Override public int getEnergyStored(ItemStack container) { if (container.getTagCompound() == null) { EnergyHelper.setDefaultEnergyTag(container, 0); } return Math.min(container.getTagCompound().getInteger(CoreProps.ENERGY), getMaxEnergyStored(container)); } @Override public int getMaxEnergyStored(ItemStack container) { int enchant = EnchantmentHelper.getEnchantmentLevel(CoreEnchantments.holding, container); return maxEnergy + maxEnergy * enchant / 2; } @Override public EntityArrow createEntityArrow(World world, ItemStack item, EntityLivingBase shooter) { return new EntityArrowGelid(world, shooter, isEmpowered(item)); } @Override public boolean allowCustomArrowOverride(ItemStack item) { return false; } @Override public boolean isEmpty(ItemStack item, EntityLivingBase shooter) { return !(shooter instanceof EntityPlayer && ((EntityPlayer) shooter).capabilities.isCreativeMode) && getEnergyStored(item) <= 0; } @Override public void onArrowFired(ItemStack item, EntityLivingBase shooter) { if (shooter instanceof EntityPlayer) { extractEnergy(item, getEnergyPerUse(item), ((EntityPlayer) shooter).capabilities.isCreativeMode); } } @Override public void onModeChange(EntityPlayer player, ItemStack stack) { if (isEmpowered(stack)) { player.world.playSound(null, player.getPosition(), SoundEvents.ENTITY_LIGHTNING_THUNDER, SoundCategory.PLAYERS, 0.4F, 1.0F); } else { player.world.playSound(null, player.getPosition(), SoundEvents.ENTITY_EXPERIENCE_ORB_PICKUP, SoundCategory.PLAYERS, 0.2F, 0.6F); } } @Override public ICapabilityProvider initCapabilities(ItemStack stack, NBTTagCompound nbt) { return new EnergyContainerItemWrapper(stack, this); } @Override @SideOnly(Side.CLIENT) public void registerModels() { ModelLoader.setCustomModelResourceLocation(this, 0, new ModelResourceLocation(new ResourceLocation(RedstoneRepository.MODID, "util/quiver_gelid"), "inventory")); } @Override public boolean preInit() { this.setRegistryName("util.quiver_gelid"); ForgeRegistries.ITEMS.register(this); config(); this.showInCreative = enable; quiverGelidEnderium = EnergyHelper.setDefaultEnergyTag(new ItemStack(this, 1, 0), 0); RedstoneRepository.PROXY.addIModelRegister(this); return true; } @Override public boolean initialize() { if (!enable) { return false; } addShapedRecipe(quiverGelidEnderium, "AA ", "GIS", "IGS", 'A', Items.ARROW, 'G', "gemGelidCrystal", 'I', "ingotGelidEnderium", 'S', "stringFluxed"); return true; } private static void config() { String category = "Equipment.Tools.Gelid"; enable = RedstoneRepository.CONFIG_COMMON.get(category, "Quiver", true); } public EnumRarity getRarity(ItemStack stack) { return EnumRarity.RARE; } public int getRGBDurabilityForDisplay(ItemStack stack) { return 1333581; } }
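A quick note on the discharge math above, for anyone tuning these constants: getEnergyPerUse scales the base cost by (5 - unbreakingLevel) / 5, with the Unbreaking level clamped to 0-4. So with Unbreaking III, a normal shot costs 800 * (5 - 3) / 5 = 320 RF instead of 800 RF, and an empowered shot costs 6400 * 2 / 5 = 2560 RF; all of this is integer arithmetic, and the enchantment levels here are illustrative.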
<gh_stars>0 /* * cocos2d for iPhone: http://www.cocos2d-iphone.org * * Copyright (c) 2010 <NAME> * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. * */ #import <Foundation/Foundation.h> #import "CCSprite.h" /** Types of progress @since v0.99.1 */ typedef enum { /// Radial Counter-Clockwise kCCProgressTimerTypeRadialCCW, /// Radial ClockWise kCCProgressTimerTypeRadialCW, /// Horizontal Left-Right kCCProgressTimerTypeHorizontalBarLR, /// Horizontal Right-Left kCCProgressTimerTypeHorizontalBarRL, /// Vertical Bottom-top kCCProgressTimerTypeVerticalBarBT, /// Vertical Top-Bottom kCCProgressTimerTypeVerticalBarTB, } CCProgressTimerType; /** CCProgresstimer is a subclass of CCNode. It renders the inner sprite according to the percentage. The progress can be Radial, Horizontal or vertical. @since v0.99.1 */ @interface CCProgressTimer : CCNode { CCProgressTimerType type_; float percentage_; CCSprite *sprite_; int vertexDataCount_; ccV2F_C4B_T2F *vertexData_; } /** Change the percentage to change progress. */ @property (nonatomic, readwrite) CCProgressTimerType type; /** Percentages are from 0 to 100 */ @property (nonatomic, readwrite) float percentage; /** The image to show the progress percentage */ @property (nonatomic, readwrite, retain) CCSprite *sprite; /** Creates a progress timer with an image filename as the shape the timer goes through */ + (id) progressWithFile:(NSString*) filename; /** Initializes a progress timer with an image filename as the shape the timer goes through */ - (id) initWithFile:(NSString*) filename; /** Creates a progress timer with the texture as the shape the timer goes through */ + (id) progressWithTexture:(CCTexture2D*) texture; /** Creates a progress timer with the texture as the shape the timer goes through */ - (id) initWithTexture:(CCTexture2D*) texture; -(id)initWithSpriteFrameWithName:(NSString*)name; @end
If you’re anything like me, then the members of the Democratic Party hate you. And I don’t mean they disagree with your politics. I don’t even mean they disagree vehemently with your politics. I mean the Democratic Party despises you with the white-hot intensity of a thousand suns. And lest you think there might be common ground upon which both you and the Democratic Party can stand, consider the bile that flowed out of their leading presidential candidates’ blowholes during Tuesday evening’s debate on CNN. Hillary Clinton, who has made a career out of being married to an alleged rapist, certainly didn’t hide her disdain for anyone outside her box. When game-show-host-turned-talking-hairstyle Anderson Cooper asked her: “Which enemy that you made during your political career are you most proud of?” Madame Clinton responded: “(T)he NRA, the health insurance companies, the drug companies, the Iranians; probably the Republicans.” I suppose we should be flattered that somewhere in the neighborhood of half the country made the list alongside the next islamofascist terrorists to ride the Obama train to Nuketown. The old white woman wants to be president of all of us but viscerally hates half of us. Moreover, she’s proud of that attitude. Joining Nana Clinton on the stage was self-titled “democratic socialist” Sen. Bernie Sanders. His popularity probably says more about Clinton’s lack thereof than it does about Sanders’ own palatability. Let’s face it: Bernie Sanders looks like he should be sitting on a park bench, feeding pigeons stale bread crumbs and muttering about “these kids today, with their rock ‘n’ roll and their crazy clothes.” And for anyone out there who somehow managed to avoid knowing anything about Sanders before Tuesday, the old boy plans to use the presidency to resurrect the governing principles of fun guys like Karl Marx and Vladimir Lenin. During his opportunities to rant at the camera, Sanders not only proudly declared himself a devotee of the bearded Bolsheviks, he promised to annex huge swaths of the nation’s economy like Russia gobbling up the Ukraine. Free college, free healthcare, free stuff aplenty awaits us in Bernie’s America. Sanders failed to mention that government-run means government-owned. And that requires government money by the truckloads. Since the government doesn’t actually have any money, Bernie plans to use yours. As Barack Obama’s IRS scandal served to remind us, if the government wants your money, it takes it. If you don’t offer it willingly, then it takes it by force. Bernie, who loudly claims no affinity for the capitalist system that has employed him for the past 35 years, will literally need trillions of our dollars to paint America a nice shade of Russian red. Both Clinton and Sanders agreed to no small fanfare that they’re “sick and tired of” Clinton’s “damned emails.” According to polling, at least half of Americans believe Clinton’s worsening breaches of national security are a legitimate campaign issue; and as many as 70 percent believe an independent special prosecutor is already overdue. Clinton’s and Sanders’ fatigue over her scandals is miniscule — and wildly different — compared to ours. Of the other three placeholders propped up behind lecterns next to the senior citizen front-runners, former Sen. Jim Webb was the only one who made any impact. 
Webb, who spent most of the evening looking as out of place as a cat in a rat’s nest, accidentally maneuvered the seriously left-leaning audience into reminding the rest of us how interested they are in our fates. When the show moved to the blame-guns-for-crime segment, which has apparently become standard for any gathering of two or more “progressives,” Webb protested meekly: “(W)e have to respect the people in this country who want to defend themselves and their family from violence.” When he dared to suggest that all lives matter, as opposed to just the darker-hued ones, the audience acted like he’d set fire to the stage. The poor guy’s own party hates him because he reminds them of everyone else. Of course, the left’s telling the rest of us that they rank us somewhere between “cancer” and “Ebola” isn’t exactly a new development. Obama, easily the most divisive president in at least a century, runs his entire regime based on the guiding principles of division and hate. “You didn’t build that, someone built that for you.” So quit acting like you earned your way, you “bitter clinger.” Jon Gruber, one of the chief architects of Obama’s signature “accomplishment,” says American voters are “too stupid to understand.” Take that, you racist! It’s only a side effect of your own dim-witted conservatism that you don’t see how defrauding the nation of trillions of dollars will make you healthier and wiser. However, while the Democrats are jockeying to prove who hates the most voters the most, not one of them is currently projected to do more next November than deliver a concession speech. Despite the noisiest efforts of Sanders’ supporters, the only slightly younger Clinton is still handily winning the walker wars by 15- to 20-point margins. And at her best, Nana Hilldawg loses to at least four of the current GOP contenders. It’s a good thing conservatives don’t hate liberals as much as liberals hate pretty much everyone else. Beginning in January 2017, the Democrats will learn just how lucky they are. –Ben Crystal
<filename>lib/utilities.c #include <errno.h> #include <linux/perf_event.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/capability.h> #include <sys/utsname.h> #include <unistd.h> #include "perf.h" #include "utilities.h" // CAP_PERFMON was added in Linux 5.8 // https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/capability.h #ifndef CAP_PERFMON #define CAP_PERFMON 38 #endif #define true 1 #define false 0 void perf_print_error(int error) { switch (error) { case PERF_ERROR_IO: perror("io error"); break; case PERF_ERROR_LIBRARY_FAILURE: perror("library failure"); break; case PERF_ERROR_CAPABILITY_NOT_SUPPORTED: fprintf(stderr, "unsupported capability\n"); break; case PERF_ERROR_EVENT_OPEN: perror("perf_event_open failed"); break; case PERF_ERROR_NOT_SUPPORTED: perror("not supported"); break; case PERF_ERROR_BAD_PARAMETERS: fprintf(stderr, "bad parameters\n"); break; default: fprintf(stderr, "unknown error\n"); break; } } int perf_is_supported() { return access("/proc/sys/kernel/perf_event_paranoid", F_OK) == 0 ? 1 : 0; } int perf_get_event_paranoia() { // See: https://www.kernel.org/doc/Documentation/sysctl/kernel.txt FILE *perf_event_paranoid = fopen("/proc/sys/kernel/perf_event_paranoid", "r"); if (perf_event_paranoid == NULL) return PERF_ERROR_IO; int value; if (fscanf(perf_event_paranoid, "%d", &value) < 1) return PERF_ERROR_IO; if (value >= 2) return PERF_EVENT_PARANOIA_DISALLOW_CPU | PERF_EVENT_PARANOIA_DISALLOW_FTRACE | PERF_EVENT_PARANOIA_DISALLOW_KERNEL; if (value >= 1) return PERF_EVENT_PARANOIA_DISALLOW_CPU | PERF_EVENT_PARANOIA_DISALLOW_FTRACE; if (value >= 0) return PERF_EVENT_PARANOIA_DISALLOW_CPU; return PERF_EVENT_PARANOIA_ALLOW_ALL; } int perf_has_sufficient_privilege(const perf_measurement_t *measurement) { // Immediately return if the user is an admin int has_cap_sys_admin = perf_has_capability(CAP_SYS_ADMIN); if (has_cap_sys_admin == 1) return true; int event_paranoia = perf_get_event_paranoia(); if (event_paranoia < 0) return event_paranoia; // This requires CAP_PERFMON (since Linux 5.8) or CAP_SYS_ADMIN capability or a perf_event_paranoid value of less than 1. 
if (measurement->pid == -1 && measurement->cpu >= 0) { int kernel_major, kernel_minor; int status = perf_get_kernel_version(&kernel_major, &kernel_minor, NULL); if (status < 0) return status; if (kernel_major > 5 || (kernel_major == 5 && kernel_minor >= 8)) { int has_cap_perfmon = perf_has_capability(CAP_PERFMON); if (has_cap_perfmon != true) return has_cap_perfmon; } else { // CAP_SYS_ADMIN is already checked at the start of this function } } // Immediately return if all events are allowed if (event_paranoia & PERF_EVENT_PARANOIA_ALLOW_ALL) return true; if (event_paranoia & PERF_EVENT_PARANOIA_DISALLOW_FTRACE && measurement->attribute.type == PERF_TYPE_TRACEPOINT) return has_cap_sys_admin; if (event_paranoia & PERF_EVENT_PARANOIA_DISALLOW_CPU && measurement->attribute.type == PERF_TYPE_HARDWARE) return has_cap_sys_admin; if (event_paranoia & PERF_EVENT_PARANOIA_DISALLOW_KERNEL && measurement->attribute.type == PERF_TYPE_SOFTWARE) return has_cap_sys_admin; // Assume privileged return true; } int perf_has_capability(int capability) { if (!CAP_IS_SUPPORTED(capability)) return PERF_ERROR_CAPABILITY_NOT_SUPPORTED; cap_t capabilities = cap_get_proc(); if (capabilities == NULL) return PERF_ERROR_LIBRARY_FAILURE; cap_flag_value_t sys_admin_value; if (cap_get_flag(capabilities, capability, CAP_EFFECTIVE, &sys_admin_value) < 0) { cap_free(capabilities); return PERF_ERROR_LIBRARY_FAILURE; } if (cap_free(capabilities) < 0) return PERF_ERROR_LIBRARY_FAILURE; // Return whether or not the user has the capability return sys_admin_value == CAP_SET; } perf_measurement_t *perf_create_measurement(int type, int config, pid_t pid, int cpu) { perf_measurement_t *measurement = (perf_measurement_t *)malloc(sizeof(perf_measurement_t)); if (measurement == NULL) return NULL; memset((void *)measurement, 0, sizeof(perf_measurement_t)); measurement->pid = pid; measurement->cpu = cpu; measurement->attribute.type = type; measurement->attribute.config = config; measurement->attribute.disabled = 1; measurement->attribute.read_format = PERF_FORMAT_GROUP | PERF_FORMAT_ID; return measurement; } int perf_open_measurement(perf_measurement_t *measurement, int group, int flags) { // Invalid parameters.
See: https://man7.org/linux/man-pages/man2/perf_event_open.2.html if (measurement->pid == -1 && measurement->cpu == -1) return PERF_ERROR_BAD_PARAMETERS; int file_descriptor = perf_event_open(&measurement->attribute, measurement->pid, measurement->cpu, group, flags); if (file_descriptor < 0) { if (errno == ENODEV || errno == ENOENT || errno == ENOSYS || errno == EOPNOTSUPP || errno == EPERM) return PERF_ERROR_NOT_SUPPORTED; return PERF_ERROR_EVENT_OPEN; } measurement->file_descriptor = file_descriptor; measurement->group = group; // Get the ID of the measurement if (ioctl(measurement->file_descriptor, PERF_EVENT_IOC_ID, &measurement->id) < 0) return PERF_ERROR_LIBRARY_FAILURE; return 0; } int perf_read_measurement(const perf_measurement_t *measurement, void *target, size_t bytes) { return read(measurement->file_descriptor, target, bytes); } int perf_get_kernel_version(int *major, int *minor, int *patch) { struct utsname name; if (uname(&name) < 0) return PERF_ERROR_LIBRARY_FAILURE; int parsed_major, parsed_minor, parsed_patch; if (sscanf(name.release, "%d.%d.%d", &parsed_major, &parsed_minor, &parsed_patch) < 3) return PERF_ERROR_IO; if (major != NULL) *major = parsed_major; if (minor != NULL) *minor = parsed_minor; if (patch != NULL) *patch = parsed_patch; return 0; } int perf_event_is_supported(const perf_measurement_t *measurement) { // Invalid parameters. See: https://man7.org/linux/man-pages/man2/perf_event_open.2.html if (measurement->pid == -1 && measurement->cpu == -1) return PERF_ERROR_BAD_PARAMETERS; int file_descriptor = perf_event_open(&measurement->attribute, measurement->pid, measurement->cpu, -1, 0); if (file_descriptor < 0) { if (errno == ENODEV || errno == ENOENT || errno == ENOSYS || errno == EOPNOTSUPP || errno == EPERM) return 0; return PERF_ERROR_EVENT_OPEN; } if (close(file_descriptor) < 0) return PERF_ERROR_IO; return 1; } int perf_close_measurement(const perf_measurement_t *measurement) { if (close(measurement->file_descriptor) < 0) return PERF_ERROR_IO; return 0; }
STEM-04. NICARDIPINE SENSITIZES APOPTOSIS OF GLIOMA STEM CELLS INDUCED BY TEMOZOLOMIDE THROUGH INHIBITING AUTOPHAGY

Glioma stem cells (GSCs) play an important role in tumor progression and recurrence. Currently, treatments for glioma are limited, mainly due to the strong tolerance of GSCs against conventional chemotherapeutic drugs such as temozolomide. Nicardipine, a calcium antagonist, is commonly used in the therapy of hypertension and was recently repurposed within the comprehensive strategies against gliomas in our previous studies. Here, we explored the cytotoxic sensitization effect of nicardipine combined with temozolomide and the underlying mechanisms. Two human glioma stem cell lines were treated with nicardipine, temozolomide, or a combination of both. Cell viability was detected by CCK-8, cell apoptosis was analyzed by flow cytometry, and immunoblotting was used to detect autophagy-related proteins including p62, LC3 and mTOR. mTOR agonists and inhibitors were applied to further evaluate whether nicardipine inhibits autophagy via the mTOR pathway. GSCs showed strong tolerance against temozolomide, while nicardipine significantly inhibited the viability of GSCs and promoted cell apoptosis when acting together with temozolomide, indicating that nicardipine can sensitize cells to the toxicity of temozolomide. Furthermore, both temozolomide and nicardipine inhibited autophagy, and the effects of nicardipine were more prominent, suggesting that nicardipine might enhance GSC apoptosis induced by temozolomide through inhibiting autophagy. Further molecular studies showed that the cytotoxic sensitization effect of nicardipine was induced through activation of the phosphorylation of mTOR, and was abolished by treatment with rapamycin, an mTOR inhibitor. Our results suggest that nicardipine sensitizes apoptosis of glioma stem cells induced by temozolomide through inhibiting autophagy, mediated by activation of mTOR activity.
Zenel Batagelj is telling us about Slovenia's blockchain revolution. We're sitting outdoors in a café along the river that snakes through Ljubljana, taking in the last sun on a wintry day in the vibrant, picturesque capital of this tiny yet proud Central European nation. Zenel is passionate and big, and the meaty coat he wears to ward off the cold makes him even more imposing and authoritative. He's telling us about the bold technological advance that helped catapult his country into the future last summer. In just under four days, Iconomi raised a million dollars for its budding cryptocurrency trading platform, and was instantly in business, hiring engineers and making big plans. "It was an experiment," Zenel said. "It was so cool." At the time, it was the largest European initial coin offering: one million dollars in 88 hours, $10 million in five weeks … and that was just the start. Zenel is Slovenia's global connector, an unapologetic evangelist, fighting the local brain drain that deprived the country of talent and brawn during the recent international financial crisis, which hit especially hard here. We met him in Silicon Valley two months ago, when he arrived among a small Slovenian delegation at SEC2SV to learn about US scale-up potential for EU enterprises. He sold us on his country, and himself. His parents were mathematicians with extraordinary access to technology. He started playing with computers at six, coding when he was eleven. He loved computers, but chose instead to study sociology in the 90s: "Computer-mediated emotions, the interactions of emoticons," he recalled. "The really interesting thing was studying social sciences at the time the Internet was changing the world." Zenel was the perfect guide to introduce us to the key Slovenian players and lend a framework to grasp what he sees as the latest global shift: how cryptocurrencies, and more importantly blockchain, offer a fresh means of creating and funding startups. Our education came during a long, tasty dinner (filled with Slovenian delicacies, including orange wine), with Tim Zagar, the co-founder of Iconomi, his lawyer Nejc Novak, and Zenel.

Relocating the Center of the Universe

Tim Zagar wasted no time in getting to the point: maybe the world shouldn't revolve around Silicon Valley. "Just open your phone and check your applications," he said. "Who made them? Mostly, like 95%, are Bay Area-based. This is something we can change in the future." Radical talk for those who see the traditional old-boy venture capital network of Silicon Valley, SF, and gilded Sand Hill Road as the only acceptable way to develop and fund new startups. But that's a hard road for Slovenians and other Europeans. "If you were not based in, let's say, the Bay Area or even outside of the US, it was very difficult to fund your startup, your company," said Tim. "Something's happening in London and maybe in Berlin and also Paris … but basically this startup ecosystem does not exist outside of US or it isn't very strong." Enter blockchain. "So, with people who are also creative outside of San Francisco, now all of a sudden, they have the opportunity to get funding for their projects," said Tim. "And this is what happened a year or two ago. With the companies raising money from this part of the world it's going to be a completely different picture if you look at your phone in your apps." Much like Zenel, Tim also saw early on the "great potential" of the internet.
He dove into coding and games, but had the self-awareness to recognize that "there are other people who can code better than I do." Which led to a business epiphany during his first year studying "informatics" at university. "I was selling the projects, and leading the teams." His company built websites, then soon followed up with Open Hours, a search engine listing the opening hours of local businesses that grew to a million users.

Native Payments: The Missing Piece

In 2011, he heard about Bitcoin, and in early 2013, he started to mine it. "And the more I dug into it, the more I thought this is kind of the missing piece of the internet – a native payment solution." He connected with the key people in Slovenia, and brainstormed a new business with his friend Jani Valjavec: Cashila, a bitcoin payment service. "Everyone can pay [with bitcoin] to whatever bank account," said Tim. "It's cool, amazing." They raised $500,000 in traditional funding from investors, and quickly learned they needed a payment license from a European national regulator. Slovenia said no. The UK said no. Luxembourg was possible, but would be incredibly expensive. Finally, they sought a small payment license from the Czech Republic. In the spring of 2015, they got the license and became the first European payment processing company "registered as a financial institution, and just dealing with bitcoin." Just as quickly they discovered "people want to buy bitcoin more than sell bitcoin," and it was back to the drawing board. They watched the ICO craze take off, and after Tim's experience with traditional fundraising, he saw all sorts of advantages in the alternative model, including dramatically reducing all the time spent cultivating and communicating with investors. So he and his partners brainstormed a new business model targeting the real action: trading. The initial concept was to build an asset management company for these new crypto assets. They tried out the idea on a lot of people, and got some surprising feedback from a bitcoin miner. He said: "Yeah, that's cool to have funds, but I want to create my own fund." So they pivoted from the idea of a classic asset management company to "an open platform where anyone can be a fund manager," and they would "take care of the secure environment and let investors buy into this different structure."

Disrupt VC

On August 6, 2016, Iconomi published their whitepaper, "Open Fund Management Platform to Disrupt the Investment Industry." Authored by Tim, his partner Jani, Zenel, the lawyer Ervin Kovac and Ales Lekse, the paper began with a quote from the physicist, historian and philosopher Thomas Kuhn, who gave us the term paradigm shift: "The crises of our time … are the necessary impetus for the revolution now underway." The pitch was aimed straight at upending old-line VC: "We are pairing the business-model fundamentals of the crypto-world and the obvious trend of platform domination with new technological possibilities." Blockchains were described as "game-changers for the investment world, linking those with disruptive ideas directly to those looking for investing opportunities." Just ten pages and about 2,000 words long, the paper travelled fast to online communities, including Reddit. The world had only seen ten or so ICOs. Iconomi needed two to three million to realize the Ethereum-based project. They set the minimum at a million, and got it in 88 hours, eclipsing the largest Kickstarter campaign in Slovenia. The five-week campaign kept building.
They hit $5 million, it slowed, and then, with just a couple of days left in the campaign, it accelerated. The team went to grab beers. "We were at nine million, and I think after the second beer we were over ten," said Zenel. "So one million while drinking a beer." The total raised was $10.5 million. To understand Iconomi you have to understand other, deeper factors. The tech ecosystem is tight-knit and active. The biggest homegrown company is Outfit7, a games and entertainment company acquired this year for $1 billion by Chinese investors and now relocated to the UK. Many smaller startups and entrepreneurs flourish in the developer community, and there's a magnetic element, one that Iconomi too is counting on to help it scale: Slovenia's government is actively engaged in tailoring policy and leading EU initiatives to collaborate with startups, and help them succeed. During our Ljubljana visit we had the kind of encounter we couldn't imagine in San Francisco. We walked into a modern office building housing the Government of the Republic of Slovenia and faced two stony-faced security officials who ordered us to remove our coats, and deposit them together with our bags, laptops, phones, and digital recorder in a locker in the lobby. State Secretary Tadej Slapnik vouched for us and escorted us upstairs with Nena Dukozov, the head of the Cultural Centre of European Space Technologies. Tadej sounds and acts more like a startup founder than a top government official. Far from the typical bureaucratic molasses, there's an extraordinary public-private partnership taking place, and Tadej is leading that rapid shift: "In this fast-changing world, usually it's the government who is lacking speed. And if we would like to catch up, we have to do it together with the entrepreneurs. And the most important thing for the government is to be able to accelerate." In his office overlooking the square, Tadej told us of the extraordinary chain of events that led to the "Slovenian Fourth Industrial Revolution", a new technological age here in this Central European country of just two million people, marked by major advances in technology, an infusion of wealth, and a fresh confidence among young people in their ability to shape the future. Iconomi's groundbreaking ICO led to a valuation of several hundred million euros; it was followed by the creation of Cofound.it, which received even more funding (that's another story), and the country's investment model and technology-based economic development plan was off and running. Slovenia had the potential to transform, yet Tadej was keenly aware that his office was out of the loop. "There was almost no knowledge in the public sector in different ministries, among bureaucrats, on blockchain," he shared. "And without knowledge, there's a big possibility they will be afraid of it." A lifelong civil servant who'd served in Parliament and led innovative social entrepreneurship initiatives, Tadej schooled himself. He drew upon the developer community for personal coaching and advice, read up on blockchain, and then prepared to educate his colleagues.

The Challenge

In June of this year, the Ministry of Public Administration organized a three-day intensive tutorial on blockchain, inviting "presenters from different ministries, the Ministry of Finance, and also presentations by regulators." Participants came from the National Bank, the tax office, and other key departments. Why? Tadej explained: "With our experts, we gave them knowledge – that it has a potential beyond bitcoin.
To run the country, to run different processes. We invited Estonia and United Kingdom to share examples of how they already did some things. You know? When you are introducing new things, if you are the only frontrunner, then it's hard." Building on that collaborative model, Tadej also knew that he had to tap the local developer community's collective brainpower to mobilize his peers in government. On the last day, they organized a national blockchain meetup: "to see who, besides a few that we already knew, were active on blockchain." They held the event in the Noordung space center, a futuristic, saucer-shaped building housing a museum dedicated to the life and discoveries of the early-twentieth-century visionary Herman Potočnik Noordung, whose designs inspired significant developments in aeronautics. The center is a long 100-kilometer drive into the country from Ljubljana, and the organizers were uncertain how many would make the long trip on a Friday evening. Three hundred people showed up to the sold-out event, and they had to add another thousand participants via live webstream. The excitement and success of the endeavor were captured that day in a challenge by Zenel Batagelj, who said to the assembled crowd: "Vice President of the government, Mr. Koprivnikar, State Secretary Slapnik, all 300 here, we are launching the first and biggest project that will be compliant with EU legislation on data protection. We are launching it in this year. We already know today that we can launch our project in Estonia. We also know that we can launch it from Malta. But we are telling you today that we will make an effort to launch it in three months from Slovenia." Zenel was waving a giant carrot and a stick. To scale, the technologists needed the support of Slovenia's financial and regulatory sectors. That's no small proposition. It's a work in progress. So far, the developers are happy to provide the talent to educate policymakers, and Tadej says they are on their way to making it a reality: "We're learning, understanding, and co-creating. And with co-creation, it's not just we who are learning. The guys from the blockchain community are also learning what is the difference if you are running a company here, from Slovenia." Now, the Noordung space center is actively being rebranded as the Noordung Blockchain Hub.

Blockchain Green Gold in Slovenia

Blockchain fever is rampant in Slovenia, most visibly in the government's high-profile pursuits, such as green tech. Ljubljana won the European Green Capital Award in 2016, for its conscious efforts to preserve the environment (the quaint old city center is blissfully free of cars). Just about any concept or service can be tokenized to fund a project, distribute its assets, and run it, and blockchain projects don't have to entertain the speculation of bitcoin or cryptocurrencies. Blockchain is seen as a new tech superpower here, a way to accelerate national green and innovative initiatives. Tadej introduced us to Rok Gornik, a young professional working for SunContract, a blockchain-based peer-to-peer platform for managing surplus energy. The company raised $2 million last winter by proposing its plan on Ethereum, for a transparent energy service where consumers can track their usage and convert energy savings into currency. Rok was inspired by Iconomi's hit ICO. He blogged about blockchain for Coin Telegraph, a Slovenia-based internet publication, and won a job from SunContract.
He's a believer in the transparency and automation benefits: "Our first main advantage will be that people can follow their consumption on a blockchain," he said. "SunContract's peer-to-peer energy trading platform will enable small producers and consumers to buy and sell solar power and heating through a blockchain architecture, and invest in different energy projects." SunContract aims to provide a user-friendly, mainstream model for buying and selling electricity, tapping into the existing grid to optimize current operations while reducing costs and increasing efficiency. And this blockchain-enabled service is right in line with Slovenia's move to a circular economy. "We are shifting away from a linear economy that is taking out from nature," said Tadej, "to focusing more on reuse and recycling." The message we heard over and over again in Slovenia was that the country wants to do things right. The global ICO craze has led to frauds as well as overfunded offerings that too often lack the teams and technical know-how to build successful projects. Tim seems keenly aware that a lot is riding on his shoulders. The success of Iconomi led to the idea for a separate company, Cofound.it, co-founded by CEO Jan Isakovic and Daniel Zakrisson (Zenel Batagelj is Team Strategist), a platform to crowdsell promising startup blockchain projects. Demand was so high this June that Cofound.it "white-listed" 3,000 subscribers for a pre-sale that raised $14.8 million in 60 hours. That broke the world record for the largest pre-sale and eliminated the need for an ICO. Acting much like an online accelerator, Cofound.it has in just a few months already crowdsold five blockchain projects that have raised a total of approximately $50 million. At the end of our visit, we met Tim at the Iconomi office in downtown Ljubljana, and saw the core development team hard at work, huddled around big-screen Macs and whiteboards filled with schematics. The headcount has already hit 40, and Tim said that in just a few weeks they will need to move to make room for the 150 people they will need to stay on track next year. "We're onboarding people as much as we can," he said. "You want to do this smart." Instant blockchain mega-capitalization on a Silicon Valley scale has meant that Iconomi and Cofound.it have had to compress the management and maturation process. "We are kind of a public company from Day One," said Tim. "That's kind of funny. You are a startup, but then you are a public company. We've had to discover how to handle this." That's the big bet. If they can rise to the challenge, Iconomi, Cofound.it, and more importantly perhaps Slovenia, this tiny Central European nation, may help lead the blockchain revolution in Europe and beyond. This is the fourth in our European Series. Read the previous stories on Web Summit and on Paris.
On May 19th, The Times-Picayune and New Orleans's WVUE Fox News 8 will receive one of the nation's most prestigious prizes in American journalism, the Peabody Award. They are being honored, rightfully, for their collaboration on a series titled "Louisiana Purchased," which meticulously detailed the outsized role of money in Louisiana politics and government and the ways in which elected officials and their most generous donors seemingly take advantage of large gaps in the state's campaign finance laws. As a result of The Times-Picayune and WVUE's exhaustive reporting, several elected officials, from both political parties, were asked to account for a litany of questionable and problematic donations and expenditures, but truth be told, the overwhelming majority of these officials have yet to provide an adequate explanation. Several elected officials blatantly broke the law on donations from political action committees. Many were caught spending campaign funds on things that could hardly qualify as campaign activities: suites at Saints and LSU football games, extravagant dinners at some of the state's most expensive fine dining restaurants, concert tickets, and high-priced hotel rooms. Others seemed to be earning a living from their campaign donations, spending hundreds of dollars a week on gasoline, groceries, and even their daily cup of coffee. Though he was not specifically profiled as a part of the "Louisiana Purchased" series, I know of a State Representative in Central Louisiana, a Democrat, who has disclosed spending tens of thousands of dollars in campaign contributions, every year, on things most of us would consider basic necessities; according to his campaign finance reports, an enormous sum of that money was spent in one place, a small, inner-city convenience store less than a mile away from the Representative's home. Every single elected official in the state of Louisiana who receives campaign contributions also earns a salary that is paid for by taxpayers. So this raises the question: If these officials actually live off of their campaigns, what is the public really paying them for? And more importantly, who do these officials actually work for, the citizens who elected them and who pay their salaries or the wealthy donors and PACs who contributed to them and subsidize their lifestyles? ***** I know several of the "top 400 campaign donors" listed in the "Louisiana Purchased" series. A few of them, I've known for my entire life, and of those, I consider some to be personal family friends. Louisiana is a small state, after all. The population of Dallas and Houston, combined, is nearly four times that of the entire state of Louisiana. With all due respect to those I know on that top 400 list, they aren't policy experts; for the most part, they're just self-interested businessmen looking for a handout or a tax break or a few useful idiots they can convince to look the other way when it comes to regulatory enforcement. They don't give money to Republicans because they care about Medicaid expansion or marriage equality or school vouchers or gun safety laws or any of the issues that animate the modern Democratic Party; they give money, disproportionately, to Republicans because, first, Republicans are in charge, and, second, because Republican candidates and officials have repeatedly demonstrated their willingness to prioritize corporate welfare over social welfare.
It'd be easy enough for Louisiana Democrats to blame an elite group of campaign contributors for their problems at the polls and for hijacking the political process and turning Louisiana into a veritable oligarchy. And to be fair, I absolutely do not discount the pernicious ways in which government, on all levels, is increasingly controlled by the few and for the few, instead of by the people and for the people. But Louisiana Democrats, ultimately, can only blame themselves for their electoral troubles, particularly in statewide elections. Before I get too far into it, I want to make this abundantly clear: While I think there are many things the Louisiana Democratic Party, as an official organization, could be doing more effectively, I am and have always been a supporter of the party's executive director, Stephen Handwerk, and its chairwoman, Senator Karen Carter-Peterson. I know them both to be extremely competent and deeply dedicated public servants, people who are "in it" for all of the right reasons. Neither of them is to blame for what ails the Louisiana Democratic Party, but if the illness is to be treated, it must first be diagnosed. ***** Republicans control a supermajority of the Louisiana legislature; with the exception of Senator Mary Landrieu, Republicans hold every single statewide office. They're the beneficiaries of the largesse of campaign contributions. They've been able to recruit a deep bench of candidates, ensuring that they can remain competitive for at least the next three or four election cycles. And, crucially, the Louisiana Republican Party, to the greatest extent possible, stays on message. Most political pundits look at these facts and conclude that Louisiana is and will remain, at least in the foreseeable future, a deep red state. This is ridiculous. With the exception of Joey Durel, the Mayor of Lafayette, every major city in the State of Louisiana is led by a Democrat. Mayor Glover in Shreveport in the northwest, Mayor Mayo in Monroe in the northeast, Mayor Roy in Alexandria in the center of the state, Mayor Roach in Lake Charles in the southwest, Mayor Holden in Baton Rouge, and Mayor Landrieu in New Orleans. Republicans may be in charge of the State Capitol, but Democrats, from all corners of the state, control City Hall. And unlike the people in charge of the House that Huey Built, these Democratic Mayors are all popular and, more importantly, they're all effective. With mad respect to the great men and women who serve in the Louisiana State House and the State Senate, I think it would be a complete and utter disaster to let those dysfunctional lunatics define the future of the party. A few quick examples: More than 2 out of every 3 voters think marriage equality will eventually become law. More than half already approve of civil unions. So what happened, last Tuesday, when the legislature considered a bill that had absolutely nothing to do with marriage equality but would instead have repealed language from an old law that had already been struck down by the Supreme Court in Lawrence v. Texas? This wasn't about legalizing gay marriage, and only a complete idiot would really believe the Louisiana Family Forum's claims to the contrary. The bill required only that Louisiana follow the United States Supreme Court and stop pretending it has the authority to criminalize private, consensual sex between adults. The bill actually advanced out of committee, but then, of course, it was killed.
Ordinarily, I'd blame Republicans; their votes killed the bill, after all, but let's consider what Louisiana Democrats did and how they voted:

IN FAVOR of getting rid of the law that criminalizes sodomy: Reps. Jeff Arnold, D-New Orleans; Austin Badon, D-New Orleans; Wesley Bishop, D-New Orleans; Jared Brossett, D-New Orleans; Roy Burrell, D-Shreveport; Herbert Dixon, D-Alexandria; John Bel Edwards, D-Amite; Franklin Foil, R-Baton Rouge; A B Franklin, D-Lake Charles; Randal Gaines, D-LaPlace; Lowell (Chris) Hazel, R-Pineville; Dalton Honore, D-Baton Rouge; Marcus Hunter, D-Monroe; Edward "Ted" James, D-Baton Rouge; Patrick Jefferson, D-Homer; Nancy Landry, R-Lafayette; Terry Landry, D-New Iberia; Walt Leger, D-New Orleans; Jack Montoucet, D-Crowley; Helena Moreno, D-New Orleans; Vincent Pierre, D-Lafayette; Edward Price, D-Gonzales; Patricia Haynes Smith, D-Baton Rouge; Karen St. Germain, D-Plaquemines; Ledricka Thierry, D-Opelousas; Patrick Williams, D-Shreveport; Ebony Woodruff, D-Harvey

AGAINST getting rid of the law that criminalizes sodomy: Bryan Adams, R-Gretna; John "Andy" Anders, D-Vidalia; James Armes, D-Leesville; Taylor Barras, R-New Iberia; John Berthelot, R-Gonzales; Robert Billiot, R-Westwego; Stuart Bishop, R-Lafayette; Chris Broadwater, R-Hammond; Terry Brown, R-Colfax; Terry Burns, R-Haughton; Timothy Burns, R-Mandeville; Thomas Carmody, R-Shreveport; Steve Carter, R-Baton Rouge; Simone Champagne, R-Erath; Charles Chaney, R-Rayville; Patrick Connick, R-Marrero; Gregory Cromer, R-Slidell; Michael Danahay, D-Sulphur; Gordon Dove, R-Houma; Jim Fannin, R-Jonesboro; Ray Garofalo, R-Chalmette; Brett Geymann, R-Lake Charles; Jerry Gisclair, D-Larose; Hunter Greene, R-Baton Rouge; Mickey Guillory, D-Eunice; John Guinn, R-Jennings; Lance Harris, R-Alexandria; Joe Harrison, R-Gray; Kenneth Havard, R-Jackson; Cameron Henry, R-Metairie; Bob Hensgens, R-Abbeville; Dorothy Sue Hill, D-Dry Creek; Valarie Hodges, R-Denham Springs; Frank Hoffman, R-West Monroe; Paul Hollis, R-Covington; Frank Howard, R-Many; Mike Huval, R-Breaux Bridge; Barry Ivey, R-Baton Rouge; Robert Johnson, D-Marksville; Sam Jones, D-Franklin; Eddie Lambert, R-Gonzales; Bernard LeBas, D-Ville Platte; Christopher Leopold, R-Belle Chasse; Joe Lopinto, R-Metairie; Nick Lorusso, R-New Orleans; Sherman Mack, R-Livingston; Gregory Miller, R-Norco; Jay Morris, R-Monroe; Jim Morris, R-Oil City; Kevin Pearson, R-Slidell; Erich Ponti, R-Baton Rouge; Rogers Pope, R-Denham Springs; Stephen Pugh, R-Ponchatoula; Steve Pylant, R-Winnsboro; Eugene Reynolds, D-Minden; Jerome Richard, I-Thibodaux; Harold Ritchie, D-Bogalusa; Clay Schexnayder, R-Gonzales; John Schroder, R-Covington; Alan Seabaugh, R-Shreveport; Robert Shadoin, R-Ruston; Scott Simon, R-Abita Springs; Julie Stokes, R-Kenner; Kirk Talbot, R-River Ridge; Jeff Thompson, R-Bossier City; Lenar Whitney, R-Houma; Thomas Willmott, R-Kenner

Interestingly, Chris Hazel was one of only a handful of Republicans who broke from the party line and voted to repeal the unconstitutional law. Although Hazel has most recently signaled his intention to run for District Attorney against his former colleague Chris Roy, Jr., just yesterday, someone began polling for the 5th Congressional District, a seat that, presumably, became much more competitive given the so-called "Kissing Scandal." Meanwhile, Democratic Representative Robert Johnson, who is also considering a race for the 5th, voted to keep Louisiana's unconstitutional statute on the books.
In other words, the Republican took a stand on equality and fairness under the law, and the Democrat capitulated to ignorant fear. If the Louisiana Democratic Party is to reestablish itself and, once again, become competitive statewide, it first must invest in its local leaders. But it cannot do so haphazardly. Investment does not necessarily mean "money"; all of the state's Democratic Mayors and most of its Democratic councilpersons and police jurors are more than capable of raising their own money and running their own campaigns. Too often, candidates confuse the mission and the purpose of the Louisiana Democratic Party; it should not concern itself with running campaigns. Candidates should run campaigns. "Investment" means coherent messaging and marketing; it means demonstrating that the party's reach is much greater than inner cities; it means listening to elected officials who know how to win in majority-Republican and majority-Democratic precincts and developing a platform based on those issues: innovative crime prevention strategies, reinvestment in parks and recreation, building back roads and sidewalks and bike paths and, if possible, commuter rail, policies that promote sustainability and energy efficiency, and developing partnerships with private entities to transform certified blighted properties to improve quality of life. I'm sure I'm missing a few things in my initial assessment. For good measure, although Mayor Durel is a Republican, it is worth noting that every other municipality in the State of Louisiana would have followed his lead on providing Fiber To The Home had it not been for the collusion between entrenched business interests and Republicans in the Louisiana House and Senate, which effectively prevents other cities from providing broadband Internet as a public utility that is fifty times faster and fifteen percent cheaper. This was a promise denied to the rest of Louisiana's residents, most of whom still suffer from the "digital divide." There are other examples: State Representative Katrina Jackson, a Democrat, spearheaded legislation that would create draconian and superfluous regulations against abortion providers, closing three of the state's five clinics. And Representative Stephen Ortego, a Democrat, voted to make the Bible the state's official book. Louisiana Democrats should stand tall, instead of selling out. ***** The Louisiana Republican Party is led by Roger Villere, a florist who only became known because, when he was forty, he lost an election to David Duke. That is, when given the choice between the former grand wizard of the Ku Klux Klan and the current chairman of the Louisiana Republican Party, Villere's neighbors and fellow Republicans chose the Klansman. In 2011, Villere, as chairman of the Republican Party, ran for public office again; this time, for Lt. Governor. Villere received 6.7% of the vote; three times as many people voted for another Republican, Sammy Kershaw, a former country music star running a vanity campaign.
Maybe it seems unfair to reach all the way back, 25 years ago, to Villere's first election, but the truth is, David Duke has his fingerprints on the modern Louisiana Republican Party much more than they could ever possibly admit: Not only did that State House election for District 81 introduce Roger Villere to the Louisiana Republican Party, but Republicans, including former Louisiana Governor Mike Foster, former Congressman Woody Jenkins, and former State Representative Tony Perkins (who is now head of the Family Research Council), paid Duke nearly $100,000 for his mailing list; to get elected, they needed David Duke's help. ***** Of course, the Louisiana Democratic Party has long been beleaguered by systemic, institutional problems, including, most importantly, the perception that elections for local delegates are rigged. Two years ago, one of the delegates elected to attend the Democratic National Convention in Charlotte allegedly earned her seat after telling local party members that she was already an elected official, specifically the Mayor of a small Louisiana city. Whether these specific allegations are true or not, they speak to a larger problem: The party, on a local, grassroots level, suffers from a lack of credibility. And that is the first diagnosis: To reestablish integrity and credibility, the Louisiana Democratic Party should aggressively recruit a new roster of qualified delegates. On the local level, delegates should assist in identifying successful policies and elected Democrats' most important achievements. These titles should no longer be meaningless honorifics entitling you to a couple of subsidized hotel rooms every few years. Job titles carry important social cachet; they must be given judiciously. The Louisiana Democratic Party should deemphasize its political work at the State Capitol. The party's state representatives, instead, should focus on building a vastly more robust, more trustworthy, and more integrated social and professional network. Of course, it's not simple, but one of the party's weaknesses is the extent to which it has focused on national issues while ceding ground on issues that more directly affect state policy. The state party would be wise to re-embrace its populist past: supporting public schools and higher education, holding oil and gas companies responsible for the environmental damage they have inflicted (while, at the same time, advocating responsible domestic energy production), repealing all of the bad laws that make Louisiana a laughingstock of the nation, enacting meaningful campaign finance reform, demanding accountability and transparency in the Governor's office, and ensuring civil rights for all people; all of these things are poll-tested and voter-approved. And these issues don't mean the party becomes "the party of the past"; these issues also reflect the priorities of Louisiana's young progressives. To be sure, again, I am not suggesting that Democratic candidates should be subsidized by the party; that's a bad, top-down approach. I'm merely suggesting that the party should adopt a more Louisiana-centric platform, and it should be more willing to embarrass and call out Democrats like Katrina Jackson and others who undermine the party's outreach and message. The Louisiana Republican Party has done a great job rebranding itself and emerging from its sordid connection to David Duke, but it has done so by, more often than not, aligning itself with conservative Christian "values" voters.
They are the party of big business disguised as the party of traditional values. In recent years, this has meant electing and then re-electing a family-values candidate who frequented prostitutes. Earlier this year, we learned another "family values" Republican official, Congressman Vance McAllister, was cheating on his wife with one of his employees; according to the woman's husband and McAllister's former good friend, the Congressman, in his words, "is the most non-religious person I know." After McAllister's affair was exposed, Louisiana Republicans wasted no time distancing themselves from him, even at the risk of looking like hypocrites. They didn't care. Why? Because they knew, all along, the guy was as phony as a three-dollar bill. It's about coherent, consistent messaging. Editor's note: An earlier version of this post has been edited. And then reedited.
// src/components/mainParser/noRecoveryParser.ts
import { lex } from '../tokenDictionary/tokens';
import PlSqlParser from './rules';
import logParserErrors from './util/logParserErrors';

const parserInstance = new PlSqlParser({ recover: false });

function parse(input: string, log = false) {
  const lexResult = lex(input);

  // ".input" is a setter which will reset the parser's internal state.
  parserInstance.input = lexResult.tokens;

  // Invoke the top-level rule. With no embedded semantic actions, the
  // parser produces a concrete syntax tree (CST).
  const cst = parserInstance.global();

  if (parserInstance.errors.length > 0 && log) {
    logParserErrors(parserInstance.errors);
  }

  return { errors: parserInstance.errors, cst };
}

export default parse;
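A short caller sketch, under stated assumptions: the input statement and the relative import path are illustrative, and it assumes the default export above is used from a sibling module.

// Hypothetical usage sketch, not part of the original module.
import parse from './noRecoveryParser';

const { errors, cst } = parse('SELECT 1 FROM dual;', true);
if (errors.length === 0) {
  // Walk or pretty-print the concrete syntax tree here.
  console.log('parse succeeded', cst);
} else {
  console.error(`parse failed with ${errors.length} error(s)`);
}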
# Read n students and m teams; a[i] is the team currently assigned to student i.
n, m = (int(x) for x in input().split())
a = [int(x) for x in input().split()]

min_size = n // m          # every team must end up with at least n // m members
count = [0] * (m + 1)      # count[k] = students kept on team k so far
movable = set()            # students whose assignment can be rewritten

for i in range(n):
    # A student is movable if their team id is invalid (> m) or their team
    # already has enough members.
    if a[i] > m or count[a[i]] >= min_size:
        movable.add(i)
    else:
        count[a[i]] += 1

# Greedily fill under-sized teams with movable students, counting rewrites.
i = 0
changes = 0
for k in range(1, m + 1):
    while count[k] < min_size:
        while i not in movable:
            i += 1
        a[i] = k
        count[k] += 1
        changes += 1
        i += 1

print(min_size, changes)
for team in a:
    print(team)
Swiss scientists have developed a new wearable monitor — about the size of a wristwatch — that can track blood pressure as accurately as the standard pressure cuff used by doctors worldwide. The device, developed by company STBL Medical Research AG (STBL), could revolutionize the way people with high blood pressure track their condition and improve the effectiveness of treatment by providing an easy way to monitor it. "This measuring device can be used for medical purposes, for example as a precaution for high-risk patients or for treating high blood pressure, but also as a blood pressure and heart rate monitor for leisure activities and sports as well as for monitoring fitness in high-level sports," said Michael Tschudin, co-founder of STBL, who sees great potential for the device. High blood pressure is one of the most common causes of death worldwide, according to the World Health Organization. Fewer than half of individuals with the condition measure their blood pressure regularly — in part because of the cost and cumbersome procedures now required to measure it, according to the WHO. The new device would allow blood pressure measurements to be taken quickly and easily, outside of a doctor's office or hospital. The researchers said clinical trials are currently under way to test the best uses for the new monitor. "The sensor will be cheaper than existing 24-hour monitoring devices, such as those currently used in hospitals," said Tschudin.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Final implementation of Memoizer, here using a single atomic
 * computeIfAbsent on a ConcurrentHashMap.
 * Adapted from Goetz p. 108
 * @author Brian Goetz and Tim Peierls
 */
class Memoizer<A, V> implements Computable<A, V> {
  private final ConcurrentMap<A, V> cache = new ConcurrentHashMap<A, V>();
  private final Computable<A, V> c;

  public Memoizer(Computable<A, V> c) {
    this.c = c;
  }

  public V compute(final A arg) throws InterruptedException {
    return cache.computeIfAbsent(arg, (A argv) -> {
      try {
        return c.compute(argv);
      } catch (InterruptedException e) {
        // The mapping function may not throw checked exceptions, so restore
        // the interrupt flag and launder the exception itself (calling
        // e.getCause() here would discard it, since a fresh
        // InterruptedException has no cause).
        Thread.currentThread().interrupt();
        throw launderThrowable(e);
      }
    });
  }

  /**
   * Coerce a checked Throwable to an unchecked RuntimeException.
   * sestoft@itu.dk 2014-09-07: This method converts a Throwable
   * (which is a checked exception) into a RuntimeException (which is
   * an unchecked exception) or an IllegalStateException (which is a
   * subclass of RuntimeException and hence unchecked). It is unclear
   * why RuntimeException and Error are treated differently; both are
   * unchecked. A simpler (but grosser) approach is to simply throw a
   * new RuntimeException(t), thus wrapping the Throwable, but that
   * may lead to a RuntimeException containing a RuntimeException
   * which is a little strange. The original
   * java.util.concurrent.ExecutionException that wrapped the
   * Throwable is itself checked and therefore needs to be caught and
   * turned into something less obnoxious.
   * @author Brian Goetz and Tim Peierls
   */
  public static RuntimeException launderThrowable(Throwable t) {
    if (t instanceof RuntimeException)
      return (RuntimeException) t;
    else if (t instanceof Error)
      throw (Error) t;
    else
      throw new IllegalStateException("Not unchecked", t);
  }
}
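For context, a brief usage sketch follows. It assumes the Computable interface from Goetz's book (a single compute(A) method that may throw InterruptedException) is visible in the same package; the power-of-two function is just a stand-in for an expensive computation.

import java.math.BigInteger;

// Hypothetical usage sketch, not from the original source. Assumes Memoizer
// and the Goetz Computable interface are on the classpath in this package.
public class MemoizerDemo {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for an expensive, side-effect-free computation.
    Computable<Integer, BigInteger> slow = n -> BigInteger.valueOf(2).pow(n);
    Computable<Integer, BigInteger> memoized = new Memoizer<>(slow);

    System.out.println(memoized.compute(64)); // computed on the first call...
    System.out.println(memoized.compute(64)); // ...served from the cache after that
  }
}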
/**
 * Tries to construct an instance given an ordered set of words.
 *<p>
 * Note: currently the maximum number of words that can be contained
 * is limited to {@link #MAX_WORDS}; additionally, the maximum combined
 * length of all such words can not exceed roughly 28,000 characters.
 *
 * @return WordResolver constructed for the given set of words, if
 *   the word set size is not too big; null to indicate a "too big"
 *   word set.
 */
public static WordResolver constructInstance(TreeSet<String> wordSet)
{
    if (wordSet.size() > MAX_WORDS) {
        return null;
    }
    return new Builder(wordSet).construct();
}
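A minimal caller sketch, under stated assumptions: it relies only on the factory contract documented above (a null return signals that the word set exceeded the limits); the word list and the fallback behavior are illustrative.

import java.util.Arrays;
import java.util.TreeSet;

// Hypothetical caller sketch for the factory above; the words and the
// fallback are illustrative, not from the original source.
class WordResolverExample {
  static WordResolver buildResolver() {
    TreeSet<String> words = new TreeSet<>(Arrays.asList("alpha", "beta", "gamma"));
    WordResolver resolver = WordResolver.constructInstance(words);
    if (resolver == null) {
      // The word set exceeded MAX_WORDS or the total-length bound; fall back.
      throw new IllegalStateException("word set too large for WordResolver");
    }
    return resolver;
  }
}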
# spark_auto_mapper_fhir/value_sets/patient_medicine_change_types.py
from __future__ import annotations

from spark_auto_mapper_fhir.fhir_types.uri import FhirUri
from spark_auto_mapper_fhir.value_sets.generic_type import GenericTypeCode
from spark_auto_mapper.type_definitions.defined_types import AutoMapperTextInputType


# This file is auto-generated by generate_classes so do not edit manually
# noinspection PyPep8Naming
class PatientMedicineChangeTypesCode(GenericTypeCode):
    """
    PatientMedicineChangeTypes
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
        Example Item Flags for the List Resource. In this case, these are the
        kind of flags that would be used on a medication list at the end of a
        consultation.
    """

    def __init__(self, value: AutoMapperTextInputType):
        super().__init__(value=value)

    """
    urn:oid:1.2.36.1.2001.1001.101.104.16592
    """
    codeset: FhirUri = "urn:oid:1.2.36.1.2001.1001.101.104.16592"


class PatientMedicineChangeTypesCodeValues:
    """
    No change has been made to the status of this medicine item.
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """

    Unchanged = PatientMedicineChangeTypesCode("01")
    """
    The medicine item has changed. The change may be described in an extension
    (not defined yet)
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """
    Changed = PatientMedicineChangeTypesCode("02")
    """
    The prescription for this medicine item was cancelled by an authorized
    health care provider. The patient may be advised to complete the course of
    the prescribed medicine. This advice is a clinical decision made based on
    assessment of the patient's clinical condition.
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """
    Cancelled = PatientMedicineChangeTypesCode("03")
    """
    A new medicine item has been prescribed
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """
    Prescribed = PatientMedicineChangeTypesCode("04")
    """
    Administration of this medication item that the patient is currently taking
    is stopped or recommended to be stopped (i.e. instructed to be ceased by a
    health care provider). This cessation is anticipated to be permanent. The
    Change Description should describe the reason for cessation. Example uses:
    the medication in question is considered ineffective or has caused serious
    adverse effects. This value applies both to the cessation of a medication
    that is prescribed by another healthcare provider or patient
    self-administration of OTC medicines.
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """
    Ceased = PatientMedicineChangeTypesCode("05")
    """
    Administration of this medication item that the patient is currently taking
    is on hold, or instructed or recommended by a health care provider to be
    temporarily stopped, or subject to clinical review (i.e. the stop may be
    temporary or permanent depending on the outcome of clinical review), or
    temporarily suspended as a pre-requisite to certain surgical or diagnostic
    procedures.
    From: urn:oid:1.2.36.1.2001.1001.101.104.16592 in valuesets.xml
    """
    Suspended = PatientMedicineChangeTypesCode("06")
ASSESSMENT OF POSTPARTUM DEPRESSION IN A GROUP OF CHILEAN PARENTS

Background and Objective: Several studies have shown that not only mothers but also fathers can suffer from peripartum depression. This phenomenon has not been researched in Chile; therefore, the aim of the present study is to explore the presence of depressive symptoms in fathers and mothers during the postpartum period and describe their interaction.

Material and Methods: Users of the Western Metropolitan Health Service Unit were assessed 2 months after childbirth with a sociodemographic questionnaire, the Beck Depression Inventory (BDI-I), and the Edinburgh Postnatal Depression Scale (EPDS).

Results: Even though mothers score significantly higher on both scales, 18.5% of men surpass the cut-off score on the EPDS and 10.5% on the BDI-I.

Conclusion: These results stress the need to continue researching this phenomenon and to incorporate father assessment in perinatal checkups.

J Mens Health Vol 14:e56-e64; May 14, 2018. This article is distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International License.
INTRODUCTION

resulting negative impact at a personal and couple level as well as on child development. For many years clinicians suggested and research documented that perinatal depressive syndrome (PDS) mainly related to the maternal figure, with the mother being identified as the main caregiver during the baby's first years of life. Evidence showed rates between 10 and 20% in the female population, which are twice as high in developing countries. In Chile female PDS is unequally distributed according to the socioeconomic status, with rates of 41.3% in low-income population versus 27.7% in high-income sectors. 12 The main factors that have been linked to female PDS are having experienced stressful events during pregnancy, a poor couple relationship, and limited social support. 9,10,13 Several studies have shown that maternal PDS has negative effects on the mother, the father, the children, and the relationships among them. 8 Just like mothers, fathers are at a higher risk of displaying depressive symptomatology during the perinatal period. 6 Goodman observed prevalences ranging from 1.2 to 25.5% in the general population during the first year after childbirth, and a second meta-analysis found a 10.4% prevalence of paternal PDS in the general population, which revealed that rates could vary significantly depending on the time of assessment, the country where it was conducted, and the type of instrument used. 5,7 Some of the risk factors of male PDS described in the literature are depressive symptomatology in the mother, a history of depression, couple relationships marked by a lack of solidarity, poor support networks, joblessness, advanced age, and a low education level. 6,19,23 Paternal PDS has been shown to affect family functioning, the well-being of family members, marital satisfaction, and the economy of industrialized countries. In addition, several studies have reported that paternal depressive symptomatology during the postpartum period has an impact on child development. 24 Qualitative studies have shown that men display specific generic manifestations of PDS, such as more hostility, conflict, and anger rather than an increase in sadness. 18 Also, avoidant behaviors have been more frequently encountered, as manifested by an increment in hours spent at work, sports activity, sexual promiscuity, gambling, alcohol use, and self-medication. 31 Paternal PDS is not part of the standardized assessments included in health checkups during the postpartum period; therefore, the information available about this phenomenon has been collected through studies that do not belong to the assessment routines of health services.
Most of the studies to date have been carried out in Europe, mainly in England and Sweden, 2 countries with strong family and gender policies. 32 These studies tend to be longitudinal and employ the Edinburgh Postnatal Depression Scale (EPDS). 33 Given that research on paternal peripartum depression is still in a nascent state and that no studies have been conducted to assess it in Chile, the aim of the present study is to examine in an explorative way the presence of PDS in a group of fathers and mothers living in Chile.

METHODS

The present study assesses the presence of depressive symptomatology in fathers and mothers during the first year after childbirth through an exploratory, cross-sectional, and quantitative design. Ethical approval was obtained from the Institutional Review Board of the university where this study took place.

Participants

Legal-age couples, users of the Western Metropolitan Health Service Unit, who had had a child between February and June of the year 2016. The inclusion criteria for parents were being of legal age and having a child aged between 0 and 1 year. The exclusion criteria were the presence of somatic diseases, severe psychiatric disorders, or disability in one of the family members. A total of 1574 babies were born between February and June 2016 at the Hospital San Juan de Dios, which is part of the Western Metropolitan Health Network. Of this total, 382 couples (mothers and fathers) agreed to participate in the study, but only 128 individuals completed the surveys properly (65 men and 63 women). Table 1 shows the descriptive statistical values of the sample's sociodemographic variables.

Procedure

With the support of the Director of the Hospital San Juan de Dios and the Head Midwife of the Obstetrics and Gynecology Service, daily postnatal hospitalization visits were scheduled between February and June 2016. On those occasions, a psychologist from our research team invited mothers and fathers, or only the mothers when the father was not present, to participate in the study. The mothers and fathers who agreed to participate signed an informed consent form and provided their contact information (e-mail and telephone number), because they would be contacted again 8 weeks later to administer the questionnaires online or over the phone.

Instruments

Sociodemographic characteristics and family networks questionnaire. A specific questionnaire was created to collect information about the child's development and the family's sociodemographic background (family structure, education level, and occupation, among other aspects).

Beck Depression Inventory (BDI-I). 34 Self-report questionnaire with 21 items that assess current depressive symptomatology in adults. In this test, the subject must choose, from a set of 4 options ranked from least to most severe, the statement that best describes his/her state during the last week. Each item can be assigned a score from 0 to 3 points, for a total score ranging between 0 and 63. Higher scores reflect more depressive symptomatology. Regarding the psychometric properties of the instrument, there was adequate internal consistency in both the Spanish version, with α = .90, 35 and the Chilean version, with α = .92. 36 The Chilean version presented an adequate fit to a single-factor structure, and a score of 13/14 was proposed as a cut-off point to distinguish between a sample with known symptomatology and a sample without known symptoms. 36
In the current sample, α = .90 was calculated for the total sample, with α = .86 for fathers and α = .91 for mothers, which is considered adequate.

EPDS. Self-report instrument that contains 10 items and can be completed in approximately 5 minutes. It is an effective tool for screening depressive disorders during pregnancy and the postpartum period. Its maximum possible score is 30, with 10 or more points indicating possible depression of variable severity. The scale was validated in Chile for women during the postpartum period and displayed good internal consistency (α = .77), 37 with the highest sensitivity being achieved with a 9/10 threshold. 38 This value is the most suitable cut-off score for screening studies. 39 It has also been validated during pregnancy, displaying a one-factor structure, high internal consistency (α = .914), and a strong correlation with the BDI-I, with a Spearman's rho of .85 (p < .001). 40 In the sample used in this study, α = .86 was calculated for the total sample. This value reached .83 in the fathers' sample and .86 in that of the mothers; thus, its internal consistency for each sample is adequate. The male sample collected here showed an adequate fit to a single-factor structure (χ² = 54.5, p = .02, CFI = .96, RMSEA = .095) in a confirmatory factor analysis (manuscript in preparation). Most of the studies that have employed this scale in male populations have reported good sensitivity and specificity; however, the evidence is still inconclusive.

Data Analysis

In order to meet the set objectives, descriptive statistical values were calculated both for the main sociodemographic variables and for the studied variables. The differences between men and women in the symptomatology scales were assessed through nonparametric tests of differences in means (Wilcoxon test), given the characteristics of the sample. In the clinical sample, the differences in proportions between the sexes were assessed with the chi-square test. Finally, bivariate Pearson correlations were calculated between the father's symptomatology and that of the mother. All analyses were performed with version 3.1.2 of the R statistical software package. 46

Presence of Depressive Symptomatology in Fathers and Mothers

The percentage of men who displayed depressive symptomatology above the cut-off scores was 18.5% according to the EPDS and 10.5% according to the BDI, while in women these percentages reached 50.8% and 31.4%, respectively. Similarly, the percentage of men with symptomatology above the cut-off score was lower than the percentage of women on both scales (EPDS: χ² = 13.25, p < .001; BDI: χ² = 5.98, p = .01). The fathers and mothers who participated in this study obtained mean scores in the symptomatology scales below the threshold for each instrument (EPDS: M = 7.67, SD = 6.03; BDI: M = 8.04, SD = 7.74; see Table 2). Mothers scored significantly higher than fathers on the EPDS (W = 1140, p < .001). The same was true of the BDI (W = 887, p < .001).

Relationship Between Paternal and Maternal Symptomatology

A total of 24 dyads made up of both parents completed the EPDS. These data were used to calculate the bivariate association between the symptomatology of fathers and mothers. The correlation between the EPDS and the BDI-I scores was high, positive, and significant when considering the total sample (r = .84, p < .01). Table 3 shows the correlations between scores in the symptomatology scales, divided into fathers and mothers.
It can be observed that, even though the correlation between the father's and the mother's BDI scores is significant and high (r = .70, p = .01), the correlation between their EPDS scores is not significant (r = .11, p = .60). The association between the father's and the mother's BDI was significant even when controlling for education level, age of the mother, and presence or absence of complications during birth (β = 0.43, p = .03).

DISCUSSION

First of all, it is important to highlight that this study is an exploratory approximation to the topic, given the small and very specific sample, so the conclusions drawn from these results should be taken with caution. In this sample, one in 10 fathers displayed PDS according to BDI-I scores, a figure that nearly doubles when considering EPDS scores. These numbers appear to be higher than those reported by the Chilean National Health Survey, according to which only 8.5% of men displayed depressive symptomatology. 47 However, the different methodologies used make it impossible to conduct a direct comparison. On the other hand, the EPDS scores obtained are higher than what Paulson and Bazemore report, 5 which could be expected given that the sample assessed has a mid-to-low SES and that there is evidence for a link between poverty and depression. Different authors have explained the presence of paternal PDS. For men, the pregnancy and childbirth stage is also a time of psychological restructuring that forces them to deal with their personal and family history. 51,52 New fathers may feel that the child is monopolizing the mother and they may feel excluded from or jealous of this relationship. 53 Preserving the pre-childbirth interaction and sex life becomes hard or impossible, which can cause insecurity 54 or exhaustion, while the new responsibility and the psychological maladjustment can result in a depressive disorder. 7 In line with prior research, mothers are more than twice as likely to suffer from peripartum depressive symptoms. It has been suggested that this figure is due to the fact that the male population is being underdiagnosed as a result of the atypical symptomatology being expressed: aggressiveness and irritability instead of sadness. 55 In this regard, one of the scales used to assess fathers was originally intended to assess maternal symptomatology; thus, certain masculine depressive manifestations, such as avoidant behavior and substance use, are not taken into account. The rates observed may therefore underrepresent the percentage of fathers with concealed depressive disorders. On the other hand, the strong correlation found between the BDI scores of mothers and fathers appears to contradict the weak and non-significant correlation between the EPDS scores of both parents. This inconsistency could be explained by the fact that the BDI includes items related to somatic components of depression, such as sleep and fatigue, whereas the EPDS only refers to elements of a more emotional nature. Therefore, these somatic elements may be the source of the association between the parents' BDI scores, not the emotional elements, which are also measured by the EPDS. It must also be stressed that aspects of everyday life such as sleep and appetite can be strongly affected by the arrival of a newborn; therefore, a certain level of association between these elements is to be expected. It must be noted that the present study has several limitations.
First, its cross-sectional design makes it impossible to obtain more information about the course of the studied phenomenon. Second, the criteria used to select the sample preclude any conclusions about the total population of parents in Chile, given its low representativeness. Another major limitation is that, even though postpartum hospitalization was expected to be a good time to establish first contact with the participants, the e-mail or phone assessment 2 months after this first contact proved largely unfeasible: only 25% of the parents initially contacted actually participated in the study, because parents were unreachable or unwilling to respond to the instrument battery. Therefore, more efficient strategies are needed to assess fathers. Lastly, even though the selection of instruments was based on international studies and on the research being conducted by the Millennium Institute for Research in Depression and Personality (MIDAP), it must be highlighted that the EPDS has only been validated in Chile for use with mothers, not fathers. Future research must be more critical both conceptually and methodologically. It is necessary to plan cohort studies that consider representative samples and start the assessment in the prenatal stage in order to identify the course of paternal depression along with risk and protective factors. It would also be useful to create and validate acceptable screening and diagnostic instruments specifically for paternal postpartum depression; making them available to the health professionals in charge of pre- and postnatal checkups would make it possible to screen for PDS in both mothers and fathers.

CONCLUSION

Our findings warrant expanding our mother-child dyadic view and considering fathers in perinatal checkups, thus acknowledging the fact that mental health does not exist in isolation but is fundamentally a contextual and relational phenomenon. A systemic view is fundamental for assessment and for implementing interventions, especially during this stage, because the groundwork for the baby's future mental health is laid during the first year of life. Early diagnosis and timely intervention for both maternal and paternal PPD, regardless of the type of relationship of the parental couple, is key to fostering responsible parenting and family well-being.

DISCLOSURE

This study was supported by the Research and Postgraduate Direction of Universidad Alberto Hurtado and by the Millennium Scientific Initiative of the Ministry of Economy, Development and Tourism, Project IS130005.
Hybrid Minimally-invasive Esophagectomy for Esophageal Cancer: Clinical and Oncological Outcomes

Background/Aim: Esophagectomy is a major surgical procedure associated with a significant risk of morbidity and mortality that has traditionally been performed by an open approach. Although minimally invasive procedures for benign esophageal disease have been widely accepted worldwide, they have not yet been established for the treatment of malignancy. Patients and Methods: A total of 137 consecutive hybrid esophagectomies for cancer were performed by the same surgical team. The surgical approach was either a 2-stage or a 3-stage hybrid minimally-invasive esophagectomy. Results: The median age of patients was 64 years. Respiratory complication and anastomotic leak rates were 16.78% and 9.48%, respectively. Median follow-up was 48 months, with median overall survival and disease-free survival of 58 and 48 months, respectively. Conclusion: Advances in minimally invasive surgery can benefit patients with esophageal cancer, mainly by reducing post-operative respiratory complications. Hybrid esophagectomy is safe and feasible in tertiary esophago-gastric centers with vast expertise and can lead to improved clinical and oncological outcomes.
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// FSMThreadPool, BlockingQueueFacImpl and CacheKeyRunnable are assumed to come
// from the same project/package as this test; their imports are omitted here.

/**
 * Tester app, to test the FSMThreadPool
 * @author Jitendra Chittoda
 */
public class FSMThreadPoolTest {

    private static FSMThreadPool threadPool;
    private final int SIZE = 3;

    /**
     * Initialize the FSMThreadPool and clients
     */
    public void init() {
        threadPool = new FSMThreadPool(SIZE, new BlockingQueueFacImpl(), true);
        for (int i = 0; i < SIZE; i++) {
            // Clients that will assign the tasks in FSMThreadPool
            Thread machine = new FSMMachine("Key-" + (i + 1), 50);
            machine.start();
        }
    }

    /**
     * FSMMachine is acting as a Task creator and assigner.
     * This is acting as a client app
     * @author jitendra
     */
    public class FSMMachine extends Thread {
        private final int size;
        private final String key;

        /**
         * Machine created with parameters
         * @param key Key for doing the task sequencing
         * @param events Number of events/tasks to be assigned/queued
         */
        public FSMMachine(String key, int events) {
            this.key = key;
            this.size = events;
        }

        public void run() {
            AtomicInteger atomicInt = new AtomicInteger(0);
            for (int i = 0; i < size; i++) {
                Task task = new Task(key, atomicInt, i);
                threadPool.assignTask(task);
            }
        }
    }

    /**
     * Plain task that would simply log the statement
     * @author Jitendra Chittoda
     */
    class Task implements CacheKeyRunnable<String> {
        private final int event;
        private final String key;
        private final AtomicInteger atomicInt;

        public Task(String key, AtomicInteger expected, int event) {
            this.key = key;
            this.event = event;
            this.atomicInt = expected;
        }

        @Override
        public String getKey() {
            return key;
        }

        @Override
        public void run() {
            System.out.println("Thread[" + Thread.currentThread().getId() + "] key[" + key
                    + "] expected[" + atomicInt.get() + "] got[" + event + "]");
            // Tasks sharing a key must run in submission order: the counter value
            // observed here has to match this task's event number.
            int actual = atomicInt.getAndIncrement();
            assertEquals("Thread[" + Thread.currentThread().getId() + "] expected[" + event
                    + "] got[" + actual + "]", event, actual);
        }

        public String toString() {
            return "k[" + key + "] e[" + atomicInt.get() + "] a[" + event + "]";
        }
    }

    @Test
    public void testFSM() {
        this.init();
        threadPool.shutdown();
    }
}
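The test above assumes that FSMThreadPool executes tasks sharing a key strictly in submission order while still using multiple worker threads. The pool's own source is not shown here, so as a rough illustration of how such a guarantee is commonly provided, below is a minimal sketch that hashes each key to a fixed worker queue; the class and method names are illustrative assumptions, not FSMThreadPool's actual internals.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch only: one common way to provide per-key ordering,
// not necessarily how FSMThreadPool is actually implemented.
class KeyAffinityPool {
    private final BlockingQueue<Runnable>[] queues;

    @SuppressWarnings("unchecked")
    KeyAffinityPool(int size) {
        queues = new BlockingQueue[size];
        for (int i = 0; i < size; i++) {
            queues[i] = new LinkedBlockingQueue<>();
            final BlockingQueue<Runnable> q = queues[i];
            // One worker per queue; tasks in the same queue run in FIFO order.
            new Thread(() -> {
                try {
                    while (true) { q.take().run(); }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    // Same key -> same queue -> same worker -> sequential execution per key.
    void assign(String key, Runnable task) {
        int idx = Math.floorMod(key.hashCode(), queues.length);
        queues[idx].add(task);
    }
}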
package org.zhangbai.display.index;

import org.eclipse.swt.SWT;
import org.eclipse.swt.events.PaintEvent;
import org.eclipse.swt.events.PaintListener;
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.ImageData;
import org.eclipse.swt.graphics.PaletteData;
import org.eclipse.swt.graphics.RGB;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class IndexImageDepth1_kelett_532p {

    private static int RED_INDEX = 1;

    private static void resetImageDataBasedOnSigma355(ImageData imageData) {
        int count = 0, validateNum = 0;
        double aDouble = 0.0, sum = 0.0;
        int row = imageData.height - 1, value = 0, lastValue = -1;
        for (String line : ReadFile.readFile("Sigma532p.txt")) {
            count++;
            aDouble = Double.parseDouble(line);
            if (20 == count) {
                if (0 != validateNum) {
                    // Average the last 20 samples and scale to the image width.
                    value = (int) ((sum / validateNum) * 1000 * imageData.width);
                    if (value > imageData.width - 1) {
                        System.out.println(value);
                        value = imageData.width - 1;
                    }
                    imageData.setPixel(value, row, RED_INDEX);
                    // Connect consecutive values with a horizontal red segment.
                    if (lastValue != -1) {
                        if (lastValue < value) {
                            for (int i = lastValue; i < value; i++) {
                                imageData.setPixel(i, row, RED_INDEX);
                            }
                        } else {
                            for (int i = value; i < lastValue; i++) {
                                imageData.setPixel(i, row, RED_INDEX);
                            }
                        }
                    }
                    lastValue = value;
                }
                row--;
                count = 0;
                validateNum = 0;
                sum = 0.0;
            }
            if (aDouble < 0.0) {
                aDouble = -aDouble;
                // continue;
            }
            validateNum++;
            sum += aDouble;
        }
    }

    public static Image createIndexImage() {
        // Depth-1 image with a two-entry palette: index 0 = black, index 1 = red.
        PaletteData paletteData = new PaletteData(new RGB[] { MyRGB.BLACK, MyRGB.RED, });
        int width = 1024;
        int height = 500;
        ImageData imageData = new ImageData(width, height, 1, paletteData);
        resetImageDataBasedOnSigma355(imageData);
        return new Image(Display.getDefault(), imageData);
    }

    public static void main(String[] args) {
        Shell shell = new Shell(Display.getDefault());
        shell.setLayout(new FillLayout());
        shell.setSize(1200, 700);
        shell.setLocation(300, 300);
        Canvas canvas = new Canvas(shell, SWT.NONE);
        canvas.addPaintListener(new PaintListener() {
            public void paintControl(PaintEvent e) {
                e.gc.drawImage(createIndexImage(), 50, 50);
            }
        });
        shell.open();
        while (!shell.isDisposed()) {
            if (!Display.getDefault().readAndDispatch())
                Display.getDefault().sleep();
        }
    }
}
#!/usr/bin/env python

"""
csvcut is originally the work of eminent hackers <NAME> and <NAME>.

This code is forked from:
https://gist.github.com/561347/9846ebf8d0a69b06681da9255ffe3d3f59ec2c97

Used and modified with permission.
"""

from csvkit import CSVKitReader, CSVKitWriter
from csvkit.cli import CSVKitUtility, parse_column_identifiers

class CSVCut(CSVKitUtility):
    description = 'Filter and truncate CSV files. Like unix "cut" command, but for tabular data.'

    def add_arguments(self):
        self.argparser.add_argument('-n', '--names', dest='names_only', action='store_true',
                                    help='Display column names and indices from the input CSV and exit.')
        self.argparser.add_argument('-c', '--columns', dest='columns',
                                    help='A comma separated list of column indices or names to be extracted. Defaults to all columns.')
        self.argparser.add_argument('-C', '--not-columns', dest='not_columns',
                                    help='A comma separated list of column indices or names to be excluded. Defaults to no columns.')
        self.argparser.add_argument('-x', '--delete-empty-rows', dest='delete_empty', action='store_true',
                                    help='After cutting, delete rows which are completely empty.')

    def main(self):
        if self.args.names_only:
            self.print_column_names()
            return

        rows = CSVKitReader(self.args.file, **self.reader_kwargs)
        column_names = rows.next()
        column_ids = parse_column_identifiers(self.args.columns, column_names,
                                              self.args.zero_based, self.args.not_columns)

        output = CSVKitWriter(self.output_file, **self.writer_kwargs)
        output.writerow([column_names[c] for c in column_ids])

        for i, row in enumerate(rows):
            out_row = [row[c] if c < len(row) else None for c in column_ids]

            if self.args.delete_empty:
                # out_row may contain None for short rows, so skip falsy cells before joining.
                if ''.join(c for c in out_row if c) == '':
                    continue

            output.writerow(out_row)

def launch_new_instance():
    utility = CSVCut()
    utility.main()

if __name__ == "__main__":
    launch_new_instance()
/*
  Copyright (c) DataStax, Inc.

  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
*/

#ifndef CCM_DEPLOYMENT_TYPE_HPP
#define CCM_DEPLOYMENT_TYPE_HPP

#include <algorithm>
#include <cctype> // for std::tolower
#include <string>

namespace CCM {

/**
 * Deployment type indicating how CCM commands should be executed
 */
class DeploymentType {
public:
#ifdef CASS_USE_LIBSSH2
  enum Type { INVALID, LOCAL, REMOTE };
#else
  enum Type { INVALID, LOCAL };
#endif

  DeploymentType(Type type = LOCAL)
      : type_(type) {}

  const char* name() const {
    switch (type_) {
      case LOCAL:
        return "LOCAL";
#ifdef CASS_USE_LIBSSH2
      case REMOTE:
        return "REMOTE";
#endif
      default:
        return "INVALID";
    }
  }

  const char* to_string() const {
    switch (type_) {
      case LOCAL:
        return "Local";
#ifdef CASS_USE_LIBSSH2
      case REMOTE:
        return "Remote";
#endif
      default:
        return "Invalid Deployment Type";
    }
  }

  bool operator==(const DeploymentType& other) const { return type_ == other.type_; }

  static DeploymentType from_string(const std::string& str) {
    if (iequals(DeploymentType(LOCAL).name(), str)) {
      return DeploymentType(LOCAL);
    }
#ifdef CASS_USE_LIBSSH2
    else if (iequals(DeploymentType(REMOTE).name(), str)) {
      return DeploymentType(REMOTE);
    }
#endif
    return DeploymentType(INVALID);
  }

private:
  static bool iequalsc(char l, char r) { return std::tolower(l) == std::tolower(r); }

  static bool iequals(const std::string& lhs, const std::string& rhs) {
    return lhs.size() == rhs.size() && std::equal(lhs.begin(), lhs.end(), rhs.begin(), iequalsc);
  }

private:
  Type type_;
};

} // namespace CCM

#endif // CCM_DEPLOYMENT_TYPE_HPP
import { withYup } from "@remix-validated-form/with-yup";
import { ActionFunction, useFetcher } from "remix";
import { validationError, ValidatedForm } from "remix-validated-form";
import * as yup from "yup";
import { Input } from "~/components/Input";
import { SubmitButton } from "~/components/SubmitButton";

const schema = yup.object({
  firstName: yup.string().label("<NAME>").required(),
  lastName: yup.string().label("<NAME>").required(),
  email: yup.string().label("Email").email().required(),
});

const validator = withYup(schema);

export const action: ActionFunction = async ({ request }) => {
  const result = validator.validate(await request.formData());
  if (result.error) return validationError(result.error);
  const { firstName, lastName } = result.data;

  return { message: `Submitted for ${firstName} ${lastName}!` };
};

export default function FrontendValidation() {
  const fetcher = useFetcher();
  return (
    <ValidatedForm validator={validator} method="post" fetcher={fetcher}>
      {fetcher.data?.message && <h1>{fetcher.data.message}</h1>}
      <Input name="firstName" label="<NAME>" />
      <Input name="lastName" label="<NAME>" />
      <Input name="email" label="Email" />
      <SubmitButton />
    </ValidatedForm>
  );
}
Is feminism making us fat? The US is one of only four nations on earth lacking a federally mandated maternity leave, writes Filipovic. Did feminism make us fat? That is the implication of a new study, which shows that as women entered the workforce in larger numbers, their housekeeping hours went down and obesity rates went up. While the authors of the study are careful not to politicise the results, they choose to only look at female household labour and highlight how the decline of the stay-at-home mom means women spend less time preparing food and cleaning up after meals. They inaccurately claim that women spend less time with their kids today than 45 years ago. And the takeaway is that fewer hours at home mean a less healthy population. But that is not quite true. The real culprits of our nationwide bad physical health - which, by the way, afflicts people of all body sizes - are complex, and include both the food we eat and the increasingly sedentary lifestyles we lead. Increased gender equality has in fact been good for our bodies, our minds and our families. But the same policies that stall women's empowerment are also making us physically ill. The authors of the housework and weight study are clear that their results only show correlation, and do not prove that decreased work in the home leads to an uptick in the national obesity rate. And certainly a society-wide decrease in physical activity may be related to a society-wide increase in weight. We know that the weight of the average American has gone up over the past 45 years, which the study attributes at least in part to the decline in housework hours - women in 1965 spent an average of 25.7 hours per week on household tasks, and by 2010 spent 13.2 hours per week. But the household labour examined does not include childcare - which, according to other studies, would bring the numbers up to more than 28 hours per week. And while the authors claim that time spent with children decreased between 1965 and 2010, the study they cite on that point actually says the opposite - "US mothers have shed hours of housework but not the hours they devote to childrearing". Women today, even those who work full-time, spend more time with their kids than the stay-at-home moms of the 1960s. They spend more quality time, too - which means a lot of walking, running and playing that was not tallied in the study of women's household daily energy expenditure. The study also does not look at men's household labour. Other studies have shown that men participate more around the home than they did 40 years ago, and spend much more time with their children. They still do significantly less than women, but their involvement has improved substantially since 1965. Yet even though they do more at home, men's obesity rates have also gone up. Something much bigger is going on than "women are not doing as much housework as they used to". The size of any particular person should not be our concern, and it does not matter to me if people are fat or thin. But it does matter when the American population has a slew of health problems related to our lifestyle, what we eat and the chemicals to which we are routinely exposed. Our rates of diabetes, heart disease, food allergies and many cancers have skyrocketed. We are very sick. The exact causes of all our ills are unclear. There is no doubt that our food plays a major role.
Much of what we eat is processed and laden not just with addictive salt and sugar, but crammed full of unpronounceable chemicals and preservatives. And we are much more sedentary than we used to be. Housework and food preparation are easier with modern appliances, and more of our jobs involve long days in front of the computer instead of work on a factory line or on our feet. But there are also political policies that are making us sick and incentivising the actions that lead to poor health. It is easy to tell someone to eat a salad instead of a Big Mac. But if you are a low- or even medium-income parent working full-time or more than full-time, the calculus is not so straightforward. It is harder to find the time to go grocery shopping, chop all the ingredients, prepare a healthy protein, serve a meal your family will actually eat and then clean up. It can also be more expensive. In most places in the country, getting to a grocery store requires a car, or at least reliable public transportation. And the time you are doing the work of preparing food and cleaning is time you often are not spending with your family and unwinding after a stressful day. Americans today work more hours than ever before. Many of us work multiple jobs. We take fewer vacation days than residents of other industrialised nations, and we retire later. We spend more time with our kids than in the heyday of the homemaker. Our kids themselves have jam-packed schedules of school, SAT prep classes, sports, volunteering, after-school activities, lessons and all the other endeavours that are now nearly a necessity for college admissions (and not just for the wealthy). Between all that work and all that family time, something has to give - and for a lot of parents, that "something" is housework, healthy eating and physical exercise. It does not have to be this way. Our government channels enormous sums of money into artificially depressing the price of particularly odious food products through agricultural subsidies and dealings with big food companies. We also artificially depress the price of gasoline while investing relatively little in infrastructure and public transportation, incentivising driving and making walking or taking public transport less realistic. What we do not support - unlike almost every other developed nation - are worker's rights and healthy limits on labour. We are one of only four nations on earth - along with Liberia, Sierra Leone and Papua New Guinea - without federally mandated maternity leave. Parental leave is correlated with lower child poverty rates, improved child health, greater parental involvement, longer breastfeeding and higher maternal employment. But we have no national paid leave policy for parents. We are also the only industrialised nation that does not mandate paid vacation and sick days. Our minimum wage is startlingly low, and below what is actually liveable. In the 2004 presidential debates between George W Bush and John Kerry, a woman prefaced her question by saying she worked multiple jobs to survive. Bush lauded her work ethic, deeming her a true American. While the unemployment and underemployment rates remain high, Americans are taking any jobs they can get - including the ones that won't accommodate basic health needs, or where a sick day might mean no food on the table. We have an overworked population that does not see wages rising along with productivity or hours spent on the job.
We are not guaranteed time off for leisure, let alone sickness or pregnancy. Many of us do not have basic benefits like health care, and just cross our fingers that we do not get sick or meet with an accident. We are stretched in all directions. Women, who tend to be the primary caretakers of children and are much more likely to be single parents and to live in poverty, are especially impacted by these Byzantine workplace policies. It is tough to figure out what is more limited: our time or our disposable income. Then we are confronted with artificially cheap, physically addictive, nutrient-deficient but awfully tasty "food" available with almost no preparation or clean-up. It is being peddled by a big food lobby that bills its products as healthy and convenient. That same lobby faces little government oversight or regulation, and fights back hard on any regulatory attempts. As we work more and struggle harder to make ends meet, food corporations are making a whole lot of money off our limited time and limited means. We are sick because of it. But sure, the problem is that women working means they do not spend as much time cleaning the house, and now we are fatter. Look over there.
Repair and Restoration of the Historical Wellesley Bridge at Srirangapatna: A Case Study

The historical Wellesley Bridge was built across the river Cauvery at Srirangapatna by Krishnaraja Wadiyar under the supervision of Dewan Purnaih. The bridge was damaged when heavy rainfall was followed by heavy inflow from the Cauvery catchment area in Kodagu District. At present, the Government of Karnataka has taken measures to carry out the restoration works using the same materials as previously used, with slight changes. Hence, in the present investigation the authors carry out a case study of the above structure by testing the ingredients of the materials used in it and by conducting non-destructive tests on the structure to determine its strength before and after restoration. Based on the test results obtained, the authors draw conclusions with respect to durability aspects. In addition, the authors test a few alternative materials, i.e., lime mortar with cement (MM2 grade masonry mortar). Finally, from the test results obtained, the authors suggest a suitable material for such structures.
from data_collection.management.commands import BaseXpressDemocracyClubCsvImporter

class Command(BaseXpressDemocracyClubCsvImporter):
    council_id = "E08000027"
    addresses_name = "parl.2019-12-12/Version 1/merged.tsv"
    stations_name = "parl.2019-12-12/Version 1/merged.tsv"
    elections = ["parl.2019-12-12"]
    csv_delimiter = "\t"
    csv_encoding = "windows-1252"
    allow_station_point_from_postcode = False

    def station_record_to_dict(self, record):
        # St Andrews District Church
        if record.polling_place_id == "22550":
            record = record._replace(polling_place_postcode="DY3 3AB")

        return super().station_record_to_dict(record)

    def address_record_to_dict(self, record):
        rec = super().address_record_to_dict(record)
        uprn = record.property_urn.strip().lstrip("0")

        if uprn in [
            "90146449",  # DY69LJ -> DY69NW : Swan Hotel, Stream Road, Kingswinford, West Midlands
            "90163017",  # DY13EP -> DY11EP : 2 The Old Court House, 3 Priory Street, Dudley, West Midlands
        ]:
            rec["accept_suggestion"] = True

        if uprn in [
            "90153005",  # WV149AR -> WV149LE : Foxgloves, Elmdale Road, Coseley, West Midlands
            "90213042",  # DY81DX -> DY83DF : Flat Above, 78 High Street, Stourbridge, West Midlands
            "90214339",  # B629EN -> B634BN : 23 Hay Barn Close, Halesowen, West Midlands
            "90214017",  # B629EN -> DY84GF : 12 Hay Barn Close, Halesowen, West Midlands
            "90156425",  # B629EJ -> B629LB : 161 Long Lane, Halesowen, West Midlands
        ]:
            rec["accept_suggestion"] = False

        if uprn == "90161895":
            return None

        return rec
Geopolitical Shifts in the Evolving New World Order

Recent geopolitical developments point to the emergence of a multipolar new world order. Globalisation, brought about by the internationalisation of trade and the diffusion of technology, has radically changed the impact of world powers. A hegemon today is much better able to extend its influence and enforce its interests worldwide. The purpose of this paper is to examine the key requirements for a country to reach world-power status in the current globalised world and to discuss which countries meet the conditions to have a credible chance of becoming a dominant player in the emerging new world order. The paper concludes that China is best positioned to challenge the economic dominance of the United States. The European Union does not punch its weight in influencing global policies, and the question is whether it will be able to, or want to, assume the responsibilities of a world power. For the Visegrad 4 countries and the other Central and Eastern European countries, as members of the European Union and NATO situated at the crossroads between East and West, it is of vital interest to reflect on what geopolitical shifts can be expected in the decades ahead.
/**
 * Method that injects dependencies into the attributes of a well-annotated class
 * @param target the target of the dependency injection
 */
public static void injectAttributes(Object target) {
    Class<?> targetClass = target.getClass();
    for (Field field : targetClass.getDeclaredFields()) {
        if (field.isAnnotationPresent(InjectDependency.class)) {
            InjectDependency annotation = field.getAnnotation(InjectDependency.class);
            String annotationName = annotation.name();
            Class<?> containerClass = field.getAnnotation(InjectDependency.class).containerClass();
            // Derive the container's getter name from the annotation, e.g. "userDao" -> "getUserDao".
            String getterName = "get" + annotationName.substring(0, 1).toUpperCase() + annotationName.substring(1);
            try {
                Object container = containerClass.newInstance();
                Method getter = containerClass.getMethod(getterName);
                Object[] invokeParams = {};
                Object dependency = getter.invoke(container, invokeParams);
                field.setAccessible(true);
                field.set(target, dependency);
                field.setAccessible(false);
            } catch (InstantiationException | IllegalAccessException | NoSuchMethodException | InvocationTargetException e) {
                logger.error(e);
            }
        }
    }
}
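For clarity, here is a hypothetical caller's view of this method. The container and target classes below are illustrative assumptions, not code from the original project; they only show the naming contract the reflection above relies on (the name in the annotation must match a get-prefixed, no-argument getter on the container class).

// Hypothetical sketch: illustrates the contract injectAttributes expects.
class UserDao { }

class ServiceContainer {
    // Getter name must be "get" + capitalized annotation name ("userDao" -> getUserDao).
    public UserDao getUserDao() { return new UserDao(); }
}

class UserController {
    @InjectDependency(name = "userDao", containerClass = ServiceContainer.class)
    private UserDao userDao; // populated reflectively by injectAttributes(this)
}

// Somewhere at startup ("Injector" is a placeholder for the enclosing class):
// UserController controller = new UserController();
// Injector.injectAttributes(controller);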
The Effect of Kinesitherapy Exercises on the Level of Irisin among Females with Cardiovascular Diseases, Depending on Body Mass and Hormonal Status. The observation was conducted on 41 female subjects aged 32 to 69 with compensated cardiovascular diseases, 23 of whom had an increased body mass index (BMI). It was established that the older the females, the less of the irisin muscle hormone is found in the blood. In subjects with a higher BMI, the level of irisin in the blood is also higher. Direct correlations were found between the level of irisin and the levels of the female sex hormones estrogen and progesterone. Under the effect of kinesitherapy exercises, the level of irisin in females with normal BMI increases, whereas in females with a higher BMI it generally stays the same or decreases. The character of irisin's response to kinesitherapy exercises depends on its original level, the intensity of physical exercise, and the subject's physique.
/* Ensure that any allocation held by the given SIValue is guaranteed to not go out
 * of scope during the lifetime of this query by copying references to volatile memory.
 * Heap allocations that are not scoped to the input SIValue, such as strings from the AST
 * or a GraphEntity property, are not modified. */
void SIValue_Persist(SIValue *v) {
    if(v->allocation == M_VOLATILE) *v = SI_CloneValue(*v);
}
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package org.redkale.service;

import java.util.logging.Level;

import org.redkale.net.http.*;
import org.redkale.util.*;

/**
 * Replaced by org.redkale.net.http.WebSocketNodeService
 *
 * <p>
 * For details, see: https://redkale.org
 *
 * @deprecated 2.6.0
 * @author zhangjx
 */
@Deprecated
@AutoLoad(false)
@ResourceType(WebSocketNode.class)
public class WebSocketNodeService extends org.redkale.net.http.WebSocketNodeService {

    @Override
    public void init(AnyValue conf) {
        super.init(conf);
        // Leading space added to " is replaced by " so the class names do not run together in the log.
        logger.log(Level.WARNING, WebSocketNodeService.class.getName()
                + " is replaced by " + org.redkale.net.http.WebSocketNodeService.class.getName());
    }
}
Genome Sequence Analysis of In Vitro and In Vivo Phenotypes of Bunyamwera and Ngari Virus Isolates from Northern Kenya

Biological phenotypes of tri-segmented arboviruses display characteristics that map to mutation/s in the S, M or L segments of the genome. Plaque variants have been characterized for other viruses displaying varied phenotypes, including attenuation in growth and/or pathogenesis. In order to characterize variants of Bunyamwera and Ngari viruses, we isolated individual plaque size variants, small plaque (SP) and large plaque (LP), and determined in vitro growth properties and in vivo pathogenesis in suckling mice. We performed gene sequencing to identify mutations that may be responsible for the observed phenotype. The LP generally replicated faster than the SP, and the difference in growth rate was more pronounced in Bunyamwera virus isolates. Ngari virus isolates were more conserved, with few point mutations compared to Bunyamwera virus isolates, which displayed mutations in all three genome segments, although the majority were silent mutations. Contrary to expectation, the SP of Bunyamwera virus killed suckling mice significantly earlier than the LP. The LP attenuation may be due to a non-synonymous substitution (T858I) that mapped within the active site of the L protein. In this study, we identify natural mutations whose exact role in growth and pathogenesis needs to be determined through site-directed mutagenesis studies.

Introduction

Bunyamwera virus is the prototype virus of the Orthobunyavirus genus of the Bunyaviridae family of arboviruses. The virus is associated with febrile illness with headache, arthralgia, rash and infrequent central nervous system involvement. While viruses of the Orthobunyavirus genus are known to cause human disease, they were previously not associated with hemorrhagic manifestations. However, Ngari virus has been implicated in recent outbreaks of hemorrhagic fevers in Kenya and Somalia. Ngari virus is thought to have arisen through genetic reassortment between two bunyaviruses co-circulating within the same environment. Like other viruses within the Bunyaviridae family, the Bunyamwera virus genome consists of three negative-sense RNA segments that employ a variety of coding strategies leading to generation of a limited set of structural and non-structural proteins. The L (large) segment encodes a large protein that comprises the RNA-dependent RNA polymerase, for replication and transcription of genomic RNA segments. The M (medium) segment encodes a precursor polypeptide which yields the viral surface glycoproteins Gn and Gc and a nonstructural protein (NSm), and the S (small) segment encodes the nucleocapsid (NC) and a nonstructural protein (NSs) in overlapping reading frames. The prevalence of members of the Bunyaviridae family is likely underestimated because of the lack of detection tools, arising partly from their high level of diversity, limited phenotypic and genetic characterization, and the segmented nature of their genome. Orthobunyaviruses are mostly isolated and amplified in the interferon-defective African green monkey kidney epithelial Vero cell line, which may result in mutations yielding substrains that are phenotypically different from the parental wild type virus. Such observations have been reported among other viruses of the family Bunyaviridae, including Puumala virus, in which the large plaque (LP) grows to higher titers than the small plaque (SP) and the parental wild type (WT) virus.
Genome sequencing analysis revealed differences at two positions in the NC protein and two positions in the L protein. Attenuation, both in vivo and in vitro, has also been observed for the SP of West Nile virus. Attenuated pathogenesis of substrains of Dengue and Japanese encephalitis virus has also been reported in mice experiments. Thus, understanding the genetic diversity in a heterogeneous arbovirus population is important, given that any variant can be favored by selection, which ultimately affects fitness. We hypothesize that natural mutations may accumulate during passage of Bunyamwera and Ngari viruses obtained from entomological surveillance in Kenya. Such mutations may yield substrains with genotypic and phenotypic differences between each other and with the parental WT strains. In analyzing the viral phenotypes, we determined the kinetics of replication following infection of Vero cells. Additionally, we demonstrated pathogenesis of viral strains after intraperitoneal inoculation of mice. We report that Bunyamwera and Ngari virus substrains display contrasting phenotypes compared with each other and with the parental wild type.

Ethics statement

The study protocol (number SSC 2677) was approved by the Animal Care and Use Committee of the Kenya Medical Research Institute and by the Animal Ethics Committee of the University of Pretoria (Protocol number H012-13). All animal experiments were carried out in accordance with the regulations and guidelines of the Kenya Medical Research Institute and University of Pretoria Animal Ethics Committees.

Virus stock preparation

The sites in Kenya and the vector species from which the 5 virus isolates used in the study were obtained are summarized in Table 1. Vero cells (CCL-81, ATCC) were grown in T-75 culture flasks containing Eagle's minimum essential medium (MEM; Sigma) supplemented with 10% fetal bovine serum (Gibco-BRL), 2% L-glutamate (Sigma) and 2% penicillin/streptomycin (Gibco-BRL). Confluent cells were rinsed with sterile phosphate buffered saline (PBS), and 0.1 mL clarified homogenate of field-collected mosquitoes was added, followed by incubation at 37°C for one hour with constant rocking to allow virus adsorption. After incubation, maintenance medium (MEM with Earle's salts, 2% FBS, 2% glutamine, 100 U/mL penicillin, 100 µg/mL streptomycin, and 1 µL/mL amphotericin B) was added, and cells were incubated at 37°C and observed daily for cytopathic effects (CPE). Each isolate was grown individually to avoid cross-contamination, and supernatants were harvested when approximately 75% of the cells exhibited CPE. The culture supernatants were aliquoted and stored at −80°C until used. The stock concentrations were determined by plaque assay titration.

Plaque assay and purification

Vero cells were seeded on 6-well plates and incubated in a humidified CO2 incubator at 37°C overnight before use. The cells were used when they attained 75-90% confluence. Ten-fold dilutions of the virus isolates were prepared in maintenance media. Media was carefully aspirated from the wells using sterile transfer pipettes, and 100 µl of the appropriate viral dilution was added to each of duplicate wells of 6-well plates with gentle rocking to evenly distribute the virus. Plates were incubated at 37°C for 1 hour, after which media was carefully aspirated and 3 ml of 1.25% methylcellulose solution gently added to each plate. Plates were placed in a humidified CO2 incubator and incubated for 5 days. Development of plaques was monitored by visualization under an inverted microscope.
To facilitate visualization of plaques, methylcellulose solution was carefully aspirated using a transfer pipette, followed by fixation in 10% formaldehyde, after which plates were stained with crystal violet solution. Bunyamwera and Ngari virus isolates (previously passaged 3 times on Vero cells) with a titer of 1×10⁹ PFU/ml were diluted in maintenance media to approximately 10 PFU/ml. Confluent Vero cells in 24-well plates were infected with 100 µl of diluted virus per well. After adsorption for 1 h at 37°C, the cells were overlaid with 1.25% methylcellulose. Five days later, the methylcellulose medium was carefully aspirated and sterile Pasteur pipettes were used to pick plaques from wells with single plaques, which were placed in 500 µl of maintenance media. The plaque phenotypes were then propagated on Vero cells and the procedure repeated twice more, without intermediate amplification, for each of the plaque isolates. The purified isolates were then amplified by propagation on confluent Vero cells in flasks and then frozen at −80°C until use.

In vitro growth kinetics

The viral isolates, including the parental WT (i.e., a mixture of SP and LP), were used to infect 90% confluent monolayers of Vero cells at a multiplicity of infection of 0.01 and incubated for one hour to allow virus adsorption. Infected monolayers were washed twice with sterile PBS, overlaid with maintenance medium and incubated at 37°C. An aliquot of tissue culture fluid (0.5 ml) was collected every 12 hours for the first 2 days and once on day 3 of infection, mixed 1:10 with maintenance media and frozen at −80°C until use. Daily samples were titrated by plaque assay as described above. The statistical package R (R Development Core Team, 2008) was used for fitting exponential growth data using the Kruskal-Wallis test. The detection of correlated error structure in the growth curve data was carried out as follows: the log-transformed data were fit to linear mixed effects models using R, and an AR1 model was determined to fit the data better than a repeated measures model.

Molecular characterization of plaque purified phenotypes

Virus isolation and cDNA synthesis. For RNA extraction, the MagNA Pure LC RNA Isolation Kit I (Roche Diagnostics) was used (Table S1). Amplified DNA fragments were visualized by electrophoresis on a 1.5% agarose gel. The amplified DNA was purified and prepared for sequencing using the ExoSAP-IT PCR clean-up kit (USB Corp, Cleveland, OH) according to the manufacturer's instructions and stored at −20°C.

Sequence analysis of viral genomes. Sequencing was performed using different sets of primers for the S, M and L segments as defined above, using the Big Dye V3.1 kit (Applied Biosystems) and injection on a 3500XL genetic analyser (Foster City, California, USA). The sequences obtained were cleaned and edited using Bioedit software (www.mbio.ncsu.edu/BioEdit/BioEdit.html) for both the reads from the forward and reverse primers. Sequences obtained were compared to those in GenBank using the Basic Local Alignment Search Tool (BLAST) in NCBI GenBank (http://www.ncbi.nlm.nih.gov/blast/Blast) to identify similar sequences. The clean sequences of each segment of each phenotype were aligned against the corresponding segment sequences of the wild type virus isolate using Bioedit. Nucleotide and amino acid similarity and diversity between the virus phenotypes were computed in MEGA v5.20 using the p-distance method.
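For reference, the p-distance computed by MEGA is simply the proportion of aligned sites at which two sequences differ:

$p = \frac{n_d}{n}$,

where $n_d$ is the number of differing nucleotide (or amino acid) sites and $n$ is the total number of sites compared.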
Clinical disease in mice

Pathogenicity of the plaque phenotypes was evaluated in Swiss Albino suckling mice (1-4 days old) and 6-week-old mice. Mice were inoculated intraperitoneally with 100 µl of 10⁹ PFU/ml of selected wild type or amplified plaque-purified virus substrains in maintenance media. All mice were carefully observed twice daily, up to 14 days, for clinical disease, which included characteristic tremors and hind-limb paralysis. Survival functions were graphed for the two sets of viruses. Pairwise comparisons of survival curves were made using the Wilcoxon-Breslow test to test for equality of survivor functions.

Isolation and Purification of Plaque phenotypes

Plaque titration of Ngari and Bunyamwera viruses yielded plaques of two significantly distinct phenotypes: large plaques (LP) (range: 0.88-1.21 mm) and small plaques (SP) (range: 0.47-0.66 mm) (Figure 1). Each plaque phenotype was sub-cloned twice and purified each time by inoculation onto new Vero cells. The plaque phenotypes retained their plaque size after amplification by single passage in cell culture to generate viral stocks with high titers for onward experimentation. The cloned LP substrain produced larger plaques than the SP, suggesting that the former replicated more efficiently in Vero cells than the latter.

In vitro growth curves

In general, the LP phenotype of both Bunyamwera virus isolates grew at a faster rate and to a significantly higher titer (p = 0.009) than the SP phenotype by day 3 of infection (Figures 2A and 2B). The Bunyamwera WT reached approximately 5 logs higher than the virus titer of the SP and LP phenotypes by day 3 post-infection. However, the difference in growth of the SP and LP phenotypes was insignificant. Bunyamwera virus WT isolates generally grew to a higher titer than Ngari virus WT isolates. For Ngari virus isolates (Figures 2C-E), the difference in titer between the WT and plaque phenotypes at 3 days post-infection was not more than 1 log, except for isolate GSA/S7/5170 (Figure 2D). However, the difference in titer between the WT and plaque variants was not significant.

Genetic characterization of plaque phenotypes

Comparison of the nucleotide and amino acid sequences of low-passage Ngari virus isolates revealed little or no divergence within the S segments (Table 2). The N and NSs proteins of Ngari virus isolates were 100% conserved between the phenotypes. However, all Bunyamwera virus isolate phenotypes exhibited nucleotide substitutions in all segments compared to the WT, except the L segment of isolate GSA/S4/11232SP. There was a single nucleotide change in the M segment of isolates GSA/TS7/5170SP and ISL/TS2/5242LP and three changes in the L segment of isolate ISL/TS2/5242SP. All these single nucleotide changes were synonymous. There were no changes in the nucleotide sequences of the L segments of other Ngari virus isolates except isolate TND/S1/19801LP, which had one nucleotide substitution resulting in a non-synonymous change in the amino acid sequence (D84N). For the Bunyamwera virus isolates, there were more transversions than transitions, resulting in several non-synonymous codons.

Mice pathogenesis

We selected the WT and plaque phenotypes of isolates GSA/S2/11232 and TND/S7/19801 for the mice pathogenesis experiments. Six-week-old mice were not susceptible to infection by either virus isolate.
However, newborn mice were susceptible to infection with both virus isolates and displayed clinical symptoms such as hind limb paralysis, tremors, disorientation and mortality beginning 2-3 days post-inoculation. By day 4 post-infection, mice inoculated with isolate GSA/S4/11232SP had a 50% probability of survival, compared to the LP phenotype, which had approximately 70% survival probability (Figure 3A). The difference in survival probability between the SP and LP phenotypes was significant (p = 0.011). The converse was true for mice inoculated with Ngari virus isolate TND/S1/19801, where the LP phenotype was more lethal than the SP phenotype. Mice inoculated with the LP phenotype had a survival probability below 75% by day 4 and below 50% by day 5 post-infection, whereas the SP phenotype had a 100% and 50% survival probability at days 4 and 5 post-infection, respectively (Figure 3B). However, the difference in mortality was not significant (p = 0.3579).

[Figure 2. Growth kinetics of wild type parental and amplified plaque-purified phenotypes of (A-B) Bunyamwera and (C-E) Ngari virus isolates. Vero cell monolayers were infected at a multiplicity of infection of 0.01, and titers of tissue culture fluid sampled at different timepoints were determined by plaque assay; the experiment was replicated three times. doi:10.1371/journal.pone.0105446.g002]

Discussion

In the current study we evaluate the genetic diversity of plaque-purified phenotypes of Bunyamwera and Ngari virus isolates by gene sequencing. We also determine the rate of in vitro growth in Vero cells and evaluate pathogenesis of the viral phenotypes in Swiss Albino mice. The difference in growth was more pronounced in Bunyamwera than in Ngari virus isolates. This may be explained by the larger number of mutations observed in the former, possibly due to the extra passages which the SP and LP phenotypes underwent compared to the WT. In contrast, Ngari virus phenotypes had fewer mutations despite undergoing more passages than the WT during the purification and amplification processes. As expected, the LP phenotypes grew to a higher titer than the SP phenotypes for both Bunyamwera and Ngari virus isolates. Previous studies of other viruses have correlated plaque size with replication rate, with the LP phenotypes displaying a faster replication rate than the SP phenotype. The LP phenotypes were generally more virulent than SP phenotypes, and this would be expected to hold both in vitro and in vivo on the assumption that LP phenotypes produce larger foci of cell destruction. However, inoculation of mice with selected Bunyamwera and Ngari virus isolate phenotypes resulted in discordant observations in the present study. While mice inoculated with the SP phenotype of Ngari virus isolate TND/SA/19801 survived longer than those inoculated with the LP phenotype, the reverse was true for Bunyamwera virus isolate GSA/S4/11232, in which mice inoculated with the SP phenotype died 3 days post-inoculation compared to 4 days post-inoculation for the LP phenotype, and this difference in mortality rate was significant.
This difference in neurovirulence between phenotypes of Bunyamwera virus isolate GSA/S4/11232 in mice cannot fully be accounted for by the rate of replication, as shown in the one-step growth curves. Previous neurovirulence studies of viruses within the Orthobunyavirus genus have mapped such differences to the L segment. The study by Endres et al. was designed to identify molecular determinants responsible for attenuation of a variant California serogroup virus. Another study investigating the biological function of the Bunyamwera L protein demonstrated that mutations in the polymerase genome affect the ability of Bunyamwera virus to replicate in different cells. Thus, the discordance observed in the current study may have depended on the single nucleotide substitutions that were present in the different segments of the Bunyamwera virus isolate. It is interesting that all the nucleotide substitutions on the M segment, while resulting in non-synonymous amino acid changes, involved substitution of amino acids with similar properties; thus, a significant difference in protein function would be unexpected. The M segment substitutions resulted in an exchange of charged amino acids, glutamic acid for lysine. However, two mutations in the L segment of the LP phenotype involved substitution of amino acids with different properties, which are likely to alter the functionality of the L protein. Thus, for isolate GSA/S4/11232, the attenuated pathogenesis of the LP may be mapped to either of the 2 non-synonymous mutations on the L segment. The T858I mutation, which substitutes a non-polar for a polar amino acid and occurs within the predicted catalytic site of the L protein (AA 597-1330), seems the most plausible cause of the observed attenuation in pathogenesis. Mutation within the catalytic site of the L protein has been demonstrated to abolish polymerase activity in a previous study. With regard to isolate TND/S1/19801, whose SP phenotype was attenuated in mice but genetically similar to the WT virus, it is likely that this phenotype was present in a higher quantity in the WT virus, which is a mixture of both LP and SP phenotypes, and could have been preferentially sequenced. In the mice pathogenesis experiment, it is likely that the LP phenotype in the WT grew at a faster rate, as expected, and resulted in earlier death of mice compared to the SP phenotype. However, we did not isolate the infecting virus from mice to confirm this observation. Another limitation was the use of interferon-defective Vero cells for the one-step growth curve analysis, which may have limited our comparison with mice pathogenesis, as the GSA/S4/11232 SP phenotype may have been better at counteracting the interferon response. Additionally, we did not sequence the non-coding regions of the genomic segments, which have been documented to play a role in virus growth and pathogenesis. In summary, we have identified a mutation in the L segment of the Bunyamwera virus isolate GSA/S4/11232 LP phenotype which may be associated with decreased pathogenesis in suckling mice and virus replication in Vero cells. In addition, we have identified other natural mutations whose role in viral growth and pathogenesis should be determined. Site-directed mutagenesis studies may clarify the exact mutation involved in the observed phenotypic changes.

Supporting Information

Table S1. Primers used in sequencing of Kenyan Bunyamwera and Ngari virus isolates.
Primers for each segment were either designed based on conserved regions of sequences of Bunyamwera, Batai and Ngari viruses available in GenBank or obtained from previous publications. (DOCX)
package models

import (
	"fmt"

	"github.com/jinzhu/gorm"
	// _ "github.com/jinzhu/gorm/dialects/postgres"
	_ "github.com/jinzhu/gorm/dialects/mysql"

	"github.com/EarlyZhao/id_generator/conf"
)

var DB *gorm.DB
var ConnectionSucess chan bool

func init() {
	ConnectionSucess = make(chan bool)
	go connectionToDB()
}

func connectionToDB() {
	// Wait for the app to finish its init process:
	// the database connection needs the config data.
	<-conf.ConfigInitOverForDb

	var err error
	var dbUrl string
	config_db := conf.Config.Database

	if config_db == "mysql" {
		dbUrl = mysqlConnectionUrl()
		DB, err = gorm.Open("mysql", dbUrl)
	} else {
		// todo: pg
	}

	if err != nil {
		fmt.Println(conf.Config.Database)
		fmt.Println(conf.Config.Mysql)
		fmt.Println(dbUrl)
		panic(err)
	}

	// todo: DB.Set("gorm:table_options", "ENGINE=InnoDB").AutoMigrate(&List{})
	// DB.Ping()
	ConnectionSucess <- true
}

func mysqlConnectionUrl() string {
	config := conf.Config
	user := config.Mysql.Username
	password := config.Mysql.Password
	host := config.Mysql.Host
	port := config.Mysql.Port
	url := fmt.Sprintf("%s:%d", host, port)
	database := config.Mysql.Database

	dbUrl := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8&parseTime=True&loc=Local",
		user, password, url, database)
	return dbUrl
}
Acquired Hearing Loss: Is Prevention or Reversal a Realistic Goal? Acquired hearing loss develops when sensory cells inside the inner ear are damaged. The resulting hearing loss is labeled based on the cause of the damage, including noise-induced hearing loss (NIHL), drug-induced hearing loss (DIHL), and age-related hearing loss (ARHL). In addition, hearing loss can develop suddenly with no known cause, in which case it is termed idiopathic sudden sensorineural hearing loss (ISSNHL). Some 30 to 60% of ISSNHL cases show spontaneous recovery, with the rest resulting in permanent acquired hearing loss.
package me.neznamy.tab.shared.packets;

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.bukkit.Bukkit;
import org.bukkit.inventory.ItemStack;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

import me.neznamy.tab.shared.ProtocolVersion;
import me.neznamy.tab.shared.RGBUtils;
import me.neznamy.tab.shared.config.Configs;
import me.neznamy.tab.shared.placeholders.Placeholders;

@SuppressWarnings("unchecked")
public class IChatBaseComponent {

    public static final String EMPTY_COMPONENT = "{\"translate\":\"\"}";

    private static Class<?> NBTTagCompound;
    private static Method CraftItemStack_asNMSCopy;
    private static Method ItemStack_save;

    static {
        try {
            String pack = Bukkit.getServer().getClass().getPackage().getName().split("\\.")[3];
            NBTTagCompound = Class.forName("net.minecraft.server." + pack + ".NBTTagCompound");
            CraftItemStack_asNMSCopy = Class.forName("org.bukkit.craftbukkit." + pack + ".inventory.CraftItemStack").getMethod("asNMSCopy", ItemStack.class);
            ItemStack_save = Class.forName("net.minecraft.server." + pack + ".ItemStack").getMethod("save", NBTTagCompound);
        } catch (Throwable t) {
        }
    }

    private String text;
    private TextColor color;
    private Boolean bold;
    private Boolean italic;
    private Boolean underlined;
    private Boolean strikethrough;
    private Boolean obfuscated;

    private ClickAction clickAction;
    private Object clickValue;
    private HoverAction hoverAction;
    private String hoverValue;

    private List<IChatBaseComponent> extra;
    private JSONObject jsonObject = new JSONObject();

    public IChatBaseComponent() {
    }

    public IChatBaseComponent(String text) {
        setText(text);
    }

    public List<IChatBaseComponent> getExtra() {
        return extra;
    }

    public IChatBaseComponent setExtra(List<IChatBaseComponent> components) {
        this.extra = components;
        jsonObject.put("extra", extra);
        return this;
    }

    public IChatBaseComponent addExtra(IChatBaseComponent child) {
        if (extra == null) {
            extra = new ArrayList<IChatBaseComponent>();
            jsonObject.put("extra", extra);
        }
        extra.add(child);
        return this;
    }

    public String getText() { return text; }
    public TextColor getColor() { return color; }
    public boolean isBold() { return bold == null ? false : bold; }
    public boolean isItalic() { return italic == null ? false : italic; }
    public boolean isUnderlined() { return underlined == null ? false : underlined; }
    public boolean isStrikethrough() { return strikethrough == null ? false : strikethrough; }
    public boolean isObfuscated() { return obfuscated == null ? false : obfuscated; }

    public IChatBaseComponent setText(String text) {
        this.text = text;
        if (text != null) {
            jsonObject.put("text", text);
        } else {
            jsonObject.remove("text");
        }
        return this;
    }

    public IChatBaseComponent setColor(TextColor color) {
        this.color = color;
        return this;
    }

    public IChatBaseComponent setBold(Boolean bold) {
        this.bold = bold;
        if (bold != null) {
            jsonObject.put("bold", bold);
        } else {
            jsonObject.remove("bold");
        }
        return this;
    }

    public IChatBaseComponent setItalic(Boolean italic) {
        this.italic = italic;
        if (italic != null) {
            jsonObject.put("italic", italic);
        } else {
            jsonObject.remove("italic");
        }
        return this;
    }

    public IChatBaseComponent setUnderlined(Boolean underlined) {
        this.underlined = underlined;
        if (underlined != null) {
            jsonObject.put("underlined", underlined);
        } else {
            jsonObject.remove("underlined");
        }
        return this;
    }

    public IChatBaseComponent setStrikethrough(Boolean strikethrough) {
        this.strikethrough = strikethrough;
        if (strikethrough != null) {
            jsonObject.put("strikethrough", strikethrough);
        } else {
            jsonObject.remove("strikethrough");
        }
        return this;
    }

    public IChatBaseComponent setObfuscated(Boolean obfuscated) {
        this.obfuscated = obfuscated;
        if (obfuscated != null) {
            jsonObject.put("obfuscated", obfuscated);
        } else {
            jsonObject.remove("obfuscated");
        }
        return this;
    }

    public ClickAction getClickAction() { return clickAction; }
    public Object getClickValue() { return clickValue; }

    public IChatBaseComponent onClickOpenUrl(String url) {
        return onClick(ClickAction.OPEN_URL, url);
    }

    public IChatBaseComponent onClickRunCommand(String command) {
        return onClick(ClickAction.RUN_COMMAND, command);
    }

    public IChatBaseComponent onClickSuggestCommand(String command) {
        return onClick(ClickAction.SUGGEST_COMMAND, command);
    }

    public IChatBaseComponent onClickChangePage(int newpage) {
        return onClick(ClickAction.CHANGE_PAGE, newpage);
    }

    private IChatBaseComponent onClick(ClickAction action, Object value) {
        clickAction = action;
        clickValue = value;
        JSONObject click = new JSONObject();
        click.put("action", action.toString().toLowerCase());
        click.put("value", value);
        jsonObject.put("clickEvent", click);
        return this;
    }

    public HoverAction getHoverAction() { return hoverAction; }
    public String getHoverValue() { return hoverValue; }

    public IChatBaseComponent onHoverShowText(String text) {
        return onHover(HoverAction.SHOW_TEXT, text);
    }

    public IChatBaseComponent onHoverShowItem(ItemStack item) {
        return onHover(HoverAction.SHOW_ITEM, serialize(item));
    }

    private String serialize(ItemStack item) {
        try {
            return ItemStack_save.invoke(CraftItemStack_asNMSCopy.invoke(null, item), NBTTagCompound.getConstructor().newInstance()).toString();
        } catch (Throwable t) {
            t.printStackTrace();
            return "null";
        }
    }

    public IChatBaseComponent onHoverShowEntity(UUID id, String customname, String type) {
        JSONObject json = new JSONObject();
        json.put("id", id.toString());
        if (type != null) json.put("type", type);
        if (customname != null) json.put("name", customname);
        return onHover(HoverAction.SHOW_ENTITY, json.toString());
    }

    private IChatBaseComponent onHover(HoverAction action, String value) {
        hoverAction = action;
        hoverValue = value;
        JSONObject hover = new JSONObject();
        hover.put("action", action.toString().toLowerCase());
        hover.put("value", value);
        jsonObject.put("hoverEvent", hover);
        return this;
    }

    public static IChatBaseComponent fromString(String json) {
        try {
            if (json == null) return null;
            JSONObject jsonObject = ((JSONObject) new JSONParser().parse(json));
            IChatBaseComponent component = new IChatBaseComponent();
            component.setText((String) jsonObject.get("text"));
            component.setBold((Boolean) jsonObject.get("bold"));
            component.setItalic((Boolean) jsonObject.get("italic"));
            component.setUnderlined((Boolean) jsonObject.get("underlined"));
            component.setStrikethrough((Boolean) jsonObject.get("strikethrough"));
            component.setObfuscated((Boolean) jsonObject.get("obfuscated"));
            component.setColor(TextColor.fromString(((String) jsonObject.get("color"))));
            if (jsonObject.containsKey("clickEvent")) {
                JSONObject clickEvent = (JSONObject) jsonObject.get("clickEvent");
                String action = (String) clickEvent.get("action");
                Object value = (Object) clickEvent.get("value");
                component.onClick(ClickAction.valueOf(action.toUpperCase()), value);
            }
            if (jsonObject.containsKey("hoverEvent")) {
                JSONObject hoverEvent = (JSONObject) jsonObject.get("hoverEvent");
                String action = (String) hoverEvent.get("action");
                String value = (String) hoverEvent.get("value");
                component.onHover(HoverAction.valueOf(action.toUpperCase()), value);
            }
            if (jsonObject.containsKey("extra")) {
                List<JSONObject> list = (List<JSONObject>) jsonObject.get("extra");
                for (JSONObject extra : list) {
                    component.addExtra(fromString(extra.toString()));
                }
            }
            return component;
        } catch (ParseException | ClassCastException e) {
            return fromColoredText(json);
        }
    }

    public String toString(ProtocolVersion clientVersion) {
        if (extra == null) {
            if (text == null) return null;
            if (text.length() == 0) return EMPTY_COMPONENT;
        }
        //the core component, fixing all colors
        if (color != null) {
            jsonObject.put("color", color.toString(clientVersion));
        }
        if (extra != null) {
            for (IChatBaseComponent extra : extra) {
                if (extra.color != null) {
                    extra.jsonObject.put("color", extra.color.toString(clientVersion));
                }
            }
        }
        return toString();
    }

    public String toString() {
        if (ProtocolVersion.SERVER_VERSION.getMinorVersion() >= 7) {
            //1.7+
            return jsonObject.toString();
        } else {
            String text = toColoredText();
            if (ProtocolVersion.SERVER_VERSION.getMinorVersion() == 6) {
                //1.6.x
                JSONObject jsonObject = new JSONObject();
                jsonObject.put("text", text);
                return jsonObject.toString();
            } else {
                //1.5.x
                return text;
            }
        }
    }

    public static IChatBaseComponent fromColoredText(String message) {
        if (message == null) return new IChatBaseComponent();
        if (Configs.SECRET_rgb_support) {
            message = RGBUtils.applyFormats(message);
        }
        List<IChatBaseComponent> components = new ArrayList<IChatBaseComponent>();
        StringBuilder builder = new StringBuilder();
        IChatBaseComponent component = new IChatBaseComponent();
        for (int i = 0; i < message.length(); i++) {
            char c = message.charAt(i);
            if (c == Placeholders.colorChar || c == '&') {
                i++;
                if (i >= message.length()) {
                    break;
                }
                c = message.charAt(i);
                if ((c >= 'A') && (c <= 'Z')) {
                    c = (char) (c + ' ');
                }
                EnumChatFormat format = EnumChatFormat.getByChar(c);
                if (format != null) {
                    if (builder.length() > 0) {
                        component.setText(builder.toString());
                        components.add(component);
                        component = new IChatBaseComponent();
                        builder = new StringBuilder();
                    }
                    switch (format) {
                    case BOLD:
                        component.setBold(true);
                        break;
                    case ITALIC:
                        component.setItalic(true);
                        break;
                    case UNDERLINE:
                        component.setUnderlined(true);
                        break;
                    case STRIKETHROUGH:
                        component.setStrikethrough(true);
                        break;
                    case OBFUSCATED:
                        component.setObfuscated(true);
                        break;
                    case RESET:
                        format = EnumChatFormat.WHITE;
                    default:
                        component = new IChatBaseComponent();
                        component.setColor(new TextColor(format));
                        break;
                    }
                }
            } else if (Configs.SECRET_rgb_support && c == '#') {
                try {
                    String hex = message.substring(i + 1, i + 7);
                    TextColor color = new TextColor(hex); //the validation check is in constructor
                    if (builder.length() > 0) {
                        component.setText(builder.toString());
                        components.add(component);
                        component = new IChatBaseComponent();
                        builder = new StringBuilder();
                    }
                    component = new IChatBaseComponent();
                    component.setColor(color);
                    i += 6;
                } catch (Exception e) {
                    //invalid hex code
                    builder.append(c);
                }
            } else {
                builder.append(c);
            }
        }
        component.setText(builder.toString());
        components.add(component);
        return new IChatBaseComponent("").setExtra(components);
    }

    public String toColoredText() {
        StringBuilder builder = new StringBuilder();
        if (color != null) builder.append(color.legacy.getFormat());
        if (isBold()) builder.append(EnumChatFormat.BOLD.getFormat());
        if (isItalic()) builder.append(EnumChatFormat.ITALIC.getFormat());
        if (isUnderlined()) builder.append(EnumChatFormat.UNDERLINE.getFormat());
        if (isStrikethrough()) builder.append(EnumChatFormat.STRIKETHROUGH.getFormat());
        if (isObfuscated()) builder.append(EnumChatFormat.OBFUSCATED.getFormat());
        if (text != null) builder.append(text);
        if (extra != null) {
            for (IChatBaseComponent component : extra) {
                builder.append(component.toColoredText());
            }
        }
        return builder.toString();
    }

    public String toRawText() {
        StringBuilder builder = new StringBuilder();
        if (text != null) builder.append(text);
        if (extra != null) {
            for (IChatBaseComponent extra : extra) {
                if (extra.text != null) builder.append(extra.text);
            }
        }
        return builder.toString();
    }

    public static IChatBaseComponent optimizedComponent(String text) {
        return text != null && (text.contains("#") || text.contains("&x") || text.contains(Placeholders.colorChar + "x"))
                ? IChatBaseComponent.fromColoredText(text)
                : new IChatBaseComponent(text);
    }

    public enum ClickAction {
        OPEN_URL,
        @Deprecated
        OPEN_FILE, //Cannot be sent by server
        RUN_COMMAND,
        @Deprecated
        TWITCH_USER_INFO, //Removed in 1.9
        CHANGE_PAGE,
        SUGGEST_COMMAND,
        COPY_TO_CLIPBOARD; //since 1.15
    }

    public enum HoverAction {
        SHOW_TEXT,
        SHOW_ITEM,
        SHOW_ENTITY,
        @Deprecated
        SHOW_ACHIEVEMENT; //Removed in 1.12
    }

    public static class TextColor {

        private int red;
        private int green;
        private int blue;
        private EnumChatFormat legacy;

        public TextColor(EnumChatFormat legacy) {
            this.red = legacy.red;
            this.green = legacy.green;
            this.blue = legacy.blue;
            this.legacy = legacy;
        }

        public TextColor(String hexCode) {
            int hexColor = Integer.parseInt(hexCode, 16);
            red = (hexColor >> 16) & 0xFF;
            green = (hexColor >> 8) & 0xFF;
            blue = hexColor & 0xFF;
            // Find the closest legacy color to the given RGB value.
            double minDist = 9999;
            double dist;
            for (EnumChatFormat color : EnumChatFormat.values()) {
                int r = (int) Math.pow(color.red - red, 2);
                int g = (int) Math.pow(color.green - green, 2);
                int b = (int) Math.pow(color.blue - blue, 2);
                dist = Math.sqrt(r + g + b);
                if (dist < minDist) {
                    minDist = dist;
                    legacy = color;
                }
            }
        }

        public String toString(ProtocolVersion clientVersion) {
            if (clientVersion.getMinorVersion() >= 16) {
                EnumChatFormat legacyEquivalent = EnumChatFormat.fromRGBExact(red, green, blue);
                if (legacyEquivalent != null) {
                    //not sending old colors as RGB to 1.16 clients if not needed, also viaversion blocks that as well
                    return legacyEquivalent.toString().toLowerCase();
                }
                return "#" + RGBUtils.toHexString(red, green, blue);
            } else {
                return legacy.toString().toLowerCase();
            }
        }

        public static TextColor fromString(String string) {
            if (string == null) return null;
            if (string.startsWith("#")) {
                return new TextColor(string.substring(1));
            } else {
                return new TextColor(EnumChatFormat.valueOf(string.toUpperCase()));
            }
        }

        public int getRed() { return red; }
        public int getGreen() { return green;
} public int getBlue() { return blue; } } }
package jpype.properties;

public class TestBean {
	public static String m1;
	public String m2;
	public String m3;
	public String m4;
	public String m5;

	public String getPropertyMember() {
		return this.m2;
	}

	public void setPropertyMember(String value) {
		this.m2 = value;
	}

	public static String getPropertyStatic() {
		return m1;
	}

	public static void setPropertyStatic(String value) {
		m1 = value;
	}

	public String getReadOnly() {
		return this.m3;
	}

	public void setWriteOnly(String value) {
		this.m4 = value;
	}

	public void setWith(String value) {
		this.m5 = value;
	}

	public String getWith() {
		return this.m5;
	}

	public void setFailure1(String value, int i) {
	}

	public String getFailure2(int i) {
		return "fail";
	}
}
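This bean is the kind of class exercised from Python through JPype's optional bean-property support; the following is a minimal sketch under several assumptions (the `jpype.beans` patch being enabled and the compiled-class location are ours, not part of this file):

import jpype
import jpype.beans  # optional JPype patch exposing get/set pairs as Python properties

jpype.startJVM(classpath=["build/classes"])  # assumption: where TestBean.class lives
TestBean = jpype.JClass("jpype.properties.TestBean")

bean = TestBean()
bean.propertyMember = "hello"   # dispatches to setPropertyMember(...)
print(bean.propertyMember)      # dispatches to getPropertyMember()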
Calcium level is maintained by parathyroid hormone, 1α,25(OH)2D and calcitonin in cooperation with bone, kidney and intestine. In the kidney, activation of vitamin D and transcellular calcium transport in the distal tubule regulate calcium concentration. In the parathyroid glands, the calcium-sensing receptor senses extracellular calcium and controls parathyroid hormone secretion, which receives negative feedback from 1α,25(OH)2D and phosphate.
Self, Social, Team, and Situational Factors Influencing Televised Sports Viewership This study examined personal, social, and team motives associated with the consumption of televised sports (CTS) while taking into consideration market constraints variables. Research participants (N = 304) were university students who responded to a questionnaire that consisted of four segments: (a) watching televised sports, (b) motives for watching televised sports, (c) situational constraints, and (d) demographics. Semi-structured interviews as an ad hoc study were conducted with additional 22 frequent viewers of televised sports to ensure inclusion of all relevant factors affecting CTS. Multiple regression analyses revealed that self, team, and social motives were significant factors (p <.05) related to CTS. Two situational factors (weather and ticket availability) were found to have a significant (p <.01) impact on the CTS. Findings from the interviews further revealed that four conceptual themes affected CTS: individual-related factors, team-related factors, event-related factors, and media features.
//
//  globalConstants.h
//  HedzupPongGame
//
//  Created by <NAME> on 2014/10/03.
//  Copyright (c) 2014 tappnology. All rights reserved.
//

#ifndef HedzupPongGame_globalConstants_h
#define HedzupPongGame_globalConstants_h

#ifdef PRPDEBUG
#define PRPLog(format...) NSLog(format)
#else
#define PRPLog(format...)
#endif

#define CMD_STR NSStringFromSelector(_cmd)
#define CLS_STR NSStringFromClass([self class])

#define BALL_BITMASK   0x1 << 0 // 00000000000000000000000000000001
#define FLOOR_BITMASK  0x1 << 1 // 00000000000000000000000000000010
#define PLAYER_BITMASK 0x1 << 2
#define BRICKS_BITMASK 0x1 << 3

#define COLOR_HU_PINK [NSColor colorWithRed:249.0/255.0 green:85.0/255.0 blue:190.0/255.0 alpha:1]
#define COLOR_HU_LIGHT_BLUE [NSColor colorWithRed:88.0/255.0 green:193.0/255.0 blue:1 alpha:1]
#define COLOR_HU_BLUE [NSColor colorWithRed:76.0/255.0 green:68.0/255.0 blue:229.0/255.0 alpha:1]
#define COLOR_HU_WHITE [NSColor colorWithRed:1 green:1 blue:1 alpha:1]
#define COLOR_HU_YELLOW [NSColor colorWithRed:1 green:1 blue:131.0/255.0 alpha:1]

#define SCORE_BLUE 10
#define SCORE_PINK 6
#define SCORE_YELLOW 3
#define SCORE_WHITE 2

// names of our game objects
#define NAME_BALL_CATEGORY @"ballObject"
#define NAME_PLAYER_CATEGORY @"playerObject"
#define NAME_BRICK_CATEGORY @"brickObject"
#define NAME_HOME_PLAY_NOW @"playnowWithFlower"
#define NAME_EMAIL_FIELD @"EmailFieldPlain"
#define NAME_LEADER_BOARD_BANNER @"leaderBanner"
#define NAME_LEADER_BOARD_LABEL @"leaderBannerLabels"
#define NAME_UPDATE_SCREEN_TEXT @"homeGameTimeText"

#define PLAYER_MOVE_VELOCITY_OFFSET 100.0f
#define WORLD_BLOCK_COUNT 21
#define MUSIC_VOLUME 0.1f
#define SCORE_DEFAULT 3
#define TOTALGAMETIME 60

// TEXT
#define HOME_ENTER_EMAIL_TEXT @"ENTER EMAIL & PRESS ENTER"

#define ENGLISH TRUE
#if ENGLISH
#define VOICE_WELCOME @"british_intro.caf"
#define VOICE_PRESS_ENTER @"british_press_enter.caf"
#define VOICE_HIGHSCORE @"british_high_score.caf"
#define VOICE_GAMEOVER @"british_game_over.caf"
#define VOICE_FLOWER_POWER @"british_flower_power.caf"
#define VOICE_HEDZUP @"british_hedzup-1.caf"
#define SOUNDEFFECT_BALL_PLAYER @"nes-00-01.wav"
#define SOUNDEFFECT_BRICK_DIE @"nes-14-08.wav"
#define SOUNDEFFECT_USER_DIE @"nes-14-08.wav"
#else
#define VOICE_WELCOME @"chinese_intro.wav"
#define VOICE_PRESS_ENTER @"chinese_why_you_no_press_enter.caf"
#define VOICE_HIGHSCORE @"british_high_score.caf"
#endif

#define SONG_ONE @"Magical_8bit_tour_"
#define SONG_TWO @"Everything_Is_Awesome"

#endif
Quantification of the Elastic Moduli of Lumbar Erector Spinae and Multifidus Muscles Using Shear-Wave Ultrasound Elastography Although spinal surgeries with minimal incisions and a minimal amount of X-ray exposure (MIMA) mostly occur in a prone posture on a Wilson table, the prone postures effects on spinal muscles have not been investigated. Thus, this study used ultrasound shear-wave elastography (SWE) to compare the material properties of the erector spinae and multifidus muscles when subjects lay on the Wilson table used for spinal surgery and the flat table as a control condition. Thirteen male subjects participated in the study. Using ultrasound SWE, the shear elastic moduli (SEM) of the erector spinae and multifidus muscles were investigated. Significant increases were found in the SEM of erector spinae muscle 1, erector spinae muscle 2, and multifidus muscles on the Wilson table (W) compared to in the flat table (F; W:22.19 ± 7.15 kPa, F:10.40 ± 3.20 kPa, p < 0.001; W:12.10 ± 3.31 kPa, F: 7.17 ± 1.71 kPa, p < 0.001; W: 18.39 ± 4.80 kPa, F: 11.43 ± 2.81 kPa, p < 0.001, respectively). Our results indicate that muscle material properties measured by SWE can be changed due to table posture, which should be considered in biomechanical modeling by guiding surgical planning to develop minimal-incision surgical procedures.
Implementation of EU labour law directives by way of national collective agreements Collective agreements are among the panoply of national legal instruments deemed appropriate mechanisms for the implementation of EU directives in the fields of social and employment policy and industrial relations. The role of collective agreements in implementing EU directives is further prescribed by Article 153 TFEU, which states that a Member State may entrust management and labour, at their joint request, with the implementation of directives adopted pursuant to paragraph 2. Article 153 TFEU further states that, in that case, it shall ensure that, no later than the date on which a directive must be transposed, management and labour should have introduced the necessary measures by agreement, the Member State concerned being required to take any necessary measure enabling it to be in a position to guarantee the results imposed by that directive. The vast majority of scholars and practitioners agree that so long as the basic requirements of Community law are met, a Directive in principle I.
# -*- coding: utf-8 -*-
# Part of BrowseInfo. See LICENSE file for full copyright and licensing details.

from odoo import api, fields, models, _


class medical_rounding_procedure(models.Model):
    _name = 'medical.rounding_procedure'

    medical_rounding_procedure_id = fields.Many2one('medical.procedure', string="Code", required=True)
    notes = fields.Text(string="Notes")
    medical_patient_rounding_procedure_id = fields.Many2one('medical.patient.rounding', string="Vaccines")
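For context, records of this model would typically be created from server-side code; a hypothetical sketch (the `procedure` record and the notes text are made up for illustration):

# Hypothetical usage inside another model's method; `procedure` is assumed
# to be a medical.procedure record already in hand.
self.env['medical.rounding_procedure'].create({
    'medical_rounding_procedure_id': procedure.id,
    'notes': 'Procedure performed during morning rounds.',
})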
from flask import Flask, render_template, jsonify from flask_pymongo import PyMongo from datetime import datetime # Create an instance of Flask app = Flask(__name__) # TODO # Use PyMongo to establish Mongo connection mongo = PyMongo(app, uri="mongodb://localhost:27017/USWeather") # mongo = PyMongo(app, uri="mongodb://localhost:27017/USWeatherAgg") # Route to render index.html template using data from Mongo @app.route("/") def home(): # Return template and data return render_template("index.html") # Get all unique station ids # Using techniques from: # https://www.geeksforgeeks.org/python-mongodb-distinct/ @app.route("/api/v1.0/weatherdata/stations") def weather_stations(): db_data = mongo.db.collection.distinct('USAF') parsed = [x for x in db_data] parsed.sort() # print('parsed: ', parsed) return jsonify(parsed) # Get all unique station ids for specified period @app.route("/api/v1.0/weatherdata/period/stations/<start>/<end>") def weather_period_stations(start, end): # Expects that start and end strings are formatted as YYYY-MM-DD (eg. 2018-01-01) start_dt = datetime.strptime(start, '%Y-%m-%d') end_dt = datetime.strptime(end, '%Y-%m-%d') db_data = mongo.db.collection.distinct('USAF', {"$and": [ {'YEARMODA': {'$gt': start_dt}}, {'YEARMODA': {'$lt': end_dt}} ] }) parsed = [x for x in db_data] parsed.sort() # print('parsed: ', parsed) return jsonify(parsed) # print(mycollection.distinct("item.code", {"dept" : "B"})) @app.route("/api/v1.0/weatherdata/period/<start>/<end>/<station_id>") def weather_period(start, end, station_id): # Expects that start and end strings are formatted as YYYY-MM-DD (eg. 2018-01-01) start_dt = datetime.strptime(start, '%Y-%m-%d') end_dt = datetime.strptime(end, '%Y-%m-%d') db_data = mongo.db.collection.find( {"$and": [ {'YEARMODA': {'$gt': start_dt}}, {'YEARMODA': {'$lt': end_dt}}, {'USAF': {'$eq': station_id}} ] }, {'_id': False}) parsed = [x for x in db_data] print('parsed: ', parsed) return jsonify(parsed) @app.route('/<state>/<year>/data') def db_data(state, year): db_data = mongo.db.collection.find( {"$and": [ {'STATE': state}, {'YEAR': float(year)} ] }, {'_id': False}) print('this route was pinged') parsed = [x for x in db_data] print('parsed: ', parsed) return jsonify(parsed) if __name__ == '__main__': app.run(debug=True)
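Since all of these routes return JSON, a quick smoke test can be run against a local instance; a minimal sketch, assuming the app is started locally on Flask's default port and the USWeather collection is populated:

import requests

BASE = "http://localhost:5000"  # assumption: default development host/port

# all station ids present in the collection
stations = requests.get(f"{BASE}/api/v1.0/weatherdata/stations").json()

# stations reporting inside a period (dates are YYYY-MM-DD)
start, end = "2018-01-01", "2018-12-31"
active = requests.get(f"{BASE}/api/v1.0/weatherdata/period/stations/{start}/{end}").json()

# full records for the first such station in that period
if active:
    records = requests.get(
        f"{BASE}/api/v1.0/weatherdata/period/{start}/{end}/{active[0]}"
    ).json()
    print(len(records), "records for station", active[0])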
#!/usr/bin/env python """ Gcode cleaner to work around prusa slic3r annoyances for multi-filament single-tool printing on non-Prusa printers. This gist can be found here: * https://gist.github.com/ex-nerd/22d0a9796f4f5df7080f9ac5a07a381f Bugs this attempts to work around: * https://github.com/prusa3d/Slic3r/issues/557 * https://github.com/prusa3d/Slic3r/issues/559 * https://github.com/prusa3d/Slic3r/issues/560 """ import os import re import argparse def comment(str): return '; ' + str def write(str, outfile, delete_comments=False): if delete_comments: if not str.startswith(';'): outfile.write(re.sub(';.*', '', str)) else: outfile.write(str) def rewrite(infile, outfile, verbose=False, delete_comments=False, my_tool_default='T0'): WIPE = 1 UNLOAD = 2 LOAD = 3 toolchange = 0 priming = False temp_change = None my_tool = my_tool_default+'\r\n' for line in infile: if line.startswith('\r\n'): continue # Priming if line.startswith('; CP PRIMING'): if 'START' in line: priming = True elif 'STOP' in line: priming = False # Detect toolchange state elif line.startswith('; CP TOOLCHANGE'): if 'WIPE' in line: toolchange = WIPE elif 'UNLOAD' in line: toolchange = UNLOAD elif 'LOAD' in line: toolchange = LOAD else: toolchange = 0 # Process the various lines if line.startswith(';'): write(line, outfile, delete_comments) elif line.rstrip() in ('G4 S0', ): write(comment(line), outfile, delete_comments) elif line.startswith('M907 '): write(comment(line), outfile, delete_comments) elif priming: write(comment(line), outfile, delete_comments) elif toolchange in (LOAD, UNLOAD): if line.startswith('G1'): # Only remove integer-value E moves (part of the Prusa load/unload routine?) # The other E values appear to be part of the actual wipe tower. if re.search(r'E-?\d+\.0000', line): write(comment(line), outfile, delete_comments) else: write(line, outfile, delete_comments) elif line.startswith('T'): my_tool = line write(line, outfile, delete_comments) if temp_change: # Duplicate the last temperature change. # https://github.com/prusa3d/Slic3r/issues/559 write(temp_change, outfile, delete_comments) temp_change = None else: if line.startswith('M104 S'): temp_change = line write(line, outfile, delete_comments) # retract on T3 elif line.startswith('G10'): write('T3\n'+line, outfile, delete_comments) # unretract on T3 elif line.startswith('G11'): write(line+my_tool, outfile, delete_comments) else: write(line, outfile, delete_comments) def parse_args(): parser = argparse.ArgumentParser( description='Gcode cleaner to work around some multi-extruder bugs in slic3r Prusa edition.' 
) parser.set_defaults( verbose=False, overwrite=False, ) parser.add_argument( '--verbose', '-v', action='store_true', help="Enable additional debug output", ) parser.add_argument( '--keepcomments', action='store_false', help="keep comments", ) parser.add_argument( '--defaulttool', help="default tool like \"T0\" ", nargs='?', default='T0', ) parser.add_argument( '--targetfolder', help="target folder like \"z:\\\\printing\" ", nargs='?', default=os.getcwd(), ) parser.add_argument( '--overwrite', action='store_true', help="Overwrite the input file", ) parser.add_argument( 'filenames', type=argparse.FileType('r'), nargs='+', help="One or more paths to .gcode files to clean", ) return parser.parse_args() if __name__ == '__main__': args = parse_args() if args.verbose: print('\r\n\r\nStarting conversion: \r\n\r\n') print('- Default Tool: {}'.format(args.defaulttool)) print('- Target-Folder: {}'.format(args.targetfolder)) print('- Keep comments: {}'.format(args.keepcomments)) print('- overwrite source file: {}'.format(args.overwrite)) print('- source files:') for infile in args.filenames: print(' * {}'.format(infile.name)) print('\r\n') for infile in args.filenames: infilename = infile.name tmpfilename = os.path.join(args.targetfolder, '{}.tmp{}'.format(*os.path.splitext(infilename))) with open(tmpfilename, 'w') as tmpfile: rewrite(infile, tmpfile, args.verbose, args.keepcomments, args.defaulttool) infile.close() if args.overwrite: os.rename(infilename, "{}.bak".format(infilename)) outfilename = infilename else: outfilename = os.path.join(args.targetfolder, '{}.prusaclean{}'.format(*os.path.splitext(infilename))) os.rename(tmpfilename, outfilename) print("{} => {}".format(infilename, outfilename))
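Because the CLI lives under `__main__`, the `rewrite()` function can also be driven directly from Python; a minimal sketch, assuming the script is saved as `prusa_clean.py` (the module name is an assumption, the signature matches the definition above):

from prusa_clean import rewrite  # assumed module name for this script

with open("print.gcode") as src, open("print.clean.gcode", "w") as dst:
    rewrite(src, dst, verbose=False, delete_comments=True, my_tool_default="T0")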
Mimicking the Escherichia coli cytoplasmic environment activates long-lived and efficient cell-free protein synthesis Cell-free translation systems generally utilize high-energy phosphate compounds to regenerate the adenosine triphosphate (ATP) necessary to drive protein synthesis. This hampers the widespread use and practical implementation of this technology in a batch format due to expensive reagent costs; the accumulation of inhibitory byproducts, such as phosphate; and pH change. To address these problems, a cell-free protein synthesis system has been engineered that is capable of using pyruvate as an energy source to produce high yields of protein. The Cytomim system synthesizes chloramphenicol acetyltransferase (CAT) for up to 6 h in a batch reaction to yield 700 μg/mL of protein. By more closely replicating the physiological conditions of the cytoplasm of Escherichia coli, the Cytomim system provides a stable energy supply for protein expression without phosphate accumulation, pH change, exogenous enzyme addition, or the need for expensive high-energy phosphate compounds. © 2004 Wiley Periodicals, Inc.
Fort Pierce police arrested two men on drug charges after an early morning traffic stop Jan. 18, according to an arrest affidavit. It was about 4:50 a.m. when an officer saw a driver fail to fully stop at a red traffic light at Edwards Road and U.S. 1, and then head south, the affidavit said. He stopped the Honda Civic at Farmers Market Road and U.S. 1 and as he approached the car saw bags being thrown out a passenger side window, it said. He also reported a strong smell of marijuana coming from the car. According to the affidavit and jail records police seized more than 168 grams of marijuana in bags, 53.6 grams of alprazolam, more than 196 grams of MDMA, 51.7 grams of marijuana THC and drug paraphernalia from the car and the surrounding area. James Patrick Purdy Jr., 19, the driver, had more than $1277 in cash on him and passenger Manual Nicholas Montoya, 18, was carrying $1504. Approximately $340 was found in a purse on the back seat and on the floor of the car. Purdy told a deputy the cash came from the sale of a pair of sneakers. Purdy was charged with possession of marijuana with intent to sell, possession of more than 20 grams of marijuana, possession of a controlled substance without a prescription and possession of drug paraphernalia. He was also presented with a warned citation for failure to obey a traffic control device. He was released from the St. Lucie County Jail Jan. 18 after posting $30,750 in bond. Montoya, 18, who lives in the 200 block of Southeast Fallon Drive in Port St. Lucie, was charged with evidence tampering, possession of controlled drugs without a prescription and two counts of possession of more than 20 grams of marijuana. He remained held without bond at the St. Lucie County Jail because of a court order in a petit theft case.
The Beastie Boys have announced details of a three-date Get Out And Vote US tour. Kicking off in Richmond, VA on October 28, the tour is intended to inspire fans to turn out at polling stations and vote on the day of the US Presidential election on November 4. The rap trio told Billboard.com that they are endorsing Barack Obama for president. Santogold, Sheryl Crow and Norah Jones will support at the Richmond gig, with Ben Harper and Tenacious D on the bill for the two final dates in St Paul, MN and Milwaukee, WI. David Crosby and Graham Nash of Crosby, Stills And Nash will also appear at the latter date.
/** * WordRecord model for displaying in list. * * @author Alexander V. Ushakov */ public class ModelWordRecord { // Column "learn percent" private StringProperty learntPercent; // Column "word" private StringProperty word; // Translation index. Column "№" private StringProperty translationNumber = new SimpleStringProperty(); // Column "translation" private StringProperty translation; // Column "description" private StringProperty description; // Word index among the words with the same spelling private int number; // WordRecord private WordRecord wordRecord; // LearnCard for WordRecord private LearnCard learnCard; public ModelWordRecord(String fName, String lName, String description, String learnt) { this.word = new SimpleStringProperty(fName); this.translation = new SimpleStringProperty(lName); this.description = new SimpleStringProperty(description); this.learntPercent = new SimpleStringProperty(learnt); } public StringProperty translationNumberProperty() { return translationNumber; } public String getTranslationNumber() { return translationNumber.get(); } public void setTranslationNumber(String translationNumber) { this.translationNumber.set(translationNumber); } public void setTranslationNumber(int translationNumber) { this.translationNumber.set(String.valueOf(translationNumber)); } public StringProperty wordProperty() { return word; } public String getWord() { return word.get(); } public void setWord(String word) { this.word.set(word); } public StringProperty translationProperty() { return translation; } public String getTranslation() { return translationNumber + ". " + translation.get(); } public void setTranslation(String translation) { this.translation.set(translation); } public StringProperty descriptionProperty() { return description; } public String getDescription() { return description.get(); } public void setDescription(String description) { this.description.set(description); } public String getLearntPercent() { return learntPercent.get(); } public StringProperty learntPercentProperty() { return learntPercent; } public void setLearntPercent(String learntPercent) { this.learntPercent.set(learntPercent); } public void addWordRecord(WordRecord record) { this.wordRecord = record; } public WordRecord getWordRecord() { return wordRecord; } public LearnCard getLearnCard() { return learnCard; } public void setLearnCard(LearnCard learnCard) { learntPercent.set(learnCard.getLearnPercent()); this.learnCard = learnCard; } public int getNumber() { return number; } public void setNumber(int number) { this.number = number; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; ModelWordRecord record = (ModelWordRecord) o; if (wordRecord != null ? !wordRecord.equals(record.wordRecord) : record.wordRecord != null) return false; return true; } @Override public int hashCode() { return wordRecord != null ? wordRecord.hashCode() : 0; } }
Retrospective Deconstruction of Statistical Maps: A Choropleth Case Study The process of creating printed statistical maps in the predigital era was expensive and time consuming. These and other interacting factors constrained the number of design alternatives, such as color choices, that a cartographer might reasonably have been able to consider. In this article, we develop an approach to map deconstruction that enables researchers to investigate the statistical choices made by cartographers by placing each printed map into the universe of all possible choices available to them. We place a particular focus on the specification of choropleth map class intervals for maps produced in the early twentieth century. Three published choropleth maps are used as case studies to illustrate the approach, using four evaluation criteria to evaluate the accuracy of the data classifications. The results indicate that the class interval selection choices made for the examined maps are inferior when compared with available alternatives and that, in one case, classification errors are not only evident, they are abundant.
package com.qa.ims.persistence.domain;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.After;
import org.junit.BeforeClass;
import org.junit.Test;

import nl.jqno.equalsverifier.EqualsVerifier;

public class ItemTest {

	private Item item = new Item(null, null);

	@BeforeClass
	public static void setup() {
	}

	@Test
	public void testEquals() {
		EqualsVerifier.simple().forClass(Item.class).verify();
	}

	@Test
	public void testConstructor1() {
		final Long value = (long) 1;
		final Double value1 = (double) 2;
		final String value2 = "Xbox";
		Item i = new Item(value, value1, value2);
		assertEquals(value, i.getItemID());
		assertEquals(value1, i.getPrice());
		assertEquals(value2, i.getItemName());
	}

	@Test
	public void testItemName() {
		try {
			new Item(null, "a");
			new Item(null, "b");
			new Item(null, "c");
		} catch (Exception e) {
			fail(e.getMessage());
		}
	}

	@Test
	public void testItems() {
		Item i = new Item(5L, 50.0, "Xbox");
		assertEquals(50, i.getPrice(), 1);
		assertEquals(5, i.getItemID(), 1);
	}

	@After
	public void finalise() {
		System.out.println("After test");
	}
}
#!/usr/bin/env python #ver='1.0' ; date='July 18, 2019' # + [PT] translating Vinai's original afniB0() function from here: # https://github.com/nih-fmrif/bids-b0-tools/blob/master/distortionFix.py # #ver='1.1' ; date='July 22, 2019' # + [PT] updated I/O, help, defaults, dictionaries # # #ver='1.2' ; date='July 22, 2019' # + [PT] redone shell exec # + [PT] use diff opts/params for distortion direction and scale # #ver='1.3' ; date='July 24, 2019' # + [PT] add in 3dinfo info to use # + [PT] expand 3dROIstats options # + [PT] write out *_cmds.tcsh file, recapitulate utilized param info at top # #ver='1.4' ; date='July 25, 2019' # + [PT] change where scaling is applied-- now separate from 'polarity' issue # + [PT] updated help (included examples); put beta warning messages! # #ver='1.41' ; date='July 26, 2019' # + [PT] update help; include JSON description # #ver='1.5' ; date='July 31, 2019' # + [PT] rename several variables and opts, to undo my misunderstanding... # + [PT] EPI back to being required # #ver='1.6' ; date='Aug 2, 2019' # + [PT] added in obliquity checks: should be able to deal with relative # obl diffs between EPI and freq dset (if they exist) # + [PT] final WARP dset will now be in EPI grid # + [PT] *still need to check on scaling&recentering of Siemens data* # #ver='1.6' ; date='Aug 8, 2019' # + [PT] update/correct help about Siemens scaling, post-discussion-with-Vinai # #ver='1.7' ; date='Aug 12, 2019' # + [PT] *really* correct help @ Siemens scaling # + [PT] change internal scaling: *really* demand units of ang freq (rad/s) # + [PT] py23 compatibility of help file-- single dictionary usage! # #ver='2.0' ; date='Aug 15, 2019' # + [PT] new parameter scaling of freq dset from Vinai-- better params # + [PT] apply obliquity info to output # + [PT] fixed ocmds_fname, if opref contains path # + [PT] output a useful params file # + [PT] another py23 compatibility fix # #ver='2.1' ; date='Aug 16, 2019' # + [PT] change default number of erodes: 3 -> 1. Vinai concurs! # #ver='2.2' ; date='Aug 23, 2019' # + [PT] fix examples (use correct/newer opt names) # + [PT] fix 'eff echo sp' -> 'bwpp' calculation ('matr len', not 'vox dim') # #ver='2.21' ; date='Aug 27, 2019' # + [PT] update help file and descriptions (param text, for example) # + [PT] add in more fields to param text output # #ver='2.22' ; date='Aug 29, 2019' # + [PT] update help file # #ver='2.3' ; date='Aug 30, 2019' # + [PT] add this set_blur_sigma() method, which had been # forgotten... Thanks, L. Dowdle! # #ver='2.31' ; date='Sept 9, 2019' # + [PT] Fixed help file descripts-- thanks again, L. Dowdle. # #ver='2.32' ; date='Sept 10, 2019' # + [PT] "hview"ify---thanks, RCR! # #ver='2.4' ; date='Sept 10, 2019' # + [PT] now output mask from mask_B0() into the main odir, if that # func gets used; useful for scripting+qc # #ver='2.5' ; date='Sept 12, 2019' # + [PT] QC images output: # + images use magn vol as ulay, if entered; otherwise, ulay is EPIs # #ver='2.6' ; date='Sept 25, 2019' # + [PT] major change: update/reverse polarity # + that is, the direction of (un)warping will be opposite for a given # PE direction # + [PT] add in '-in_anat ..' opt, for maybe nicer QC (load in anat to be ulay) # + [PT] add in '-qc_box_focus_ulay' opt, for maybe nicer QC (focus on ulay) # #ver='2.61' ; date='Oct 2, 2019' # + [PT] 3dmask_tool now to do dilate/erosion # #ver='2.62' ; date='Oct 2, 2019' # + [PT] Move to use '3dWarp ...' rather than 'cat_matvec ...' 
for # changing between EPI-freq dsets, which might have relative # obliquity difference; should be minisculy better for rounding # error considerations # #ver='2.63' ; date='June 3, 2020' # [PT] # + bug fix: ARG_missing_arg() called a func that didn't exist here! # -> that func is now in afni_base, so use that. # ver='2.64' ; date='Sep 23, 2021' # [PT] forgot to process option: -epi_pe_bwpp .. # + now added in that ability... # ########################################################################## import sys, os from afnipy import afni_base as ab from afnipy import lib_b0_corr as lb0 # ============================================================================= if __name__ == "__main__" : iopts = lb0.parse_args_b0_corr(sys.argv) print("\n++ ================== Start B0 correction ================== \n" " Ver : {DEF_ver}\n" " Date : {DEF_date}\n" "".format( **lb0.ddefs )) # Make a mask from a magn dset, if need be did_copy_inps = iopts.copy_inps_to_wdir() # Make a mask from a magn dset, if need be if not(iopts.dset_mask) : did_mask_B0 = iopts.mask_B0() # Do the main work did_B0_corr = iopts.B0_corr() iopts.write_params() iopts.write_history() self_vars = vars( iopts ) print("\n------------") print("++ epi_b0_correct.py finishes.") print("++ Text of commands : {ocmds_fname}" "".format( **self_vars )) print("++ Text of params : {opars_fname}\n" "".format( **self_vars )) if iopts.do_qc_image : print("++ QC images : {outdir}/{outdir_qc}/*.png\n" "".format( **self_vars )) print("++ MASK dset output : {outdir}/{odset_mask}{dext}" "".format( **self_vars )) print("++ WARP dset output : {outdir}/{odset_warp}{dext}" "".format( **self_vars )) print("++ EPI dset output : {outdir}/{odset_epi}{dext}\n" "".format( **self_vars )) sys.exit(0)
Self-supervised Learning for Label-Efficient Sleep Stage Classification: A Comprehensive Evaluation The past few years have witnessed a remarkable advance in deep learning for EEG-based sleep stage classification (SSC). However, the success of these models is attributed to possessing a massive amount of labeled data for training, limiting their applicability in real-world scenarios. In such scenarios, sleep labs can generate a massive amount of data, but labeling these data can be expensive and time-consuming. Recently, the self-supervised learning (SSL) paradigm has shone as one of the most successful techniques to overcome the scarcity of labeled data. In this paper, we evaluate the efficacy of SSL to boost the performance of existing SSC models in the few-labels regime. We conduct a thorough study on three SSC datasets, and we find that fine-tuning the pretrained SSC models with only 5% of labeled data can achieve competitive performance with supervised training on full labels. Moreover, self-supervised pretraining helps SSC models to be more robust to data imbalance and domain shift problems. The code is publicly available at https://github.com/emadeldeen24/eval_ssl_ssc.

Author affiliations: Emadeldeen Eldele and Chee-Keong Kwoh are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore (E-mail: {emad0002, asckkwoh}@ntu.edu.sg). Mohamed Ragab and Zhenghua Chen are with the Institute for Infocomm Research (I2R) and the Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore (E-mail: {mohamedr002, chen0832}@e.ntu.edu.sg). Min Wu is with the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore (E-mail: wumin@i2r.astar.edu.sg). Xiaoli Li is with the Institute for Infocomm Research (I2R), Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore, and also with the School of Computer Science and Engineering at Nanyang Technological University, Singapore (E-mail: xlli@i2r.a-star.edu.sg). The first author is supported by an A*STAR SINGA Scholarship. Min Wu is the corresponding author.

I. INTRODUCTION Sleep stage classification (SSC) plays a key role in diagnosing many common diseases such as insomnia and sleep apnea. To assess the sleep quality or diagnose sleep disorders, overnight polysomnogram (PSG) readings are split into 30-second segments, i.e., epochs, and assigned a sleep stage. This process is performed manually by specialists, who follow a set of rules, e.g., the American Academy of Sleep Medicine (AASM) rules, to identify the patterns and classify the PSG epochs into sleep stages. This manual process is tedious, exhaustive, and time-consuming. To overcome this issue, numerous deep learning-based SSC models were developed to automate the data labeling process. These models are trained on a massive labeled dataset and applied to the dataset of interest. For example, Jadhav et al. explored different deep learning models to exploit raw electroencephalogram (EEG) signals, as well as their time-frequency spectra. Also, Phyo et al. attempted to improve the performance of the deep learning model on the confusing transitioning epochs between stages. In addition, Phan et al. proposed a transformer backbone that provides interpretable and uncertainty-quantified predictions. However, the success of these approaches hinges on a massive amount of labeled data to train the deep learning models, which might not be feasible.
In practice, sleep labs can collect a vast amount of overnight recordings, but the difficulties in labeling the data limit deploying these data-hungry models. Thus, unfortunately, the SSC works developed in the past few years now face a bottleneck: the size, quality, and availability of labeled data. One alternative solution to pass through this bottleneck is the self-supervised learning (SSL) paradigm, which has witnessed increased interest recently due to its ability to learn useful representations from unlabeled data. In SSL, the model is pretrained on a newly defined task that does not require any labeled data, where ground-truth pseudo labels can be generated for free. Such tasks are designed to teach the model to recognize general characteristics of the data without being directed by labels. Currently, SSL algorithms can produce state-of-the-art performance on standard computer vision benchmarks. Consequently, the SSL paradigm has gained more interest for the sleep stage classification problem. Most prior works aim to propose novel SSL algorithms and show how they could improve the performance of sleep stage classification. Instead, in this work, our aim is to examine the efficacy of the SSL paradigm to re-motivate deploying existing SSC works in real-world scenarios, where only few labeled samples are available. Therefore, we revisit a prominent subset of SSC models and perform an empirical study to evaluate their performance under the few-labeled data settings. Moreover, we explore the efficacy of different SSL algorithms on their performance and robustness. We also study the effect of sleep data characteristics, e.g., data imbalance and temporal relations, on the learned self-supervised representations. Finally, we assess the transferability of self-supervised against supervised representations and their robustness to domain shift. The overall framework is illustrated in Fig. 1. We perform an extensive set of experiments on three sleep staging datasets to systematically analyze the SSC models under the few-labeled data settings. The experimental results of this study aim to provide a solid and realistic real-world assessment of the existing sleep stage classification models.

Fig. 1. The architecture of our evaluation framework. We experiment with three sleep stage classification models, i.e., DeepSleepNet, AttnSleep, and 1D-CNN. We also include four self-supervised learning algorithms, i.e., ClsTran, SimCLR, CPC, and TS-TCC. The different experiments are performed on the Sleep-EDF, SHHS, and ISRUC datasets.

II. RELATED WORK A. Sleep Stage Classification A wide span of EEG-based sleep stage classification methods have been introduced in recent years. These methods proposed different architecture designs. For example, some methods adopted multiple parallel convolutional neural network (CNN) branches to extract better features from EEG signals. Also, some methods included residual CNN layers, while others used graph-based CNN networks. On the other hand, Phan et al. proposed Long Short-Term Memory (LSTM) networks to extract features from EEG spectrograms. To handle the temporal dependencies among EEG features, these methods had different approaches. For instance, some works adopted recurrent neural networks (RNNs), e.g., bi-directional LSTM networks. Other works adopted multi-head self-attention as a faster and more efficient way to capture the temporal dependencies in timesteps.
Despite the proven performance of these architectures, they require a huge labeled training dataset to feed the deep learning models. None of these works studied the performance of their models in the few labeled data regime, which is our scope in this work. B. Self-supervised Learning Approaches Self-supervised learning received more attention recently because of its ability to learn useful representations from unlabeled data. The first SSL auxiliary tasks showed a big improvement in the performance of the downstream task. For example, Noroozi et al. proposed training the model to solve a jigsaw puzzle on a patched image. In addition, Gidaris et al. proposed rotating the input images, then trained the model to predict the rotation angle. The success of these auxiliary tasks motivated adapting contrastive learning algorithms, which showed to be more effective due to their ability to learn invariant features. The key idea behind contrastive learning is to define positive and negative pairs for each sample, then push the sample closer to the positive pairs, and pull it away from the negative pairs. In general, contrastive-based approaches rely on data augmentations to generate positive and negative pairs. For example, SimCLR considered the augmented views of the sample as positive pairs, while all the other samples within the same mini-batch are considered as negative pairs. Also, MoCo increased the number of negative pairs by keeping samples from other mini-batches in a memory bank. On the other hand, some recent algorithms neglected the negative pairs and proposed using only positive pairs such as BYOL and SimSiam. C. Self-supervised learning for Sleep Staging The success of SSL in computer vision applications motivated their adoption for sleep stage classification. For example, Mohsenvand et al. and Jiang et al. proposed SimCLR-like methodologies and applied EEG-related augmentations for sleep stage classification. Also, Banville et al. applied three pretext tasks, i.e., relative positioning, temporal shuffling, and contrastive predictive coding (CPC) to explore the underlying structure of the unlabeled sleep EEG data. The CPC algorithm predicts the future timesteps in the time-series signal, which motivated other works to build on it. For example, SleepDPC solved two problems, i.e., predicting future representations of epochs, and distinguishing epochs from other different epochs. Also, TS-TCC proposed temporal and contextual contrasting approaches to learn instance-wise representations about the sleep EEG data. In addition, SSLAPP developed a contrastive learning approach with attention-based augmentations in the embedding space to add more positive pairs. Last, CoSleep and SleepECL are yet another two contrastive methods that exploit information, e.g., inter-epoch dependency and frequency domain views, from EEG data to obtain more positive pairs for contrastive learning. III. EVALUATION FRAMEWORK A. Preliminaries In this section, we describe the SSL-related terminologies, i.e., pretext tasks, contrastive learning, and downstream tasks. 1) Problem Formulation: We assume that the input is single-channel EEG data in R d, and each sample has one label from one of C classes. The supervised downstream task has an access to the inputs and the corresponding labels, while the self-supervised learning algorithms have access only to the inputs. The SSC networks consist of three main parts. 
The first is the feature extractor f_θ : R^d → R^m1, which maps the input data into the embedded space and is parameterized by neural network parameters θ. The second is the temporal encoder (TE) f_ψ : R^m1 → R^m, another intermediate network that improves the temporal representations and may change the dimension of the embedded features. Finally, the classifier f_ω : R^m → R^C produces the predictions. The SSL algorithms learn θ from unlabeled data, while fine-tuning learns θ and ω while also updating ψ. 2) Pretext tasks: Pretext tasks refer to the pre-designed tasks used to teach the model generalized representations from the unlabeled data. Here, we describe two main types of pretext tasks, i.e., auxiliary and contrastive tasks. a) Auxiliary tasks: This category includes defining a new task along with free-to-generate pseudo labels. These tasks can be defined as classification, regression, or any others. In the context of time-series applications, a classification auxiliary task was previously defined by generating several views of the signals using augmentations such as adding noise, rotation, and scaling. Each view was assigned a label, and the model was pretrained to classify these transformations. This approach showed success in learning underlying representations from unlabeled data. However, it is usually designed with heuristics that might limit the generality of the learned representations. b) Contrastive learning: In contrastive learning, representations are learned by comparing the similarity between samples. Specifically, we define positive and negative pairs for each sample. Next, the feature extractor is trained to achieve the contrastive objective, i.e., push the features of the sample towards the positive pairs, and pull them away from the negative pairs. These pairs are usually generated via data augmentations. Notably, some studies relied on strong successive augmentations and found them to be a key factor in the success of their contrastive techniques. Formally, given a dataset with N unlabeled samples, we generate two views for each sample x, i.e., {x_i, x_j}, using data augmentations. Therefore, in a multiviewed batch with N samples for each view, we have a total of 2N samples. Next, the feature extractor transforms them into the embedding space, and a projection head h(·) is used to obtain low-dimensional embeddings, i.e., z_i = h(f_θ(x_i)) and z_j = h(f_θ(x_j)). Assume an anchor sample indexed by i ∈ I ≡ {1, ..., 2N}, let j(i) index the other augmented view of the same sample, and let A(k) ≡ I \ {k}. The objective of contrastive learning is to encourage the similarity between positive pairs and separate the negative pairs apart using the NT-Xent loss, defined as follows:

$$\mathcal{L}_{\text{NT-Xent}} = -\sum_{i \in I} \log \frac{\exp(z_i \cdot z_{j(i)} / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)} \quad (1)$$

where · denotes the inner (dot) product, and τ is a temperature parameter. 3) Downstream tasks: Downstream tasks are the main tasks of interest that lack a sufficient amount of labeled data for training the deep learning models. In this paper, the downstream task is sleep stage classification, i.e., classifying the PSG epochs into one of five classes, i.e., W, N1, N2, N3, and REM. However, in general, the downstream task can be different and defined by various applications. Notably, different pretext tasks can have a different impact on the same downstream task. Therefore, it is important to design a pretext task relevant to the problem of interest, to learn better representations. Despite the numerous methods proposed in self-supervised learning, identifying the proper pretext task is still an open research question.
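To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the NT-Xent loss for a batch of N samples and their augmented views; it is an illustrative re-implementation under our reading of the formulation, not the paper's released code, and the variable names are ours:

import torch
import torch.nn.functional as F

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent loss of Eq. (1) for N samples and their augmented views.

    z_i, z_j: (N, m) projection-head outputs of the two views of the same batch.
    """
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)  # 2N embeddings
    sim = z @ z.t() / temperature                         # pairwise dot products
    sim.fill_diagonal_(float("-inf"))                     # A(k) = I \ {k}: drop self-similarity
    # the positive of sample k is its other view, offset by N in the batch
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

Here cross_entropy reproduces the -log(softmax) form of Eq. (1), averaged over the 2N anchors.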
B. Sleep Stage Classification Models We perform our experiments on three sleep stage classification models, i.e., DeepSleepNet, AttnSleep, and 1D-CNN. The architectures of these models are shown in Fig. 1. Each model has its specifically-designed feature extractor, temporal encoder, and methodology to address the sleep data imbalance issue. Next, we discuss each SSC model in more detail. 1) DeepSleepNet: DeepSleepNet consists of two parallel convolutional network branches with dropout to extract features. These features are passed to the temporal encoder that contains a Bidirectional Long Short-Term Memory (BiLSTM) network with a residual connection. To overcome the data imbalance issue in sleep data and achieve good performance in minor classes, DeepSleepNet is trained in two separate phases. In the first, the model is trained with oversampled balanced data, while in the second, the pretrained model is fine-tuned with the original imbalanced data. 2) AttnSleep: AttnSleep extracts features from EEG data with a multi-resolution CNN network followed by an adaptive feature recalibration module. The extracted features are then sent to a causal self-attention network to characterize the temporal relations. AttnSleep deploys a class-aware loss function to handle the class imbalance issue. This loss function assigns different weights to the data based on two factors, i.e., the distinctness of the features of each class, and the number of samples of that class in the dataset. 3) 1D-CNN: The 1D-CNN network consists of three convolutional blocks. Each block consists of a 1D-convolutional layer followed by a BatchNorm layer, a non-linear ReLU activation function, and a MaxPooling layer. This architecture does not include any special component to find the temporal relations nor handle the data imbalance issue in sleep EEG data. In our experiments, we pretrain only the feature extractor of the three SSC models. After that, we fine-tune the whole model with the few labeled data in an end-to-end manner. C. Self-supervised Learning Algorithms In this section, we describe the adopted SSL algorithms (see Fig. 1) in more detail. We selected four algorithms that can be applied to any feature extractor design. 1) ClsTran: Classifying Transformations is an auxiliary classification task, in which we first apply some transformations to the input signal. Then, we associate an automatically-generated pseudo label with each transformation. Last, we train the model to classify the transformed signals based on these pseudo labels. Formally, assume a tuple of an input signal and its corresponding pseudo label (x_i, ỹ_i), where x_i is the i-th transformed signal, ỹ_i is the generated pseudo label that corresponds to the i-th transformation, and ỹ_i ∈ [0, T), with T being the total number of transformations. Next, the transformed signal passes through the feature extractor, the temporal encoder, and the classifier networks to generate the output probability p_t. Last, the model is trained to minimize a standard cross-entropy loss based on these pseudo labels:

$$\mathcal{L}_{cls} = -\sum_{t=0}^{T-1} \mathbb{1}_{[\tilde{y}_i = t]} \log p_t \quad (2)$$

where 𝟙 is the indicator function, which is set to 1 when the condition is met, and to 0 otherwise. In this work, we adopt four augmentations, i.e., negation, permutation, adding noise, and time shifting, which were adopted by previous works and showed good downstream performance. More details about data augmentations are provided in Section S.II in the supplementary materials; a minimal sketch of this pretext task is given below.
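As referenced above, the following is a minimal sketch of one ClsTran pretraining step; the four transformation functions are simplified stand-ins for those named in the text (their exact parameters live in the paper's Section S.II), and `feature_extractor`, `classifier`, and `optimizer` are placeholders:

import torch
import torch.nn.functional as F

# Simplified stand-ins for the four transformations named above; the exact
# implementations may differ in their parameters.
def negate(x):     return -x
def add_noise(x):  return x + 0.05 * torch.randn_like(x)
def time_shift(x): return torch.roll(x, shifts=x.size(-1) // 4, dims=-1)
def permute(x, n=8):
    # split the signal into n chunks along time and shuffle their order
    chunks = torch.chunk(x, n, dim=-1)
    order = torch.randperm(len(chunks))
    return torch.cat([chunks[i] for i in order], dim=-1)

TRANSFORMS = [negate, add_noise, time_shift, permute]

def clstran_step(feature_extractor, classifier, x, optimizer):
    """One ClsTran step: predict which transformation was applied (Eq. 2)."""
    pseudo_y = torch.randint(len(TRANSFORMS), (x.size(0),))   # labels in [0, T)
    x_t = torch.stack([TRANSFORMS[int(t)](xi) for xi, t in zip(x, pseudo_y)])
    loss = F.cross_entropy(classifier(feature_extractor(x_t)), pseudo_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()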
2) SimCLR: Simple framework for Contrastive Learning of Visual Representation is a contrastive SSL algorithm that relies on data augmentations to learn invariant representations. It consists of four major components. The first is data augmentations, which are utilized to generate two correlated views of the same sample. The second is the feature extractor network that transforms the augmented views into latent space. The third is the projection head, which maps the features into a low-dimensional space. The fourth is the NT-Xent loss (Eq. 1), which aims to maximize the similarity between an anchor sample with its augmented views while minimizing its similarity with the augmented views of the other samples within the mini-batch. 3) CPC: Contrastive Predictive Coding is a predictive contrastive SSL approach that learns representations of timeseries signals by predicting the future timesteps in the embedding space. To do so, the feature extractor first generates the latent feature embeddings for the input signals. Next, an autoregressive model receives a part of the embeddings, i.e., the past timesteps, then generates a context vector and uses it to predict the other part, i.e., the future timesteps. CPC deploys a contrastive loss such that the embedding should be close to positive future embeddings and distant from negative future embeddings. CPC showed improved downstream performance in various time-series and speech recognition-related tasks, without the need for any data augmentation. 4) TS-TCC: Time-Series representation learning via Temporal and Contextual Contrasting is yet another contrastive SSL approach for time-series data. TS-TCC relies on strong and weak augmentations to generate two views of an anchor sample. Next, the feature embeddings of these views are generated. Next, similar to CPC, a part of the embeddings of each view is sent to an autoregressive model to generate a context vector. Then, the context vector generated for one augmented view is used to predict the future timesteps of the other augmented view with a contrastive loss. Therefore, it pushes the embeddings of one augmented view to the positive future embeddings of the other augmented view, and vice versa. In addition, it leverages the NT-Xent loss (Eq. 1) to maximize the agreement between the context vectors of the same sample, while maximizing it within the contexts of other samples. A. Datasets We evaluate the SSL algorithms on three sleep stage classification datasets, namely Sleep-EDF, SHHS, and ISRUC. These datasets have different characteristics in terms of sampling rates, EEG channels, and the health conditions of subjects. We use a single EEG channel from each dataset in our experiments following previous works,. 1) Sleep-EDF: Sleep-EDF dataset is a public dataset that contains the polysomnography (PSG) readings of 20 healthy subjects (10 males and 10 females). In our experiments, we adopted the recordings included in the Sleep Cassette (SC) study and used the EEG data from Fpz-Cz channel with a sampling rate of 100 Hz. 2) SHHS: Sleep Heart Health Study, is a multicenter cohort study of the cardiovascular and other consequences of sleep-disordered breathing. The dataset is created to record the PSG readings of patients aged 40 years and older in two visits. In our experiments, we randomly chose 20 subjects from the patients during the first visit and chose the EEG channel C4-A1 with a sampling rate of 125 Hz. 
3) ISRUC: ISRUC dataset contains PSG recordings for human adults with different health conditions. We selected the 10 healthy subjects included in subgroup III and extracted the EEG channel C4-A1 with a sampling rate of 200 Hz. More details about the datasets are provided in Table II. B. Implementation Details 1) Dataset preprocessing: For all the datasets, we apply the two preprocessing steps. First, we only considered the five sleep stages according to the AASM standard. Second, we exclude the wake periods that exceed 30 minutes before and after the sleep periods following,. We also split the subjects into five folds, and all the upcoming experiments are performed with 5-fold subject-wise cross-validation. 2) Training scheme: The pretraining as well as the finetuning were performed for 40 epochs with a batch size of 128. The neural network weights were optimized using the Adam optimizer, with a learning rate of 1e-3 and a weight decay of 1e-4. We reported the results in terms of accuracy and macro F1-score. Our codes are built using PyTorch 1.7 and they are publicly available at https://github.com/emadeldeen24/eval ssl ssc. A. Which SSL algorithm performs best? In Tables I, we compare the supervised performance of the three SSC models (Section III-B) against the fine-tuned models with the four SSL algorithms (Section III-C) using 1% of labeled data. We notice that self-supervised pretraining with contrastive methods ensures better performance against supervised training in the few-labeled data regime. Specifically, we find that SimCLR, CPC, and TS-TCC demonstrate remarkable performance on the three datasets. This indicates that learning invariant representations by contrastive learning can achieve good generalization on sleep datasets. Counterpart, pretraining with the auxiliary task learns poorer representations, leading to a downgraded performance except for few cases. This could be regarded to the high complexity of sleep EEG data, which does not help the model identify the difference between several augmented views. We also conducted several experiments to assess the capability of SSL algorithms in learning temporal information, which are provided in the supplementary materials (see Section S.III-C). We find that pretrained models with CPC and TS-TCC can be robust to the existence and the type of temporal encoder while fine-tuning. The reason is that these methods rely on predicting the future timesteps in the latent space, which allows them to learn about temporal features in the EEG data. B. Performance Under Different Few-labels Settings We study the performance of pretrained models when finetuned with different amounts of labeled data, i.e., 1%, 5%, 10%, and 100%. Fig. 2 shows the result of these experiments on the Sleep-EDF dataset (results on SHHS and ISRUC datasets are provided in Section S.III-B in the supplementary materials). We find that for the three SSC models, fine-tuning with 5 or 10% of labels can achieve very close performance to the supervised training with 100% of labels. This demonstrates that self-supervised pretrained models yield richer embeddings than their supervised counterparts, which enhances the downstream task performance with such few labels. Specifically, fine-tuning CPC-pretrained DeepSleepNet with 5% of labeled data could achieve an F1-score of 72.5%, which is only 2.1% less than supervised training with full labels. 
A. Which SSL algorithm performs best?

In Table I, we compare the supervised performance of the three SSC models (Section III-B) against the models fine-tuned after pretraining with the four SSL algorithms (Section III-C), using 1% of the labeled data. We notice that self-supervised pretraining with contrastive methods ensures better performance than supervised training in the few-labeled data regime. Specifically, we find that SimCLR, CPC, and TS-TCC demonstrate remarkable performance on all three datasets, indicating that learning invariant representations via contrastive learning achieves good generalization on sleep datasets. In contrast, pretraining with the auxiliary task learns poorer representations, leading to degraded performance except in a few cases. This may be attributed to the high complexity of sleep EEG data, which makes it difficult for the model to discriminate between the different augmented views. We also conducted several experiments to assess the capability of SSL algorithms to learn temporal information, provided in the supplementary material (see Section S.III-C). We find that models pretrained with CPC and TS-TCC are robust to the existence and the type of the temporal encoder during fine-tuning. The reason is that these methods rely on predicting future timesteps in the latent space, which allows them to learn temporal features of the EEG data.

B. Performance Under Different Few-labels Settings

We study the performance of the pretrained models when fine-tuned with different amounts of labeled data, i.e., 1%, 5%, 10%, and 100%. Fig. 2 shows the results of these experiments on the Sleep-EDF dataset (results on the SHHS and ISRUC datasets are provided in Section S.III-B of the supplementary material). We find that, for all three SSC models, fine-tuning with 5% or 10% of the labels achieves performance very close to supervised training with 100% of the labels. This demonstrates that self-supervised pretrained models yield richer embeddings than their supervised counterparts, which enhances downstream performance with such few labels. Specifically, fine-tuning the CPC-pretrained DeepSleepNet with 5% of the labeled data achieves an F1-score of 72.5%, only 2.1% below supervised training with full labels. Likewise, fine-tuning the TS-TCC-pretrained AttnSleep and 1D-CNN with 5% of the labeled data leaves gaps of 5.1% and 1.5%, respectively, from fully supervised training, and fine-tuning with 10% of the labels narrows the gaps further, to 1.4%, 3.4%, and 1.3% for DeepSleepNet, AttnSleep, and 1D-CNN, respectively. These results indicate the applicability of existing SSC works to real-world scenarios, provided they receive self-supervised pretraining. We also find that the gain from self-supervised pretraining tends to diminish when the model is fine-tuned with the fully labeled data; this observation holds for all three SSC models on the three datasets. We therefore conclude that self-supervised pretraining provides better regularization, reducing the overfitting problem, but it does not improve the optimization enough to reduce underfitting, in line with previous findings.

C. Comparison with Baselines

We compare the adopted pretrained SSC models against state-of-the-art self-supervised methods proposed specifically for the sleep stage classification problem, using the reported results of SleepDPC, CoSleep, and SSLAPP on the Sleep-EDF dataset. We compare these methods against the existing SSC models equipped with the best-performing SSL method. Although the experimental settings can favor the sleep-specific SSL methods, e.g., through the use of multiple channels and different data splits, we find that the pretrained SSC models surpass them under the few-labels settings. For example, SleepDPC and CoSleep use two EEG channels in training, yet they achieve poorer performance. SSLAPP also performs worse despite splitting the data 80/20, a protocol that may not provide conclusions as dependable as k-fold subject-wise cross-validation. On the other hand, the SSC models pretrained with a single EEG channel outperform these methods in terms of both accuracy and macro F1-score. It is therefore worthwhile to revive existing SSC models with self-supervised learning to obtain competitive results in real-world scenarios.

D. Robustness of SSL against Sleep Data Imbalance

The nature of sleep implies that some stages, e.g., N1, occur less frequently than others, such as N2. Consequently, sleep stage datasets are usually imbalanced (see Table II), and it is important to study whether this imbalance affects the quality of the representations learned by the SSL algorithms. To do so, we compute the performance gap between models pretrained on balanced and imbalanced datasets: we pretrain the SSL algorithms with the original imbalanced data as well as with oversampled, class-balanced data.
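A simple way to build the oversampled, class-balanced pretraining set is to replicate minority-class epochs until every stage matches the most frequent one; this is one plausible realization, since the paper does not detail its exact resampling routine.

import numpy as np

def oversample_balanced(x, y, seed=0):
    # replicate samples of each class until all classes reach the majority count
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return x[idx], y[idx]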
The experimental results are shown in Fig. 3. We observe that the gap between balanced and imbalanced pretraining is minor for the contrastive SSL algorithms, never exceeding 0.2%, 0.6%, and 0.5% for DeepSleepNet, AttnSleep, and 1D-CNN, respectively. These observations show that contrastive SSL algorithms are robust to dataset imbalance, which is consistent with previous studies. The main reason is their ability to learn more general and richer features from the majority classes than traditional supervised learning. Specifically, the learned self-supervised representations are not supervised or driven by any labels, i.e., they are not label-directed, and they can capture other intrinsic properties of the EEG signal. These features can improve the classification performance on the minority classes and can be more effective for the downstream task. In contrast, the ClsTran algorithm is driven by a cross-entropy loss that depends on the assigned pseudo-labels; it can therefore be affected by the data imbalance, and indeed shows different performance with oversampled data. In the supplementary material, we also analyze the ability of SSL to improve the performance of the minority classes (see Section S.III-A).

E. Robustness to Domain-shift

In some scenarios, we may only be able to afford labeling the samples of one subject while aiming to transfer the knowledge from this subject to other unlabeled, out-of-distribution subjects. Such a distribution shift can be caused by a different data collection methodology or by differences in the subjects' health status. To deal with this challenging scenario, some recent works have proposed transfer learning and unsupervised domain adaptation algorithms that mitigate the domain shift. In this section, we investigate the transferability of supervised training against self-supervised pretraining under domain shift on five random cross-domain (cross-subject) scenarios from the Sleep-EDF dataset. Each cross-domain scenario consists of one source subject and one target subject. Supervised transferability is obtained by training the model on the source domain and testing it directly on the target domain. For the SSL algorithms, we pretrain the model with the unlabeled source data, fine-tune it with the source labels, and then test it on the target domain.
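The two transfer protocols can be summarized as follows; build_model, pretrain_ssl, finetune, and evaluate are hypothetical stand-ins for the training routines, not functions from the released code.

# hypothetical helpers standing in for the actual training routines
def supervised_transfer(src_x, src_y, tgt_x, tgt_y):
    model = build_model()
    finetune(model, src_x, src_y)         # supervised training on the source subject
    return evaluate(model, tgt_x, tgt_y)  # direct test on the target subject

def ssl_transfer(src_x, src_y, tgt_x, tgt_y, algo="TS-TCC"):
    model = build_model()
    pretrain_ssl(model, src_x, algo=algo) # pretrain on unlabeled source data
    finetune(model, src_x, src_y)         # then fine-tune with the source labels
    return evaluate(model, tgt_x, tgt_y)  # test on the unseen target subject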
The experimental results are provided in Table IV. We notice that in every cross-domain scenario, at least one SSL algorithm outperforms supervised transferability. However, the overall (average) improvement of the best SSL algorithm over supervised training is marginal for DeepSleepNet and AttnSleep, at 1.2% and 1.6%, respectively, whereas 1D-CNN shows an overall improvement of 4.2%. Considering that the 1D-CNN model is less complex than the DeepSleepNet and AttnSleep models (see Section S.I in the supplementary material), we conclude that SSL can compensate for its lower transferability capacity and allow it to achieve comparable performance.

VI. DISCUSSION & RECOMMENDATIONS

In this paper, we studied whether self-supervised pretraining can help improve the performance of existing sleep stage classification models in the few-labeled data regime. Our experiments covered four SSL algorithms applied to three SSC models on three different datasets. The experimental results suggest the following conclusions. First, contrastive SSL algorithms guarantee superior performance of SSC models over supervised training in the few-labeled data settings. Second, contrastive SSL algorithms are robust against sleep data imbalance, which does not affect the quality of the learned representations. Third, self-supervised pretraining improves the out-of-domain transferability of SSC models. Fourth, SSL with predictive tasks can improve the temporal learning capability of SSC models.

These conclusions suggest several directions for enhancing the SSL algorithms proposed for sleep stage classification. First, we find that the auxiliary-task algorithm, ClsTran, yields lower performance than even supervised training in most cases. It is therefore important to study the SSC problem and to propose a new SSC-specific auxiliary task that is more beneficial to downstream performance, similar to tasks proposed in prior work. Second, our experiments included two contrastive SSL algorithms that rely on data augmentations to choose the positive and negative pairs, i.e., SimCLR and TS-TCC. These two methods consider only the augmented views of the same sample as a positive pair, while all other samples are considered negative pairs. However, some of these negative pairs may share the same label and semantic information as the anchor sample, and pushing them apart may deteriorate performance. One way to improve these algorithms is therefore to reduce the number of false negative samples used in contrastive learning. In addition, designing augmentations well suited to sleep EEG data could lead to more effective representations. Third, the SSL algorithms showed limited improvement on the minority classes in the sleep data, i.e., N1 and N3, which limits the overall improvement; another research direction is to study how self-supervised algorithms can learn more about the characteristics of the minority classes during pretraining. Last, we noticed that the SSL algorithms delivered only a limited transferability improvement, which can be further investigated.

VII. CONCLUSIONS

In this paper, we assess the efficacy of different self-supervised learning (SSL) algorithms in improving the performance of sleep stage classification (SSC) models under few-labels settings. The experimental results reveal that contrastive SSL algorithms learn robust and invariant representations of sleep EEG data. In addition, SSL algorithms that include predictive tasks learn temporal features of the EEG data during pretraining, which lessens the need for a temporal encoder in the SSC model. Moreover, self-supervised pretraining improves the robustness of SSC models against data imbalance and domain shift. We therefore recommend pretraining existing SSC models with contrastive SSL algorithms to make them more practical in real-world, label-scarce scenarios.

S.I. NUMBER OF PARAMETERS

Table S.1 shows the number of parameters of the three adopted sleep stage classification models. DeepSleepNet has the largest number of trainable parameters, while 1D-CNN has the smallest. Notably, only the feature extractor is used in the self-supervised pretraining, and its complexity is an important factor with respect to the SSL algorithm.

S.II. DATA AUGMENTATIONS

Some of the SSL algorithms in this work require data augmentations. We therefore examine four augmentations, as follows. Noise: adding a randomly generated noise signal with a mean of 0 and a standard deviation of 0.8. Time shift: shifting the signal by 20% of its total timesteps and rotating the shifted part back to the beginning of the signal. Negate: multiplying the signal by a factor of -1. Permute: randomly splitting each signal into five segments in the time domain, permuting the segments, and recombining them into the original shape. The ClsTran algorithm applies these four augmentations and learns to classify among them; for SimCLR and TS-TCC, we used their corresponding augmentations as originally proposed. A sketch of the four transformations is given below.
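A minimal NumPy sketch of the four transformations, following the parameters stated above (the function names and the fixed random seed are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def noise(x, sigma=0.8):
    # add zero-mean Gaussian noise with standard deviation 0.8
    return x + rng.normal(0.0, sigma, size=x.shape)

def time_shift(x, frac=0.2):
    # shift by 20% of the timesteps; np.roll rotates the tail back to the front
    return np.roll(x, int(x.shape[-1] * frac), axis=-1)

def negate(x):
    # flip the signal's sign
    return -1.0 * x

def permute(x, n_segments=5):
    # split into five segments along time, shuffle them, and recombine
    order = rng.permutation(n_segments)
    segments = np.array_split(x, n_segments, axis=-1)
    return np.concatenate([segments[i] for i in order], axis=-1)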
S.III. EXPERIMENTS

A. Does SSL improve the performance of minority classes?

The second question concerns the ability of SSL algorithms to improve the performance of the minority classes, e.g., the N1 stage. This is a recurring problem in SSC works, which usually seek ways to improve N1 performance. Therefore, we compare self-supervised pretraining against the SSC models equipped with their originally proposed techniques for addressing the class-imbalance problem. Fig. S.1 shows the comparison results, where DeepSleepNet is trained with its two-stage procedure, i.e., pretraining on oversampled data followed by fine-tuning, and AttnSleep is trained with its class-aware loss function. The results show that the SSL algorithms surpass the techniques proposed by both works, achieving better performance across all classes. However, our focus here is on the minority classes, i.e., N1 and N3, where the SSL algorithms do not improve performance significantly; this opens a new research direction to improve SSL performance on these classes.

B. Performance Under Different Few-labels Settings

We discuss the results of the pretrained models when fine-tuned with different amounts of labeled data on the SHHS and ISRUC datasets, shown in Fig. S.2 and Fig. S.3. Similar to the conclusions drawn from the Sleep-EDF dataset, we find that fine-tuning with 5% or 10% of the labels achieves performance very close to supervised training with full labels. In addition, fine-tuning with the full labels surpasses fully supervised training on the ISRUC dataset for AttnSleep and 1D-CNN.

C. Can SSL compensate for the temporal encoder?

One of the main questions when designing a sleep stage classification model is how to learn the temporal dependencies in EEG data. We therefore study the capability of SSL to learn and characterize these dependencies by examining the performance of pretrained SSC models fine-tuned with and without their temporal encoders. We find, first, that supervised training is more affected by the existence and the type of the temporal encoder, showing unstable performance as these factors change. Second, CPC and TS-TCC are robust to the existence and the type of the temporal encoder regardless of the SSC model or the dataset used. The reason is that these two approaches learn representations by predicting future timesteps with an autoregressive model, which lets them learn the temporal dependencies in the EEG data during pretraining; fine-tuning models pretrained with these two algorithms is therefore robust to the type, or even the absence, of a temporal encoder. In contrast, ClsTran and SimCLR mainly rely on data augmentations, which help more with learning spatial representations, so these two methods are more affected by the type or existence of the temporal encoder.
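To make the with/without-encoder ablation concrete, here is a minimal PyTorch sketch; the module and dimension names are illustrative and do not correspond to the actual classes of the three SSC models.

import torch.nn as nn

def build_ssc_model(feature_extractor, feat_dim, temporal_encoder=None, n_classes=5):
    # dropping the temporal encoder amounts to replacing it with the identity map
    temporal = temporal_encoder if temporal_encoder is not None else nn.Identity()
    return nn.Sequential(feature_extractor, temporal, nn.Flatten(),
                         nn.Linear(feat_dim, n_classes))

# e.g. (cnn and transformer are hypothetical modules):
#   with_encoder    = build_ssc_model(cnn, feat_dim, temporal_encoder=transformer)
#   without_encoder = build_ssc_model(cnn, feat_dim)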
package org.bian.dto;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonCreator;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;

import javax.validation.Valid;

/**
 * SDBranchCurrencyDistributionActivateInputModelBranchCurrencyDistributionServiceConfigurationRecordBranchCurrencyDistributionServiceConfigurationSetup
 */
public class SDBranchCurrencyDistributionActivateInputModelBranchCurrencyDistributionServiceConfigurationRecordBranchCurrencyDistributionServiceConfigurationSetup {

  private String branchCurrencyDistributionServiceConfigurationParameter = null;

  /**
   * `status: Not Mapped`
   * core-data-type-reference: BIAN::DataTypesLibrary::CoreDataTypes::UNCEFACT::Text
   * general-info: The default activation setting for the offered service configuration parameter
   * @return branchCurrencyDistributionServiceConfigurationParameter
   */
  public String getBranchCurrencyDistributionServiceConfigurationParameter() {
    return branchCurrencyDistributionServiceConfigurationParameter;
  }

  public void setBranchCurrencyDistributionServiceConfigurationParameter(String branchCurrencyDistributionServiceConfigurationParameter) {
    this.branchCurrencyDistributionServiceConfigurationParameter = branchCurrencyDistributionServiceConfigurationParameter;
  }
}
import numpy as np

# Count how many items are chosen by all n people.
# Input: first line "n m"; then n lines, each starting with a count k
# followed by k 1-based item indices.
n, m = map(int, input().split())
counts = np.zeros(m)  # counts[j] = number of people who chose item j+1
for _ in range(n):
    row = [int(x) for x in input().split()]
    k = row[0]  # number of items this person chose
    for j in range(k):
        counts[row[j + 1] - 1] += 1
ans = (counts == n).sum()  # items chosen by everyone
print(ans)
use std::env;
use std::fs::File;
use std::io::BufReader;
use std::io::prelude::*;

// Follow a list of jump offsets until the instruction pointer leaves the
// program, counting the jumps taken; `update` mutates each offset after use.
fn run<F>(program: &mut Vec<i32>, update: F) -> i32
    where F : Fn(i32) -> i32 {
    let mut ip: i32 = 0;
    let mut count: i32 = 0;
    loop {
        if ip < 0 || ip >= program.len() as i32 {
            return count;
        }
        let val = program[ip as usize];
        let next_ip = ip + val;
        program[ip as usize] = update(val);
        ip = next_ip;
        count += 1;
    }
}

// Part 1: every offset increases by 1 after it is used.
fn run_part1(program: &mut Vec<i32>) -> i32 {
    run(program, |o| o + 1)
}

// Part 2: offsets of 3 or more decrease by 1; smaller ones increase by 1.
fn run_part2(program: &mut Vec<i32>) -> i32 {
    run(program, |o| if o >= 3 { o - 1 } else { o + 1 })
}

fn handle_file(filename: &String) {
    let f = File::open(filename).expect("file not found.");
    let reader = BufReader::new(f);
    // one signed jump offset per line
    let mut p1: Vec<i32> = reader.lines()
        .map(|s| s.unwrap().parse::<i32>().unwrap())
        .collect();
    let mut p2 = p1.to_vec();
    println!("{}: part1: {} jumps. part2 {} jumps.",
             filename, run_part1(&mut p1), run_part2(&mut p2));
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let filenames = &args.as_slice()[1..];
    for filename in filenames {
        handle_file(&filename);
    }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn example_part1() {
        assert_eq!(run_part1(&mut vec![0, 3, 0, 1, -3]), 5);
    }

    #[test]
    fn example_part2() {
        assert_eq!(run_part2(&mut vec![0, 3, 0, 1, -3]), 10);
    }
}
E-Government Information Systems and Cloud Computing (Readiness and Analysis)

A new wave of the IT revolution, e-government, presents a tremendous opportunity to provide higher-quality, cost-effective government services and to create a better relationship between citizens and government. The literature review presented in this paper, however, indicates that e-government readiness is a major concern, that comprehensive assessment methods for e-government readiness are currently scarce, and that most of the assessment frameworks reviewed for this study vary in their philosophies, objectives, methodologies, approaches, and results. To this end, this research aims to develop a comprehensive framework of associated guidelines and tools to support E-government Information Systems (EGIS) readiness, with a specific focus on the migration of EGIS to the cloud computing provisioning model.
SALT LAKE CITY (AP) Rookie Donovan Mitchell couldn't hold back a wide, toothy smile as Utah Jazz fans chanted his name and gave him an ovation that lasted for minutes after a win over the New Orleans Pelicans. Mitchell scored a career-high 41 points and powered the Jazz's fourth-quarter rally in a 114-108 victory over the Pelicans on Friday night. The No. 13 overall draft pick set the Jazz scoring record for a rookie and became the first NBA rookie to score 40 points in a game since Blake Griffin in 2011. He surpassed Darrell Griffith's team-record 38 from 1981. "I don't have any words, to be honest," Mitchell said. "I had Jonas (Jerebko) in my ear saying keep taking those shots. Even shots that may not always be good shots, he says keep being aggressive. Coach's saying it. Everybody's saying it." New Orleans star Anthony Davis went down with a left groin injury at the beginning of the fourth quarter, hitting the ground under the Jazz basket and lying there until trainers came to help. Coach Alvin Gentry said Davis will have an MRI in Portland on Saturday and is "very unlikely" to play. Trainers eventually carried the All-Star off because he couldn't put any pressure on one of his legs; he was immediately placed in a wheelchair and taken to the training room. Davis had 19 points and 10 rebounds before leaving. DeMarcus Cousins led New Orleans with 23 points and 13 rebounds. "They hit some shots, they hit some big shots," Cousins said. "The rookie had a (heck) of a game; he dominated from start to finish. They hit some big shots." Jazz: Rodney Hood missed his third consecutive game with left ankle soreness. … Raul Neto did not play for the second straight game due to left hamstring soreness.
Mechanics of interface debonding in fibre-reinforced materials

The evaluation of damage in multiphase materials plays a crucial role in their safety assessment under service mechanical actions. In this context, quantifying the damage associated with fibre-matrix detachment is one of the most important tasks for short-fibre-reinforced materials. In the present article, the problem of progressive fibre-matrix debonding is examined, and a mechanical interpretation of the phenomenon is developed by relating the shear-lag and fracture mechanics approaches in order to determine the characteristics of the fibre-matrix interface. A multiscale approach is employed: at the macroscopic level, composites with dilute dispersed fibres, arranged in a unidirectional or random orientation, are analysed through homogenization, whereas at the microscopic level the problem of axisymmetric debond growth in short fibres is examined. Moreover, a structured linear elastic interface model for crack propagation analysis is applied by defining a microscopic truss structure, which makes it possible to relate the classical shear-strength approach to the fracture mechanics approach. Finally, a fibre pull-out test and some simple fibre-reinforced structural components are examined. This new point of view on the debonding phenomenon allows a deep understanding of the mechanics of the fibre-matrix interface and enables the characterization of this interface layer, which plays a relevant role in the mechanical design of composite materials.
Q: How do I start a single-player/multiplayer game with a custom map in StarCraft II? This may sound a little silly, but I can't seem to find out how to do this. Single Player -> Vs. A.I. seems to be the way for single player, but then I can't find a way to select my map; I only see a list of Blizzard maps. If, on the other hand, I start a multiplayer game, I get all the ladder nonsense. I can create a custom game (or play cooperatively), but again the map selector doesn't allow me to "move" to a different folder, and I can't find the maps in the StarCraft folder either... Any ideas? I'm used to playing on a couple of custom maps that I'm porting to SC2; we used to play over LAN (lame that that's not possible anymore). I don't care about using BNET as long as I can use my map. I must be doing something wrong... In essence, how do I create a map and use it in a custom game? Do I have to publish it? (That seems ridiculous.) Thanks.

A: The custom game selection allows you to draw from maps that you have uploaded to Battle.net via the Galaxy Editor. As I understand it, this is exactly what you want to do, and it sounds like you do understand how to publish a map. I believe there is an option to publish the map privately for custom game use, though I haven't checked since the beta.
The latest U.S. Department of Education data indicate that states and school districts have $27 billion in stimulus cash still sitting in the bank, waiting to be spent. They might want the money back. House GOPers are eyeing unspent stimulus funds, of which there might be as much as $45 billion, to pay for those cuts. And several billion of those unspent dollars are education funds. For a look at state-by-state breakdowns of how much education stimulus money states have left, see my earlier post. Any attempt to take back stimulus money, especially from public schools, would undoubtedly be met with a gigantic fight. And of course, the Senate is still controlled by Democrats, although Republicans made election gains in that chamber as well. What's more, a sizable chunk of remaining education stimulus dollars include competitive awards, such as Race to the Top, in which the funding can't be drawn down all at once. My guess is Congress couldn't—or wouldn't—take that money away regardless.
A new look at the nature, structure, and function of the ligament of Treitz

Background: The ligament of Treitz connects the duodenojejunal flexure to the right crus of the diaphragm. There are various opinions regarding the existence of smooth muscle fibers in the ligament. We aimed to resolve this question through a microscopic study of this structure in cadavers. Materials and Methods: This study was performed on three cadavers in the medical faculty of Isfahan University of Medical Sciences. Three histological specimens were collected from the upper, central, and lower parts of the ligament of Treitz and were stained with H and E and Mallory's trichrome stains. Three further samples were collected from the exact connection of the main mesentery to the body wall, from its connection to the intestine, and from the region between these two connections, and these specimens were stained in the same way. Results: In the microscopic survey, apart from the dense muscular tissue, no collagen bundles were observed in the samples collected from the ligament of Treitz. In the samples collected to examine collagen tissue stretching from the ligament of Treitz to the main mesentery of the intestine, no collagen bundles were observed either. Conclusion: The ligament of Treitz connects the third and fourth parts of the duodenum to the right crus of the diaphragm. Some researchers state that it contains smooth and striated muscle tissue; based on our histological observations of samples of the Treitz muscle, however, neither muscular bundles nor dense collagenous connective tissue bundles can be observed in the way commonly imagined.