Screening of Diabetes and Hypertension Based on Retinal Fundus Photographs Using Deep Learning Purpose: The aim of this study was to use deep learning to screen for hypertension and diabetes based on retinal fundus images. Methods: We collected 1160 retinal photographs, of which 580 were from patients with a diagnosis of hypertension or diabetes and 580 from normotensive, non-diabetic controls. We divided this image dataset into (i) a development dataset used to develop the model and (ii) a test dataset, not seen during training, used to assess the model's performance. A binary classification model was trained by fine-tuning the classifier and the last convolutional layer of a deep residual network. Precision, recall, the area under the ROC curve (AUC), and the area under the precision-recall curve (AUPR) were used to evaluate the performance of the learned model. Results: When we used 3-channel color retinal photographs to train and test the model, its prediction precision for diabetes or hypertension was 65.3%, its recall was 82.5%, its AUC was 0.745, and its AUPR was 0.742. When we used grayscale retinal photographs to train and test the model, its prediction precision was 70.0%, its recall was 87.5%, its AUC was 0.803, and its AUPR was 0.779. Conclusions: Our study shows that a trained deep learning model based on retinal fundus photographs alone can be used to screen for diabetes and hypertension, although its current performance is not ideal. |
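A minimal sketch (not the authors' code) of the fine-tuning setup the abstract describes: a pretrained deep residual network with everything frozen except its last convolutional block and a new binary classifier head, evaluated by precision, recall, AUC, and AUPR. It assumes tf.keras and scikit-learn; the directory layout and hyperparameters are illustrative.
import tensorflow as tf
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical folder layout: fundus/{train,test}/{control,disease}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus/train", image_size=(224, 224), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus/test", image_size=(224, 224), batch_size=32,
    label_mode="binary", shuffle=False)

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
# Fine-tune only the last convolutional block ("conv5_*"); freeze everything else.
for layer in base.layers:
    layer.trainable = layer.name.startswith("conv5_")

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base(x))
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(train_ds, epochs=10)

# AUC and AUPR on the held-out test set.
y_true = tf.concat([y for _, y in test_ds], axis=0).numpy().ravel()
y_prob = model.predict(test_ds).ravel()
print("AUC:", roc_auc_score(y_true, y_prob))
print("AUPR:", average_precision_score(y_true, y_prob))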
<reponame>AirGuanZ/Atrc
#pragma once
#include <agz/tracer/core/intersection.h>
#include <agz/tracer/core/texture2d.h>
AGZ_TRACER_BEGIN
/**
* @brief surface material interface
*/
class Material
{
public:
virtual ~Material() = default;
/** @brief entity intersection -> shading point */
virtual ShadingPoint shade(
const EntityIntersection &inct, Arena &arena) const = 0;
};
/**
* @brief helper class for normal mapping
*/
class NormalMapper : public misc::uncopyable_t
{
RC<const Texture2D> normal_map_;
public:
NormalMapper() = default;
explicit NormalMapper(RC<Texture2D> normal_map) noexcept
: normal_map_(std::move(normal_map))
{
}
/**
* @brief reorient the shading frame
*/
FCoord reorient(const Vec2 &uv, const FCoord &old_user_coord) const noexcept
{
if(!normal_map_)
return old_user_coord;
FSpectrum local_nor_spec = normal_map_->sample_spectrum(uv);
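// remap the sampled RGB value from [0, 1] into a tangent-space normal in [-1, 1]^3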
FVec3 local_nor = {
local_nor_spec.r * 2 - 1,
local_nor_spec.g * 2 - 1,
local_nor_spec.b * 2 - 1
};
FVec3 world_nor = old_user_coord.local_to_global(local_nor.normalize());
return old_user_coord.rotate_to_new_z(world_nor);
}
};
AGZ_TRACER_END
|
Behavioral Targeting: Which Method Produces the Most Robust Prediction? A Confrontation between Decision Trees, Neural Networks and Regressions A method of manufacturing a semiconductor device having a non-single-crystalline semiconductor layer including an intrinsic or substantially intrinsic silicon which contains hydrogen or halogen and is formed on a substrate in a reaction chamber which may have a substrate holder. The inside of the reaction chamber and/or the surface of the substrate holder is treated to remove sodium therefrom so that the concentration of sodium in the semiconductor layer is preferably 5×10¹⁸ atoms/cm³ or less. |
<reponame>LightBit/libkripto<gh_stars>1-10
#ifndef KRIPTO_BLOCK_SKIPJACK_H
#define KRIPTO_BLOCK_SKIPJACK_H
extern const kripto_block_desc *const kripto_block_skipjack;
#endif
|
In Search of Kentum Indo-Europeans in the Himalayas In 1988 and 1989 Claus Peter Zoller reported the astonishing discovery of what appeared to be the remnants of an ancient Kentum Indo-European tongue in the Western Himalayas, in a modern language known as Bangani. Zoller's Bangani findings not only had far-reaching implications for our understanding of the prehistoric migrations of ancient Indo-Europeans, they also appeared to violate much of what is received knowledge in historical linguistics. In 1994 we conducted fieldwork in order to verify these remarkable findings. The results of our investigation are presented here. On the basis of these results, it is our contention that no Kentum Indo-European remnants exist in the Bangani language. We also discuss the implications of our findings for the historical linguistic and methodological issues raised by Zoller's work. We have normalized Zoller's phonetic transcriptions with our own in the following way. We indicate the velar nasal (ṅ) and the retroflex sounds in accordance with Indological tradition rather than with the newer International Phonetic Association symbols. Likewise, we indicate the so-called long vowels with a macron. Bangani low tone is indicated by a grave accent. In Bangani, as in Hindi, the sibilants ś and ṣ have merged to yield a single modern phoneme, which we transcribe as ś, and which has remained distinct from s. We represent the unvoiced palatal consonants in the conventional manner (c, ch), but we represent the voiced palatal as (z) because of its fricative character. The palatal occlusives have become affricates in Bangani, but the voiced palatal tends strongly towards a fricative realization. The phonetic realization of Bangani /c/ varies, as does that of Bangani /z/ (seldom, e.g., in place names). For the former, Zoller's notation is 'ts' or 'tʃ', and he notes the latter phoneme variously as 'z', 'dz' or 'dʒ'. Our transcriptions assume a tentative phonological analysis, outlined in Van Driem & Sharma (forthcoming). Bangani toponyms are transliterated as they would be written in Hindi, with some additional phonological details on local pronunciation provided in square brackets. Written Hindi and Nepali are translit- |
Roles and Cellular Localization of GBP2 and NAB2 During the Blood Stage of Malaria Parasites The quality control and export of mRNA by RNA-binding proteins are necessary for the survival of malaria parasites, which have complex life cycles. Nuclear poly(A) binding protein 2 (NAB2), THO complex subunit 4 (THO4), nucleolar protein 3 (NPL3), G-strand binding protein 2 (GBP2) and serine/arginine-rich splicing factor 1 (SR1) are involved in nuclear mRNA export in malaria parasites. However, their roles in asexual and sexual development, and in cellular localization, are not fully understood. In this study, using the rodent malaria parasite Plasmodium berghei, we found that NAB2 and SR1, but not THO4, NPL3 or GBP2, played essential roles in the asexual development of malaria parasites. By contrast, GBP2 but not NPL3 was involved in male and female gametocyte production. THO4 was involved in female gametocyte production, but had a lower impact than GBP2. In this study, we focused on GBP2 and NAB2, which play important roles in the sexual and asexual development of malaria parasites, respectively, and examined their cellular localization. GBP2 localized to both the nucleus and cytoplasm of malaria parasites. Using immunoprecipitation coupled to mass spectrometry (IP-MS), GBP2 interacted with the proteins ALBA4, DOZI, and CITH, which play roles in translational repression. IP-MS also revealed that phosphorylated adapter RNA export protein (PHAX) domain-containing protein, an adaptor protein for exportin-1, also interacted with GBP2, implying that mRNA export occurs via the PHAX domain-containing protein pathway in malaria parasites. Live-cell fluorescence imaging revealed that NAB2 localized at the nuclear periphery. Moreover, IP-MS indicated that NAB2 interacted with transportin. RNA immunoprecipitation coupled to RNA sequencing revealed that NAB2 bound directly to 143 mRNAs, including those encoding 40S and 60S ribosomal proteins. Our findings imply that malaria parasites use an evolutionarily ancient mechanism conserved throughout eukaryotic evolution. INTRODUCTION Malaria parasites are unicellular eukaryotes that belong to the genus Plasmodium in the phylum Apicomplexa. Malaria parasites have complex life cycles, alternating between female Anopheles mosquitoes and vertebrate hosts. The quality control and export of mRNA, as well as the post-transcriptional regulation of gene expression by RNA-binding proteins, play important roles in the life cycles of malaria parasites. NAB2 and YRA1 play essential roles in mRNA export in Saccharomyces cerevisiae. YRA1 enhances the interaction between NAB2 and Mex67. NAB2 contains an N-terminal PWI domain and a C-terminal CCCH-type zinc finger motif, similar to yeast NAB2 (Tuteja and Mehta, 2010). Moreover, a nuclear localization signal (NLS), the RX(2-5)PY NLS (PY-NLS), is present in NAB2 according to the PlasmoDB database (www.plasmodb.org/). THO complex subunit 4 (THO4) is a homologue of yeast YRA1 in malaria parasites. THO4 is characterized by its central RNA-binding domain (Tuteja and Mehta, 2010). However, the functions of NAB2 and THO4 in Plasmodium species remain unknown. Homologs of NAB2, GBP2, SR1, and PHAX have been identified in yeast and humans (Tuteja and Mehta, 2010). In S. cerevisiae, individual SR proteins such as NPL3, GBP2, and HRB1 are not essential (Zander and Krebber, 2017). NPL3, GBP2 and SR1 contain conserved RNA recognition motifs. SR1 is essential for asexual development in Plasmodium falciparum.
On the other hand, in the rodent malaria parasite Plasmodium berghei ANKA, GBP2 is primarily involved in sexual development. However, the roles of NPL3, GBP2, and SR1 in malaria parasites are not fully understood. In this study, we investigated the roles of NAB2, THO4, NPL3, GBP2, and SR1 during asexual and sexual development in P. berghei ANKA via reverse genetics. We found that NAB2 and SR1, but not THO4, NPL3 or GBP2, played essential roles in the asexual development of malaria parasites. On the other hand, THO4 and GBP2, but not NPL3, were involved in gametocyte production. In particular, GBP2 played a pivotal role in sexual development. To investigate the roles of GBP2 and NAB2 during the sexual and asexual developmental stages, respectively, we examined their cellular localization using P. berghei ANKA expressing GBP2 or NAB2 fused to the fluorescent protein mCherry. Moreover, we identified interacting proteins using immunoprecipitation coupled to mass spectrometry (IP-MS) and bound mRNAs using RNA immunoprecipitation coupled to RNA sequencing (RIP-seq). Mouse Studies and Ethics Five- to six-week-old female C57BL/6J (B6) mice were purchased from CLEA Japan Inc. (Tokyo, Japan). The experiments were approved (#221) by the Experimental Animal Ethics Committee of Kyorin University School of Medicine (Tokyo, Japan), and all experimental animals were kept at the animal facility in a specific-pathogen-free unit with sterile bedding, food, and water. The infection studies included frequent observations to determine humane endpoints, at which mice were unable to ambulate sufficiently to obtain water or food. At the indicated time points, mice were euthanized by cervical dislocation under isoflurane or pentobarbital sodium anesthesia (N = 64). All experiments were designed to minimize suffering. When illness or death was expected due to experimental infections, mice were visually checked by investigators at least twice daily (including weekends and holidays). Mice that exhibited signs of neurological distress, such as cerebral paralysis or depression, were humanely sacrificed by cervical dislocation under isoflurane anesthesia and scored as deaths (N = 24). No mice died before meeting the criteria for euthanasia. The investigators who conducted the experiments had completed the Experimental Animal Ethics Committee training course on animal care and handling. Parasites and Infection The p230-deleted P. berghei was generated in a previous study and used as the control parasite. The p230 locus (PBANKA_030600) is not an essential gene in the complete life cycle of P. berghei. The gbp2 (PBANKA_120500)-deleted P. berghei (Δgbp2 parasites) was generated in a previous study. Malaria parasites were stored as frozen stocks in liquid nitrogen. Infected erythrocytes of transfected parasites were generated in donor mice inoculated intraperitoneally with frozen stocks of parasites. The donor mice were monitored for parasitemia daily and bled for experimental infection during periods in which the level of parasitemia increased. Experimental mice were infected intravenously with 1 × 10⁴ infected erythrocytes or 5 × 10⁶ to 5 × 10⁷ purified mature schizonts harvested by Nycodenz density gradient centrifugation of a given parasite strain. Transfection To generate nab2-, tho4-, npl3- and sr1-deleted P. berghei ANKA, the gene-targeting vectors for nab2 (PBANKA_1122000), tho4 (PBANKA_1230500), npl3 (PBANKA_0506600) and sr1 (PBANKA_1232100) were prepared by PCR.
Briefly, the 5′ and 3′ flanking regions of the open reading frame (ORF) of each target gene were amplified by PCR. The PCR products were annealed to either side of the human dihydrofolate reductase (hdhfr)-expressing cassette and amplified by PCR using gene-specific primers (Supplementary Table S1, Figure S1). The gene-targeting vectors were introduced into the ORFs of the target genes by double-crossover homologous recombination (Supplementary Figure S1). To generate transgenic parasites expressing mCherry-fused GBP2 or mCherry-fused NAB2, the gene-targeting vectors for gbp2 (PBANKA_120500) or nab2 (PBANKA_1122000) were prepared by PCR. The PCR products were annealed to either side of the red fluorescent protein gene (mCherry)-hdhfr-expressing cassette and amplified by PCR using gene-specific primers (Supplementary Table S1, Figures S2A, B). The gene-targeting vectors were introduced into the 3′ flanking regions of the ORFs of the target genes by double-crossover homologous recombination (Supplementary Figures S2A, B). To generate transgenic parasites expressing mCherry-fused NAB2 and GFP-fused NUP205, the gene-targeting vector for nup205 (PBANKA_1140100) was prepared by PCR. The PCR products were annealed to either side of the green fluorescent protein gene (gfp)-mutated human deoxyhypusine synthase (hdhps)-expressing cassette and amplified by PCR using gene-specific primers (Supplementary Table S1, Figure S2C). The gene-targeting vector was introduced into the 3′ flanking region of the ORF of the target gene by double-crossover homologous recombination (Supplementary Figure S2C). Transfection was performed using an Amaxa Basic Parasite Nucleofector Kit (Amaxa GmbH, Cologne, Germany) according to the manufacturer's protocol. Genomic PCR To generate gene-targeting vectors and confirm the introduction of gene-targeting vectors into target genes, genomic PCR was performed as described previously. Thirty-five cycles of PCR were performed on a C1000 thermal cycler (Bio-Rad, Hercules, CA, USA). Each cycle consisted of denaturation at 98°C for 15 s, annealing at 55°C for 15 s, and extension at 68°C for 1-6 min. The PCR products were then analyzed on a 1% (w/v) agarose gel and stained with ethidium bromide. Parasitemia Methanol-fixed tail-blood smears, stained for 45 min with 3% Giemsa diluted in phosphate buffer (pH 7.2), were subjected to microscopic examination. The number of infected erythrocytes (out of 250 erythrocytes) was enumerated when the level of parasitemia and gametocytemia exceeded 10%, while 1 × 10⁴ erythrocytes were examined in mice with lower levels of parasitemia and gametocytemia. The parasitemia and gametocytemia percentage values were calculated as follows: (number of infected erythrocytes / total number of erythrocytes examined) × 100. Evaluation of Gametocyte Production In Vitro To evaluate gametocyte production, early trophozoite stage malaria parasites were obtained from B6 mice exhibiting 1-2% parasitemia. Infected erythrocytes were incubated for 28 h in a 12-well plate. Methanol-fixed blood smears, stained for 45 min with 3% Giemsa diluted in phosphate buffer (pH 7.2), were subjected to microscopic examination. Erythrocytes infected with mature schizonts containing 4-15 merozoites, and mature gametocytes showing sex-specific features such as nuclear enlargement, were counted as described previously. The distribution of pigment granules throughout the cytoplasm, and enlargement of cells, were also assessed. The proportions of male and female gametocytes were determined in at least 300 infected erythrocytes.
The proportions of male and female gametocytes were calculated as follows: (number of male or female gametocytes / total number of infected erythrocytes examined) × 100. Fluorescence Live Cell Imaging Nuclear DNA was stained using Hoechst 33342 dye (Invitrogen, Waltham, MA). To examine the localization of NAB2::mCherry, MitoBright LT Green (Dojindo Laboratories, Kumamoto, Japan) was added to the culture medium at 100 nM and incubated for 15 min at 37°C. Next, Hoechst 33342 was added to the culture at a concentration of 1 µg/mL. The staining medium was removed after the incubation, and fresh medium was added. Brightfield and fluorescence micrographs were captured at 1,000× magnification using an All-in-One Fluorescence Microscope (BZ-X800; KEYENCE Japan, Osaka, Japan). Protein IP Infected erythrocytes were transferred to RPMI 1640 medium supplemented with 25% fetal bovine serum, 0.05 mg/mL penicillin and 0.05 mg/mL streptomycin. The infected erythrocytes were incubated for 22 h in 90% N₂, 5% CO₂ and 5% O₂. Mature schizonts and gametocytes were harvested by Nycodenz density gradient centrifugation, as described previously. Proteins were extracted using Mammalian Protein Extraction Reagent (Thermo Fisher Scientific, Waltham, MA) according to the manufacturer's protocol. Protein IP in transgenic parasites expressing mCherry fused to GBP2 or NAB2 was performed using GFP- or RFP-Trap Agarose and a GFP-Trap-A kit, according to the manufacturer's instructions (Chromotek, Planegg, Germany). MS Mass spectrometry was performed as described previously. The database search engines Proteome Discoverer 1.4 (Thermo Scientific) and MASCOT 2.6 (Matrix Science) were used to identify and quantify proteins from the MS, MS/MS and reporter ion spectra of the peptides. Peptide mass data were matched by searching the protein database (PlasmoDB-46_PbergheiANKA.fasta), downloaded from PlasmoDB (updated November 4, 2019). The false discovery rate (FDR) was calculated by peptide sequence analysis using Percolator software. High-confidence peptide identifications were obtained by setting a target false discovery rate threshold of ≤ 1.0% at the peptide level. The mass spectrometry proteomics data have been deposited in the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD027302. Proteins exhibiting fewer than three peptide spectral matches were excluded. RIP-Seq Following protein IP, RNA was isolated using an RIP-Assay Kit (Medical & Biological Laboratories, Tokyo, Japan) according to the manufacturer's instructions. RNA from two independent RNA IP assays was prepared to generate cDNA using a SMART-Seq V4 Ultra Low Input RNA Kit for sequencing (Takara, Shiga, Japan). cDNA libraries prepared using Nextera DNA Flex Library Prep Kits (Illumina K.K., Tokyo, Japan) were analyzed using an Illumina NextSeq500 (Illumina K.K.) at FASMAC (Kanagawa, Japan). Data were matched by searching the database (PlasmoDB-47_PbergheiANKA.fasta) downloaded from PlasmoDB (updated April 23, 2020). Statistical Analysis For time-series comparisons, one- and two-way ANOVAs with Fisher's protected least significant difference (PLSD) post hoc test were performed using the Statcel program (OMS, Saitama, Japan). P-values < 0.05 were considered statistically significant. NAB2 and SR1, but Not THO4, NPL3 or GBP2, Were Essential for Survival During Asexual Development in Malaria Parasites We first investigated the effects of nab2, tho4, npl3, gbp2 and sr1 deletion on the asexual development of malaria parasites. In our previous study, we generated P. berghei ANKA gbp2 deletion mutants (Δgbp2).
In this study, we attempted to generate P. berghei ANKA nab2, tho4, npl3, and sr1 deletion mutants (Δnab2, Δtho4, Δnpl3, and Δsr1, respectively) by introducing gene-targeting vectors into the P. berghei ANKA genome (Supplementary Figure S1). In the P. berghei knockout (PlasmoGEM) growth phenotypes, data on these mutants are not shown (PlasmoDB). By contrast, the transposon screen in P. falciparum showed THO4, NPL3, and SR1 to be dispensable during the asexual development of malaria parasites (PlasmoDB). In the transposon screen of P. falciparum, data for NAB2 and GBP2 were between dispensable and essential for asexual development (PlasmoDB). Δnab2 and Δsr1 mutants could not be generated, implying that NAB2 and SR1 are essential for the asexual development of malaria parasites. However, Δtho4 and Δnpl3 mutants were successfully generated (Supplementary Figures S1B, C). Δtho4 and Δnpl3 were inoculated intravenously into mice; their growth was monitored in vivo and compared with that of the control and Δgbp2. The courses of parasitemia in mice infected with Δtho4 or Δnpl3 were comparable with those in mice infected with the control or Δgbp2 parasites (Figure 1A). These results imply that tho4, npl3, and gbp2 deletions did not affect the asexual development of the malaria parasites. Male and Female Gametocyte Production Was Decreased in gbp2 Deletion Mutants Previously, we found that gbp2 deletion affects sexual development in P. berghei ANKA. Therefore, we investigated the effects of tho4 and npl3 deletion on gametocyte production in malaria parasites. The percentage of male gametocytes in cultured Δtho4 and Δnpl3 parasites was comparable to the percentage in control parasites (Figure 1B). On the other hand, the percentage of female gametocytes was lower in cultured Δtho4 parasites than in control parasites (Figure 1B). However, the percentages of both male and female gametocytes were higher in cultured Δtho4 and Δnpl3 parasites than in Δgbp2 parasites (Figure 1B). These findings imply that GBP2 is involved in male and female gametocyte production in malaria parasites. Cellular Localization of GBP2 and NAB2 in Malaria Parasites Our results imply that NAB2 and SR1 are essential for the asexual development of malaria parasites. SR1, but not NAB2, is localized to the nucleus and bound to RNAs in malaria parasites. We showed that GBP2 plays a more important role than THO4 and NPL3 in male and female gametocyte production. However, the cellular localization of GBP2 and GBP2-binding RNAs is unknown. Therefore, to elucidate the quality control and export of mRNA by RNA-binding proteins, we focused on GBP2 and NAB2, which play important roles in the sexual and asexual development of malaria parasites, respectively. To investigate the cellular localization of GBP2 and NAB2, we generated transgenic parasites expressing GBP2 or NAB2 fusion proteins (GBP2::mCherry or NAB2::mCherry, respectively) (Supplementary Figures S2A, B). The mCherry tag was introduced at the C-terminus of endogenous GBP2 or NAB2. gbp2::mCherry and nab2::mCherry expression was controlled by the endogenous gbp2 and nab2 native promoters, respectively. Both the GBP2::mCherry and NAB2::mCherry mutant lines were successfully generated (Figures S2A, B) and expressed the fusion protein (Tables 1 and 2).
To examine the cellular localization of GBP2 and NAB2, we performed live-cell fluorescence imaging of cultured GBP2::mCherry and NAB2::mCherry schizonts and of infected erythrocytes obtained from mice at 6 h (ring form), 12 h (trophozoite) and 18 h (late trophozoite) post-inoculation with GBP2::mCherry and NAB2::mCherry schizonts (Figure 2). The mCherry signal was distributed throughout GBP2::mCherry parasite cells at all development stages (Figure 2A). In NAB2::mCherry parasites, a single fluorescent spot representing the mCherry signal was present at the nuclear periphery in cultured schizonts and in schizont-infected erythrocytes collected at 6 h (ring form) post-inoculation (Figure 2B). At 12 h (trophozoite) and 18 h (late trophozoite) post-inoculation, speckled mCherry signals were present at the nuclear periphery in NAB2::mCherry cells (Figure 2B). A spot of mCherry fluorescence was also detected in the cytoplasm of NAB2::mCherry parasites at 18 h (late trophozoite) post-inoculation (Figure 2B). These findings imply that the cellular locations of GBP2 and NAB2 in malaria parasites differ. NAB2 Localizes Mainly to the Inside of the Nuclear Membrane in Malaria Parasites Nuclear pore complexes are composed of Nups, of which five or six have been identified in Plasmodium species to date. In P. berghei, Nup138, Nup205, Nup221, Nup313, and Nup637 have been identified as potential Nups with frequent phenylalanine-glycine repeats that localize to the nuclear periphery (Xie and Ren, 2019). To determine whether NAB2 localizes to the inside of the nuclear membrane in malaria parasites, we generated a NAB2::mCherry strain expressing NUP205 fused to green fluorescent protein (GFP) (Supplementary Figure S2C). Because NUP205 fluorescence increases with the size of the nucleus in P. berghei (Xie and Ren, 2019), we analyzed parasites during the late trophozoite stage (18 h post-inoculation). Live-cell fluorescence imaging revealed that the GFP signal was localized mainly to the nuclear periphery in P. berghei ANKA trophozoites (Figure 3A), implying that NAB2 localized to the inner nuclear membrane in P. berghei ANKA. To investigate whether the fluorescent signal of mCherry detected in the cytoplasm represented localization to mitochondria, NAB2::mCherry parasites were stained with the mitochondrial dye MitoBright LT Green; the mCherry signal did not overlap that of mitochondria (Figure 3B). Identification of GBP2- and NAB2-Interacting Proteins in Malaria Parasites To investigate which proteins interact with GBP2 and NAB2, we performed protein IP using anti-mCherry beads and identified the proteins bound to GBP2 and NAB2 by MS. Proteins were extracted from mature schizonts and gametocytes harvested by Nycodenz density gradient centrifugation. IP-MS using anti-mCherry beads in wild-type P. berghei ANKA, as well as anti-GFP beads in the GBP2::mCherry and NAB2::mCherry strains, was performed to provide controls. In three independent comparative proteomics analyses of GBP2, 175, 191, and 236 proteins were detected. Among them, 15 proteins with at least three peptide spectral matches and a fold change ≥ 2.5 compared with the controls among three independent experiments were analyzed further (Table 1). Nuclear and cytoplasmic proteins, such as FoP domain-containing protein (PBANKA_1234500), which is a homolog of Friend of Prmt1, and polyadenylate-binding protein 1 (PBANKA_1439200), respectively, were detected by IP-MS of GBP2::mCherry (Table 1).
Moreover, the gametocyte-related proteins DNA/RNA-binding protein Alba (ALBA) 2 and 4 (PBANKA_1359200 and PBANKA_1360300, respectively), ATP-dependent RNA helicase DDX6 (DOZI; PBANKA_1217700) and trailer hitch homolog (CITH; PBANKA_1301300) were identified as GBP2-interacting proteins (Table 1). [Displaced Figure 1 legend (fragment): ... berghei ANKA with gbp2, npl3, or tho4 deletion (Δgbp2, Δnpl3, or Δtho4 parasites, respectively). As a control, P. berghei ANKA with a p230 deletion was inoculated intravenously into mice. Results are expressed as means ± standard deviation (SD) of three mice. Experiments using three mice were performed in duplicate. (B) Percentages of mature male and female gametocytes. Erythrocytes infected with malaria parasites were incubated for 28 h. The percentages of male and female gametocytes were calculated as follows: (number of male or female gametocytes / total number of infected erythrocytes examined) × 100. Results are expressed as the mean ± standard deviation of three independent experiments. * indicates a significant difference compared with the control parasites (Tukey-Kramer and Dunnett tests). ** indicates a significant difference compared with the control, Δnpl3, and Δtho4 parasites (Tukey-Kramer and Dunnett tests).] A phosphorylated adapter RNA export protein (PHAX) domain-containing protein (PBANKA_0506100), which is associated with mRNA export from the nucleus into the cytoplasm, also interacted with GBP2 (Table 1), implying that an mRNA export pathway involving PHAX domain-containing proteins is present in malaria parasites (Supplementary Figure S3). However, no nuclear pore complex proteins or export receptor-like proteins were detected by IP-MS of GBP2::mCherry. In the three independent comparative proteomics analyses of NAB2, 390, 410, and 536 proteins were detected. Among them, 30 proteins with at least three peptide spectral matches and a fold change ≥ 2.5 compared with the controls among three independent experiments were analyzed further (Table 2). [Displaced Table 2 legend: Proteins were extracted from NAB2::mCherry schizont- and gametocyte-enriched cultures after incubation for 22 h. Proteins with at least three peptide spectral matches and a fold change ≥ 2.5 compared with the controls are listed. Control experiments comprised immunoprecipitation of wild-type P. berghei ANKA using anti-mCherry beads coupled to mass spectrometry and of NAB2::mCherry using anti-GFP beads coupled to mass spectrometry. Experiments were performed in triplicate. The results are the sums of three independent experiments.] [Displaced Table 1 legend: Proteins were extracted from GBP2::mCherry schizont- and gametocyte-enriched cultures after culturing for 22 h. Proteins with at least three peptide spectral matches and a fold change ≥ 2.5 compared with the controls are listed. Control experiments comprised immunoprecipitation in wild-type P. berghei ANKA using anti-mCherry beads coupled to mass spectrometry and GBP2::mCherry using anti-GFP beads coupled to mass spectrometry. Experiments were performed in triplicate. The displayed results are the sum of three independent experiments.] Similar to GBP2::mCherry, the nuclear protein FoP domain-containing protein (PBANKA_1234500) was detected following IP-MS of NAB2::mCherry (Table 2). In addition, nuclear proteins such as ATP-dependent RNA helicase UAP56 (PBANKA_0306800) (Serpeloni et al., 2016) and polyadenylate-binding protein 2 (PBANKA_0824800) interacted with NAB2 (Table 2). Notably, we found that transportin (PBANKA_1126400) interacted with NAB2 (Table 2), which implies that NAB2 shuttles between the nucleus and the cytoplasm (Supplementary Figure S3).
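As an illustration of the filtering step described above (proteins kept only with at least three peptide spectral matches and a fold change ≥ 2.5 over the controls across the independent runs), a small pandas sketch; the file and column names are hypothetical, and requiring the thresholds in every replicate is one possible reading of the criterion.
import pandas as pd

# Hypothetical IP-MS summary table: one row per protein per replicate.
ipms = pd.read_csv("gbp2_ipms.csv")  # columns: protein_id, replicate, psm, fold_change

candidates = (
    ipms.groupby("protein_id")
        .filter(lambda g: (g["psm"] >= 3).all() and (g["fold_change"] >= 2.5).all())
        ["protein_id"]
        .unique()
)
print(len(candidates), "candidate GBP2 interactors")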
No nuclear pore complex proteins or export receptor-like proteins were detected by IP-MS of NAB2::mCherry (Table 2). Identification of RNAs Directly Bound by GBP2 and NAB2 To identify RNAs bound by GBP2 and/or NAB2, we performed RIP-seq on mature schizont and gametocyte lysates using anti-mCherry beads and sequenced the RNAs bound to GBP2 and NAB2 (Figure 4; Supplementary Tables S2 and S3). As a control, RIP-seq using anti-GFP beads was also performed. In two independent RIP-seq assays of GBP2, 4,753 and 4,906 RNAs were detected. Among them, 58 mRNAs with < 500 transcripts per million (TPM) in both experimental subjects and > 500 TPM in controls (RNA immunoprecipitation in GBP2::mCherry parasites using anti-GFP beads coupled to RNA sequencing) were analyzed further (Supplementary Table S2). By contrast, 5,044 and 5,045 RNAs were detected in the two independent RIP-seq assays of NAB2, respectively. Among them, 143 mRNAs with < 500 TPM in both experimental subjects and > 500 TPM in controls (RNA immunoprecipitation in NAB2::mCherry parasites using anti-GFP beads coupled to RNA sequencing) were analyzed further (Supplementary Table S3). Several mRNAs bound to GBP2 were expressed during the asexual development stage, such as copper-transporting ATPase (PBANKA_0416500) and a conserved Plasmodium protein of unknown function (PBANKA_0404000), or during the sexual stage, such as subtilisin-like protease 2 (PBANKA_0911700) and a conserved Plasmodium protein of unknown function (PBANKA_1029400) (Supplementary Table S2). Asexual stage mRNAs bound to NAB2 include translation initiation factor eIF-1A (PBANKA_0905600) and RNA lariat debranching enzyme (PBANKA_1354000); NAB2-bound sexual stage mRNAs include actin-related protein (PBANKA_0209300) and plasmepsin VII (PBANKA_0517600) (Supplementary Table S3). Moreover, we found that NAB2, but not GBP2, interacted with 40S and 60S ribosomal protein mRNAs, and that GBP2 and NAB2 typically bound to different mRNAs, with the exception of RNA-binding protein NOB1 (PBANKA_0720800) (Figure 4A; Supplementary Figure S3, Tables S2 and S3). The RRM2 domain of yeast GBP2 binds to RNAs containing the core motif GGUG and an intron. Among the 58 mRNAs bound to GBP2, 54 (93.10%) contained the GGUG motif (Figure 4B). Among the 143 mRNAs bound to NAB2, 80 (55.94%) contained the GGUG motif (Figure 4B). In contrast, the proportion of mRNAs containing an intron among the mRNAs that bound to NAB2 was 88.81%, higher than among mRNAs that bound to GBP2 (46.55%) (Figure 4C). The mRNAs bound to GBP2 were longer than those bound to NAB2 (Figure 4D). These results support our findings that GBP2 and NAB2 bound to different mRNAs in malaria parasites. DISCUSSION In this study, we investigated the roles of NAB2, THO4, NPL3, GBP2 and SR1 during the asexual and sexual developmental stages of P. berghei ANKA using reverse genetics. In yeast, deletion of NPL3, GBP2 or HRB1 (SR1 in Plasmodium) does not affect growth (Zander and Krebber, 2017). In malaria parasites, NPL3 and GBP2 were not essential for growth during the asexual stage. However, we found that SR1 plays an essential role in asexual development. Our results are consistent with those of a previous study, which revealed that SR1 plays an essential role in the asexual development of malaria parasites. Furthermore, we found that GBP2 plays a more important role in male and female gametocyte development compared with THO4 and NPL3. These results confirm that GBP2 is involved in sexual development in malaria parasites.
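A small, purely illustrative sketch of the motif tally reported above, i.e. the fraction of bound transcripts whose sequence contains the GGUG core motif (GGTG on the coding DNA strand); the FASTA file names are hypothetical.
from Bio import SeqIO  # Biopython


def fraction_with_motif(fasta_path, motif="GGTG"):
    """Return the fraction of sequences in a FASTA file containing the motif."""
    records = list(SeqIO.parse(fasta_path, "fasta"))
    hits = sum(motif in str(rec.seq).upper() for rec in records)
    return hits / len(records)


print(fraction_with_motif("gbp2_bound_mrnas.fasta"))  # ~0.93 reported for GBP2
print(fraction_with_motif("nab2_bound_mrnas.fasta"))  # ~0.56 reported for NAB2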
We found that gametocyte development was less affected by tho4 deletion compared with gbp2 deletion. In Drosophila, always early (aly), a homolog of THO4, encodes a key protein in males both for the onset of spermatid differentiation and for the G2-meiosis I transition, but not for mRNA export. Moreover, NPL3, but not GBP2 or HRB1, plays an essential role in meiotic gene expression in S. cerevisiae. Therefore, THO4 and NPL3 might be involved in mRNA export during meiosis, such as in the mosquito stage of the malaria parasite. In this study, we found that GBP2 was distributed throughout P. berghei ANKA cells. In yeast, GBP2 localizes to the nucleus, but not to the cytoplasm. Cytoplasmic mislocalization of yeast GBP2 was observed previously in gbp2 mutants, in which the binding sites of SR-specific protein kinases were exchanged (Windgassen and Krebber, 2003). Here, IP-MS of GBP2::mCherry revealed that GBP2 interacted with cytoplasmic proteins such as ALBA4 and DOZI in addition to nuclear proteins. Moreover, the NLS prediction server (http://www.moseslab.csb.utoronto.ca/NLStradamus/) indicated several nuclear localization signals for GBP2 (PRRRR; RR; KKDFRRDNRK), implying that GBP2 localizes not only to the nucleus but also to the cytoplasm. [Displaced Figure 3 legend: FIGURE 3 | Cellular localization of NAB2 in P. berghei ANKA. Female B6 mice were infected with 5 × 10⁶ to 5 × 10⁷ schizonts of transgenic P. berghei ANKA expressing the NAB2-mCherry fusion protein (NAB2::mCherry) or NAB2::mCherry parasites expressing NUP205 fused to GFP. Erythrocytes infected with malaria parasites at 18 h (late trophozoite) after inoculation were analyzed. At least 50 infected erythrocytes were analyzed, and the same fluorescence pattern was observed in all infected erythrocytes. (A) NAB2::mCherry parasites expressing NUP205 fused to GFP. P. berghei ANKA during the late trophozoite stage are shown. (B) MitoBright LT Green-stained NAB2::mCherry parasites. P. berghei ANKA during the late trophozoite stage are shown. Scale bar = 5 µm. Dotted lines indicate malaria parasites. Experiments were performed in triplicate. Representative data are shown.] These results imply that the localization of GBP2 differs from that of yeast GBP2. The results of IP-MS of GBP2::mCherry imply that GBP2 interacts with ALBA4, DOZI and CITH. DOZI and CITH are required for zygote development but not for gametocytogenesis. Furthermore, DOZI and CITH may be involved in translational repression during gametocytogenesis. ALBA4 is involved in sporozoite development and interacts with DOZI and CITH. The reduced male and female gametocyte production by gbp2 deletion mutants in this study implies that during the life cycle of malaria parasites, GBP2 functions during an earlier developmental stage than ALBA4, DOZI and CITH. Our RIP-seq results revealed that GBP2 bound to mRNAs encoding proteins that are essential for asexual development; however, the parasitemia course was comparable in mice infected with Δgbp2 versus control parasites. These findings imply that a factor other than GBP2 is involved in the export of these mRNAs. Here, we found that a phosphorylated adapter RNA export protein (PHAX) domain-containing protein associated with mRNA export via the CRM1/exportin pathway interacted with GBP2, implying that the GBP2-binding mRNAs that are essential for asexual development may be exported via the CRM1/exportin pathway in Δgbp2. In yeast, GBP2 is an adaptor protein for the Mex67/Mtr2 mRNA export receptor complex (Zander and Krebber, 2017).
However, orthologues of Mex67 and Mtr2 are absent from malaria parasite genomes. In this study, no export receptor-like proteins were detected by IP-MS of GBP2::mCherry. Exportin-1, an export receptor for PHAX domain-containing proteins, was also not detected by IP-MS of GBP2::mCherry. This indicates that detecting export receptors by IP-MS may be difficult due to weak binding between export receptors and adaptor proteins in malaria parasites. NAB2 and SR1 were found to be essential for the survival of malaria parasites. Similar to SR1, NAB2 was located mainly at the periphery of the nucleus. However, no SR1-binding mRNAs were detected by RIP-seq of NAB2. Moreover, no SR1 homologues were detected by IP-MS of NAB2. Four homologues of proteins encoded by SR1-binding mRNAs (PBANKA_0401700, PBANKA_0404000, PBANKA_1212700 and PBANKA_1365300) were detected by RIP-seq of GBP2, while no SR1 homologues were detected by IP-MS of GBP2. These findings imply that NAB2, SR1 and GBP2 are associated with the export and quality control of different mRNAs. In yeast, NAB2 is an adaptor protein for the Mex67/Mtr2 complex, similar to yeast SR proteins such as NPL3, GBP2 and HRB1. The N-terminal domain of NAB2 also interacts with Mlp1, which associates with the nuclear pore complex and is involved in mRNA quality control and export (Xie and Ren, 2019). However, no nuclear pore complex proteins or export receptor-like proteins were detected by IP-MS of NAB2::mCherry. Fluorescence live cell imaging revealed that NAB2 localized not only to the nucleus but also to the nuclear membrane. These findings imply that NAB2 interacts with nuclear pore complex proteins in malaria parasites. Here, we found that the protein transportin interacts with NAB2. Transportin mediates import into the nucleus in eukaryotic cells and recognizes the NLS of NAB2 (Chook and Süel, 2011). The NLS of yeast NAB2 comprises an N-terminal hydrophobic motif and a C-terminal PY-NLS. NAB2 also contains a PY-NLS, implying that transportin is involved in the nuclear import of NAB2. These findings imply that NAB2 is involved in nuclear mRNA export in malaria parasites. Our findings imply that NAB2 and GBP2 are involved in nuclear mRNA export in malaria parasites. Moreover, NAB2 and GBP2 function at different life-cycle stages. However, the terminal step of nuclear mRNA export in malaria parasites remains unclear. Future investigations should aim to identify export receptor and nuclear pore complex proteins that interact with NAB2 and SR proteins such as NPL3, GBP2 and SR1. ETHICS STATEMENT The animal study was reviewed and approved by the Experimental Animal Ethics Committee of Kyorin University School of Medicine. AUTHOR CONTRIBUTIONS MN designed the research. MN and TF performed the research. MN, TF, JM, and FK analyzed the data. MN and FK wrote the paper. All authors contributed to the article and approved the submitted version. ACKNOWLEDGMENTS MS/MS data (Project Name: Identification of GBP2- and NAB2-interacting proteins in malaria parasites, Project accession: PXD027302) and sequencing data (Project Name: RNA immunoprecipitation coupled to RNA-sequencing of GBP2::mCherry parasites, Accession: E-MTAB-10775; Project Name: RNA immunoprecipitation coupled to RNA-sequencing of NAB2::mCherry parasites, Accession: E-MTAB-10773) have been deposited in public repositories. The English in this document has been checked by at least two professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/2xLlQ1.
SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcimb.2021.737457/full#supplementary-material Supplementary Table S1 | Sequences of primers used in this study. Supplementary Table S2 | RNA immunoprecipitation coupled to RNA-sequencing of GBP2::mCherry parasites. Protein-mRNA complexes were extracted from GBP2::mCherry schizont- and gametocyte-enriched cultures after incubation for 22 h. mRNAs with < 500 transcripts per million (TPM) in experimental subjects and > 500 TPM in controls (RNA immunoprecipitation in GBP2::mCherry parasites using anti-GFP beads coupled to RNA sequencing) were excluded. Experiments were performed in duplicate; similar results were obtained from each experiment. Representative data are shown. PlasmoGEM indicates the phenotype of parasite multiplication during the blood stage of P. berghei ANKA (http://plasmogem.sanger.ac.uk/phenotypes). Expression profile indicates the developmental stage at which expression of the mRNA was mainly observed during the blood stage of P. berghei ANKA (https://plasmodb.org/plasmo/app). Supplementary Table S3 | RNA immunoprecipitation coupled to RNA-sequencing of NAB2::mCherry parasites. Protein-mRNA complexes were extracted from NAB2::mCherry schizont- and gametocyte-enriched cultures after incubation for 22 h. mRNAs with < 500 transcripts per million (TPM) in experimental subjects and > 500 TPM in controls (RNA immunoprecipitation in NAB2::mCherry parasites using anti-GFP beads coupled to RNA sequencing) were excluded. Experiments were performed in duplicate; similar results were obtained from each experiment. Representative data are shown. PlasmoGEM indicates the phenotype of parasite multiplication during the blood stage of P. berghei ANKA (http://plasmogem.sanger.ac.uk/phenotypes). Expression profile indicates the developmental stage at which expression of the mRNA was mainly observed during the blood stage of P. berghei ANKA (https://plasmodb.org/plasmo/app). Supplementary Figure S1 | Schematic representation of the gene-targeting vectors used to disrupt NPL3, THO4, NAB2 and SR1. Gene disruption was performed by double-crossover homologous recombination. The gene disruption vectors contained human dihydrofolate reductase-thymidylate synthase (hdhfr) and the 5′ and 3′ flanking regions of the target genes. hDHFR expression was controlled by the elongation factor-1α promoter (PBANKA_113340). Arrows denote primers specific for the 5′ and 3′ regions of the target genes. (A) Introduction of the hDHFR-expressing cassette into the nab2 locus of wild-type (WT) P. berghei ANKA. (B) Introduction of the hDHFR-expressing cassette into the tho4 locus of WT P. berghei ANKA. tho4-specific primer sets were used. PCR products were digested with BamHI (red arrowhead) to distinguish between wild-type P. berghei ANKA (1720 and 1090 bp fragments) and tho4 deletion mutants (Δtho4) (2202 and 508 bp fragments). (C) Introduction of the hDHFR-expressing cassette into the npl3 locus of WT P. berghei ANKA. npl3-specific primer sets were used. PCR products were digested with BamHI (red arrowhead) to distinguish between WT (3088 bp) and npl3 deletion mutants (Δnpl3) (2337 and 826 bp fragments). (D) Introduction of the hDHFR-expressing cassette into the sr1 locus of WT P. berghei ANKA. To generate deletion mutants, two independent transfections were performed for each line. Note: nab2 and sr1 deletion mutants could not be generated. |
New chemotherapeutic agents in acute myeloid leukemia. Only two classes of chemotherapeutic agents have shown activity in acute myeloid leukemia (AML): ara-C and topoisomerase II-reactive agents. Frontline combinations of these agents produce complete response (CR) rates of 70% and long-term event-free survival rates of 25%. New agents with different mechanisms of action are being explored. Nucleoside analogs such as chlorodeoxyadenosine (2-CdA) or fludarabine have shown single-agent efficacy and may be synergistic with ara-C. Combination therapy with ara-C and nucleoside analogs has shown promising results both as salvage therapy and in newly diagnosed patients. Combinations of topotecan with ara-C, VP16, and anthracyclines are being pursued, as is testing of other topoisomerase I (Topo I) inhibitors. Hypomethylating agents (5-azacytidine, decitabine) are showing activity in AML, producing CR rates of 5% to 30% as single-agent AML salvage therapy, and 40% to 60% in combinations. Decitabine may be synergistic with Topo I inhibitors, biologic agents, and differentiating agents. Homoharringtonine has modest anti-AML activity, with CR rates of 10% to 30% as salvage therapy. Other classes of agents worthy of continuing investigation are platinum analogs and agents with novel mechanisms of action such as tallimustine. |
def build_draco(train_cmd, job_name, resources, logdir):
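"""Build the cluster submission command string for a training job.
Assumes a module-level config object ``cfg`` (with ``SUBMIT_CMD`` containing
the 'submit_job' executable) and a helper ``expand_resources``, both defined
elsewhere in this repository.
"""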
assert 'submit_job' in cfg.SUBMIT_CMD, \
'Expected \'submit_job\' as SUBMIT_CMD. Exiting ...'
submit_cmd = cfg.SUBMIT_CMD + ' '
submit_cmd += expand_resources(resources)
submit_cmd += f' --name {job_name}'
submit_cmd += f' --command \' {train_cmd} \''
submit_cmd += f' --logdir {logdir}/gcf_log'
return submit_cmd |
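A hypothetical invocation of build_draco shown below; the argument values and the exact expansion of the resources dict are illustrative, since expand_resources is assumed to turn it into scheduler flags.
cmd = build_draco(
    train_cmd="python train.py --config configs/base.yaml",
    job_name="exp42",
    resources={"gpu": 8, "cpu": 64},
    logdir="/results/exp42",
)
print(cmd)
# e.g. "submit_job <expanded resources> --name exp42 --command ' python train.py ... ' --logdir /results/exp42/gcf_log"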
# coding: utf-8
# In[2]:
import cv2
import numpy as np
# In[3]:
#using a_b_c_d_*
# w=2
# x=2
# y=12
# z=2
iterations=3
# In[4]:
# X=[]
# Y=[]
Z=[]
for i in range(1,1001):
cnt=0
# print(i)
for a in range(2):
for b in range(2):
for c in range(12):
for d in range(2):
# img =cv2.imread('Images/'+str(a)+'_'+str(b)+'_'+str(c)+'_'+str(d)+'_'+str(i)+'.jpg')
# X.append(img)
# Y.append(cnt)
Z.append('Images/'+str(a)+'_'+str(b)+'_'+str(c)+'_'+str(d)+'_'+str(i)+'.jpg')
cnt+=1
# In[5]:
# X=np.asarray(X)
# Y=np.asarray(Y)
# In[6]:
# print(X.shape,Y.shape)
# In[7]:
#Images have been preloaded and saved into .npy files for faster loading
X=np.load('data/images.npy')
Y=np.load('data/labels.npy')
# In[8]:
import keras
from keras import layers
# In[9]:
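# 2 * 2 * 12 * 2 = 96 distinct (a, b, c, d) label combinations, one-hot encoded below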
Y = keras.utils.to_categorical(Y, 96)
# In[10]:
print(X.shape,Y.shape)
# In[11]:
x_train= X[:72000]
x_test=X[72000:]
y_train=Y[:72000]
y_test=Y[72000:]
# In[12]:
print(x_train.shape,x_test.shape)
print(y_train.shape,y_test.shape)
# In[13]:
import matplotlib.pyplot as plt
# In[14]:
plt.imshow(x_train[58354])
# In[15]:
print(Y[58354])
print(Z[58354])
# In[16]:
#Creating Model
inp = layers.Input(shape=(28,28,3),name='input_image')
#[TO:DO] check why (28,28,1) instead of (28,28)
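# Answer to the TO-DO above: Conv2D expects an explicit channel axis, so a grayscale image is (28, 28, 1) and an RGB image is (28, 28, 3); a bare (28, 28) lacks the channel dimension.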
hl1 = layers.Conv2D(filters=50,kernel_size=(7,7),activation='relu',name='conv7x7_hl1')(inp)
hl2 = layers.BatchNormalization(name='BatchNormalization_hl2') (hl1)
hl3 = layers.MaxPooling2D(pool_size=(2, 2), strides=2,name = 'MaxPooling2D_hl3') (hl2)
flat = layers.Flatten() (hl3)
hl4 = layers.Dense(1024,activation='relu', name='Dense_hl4') (flat)
out = layers.Dense(96,activation='softmax',name='output_layer') (hl4)
model = keras.models.Model(inp,out)
model.summary()
# In[17]:
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
# In[18]:
# checkpointer = keras.callbacks.ModelCheckpoint(filepath='task1b_hrishi_model.weights.best.hdf5', verbose = 1, save_best_only=True)
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph1b', histogram_freq=0, write_graph=True, write_images=True)
# In[ ]:
model.fit(x=x_train,
y=y_train,
batch_size=300,
epochs=iterations,
validation_split=0.1,
verbose =1,
callbacks=[tbCallBack] )
# In[ ]:
score = model.evaluate(x_test, y_test)
# In[ ]:
print(score)
model.save('task1b_hrishi_trained_model'+str(iterations)+'.h5') |
#include "Bible/BibleBook.h"
namespace BIBLE
{
/// Gets a string for the name of the book.
/// @param[in] book - The book of the Bible to get the string name of.
/// @return The string name for the book.
std::string ToString(const BibleBook book)
{
switch (book)
{
case BibleBook::INVALID:
return "Invalid";
case BibleBook::GENESIS:
return "Genesis";
case BibleBook::ISAIAH:
return "Isaiah";
case BibleBook::EZEKIEL:
return "Ezekiel";
case BibleBook::MATTHEW:
return "Matthew";
case BibleBook::LUKE:
return "Luke";
case BibleBook::HEBREWS:
return "Hebrews";
case BibleBook::FIRST_PETER:
return "1 Peter";
case BibleBook::SECOND_PETER:
return "2 Peter";
default:
return "Unknown";
}
}
}
|
Statistical methods for a three-period crossover design in which high dose cannot be used first. Design and analysis methods for the three-period crossover trial defined by the sequences: (D0, D1, D2), (D1, D0, D2), and (D1, D2, D0), where D0 is a placebo, and D1 and D2 are a low dose and a high dose of a drug, respectively, are developed. This design may be used when investigators are unwilling to administer a higher dose of a new drug to a patient before administering a lower dose. In using this design, patients should be randomized to sequences in blocks that are integer multiples of 3. Both parametric and non-parametric analysis methods are based on contrasts that capture intrapatient variability only and provide unbiased estimates and hypothesis tests of pairwise differences between carryover, direct dose, and period effects. The design and methods are illustrated with data reflecting the cognitive component of the Alzheimer's disease assessment scale collected in a large clinical trial of Tacrine at doses of 0, 40, and 80 mg/day. |
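A minimal sketch, not the authors' method, of one way such three-period crossover data could be analyzed in Python: a mixed model with fixed effects for direct dose, period, and first-order carryover and a random intercept per patient (the paper itself uses intra-patient contrasts; the column and file names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per patient-period, dose in {D0, D1, D2},
# carryover = dose administered in the preceding period ("none" in period 1).
df = pd.read_csv("adas_cog_crossover.csv")

model = smf.mixedlm("adas_cog ~ C(dose) + C(period) + C(carryover)",
                    data=df, groups=df["patient"])
print(model.fit().summary())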
package ceui.lisa.interfaces;
import android.view.View;
public interface OnItemClickListener {
void onItemClick(View v, int position, int viewType);
} |
Automatic Virtual Metrology for Carbon Fiber Manufacturing Carbon fiber is currently one of the most popular composite materials in the world, with applications ranging from bikes to space shuttles. However, there is so far no comprehensive method for total quality inspection of carbon fiber products, because production is continuous and quality inspection is destructive. Through years of development, the Automatic Virtual Metrology (AVM) technology has been applied to various industries to turn offline sampling inspection with metrology delay into online and real-time total inspection. With AVM, every workpiece of the carbon fiber products can be examined to ensure total quality inspection. The biggest issue of continuous production is that it is not possible to accurately define each unit, nor to attach a physical label to the product. To solve this problem, a production data traceback (PDT) mechanism is proposed in this letter. The PDT mechanism provides a virtual label for each unit that refers back to the process parameters of its production. With PDT, the process data of each workpiece of a complete spin can be acquired so as to accomplish the work-in-process (WIP) tracking requirement of applying AVM. Also, the Advanced Manufacturing Cloud of Things (AMCoT) platform is adopted in this letter to fulfill all the smart manufacturing requirements for achieving the goal of zero defects in carbon fiber manufacturing. |
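An illustrative sketch, not the AVM/PDT implementation, of the virtual-label idea: each fixed-length unit of the continuously produced fiber gets a virtual ID computed from the line speed and the spin start time, which can then be used to slice the time-stamped process-parameter logs. The constants and names are assumptions.
from datetime import datetime, timedelta

LINE_SPEED_M_PER_MIN = 10.0   # assumed constant line speed
UNIT_LENGTH_M = 5.0           # assumed length of one virtual workpiece


def virtual_label(spin_start: datetime, position_m: float) -> dict:
    """Map a position along the spin to a virtual unit ID and its time window."""
    unit_id = int(position_m // UNIT_LENGTH_M)
    t_per_unit = timedelta(minutes=UNIT_LENGTH_M / LINE_SPEED_M_PER_MIN)
    start = spin_start + unit_id * t_per_unit
    return {"unit_id": unit_id, "t_start": start, "t_end": start + t_per_unit}


# Process parameters logged with timestamps can then be looked up per unit:
label = virtual_label(datetime(2020, 1, 1, 8, 0), position_m=123.0)
print(label["unit_id"], label["t_start"], label["t_end"])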
Development of ITER Divertor Vertical Target with Annular Flow Concept II: Development of Brazing Technique for CFC/CuCrZr Joint and Heating Test of Large-Scale Mock-Up Abstract The first fabrication and heating test of a large-scale carbon-fiber-composite (CFC) monoblock divertor mock-up using an annular flow concept has been performed to demonstrate its manufacturability and thermomechanical performance. This mock-up is based on the design of the lower part of the vertical target of the International Thermonuclear Experimental Reactor (ITER) divertor adapted for the annular flow concept. The annular cooling tube consists of two concentric tubes: an outer tube made of CuCrZr and an inner stainless steel tube with a twisted external fin. Prior to the fabrication of the mock-up, brazed joint tests between the CFC monoblock and the CuCrZr tube were carried out to find a suitable heat treatment mitigating the loss of the high mechanical strength of the CuCrZr material. A basic mechanical examination of CuCrZr undergoing the brazing heat treatment and finite element method analyses were also performed to support the design of the mock-up. High heat flux tests on the large-scale divertor mock-up have been performed in an ion beam facility. The mock-up successfully withstood more than 1000 thermal cycles of 20 MW/m² for 15 s and 3000 cycles of >10 MW/m² for 15 s, which simulates the heat load condition of the ITER divertor. No degradation of the thermal performance of the mock-up was observed throughout the thermal cycle test, although in the tile exposed to the heat flux of 20 MW/m² the erosion depth was measured as 5.8 and 8.8 mm at the 300th and 500th cycles. |
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.google.common.base.Preconditions;

/**
* Value object for HTTP status code and invariants associated with it.
*/
public class HttpStatus
{
private static final Map<Integer, HttpStatus> allStatusCodes = new ConcurrentHashMap<>();
public static final HttpStatus CONTINUE = valueOf(100, "Continue");
public static final HttpStatus OK = valueOf(200, "OK");
public static final HttpStatus CREATED = valueOf(201, "Created");
public static final HttpStatus MULTIPLE_CHOICES = valueOf(300, "Multiple Choices");
public static final HttpStatus BAD_REQUEST = valueOf(400, "Bad Request");
public static final HttpStatus INTERNAL_SERVER_ERROR = valueOf(500, "Internal Server Error");
private final int statusCode;
private final String name;
private HttpStatus(final int code, final String desc)
{
statusCode = code;
name = desc;
}
/**
* Creates new status code from the status code value.
* @param code a status code value, e.g. 200 for OK, 404 for not found, etc.
* @return an {@code HttpStatus} initialized to the status code value.
*/
public static HttpStatus valueOf(final int code)
{
return valueOf(code, "");
}
private static HttpStatus valueOf(final int code, final String name)
{
Preconditions.checkArgument(allStatusCodes != null);
return allStatusCodes.computeIfAbsent(code, c -> new HttpStatus(c, name));
}
/**
* Determines whether the status code indicates a successful processing
* @return {@code true} only, if the status code belongs to the SUCCESS group of 2xx values; {@code false}, otherwise.
*/
public boolean isSuccessful()
{
return statusCode >= OK.statusCode && statusCode < MULTIPLE_CHOICES.statusCode;
}
/**
* Determines whether the status code indicates an error processing
* @return {@code true} only, if the status code belongs to either USER ERROR group with 4xx values or to SERVER ERROR group
* with 5xx values; {@code false}, otherwise.
*/
public boolean isError()
{
return statusCode >= BAD_REQUEST.statusCode;
}
@Override
public boolean equals(final Object o)
{
if (this == o)
{
return true;
}
if (o == null || getClass() != o.getClass())
{
return false;
}
final HttpStatus that = (HttpStatus) o;
return statusCode == that.statusCode;
}
@Override
public int hashCode()
{
return statusCode;
}
@Override
public String toString()
{
return "HttpStatus{" + statusCode + " - " + name + '}';
}
} |
import pandas as pd

# Create the CountVectorizer DataFrame: count_df
count_df = pd.DataFrame(count_train.A, columns=count_vectorizer.get_feature_names())
# Create the TfidfVectorizer DataFrame: tfidf_df
tfidf_df = pd.DataFrame(tfidf_train.A, columns=tfidf_vectorizer.get_feature_names())
# Print the head of count_df
print(count_df.head())
# Print the head of tfidf_df
print(tfidf_df.head())
# Calculate the difference in columns: difference
difference = set(count_df.columns) - set(tfidf_df.columns)
print(difference)
# Check whether the DataFrames are equal
print(count_df.equals(tfidf_df)) |
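The snippet above assumes the vectorizers and training matrices already exist; a hedged sketch of how they would typically be produced with scikit-learn is shown here (X_train, a list of document strings, is hypothetical; note that scikit-learn >= 1.2 renames get_feature_names() to get_feature_names_out()).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# X_train: iterable of raw text documents (assumed to exist in the exercise)
count_vectorizer = CountVectorizer(stop_words="english")
count_train = count_vectorizer.fit_transform(X_train)

tfidf_vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
tfidf_train = tfidf_vectorizer.fit_transform(X_train)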
/*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License (the "License").
* You may not use this file except in compliance with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*/
/*
* Copyright 2008 Sun Microsystems, Inc. All rights reserved.
* Use is subject to license terms.
*/
#ifndef _HB_MDESC_H
#define _HB_MDESC_H
#pragma ident "%Z%%M% %I% %E% SMI"
#include <fm/topo_mod.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Node/Field names in the PRI/MD
*/
#define MD_STR_ID "id"
#define MD_STR_IODEVICE "iodevice"
#define MD_STR_DEVICE_TYPE "device-type"
#define MD_STR_PCIEX "pciex"
#define MD_STR_CFGHDL "cfg-handle"
/* A root complex */
typedef struct md_rc {
int16_t id; /* physical id of the rc */
uint64_t cfg_handle; /* bus address */
} md_rc_t;
/* A hostbridge */
typedef struct md_hb {
int16_t id; /* physical id of the hostbridge */
md_rc_t *rcs; /* a list of pciex root complexes */
int16_t srcs; /* size of the rcs */
int16_t nrcs; /* count of rc entries in rcs */
} md_hb_t;
typedef struct md_info {
md_hb_t *hbs; /* a list of hostbridges */
int16_t shbs; /* size of the hbs */
int16_t nhbs; /* count of hb entries in hbs */
} md_info_t;
extern int hb_mdesc_init(topo_mod_t *mod, md_info_t *hbmdp);
extern void hb_mdesc_fini(topo_mod_t *mod, md_info_t *hbmdp);
extern md_hb_t *hb_find_hb(md_info_t *hbmd, int hbid);
#ifdef __cplusplus
}
#endif
#endif /* _HB_MDESC_H */
|
package com.microsoft.recognizers.text.numberwithunit.german.extractors;
import com.microsoft.recognizers.text.Culture;
import com.microsoft.recognizers.text.CultureInfo;
import com.microsoft.recognizers.text.numberwithunit.Constants;
import com.microsoft.recognizers.text.numberwithunit.resources.GermanNumericWithUnit;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class CurrencyExtractorConfiguration extends GermanNumberWithUnitExtractorConfiguration {
public CurrencyExtractorConfiguration() {
this(new CultureInfo(Culture.German));
}
public CurrencyExtractorConfiguration(CultureInfo ci) {
super(ci);
}
@Override
public String getExtractType() {
return Constants.SYS_UNIT_CURRENCY;
}
@Override
public List<String> getAmbiguousUnitList() {
return GermanNumericWithUnit.AmbiguousCurrencyUnitList;
}
@Override
public Map<String, String> getSuffixList() {
return CurrencySuffixList;
}
@Override
public Map<String, String> getPrefixList() {
return CurrencyPrefixList;
}
public static Map<String, String> CurrencySuffixList = GermanNumericWithUnit.CurrencySuffixList;
public static Map<String, String> CurrencyPrefixList = GermanNumericWithUnit.CurrencyPrefixList;
}
|
// controllers/Recommended.go
package controllers
import (
"lottery/common/helpers"
"sort"
"strconv"
"strings"
)
/**
* Double Color Ball (Shuangseqiu) lottery recommendation
*/
func DoubleColor() string {
count := 0
str := "#### 双色球推荐 \n\n"
for count < 3 {
rec := tui()
zhu := count + 1
str = str + "**第" + strconv.Itoa(zhu) + "注:**" + rec
count++
}
return str
}
func tui() string {
red_length := 6
red := []string{}
blue := helpers.Rand(1, 16)
for red_length > 0 {
rand := strconv.Itoa(helpers.Rand(1, 33))
_, found := helpers.Find(red, rand)
if found {
continue;
}
red = append(red, rand)
red_length--;
}
// Sort the red balls numerically; a plain string sort would order "10" before "2".
sort.Slice(red, func(i, j int) bool {
a, _ := strconv.Atoi(red[i])
b, _ := strconv.Atoi(red[j])
return a < b
})
red_str := strings.Join(red, " ")
str := "红球:<font color=#FF0000>" + red_str + "</font> 篮球:<font color=#0000FF>" + strconv.Itoa(blue) + "</font> \n\n"
return str
} |
#include "pch.h"
#include <tchar.h>
#include "vlr-util/util.IsNotBlank.h"
TEST( util_IsNotBlank, general )
{
EXPECT_EQ( vlr::util::IsNotBlank( std::string_view{ "" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::string_view{ "foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( std::wstring_view{ L"" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::wstring_view{ L"foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( "" ), false );
EXPECT_EQ( vlr::util::IsNotBlank( "foo" ), true );
EXPECT_EQ( vlr::util::IsNotBlank( L"" ), false );
EXPECT_EQ( vlr::util::IsNotBlank( L"foo" ), true );
EXPECT_EQ( vlr::util::IsNotBlank( vlr::zstring_view{ "" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( vlr::zstring_view{ "foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( vlr::wzstring_view{ L"" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( vlr::wzstring_view{ L"foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( std::string{} ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::string{ "" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::string{ "foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( std::wstring{} ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::wstring{ L"" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( std::wstring{ L"foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( CString{} ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CString{ _T( "" ) } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CString{ _T( "foo" ) } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( CStringA{} ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CStringA{ "" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CStringA{ "foo" } ), true );
EXPECT_EQ( vlr::util::IsNotBlank( CStringW{} ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CStringW{ L"" } ), false );
EXPECT_EQ( vlr::util::IsNotBlank( CStringW{ L"foo" } ), true );
}
|
Today, the hottest game on earth launched on the most ubiquitous device around. While Fortnite on mobile is only rolling out to a select group on iOS for testing purposes, I got a chance to play around with it some. I’ve walked away surprised at how well Fortnite runs on my phone, though it did have some cumbersome quirks.
Right now, the servers seem slammed, and it took a while just to log in. Once I was in, though, the port seemed like what you’d expect from Fortnite. Everything looks exactly the same, only...smaller, and a tiny bit jaggier. Some text is a tinge hard to read, but it’s not a big deal if you’ve already played the game. Still, the packed real estate became a problem during actual matches.
Like other battle royale games, Fortnite is all about awareness. You need to be able to spot enemies from a distance, lest you get shot from who-knows-where. It’s you against 99 other people, after all. On mobile, this hyper-awareness is more difficult to pull off because everything is compressed to a much smaller screen. It was difficult to scan the landscape, and the more pronounced pop-in on mobile didn’t help that. I felt a little more helpless than usual playing Fortnite on my phone.
To mitigate this, Epic Games developed a new feature that tells you a bit more about your environment. When another player is nearby, a small half-circle will show up on your screen, relative to the direction the player is coming from. Depending on the color, it can tell you if players are hanging out or shooting in a particular direction. It’s a necessary addition given the problems with visibility on this platform, but we don’t yet know how this will affect the balance when crossplay is in effect. Will mobile users have an upper hand? Or do the other limitations of mobile Fortnite even things out?
Another annoyance of playing on mobile is that buttons exist on the screen. This means that to play, you’re actively obscuring your arena, which isn’t great. Mobile Fortnite has a “joystick” on the bottom left, and separate buttons for actions like crouching and jumping on the right side. Shooting can happen if you tap on the center of the screen, or if you press a dedicated button on the left—which makes running and gunning tricky. I found that combining different actions was awkward, though at least some of it may be the lack of muscle memory. At least once, I accidentally shot into the air because my thumb crossed into the “shoot” area in the middle of the screen, when in actuality I just wanted to move the camera. Mostly, though, when a fight broke out, it took a little too long just to move the reticule where I wanted it to go. On my first match, I still managed to kill two folks and place in top 30, but I likely would have done better on a different platform. Fortunately, you don’t have to settle for touch controls, as phones allow you to pair controllers with them. At that point, you might as well play on a console or PC if you can swing it! That said, it’s not all bad: I found swapping between items and weapons to be a breeze thanks to the tap controls, and building also seems like it fits in well.
I’ve spent most of this post critiquing mobile Fortnite, but for all of its faults, it’s a marvel that the game runs so well on my phone at all. Control and size issues aside, this is the Fortnite everyone knows and loves—no hitches, hiccups or lag. I felt terrified as I skulked around the map, looking for other survivors. I felt thrilled to shoot down my opponents, who were also doing their best to scrape by in this dog-eat-dog world. It’s classic Fortnite, but you can take it with you. For some, that will be enough.
You can sign up for the Fortnite mobile event here. |
#include <cmath>
#include <string>
using namespace std;
class Solution
{
public:
bool isRationalEqual(string S, string T)
{
pair<int, int> s = rationalTofraction(S);
pair<int, int> t = rationalTofraction(T);
return s == t;
}
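// Helper: parse a decimal string of the form "<int>[.<nonrepeat>[(<repeat>)]]"
// and return its value as a reduced fraction {numerator, denominator}, so that
// two spellings of the same rational number compare equal in isRationalEqual.
// With P = the integer formed by all digits before the repeating block (integer
// part followed by the non-repeating fractional digits) and base = number of
// non-repeating fractional digits, the value equals
// (P * 10^k + repeat - P) / (10^(base + k) - 10^base), where k = len(repeat);
// both parts are finally reduced by their gcd.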
pair<int, int> rationalTofraction(const string &s)
{
int integer, nonrepeat, repeat, base = 0, i = 0, j = 0, N = s.length();
j = s.find('.', j);
if (j == string::npos)
j = N;
integer = stoi(s.substr(i, j - i));
if (j >= N)
return {integer, 1};
i = ++j;
j = s.find('(', j);
if (j == string::npos)
j = N;
if (i == j)
nonrepeat = 0;
else
nonrepeat = stoi(s.substr(i, j - i));
base += j - i;
long dividend = nonrepeat + integer * pow(10, j - i);
int divider;
if (j + 1 >= N)
{
divider = pow(10, base);
int g = gcd(dividend, divider);
return {dividend / g, divider / g};
}
i = ++j;
j = s.find(')', j);
if (j == string::npos)
j = N;
repeat = stoi(s.substr(i, j - i));
dividend = dividend * pow(10, j - i) + repeat - dividend;
divider = pow(10, base + j - i) - pow(10, base);
int g = gcd(dividend, divider);
return {dividend / g, divider / g};
}
int gcd(long x, long y)
{
return x == 0 ? y : gcd(y % x, x);
}
}; |
At the Edge of the Internet: Teaching Coding and Sustainability to Himalayan Girls

Aux confins d'Internet: Enseigner le codage et la durabilité aux filles himalayennes

This report introduces a two-week workshop on web coding and environmental sustainability at a school for girls in Northeastern India. Our discussion of this teaching project reviews issues that shaped the project's development, outlines resources required for implementation, and summarizes the workshop's curriculum. High-speed Internet will soon arrive in the region of this recently recognized UNESCO World Heritage Site. We believe that the training of girls in particular could help redistribute power and resources in regions where women are often poorer, less educated, and excluded from decision-making in institutional and public contexts. Relatively few code-teaching projects have grappled with the difficulty of working in offline environments at the edge of the Internet, and yet moving skills and knowledge into these regions before the Internet becomes widely accessible might help mitigate some of the web's worst impacts on equity and justice.

Introduction

This report introduces a two-week workshop on coding and sustainability at a school for girls in Northeastern India. Our discussion of this teaching project will review issues that shaped the project's development, outline resources required for implementation, and summarize the workshop's curriculum. This project is part of a collaboration between a faculty and student team at the University of Toronto and two groups in India, the Khangchendzonga Conservation Committee (KCC) and the Kasturba Gandhi Balika Vidyalaya (KGBV) school for girls. The KCC is a small non-governmental organization based in Yuksom, a rural village that is the main access point to a UNESCO World Heritage Site in the Indian Himalaya, along the Nepal border. Located not far from Yuksom, about 30 minutes by car along winding mountain switchbacks or an hour's walk along a lushly forested footpath, the KGBV is a residential school for girls. Our Canadian-led workshop for KGBV girls in 2019 aimed to support the KCC's longstanding work to build capacity across the region in natural resource management and conservation, and their promotion of village-level economic growth through culturally and environmentally responsible modes of tourism. Our interest in supporting the KCC's work with girls through developing this workshop was stimulated in part during a trip to Sikkim in 2018, when we noticed bright blue conduit for fiber optic cables emerging from the dirt alongside mountain roads. High-speed Internet was soon to arrive in this UNESCO World Heritage Site. As the call for climate change adaptation becomes more urgent, women across the Himalaya are playing an increasingly important role in reconfiguring social development and resource management. How will young women living in this ecologically critical area become ready to participate in, and direct, the many changes that will soon flow through those fiberoptic cables?

Working for Sustainability in the Himalayan Foothills

Mount Khangchendzonga, a massif with five peaks and five massive glaciers, is the world's third highest mountain (8,586 metres; 28,169 feet), straddling Eastern Nepal and the Indian state of Sikkim, which is itself one of the world's 34 biodiversity hotspots (Tambe & Rawat, 2009).
The Nepali government designates the entire Khangchendzonga massif as a conservation area, and India's Ganges and Brahmaputra rivers are fed by the mountain's glaciers. Seventy percent of Sikkim's Khangchendzonga National Park (KNP) lies above 4,000 metres, 34% is covered with ice, and the park contains over 150 glaciers and 73 glacial lakes (Tambe & Rawat, 2009, p. 443). In July 2016 the KNP, which covers 25% of the state of Sikkim, was recognized by UNESCO as a natural and cultural World Heritage Site for its extraordinary cultural diversity and range of subtropical to alpine eco-systems. Drawing upon a mix of Buddhist, shamanic, and Hindu perspectives, the multi-ethnic communities living in the Khangchendzonga foothills understand the mountain variously as a barrier to travel, a sacred body, a socio-political space, or the home of deities, and ritual practices dedicated to Himalayan mountains remain essential to the well-being of these communities. Further afield, stories of Khangchendzonga have been told by travelers from the Tibetan plateau for hundreds of years, and in times of social, political, and environmental distress, groups of Tibetans fled south to the lush forests and valleys of Sikkim, which they named the Land of Rice. For more than a century, Europeans too have been drawn to the region's botanical wealth, and mountaineers have told tales of dramatic ascents, capturing the imagination of global audiences. The Sikkimese prohibited early mountaineers from standing on Khangchendzonga's sacred summit, and in 2000 the government again banned expeditions to the summit, inciting conflict between international and local communities (Wangchuk & Zulca, 2007). In recent decades, trekking and mountaineering have had dramatic effects on the environment, economy, settlement patterns, and cultural and religious practices of communities living near Himalayan mountains (Bahuguna, 1998;Dearden, 1989;Draper & Kariel, 1995;Dutta & Singh, 1998;Price, 1995). After hydro-electric power, tourism is now the second largest contributor to Sikkim's Gross Domestic Product (Mitra, Roy, & De). A dramatic increase in tourism to the state began in the early 1990s, and local residents and guides soon noticed a corresponding increase in waste around the trekking routes, as well as damage to sacred sites and unmanaged extraction of medicinal and other fragile plants. The Khangchendzonga Conservation Committee was formed in reaction to these concerns, with an initial focus on the increasingly popular West Sikkim trekking route, which climbs upwards departing from Yuksam, through Dzongri and to the Gochala pass at 4,940 metres, with magnificent views of the Khangchendzonga range when the weather is clear. In 2002, the KCC developed and began to promote the concept of homestay tourism in the region, where visitors stay in local homes rather than hotels. With this model, tourists experience local lifestyles, foods, and customs, and tourism revenue benefits village families. The KCC has worked on a wide variety of environmental management and capacitybuilding projects in Sikkim. It was founded by a small group of community members in 1996 with the aim of promoting socially and ecologically responsible tourism, redesigning practices of waste management, and building capacity among local guides, porters, and homestay providers in West Sikkim and the Khangchendzonga National Park. 
Illegal collection of medicinal plants and other non-timber forest products has long been recognized to be a significant problem across the Khangchendzonga region in India and Nepal, and sustainable management and conservation has been challenging (Wang, Wu, Kunze, Long, & Perlik, 2019, p. 24). The KCC has been successful in implementing bans on the use of forest wood for fuel, on hunting, and on the collection of medicinal plants in the KNP. The KCC has also worked for two decades on zero waste practices throughout West Sikkim, implementing waste collection and recycling policies in collaboration with Sikkim's Rural Management and Development Department and with several local Buddhist monasteries. A leader across the Himalaya, the KCC has worked with similar environmental protection and eco-tourism organizations in Ladakh, Jammu, Kashmir, and Nepal to provide training and education. The KCC's history opens a window to an important set of dynamics with global implications. Until its annexation into India in 1975, Sikkim was an independent kingdom, and so the KCC has negotiated its work within one of India's newest states, one that is moreover strategically positioned at the border of the global superpowers of India and China. The KCC has influenced tourism practices at a time when global tourism has risen to become a major economic driver in Asia and especially the Himalaya. The group's work has also coincided with the acceleration of climate change and environmental degradation in the region, and with the deployment of environmentalism by local groups as a lever for both regional autonomy and global integration. The migration of peoples has complicated ethnic tensions in many Himalayan regions over the last two decades, some of which have directly addressed conflicting orientations toward environmental protection and economic development. The work of the KCC has also directly addressed conflicted relationships between international mountaineering cultures and local conceptions of sacred space, which has seen increasing attention in news media. Recent research has shown how the close links between poverty and resilience threaten rural mountain communities most directly (Barua, Katyaini, Mili, & Gooch, 2014, p. 269). The KCC's interventions have strategically aimed to address poverty in the region with a multidimensional approach relevant to mountain communities. Sikkim's Kasturba Gandhi Balika Vidyalaya School for Girls Located not far from the KCC's headquarters in Yuksom, the Kasturba Gandhi Balika Vidyalaya (KGBV) school for girls in Labing, West Sikkim, has roughly 200 students between the ages of 12 and 18. Most girls come to this residential school from poor families, and many are orphans. In 2004, the Government of India established the KGBV residential school project with the aim of serving girls belonging to minority and disadvantaged communities. According to the guidelines for the project, schools were to be established in regions where "rural female literacy is below the national average (46.13%: Census 2001) and gender gap in literacy is more than the national average (21.59%: Census 2001)", in areas with a high tribal population, and in other areas with low female literacy rates (India Development Gateway). There are now hundreds of KGBV schools across India but only one in the state of Sikkim. The KGBV school in Labing admits mostly girls from West Sikkim, and only those with legal Sikkimese status (which is inherited from the father's side). 
The overall curriculum for the KGBV school is set by the Central Board of Secondary Education, and the school prides itself for rich programs in sports, music, and culture. Almost all students go on to further study after their years at KGBV. Those with the highest grades may do further training in science or math at the senior secondary level, those with mid-range grades go on to study humanities or the arts, and the lowest achieving students may obtain further training in household skills. Although most girls do continue to senior secondary education, some girls with heavy domestic responsibilities at home must return to their families without continuing their schooling. Beyond Grade 8, students must purchase their own books and paper; teachers at the KGBV school in Labing will often pay out of pocket for those girls who cannot afford supplies. The Khangchendzonga Conservation Committee has both institutional and personal connections to the KGBV school in Labing. A dynamic and highly engaged science teacher with a degree in sustainable development, Sikshita Bhutia is married to one of the KCC founders and is a key participant in much of the KCC's work in environmental conservation and awareness. Environmental science is not taught as a distinct subject at the KGBV but is woven into the curriculum as much as possible. Sikshita Bhutia runs a "green club," bringing the KCC's emphasis on zero waste to the school community, as well as a garden for the girls, and she appoints girls to serve as environmental stewards of the school grounds. Unlike other schools nearby, the KGBV has a dedicated computer lab with 10 computers and a part-time computer teacher. The lab was set up and networked by an IT service provider in Gangtok, Sikkim's largest city, which is roughly eight hours by car from Labing. Within the school, however, IT expertise is limited, meaning that if the network or its components lose functionality, they may simply be left out of use for a long time. Although the computers are networked to a modem, Internet access is unreliable. Computer classes for students therefore focus on "offline" computing skills, including touch typing and basic training in desktop applications. Computer programming skills are not taught. Although broadband Internet connectivity in West Sikkim was poor to nonexistent as of 2019, people access the web and social media platforms extensively with their cell phones. Girls at KGBV spoke about using YouTube, Facebook, Instagram, and WhatsApp in particular. The challenge to teach coding in this girls' school was therefore of particular interest because of its location at "the edge of the Internet." Today in West Sikkim, the Internet travels only over cellular signals, and the lines of sight in the mountains are short and uncertain. Fiberoptic cable is being laid in the district, however, so over the next several years Internet access will expand dramatically. The district is replete with small businesses, many in the tourism trade, and with the 2016 World Heritage Site designation tourism is expected to increase. A substantial number of these businesses have no web presence at all, and web development expertise in the region is limited. Our goal for this project was therefore to create and teach a web development curriculum appropriate to the region, helping to prepare a group of girls to be tomorrow's web experts in West Sikkim. 
What might it mean if, when fiberoptic connectivity arrives there, the best-trained and most highly skilled web developers were a group of 18-year-old girls from the most economically disadvantaged families in the region? Recent research has addressed how Himalayan women "perform multiple roles: in the household, agriculture, natural resource management, communities, and the markets and other public spaces at higher scales, and particularly with the trend of large-scale male outmigration" (, p. 494) and moreover how mountain women are "critical actors in mitigating and adapting to climate change" (, p. 497) (Joshi, 2014;Leduc, 2009;Nellemann, Verma, & Hislop, 2011). With its close connections to the KCC, one of West Sikkim's best-established and influential non-governmental organizations, its relatively strong technology infrastructure, and its highly engaged and progressive staff of teachers and administrators, the KGBV was eager to welcome the development of our workshop on coding and sustainability. Web Developers at the Edge of the Internet After meetings in 2017 between KCC leaders Kinzong Bhutia, Uden Bhutia, and Pema Bhutia, and University of Toronto professors Frances Garrett and Matt Price, a collaboration between the two groups took shape in the context of Price's fourth-year undergraduate research seminar, called Hacking History. Over 2017-18, students in this course worked with the KCC on the creation of an archival website. The KCC was in need of a website and a digital archive to preserve more than twenty years of documents accumulating on the mildewed shelves of their Yuksom headquarters. For the university students, collaborating with the KCC accentuated some of the most interesting dynamics of public history: the relationships between experts and lay knowledge; power dynamics between historians and oral history subjects; and the potential for conflicts between the duties of a historian and those of a "contractor" working on behalf of an organization. Each of these problem areas was presented as an opportunity for learning throughout this year-long course. Taught yearly, Hacking History is an innovatively designed course exploring new media as tools for the transmission of historical knowledge, culminating in an intensive group project in which students build a historical website in close collaboration with a community partner. The community partnership is a key element of the course each year and a source of many of its pleasures and challenges. Students learn about the history of digital media and their place in the development of the public sphere, and they study the history and politics of "engaged" and "public" scholarship. They spend a substantial amount of time acquiring the technical skills needed for their project, including the fundamentals of HTML and Javascript, as well as enough PHP to work with WordPress, a popular Content Management System (CMS). No prior technical knowledge is required for this course, but students must be willing to challenge themselves to learn the basics of web programming. 1 The payoff for this effort gives students a rare chance to contribute in a meaningful way to historical discourse beyond the walls of the university, and to explore the frontiers of historical communication. 
In May 2018, a month after Hacking History had concluded, six undergraduate students traveled to Sikkim with Matt Price and Frances Garrett to discuss their work in person with KCC staff and to offer a week-long web skills workshop at the KCC for staff and community members. The workshop had 12 participants, including four founding members of the KCC, a local entrepreneur interested in opening a homestay business, several local government officials, two members of the local forestry department, and a doctoral student from New Delhi doing research on pastoralism. Matt Price led the workshop, beginning with a presentation of basic information on the structure of the Internet, the global web, computer languages and how they communicate, and an overview of different kinds of digital communication and the differences between websites and social media platforms. Participants discussed how different kinds of digital communication might meet different needs, and they worked in pairs to articulate their own needs. Most participants were interested in new strategies for improving communication between federal and local governmental agencies, and between those agencies and local communities; some were also interested in advertising their businesses to tourists. As the workshop continued, Price taught participants how to create a basic website using a CMS with a locally networked Wordpress installation. Participants worked in small groups, assisted by the University of Toronto students, designing their site architecture and setting up sample pages. Participants in this Yuksom web skills workshop were highly enthusiastic and motivated to learn how to create websites to meet their varied needs. The Hindu Kush Himalaya Assessment: Mountains, Climate Change, Sustainability and People, a comprehensive 2019 report by International Centre for Integrated Mountain Development in Nepal, demonstrates that these interests are shared across mountain communities and felt with a sense of urgency by mountain peoples who witness the impacts of climate change directly. Increased access to "information and knowledge-sharing platforms" the report details, are "key to fostering socioeconomic development as well as enhancing environmental governance" (Wester, Mishra, Mukherji, & Shrestha, 2019, p. 36). Information technologies are also critical tools in spreading natural disaster risk warnings and increasing access to healthcare and education. On the final day of our 2018 visit to Sikkim, the KCC invited the Canadian group to present a brief web skills workshop at the Kasturba Gandhi Balika Vidyalaya girls' school near Yuksom. Matt Price set up an afternoon HTML and CSS workshop for two classes of girls, ages 12-18, and with a locally-networked intranet linking computers in the school lab, the girls quickly learned how to manipulate features of their webpages, happily changing text sizes and colours, resizing borders, and changing background colours. The reception was so enthusiastic that a seed was planted for returning to provide more in-depth training. "Interconnection:" Designing Curriculum for a Himalayan Context In consultation with the KCC and KGBV, Price and Garrett began to develop a plan to return to Labing to deliver a two-week workshop on coding and sustainability, comprised of two interwoven curricular programs united under the theme of "interconnection." 
Drawing on a concept that is core to both Buddhist communities and popular environmental protection rhetoric, this theme highlighted both the Internet's capacity to connect global communities and the nature of sustainable engagement with the environment. Price and Garrett assembled a team of students in Toronto to develop curricular materials in advance of a scheduled trip in February 2019, including Faraz Khoshbakhtian, an undergraduate student in Computer Science and Philosophy; Laila Strazds, a graduate student in Adult Education and Community Development with a specialization in Environmental Studies; and Dawn Walker, a graduate student in the Faculty of Information with specialization in emerging alternative and decentralized web and Internet infrastructures. Four members of the team (excluding Faraz) were able to go to Sikkim in February to lead the workshop. Over a period of several months, our team worked to develop this double-stranded workshop. The technical challenges of teaching coding in an environment where Internet access was limited or absent were especially daunting. We needed a system that could work independent of access to the web or Internet; work in a web browser without having to install additional applications; allow for export and import of saved work; and provide Uniform Resource Identifiers (URIs) that are shareable and persistent over the course of a workshop and across devices. Using the system, we wanted students to be able to check the results of their work in real-time, save their work and return to it at a later date, and view teaching materials for each specific lesson. The system should also allow teachers to view facilitation material for each specific lesson, as well as an overview for all available materials, find documentation to support setting up the system, and easily configure and modify the system over time. Developers likewise should be able to write easily maintainable lessons using widely-available tools and programming languages (i.e., HTML, CSS, JavaScript, and Markdown, using text editor applications and version controlled with Git), support local instructors seeking to improve the system for a given environment, and iterate rapidly on both the software and the curricular materials. Before we began creating workshop materials, we wanted to better understand the landscape of teaching web development around the world, and we began with a review of technology teaching projects targeting girls. For example, Girls Coding in Lagos, Nigeria is a project of Pearls Africa Foundation that is "passionate about increasing the number of females in STEM, training the girls in underserved community and improving the resourcefulness of young girls and women." Its curriculum includes HTML, CSS, JS, Python, Android, and Microsoft. The ReDI School of Digital Integration is a non-profit free digital school for tech-interested newcomers in Germany that offers multiple courses and a women-focused track on topics including front-end web development, HTML, JS, and CSS. Some useful materials from this project were available on Github. Girl Develop It is a non-profit that provides technology programs for women from diverse backgrounds. All materials are available on Github, as are the materials of Codebar, which aims to enable underrepresented people to learn programming through workshops and self-guided tutorials. 
2 These well-established programs were for the most part larger in scope than our own, and yet it was helpful and inspiring to see others doing similar work around the world. We also looked more broadly at openly licensed and open source projects and curricula. In order to find a suitable platform for teaching in Sikkim, we studied the benefits and deficits of code playgrounds such as Code Pan, Web-Maker, HTML House, Thimble, and JS Bin. It was difficult to find a well-developed environment that met our needs. Code Pan runs locally and offline, but it does not allow for saving work without external authentication. Web-Maker, an offline-first code playground designed originally for personal use by a developer in New Delhi, uses local storage to save projects locally, while also maintaining a centralized store of project files. It works offline as a web-app, can be served on HTTPS, and allows for saving projects into local.json file. However, it does not allow persistent URLs, so it would be difficult to get assignments and exercises to students, and its database runs via Google Firebase, which operates in the cloud, so while machine-local projects and cloud-based projects are easy, there is no notion of a "classroom" or other middle level or organization. HTML House uses localStorage for offline editing, has permanent URLs and project milestones, and offers a Chrome extension version, but there are no separate CSS and JS panes, and it requires Go. Thimble, a teaching tool from Mozilla's old open web promotion team, is nicely oriented towards young learners, with simple interface features that can be revealed one-by-one as skill level advances, and there are lots of curriculum materials already available. However, it is oriented towards a Scratch-like centralized community publishing model, and there is no immediately obvious route to ingest HTML/JS/CSS materials. Although each platform offers potential advantages, we settled on JS Bin, which is full-featured and provides a local SQL account and project data storage, so that no cloud access is required. We undertook a similar research process in developing the sustainability-focused lessons for our workshop. Best practice environmental education is contextualized and culturally responsive. Wanting to narrow our scope and also target an area in which Himalayan girls and women might most readily see themselves to hold decision-making power, we decided to focus on food as a window into larger issues involved in creating sustainable environments. To better understand how environmental sustainability is taught in Indian schools, we first reviewed India's Towards a Green School resource book (National Council of Educational Research and Training, 2015). We also looked beyond India for existing projects and resources to help guide our lesson plans. We reviewed and drew from a range of online materials for teaching food systems and the environment (Center for Ecoliteracy, 2014;United Nations Children Fund, 2015;United Nations Food and Agriculture Organization, 2019;World Food Programme, 2018;Yukon First Nations Curriculum Working Group). Lesson plans created by Johns Hopkins' FoodSpan project ("FoodSpan," 2016) were especially helpful to our project design, although all materials had to be considerably modified to fit the context of Sikkim. In general, we took a critical perspective on food studies, going beyond a sole focus on nutrition and taking an interdisciplinary approach that highlighted the many connections food has in our lives. 
Within every lesson we emphasized students' rights and capacity for choice in the foods they eat, with an overarching message about the many considerations that go into such food decisions. Issues addressed included marketing and advertising, taste, culture, affordability, environmental impact, and food waste. We arrived in West Sikkim with several suitcases of teaching supplies, understanding that the school would have limited art supplies, that a computer shop would be a day's drive away, that the Internet would not be reliably available, and that even consistent electricity could not be guaranteed. For our coding sessions, we carried a travel router, ethernet cables, HDMI cables, power strips, and micro-USB devices, plus three Raspberry Pi 3 B+ single-board computers. The USB devices were loaded with Adobe Acrobat Reader, PuTTY, Chrome, Notepad++, Segoe System Font, and an HTML reference and Raspberry Pi library. Our web development curriculum and teaching materials were loaded on the pre-configured Raspberry Pi, which ran hosted software that both teachers and students could access. This meant that student computers could be networked to shared materials hosted on the Pi even without live Internet access. For our sustainability sessions, we filled a suitcase with crayons, markers, sticky note pads, old cooking magazines, scissors, glue, tape, stickers, and string, and we also brought three large rolls of butcher paper. Anticipating power outages, we prepared a series of group games relevant to our curricular themes, which we could start up whenever needed. For example, the "human knot" game aimed to teach about interconnection and teamwork: small groups of approximately six students entangled themselves by holding hands and then competed against other groups to unravel their human knot into a circle. The "broken messenger" game taught about potential implications of online communication by demonstrating how meaning may be misunderstood when messages are rapidly passed from one to another, as may often happen online. In this game a message is whispered from one person to another around a circle until the last person says aloud what they heard, and the group finds that the message has changed significantly. Other games presented in our more extensive curriculum documents include the "food web" game and the "werewolf" game, which teach about food chains and cyber security, respectively. Not only would these games allow us to quickly shift focus in the event of a power outage, but they would also help us raise a group's energy level, enhance learning and memory through fun physical activity, and model a variety of teaching styles for KGBV teachers who observed or participated in our workshop.

Teaching Coding and Sustainability

We designed our workshop to take place in nine two-hour sessions with two strands, delivered over two weeks at the KGBV school. The first session, with all four instructors present, was meant to introduce the theme of interconnection. (Logistical challenges prevented us from implementing this plan, and the theme was instead introduced to the two groups separately.) Sessions two through five separated the girls into two groups, with Price and Walker leading lessons on coding in the computer lab, and Garrett and Strazds teaching about sustainability in a large hall elsewhere in the school (see Figure 1).
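A note on the technical setup: because the lesson materials and the code playground were served entirely from the Raspberry Pi over the local network, no Internet connection was needed during class. The snippet below is a minimal sketch of this kind of offline hosting, using Python's built-in HTTP server; it is an illustrative stand-in rather than the workshop's actual stack (which ran JS Bin backed by local storage on the Pi), and the folder name and port are assumptions.

    # Minimal sketch: serve offline lesson materials from a Raspberry Pi over the
    # classroom LAN (no Internet required). Folder name and port are assumptions.
    import http.server
    import socketserver

    LESSON_DIR = "lesson_materials"  # hypothetical folder of HTML/CSS exercises
    PORT = 8000                      # students browse to http://<pi-address>:8000/

    class Handler(http.server.SimpleHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            # Serve files from LESSON_DIR instead of the current working directory.
            super().__init__(*args, directory=LESSON_DIR, **kwargs)

    if __name__ == "__main__":
        # Bind to all interfaces so any machine on the local network can connect.
        with socketserver.TCPServer(("", PORT), Handler) as httpd:
            print(f"Serving {LESSON_DIR!r} on port {PORT}")
            httpd.serve_forever()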
For the second week of the workshop, sessions six through nine, Price and Garrett had to return to Canada while Strazds and Walker continued to lead the students through lessons and group work aimed at producing a final project website that united both curricular strands. With four two-hour sessions in a computer lab, our aims for the coding strand of the workshop were to teach students enough HTML and CSS to build a simple web page with assets; to introduce web literacy issues (including privacy and trust); and to teach staff enough about the system to enable them to replicate and expand on the workshop themselves. At the end of the second week, our goal was that every student would have built a web page with HTML and CSS, understand the use of tags, elements, layout tags such as div and how nesting works, be able to modify text (e.g., manipulating headers, using bold, italics, and font face), and add images and links. We also wanted each student to feel pride and excitement about their coding skills and confidence that they could learn more. Our coding lessons began with an overview of basic questions: What is a computer? How do computers work, and how are instructions given to a computer? What is the Internet? We combined active physical games with lecture-style content delivery for this initial lesson. In our next lesson, students began to practice making web pages using a "code playground" on a locally installed version of JS Bin, a tool that allows students to experiment with source code while seeing its output in real-time. They began to learn about webpages and computer languages, and about the building blocks of HTML by creating a simple page in the code playground. Lessons about tags allowed students to change the look of text on their pages by altering the tags (see Figure 2). They then learned about the main parts of a webpage and adding style using CSS. The next lesson was about making links and adding images. The final lesson continued with more advanced instruction on layout, including more instruction about the parts of a web page, practice with making borders and creating space around text and elements on a page, and putting text and images together (see Figure 3). The second strand of our workshop aimed to introduce sustainability through a focus on food, using a series of art-based exercises. The first session primed students on the concept of sustainability and explored their existing understanding of the topic, and the following three sessions used the theme of food to guide their learning. Reviewing the literature for a meaningful but simple way to present the concept, we defined sustainability as the ability for everyone to live well now and for generations to come (Drew & Gurung, 2014;East, Inmann, & Luger, 1998;Meadows & Rome, 1972;Robinson & Cole, 2015;World Commission on Environment and Development, 1987). We wanted the girls to develop a level of food literacy that could empower them to make positive food choices derived from an understanding of nutrition, environmental impact, and the food system at large. Because of Sikkim's widely publicized environmental initiatives and commitment to organic farming, environmental sustainability is a prominent part of public discourse in the state, and so it was our hope that these lessons would build on and extend students' existing knowledge and allow them to feel pride in and ownership of their region's leadership in the area.
Our first lesson introduced the concept of sustainability, highlighting the theme of "interconnection" and introducing three components of sustainability (economic, social, and environmental), with an art project allowing students to create a diagram of how these components manifest in their own lives. We then led small group discussions about food and nutrition, aiming to assess the level of their knowledge, followed by a circle game on food systems using a ball of string (see Figure 4), based on the lesson plans Teaching the Food System from Farm to Fork. 4 Lessons two and three focused on how food connects us with our environments. The heart of these lessons was a body tracing art activity that asked students to illustrate connections between their own health and the health of their environment, such that when they take care of themselves, they are also taking care of their environment. Students traced an outline of their own bodies on large pieces of butcher paper (see Figure 5). We reviewed the concept of sustainability and students were encouraged to draw things related to food on the outer portion of their body tracing, including the environments they live in, where and how food is grown, their favourite foods, their family and friends, and so on. The next lesson focused on nutrition and the digestive system, and the girls were then given time to draw on the insides of their body tracings (see Figure 6). 5 Lesson four addressed nutrition guidelines from around the world by studying food guides from various countries. Students were asked to discuss how other countries' food guides compared to India's. Next, they studied pictures and data presented in Peter Menzel and Faith D'Aluiso's book, Hungry Planet, two copies of which we had brought, and they made charts of the cost of food for a week for families from around the world. The second week of the workshop was devoted to creating websites that would be presented in a final day celebration. Through project-based learning, students integrated concepts from the sustainability and coding curricular strands and practiced skills such as time management and teamwork as they worked in groups to create a web page about a recipe. The design of this project was influenced by the "Recipes for Change" project (International Fund for Agricultural Development) and by materials available from the Global Sustainable Food Project. The week began with a lesson on food miles and carbon emissions, aiming to show how food choices have environmental impact (see Figure 7). Girls in small groups studied world maps and recipes (recipes included sushi, fish tacos, butter tarts, mushroom risotto, or Pad Thai), and they were asked to estimate the weight of each ingredient and the distance it would travel to reach Sikkim. After that exercise, we gave students time to plan their recipes for their own web page recipe projects, with the requirement that the recipe must include a title, an ingredient list, steps on how to make it, and a paragraph on cultural, nutritional, and environmental factors. The remaining days of the week were devoted to creating these websites. Occasional power outages throughout the week were distracting but allowed us to take breaks for games, which the girls greatly enjoyed. On the final day, students shared their work in a celebratory assembly for the entire school. 
Conclusion

In this brief report we have summarized a teaching project implemented in 2018-19 as a collaboration between Canadian university faculty and students and two small but influential organizations in the northeast Himalayan state of Sikkim. As Canadian researchers and educators, it was our aim in this project to support the interests of our partners in Sikkim, who have themselves been working successfully to build capacity and effect practical development outcomes for their communities. Although the majority of Sikkim's mountain population is rural and engages in subsistence farming and animal husbandry, climate disasters make this a precarious living; the mountain topography likewise complicates the likelihood of significant industrial development in the region. But tourism is increasing rapidly, and the natural environment may soon be the state's most valuable resource. Without carefully developed village-based ecotourism ventures that preserve ecological and cultural resources and raise environmental awareness in communities, the growing influx of visitors will only place further stress on this fragile mountain region (Datta & Banerji, 2015). As Internet access further increases the tourism industry's growth rate, web development skills combined with an understanding of sustainable environments could have substantial impact on the economic welfare of girls graduating from the KGBV. Beyond its potential economic impact, however, the training of girls in particular could help redistribute power and resources in a region where women are often poorer, less educated, and excluded from decision-making in institutional and public contexts (Karan, 1989, p. 270). In this project, we have followed an approach to development interventions that supports human wellbeing, not just economic growth. In our preliminary work for this project we found many inspiring code teaching projects in operation around the world, but few of them have grappled with the difficulty of working in offline environments at the "edge of the Internet." Moving skills and knowledge into these regions before the Internet arrives in full force might help mitigate some of the web's worst impacts on equity and justice.
Effect of Laser Radiation on the Dynamics of Active Brownian Macroparticles in an Extended Plasma-Dust Monolayer

Using a modified Brownian dynamics method, the dynamics of macroparticles with a uniform metal coating in a plasma-dust monolayer under the action of laser radiation was simulated. The time dependences of the root-mean-square and average linear displacements of the particles were calculated for different initial effective nonideality parameters and different intensities of laser radiation. A relationship was established that connects the effective nonideality parameter of the dusty plasma system of active particles with the maximum value of the mean linear displacement of the particles.

Introduction

A dusty plasma monolayer is an actively investigated physical object with a number of unique properties. The kinetic temperature of the macroparticles in a dusty plasma monolayer is much higher than the temperature of the surrounding heavy particles (neutrals and plasma ions), because the charge carried by the particles allows electric energy of the gas discharge to be converted into kinetic energy of chaotic particle motion in the plane of the monolayer. For this reason, a dusty plasma system can be regarded as one possible realization of active matter. In recent years, interest in this form of matter has been growing rapidly among scientists from various fields. The unusual properties of active matter are largely associated with the thermodynamic openness of such systems; as a consequence, its study is important for solving fundamental problems of physics, chemistry and biology. Active matter is also of interest for many applications, from efficient drug delivery to the creation of "living" materials with structures and functions that cannot be achieved in passive materials. The goal of this work is to simulate the dynamics of charged macroparticles with a uniform metal coating in a plasma-dust monolayer under the action of laser radiation. Increasing the kinetic energy of particles in a dusty plasma system with laser radiation has advantages over the traditional method of heating and melting a dusty plasma crystal by reducing the pressure in the discharge chamber: the particle charge remains practically unchanged, the vibration amplitude of the particles in the direction perpendicular to the plane of the monolayer does not increase, and the system remains stable. In earlier experimental work, the behavior of a single such particle in an RF capacitive discharge under the action of laser radiation was studied. It was found that such a particle performs cyclic movements along a trajectory close to a circle, and the radius of the trajectory depends on the power of the applied laser radiation: the higher the power, the larger the radius. Other experiments examined the microscopic dynamics of metal-coated melamine-formaldehyde particles in a plasma-dust monolayer when a limited area of the system was exposed to laser radiation. Under laser radiation in the wavelength range below the red border of the photoelectric effect, the particle surface was heated, and the collision of neutral atoms with a more strongly heated particle is accompanied by a greater momentum transfer than a collision with a less heated one.
Thus, a dust particle with a heated surface receives an additional external source of energy that increases its kinetic energy. The additional force associated with particle heating, like the Langevin force, changes randomly in both magnitude and direction. Unlike the Langevin force, however, this additional force is not related to the equilibrium temperature of the surrounding gas, so a non-standard approach, described below, is needed to specify it in the simulation.

Statement of the Problem and Details of Modeling

Let us consider the case when a dusty plasma monolayer consists of several hundred micron-sized polymer particles coated with a metal film. The movement of each individual particle then depends largely on the interaction with neighboring particles and on the external conditions. The strong interaction of the particles is due to the electric charge they carry and to the properties of the surrounding nonequilibrium low-pressure gas-discharge plasma. The model assumes that the particles have the same size and composition and are uniformly covered with a thin metal film capable of absorbing laser radiation. The initial arrangement of the particles is uniform in the plane of the monolayer. The Yukawa potential was chosen as the interaction potential,

U(r) = (Z_d e)² exp(−r/λ_D) / r,   (1)

where Z_d is the particle charge, r is the distance between two neighboring particles, and λ_D is the Debye screening radius, which was assumed to be equal to the mean interparticle distance l_p; this corresponds to typical parameters of simulations of such systems and is close to actual experimental data. One of the key parameters of the dusty subsystem is the effective nonideality parameter Γ*, which is determined from the relation

Γ* = [(Z_d e)² / (l_p T)] (1 + κ + κ²/2) exp(−κ),   (2)

where κ = l_p/λ_D is the screening parameter and T is the kinetic temperature of the dust subsystem. The "crystal-liquid" phase transition in such a system corresponds to the value Γ* ≈ 104. In addition, the particles are affected by the friction force on neutrals, −mν ṙ_i, and by the defining random force, the Langevin thermostat L_i. Here m is the mass of a particle, and the friction coefficient is given by the Epstein expression

ν = δ (8√(2π)/3) (m_a/m) n_a c_a r_p²,

where δ is a coefficient that depends on the microscopic mechanism of collision of a gas atom with the particle surface, n_a is the concentration of buffer gas atoms, m_a is the mass of a buffer gas atom, c_a is the thermal velocity of the buffer gas atoms, and r_p is the radius of a dust macroparticle. For the coefficient δ, a value of 1.22 was chosen, intermediate between the value δ = 1 for an ideally reflecting surface and δ = 1.442 for the surface of an ideal heat insulator. Note that in this case the force L_i is specified as a normal random variable with variance 2νmk_BT/dt, where dt is the simulation time step, so that the fluctuation-dissipation relation with the friction term is satisfied. It is assumed that the forces acting in the vertical direction counterbalance each other. The system is kept within the considered region by potential walls at the boundaries; the term F_conf in the equation of motion accounts for the force from the border of the area (confinement). In the case under consideration, the potential trap is specified by applying mirror boundary conditions. In addition, the particles are affected by a random force F_i^tp associated with thermophoresis caused by the heating of the metal surface of the particle by laser radiation; this force is proportional to the density of the light flux.
The heated surface of the particle means that neutral atoms of the buffer gas transfer a greater momentum upon collision with it than upon collision with a "cold" particle. In our case, it is assumed that the system is uniformly illuminated by laser radiation, so that the surfaces of all particles are heated simultaneously and uniformly; this force is thus responsible for the activity of the particles. In this work, the magnitude and direction of this force are set randomly, with uniformly distributed values of the modulus and of the rotation angle of the force vector, and the maximum value of the force is selected so as to provide the desired effective nonideality parameter of the dust subsystem, since an increase in the force causes an increase in the effective temperature of the dust subsystem. The modulus of its average value is further denoted as F. Thus, the equations of motion of the particles in the system are as follows:

m r̈_i = Σ_{j≠i} F(r_ij) + F_conf + F_i^tp + L_i − mν ṙ_i,   (3)

where F(r_ij) is the interparticle force derived from the potential (1). The two-dimensional region, limited by a potential trap with a radius of 28 mm (Figure 1), was filled with N = 625 particles with an interparticle distance of 1 mm and a particle diameter of 10 μm; the mass of a particle was calculated from the density of melamine-formaldehyde and the 100 nm thickness of the copper coating and amounted to 10⁻⁹ g. The charge on the particles was taken equal to 10,000 electron charges, which corresponds to the data of many works. The effective nonideality parameter Γ* was determined by calculating the radial distribution function (pair correlation function) of the particles, using a previously described method. The studies were carried out for different values of the initial effective nonideality parameter Γ*₀, as well as for different values of the average force F caused by the heating of the particle surface by laser radiation. After "switching on" the force F, the system was held for 50 s in order to come to a stationary state, characterized by a fairly stable average kinetic energy of the dust subsystem. Figure 2 shows the trajectories of particles obtained over 0.3 s for different values of the force F and an initial effective nonideality parameter Γ*₀ = 300. As expected, an increase in the force F leads to an increase in the intensity of the kinetic motion (effective temperature) of the particles: for F = 10 fN we have Γ* ≈ 200, for F = 20 fN we have Γ* ≈ 50, and for F = 40 fN we have Γ* ≈ 15.
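For readers who wish to reproduce the qualitative behaviour of the model, a minimal Brownian-dynamics sketch of equation (3) is given below. This is not the code used for the study: the integrator (Euler-Maruyama), the unit system, and all parameter values are assumptions chosen only for illustration, and the confinement is implemented as a crude velocity reflection.

    # Illustrative sketch of the active Brownian-dynamics model of Eq. (3).
    # Not the authors' code: integrator and parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # --- assumed parameters (SI units) ---
    N      = 625           # number of dust particles
    m      = 1.0e-12       # particle mass, kg (1e-9 g)
    Z_d    = 1.0e4         # particle charge in elementary charges
    e      = 1.602e-19     # elementary charge, C
    k_B    = 1.381e-23     # Boltzmann constant, J/K
    l_p    = 1.0e-3        # mean interparticle distance, m
    lam_D  = l_p           # Debye screening length, taken equal to l_p
    nu     = 10.0          # Epstein friction rate, 1/s (assumed)
    T      = 300.0         # thermostat temperature, K
    F_tp   = 20.0e-15      # mean modulus of the thermophoretic force, N (20 fN)
    R_trap = 28.0e-3       # radius of the confining region, m
    dt     = 1.0e-4        # time step, s
    coulomb = (Z_d * e) ** 2 / (4.0 * np.pi * 8.854e-12)  # (Z_d e)^2 / (4*pi*eps0)

    def yukawa_forces(pos):
        """Pairwise screened-Coulomb (Yukawa) forces, Eq. (1)."""
        diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 2)
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)                       # no self-interaction
        # F = -dU/dr along r_ij, with U = coulomb * exp(-r/lam_D) / r
        mag = coulomb * np.exp(-r / lam_D) * (1.0 / r**2 + 1.0 / (lam_D * r))
        return np.sum(mag[:, :, None] * diff / r[:, :, None], axis=1)

    def step(pos, vel):
        """One Euler-Maruyama step of Eq. (3)."""
        F = yukawa_forces(pos)
        # Langevin thermostat: normal random force, variance 2*nu*m*k_B*T/dt
        F += rng.normal(0.0, np.sqrt(2.0 * nu * m * k_B * T / dt), size=pos.shape)
        # "Thermophoretic" activity: random modulus (mean F_tp), random direction
        ang = rng.uniform(0.0, 2.0 * np.pi, len(pos))
        mod = rng.uniform(0.0, 2.0 * F_tp, len(pos))
        F += mod[:, None] * np.stack([np.cos(ang), np.sin(ang)], axis=1)
        vel = vel + dt * (F / m - nu * vel)
        pos = pos + dt * vel
        # crude mirror boundary: reflect particles that left the circular trap
        out = np.linalg.norm(pos, axis=1) > R_trap
        vel[out] *= -1.0
        return pos, vel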
In this work, the magnitude and direction of this force is set randomly with uniformly distributed values of the modulus and angle of rotation of the force vector, and the maximum value of the force is selected so as to provide the desired effective parameter of the nonideality of the dust subsystem, since an increase in force causes an increase in the effective temperature of the dust subsystem. The modulus of its average value is further denoted as. Thus, the equations of motion of particles in the system are as follows: The two-dimensional region, limited by a potential trap with a radius of 28 mm (Figure 1), was filled with particles in the amount of N = 625, with an interparticle distance of 1 mm and a particle diameter of 10 m, the mass of particles was calculated based on the density of melamine-formaldehyde and the thickness of the copper coating of 100 nm and amounted to 10 −9 g. The charge on the particles was taken as equal to 10,000 electron charges, which corresponds to the data of many works, see, for example,. The determination of the effective nonideality parameter * was carried out by calculating the radial distribution function of particles (pair correlation function) using the method described in. The studies were carried out for different values of the initial effective nonideality parameter * 0, as well as for different values of the average force caused by the heating of the particle surface by laser radiation. After "switching on" the force, the system was held for 50 s in order to come to a stationary state, which was characterized by a fairly stable average kinetic energy of the dust subsystem. Figure 2 shows the trajectories of particles obtained in 0.3 s for different values of the force and the initial effective parameter of nonideality * 0 = 300. As expected, with an increase in the force, an increase in the intensity of the kinetic motion of the particles (effective temperature) of the system is observed. In this case, if = 10 fN, we have * ≈ 200, if = 20 fN, we have * ≈ 50, and if = 40 fN, we have * ≈ 15. Based on the results of numerical experiments, the time dependences of the rootmean-square displacements and average linear displacements of particles along the vectors of the initial velocities of the particles were calculated. As can be seen from these figures, the graph of the rms displacement at a higher average thermophoretic force is higher than at a lower one, which is obviously associated with a more intense kinetic motion of particles in the first case. The values of the rms displacement at a higher value of the initial effective parameter of nonideality are higher, which is due to the fact that at low *0, the effect of the external force will manifest itself less because of the high initial average kinetic energy of the particles. It can be seen that under the action of a thermophoretic force on a dusty system for the root-mean-square displacement of particles in such a system, there are regions corresponding to the ballistic, transient, and diffusion regimes. Moreover, the transient regime is more pronounced at the minimum value of the acting thermophoretic force. Obtained Data, Their Analysis and Discussion To characterize the active motion of a single particle, the value of the average linear displacement along the vector of the initial particle velocity (Figure 4) can be used. 
It Based on the results of numerical experiments, the time dependences of the root-meansquare displacements and average linear displacements of particles along the vectors of the initial velocities of the particles were calculated. Based on the results of numerical experiments, the time dependences of the rootmean-square displacements and average linear displacements of particles along the vectors of the initial velocities of the particles were calculated. As can be seen from these figures, the graph of the rms displacement at a higher average thermophoretic force is higher than at a lower one, which is obviously associated with a more intense kinetic motion of particles in the first case. The values of the rms displacement at a higher value of the initial effective parameter of nonideality are higher, which is due to the fact that at low *0, the effect of the external force will manifest itself less because of the high initial average kinetic energy of the particles. It can be seen that under the action of a thermophoretic force on a dusty system for the root-mean-square displacement of particles in such a system, there are regions corresponding to the ballistic, transient, and diffusion regimes. Moreover, the transient regime is more pronounced at the minimum value of the acting thermophoretic force. Obtained Data, Their Analysis and Discussion To characterize the active motion of a single particle, the value of the average linear displacement along the vector of the initial particle velocity (Figure 4) can be used. It As can be seen from these figures, the graph of the rms displacement at a higher average thermophoretic force is higher than at a lower one, which is obviously associated with a more intense kinetic motion of particles in the first case. The values of the rms displacement at a higher value of the initial effective parameter of nonideality are higher, which is due to the fact that at low * 0, the effect of the external force will manifest itself less because of the high initial average kinetic energy of the particles. It can be seen that under the action of a thermophoretic force on a dusty system for the root-mean-square displacement of particles in such a system, there are regions corresponding to the ballistic, transient, and diffusion regimes. Moreover, the transient regime is more pronounced at the minimum value of the acting thermophoretic force. To characterize the active motion of a single particle, the value of the average linear displacement along the vector of the initial particle velocity (Figure 4) can be used. It is noted in that for a single active particle, the value of the linear displacement along the initial orientation of the particles is always nonzero. Molecules 2021, 26, x FOR PEER REVIEW 5 of 8 is noted in that for a single active particle, the value of the linear displacement along the initial orientation of the particles is always nonzero. In the case of considering a certain ensemble of particles, it is necessary to use averaging over particles, and therefore, this value is calculated using the following formula: In this case, for the average linear displacement perpendicular to the direction of the initial velocities, we have the relation: In expressions and, the initial velocities vi are taken to be those velocities that the particles acquire 50 s after the "switching on" of the force. Figure 5 shows the graphs of the time dependences of the value ⟨ ( )⟩ (on the inset of the value ⟨ ( )⟩). 
In the case of considering a certain ensemble of particles, it is necessary to use averaging over particles, and therefore, this value is calculated using the following formula: In this case, for the average linear displacement perpendicular to the direction of the initial velocities, we have the relation: In expressions and, the initial velocities v i are taken to be those velocities that the particles acquire 50 s after the "switching on" of the force F. Figure 5 shows the graphs of the time dependences of the value L vx (t) (on the inset of the value L vy (t) ). As can be seen from the graphs presented, the value of the average linear displacement along the vector of initial velocities first increases sharply, and the growth occurs at times corresponding to the ballistic regime of particle motion (see Figure 3). Here, the particles move mainly along the vector of their initial velocity. Further, the value of the average linear displacement reaches its local maximum and, performing damped oscillations, reaches an almost constant value, where the diffusion mode of motion is realized. It can be seen that both the value of the first maximum and the constant value of saturation in the diffusion mode depend on the value of the average force F. With increasing force values, the displacement increases, i.e., the activity of the system is increasing. Different values of the external force make it possible to realize different effective parameters of the nonideality of the system. For the value of the average linear displacement perpendicular to the vector of the initial velocity (inset in Figure 5), there is a slight increase in the amplitude of oscillations near zero. As can be seen from the graphs presented, the value of the average linear displacement along the vector of initial velocities first increases sharply, and the growth occurs at times corresponding to the ballistic regime of particle motion (see Figure 3). Here, the particles move mainly along the vector of their initial velocity. Further, the value of the average linear displacement reaches its local maximum and, performing damped oscillations, reaches an almost constant value, where the diffusion mode of motion is realized. It can be seen that both the value of the first maximum and the constant value of saturation in the diffusion mode depend on the value of the average force. With increasing force values, the displacement increases, i.e., the activity of the system is increasing. Different values of the external force make it possible to realize different effective parameters of the nonideality of the system. For the value of the average linear displacement perpendicular to the vector of the initial velocity (inset in Figure 5), there is a slight increase in the amplitude of oscillations near zero. Figure 6 shows the dependence of the maximum value of the average linear displacement along the vector of the initial velocity on the effective parameter of the nonideality of the system realized under the given conditions. It can be seen from the graph that the maximum values that the value 〈 ( )〉 takes is associated with the effective parameter of the system's nonideality through a simple relation: Relation can be used to determine the effective nonideality parameter of a system of active dust particles in a dust-plasma monolayer in a wide range with known experimental data on the coordinates and velocities of particles. 
Figure 6 shows the dependence of the maximum value of the average linear displacement along the vector of the initial velocity on the effective parameter of the nonideality of the system realized under the given conditions. It can be seen from the graph that the maximum values that the value L vx (t max ) takes is associated with the effective parameter of the system's nonideality through a simple relation: Molecules 2021, 26, x FOR PEER REVIEW 7 of 8 Figure 6. Dependence of the maximum values of the average linear displacements of active particles in a plasma-dust monolayer along the vectors of the initial velocities on the steady-state effective parameter of nonideality of the system. Conclusions In this work, using the method of nonequilibrium Brownian dynamics, the microscopic dynamics of a system of interacting dust particles with a uniform metal coating Relation can be used to determine the effective nonideality parameter of a system of active dust particles in a dust-plasma monolayer in a wide range with known experimental data on the coordinates and velocities of particles. Conclusions In this work, using the method of nonequilibrium Brownian dynamics, the microscopic dynamics of a system of interacting dust particles with a uniform metal coating under the action of laser radiation is investigated. Data on the coordinates and velocities of particles are obtained for different initial effective parameters of nonideality and different intensities of laser radiation. Using these data, the time dependences of the root-mean-square and average linear displacements of particles were calculated. It was found that a twofold increase in the average force caused by thermophoresis when the system is irradiated with a laser can lead to a fivefold decrease in the effective nonideality parameter. A simple relationship is established that connects the effective parameter of nonideality of the dusty plasma system of active particles with the maximum value of the mean linear displacement of particles along the direction of their initial velocities. This ratio can be used to diagnose such systems in a wide range of changes in the effective parameter of nonideality. |
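As an illustration of the kind of simulation described above, the following is a minimal Python sketch of Brownian dynamics with an added random "active" force. It is not the authors' code: the Yukawa interparticle forces and the confining trap are omitted, the Langevin variance follows the standard fluctuation-dissipation form, and all parameter values except the particle number and mass are assumed placeholders.

import numpy as np

N = 625              # number of particles (as in the text)
dt = 1e-3            # integration time step, s (assumed)
nu = 5.0             # friction coefficient on neutrals, 1/s (assumed)
m = 1e-12            # particle mass, kg (10^-9 g, as in the text)
kB_T = 4.1e-21       # thermostat energy k_B*T, J (assumed)
F_active = 20e-15    # mean thermophoretic force, N (20 fN, one of the values above)

rng = np.random.default_rng(0)
pos = rng.uniform(-14e-3, 14e-3, size=(N, 2))   # start inside a 28 mm region
vel = np.zeros((N, 2))

def random_forces(n):
    # Langevin thermostat: normally distributed force components.
    langevin = rng.normal(0.0, np.sqrt(2.0 * nu * m * kB_T / dt), size=(n, 2))
    # "Active" thermophoretic kick: uniformly distributed modulus and angle.
    mag = rng.uniform(0.0, 2.0 * F_active, size=n)
    ang = rng.uniform(0.0, 2.0 * np.pi, size=n)
    active = np.column_stack((mag * np.cos(ang), mag * np.sin(ang)))
    return langevin + active

def step(pos, vel):
    # Euler step with friction; Yukawa forces and confinement omitted for brevity.
    vel = vel + (random_forces(len(pos)) / m - nu * vel) * dt
    pos = pos + vel * dt
    return pos, vel

for _ in range(5000):                 # relaxation after "switching on" the force
    pos, vel = step(pos, vel)

v0 = vel / np.linalg.norm(vel, axis=1, keepdims=True)   # initial velocity directions
pos0 = pos.copy()
L_parallel = []                       # <L_vx(t)>: mean displacement along v0
for _ in range(5000):
    pos, vel = step(pos, vel)
    L_parallel.append(np.mean(np.sum((pos - pos0) * v0, axis=1)))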
import pytest
from pathlib import Path

import paths  # assumption: the project-local module that exposes DATAPATH


@pytest.fixture
def mock_data_dir(monkeypatch):
mock_data_dir = Path(__file__).parent / "data"
monkeypatch.setattr(paths, "DATAPATH", mock_data_dir)
return mock_data_dir |
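A hypothetical test relying on this fixture could look as follows; it assumes the fixture is registered with pytest (as above) and that `paths` is importable from the test module.

def test_datapath_is_patched(mock_data_dir):
    # The fixture monkeypatches paths.DATAPATH, so code under test that
    # resolves files relative to DATAPATH reads from the test data directory.
    assert paths.DATAPATH == mock_data_dir
    assert mock_data_dir.name == "data"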
// update mode is RANGE -> ORDERED BY date, id
@Test
public void testDefaultCopyOnWriteUpdateUnpartitionedUnsortedTable() {
sql("CREATE TABLE %s (id bigint, data string) USING iceberg", tableName);
Table table = validationCatalog.loadTable(tableIdent);
checkCopyOnWriteDistributionAndOrdering(table, UPDATE, FILE_CLUSTERED_DISTRIBUTION, FILE_POSITION_ORDERING);
} |
<reponame>Valeri12580/Gladiators<filename>src/main/java/project/gladiators/service/ArticleService.java
package project.gladiators.service;
import org.springframework.web.multipart.MultipartFile;
import project.gladiators.service.serviceModels.ArticleServiceModel;
import java.io.IOException;
import java.util.List;
public interface ArticleService {
List<ArticleServiceModel> getAllArticles();
ArticleServiceModel findArticleById(String id);
void deleteById(String id);
void registerArticle(ArticleServiceModel articleServiceModel, String username, MultipartFile image) throws IOException;
}
|
#:
@asd
apibrėžti foo():
pereiti
# : comment.line.number-sign.python, punctuation.definition.comment.python, source.python
: : comment.line.number-sign.python, source.python
: meta.function.decorator.python, source.python
@ : entity.name.function.decorator.python, meta.function.decorator.python, source.python
asd : entity.name.function.decorator.python, meta.function.decorator.python, source.python
: meta.function.python, source.python
apibrėžti : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
foo : entity.name.function.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
: source.python
pereiti : keyword.control.flow.python, source.python
|
Numerical analysis of thermal transmittance of hollow concrete blocks Since the 1970s, due to the worldwide energy crisis, some countries have adopted strict legislation to promote energy efficiency in buildings. The electrical power used in buildings is directly related to the equipment required to provide thermal comfort to occupants. In this context, one of the most important factors that determine the choice of these systems is the conduction load through the building envelope. In several countries, hollow concrete blocks have become common over the past few years. Therefore, for the assessment of building energy efficiency, the thermal transmittance of hollow blocks must be evaluated. This study obtains values of thermal transmittance for different cavity configurations of concrete blocks by performing computational fluid dynamics simulations. These transmittances are compared to those calculated using the methodology presented in international standards. In the Results section, it is shown that large cavities produce higher transmittance values due to convection and that thermal radiation may play an important role in the overall heat transfer through hollow concrete blocks. Some methods used to reduce the thermal transmittance are presented and discussed. Finally, the thermal transmittance of cavities with the same wall height is compared with that of a single block, and different strategies are considered for reducing the thermal transmittance of hollow concrete blocks.
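As a rough illustration of the standards-style calculation mentioned above, the sketch below estimates a transmittance from a series of thermal resistances along one heat-flow path through a block. All material properties, surface resistances and the cavity resistance are assumed typical values, not data from this study, and the real standard procedure (e.g. ISO 6946) additionally combines the parallel paths through webs and cavities.

# Minimal sketch of a series-resistance U-value estimate for one path through a hollow block.
R_SI, R_SE = 0.13, 0.04      # assumed internal/external surface resistances, m2K/W
LAMBDA_CONCRETE = 1.7        # assumed concrete conductivity, W/(m K)
R_CAVITY = 0.18              # assumed resistance of an unventilated air cavity, m2K/W

def u_value(layers):
    """layers: list of ('solid', thickness_m) or ('cavity', number_of_cavities_in_series)."""
    r_total = R_SI + R_SE
    for kind, value in layers:
        if kind == "solid":
            r_total += value / LAMBDA_CONCRETE
        else:  # 'cavity'
            r_total += value * R_CAVITY
    return 1.0 / r_total

# A block idealised as shell - cavity - web - cavity - shell along the heat-flow direction:
print(u_value([("solid", 0.03), ("cavity", 1), ("solid", 0.02), ("cavity", 1), ("solid", 0.03)]))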
Nonlinear dynamics and parameter control for metamaterial plate with negative Poisson's ratio The metamaterial with negative Poisson's ratio is widely used due to its special mechanical and physical properties. Based on the theory of periodic solutions and bifurcation in nonlinear dynamics, we mainly focus on the nonlinear vibration behaviors and parameter control of a simply supported concave hexagonal composite sandwich plate with negative Poisson's ratio in auxetic honeycombs subjected to in-plane and transverse excitation. The Melnikov function is improved by introducing the curvilinear coordinate frame and the Poincaré map to detect the existence and number of the periodic solutions. The effects of the forcing excitation coefficient on the nonlinear dynamics as well as the parameter control conditions are presented. A numerical method is used to obtain the phase portraits showing the number and corresponding positions of multiple periodic orbits. Introduction Metamaterials are man-made structures with special properties that may not be readily available from natural materials. The typical metamaterials are generally associated with four elastic constants: the Young's modulus, shear modulus, bulk modulus and Poisson's ratio. Poisson's ratio, by definition, is the negative ratio of transverse to axial strain. As is well known, most natural materials have a positive Poisson's ratio. However, metamaterials with negative Poisson's ratio, also called 'auxetic' materials, can exhibit an unconventional behavior, expanding laterally when stretched and contracting laterally when compressed. Such materials may have multiple potential applications in the automotive, defense and aerospace industries due to their advantages of high energy absorption, fracture toughness, indentation resistance and so on. Auxetic materials, as potential new materials, have attracted great interest. During the last two decades, a variety of materials and structures with negative Poisson's ratio have been discovered. Masters and Evans proposed an auxetic behavior of the traditional 2D re-entrant hexagonal array of honeycomb structure. Smith, Grima and Evans developed a new model to describe the strain-dependent Poisson's function behavior of honeycomb and foam materials, and suggested a possible new route for the direct manufacture of auxetic foams. Thereafter, extensive efforts have been made to achieve negative Poisson's ratio in various structures (see and references therein). As is well known, making the cell of a conventional hexagonal honeycomb re-entrant produces a negative Poisson's ratio, which is described as an auxetic honeycomb. When auxetic materials are used as load-bearing or energy-absorption structures, there exist geometric nonlinearity and shear deformation, which can lead to nonlinear oscillations of the structures. Thus, the nonlinear dynamic behaviors of the structures, especially the periodic solutions, bifurcation and chaos, become important and necessary to study. Hou et al. demonstrated the potential use of auxetic and graded conventional-auxetic structures under flatwise compression and edgewise loading. Imbalzano et al. conducted numerical investigations of the dynamic responses and energy absorbing capabilities of auxetic composite panels and equivalent monolithic steel plates. Yang et al.
investigated the bifurcation and chaos behavior of a sandwich plate with viscoelastic soft core in supersonic flow by considering the in-plane periodic loading. Guo et al. investigated the free vibration of graphene nanoplatelet reinforced laminated composite quadrilateral plates using the element-free IMLS-Ritz method. Duc et al. applied the analytical solution to investigate the nonlinear dynamic response and vibration of sandwich auxetic composite cylindrical panels. Li, Quan and Zhang performed the curvilinear coordinate transformation to study the bifurcation and number of subharmonic solutions of 4 dimensional non-autonomous slow-fast systems. In this paper, we focus on the nonlinear vibration behaviors and parameter control of a simply supported concave hexagonal composite metamaterial sandwich plate with negative Poisson's ratio in auxetic honeycombs subjected to its in-plane and transverse excitation. The nonlinear motion equation of the model is derived by the method of multiple scales. The bifurcation of multiple periodic solutions occurs under certain conditions. Furthermore, the numbers and relative positions of the periodic solutions can be clearly found from the numerical results. Metamaterial plate system and averaged equation In this section, we focus on the mechanical model of a simply supported concave hexagonal composite metamaterial sandwich plate with length a, width b, core thickness c h, and total thickness h. This plate is subjected to its in-plane excitation and transverse excitation. The section model of the plate and the unit cell of concave hexagonal honeycomb core are shown in figure 1, where 1 l, 2 l represent the length of the inclined and horizontal cell rib, t is the uniform thickness of cell rib, is the inclined angle. The honeycomb is auxetic if is positive and conventional if is negative. The non-dimensional governing equations of transverse motion of the plate is as follows. cos where is a small parameter, 1 and 2 are the first and second order natural frequency of the corresponding linear system respectively, is damping coefficient, are nondimensional coefficients, and 2 1, f f are forcing excitations, 1 and 2 are the frequencies of transverse and in-plane excitation respectively. we focused on the case of 1:1 internal resonance and primary parametric resonance. In this resonant case, are two detuning parameters, and we assume that. By the methods of multiple scales, the averaged equation is obtained as follows., are uniquely determined by the coefficients of equation. Transformations for the system For convenience, we introduce the following rescaling transformation where is a sufficiently small parameter. Then system can be rewritten as. The periods of the orbits are Melnikov function Introducing curvilinear coordinates in the neighbourhood of Assuming that 2 2, then the period For convenience, denote ). Then the unknown coefficients of equation are expressed as follows., equation becomes, and The number of solutions for ),, ( is closely related to the number of periodic solutions of system. We choose a group of parameters as ),,,,,,, Bifurcation and control of multiple periodic solutions The forcing excitation coefficient 1 f is left as the only unknown parameter. The number of solutions for equation is closely related to the value of the parameter 1 f., then there is no real solution for equation. Therefore, system has no periodic solution. The proof of theorem 1 is completed. 
According to theorem 1, the periodic solutions of the system appear in pairs. When |
import InstanceSkel = require("../../../instance_skel")
import { CompanionVariable } from "../../../instance_skel_types"
import { MagewellConfig } from "./config"
import { DeviceStatus, Duration } from "./magewell"
import { MagewellState } from "./magewellstate"
export function UpdateVariables(instance: InstanceSkel<MagewellConfig>, state: MagewellState): void {
if (!state.status) return;
instance.setVariable(`record_status`, (state.status["cur-status"] & DeviceStatus.statusRecord) == DeviceStatus.statusRecord ? 'Recording' : 'Record');
instance.setVariable(`stream_status`, (state.status["cur-status"] & DeviceStatus.statusLiving) == DeviceStatus.statusLiving ? 'Streaming' : 'Stream');
instance.setVariable(`stream_bitrate`, (state.status["live-status"]["cur-bps"] / 125000).toFixed(2));
const streamDuration = formatDurationMilliseconds(state.status["live-status"]["run-ms"]);
instance.setVariable(`stream_duration_hm`, streamDuration[0]);
instance.setVariable(`stream_duration_hms`, streamDuration[1]);
const recordDuration = formatDurationMilliseconds(state.status["rec-status"]["run-ms"]);
instance.setVariable(`record_duration_hm`, recordDuration[0]);
instance.setVariable(`record_duration_hms`, recordDuration[1]);
}
export function InitVariables(instance: InstanceSkel<MagewellConfig>, state: MagewellState): void {
const variables: CompanionVariable[] = []
variables.push({
name: 'record_status',
label: 'Current recording status: Recording/Record'
});
variables.push({
name: 'stream_status',
label: 'Current streaming status: Streaming/Stream'
});
variables.push({
label: 'Streaming bitrate in Mb/s',
name: 'stream_bitrate',
});
variables.push({
label: 'Streaming duration (hh:mm)',
name: 'stream_duration_hm',
});
variables.push({
label: 'Streaming duration (hh:mm:ss)',
name: 'stream_duration_hms',
});
variables.push({
label: 'Recording duration (hh:mm)',
name: 'record_duration_hm',
});
variables.push({
label: 'Recording duration (hh:mm:ss)',
name: 'record_duration_hms',
});
UpdateVariables(instance, state);
instance.setVariableDefinitions(variables)
}
function formatDuration(durationObj: Duration | undefined): [string, string] {
let durationLong = '00:00:00'
let durationShort = '00:00'
if (durationObj) {
durationShort = `${pad(`${durationObj.hours}`, '0', 2)}:${pad(`${durationObj.minutes}`, '0', 2)}`
durationLong = `${durationShort}:${pad(`${durationObj.seconds}`, '0', 2)}`
}
return [durationShort, durationLong]
}
function pad(str: string, prefix: string, len: number): string {
while (str.length < len) {
str = prefix + str
}
return str
}
function formatDurationMilliseconds(totalMilliseconds: number | undefined): [string, string] {
let duration: Duration | undefined
if (totalMilliseconds) {
duration = {
hours: 0,
minutes: 0,
seconds: 0
}
totalMilliseconds = Math.floor(totalMilliseconds / 1000);
duration.seconds = totalMilliseconds % 60
totalMilliseconds = Math.floor(totalMilliseconds / 60)
duration.minutes = totalMilliseconds % 60
totalMilliseconds = Math.floor(totalMilliseconds / 60)
duration.hours = totalMilliseconds
}
return formatDuration(duration)
}
|
THE ROLE OF INTESTINAL MICROBIOTA IN THE PATHOGENESIS OF SEPSIS PROGRESSION Background. Sepsis is a global public health problem and is associated with high mortality rates in all countries. According to recent views, sepsis is defned as life-threatening organ dysfunction caused by an unregulated response of the host to infection. Objective. To analyze the results of scientifc studies confrming the key role of intestinal dysbiosis in the pathophysiology of sepsis. Material and methods. A qualitative analysis of 34 Russian-language and English-language sources concerning the role of the intestinal microbiota in the onset of sepsis was carried out. Results. It has been established that intestinal microbiota plays an important role in the etiology, pathogenesis and treatment of sepsis and its disbalance can trigger the development of sepsis of various etiologies, mainly gram-negative. Conclusions. The analysis of the literature indicates that bacterial translocation can be natural provided that the immune system functions properly. Intestinal microbiota plays one of the leading roles in the development of sepsis. The use of probiotics and transplantation of intestinal microbiota contribute greatly to the treatment and prevention of sepsis in ICU patients. |
<filename>kata/5-kyu/common-denominators/main/Fracts.java<gh_stars>10-100
import static java.math.BigInteger.valueOf;
import static java.util.Arrays.stream;
import static java.util.stream.Collectors.joining;
import java.util.function.LongBinaryOperator;
interface Fracts {
static String convertFrac(long[][] lst) {
if (lst.length == 0) {
return "";
}
LongBinaryOperator calcGcd = (a, b) -> valueOf(a).gcd(valueOf(b)).longValue();
LongBinaryOperator calcLcm = (a, b) -> b / calcGcd.applyAsLong(a, b) * a;
long lcm = stream(lst).map(r -> r[1]).reduce(lst[0][1], calcLcm::applyAsLong);
long gcd = stream(lst).map(r -> lcm * r[0] / r[1]).reduce(lcm, calcGcd::applyAsLong);
var str = stream(lst).mapToLong(r -> lcm * r[0] / r[1] / gcd).mapToObj(Long::toString);
return "(" + str.collect(joining("," + lcm / gcd + ")(")) + "," + lcm / gcd + ")";
}
}
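For comparison, a rough Python transcription of the same LCM/GCD approach used in the Java solution above (not part of the kata) might look like this:

from math import gcd
from functools import reduce

def convert_frac(fractions):
    # Bring all fractions to the least common denominator, then divide out
    # any factor common to every scaled numerator and the denominator.
    if not fractions:
        return ""
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (d for _, d in fractions), 1)
    nums = [lcm * n // d for n, d in fractions]
    g = reduce(gcd, nums, lcm)
    return "".join(f"({n // g},{lcm // g})" for n in nums)

print(convert_frac([(1, 2), (1, 3), (1, 4)]))  # (6,12)(4,12)(3,12)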
|
def stripped_spaces_around(converter):
def stripped_text_converter(value):
if value is None:
return None
return converter(value.strip())
return stripped_text_converter |
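A possible use of this wrapper, applying it to a plain built-in converter:

to_int = stripped_spaces_around(int)

assert to_int(" 42 ") == 42      # surrounding whitespace is stripped before converting
assert to_int(None) is None      # None passes through untouched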
<reponame>edisonlee0212/UniEngine
#include "DefaultResources.hpp"
#include <EditorManager.hpp>
#include <Gui.hpp>
#include <Material.hpp>
using namespace UniEngine;
static const char *MatPolygonMode[]{"Fill", "Line", "Point"};
static const char *MatCullingMode[]{"BACK", "FRONT", "OFF"};
static const char *MatBlendingMode[]{"OFF", "ONE_MINUS_SRC_ALPHA"};
MaterialFloatProperty::MaterialFloatProperty(const std::string &name, const float &value)
{
m_name = name;
m_value = value;
}
MaterialMat4Property::MaterialMat4Property(const std::string &name, const glm::mat4 &value)
{
m_name = name;
m_value = value;
}
void Material::OnCreate()
{
m_name = "New material";
m_program = DefaultResources::GLPrograms::StandardProgram;
}
void Material::OnInspect()
{
ImGui::Text("Name: %s", m_name.c_str());
if (ImGui::BeginPopupContextItem(m_name.c_str()))
{
if (ImGui::BeginMenu("Rename##Material"))
{
static char newName[256];
ImGui::InputText("New name##Material", newName, 256);
if (ImGui::Button("Confirm##Material"))
{
m_saved = false;
m_name = std::string(newName);
}
ImGui::EndMenu();
}
ImGui::EndPopup();
}
if (EditorManager::DragAndDropButton<OpenGLUtils::GLProgram>(m_program, "Program"))
m_saved = false;
ImGui::Separator();
if (ImGui::TreeNodeEx("PBR##Material", ImGuiTreeNodeFlags_DefaultOpen))
{
if (!m_albedoTexture.Get())
if (ImGui::ColorEdit3("Albedo##Material", &m_albedoColor.x))
{
m_saved = false;
}
if (!m_metallicTexture.Get())
if (ImGui::DragFloat("Metallic##Material", &m_metallic, 0.01f, 0.0f, 1.0f))
{
m_saved = false;
}
if (!m_roughnessTexture.Get())
if (ImGui::DragFloat("Roughness##Material", &m_roughness, 0.01f, 0.0f, 1.0f))
{
m_saved = false;
}
if (!m_aoTexture.Get())
if (ImGui::DragFloat("AO##Material", &m_ambient, 0.01f, 0.0f, 1.0f))
{
m_saved = false;
}
if (ImGui::DragFloat("Emission##Material", &m_emission, 0.01f, 0.0f, 10.0f))
{
m_saved = false;
}
ImGui::TreePop();
}
if (ImGui::TreeNodeEx("Others##Material"))
{
if (ImGui::Checkbox("Enable alpha discard##Material", &m_alphaDiscardEnabled))
{
m_saved = false;
}
if (m_alphaDiscardEnabled)
{
if (ImGui::DragFloat("Alpha discard offset##Material", &m_alphaDiscardOffset, 0.01f, 0.0f, 0.99f))
{
m_saved = false;
}
}
if(ImGui::Combo(
"Polygon Mode##Material",
reinterpret_cast<int *>(&m_polygonMode),
MatPolygonMode,
IM_ARRAYSIZE(MatPolygonMode))){
m_saved = false;
}
if(ImGui::Combo(
"Culling Mode##Material",
reinterpret_cast<int *>(&m_cullingMode),
MatCullingMode,
IM_ARRAYSIZE(MatCullingMode))){
m_saved = false;
}
if(ImGui::Combo(
"Blending Mode##Material",
reinterpret_cast<int *>(&m_blendingMode),
MatBlendingMode,
IM_ARRAYSIZE(MatBlendingMode))){
m_saved = false;
}
ImGui::TreePop();
}
if (ImGui::TreeNode(("Textures##Material" + std::to_string(std::hash<std::string>{}(m_name))).c_str()))
{
if(EditorManager::DragAndDropButton<Texture2D>(m_albedoTexture, "Albedo Tex")){
m_saved = false;
}
if(EditorManager::DragAndDropButton<Texture2D>(m_normalTexture, "Normal Tex")){
m_saved = false;
}
if(EditorManager::DragAndDropButton<Texture2D>(m_metallicTexture, "Metallic Tex")){
m_saved = false;
}
if(EditorManager::DragAndDropButton<Texture2D>(m_roughnessTexture, "Roughness Tex")){
m_saved = false;
}
if(EditorManager::DragAndDropButton<Texture2D>(m_aoTexture, "AO Tex")){
m_saved = false;
}
ImGui::TreePop();
}
}
void UniEngine::Material::SetMaterialProperty(const std::string &name, const float &value)
{
for (auto &property : m_floatPropertyList)
{
if (property.m_name.compare(name) == 0)
{
property.m_value = value;
return;
}
}
m_floatPropertyList.emplace_back(name, value);
}
void UniEngine::Material::SetMaterialProperty(const std::string &name, const glm::mat4 &value)
{
for (auto &property : m_float4X4PropertyList)
{
if (property.m_name.compare(name) == 0)
{
property.m_value = value;
return;
}
}
m_float4X4PropertyList.emplace_back(name, value);
}
void Material::SetTexture(const TextureType &type, std::shared_ptr<Texture2D> texture)
{
switch (type)
{
case TextureType::Albedo:
m_albedoTexture = texture;
break;
case TextureType::Normal:
m_normalTexture = texture;
break;
case TextureType::Metallic:
m_metallicTexture = texture;
break;
case TextureType::Roughness:
m_roughnessTexture = texture;
break;
case TextureType::AO:
m_aoTexture = texture;
break;
}
}
void Material::RemoveTexture(TextureType type)
{
switch (type)
{
case TextureType::Albedo:
m_albedoTexture.Clear();
break;
case TextureType::Normal:
m_normalTexture.Clear();
break;
case TextureType::Metallic:
m_metallicTexture.Clear();
break;
case TextureType::Roughness:
m_roughnessTexture.Clear();
break;
case TextureType::AO:
m_aoTexture.Clear();
break;
}
}
void Material::SetProgram(std::shared_ptr<OpenGLUtils::GLProgram> program)
{
m_program = std::move(program);
}
void Material::Serialize(YAML::Emitter &out)
{
m_albedoTexture.Save("m_albedoTexture", out);
m_normalTexture.Save("m_normalTexture", out);
m_metallicTexture.Save("m_metallicTexture", out);
m_roughnessTexture.Save("m_roughnessTexture", out);
m_aoTexture.Save("m_aoTexture", out);
m_program.Save("m_program", out);
out << YAML::Key << "m_polygonMode" << YAML::Value << (unsigned)m_polygonMode;
out << YAML::Key << "m_cullingMode" << YAML::Value << (unsigned)m_cullingMode;
out << YAML::Key << "m_blendingMode" << YAML::Value << (unsigned)m_blendingMode;
out << YAML::Key << "m_metallic" << YAML::Value << m_metallic;
out << YAML::Key << "m_roughness" << YAML::Value << m_roughness;
out << YAML::Key << "m_ambient" << YAML::Value << m_ambient;
out << YAML::Key << "m_emission" << YAML::Value << m_emission;
out << YAML::Key << "m_albedoColor" << YAML::Value << m_albedoColor;
out << YAML::Key << "m_alphaDiscardEnabled" << YAML::Value << m_alphaDiscardEnabled;
out << YAML::Key << "m_alphaDiscardOffset" << YAML::Value << m_alphaDiscardOffset;
}
void Material::Deserialize(const YAML::Node &in)
{
m_albedoTexture.Load("m_albedoTexture", in);
m_normalTexture.Load("m_normalTexture", in);
m_metallicTexture.Load("m_metallicTexture", in);
m_roughnessTexture.Load("m_roughnessTexture", in);
m_aoTexture.Load("m_aoTexture", in);
m_program.Load("m_program", in);
if (in["m_polygonMode"])
m_polygonMode = (MaterialPolygonMode)in["m_polygonMode"].as<unsigned>();
if (in["m_cullingMode"])
m_cullingMode = (MaterialCullingMode)in["m_cullingMode"].as<unsigned>();
if (in["m_blendingMode"])
m_blendingMode = (MaterialBlendingMode)in["m_blendingMode"].as<unsigned>();
m_metallic = in["m_metallic"].as<float>();
m_roughness = in["m_roughness"].as<float>();
m_ambient = in["m_ambient"].as<float>();
m_emission = in["m_emission"].as<float>();
m_albedoColor = in["m_albedoColor"].as<glm::vec3>();
m_alphaDiscardEnabled = in["m_alphaDiscardEnabled"].as<bool>();
m_alphaDiscardOffset = in["m_alphaDiscardOffset"].as<float>();
}
void Material::CollectAssetRef(std::vector<AssetRef> &list)
{
list.push_back(m_albedoTexture);
list.push_back(m_normalTexture);
list.push_back(m_metallicTexture);
list.push_back(m_roughnessTexture);
list.push_back(m_aoTexture);
list.push_back(m_program);
}
|
/**************************************************************************//**
* @file pdma.h
* @version V1.00
* $Revision: 3 $
* $Date: 15/06/10 9:11a $
* @brief Nano100 series PDMA driver header file
*
* @note
* Copyright (C) 2013 Nuvoton Technology Corp. All rights reserved.
*****************************************************************************/
#ifndef __PDMA_H__
#define __PDMA_H__
#ifdef __cplusplus
extern "C"
{
#endif
/** @addtogroup NANO100_Device_Driver NANO100 Device Driver
@{
*/
/** @addtogroup NANO100_PDMA_Driver PDMA Driver
@{
*/
/** @addtogroup NANO100_PDMA_EXPORTED_CONSTANTS PDMA Exported Constants
@{
*/
/*---------------------------------------------------------------------------------------------------------*/
/* Data Width Constant Definitions */
/*---------------------------------------------------------------------------------------------------------*/
#define PDMA_WIDTH_8 0x00080000UL /*!<DMA Transfer Width 8-bit */
#define PDMA_WIDTH_16 0x00100000UL /*!<DMA Transfer Width 16-bit */
#define PDMA_WIDTH_32 0x00000000UL /*!<DMA Transfer Width 32-bit */
/*---------------------------------------------------------------------------------------------------------*/
/* Address Attribute Constant Definitions */
/*---------------------------------------------------------------------------------------------------------*/
#define PDMA_SAR_INC 0x00000000UL /*!<DMA SAR increment */
#define PDMA_SAR_FIX 0x00000020UL /*!<DMA SAR fix address */
#define PDMA_SAR_WRA 0x00000030UL /*!<DMA SAR wrap around */
#define PDMA_DAR_INC 0x00000000UL /*!<DMA DAR increment */
#define PDMA_DAR_FIX 0x00000080UL /*!<DMA DAR fix address */
#define PDMA_DAR_WRA 0x000000C0UL /*!<DMA DAR wrap around */
/*---------------------------------------------------------------------------------------------------------*/
/* Peripheral Transfer Mode Constant Definitions */
/*---------------------------------------------------------------------------------------------------------*/
#define PDMA_SPI0_TX 0x00000000UL /*!<DMA Connect to SPI0 TX */
#define PDMA_SPI1_TX 0x00000001UL /*!<DMA Connect to SPI1 TX */
#define PDMA_UART0_TX 0x00000002UL /*!<DMA Connect to UART0 TX */
#define PDMA_UART1_TX 0x00000003UL /*!<DMA Connect to UART1 TX */
#define PDMA_USB_TX 0x00000004UL /*!<DMA Connect to USB TX */
#define PDMA_I2S_TX 0x00000005UL /*!<DMA Connect to I2S TX */
#define PDMA_DAC0_TX 0x00000006UL /*!<DMA Connect to DAC0 TX */
#define PDMA_DAC1_TX 0x00000007UL /*!<DMA Connect to DAC1 TX */
#define PDMA_SPI2_TX 0x00000008UL /*!<DMA Connect to SPI2 TX */
#define PDMA_TMR0 0x00000009UL /*!<DMA Connect to TMR0 */
#define PDMA_TMR1 0x0000000AUL /*!<DMA Connect to TMR1 */
#define PDMA_TMR2 0x0000000BUL /*!<DMA Connect to TMR2 */
#define PDMA_TMR3 0x0000000CUL /*!<DMA Connect to TMR3 */
#define PDMA_SPI0_RX 0x00000010UL /*!<DMA Connect to SPI0 RX */
#define PDMA_SPI1_RX 0x00000011UL /*!<DMA Connect to SPI1 RX */
#define PDMA_UART0_RX 0x00000012UL /*!<DMA Connect to UART0 RX */
#define PDMA_UART1_RX 0x00000013UL /*!<DMA Connect to UART1 RX */
#define PDMA_USB_RX 0x00000014UL /*!<DMA Connect to USB RX */
#define PDMA_I2S_RX 0x00000015UL /*!<DMA Connect to I2S RX */
#define PDMA_ADC 0x00000016UL /*!<DMA Connect to ADC */
#define PDMA_SPI2_RX 0x00000018UL /*!<DMA Connect to SPI2 RX */
#define PDMA_PWM0_CH0 0x00000019UL /*!<DMA Connect to PWM0 CH0 */
#define PDMA_PWM0_CH2 0x0000001AUL /*!<DMA Connect to PWM0 CH2 */
#define PDMA_PWM1_CH0 0x0000001BUL /*!<DMA Connect to PWM1 CH0 */
#define PDMA_PWM1_CH2 0x0000001CUL /*!<DMA Connect to PWM1 CH2 */
#define PDMA_MEM 0x0000001FUL /*!<DMA Connect to Memory */
/*@}*/ /* end of group NANO100_PDMA_EXPORTED_CONSTANTS */
/** @addtogroup NANO100_PDMA_EXPORTED_FUNCTIONS PDMA Exported Functions
@{
*/
/**
* @brief Get PDMA Interrupt Status
*
* @param[in] None
*
* @return None
*
* @details This macro gets the interrupt status.
* \hideinitializer
*/
#define PDMA_GET_INT_STATUS() ((uint32_t)(PDMAGCR->GCRISR))
/**
* @brief Get PDMA Channel Interrupt Status
*
* @param[in] u32Ch Selected DMA channel
*
* @return Interrupt Status
*
* @details This macro gets the channel interrupt status.
* \hideinitializer
*/
#define PDMA_GET_CH_INT_STS(u32Ch) (*((__IO uint32_t *)((uint32_t)&PDMA1->ISR + (uint32_t)((u32Ch-1)*0x100))))
/**
* @brief Clear PDMA Channel Interrupt Flag
*
* @param[in] u32Ch Selected DMA channel
* @param[in] u32Mask Interrupt Mask
*
* @return None
*
* @details This macro clear the channel interrupt flag.
* \hideinitializer
*/
#define PDMA_CLR_CH_INT_FLAG(u32Ch, u32Mask) (*((__IO uint32_t *)((uint32_t)&PDMA1->ISR + (uint32_t)((u32Ch-1)*0x100))) = (u32Mask))
/**
* @brief Check Channel Status
*
* @param[in] u32Ch The selected channel
*
* @return 0 = idle; 1 = busy
*
* @details Check the selected channel is busy or not.
* \hideinitializer
*/
#define PDMA_IS_CH_BUSY(u32Ch) ((*((__IO uint32_t *)((uint32_t)&PDMA1->CSR +(uint32_t)((u32Ch-1)*0x100))) & PDMA_CSR_TRIG_EN_Msk)? 1 : 0)
/**
* @brief Set Source Address
*
* @param[in] u32Ch The selected channel
* @param[in] u32Addr The selected address
*
* @return None
*
* @details This macro set the selected channel source address.
* \hideinitializer
*/
#define PDMA_SET_SRC_ADDR(u32Ch, u32Addr) (*((__IO uint32_t *)((uint32_t)&PDMA1->SAR + (uint32_t)((u32Ch-1)*0x100))) = (u32Addr))
/**
* @brief Set Destination Address
*
* @param[in] u32Ch The selected channel
* @param[in] u32Addr The selected address
*
* @return None
*
* @details This macro set the selected channel destination address.
* \hideinitializer
*/
#define PDMA_SET_DST_ADDR(u32Ch, u32Addr) (*((__IO uint32_t *)((uint32_t)&PDMA1->DAR + (uint32_t)((u32Ch-1)*0x100))) = (u32Addr))
/**
* @brief Set Transfer Count
*
* @param[in] u32Ch The selected channel
* @param[in] u32Count Transfer Count
*
* @return None
*
* @details This macro set the selected channel transfer count.
* \hideinitializer
*/
#define PDMA_SET_TRANS_CNT(u32Ch, u32Count) { \
if (((uint32_t)*((__IO uint32_t *)((uint32_t)&PDMA1->CSR + (uint32_t)((u32Ch-1)*0x100))) & PDMA_CSR_APB_TWS_Msk) == PDMA_WIDTH_32) \
*((__IO uint32_t *)((uint32_t)&PDMA1->BCR + (uint32_t)((u32Ch-1)*0x100))) = ((u32Count) << 2); \
else if (((uint32_t)*((__IO uint32_t *)((uint32_t)&PDMA1->CSR + (uint32_t)((u32Ch-1)*0x100))) & PDMA_CSR_APB_TWS_Msk) == PDMA_WIDTH_8) \
*((__IO uint32_t *)((uint32_t)&PDMA1->BCR + (uint32_t)((u32Ch-1)*0x100))) = (u32Count); \
else if (((uint32_t)*((__IO uint32_t *)((uint32_t)&PDMA1->CSR + (uint32_t)((u32Ch-1)*0x100))) & PDMA_CSR_APB_TWS_Msk) == PDMA_WIDTH_16) \
*((__IO uint32_t *)((uint32_t)&PDMA1->BCR + (uint32_t)((u32Ch-1)*0x100))) = ((u32Count) << 1); \
}
/**
* @brief Stop the channel
*
* @param[in] u32Ch The selected channel
*
* @return None
*
* @details This macro stop the selected channel.
* \hideinitializer
*/
#define PDMA_STOP(u32Ch) (*((__IO uint32_t *)((uint32_t)&PDMA1->CSR + (uint32_t)((u32Ch-1)*0x100))) &= ~PDMA_CSR_PDMACEN_Msk)
void PDMA_Open(uint32_t u32Mask);
void PDMA_Close(void);
void PDMA_SetTransferCnt(uint32_t u32Ch, uint32_t u32Width, uint32_t u32TransCount);
void PDMA_SetTransferAddr(uint32_t u32Ch, uint32_t u32SrcAddr, uint32_t u32SrcCtrl, uint32_t u32DstAddr, uint32_t u32DstCtrl);
void PDMA_SetTransferMode(uint32_t u32Ch, uint32_t u32Periphral, uint32_t u32ScatterEn, uint32_t u32DescAddr);
void PDMA_SetTimeOut(uint32_t u32Ch, uint32_t u32OnOff, uint32_t u32TimeOutCnt);
void PDMA_Trigger(uint32_t u32Ch);
void PDMA_EnableInt(uint32_t u32Ch, uint32_t u32Mask);
void PDMA_DisableInt(uint32_t u32Ch, uint32_t u32Mask);
/*@}*/ /* end of group NANO100_PDMA_EXPORTED_FUNCTIONS */
/*@}*/ /* end of group NANO100_PDMA_Driver */
/*@}*/ /* end of group NANO100_Device_Driver */
#ifdef __cplusplus
}
#endif
#endif //__PDMA_H__
/*** (C) COPYRIGHT 2013 Nuvoton Technology Corp. ***/
|
David Portnoy of Barstool Sports (second from left) joined Felger & Massarotti Tuesday January 27, 2015 at Super Bowl XLIX Radio Row in Phoenix. (Photo by Michael Hurley/CBS Boston)
“El Pres” David Portnoy of Barstool Sports joined 98.5 The Sports Hub’s Felger & Massarotti Tuesday afternoon on Radio Row to give his Boston sports media power rankings and his thoughts on DeflateGate, a topic over which he has a big bone to pick.
Pres is peeved at the two afternoon drive hosts for their perceived bias against the Patriots.
“I’m going to give you guys the chance to get out in front of this. I can’t tell whether you’re doing it for ratings or if you really believe what you’re saying. You’re so anti-Patriot, and you’ve always been anti-Patriot. … This has turned into the hate show,” Portnoy told Felger & Mazz directly.
“No matter what happens, the Patriots could win 1,000 straight games and never get in trouble, and you’re looking for any little angle to go at ’em.”
Mazz disagrees, and genuinely feels like he’s been pretty positive about the team since Week 4 (citing their improved defense among other things), up until this whole ball deflation thing of course.
But Portnoy just doesn’t see the moral outrage response and getting all worked up over pounds per square inch.
“You guys are making it sound like it’s ethically impossible to root for this team, like you’re scarred as a human that cuts you so deep, when the Seahawks are running through PEDs — but nobody cares about that,” said Portnoy, who earlier in the day confronted the Indianapolis reporter who broke the story.
“[Bob Kravitz] wouldn’t go on camera, but I said to him what about the Colts and pumping in crowd noise? Is that not just as bad? And he [denied it]. So, it’s like, honestly. . . it’s only the Patriots. I think most of it stems from the hate of Belichick.”
Felger & Mazz vs. Portnoy continued for what felt like 10 more rounds, including the new development about the locker room attendant taking the footballs with him inside the bathroom.
“He had to take a leak! You’ve never had to take a leak? Since when is going to the bathroom a federal investigation?” shouted an impassioned Portnoy.
Portnoy then bashed the league for setting up a sting operation, which Felger & Mazz haven’t ruled out as a possibility.
Listen below for the first part of the debate:
Listen below for the second part of the debate:
Tune in to Super Bowl XLIX on 98.5 The Sports Hub — the flagship station of the New England Patriots. It’s the only place to hear Bob Socci & Scott Zolak’s local call of the game! |
import { createHash } from 'crypto';
import { promise as fastq } from 'fastq';
import { promises as fs } from 'fs';
import { extname, resolve } from 'path';
import { KaraList } from '../lib/types/kara';
import { getConfig, resolvedPathRepos } from '../lib/utils/config';
import { createHardsub } from '../lib/utils/ffmpeg';
import { fileExists, resolveFileInDirs } from '../lib/utils/files';
import logger, { profile } from '../lib/utils/logger';
import { getState } from './state';
let queue = null;
export async function initHardsubGeneration() {
queue = fastq<never, [string, string, string, string], void>(wrappedGenerateHS, 1);
}
async function wrappedGenerateHS(payload: [string, string, string, string]) {
const [mediaPath, subPath, outputFile, kid] = payload;
logger.info(`Creating hardsub for ${mediaPath}`, {service: 'Hardsubs'});
if (await fileExists(outputFile)) return;
const assPath = subPath ? `${kid}.ass` : null;
if (assPath) await fs.copyFile(payload[1], assPath);
try {
await createHardsub(mediaPath, assPath, outputFile);
logger.info(`${queue.length()} hardsubs left in queue`, {service: 'Hardsubs'});
} catch (err) {
logger.error(`Error creating hardsub for ${mediaPath} : ${err}`, {service: 'Hardsubs', obj: err});
throw err;
} finally {
if (assPath) await fs.unlink(assPath);
}
}
export async function generateHardsubs(karas: KaraList) {
logger.info('Generate subchecksums', {service: 'Hardsubs'});
interface HardsubInfo {
kid: string,
mediafile: string,
mediasize: number,
subfile: string,
repository: string,
subchecksum: string
}
const mediaMap = new Map<string, HardsubInfo>();
const mediaWithInfosSet = new Set();
for (const k of karas.content) {
mediaMap.set(k.kid, {
kid: k.kid,
mediafile: k.mediafile,
mediasize: k.mediasize,
subfile: k.subfile,
repository: k.repository,
subchecksum: null,
});
}
for (const media of mediaMap.values()) {
try {
const subfile = await resolveFileInDirs(media.subfile || 'no_ass.txt', resolvedPathRepos('Lyrics', media.repository));
media.subchecksum = await generateSubchecksum(subfile[0]);
} catch (err) {
media.subchecksum = 'no_ass_file';
}
const ext = extname(media.mediafile);
mediaWithInfosSet.add(media.mediafile.replace(ext, `.${media.mediasize}.${media.subchecksum}.mp4`));
}
logger.info('Generated subchecksums', {service: 'Hardsubs'});
const hardsubDir = resolve(getState().dataPath, getConfig().System.Path.Hardsubs);
const hardsubFiles = await fs.readdir(hardsubDir);
const hardsubSet = new Set<string>(hardsubFiles);
// Remove unused hardsub files
profile('removeHardsubs');
hardsubFiles.forEach((file: string) => {
const fileParts = file.split('.');
if (mediaMap.has(fileParts[0])) {
// Compare mediasizes. If mediasize or subchecksum are different, remove file
if (mediaMap.get(fileParts[0]).mediasize !== +fileParts[1] || mediaMap.get(fileParts[0]).subchecksum !== fileParts[2]) {
fs.unlink(resolve(hardsubDir, file));
logger.info(`Removing ${file}`, {service: 'Hardsubs'});
}
}
});
profile('removeHardsubs');
profile('createHardsubs');
for (const media of mediaMap.values()) {
try {
const hardsubFile = `${media.kid}.${media.mediasize}.${media.subchecksum}.mp4`;
if (!hardsubSet.has(hardsubFile)) {
const mediaPath = (await resolveFileInDirs(media.mediafile, resolvedPathRepos('Medias', media.repository)))[0];
let subPath = null;
if (media.subfile) {
subPath = (await resolveFileInDirs(media.subfile, resolvedPathRepos('Lyrics', media.repository)))[0];
}
const outputFile = resolve(hardsubDir, hardsubFile);
queue.push([mediaPath, subPath, outputFile, media.kid]);
}
} catch (error) {
logger.error(`Error when creating hardsub for ${media.mediafile}: ${error}`, {service: 'Hardsubs'});
}
}
profile('createHardsubs');
}
async function generateSubchecksum(path: string) {
let ass = await fs.readFile(path, {encoding: 'utf-8'}).catch(reason => {
if (reason.code === 'ENOENT') {
return 'no_ass_file';
}
throw reason;
});
if (ass === 'no_ass_file') {
return ass;
}
ass = ass.replace(/\r/g, '');
return createHash('md5').update(ass, 'utf-8').digest('hex');
}
|
EA is shutting down Visceral Games, the studio famous for creating the Dead Space franchise as well as other games like Battlefield Hardline and Dante’s Inferno.
EA Executive Vice President Patrick Söderlund said in a blog post that Visceral will be “ramping down and closing,” and that they are “in the midst of shifting as many of the team as possible to other projects and teams at EA.”
It’s expected that many of the people at the studio will be left without a job. However, many people in the games industry from studios including Treyarch, Avalanche Studios, Naughty Dog, Ubisoft San Francisco, Insomniac, Rockstar Games, Bungie, and more have shared their studio’s job listings in response to the closure.
My heart goes all of you affected by the closure. It's not much, but we're looking for people! #VisceralJobs https://t.co/RzNXLkdypm — Jose Abalos (@jpalz) October 17, 2017
Another jobs thread :( …Need programmers & designers in NYC, and some of everything in Stockholm. #VisceralJobs https://t.co/6Y9u6vOCP5 — Brian Venisky (@TechAnimator) October 17, 2017
Motive has a new IP and are hiring a ton of positions. Visceral buds, get at me for referrals. https://t.co/0binx1mZI9 #VisceralJobs — Mitch Dyer (@MitchyD) October 17, 2017
Shocked by the Visceral closure. For anyone impacted, here's how to find our current openings: https://t.co/shlMr6AqZe #VisceralJobs https://t.co/Xqr1Son8TP — Scott Lowe (@ScottLowe) October 17, 2017
#VisceralJobs Anyone want to jump the pond and come live in Poland? It's pretty nice over here. https://t.co/9QOoVEfAY5 — Seb McBride also moonlighting as MoonlightCDPR (@Darthsebious) October 17, 2017
https://twitter.com/MalikRahili/status/920373015711494146
Rockstar Games is hiring for multiple roles. Check out https://t.co/Nl8Is1JlWT #VisceralJobs
Edinburgh is windy but lovely :) — Cezar de Almeida (@CezardeAlmeida) October 17, 2017
#VisceralJobs Bungie has plenty of positions open! I hope everyone lands on their feet. ❤️ https://t.co/Jbf1nKS2Wb — Cortana Fiveeeee (@CortanaV) October 17, 2017
Sigh another jobs thread…
We are hiring at Bossa Studios both in Seattle and London.https://t.co/Swl1yPlsQ5 or ask me.#VisceralJobs — Chet Faliszek (@chetfaliszek) October 17, 2017
I'm so sorry to hear about Visceral Games. Lots of love to you guys.
If you need it, Zenimax is always looking for good people and most likely, you won't even have to work with me! #VisceralJobs https://t.co/6577FFVxOI — Ju Li K (@Juleshortstuff) October 17, 2017
Here is a Google Document with a long list of Game studios in need of developers created in response to the closure.
If any game studio is looking for people, be sure to share it on Twitter with the hashtag VisceralJobs.
It’s possible that any developers let go could come together and make their own studio. Perception was a game crowd-funded on Kickstarter and developed by The Deep End Games, which included Visceral level designer Ben Johnson.
Kotaku reports that EA was not clear about the status of Amy Hennig, the former Naughty Dog director who joined Visceral to direct their Star Wars game. An EA spokesperson told the publication that they “are in discussions with Amy about her next move.”
As for Visceral’s Star Wars game, it will be revamped and moved to a different studio. Söderlund said that the game was shaping up to be a story-based, linear adventure game, but “Throughout the development process, we have been testing the game concept with players, listening to the feedback about what and how they want to play, and closely tracking fundamental shifts in the marketplace. It has become clear that to deliver an experience that players will want to come back to and enjoy for a long time to come, we needed to pivot the design.” He also said that the game’s launch will be moved from its original launch period of late fiscal year 2019. Kotaku said that the game will be taken over by a development team from across Worldwide Studios led by the EA Vancouver team. Steve Anthony will lead the team and use much of the work already created by Visceral.
<reponame>chen0040/mxnet-vqa
import sys
import pandas as pd
from mxnet_vqa.utils.image_utils import Vgg16FeatureExtractor
import os
import numpy as np
import logging
import time
import pickle
import mxnet as mx
def load_coco_2014_val_images_dict(coco_images_dir_path):
result = dict()
if not os.path.exists(coco_images_dir_path):
logging.error('Please download and extract val2014 images to %s before continuing',
coco_images_dir_path)
sys.exit()
for root, dirs, files in os.walk(coco_images_dir_path):
for fn in files:
if fn.lower().endswith('.jpg'):
fp = os.path.join(root, fn)
image_id = int(fn.replace('COCO_val2014_', '').replace('.jpg', ''))
result[image_id] = fp
return result
def get_coco_2014_val_images(data_dir_path, coco_images_dir_path, max_lines_retrieved=-1):
data_path = os.path.join(data_dir_path, 'data/val_qa')
coco_image_paths = load_coco_2014_val_images_dict(coco_images_dir_path)
df = pd.read_pickle(data_path)
image_id_list = df[['image_id']].values.tolist()
result = list()
for image_id in image_id_list:
if image_id[0] in coco_image_paths:
result.append(coco_image_paths[image_id[0]])
else:
logging.warning('Failed to get image path for image id %s', image_id[0])
if max_lines_retrieved != -1 and len(result) == max_lines_retrieved:
break
return result
def checkpoint_features(pickle_path, features):
with open(pickle_path, 'wb') as handle:
logging.debug('saving image features as %s', pickle_path)
pickle.dump(features, handle, protocol=pickle.HIGHEST_PROTOCOL)
def get_coco_2014_val_image_features(data_dir_path, coco_images_dir_path, ctx=mx.cpu(), max_lines_retrieved=-1):
pickle_name = 'coco_val2014_feats'
if max_lines_retrieved == -1:
pickle_name = pickle_name + '.pickle'
else:
pickle_name = pickle_name + '_' + str(max_lines_retrieved) + '.pickle'
pickle_path = os.path.join(data_dir_path, pickle_name)
features = dict()
if os.path.exists(pickle_path):
logging.info('loading val2014 image features from %s', pickle_path)
start_time = time.time()
with open(pickle_path, 'rb') as handle:
features = pickle.load(handle)
duration = time.time() - start_time
logging.debug('loading val2014 features from pickle took %.1f seconds', duration)
fe = Vgg16FeatureExtractor(model_ctx=ctx)
data_path = os.path.join(data_dir_path, 'data/val_qa')
coco_image_paths = None
df = pd.read_pickle(data_path)
image_id_list = df[['image_id']].values.tolist()
result = list()
start_extracting_time = time.time()
features_updated = False
for i, image_id in enumerate(image_id_list):
if image_id[0] not in features:
if coco_image_paths is None:
                coco_image_paths = load_coco_2014_val_images_dict(coco_images_dir_path)
if image_id[0] in coco_image_paths:
img_path = coco_image_paths[image_id[0]]
f = fe.extract_image_features(img_path)
feat = f.asnumpy()
features[image_id[0]] = feat
features_updated = True
else:
logging.warning('Failed to extract image features for image id %s', image_id[0])
if max_lines_retrieved != -1 and (i + 1) == max_lines_retrieved:
break
if (i + 1) % 500 == 0:
if max_lines_retrieved == -1:
logging.debug('Has extracted features for %d records (Elapsed: %.1f seconds)',
i + 1,
(time.time() - start_extracting_time))
else:
logging.debug('Has extracted features for %d records (Progress: %.2f %%) (Elapsed: %.1f seconds)',
i + 1,
((i+1) * 100 / max_lines_retrieved),
(time.time() - start_extracting_time))
if (i+1) % 1000 == 0 and features_updated:
checkpoint_features(pickle_path, features)
features_updated = False
if features_updated:
checkpoint_features(pickle_path, features)
for i, image_id in enumerate(image_id_list):
if image_id[0] in features:
result.append(features[image_id[0]][0])
if max_lines_retrieved != -1 and (i + 1) == max_lines_retrieved:
break
return np.array(result)
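

if __name__ == '__main__':
    # Minimal usage sketch (not part of the original module): the directory names below are
    # assumptions -- point them at your local data folder and the extracted COCO val2014 images.
    logging.basicConfig(level=logging.DEBUG)
    feats = get_coco_2014_val_image_features(data_dir_path='very-large-data',
                                             coco_images_dir_path='very-large-data/coco/val2014',
                                             ctx=mx.cpu(),
                                             max_lines_retrieved=1000)
    print('extracted feature matrix shape:', feats.shape)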
|
169 Mid-wall fibrosis is an independent predictor of mortality in patients with aortic stenosis. Introduction: Predicting adverse clinical outcomes in aortic stenosis is challenging. Late gadolinium enhancement (LGE) has been associated with an adverse prognosis in a range of other cardiac conditions. Using late gadolinium enhancement, we sought to assess the prognostic significance of mid-wall and infarct patterns of myocardial fibrosis in aortic stenosis. Methods: Between January 2003 and October 2008, consecutive patients with moderate or severe aortic stenosis (aortic valve area <1.5 cm2) underwent cardiovascular magnetic resonance with assessment of myocardial fibrosis by late gadolinium enhancement. Patients were categorised into absent, mid-wall or infarct patterns of late gadolinium enhancement by blinded independent observers. Patient follow-up was completed using the National Strategic Tracing Scheme. Results: 143 patients (aged 68±14 years; 97 male) were followed up for 2.0±1.4 years. 81 patients had coronary artery disease, 72 underwent aortic valve replacement and 27 died. Compared to those with no late gadolinium enhancement (n=49), univariate analysis revealed that patients with mid-wall fibrosis (n=54) had an eight-fold increase in all-cause mortality despite similar aortic stenosis severity and coronary artery disease burden. Patients with an infarct pattern (n=40) had a six-fold increase. Mid-wall fibrosis (HR 5.35; 95% CI 1.16 to 24.56; p=0.03) and ejection fraction (HR 0.96; 95% CI 0.94 to 0.99; p=0.01) were independent predictors of all-cause mortality by multivariate analysis. Conclusion: Mid-wall fibrosis is an independent predictor of mortality in patients with moderate and severe aortic stenosis. It has incremental prognostic value to ejection fraction and may provide a useful method of risk stratification in patients with advanced disease (Abstract 169 figure 1).
Abstract 169 Figure 1: Kaplan-Meier curves of cardiac mortality (left) and all-cause mortality (right) according to pattern of LGE (A = No LGE, B = Infarct LGE, C = Mid-wall LGE).
Abstract 169 Table 1 (values given as No LGE / Mid-wall LGE / Infarct LGE / p value):
Number of patients: 49 / 54 / 40
Mean age, yrs: 64±16 / 70±11 / 70±13 / 0.031
Documented CAD, %: 37 / 42 / 98 / <0.001
Ejection fraction, %: 69±13 / 58±21 / 44±18 / <0.001
Aortic valve area: 1.05±0.37 / 1.00±0.31 / 0.91±0.26 / 0.111
Indexed LV mass, g/m2: 92.6* (86.0, 99.6) / 113.7* (104.5, 123.8) / 97.8* (90.9, 105.2) / 0.005
Mortality rate (deaths/1000 pt years): 15.7 / 142.7 / 173.7
* Geometric mean (95%) |
package io.ebeaninternal.xmlmapping;
import io.ebeaninternal.xmapping.api.XmapEbean;
import io.ebeaninternal.xmapping.api.XmapService;
import java.util.List;
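/**
 * JAXB-based implementation of {@link XmapService} that reads external Ebean XML mapping
 * files from the given mapping locations using the supplied class loader.
 */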
public class JaxbXmapService implements XmapService {
@Override
public List<XmapEbean> read(ClassLoader classLoader, List<String> mappingLocations) {
return new InternalConfigXmlRead(classLoader, mappingLocations).read();
}
}
|
A Unique Way to Analyze the Realities of Health Workers Within A Hermeneutic-Dialectic Perspective We propose an original method of analysis within a hermeneutic-dialectic framework theoretically supported by the work of Hans-Georg Gadamer, Bertell Ollman, and Maria Cecilia Minayo. We draw a unique means of analysis to guide an understanding of the labor realities for health workers who care for older adults. This method of analysis proposes a way to create consensual opinions, question this consensus and then put these aspects in a dialogical encounter with the qualitative researcher and interpreter. We illustrate the application of this methodological process using dialogical conversations and narrative interviews with 12 health workers from the Brazilian Unified Health System (SUS). The hermeneutic-dialectic process of interpretation involves us in movements of comprehension, which allows us to understand realities including the actual sad situation of some territories, creating possibilities for home health care, the integrality of attention as a tool for providing home assistance in the primary care field, and the necessity of a set of services to provide an organizational routine for home visits to older people. This methodological analysis has the potentiality to help develop other research on similar topics. The results illustrate that good home care for older adults requires effective articulation between compassionate workers and public health agencies. Introduction What does it mean to understand the experiences of health professionals who take care of older adults inside their homes? How can we understand these realities from health workers in a public health system? What does it mean to understand the perspectives of professionals from the Brazilian public health system? Do the professionals' traditions and personal agencies reflect the realities of this work? Can we understand these realities by questioning the consensus of the professionals' narratives? In any case, some situations beyond these questions require an analysis supported by different theoretical perspectives to construct a scientific understanding. These situations guide the development of this paper. Health workers who perform home visits and in-home health care for dependent older adults face difficulties and stressful situation during their daily work. The situations these workers face must be analyzed to comprehend the realities of their work. Personal agency, territories of work, nature of agency-based caregiving, witnessing the sad environment and older individuals' vulnerability, and the economic costs of mobility are some examples of these situations (;Marques & Bulgarelli, 2020;;Zoeckler, 2018). Thus, it is important to understand these workers' realties to construct data and scientific knowledge that can guide new polices, new public agency strategies, and new attitudes to make this work less stressful. Because understanding and comprehending are relational processes, we were encouraged and motivated to present this paper. Such understanding takes place in a relational dialogue reached through dialogical conversations surrounded by dialectical questioning. From an ontological perspective, Gergen highlighted that understanding is a process constructed together in our "relatedness" and affection with the "object/phenomenon." Therefore, it is possible for researchers who are affected by this phenomenon to create an understanding of the health workers' realities. 
Thus, we create and propose a unique way to analyze data within a hermeneutic-dialectic method to understand these realities. To create this possibility of analysis, we appropriate several concepts and assumptions related to Brazilian population aging, realities and traditions of public health agencies, and policies toward family health care. This analysis may be applied worldwide in other research with similar aims, approaches, participants, and phenomenon. We frame this work in terms of gathering hermeneutical, social, and dialectical perspectives. This paper proposes our original inductive analysis that combines the researchers' interpretations and insights with the theoretical perspective of philosophers Hans-Georg Gadamer and Bertell Ollman and the methodological support of sociologist Maria Cecilia Minayo. We collect these epistemological concepts and theoretical supports, within a hermeneuticdialectical method, to propose a methodological way of analysis articulating the narratives, speeches, and emotions of health workers during their daily labor. The aim of this paper is to share our unique scientific method of data analysis to understand specific realities within a hermeneutic-dialectical method. We aim to share the application of this method analysis by illustrating our research object and results. Epistemological Background, Concepts, and Assumptions In Brazil, the growth in the older population is characterized by high speed (Veras & Oliveira, 2016). The contingent of older people has increased from 4% in 1940 to 9% in 2000, and it is estimated that in 2050, 18% of the population will consist of aged people. Such data corroborate projections by the World Health Organization (WHO), which estimates that in 2025, Brazil will be in the sixth position worldwide in the total number of senior citizens. In response, home health care for this population has received investments from a public health agency named the Unified Health System (the Portuguese acronym is SUS) to promote public health actions and services to provide care for this portion of the Brazilian population. Brazil invests in the structuring of primary health care (PHC) services through the National Policy for Basic Care (NPBC), which brings paths for home care health assistance by means of the Family Health Strategy (FHS). This strategic work process is characterized by developing health care in health centers and at the homes of vulnerable people, such as dependent and bedridden older people. Brazilian policies regarding SUS traditionally and historically envision home care as a set of health services provided to individuals allowing them to live with dignity within the comfort of their homes. This is seen as a facility of SUS, which has domiciliary visits and in-home care as strategies to access vulnerable older adults (). Regarding these strategies, teamwork articulation is important, and a team needs to move in the same direction toward a shared goal. This articulation happens among different professionals who discuss and collectively propose clinical treatments, preventive care, palliation, and home healing care (). The SUS in-home health services are organized by health professionals (physicians, nurses, dentists, and health technicians) who work together as an integrated team in a specify territory. Traditionally, this professional team develops monthly domiciliary visits to identify vulnerable older adults' needs in a specific territory. 
The health assistance is developed inside the older person's home by any kind of health treatment possible depending on technical limitation (). The health professionals' perspective regarding home health assistance can be understood through their discourses and practices. We wish to convey that the current practices and situations that a health professional develops and faces during their work, while visiting a vulnerable old person, can be expressed by speech and daily narratives. Their narratives bring up consensus that can be questioned to create divergencies that, when analyzed together as Minayo proposes, can create a rich comprehension of a research object. There are different interfaces of home care in primary health care, which qualitatively and quantitatively address the perceptions of workers involved in this care, the network of services, and the social support. Discussion in the literature, from family health professionals' perspectives, considers the importance of communication with the family and the service organization (). There are significant factors that influence this communication (), including communication with the palliative care team. Professionals consider communication as an indispensable tool of patient care to promote palliative care for older people who are naturally getting close to the ends of their lives (C. G. D. ). For the methodological analysis that we propose with this paper, the communication between researchers and workers is vital. In addition to these aspects, home care is an instrument of the ontological organizational network of services, particularly for the older population, because it encompasses the use of different care technologies (). However, there are gaps in the literature regarding methodologies that highlight the importance of views from professionals who develop domiciliary care for older people. What does it mean to work in this model of care from the workers' viewpoint? How can we analyze this phenomenon? We believe that contemporary hermeneutics and dialectical perspectives can be used together to potentially answer these questions. We seek to analyze this viewpoint to support the comprehension of a worldwide phenomenon and its experience of truth. We believe that by encountering different perspectives from workers, we can respond to these questions and guide the construction of knowledge on methodological possibilities to fill this literature gaps. Theoretical Perspectives to Support a Unique Way of Qualitative Data Analysis Within a Hermeneutics-Dialectical Method The hermeneutics-dialectical method is a perfect scenario for creating different perspectives of analysis. It is possible because the flexibility of constructing consensus and at the same time discuss the consensus using dialectical questioning, reached in the participant narratives, is something that can enrich different kinds of qualitative data analysis. The hermeneutics-dialectical method has guided several studies on research objects/phenomena including transformative thinking, harm reduction strategies, workers' mental health, and male health (Khasri, 2020;;Tristo & Avellar, 2019). This method is the contemporary art of interpretation (). The scope for new way of analysis is possible from a hermeneutics-dialectical perspective. Therefore, we proposed a unique analysis combining philosophical orientations of Hans-Georg Gadamer, Bertell Ollman, and Maria Cecilia Minayo (Gadamer, 2018;Minayo, 2002;Ollman, 2003;Ollman & Smith, 2008). 
These viewpoints led us to construct an original way to analyze qualitative data within a hermeneuticsdialectical framework (Figure 1). This led us to design research using an inductive method based on these sociologists' theoretical support, the participant's narratives, and our insights. We created this analysis orientation through different approaches such as those Minayo highlighted with the epistemological and teleological hermeneuticsdialectical process to design a methodological way to understand realities regarding health workers in the current world. Hans-Georg Gadamer was a modern German philosopher who brought the perspective of hermeneutics as one way to understand realities within the language in use. This hermeneutic philosophy is an art of interpretation grounded in the current language in use, while we are talking about health issues. This language exists in our societies surrounded by diverse traditions, dialogue, preconceptions, and social construction of consensus. Currently, the hermeneutics brought by Gadamer is considered a contemporary philosophical school in which a reality is always the result of an interpretation (). Interpretation can be accomplished with the interaction between different voices, languages, preconceptions, and personal agencies articulating themselves to create a consensual opinion about something. This is the hermeneutical movement based on Gadamer's dialogical model of interpretation. This means that comprehension of the consensus emerges from dialogue and conversation. This process was encompassed by the fusion of horizons regarding participants' discourses of practices, emotions, and ways to cope with empirical work problems. This philosophical background teaches us how to understand consensus by listening to participants' voices, identifying their preconceptions, and positioning ourselves as interpreters. To enrich the possibility of analyzing health workers' narratives and understanding their realities in depth, we considered the discursive perspective of Bertel Ollman, a North American contemporary sociologist who fosters understanding how social classes struggle to work in modern societies. His discursive dialectical perspective led us to comprehend the fact that an individual's work makes the world have a sense and representation of classes have their social relevance. According to Ollman, dialectics is a method of thinking about world reality in its scope of interactions. The concepts of representations of classes, in a Marxist perspective, led us to consider Ollman's dialectic perspective to understand health worker labor in modern societies. This guided our analysis to question some consensuses that were first created by means of the hermeneutical philosophy once followed. Bertell Ollmann brings the complex Marxist theory into a dialectical way of understanding work as something that affects us internally in the process of constructing its social results (Ollman & Smith, 2008). With these two different approaches, we followed Maria Cecilia Minayo's framework. This Brazilian contemporary sociologist proposes a way to put hermeneutics and dialectics together to create a comprehension of the world with dialogical encounters. This sociologist states that there is a connection between consensus, dialectical questions, researchers' insights, and interpretations regarding social class speeches. 
These dialogical encounters happen between the researchers and theories while approaching the chosen theories to construct their interpretations, and between the researchers and the participants while developing interviews with them. The current literature presents the applicability of Minayo's perspective to analyze phenomenon regarding SUS workers. Several issues are studied using this perspective, such as workers' medication to avoid mental suffering, training of professionals to identify situations of violence, stigmatizing, and community health workers' mental health (;;). According to such theoretical orientations, we can create a way to analyze data grounded in methodological steps as follows. We started the construction of our analytical process by means of a conversational perspective of analysis. Conversation about in-home heath care is a topic that spoke to us. Methodologically, such conversations emerged through qualitative interviews, which led us to identify similar and consensual opinions and point-of-views regarding the workers labor realities. This means that our interpreter's voices suspended our prejudices and traditions and we created, initially, a hermeneutical agreement with the participants' voices reflected in consensus/themes ( Figure 1) (Gadamer, 1999;Vlduescu, 2018). While looking to the health workers' speeches, we had in mind that they were representing their work class. In other words, the consensual themes emerged bringing the health worker classes' speeches. In a second dialectical movement, we-the researchers-identified contradictions throughout questioning the professionals' speeches, to identify the actual meaning of the consensuses. We created dialectical themes ( Figure 1). This was a dialectical movement of comprehension as suggested by Minayo. This dialectics perspective, represented in the movement of making questions regarding the consensus, allowed us to go beyond the speeches and understand how it relates to the labor context. The dialectical questions, presented in our analysis, were developed through intriguing situations identified during the dialogue regarding the research object. We identified questions/dialectical matters regarding work practices and coping with emotions and affections during the development of their daily work. It is important to point out that our participants were health workers from SUS, and they represent one class of Brazilian health workers. In this case, our participants represented the class of workers from a public health agency. In a final movement, we connected, by means of our insights and interpretation, the hermeneutical consensuses, and the dialectical themes to understand health workers' realities. This means that we brought the dialogical encounter between our interpreter voices and the participant voices in a constructionist analysis. In summary, we believe this method of analysis within a hermeneutical-dialectical method can be applied to other research because worldwide health workers' narratives are socially and collectively constructed though their daily work speeches. Talking to the health workers and analyzing their narratives under Gadamer, Olmann and Minayo's theoretical support may be a possible methodological process to reach an understanding about their realities. Toward this, we focus on health workers from the Brazilian public health system, which is a very challenging scenario consisting of professionals engaged in developing good healthcare and assistance. 
We encourage the use of this analysis to understand any kind of health workers' realities in their scenarios of practices. Presenting the Applicability and the Findings in Terms of Illustrating This Unique Way of Analysis We believe our method of analysis is unique, original and different from other qualitative analyses. It enables us to put together three different philosophers' perspectives to analyze the realities of health workers. There is no methodology in the literature that presents a way to use dialogical encounters to reach a specific working-class consensus and at the same time question it under Bertell Ollman's support. It is important to highlight that this method of analysis has some limitations. For example, it may be used to understand realities reported only by health workers. Even when it has a relational perspective, this method of analysis does not require participation from home health care patients. We understand that realities are "objects of the world" that can be understood with many discourses and traditions brought from a unique class, such as the worker class. Further studies with other methodology can examine other discourses such as the ways home health care patients understand home health care. Even with some limitations, we believe that other scholars can engage with and use this method to analyze their data regarding workers' narratives. There is no other perspective in the literature that puts together different philosophers' perspectives to embrace comprehension of health worker classes within a hermeneutical-dialectical method. Scholars can use this method of analysis to understand other realities form different classes of health workers, seeking their contexts, personal agencies, and preconceptions and constructing a dialogical encounter with their own interpretation. This method of analysis can be applied to research studying health workers from hospitals, nursing homes, hospices and primary health care centers from public or private agencies. The development of our analysis happened simultaneously with data production for research aiming to analyze home care for vulnerable older adults from the health workers' perspectives. During the process of data production and the approach to the thoughts of the specific sociologists, we constructed trigger questions to develop narrative interviews, participant selection, dialogical encounters/interviews, data analysis, and data interpretation. Toward this, we collected/produced data through 12 semistructured narrative interviews with health professionals (physicians, dentists, and nurses) who worked in the SUS of Porto Alegre (RS). No worker refused to participate or dropped out of the interview. The interviewees' schooling ranged from eight to twelve years of study, and their time spent working with PHC at SUS ranged from two and a half to six years. The average age was 29 years, and 16 (73%) were female. A female graduate student, the author of the present paper, interviewed the workers personally at health centers. We believe that a narrative is a hermeneutical and dialectical device to reach reality in terms of consensus and contradictions (Abettan, 2017;Gadamer, 1999;Ollman & Smith, 2008). The interviews, which were performed in private offices at the professionals' workplace, were previously scheduled by phone with each professional according to their availability. This research was part of the interviewer's thesis, and she has expertise with home oral health assistance and family practices. 
This means that the researcher is internally affected and has her own perspective about this kind of work, which is fundamental to the creation and development of an analysis within a hermeneuticaldialectical qualitative research (Gadamer, 1999;Ollman & Smith, 2008). We had an interest in giving voices to the health professionals and workers from SUS because we are advocates and enthusiasts of this service, and we identify ourselves with the importance of investigating this reality. None of the interviewees had prior information about the interview, which ensured no bias. During the interviews, we followed a guiding script. This interview guide was created to include trigger questions to generate debates and discussion regarding the daily work routine, realities of in-home care practices, and questions to stimulate conversations about emotions and feelings. Some examples of trigger questions are as follows: "Do you consider home assistance something inserted in the municipality public health system network?" "Do you feel prepared as a physician, dentist, or nurse to provide home health assistance to vulnerable older adults?" and "What are the advantages and potentialities of home health services to older patients?" These questions had a hermeneutic and dialectic nature intended to seek consensual situations and possibilities of raising dissent. Reaching consensus, or dissensus in a discourse analysis, comprises hermeneutical and dialogical knowledge. The interviews' audios were transcribed according to Atkinson and Heritage, transcription norms, and all transcripts were returned to participants for comment and agreement during the analysis process. The results were returned to participants after the analysis process to validate and confirm the participants' opinions. The duration average for each interview was 35 minutes. Among the professionals who work in PHC developing domiciliary care, we chose participants who effectively performed primary care actions. The professionals had degrees in medicine, dentistry, or nursing and were not trainees. We used the snow-ball sample technique to form reference chains and perform saturation of participants and meaning saturation (;Marshall, 1996). During the process of interviewing, transcribing, and analyzing, the object of the study was always in our thoughts. These sampling and meaning saturation procedures in qualitative research are grounded in studies by the World Health Organization and Maria Cecilia Minayo. We systematized the data according to content analysis from a thematic analysis perspective, creating the aforementioned consensual themes. The interpretation and our inferences sought to approximate the hermeneutic-dialectic method and were also based on the literature regarding home assistance practices and PHC. Thus, the hermeneutic-dialectic approach is relevant for researching health professional practices (Collet & Wetzel, 1996;De ;Minayo, 2002). After that, we started questioning the consensus that generated the trigger discussion in a dialectical movement of interpretation. Therefore, the analysis of the data was an articulation between our inferences within the support of hermeneutics and dialectics (Gadamer, 1999;Ollman & Smith, 2008). All of that composed our hermeneutical-dialectical method of analysis (Figure 1). 
This proposed analysis encompasses the fact that comprehending, in an ontological and epistemological perspective, is existential and happens between different voices and the interpreter/researcher's efforts to make sense through the research object. In other words, the comprehending process can be constructed with the analysis regarding familiarity and strangeness, consensus and dissensus, dialogical and dialectical situations (Gadamer, 1999;Ollman & Smith, 2008;Schwandt, 1999;Minayo, 2014). This ontological process happens using actual dialogical encounters. In summary, we constructed a hermeneutical consensus and then dialectically questioned it with our perspectives as health qualitative researchers. All these hermeneutic and dialectic movements, which we named referred to as a dialogical encounter, occurred through our insights regarding the health-workers-class narratives. Even in a local basis and regional scenario, the present study brings a Brazilian social construction that is translated into signs to comprehend the meanings of in-home health services to older people from the perspectives of health professionals who work to the Brazilian public health system. We understand home care as any kind of health assistance and services that creates health promotion strategies, prevention and treatment of disease inside a person's home. The interviewer for the present research is a health worker who developed home care; thus, the conversations happened with a common language, which means a dialogue in a mutual recognition of the aspects of home health care and assistance (Gadamer, 1999;Vlduescu, 2018). We believe that there was no distinction among the three different health professionals interviewed (physician, nurse, and dentist) because we assume that all these professionals account for home assistance and are part of the Brazilian multiprofessional work team used to carry out activities in the territory under the care of a primary health care center. This means that they are immersed in the traditional practices of SUS and comprehend health care as a human right that is universal to all Brazilian people. This leads the workers to be responsible for the health of older individuals, and this fact internally affects their labor process. The workers need to be emotionally stronger to develop a good job to heal diseases and negotiate the best treatments along with the older person's caregivers. We created a possible comprehension of the research object, keeping in mind that these workers have their own accounts full of history, experiences, narratives, and opinions that are in the same language that Gadamer portrays as a dialogue. We believe that while performing the narrative interviews, we created a conversation with these professionals to bring a new sense to our topic of interest. This "new sense" has a discordance of opinions and perceptions that are fundamental to our understanding. The dialectic concept used to comprehend their realities is assigned as a positive value (Ollman & Smith, 2008). It helps the team to discuss and construct good praxis regarding taking care of an older adult. We created themes, which we named as "thematic movements of comprehension," regarding hermeneutical dialogical encounters and dialectical questioning. This means that we used content analysis to create them by regarding our state of being and the acting and diverse perspectives from the research participants. 
Ideologies and critiques of the health system, as well as the process of working with resource constraints in SUS, appear in the narratives and are perceptible as dialectical questioning processes to express opinions. All these manifestations pervade the interpretative movements. The thematic movements of comprehension permeate situations such as the actual sad situation of some territories, creating possibilities of home health care, the integrality of attention as a perspective for providing home assistance in the primary care field, and the necessity of a net of services to provide an organizational routine of home visits to older people (Figure 2). To illustrate our findings, we present these consensual and dialectical themes and discuss them by means of a narrative text, with some excerpts from the narratives articulating them with regard to our interpretation, in order to construct an environment of hermeneutical-dialectical reading that comprises the study's aim. We considered the workers' backgrounds through their reports of practices, knowledge, professional experiences, and personal agency. We perceived dedication to and valuation of home care practices and services in their territory of agency. For Gadamer, histories and experiences are important in the process of understanding something that we experience, and they provide consensus. Thus, health professionals daily face the reality of their territory, which requires accurate knowledge of the service assignments and their management, in addition to linking their actions to housing conditions and the population's dignity (Brasil, 1994). The worker who provides home health assistance, who circulates in a specific territory, and who has an academic background in public health seeks to interconnect information brought by the Brazilian public health system user while also keeping a critical eye on the situation of the older person who needs health care and assistance at his/her home. These workers are empowered with the capacity to look critically and negotiate actions within the reality of the vulnerable person, such as in a caregiver-nurse interaction. During the domiciliary visit, the workers will inevitably judge the elements that can make their work difficult or promote health inside the residence. This judgment is full of preconceptions regarding their private values on what a good ambiance to promote health is. This private judgment needs to be negotiated with their actual possibilities of care. We can see that professionals narrate based on their awareness that some aspects, such as observation and critical observation, should be considered as a work strategy in the context of health care for older people. At the same time, some workers do not even try to negotiate or construct a good home ambiance to promote health for the older person. Why does this happen? Dialectically, it seems that the position that they occupy at the SUS is perceived as an ordinary job. They do not recognize the potentiality of their actions, and there is no recognition of their workforce. We observe the house, the family, how they relate to each other, the hygiene conditions, the housing conditions, the type of care, whether there is anything that indicates violence, mistreatment.
If the person is living a worthy life, you know... So... you can see everything... You can have a good notion... Because we are also trained to take a look, you know(.). We don't go there only for the sake of the patient; so many times we want to know how he lives, who is living with him, people's behavior, interpersonal relations, all the conditions. (interviewee 3, female, nurse) The professional inner movement, feelings, and affections used to understand the older person's life context is important because routine and trivial actions are often vital to care and at the same time are not concretized. This movement regarding a compassionate care perspective is full of social agency (A. M. ;Minayo, 2014). Many times, the prescription of hygiene care must be resignified and readapted in dialectics contexts where there are no minimum conditions of dignity for the elderly (A. ;Freeman, 2007;Ward, et al., 2018). These workers are capable of identifying these conditions of self-identification within their job position (Ollman & Smith, 2008) and their traditions such that we can symbolically see them as "buying a shower head" to promote health. The Gadamerian element of "tradition" is identified here as the linkage with the community. This is a reality in the working process at PHC in SUS at Porto Alegre (RS). Regarding the expert above, we can see the two sides used to fusion the horizons to construct a reality of care. On one side are the people who need care (hygiene to a vulnerable person), and on the other side are the people taking care of them (SUS workers). Hum... To take a bath... Ok... In this case, you go to that patient's house and there is no shower head, they don't have it, you know... So, you must promote health according to that reality, you know... and we bought a shower with the community's support... (Interviewee 12, female, Dentist) Because of some difficulties we can't see, because:... we don't know the person's environment without stepping inside their home... But I do not change anything. I do my job. (Interviewee 01,female,Nurse) This negotiation of care is full of interpersonal contexts and developed through everyday conversation, which in these cases consists of conversations among teams that develop home care. But does this negotiation of care happen all the time? Sometimes, signs show that contradictions, different opinions, and dissensus occur. Sometimes some workers do nothing. A worker could be naturally dispassionate to understanding the role of space and time regarding the older adult's care in a vulnerable situation. This could be a current and acceptable social relation to these professionals, but it is not the consensus presented in the present study. Even in these contradictory situations, the workers are following the theoretical authority of their technical knowledge and job position in Brazilian society (Ollman & Smith, 2008). Healthcare should be understood in a multifactorial context, requiring adaptation to different realities. The optimization of resources inside a vulnerable older person's residence should be negotiated with the caregiver. It is distressful, but we observed that optimized situations are often developed. This is based on the residence reality that one can address the main way of providing care to the older person within his social environment. The contexts of developing home health care generate distressful conditions for these professionals. 
In this aspect, consensual anguish is perceived by these professionals' perspectives while they dialogue with themselves about it and use a plural language showing their social agency and the sad situations that they face. I think that, potentially, this thing of really knowing the older adult surroundings and daily routines... we have a lot of older people here, in our small area/territory of responsibility... older people who are restricted to their residence, not because they can't walk or have a caregiver, but due to the geographic barriers to leaving their home. So, we have to work on the potential of that environment, their houses... to bring satisfaction to their lives. (Interviewee 10, male, physician). One of the main potentials of home health care is to achieve the social reintegration of vulnerable older adults who feel helpless and alone or even to minimize the suffering of relatives and informal caregivers. As addressed by Gadamer, potentiality stimulates motivation and brings effectiveness to these professionals' actions. A. M. Andrade et al. emphasize the need for more interface with other points of attention in the service network to realize home care for older adults. There is a consensus regarding the impossibility of dissociating primary care and home care (Queiroz, et al., 2013). Therefore, the health professional should understand the territory, the services integrated into the network, and their management to ensure better assistance for older people who need home care and assistance. Do these workers comprehend what the integrality of care is? Historically, the professionals participating in the present research experienced a public health system whose doctrinal principle was the integrality of attention, that is, the integration of services and different health professionals working together to take care of a single case. Therefore, this historically and traditionally constructed integrality is part of these professionals' agency, which is rooted in preconceived positive perceptions of domiciliary care. This integrality of health care services, including the domiciliary visits to home care, is based on a political democracy that leads these professionals to participate in the dialectical thinking of their working process (Ollman & Smith, 2008). This means that teamwork, as a workforce tool used to deliver home care, was dialectically constructed during decades of struggle by Brazilian health professionals and politicians. We identified situations where the performance of integrality of care occurred with the protagonism of some professionals, who, while being questioned about their actions in someone's home care, were eager to report successful cases involving the older adult care at home. As explained by Gadamer, motivation is essential, and it was reported in some speeches: We even had a moment to discuss the case inside the health center, which made me go to the protective service in person for them to take measures, you know... because it was no longer a health issue, but rather a principle of social assistance; so, we... there is a good partnership with a good level of case solution. (Interviewee 7,women,Nurse) The essence of the integrality of care, from a hermeneutic perspective, as illustrated above with the professional narratives, refers to the concept of strength, where the person determines the direction of the action. In other words, some points are expressed in the consensus that everything could be different if everyone acts more proactively. 
Does everyone act positively within their daily work? One dialectical point is the fact that some workers need to be more proactive. This contradiction, presented in the worker practice, is the concrete representation of the dialectical process of constructing meanings. Thus, understanding care integrality and worker interaction is intricately linked to the daily challenges to performing home assistance in the territory. Protagonism by some health professionals is necessary to effectively act within the residence. Moreover, the dialectic nature is observed in reports in the different demands of care provided in this environment, which needs effective flow through the net of services in health care. Such flow originates from a perspective of teamwork, specifically combined with theoretical and political assumptions that guide practices of teamwork (Ollman & Smith, 2008). Teamwork in in-home health care can be understood from the perspective of integration, compliance, and reciprocal communication between specialized works (J. F. ;Peduzzi, 2001;Savassi, 2016;Vivian & Wilcox, 2000) and requires an established service flow to make workers' actions effective. The consensus emerges that home care, from these workers' viewpoints, does not effectively occur as it could and should: I think that we could be much more involved than we are, you know... we end up staying too long inside the office due to the high demand. Currently, in this territory we don't provide services to this office alone; so, in addition to this health center, we have other health centers to support... it is too much work. (Interviewee 6,male,Nurse) Here, a consideration is made of the concept of necessity as posed by Minayo. This professional feels the necessity to have more representation and action in the health team. The expression "necessity" in this hermeneutic-dialectic context is something that is lacking, and, at a given historical moment may become indispensable. Being "inside the office" shows a lack of articulation with the network of services and directly interferes with the routine of home visits (A. M. ;C. G. D. ). Aligned with the understanding that home assistance occurs in a context where different social actors interact, we can observe that the potentials of this type of care can be understood as including more than biomedical service. Approaching the hermeneutic-dialectic perspective, we can observe the report of a professional awareness that it is possible to understand parts of the older person's life from a home visit. That is close to what Floriani and Schramm described, thereby also evidencing the possibilities that may arise in actions to promote health based on older adults' groups. Moreover, there are situations of professional development, chiefly involving overload of tasks in this flow of services, and societal underestimations. The workers need to be supported through their agencies () or public health systems. There is a consensus, and some dissensus, because they make it explicit that it is not possible to fully comply with PHC protocols and manuals because life is shaped according to the demand and population characteristics of each territory. The professionals work hard to develop good home care, and the public health systems should improve support for them. This support is a dynamic human process with different opinions regarding their occurrence. 
Final Considerations The possibility of associating the philosophical hermeneutics, pointing out the understanding of the social consensus from a class of health workers, and at the same time associating this with dialectical perspective regarding questioning the reality, dissension and contradictions enabled us to understand different perspectives from health workers and their practices. We believe that the methodological approach that we brought to this qualitative and interpretive possibility was something sustainable that reached the proposed aim of understanding something socially and collectively constructed. We believe that questioning consensus regarding health work issues in modern societies provides more revealing results than traditional thematic analysis. This happens because consensus/ themes can show only "pictured and immutable reality," whereas by using a dialectic approach to question this reality, we can still create dissensus interpretations regarding a reality that is constantly in movement. Illustrating the use of our analysis uncovered different realities regarding home assistance and care to older people faced in the daily routine of the Brazilian public health service. This revealed contexts that generate distress for the studied working class. Within the consensus and few dissents from the professionals' perceptions and our interpretation, we can conclude that articulation between the territory reality and the real possibilities of care in a residence are essential to be acquainted. Furthermore, the integrality of care to a vulnerable older adult is fundamental to make in-home care effective in primary care, as well as the need for a "flowing service" that permits the organizational routine of home visits. The process of constructing health care includes, among other aspects, compassionate care. We understand that compassionate care is essential to the professional working for SUS. Considering these assumptions as strategic actions of care, the methodological analysis and the results of this unique analysis will probably contribute to create a perspective directed to the difficulties and potentials faced by public health practitioners. Finally, this paper shares a description and an understanding of the original method of analysis that we created. We believe that others can engage in the same analysis when studying other "socially and collectively constructed" objects regarding health workers' perspectives. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Ethical Approval The present research protocol was approved by the Committee for Ethics in Research (CEP) of Universidade Federal do Rio Grande do Sul/UFRGS (Federal University of Rio Grande do Sul) (Protocol number 1.990.925). Each participant had signed the informed consent before participation in the semi structured narrative interview. Any underlying research materials related to this paper can be accessed contacting the corresponding author by email. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The study was funded and financially supported by Fundao de Amparo Pesquisa do Estado do Rio Grande do Sul/FAPERGS (Foundation for Research Support of the State of Rio Grande do Sul), under protocol Alexandre Favero Bulgarelli: https://orcid.org/0000-0002-7110-251X. |
/** @file
Copyright (C) 2018, vit9696. All rights reserved.
Copyright (C) 2020, PMheart. All rights reserved.
All rights reserved.
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
http://opensource.org/licenses/bsd-license.php
THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
**/
#include "ocvalidate.h"
#include "OcValidateLib.h"
#include "KextInfo.h"
#include <Library/BaseLib.h>
#include <Library/OcBootManagementLib.h>
#include <Library/OcConfigurationLib.h>
#include <Protocol/OcLog.h>
/**
Callback function to verify whether Arguments and Path are duplicated in Misc->Entries.
@param[in] PrimaryEntry Primary entry to be checked.
@param[in] SecondaryEntry Secondary entry to be checked.
@retval TRUE If PrimaryEntry and SecondaryEntry are duplicated.
**/
STATIC
BOOLEAN
MiscEntriesHasDuplication (
IN CONST VOID *PrimaryEntry,
IN CONST VOID *SecondaryEntry
)
{
//
// NOTE: Entries and Tools share the same constructor.
//
CONST OC_MISC_TOOLS_ENTRY *MiscEntriesPrimaryEntry;
CONST OC_MISC_TOOLS_ENTRY *MiscEntriesSecondaryEntry;
CONST CHAR8 *MiscEntriesPrimaryArgumentsString;
CONST CHAR8 *MiscEntriesSecondaryArgumentsString;
CONST CHAR8 *MiscEntriesPrimaryPathString;
CONST CHAR8 *MiscEntriesSecondaryPathString;
MiscEntriesPrimaryEntry = *(CONST OC_MISC_TOOLS_ENTRY **) PrimaryEntry;
MiscEntriesSecondaryEntry = *(CONST OC_MISC_TOOLS_ENTRY **) SecondaryEntry;
MiscEntriesPrimaryArgumentsString = OC_BLOB_GET (&MiscEntriesPrimaryEntry->Arguments);
MiscEntriesSecondaryArgumentsString = OC_BLOB_GET (&MiscEntriesSecondaryEntry->Arguments);
MiscEntriesPrimaryPathString = OC_BLOB_GET (&MiscEntriesPrimaryEntry->Path);
MiscEntriesSecondaryPathString = OC_BLOB_GET (&MiscEntriesSecondaryEntry->Path);
if (!MiscEntriesPrimaryEntry->Enabled || !MiscEntriesSecondaryEntry->Enabled) {
return FALSE;
}
if (AsciiStrCmp (MiscEntriesPrimaryArgumentsString, MiscEntriesSecondaryArgumentsString) == 0
&& AsciiStrCmp (MiscEntriesPrimaryPathString, MiscEntriesSecondaryPathString) == 0) {
    DEBUG ((DEBUG_WARN, "Misc->Entries->Path: %a is duplicated!\n", MiscEntriesPrimaryPathString));
return TRUE;
}
return FALSE;
}
/**
Callback function to verify whether Arguments and Path are duplicated in Misc->Tools.
@param[in] PrimaryEntry Primary entry to be checked.
@param[in] SecondaryEntry Secondary entry to be checked.
@retval TRUE If PrimaryEntry and SecondaryEntry are duplicated.
**/
STATIC
BOOLEAN
MiscToolsHasDuplication (
IN CONST VOID *PrimaryEntry,
IN CONST VOID *SecondaryEntry
)
{
CONST OC_MISC_TOOLS_ENTRY *MiscToolsPrimaryEntry;
CONST OC_MISC_TOOLS_ENTRY *MiscToolsSecondaryEntry;
CONST CHAR8 *MiscToolsPrimaryArgumentsString;
CONST CHAR8 *MiscToolsSecondaryArgumentsString;
CONST CHAR8 *MiscToolsPrimaryPathString;
CONST CHAR8 *MiscToolsSecondaryPathString;
MiscToolsPrimaryEntry = *(CONST OC_MISC_TOOLS_ENTRY **) PrimaryEntry;
MiscToolsSecondaryEntry = *(CONST OC_MISC_TOOLS_ENTRY **) SecondaryEntry;
MiscToolsPrimaryArgumentsString = OC_BLOB_GET (&MiscToolsPrimaryEntry->Arguments);
MiscToolsSecondaryArgumentsString = OC_BLOB_GET (&MiscToolsSecondaryEntry->Arguments);
MiscToolsPrimaryPathString = OC_BLOB_GET (&MiscToolsPrimaryEntry->Path);
MiscToolsSecondaryPathString = OC_BLOB_GET (&MiscToolsSecondaryEntry->Path);
if (!MiscToolsPrimaryEntry->Enabled || !MiscToolsSecondaryEntry->Enabled) {
return FALSE;
}
if (AsciiStrCmp (MiscToolsPrimaryArgumentsString, MiscToolsSecondaryArgumentsString) == 0
&& AsciiStrCmp (MiscToolsPrimaryPathString, MiscToolsSecondaryPathString) == 0) {
    DEBUG ((DEBUG_WARN, "Misc->Tools->Path: %a is duplicated!\n", MiscToolsPrimaryPathString));
return TRUE;
}
return FALSE;
}
/**
Validate if SecureBootModel has allowed value.
@param[in] SecureBootModel SecureBootModel retrieved from user config.
@retval TRUE If SecureBootModel is valid.
**/
STATIC
BOOLEAN
ValidateSecureBootModel (
IN CONST CHAR8 *SecureBootModel
)
{
UINTN Index;
STATIC CONST CHAR8 *AllowedSecureBootModel[] = {
"Default", "Disabled",
"j137", "j680", "j132", "j174", "j140k",
"j780", "j213", "j140a", "j152f", "j160",
"j230k", "j214k", "j223", "j215", "j185", "j185f",
"x86legacy"
};
for (Index = 0; Index < ARRAY_SIZE (AllowedSecureBootModel); ++Index) {
if (AsciiStrCmp (SecureBootModel, AllowedSecureBootModel[Index]) == 0) {
return TRUE;
}
}
return FALSE;
}
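/**
  Check Misc->BlessOverride entries for values that duplicate the default bless paths and are thus redundant.
  @param[in] Config Configuration structure to be checked.
  @retval Number of errors detected.
**/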
STATIC
UINT32
CheckBlessOverride (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
UINT32 Index;
UINTN Index2;
OC_MISC_CONFIG *UserMisc;
CONST CHAR8 *BlessOverrideEntry;
STATIC CONST CHAR8 *DisallowedBlessOverrideValues[] = {
"\\EFI\\Microsoft\\Boot\\bootmgfw.efi",
"\\System\\Library\\CoreServices\\boot.efi",
};
ErrorCount = 0;
UserMisc = &Config->Misc;
for (Index = 0; Index < UserMisc->BlessOverride.Count; ++Index) {
BlessOverrideEntry = OC_BLOB_GET (UserMisc->BlessOverride.Values[Index]);
//
// &DisallowedBlessOverrideValues[][1] means no first '\\'.
//
for (Index2 = 0; Index2 < ARRAY_SIZE (DisallowedBlessOverrideValues); ++Index2) {
if (AsciiStrCmp (BlessOverrideEntry, DisallowedBlessOverrideValues[Index2]) == 0
|| AsciiStrCmp (BlessOverrideEntry, &DisallowedBlessOverrideValues[Index2][1]) == 0) {
        DEBUG ((DEBUG_WARN, "Misc->BlessOverride: %a is redundant!\n", BlessOverrideEntry));
++ErrorCount;
}
}
}
return ErrorCount;
}
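/**
  Check Misc->Boot settings (ConsoleAttributes, HibernateMode, PickerAttributes, PickerMode,
  PickerVariant, PickerAudioAssist, LauncherOption and LauncherPath) for invalid or inconsistent values.
  @param[in] Config Configuration structure to be checked.
  @retval Number of errors detected.
**/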
STATIC
UINT32
CheckMiscBoot (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
OC_MISC_CONFIG *UserMisc;
OC_UEFI_CONFIG *UserUefi;
UINT32 ConsoleAttributes;
CONST CHAR8 *HibernateMode;
UINT32 PickerAttributes;
UINT32 Index;
OC_UEFI_DRIVER_ENTRY *DriverEntry;
CONST CHAR8 *Driver;
BOOLEAN HasOpenCanopyEfiDriver;
CONST CHAR8 *PickerMode;
CONST CHAR8 *PickerVariant;
BOOLEAN IsPickerAudioAssistEnabled;
BOOLEAN IsAudioSupportEnabled;
CONST CHAR8 *LauncherOption;
CONST CHAR8 *LauncherPath;
ErrorCount = 0;
UserMisc = &Config->Misc;
UserUefi = &Config->Uefi;
ConsoleAttributes = UserMisc->Boot.ConsoleAttributes;
if ((ConsoleAttributes & ~0x7FU) != 0) {
    DEBUG ((DEBUG_WARN, "Misc->Boot->ConsoleAttributes has unknown bits set!\n"));
++ErrorCount;
}
HibernateMode = OC_BLOB_GET (&UserMisc->Boot.HibernateMode);
if (AsciiStrCmp (HibernateMode, "None") != 0
&& AsciiStrCmp (HibernateMode, "Auto") != 0
&& AsciiStrCmp (HibernateMode, "RTC") != 0
&& AsciiStrCmp (HibernateMode, "NVRAM") != 0) {
    DEBUG ((DEBUG_WARN, "Misc->Boot->HibernateMode is invalid (can only be None, Auto, RTC, or NVRAM)!\n"));
++ErrorCount;
}
PickerAttributes = UserMisc->Boot.PickerAttributes;
if ((PickerAttributes & ~OC_ATTR_ALL_BITS) != 0) {
    DEBUG ((DEBUG_WARN, "Misc->Boot->PickerAttributes has unknown bits set!\n"));
++ErrorCount;
}
HasOpenCanopyEfiDriver = FALSE;
for (Index = 0; Index < UserUefi->Drivers.Count; ++Index) {
DriverEntry = UserUefi->Drivers.Values[Index];
Driver = OC_BLOB_GET (&DriverEntry->Path);
if (DriverEntry->Enabled && AsciiStrCmp (Driver, "OpenCanopy.efi") == 0) {
HasOpenCanopyEfiDriver = TRUE;
}
}
PickerMode = OC_BLOB_GET (&UserMisc->Boot.PickerMode);
if (AsciiStrCmp (PickerMode, "Builtin") != 0
&& AsciiStrCmp (PickerMode, "External") != 0
&& AsciiStrCmp (PickerMode, "Apple") != 0) {
    DEBUG ((DEBUG_WARN, "Misc->Boot->PickerMode is invalid (can only be Builtin, External, or Apple)!\n"));
++ErrorCount;
} else if (HasOpenCanopyEfiDriver && AsciiStrCmp (PickerMode, "External") != 0) {
    DEBUG ((DEBUG_WARN, "OpenCanopy.efi is loaded at UEFI->Drivers, but Misc->Boot->PickerMode is not set to External!\n"));
++ErrorCount;
}
PickerVariant = OC_BLOB_GET (&UserMisc->Boot.PickerVariant);
if (PickerVariant[0] == '\0') {
    DEBUG ((DEBUG_WARN, "Misc->Boot->PickerVariant cannot be empty!\n"));
++ErrorCount;
}
//
// Check the length of path relative to OC directory.
//
// There is one missing '\\' after the concatenation of PickerVariant and ExtAppleRecv10_15.icns (which has the longest length). Append one.
//
if (StrLen (OPEN_CORE_IMAGE_PATH) + AsciiStrLen (PickerVariant) + 1 + AsciiStrSize ("ExtAppleRecv10_15.icns") > OC_STORAGE_SAFE_PATH_MAX) {
DEBUG ((DEBUG_WARN, "Misc->Boot->PickerVariant is too long (should not exceed %u)!\n", OC_STORAGE_SAFE_PATH_MAX));
++ErrorCount;
}
IsPickerAudioAssistEnabled = UserMisc->Boot.PickerAudioAssist;
IsAudioSupportEnabled = UserUefi->Audio.AudioSupport;
if (IsPickerAudioAssistEnabled && !IsAudioSupportEnabled) {
DEBUG ((DEBUG_WARN, "Misc->Boot->PickerAudioAssist已启用,但未完全启用UEFI->Audio->AudioSupport!\n"));
++ErrorCount;
}
LauncherOption = OC_BLOB_GET (&Config->Misc.Boot.LauncherOption);
if (AsciiStrCmp (LauncherOption, "Disabled") != 0
&& AsciiStrCmp (LauncherOption, "Full") != 0
&& AsciiStrCmp (LauncherOption, "Short") != 0) {
DEBUG ((DEBUG_WARN, "Misc->Boot->LauncherOption 是错误的 (只能是 Disabled, Full, 或 Short)!\n"));
++ErrorCount;
}
LauncherPath = OC_BLOB_GET (&Config->Misc.Boot.LauncherPath);
if (LauncherPath[0] == '\0') {
DEBUG ((DEBUG_WARN, "Misc->Boot->LauncherPath 不能为空!\n"));
++ErrorCount;
}
return ErrorCount;
}
STATIC
UINT32
CheckMiscDebug (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
OC_MISC_CONFIG *UserMisc;
UINT64 DisplayLevel;
UINT64 AllowedDisplayLevel;
UINT64 HaltLevel;
UINT64 AllowedHaltLevel;
UINT32 Target;
ErrorCount = 0;
UserMisc = &Config->Misc;
//
// FIXME: Check whether DisplayLevel only supports values within AllowedDisplayLevel, or all possible levels in DebugLib.h?
//
DisplayLevel = UserMisc->Debug.DisplayLevel;
AllowedDisplayLevel = DEBUG_WARN | DEBUG_INFO | DEBUG_VERBOSE | DEBUG_ERROR;
if ((DisplayLevel & ~AllowedDisplayLevel) != 0) {
DEBUG ((DEBUG_WARN, "Misc->Debug->DisplayLevel设置了未知位!\n"));
++ErrorCount;
}
HaltLevel = UserMisc->Security.HaltLevel;
AllowedHaltLevel = AllowedDisplayLevel;
if ((HaltLevel & ~AllowedHaltLevel) != 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->HaltLevel 设置了未知位!\n"));
++ErrorCount;
}
Target = UserMisc->Debug.Target;
if ((Target & ~OC_LOG_ALL_BITS) != 0) {
DEBUG ((DEBUG_WARN, "Misc->Debug->Target 设置了未知位!\n"));
++ErrorCount;
}
return ErrorCount;
}
STATIC
UINT32
ValidateFlavour (
IN CHAR8 *EntryType,
IN UINT32 Index,
IN CONST CHAR8 *Flavour
)
{
UINT32 ErrorCount;
CHAR8 FlavourCopy[OC_MAX_CONTENT_FLAVOUR_SIZE];
UINTN Length;
CONST CHAR8 *Start;
CONST CHAR8 *End;
ErrorCount = 0;
if (Flavour == NULL || *Flavour == '\0') {
DEBUG ((DEBUG_WARN, "Misc->%a[%u]->Flavour不能为空 (使用 \"Auto\")!\n", EntryType, Index));
++ErrorCount;
} else if (AsciiStrSize (Flavour) > OC_MAX_CONTENT_FLAVOUR_SIZE) {
DEBUG ((DEBUG_WARN, "Misc->%a[%u]->Flavour不能超过%d个字节!\n", EntryType, Index, OC_MAX_CONTENT_FLAVOUR_SIZE));
++ErrorCount;
} else {
//
// Illegal chars
//
Length = AsciiStrLen (Flavour);
AsciiStrnCpyS (FlavourCopy, OC_MAX_CONTENT_FLAVOUR_SIZE, Flavour, Length);
AsciiFilterString (FlavourCopy, TRUE);
if (OcAsciiStrniCmp (FlavourCopy, Flavour, Length) != 0) {
DEBUG ((DEBUG_WARN, "Misc->%a[%u]->Flavour 名称不能包含CR, LF, TAB 或任何其他非ASCII字符!\n", EntryType, Index));
++ErrorCount;
}
//
// Per-name tests
//
End = Flavour - 1;
do {
for (Start = ++End; *End != '\0' && *End != ':'; ++End);
if (Start == End) {
DEBUG ((DEBUG_WARN, "Misc->%a[%u]->Flavour 中的Flavour名称不能为空!\n", EntryType, Index));
++ErrorCount;
} else {
AsciiStrnCpyS (FlavourCopy, OC_MAX_CONTENT_FLAVOUR_SIZE, Start, End - Start);
if (OcAsciiStartsWith(FlavourCopy, "Ext", TRUE)) {
DEBUG ((DEBUG_WARN, "Misc->%a[%u]->Flavour 中的Flavour名称不能以\"Ext\"开头 !\n", EntryType, Index));
++ErrorCount;
}
}
} while (*End != '\0');
}
return ErrorCount;
}
STATIC
UINT32
CheckMiscEntries (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
UINT32 Index;
OC_MISC_CONFIG *UserMisc;
CONST CHAR8 *Arguments;
CONST CHAR8 *Comment;
CONST CHAR8 *AsciiName;
CONST CHAR16 *UnicodeName;
CONST CHAR8 *Path;
CONST CHAR8 *Flavour;
ErrorCount = 0;
UserMisc = &Config->Misc;
for (Index = 0; Index < UserMisc->Entries.Count; ++Index) {
Arguments = OC_BLOB_GET (&UserMisc->Entries.Values[Index]->Arguments);
Comment = OC_BLOB_GET (&UserMisc->Entries.Values[Index]->Comment);
AsciiName = OC_BLOB_GET (&UserMisc->Entries.Values[Index]->Name);
Path = OC_BLOB_GET (&UserMisc->Entries.Values[Index]->Path);
Flavour = OC_BLOB_GET (&UserMisc->Entries.Values[Index]->Flavour);
//
// Sanitise strings.
//
// NOTE: As Arguments takes identical requirements of Comment,
// we use Comment sanitiser here.
//
if (!AsciiCommentIsLegal (Arguments)) {
DEBUG ((DEBUG_WARN, "Misc->Entries[%u]->参数包含非法字符!\n", Index));
++ErrorCount;
}
if (!AsciiCommentIsLegal (Comment)) {
DEBUG ((DEBUG_WARN, "Misc->Entries[%u]->Comment包含非法字符!\n", Index));
++ErrorCount;
}
UnicodeName = AsciiStrCopyToUnicode (AsciiName, 0);
if (UnicodeName != NULL) {
if (!UnicodeIsFilteredString (UnicodeName, TRUE)) {
DEBUG ((DEBUG_WARN, "Misc->Entries[%u]->Name包含非法字符!\n", Index));
++ErrorCount;
}
FreePool ((VOID *) UnicodeName);
}
//
// FIXME: Properly sanitise Path.
//
if (!AsciiCommentIsLegal (Path)) {
DEBUG ((DEBUG_WARN, "Misc->Entries[%u]->Path包含非法字符!\n", Index));
++ErrorCount;
}
ErrorCount += ValidateFlavour("Entries", Index, Flavour);
}
//
// Check duplicated entries in Entries.
//
ErrorCount += FindArrayDuplication (
UserMisc->Entries.Values,
UserMisc->Entries.Count,
sizeof (UserMisc->Entries.Values[0]),
MiscEntriesHasDuplication
);
return ErrorCount;
}
STATIC
UINT32
CheckMiscSecurity (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
UINT32 Index;
OC_KERNEL_CONFIG *UserKernel;
OC_MISC_CONFIG *UserMisc;
BOOLEAN IsAuthRestartEnabled;
BOOLEAN HasVSMCKext;
CONST CHAR8 *AsciiDmgLoading;
UINT32 ExposeSensitiveData;
CONST CHAR8 *AsciiVault;
UINT32 ScanPolicy;
UINT32 AllowedScanPolicy;
CONST CHAR8 *SecureBootModel;
ErrorCount = 0;
UserKernel = &Config->Kernel;
UserMisc = &Config->Misc;
HasVSMCKext = FALSE;
for (Index = 0; Index < UserKernel->Add.Count; ++Index) {
if (AsciiStrCmp (OC_BLOB_GET (&UserKernel->Add.Values[Index]->BundlePath), mKextInfo[INDEX_KEXT_VSMC].KextBundlePath) == 0) {
HasVSMCKext = TRUE;
}
}
IsAuthRestartEnabled = UserMisc->Security.AuthRestart;
if (IsAuthRestartEnabled && !HasVSMCKext) {
DEBUG ((DEBUG_WARN, "Misc->Security->启用了AuthRestart,但未在Kernel->Add中加载VirtualSMC!\n"));
++ErrorCount;
}
AsciiDmgLoading = OC_BLOB_GET (&UserMisc->Security.DmgLoading);
if (AsciiStrCmp (AsciiDmgLoading, "Disabled") != 0
&& AsciiStrCmp (AsciiDmgLoading, "Signed") != 0
&& AsciiStrCmp (AsciiDmgLoading, "Any") != 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->DmgLoading 不太对 (只能是 Disabled, Signed, 或 Any)!\n"));
++ErrorCount;
}
ExposeSensitiveData = UserMisc->Security.ExposeSensitiveData;
if ((ExposeSensitiveData & ~OCS_EXPOSE_ALL_BITS) != 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->ExposeSensitiveData 设置了未知位!\n"));
++ErrorCount;
}
AsciiVault = OC_BLOB_GET (&UserMisc->Security.Vault);
if (AsciiStrCmp (AsciiVault, "Optional") != 0
&& AsciiStrCmp (AsciiVault, "Basic") != 0
&& AsciiStrCmp (AsciiVault, "Secure") != 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->Vault 不太对 (只能是 Optional, Basic, 或 Secure)!\n"));
++ErrorCount;
}
ScanPolicy = UserMisc->Security.ScanPolicy;
AllowedScanPolicy = OC_SCAN_FILE_SYSTEM_LOCK | OC_SCAN_DEVICE_LOCK | OC_SCAN_DEVICE_BITS | OC_SCAN_FILE_SYSTEM_BITS;
//
// ScanPolicy can be zero (failsafe value), skipping such.
//
if (ScanPolicy != 0) {
if ((ScanPolicy & ~AllowedScanPolicy) != 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->ScanPolicy 设置了未知位!\n"));
++ErrorCount;
}
if ((ScanPolicy & OC_SCAN_FILE_SYSTEM_BITS) != 0 && (ScanPolicy & OC_SCAN_FILE_SYSTEM_LOCK) == 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->ScanPolicy需要扫描文件系统, 但是OC_SCAN_FILE_SYSTEM_LOCK (bit 0)未设置!\n"));
++ErrorCount;
}
if ((ScanPolicy & OC_SCAN_DEVICE_BITS) != 0 && (ScanPolicy & OC_SCAN_DEVICE_LOCK) == 0) {
DEBUG ((DEBUG_WARN, "Misc->Security->ScanPolicy需要扫描设备, 但是OC_SCAN_DEVICE_LOCK (bit 1)未设置!\n"));
++ErrorCount;
}
}
//
// Validate SecureBootModel.
//
SecureBootModel = OC_BLOB_GET (&UserMisc->Security.SecureBootModel);
if (!ValidateSecureBootModel (SecureBootModel)) {
DEBUG ((DEBUG_WARN, "Misc->Security->SecureBootModel 不太对!\n"));
++ErrorCount;
}
return ErrorCount;
}
STATIC
UINT32
CheckMiscTools (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
UINT32 Index;
OC_MISC_CONFIG *UserMisc;
CONST CHAR8 *Arguments;
CONST CHAR8 *Comment;
CONST CHAR8 *AsciiName;
CONST CHAR16 *UnicodeName;
CONST CHAR8 *Path;
CONST CHAR8 *Flavour;
ErrorCount = 0;
UserMisc = &Config->Misc;
for (Index = 0; Index < UserMisc->Tools.Count; ++Index) {
Arguments = OC_BLOB_GET (&UserMisc->Tools.Values[Index]->Arguments);
Comment = OC_BLOB_GET (&UserMisc->Tools.Values[Index]->Comment);
AsciiName = OC_BLOB_GET (&UserMisc->Tools.Values[Index]->Name);
Path = OC_BLOB_GET (&UserMisc->Tools.Values[Index]->Path);
Flavour = OC_BLOB_GET (&UserMisc->Tools.Values[Index]->Flavour);
//
// Sanitise strings.
//
// NOTE: As Arguments takes identical requirements of Comment,
// we use Comment sanitiser here.
//
if (!AsciiCommentIsLegal (Arguments)) {
DEBUG ((DEBUG_WARN, "Misc->Tools[%u]->Arguments 包含非法字符!\n", Index));
++ErrorCount;
}
if (!AsciiCommentIsLegal (Comment)) {
DEBUG ((DEBUG_WARN, "Misc->Tools[%u]->Comment 包含非法字符!\n", Index));
++ErrorCount;
}
//
// Check the length of path relative to OC directory.
//
if (StrLen (OPEN_CORE_TOOL_PATH) + AsciiStrSize (Path) > OC_STORAGE_SAFE_PATH_MAX) {
DEBUG ((DEBUG_WARN, "Misc->Tools[%u]->路径太长 (不应超过 %u)!\n", Index, OC_STORAGE_SAFE_PATH_MAX));
++ErrorCount;
}
UnicodeName = AsciiStrCopyToUnicode (AsciiName, 0);
if (UnicodeName != NULL) {
if (!UnicodeIsFilteredString (UnicodeName, TRUE)) {
DEBUG ((DEBUG_WARN, "Misc->Tools[%u]->Name 包含非法字符!\n", Index));
++ErrorCount;
}
FreePool ((VOID *) UnicodeName);
}
//
// FIXME: Properly sanitise Path.
//
if (!AsciiCommentIsLegal (Path)) {
DEBUG ((DEBUG_WARN, "Misc->Tools[%u]->Path 包含非法字符!\n", Index));
++ErrorCount;
}
ErrorCount += ValidateFlavour("Tools", Index, Flavour);
}
//
// Check duplicated entries in Tools.
//
ErrorCount += FindArrayDuplication (
UserMisc->Tools.Values,
UserMisc->Tools.Count,
sizeof (UserMisc->Tools.Values[0]),
MiscToolsHasDuplication
);
return ErrorCount;
}
UINT32
CheckMisc (
IN OC_GLOBAL_CONFIG *Config
)
{
UINT32 ErrorCount;
UINTN Index;
STATIC CONFIG_CHECK MiscCheckers[] = {
&CheckBlessOverride,
&CheckMiscBoot,
&CheckMiscDebug,
&CheckMiscEntries,
&CheckMiscSecurity,
&CheckMiscTools
};
DEBUG ((DEBUG_VERBOSE, "config loaded into %a!\n", __func__));
ErrorCount = 0;
for (Index = 0; Index < ARRAY_SIZE (MiscCheckers); ++Index) {
ErrorCount += MiscCheckers[Index] (Config);
}
return ReportError (__func__, ErrorCount);
}
|
Buckled: Josie Blake, executive director of the Wabash Valley Senior Center shows some of the buckling in the first floor main room. A leaking steam pipe under the floor has caused the damage. A repair estimate of $5,900 has been made. |
A case report of nodal CD4-positive T-cell lymphoproliferative disorder with an indolent course Abstract Rationale: Primary nodal CD4-positive T-cell lymphoproliferative disorder with a relatively indolent process is a rare kind of lymphoproliferative disease. Here we report the first case of a 49-year-old man who developed indolent nodal CD4-positive T-cell lymphoproliferative disorder. To our knowledge, based on a careful search of PubMed, it is the first case of primary nodal CD4-positive T-cell lymphoproliferative disorder. Patient concerns: A 49-year-old Chinese man presented to our hospital with fever, enlargement of multiple superficial lymph nodes for more than 14 years, and splenomegaly. Clinical and pathological data were collected under treatment. This case was diagnosed based on histological characteristics, immunohistochemical staining, and lymphoid clonality testing. On immunohistochemical staining, the abnormal T-cells were CD4 positive and CD8 negative. The lymphoid clonality testing showed positive results. The patient also had an enlarged spleen. Diagnoses: The patient was diagnosed with nodal CD4-positive T-cell lymphoproliferative disorder. Interventions: A watch-and-wait strategy was performed without any chemotherapy or radiation therapy. Outcomes: During 17 years of follow-up, this case presented an indolent course without evidence of systemic dissemination. Lessons: This report presents the first case of indolent nodal CD4-positive T-cell lymphoproliferative disorder. In this case, the proliferating T cells in the paracortex of the lymph node showed T-cell receptor gene rearrangement, which indicated a clonal proliferation. There are several kinds of nodal CD4-positive T-cell lymphoma, which have a relatively aggressive course; however, this case has a relatively indolent course. Introduction Lymphoproliferative disorders (LPDs) are part of a spectrum that includes polyclonal, oligoclonal/partial monoclonal, and monoclonal diseases. For example, immunodeficiency-related LPDs include reactive proliferation (multiple/oligomeric cloning), pleomorphic lesions (partial monoclonal), and monomorphic lesions (monoclonal). These disorders are rare but can be quite serious, often occurring in immunosuppressed patients. There are several kinds of nodal CD4-positive T-cell lymphomas; they are all monoclonal diseases, like peripheral T-cell lymphomas not otherwise specified (PTCL, NOS), angioimmunoblastic T-cell lymphomas, and others, which have a relatively aggressive course. Another CD4-positive T-cell lymphoma that has a relatively indolent course is primary cutaneous CD4-positive small/medium T-cell lymphoma. These tumors usually have a CD3+CD4+ mature helper T-cell phenotype and express functional T-cell receptors (TCRs). However, a monoclonal indolent primary nodal CD4-positive T-cell lymphoma has never been reported. Here, we report the first such case, that of a 49-year-old man who developed indolent nodal CD4-positive T-cell LPD. To our knowledge, based on a careful search of PubMed, it is the first reported case of primary nodal CD4-positive T-cell LPD with an indolent course. The use of this sample has been reviewed and approved by our institutional ethics board (B2015-043, 02/11/2015, Ethics Board of Zhongshan Hospital Affiliated with Fudan University).
Case report In December 2014, a 49-year-old Chinese man was admitted to the Zhongshan Hospital for multiple superficial enlarged lymph nodes of more than 14 years' duration as well as fever and chills that had lasted for more than a week. The patient had been admitted to another hospital on April 27, 2001, where multiple palpable superficial lymph nodes were discovered; at that time he had no fever or pain. He underwent right inguinal lymph node biopsy, showing that the diameter of the typical lymph node was about 1.5 cm. The pathology report pointed to chronic inflammation of the hyperplastic lymph nodes, and the patient received no further treatment. On June 7, 2001, the patient visited another hospital and underwent left submandibular lymph nodes biopsy, indicating a typical lymph node size of about 1 by 0.5 cm. The pathology report indicated lymph node inflammation and positron emission tomography/computed tomography (PET/CT) showed enlargement and fusion of multiple superficial lymph nodes. However, the patient received no further treatment. In March 2004, because he was having chest pain, the patient was admitted to another hospital, where radiography showed multiple small nodules in both lungs. At that point the patient underwent a thoracoscopic lung biopsy. The resulting pathology report pointed to Wegener's granulomatosis. Finally, the patient came to our hospital for further treatment, which consisted of prednisone 40 mg daily over several years. As a result, the pulmonary nodules shrank without disappearing completely, while the multiple superficial lymph nodes remained. Laboratory analysis revealed a mildly elevated white blood cell (WBC) and platelet counts and mild anemia; and blood chemistries and liver profile tests were normal. Blood levels of lactate dehydrogenase (LDH) were found to be normal. The test for human T-lymphotropic virus type I (HTLV-I) was negative. Computed tomography (CT) showed scattered small nodules in both lungs (Fig. 1). The patient had an enlarged spleen and multiple palpable superficial lymph nodes the size of peanuts; these were hard and not tender. Finally the patient underwent a right axillary lymph node biopsy, which pointed to T-cell LPD. The specimens obtained in April 2001 and March 2004 were also referred to our hospital for consultation. The lymph node of April 2001 also showed a diffuse proliferation of CD3-and CD4- After the diagnosis, the patient was treated with antiinflammatory drugs and the prednisone was discontinued. We determined that a watch-and-wait strategy with no chemotherapy or radiation would thereafter be the best course. Thirty months later, blood levels of LDH were found to be normal at all the follow-up evaluations. The follow-up CT scan showed that the pulmonary nodules had slowly enlarged, while already existing palpable superficial lymph nodes had not increased in size. Unexpectedly, over 17 years of follow-up, this case presented an indolent course without evidence of systemic dissemination. Discussion We had a patient with primary nodal CD4-positive T-cell LPD that had a uniquely indolent clinical course; it was unlike a common malignant T-cell lymphoma. T-cell LPDs comprise a spectrum including polyclonal, oligoclonal/partial monoclonal, and monoclonal disease. In this case, the proliferating T-cells in the lymph node paracortex showed gene rearrangement of the Tcell receptor, yet the disease took a relatively indolent course. 
To our knowledge, based on a careful search of PubMed, this is the first reported case of primary nodal CD4-positive T-cell LPD with an indolent course. Based on the immunophenotype, this case may have an etiopathogenesis similar to that of peripheral T-cell lymphoma (PTCL). PTCLs are histologically and clinically diverse; they have an aggressive clinical course and show systemic dissemination. In this case, the abnormal T cells invaded the lungs, but without further dissemination; moreover, enlargement of the pulmonary nodules was very slow. There is a smoldering subtype of adult T-cell leukemia/lymphoma (ATL) wherein the survival is quite variable, with some patients living for many years and others developing dissemination in less than a year. The malignant cells in ATL usually have a CD4+/CD8- phenotype and are associated with human HTLV-I. However, this patient was seronegative for HTLV-I. Cases of rare indolent clonal T-cell proliferations in the gastrointestinal (GI) tract have been reported. The infiltrates of cells were dense, but nondestructive, and composed of small, mature-appearing lymphoid cells. The immunophenotype of most of these cases was CD4-/CD8+; however, in this case, the T cells were positive for CD3 and CD4 and did not invade the GI tract. LPDs occur at an increased rate in immunosuppressed patients and are often associated with EBV infection, which can induce many T/NK-cell LPDs. EBV is known to infect approximately 90% of the world population and to persist in the host for life. The EBV-related posttransplantation lymphoproliferative diseases (PTLDs) have been described primarily in association with liver transplantation but also with kidney, lung, heart, and bone marrow transplantation. However, in this case, in situ hybridization for EBER was negative. The patient also had no history of immunosuppression before he began taking prednisone for the treatment of Wegener's granulomatosis. The specimen of the pulmonary nodules was referred to our hospital for consultation. The lung biopsy showed that CD3- and CD4-positive T cells had infiltrated the alveolar septum and the T-cell receptor gene rearrangement studies showed clonal rearrangement. There was not enough evidence to support a diagnosis of Wegener's granulomatosis for the pulmonary nodules. CD4+ primary cutaneous small/medium-sized pleomorphic T-cell lymphoma (CD4+ PCSM-TCL) is a provisional entity in the World Health Organization (WHO) classification of cutaneous lymphomas. A dense dermal infiltrate of CD4+ small/medium-sized pleomorphic T cells can sometimes be seen histopathologically. In the case of a CD4+ PCSM-TCL patient, immunophenotyping showed CD3+ and CD4+ cells, with the follicular T-helper phenotype expressing PD-1 and CXCL13; a variable component of B cells could also be found. However, in this case, the patient did not have any cutaneous lesions and the cells were negative for PD-1. Over the 17 years of follow-up, this case took an indolent clinical course without evidence of systemic dissemination. As for treatment, we considered that high-dose chemotherapy, radiation, and autologous hematopoietic stem-cell transplant were not suitable for this case. Instead, we adopted a watch-and-wait strategy with no additional intervention until evidence of further dissemination should emerge. |
Ischemic stroke and purpuric dermatitis as COVID-19-related complications in a peritoneal dialysis patient Patients on dialysis may have an elevated risk of severe coronavirus disease 2019 (COVID-19) and its complications due to their high prevalence of comorbidities. Here we describe the case of an 80-year-old male undergoing peritoneal dialysis with a moderate SARS-CoV-2 infection who developed a purpuric dermatitis and ischemic stroke after successful recovery from his bilateral pneumonia. Erythemato-papular lesions affecting the trunk and lower limbs appeared 17 days after the onset of SARS-CoV-2 symptoms. These kinds of lesions are an infrequent cutaneous manifestation of COVID-19. The pathology revealed a moderate purpuric dermatitis affecting the superficial dermis, and corticosteroids were prescribed, achieving complete resolution. Arterial thrombosis affecting the cerebellar vermis emerged 30 days after the onset of COVID-19 symptoms. It occurred 5 days after withdrawal of the antithrombotic prophylaxis that the patient received from his admission until 2 weeks after discharge. He completely recovered from his paresis and continued on his regular antiaggregation therapy. This is the first case report published of a patient on PD with such COVID-19-related complications. More experience is needed to determine the appropriate length of antithrombotic prophylaxis, especially in high-risk individuals. Introduction Since the first cases in December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has killed over 1,000,000 people all around the world. Patients on chronic dialysis are likely to be at increased risk of COVID-19 and its complications. A matter of great interest is the wide spectrum of infection symptomatology and disease manifestation, ranging from asymptomatic carriers to life-threatening acute respiratory distress syndrome, multi-organ failure, and death. There are a number of emerging complications of the COVID-19 pandemic, such as dermatologic impairment and ischemic stroke, the latter considered potentially lethal. Although the precise incidence of stroke in these patients is not known, some authors describe an incidence between 2.5 and 5%. Cutaneous manifestations in infected patients are beginning to emerge from around the world. Lesions may arise either before or after the signs and symptoms of COVID-19. Although they have been poorly described, they may include erythematous maculo-papular, urticarial, chickenpox-like, purpuric periflexural, transient livedo reticularis, and acro-ischemic or chilblain-like lesions. To date, there are no cases in the literature of patients requiring maintenance hemodialysis (HD) or peritoneal dialysis (PD) with COVID-19-related ischemic stroke and/or cutaneous manifestations. Here, we describe our experience in the management of a maintenance PD patient with COVID-19 infection who developed both complications. Case report Our patient is an 80-year-old male with end-stage renal disease (ESRD) secondary to ischemic nephropathy undergoing PD since 2015. He has hypertension and peripheral arterial disease stage IIa (Fontaine classification), and underwent carotid endarterectomy in 2011 due to incidentally discovered asymptomatic stenosis. His usual treatment was folic acid, furosemide, calcium acetate/magnesium carbonate, atorvastatin, allopurinol and clopidogrel.
The day before his admission, he started with cough, yellowish sputum, shortness of breath and fever over 38 °C; his chest X-ray at the emergency room showed moderate patchy infiltrates with peripheral distribution affecting both lungs (middle and lower fields). These findings were consistent with SARS-CoV-2 infection, and the suspicion was confirmed by a positive real-time reverse transcriptase-polymerase chain reaction (rRT-PCR) test of a pharyngeal swab. His laboratory tests (electrolytes, liver function, lactic acid, blood counts) were unremarkable except for an elevated d-dimer (1.1 mg/L, normal value < 0.5 mg/L), high C-reactive protein (15.9 mg/L, normal value < 0.5 mg/L), and a low platelet count (134,000/µL, normal value 150,000-450,000). The patient was admitted and treatment was initiated with hydroxychloroquine combined with lopinavir plus ritonavir, as well as antithrombotic prophylaxis with low molecular weight heparin (20 mg/day subcutaneously). Twenty-four hours after admission he had six episodes of watery stools that persisted until the next day. Due to his partial respiratory insufficiency, the patient needed a nasal cannula with an FiO2 of 0.28 (28%) for the first 5 days of hospitalization, with no need for oxygen therapy thereafter. The systemic inflammatory reaction due to SARS-CoV-2 infection caused a situation of infradialysis, so a fourth peritoneal exchange was needed. In contrast, there were no problems in terms of ultrafiltration. He remained febrile until day 7 of hospitalization, then progressed successfully and was discharged on day 15, maintaining antithrombotic prophylaxis for 2 more weeks. Two days after leaving the hospital, he was seen in the Dermatology clinic for an acute eruption of erythemato-papular lesions affecting the trunk and lower limbs. A punch biopsy of the abdomen was performed and revealed the presence of a moderate chronic and superficial purpuric dermatitis (shown in Fig. 1). The dermatologist related the diagnosis to COVID-19 and prescribed 30 mg of prednisone with a taper by 10 mg decrements every week, achieving a complete recovery by day 10 of therapy. Five days after heparin withdrawal, he presented to our emergency department for evaluation of acute right lower limb paresis and unsteadiness of gait. The cranial CT ruled out hemorrhage, and an MRI demonstrated an acute ischemic stroke in the cerebellar vermis (shown in Fig. 2). Further studies during this second hospitalization did not identify either intra-/extracranial arterial stenosis or an emboligenous cardiopathy. The patient fully recovered movement ability and strength and was discharged. It is important to underline that laboratory tests at the time of the skin lesions and the stroke did not show worsening of parameters compared to those at the time of the previous discharge. Discussion Coronavirus disease 2019 (COVID-19) infection is an ongoing pandemic, characterized by high morbidity and mortality, that emerged from Wuhan (China) in December 2019. It is well known that certain preexisting medical conditions such as hypertension, diabetes, heart disease, and obesity lead to a decrease in resistance, all of them being risk factors that make dialysis patients prone to severe SARS-CoV-2 infection and its complications. The infection rate and mortality in this group are much higher than in the general population; however, we have little information yet. The complications described in this case report are of special interest because they underline the higher risk of lethal complications in dialysis patients with COVID-19.
Those may be preceded by others less severe that should make us to be aware. Recent literature reported multiple neurological symptoms in COVID-19 including anosmia, hypogeusia, seizures, and strokes. In a retrospective study from Wuhan with 214 hospitalized COVID-19 patients, 5.7% of the severe affected individuals developed cerebrovascular disease later in the course of illness. In a study by Li et al. the incidence of stroke in COVID-19 patients was 5% with a median age of 71 years and an average time of onset of 12 days after SARS-CoV-2 infection diagnosis. These patients had more severe disease, hypertension, diabetes, and coronary artery disease. In a recent research letter by Beyrouti et al., six patients with moderate to severe SARS-CoV-2 infection presented a stroke between day 0 and 24 after the onset of symptoms. Lodigiani et al. published an incidence of 2.5% in a cohort of 388 COVID-19 hospitalized patients, despite the use of anticoagulant prophylaxis. Our patient developed an ischemic stroke 30 days after the diagnosis of SARS-CoV-2 infection. The time of presentation of this event varies in the different case series published. In one of the patients reported by Beyrouti et al. stroke preceded COVID-19 symptoms by 2 days, nevertheless according to other authors stroke emerged later on the course of the infection ranging until 24 days after the onset of symptoms. Stroke may be related to severity of the disease as most published cases had moderate to severe, severe or critical illness including our patient with a moderate impairment. This is easy to understand as we know that SARS-CoV-2 infection is linked to a prothrombotic and proinflammatory state with an important release of cytokines, endothelial cells activation and coagulation leading to thrombosis. Hypoxia may have an important role as well, but the exact pathophysiology behind these cerebrovascular accidents, especiallly in patients receiving antithrombotic prophylaxis, is still to be determined. Autopsy findings in lungs and kidneys suggest thrombotic microangiopathy as a possible mechanism of stroke. At the time of this writing, only Solomon et al. have published neuropathological autopsy findings in COVID-19 patients, even though none of their 18 individuals developed a stroke. Similar pathophysiological mechanisms may play a role in cutaneous involvement in SARS-CoV-2 infection, but there is not much information in this regard in the literature. Colmenero et al. have demonstrated the presence of SARS-CoV-2 in endothelial cells and epithelial cells of eccrine glands in 7 paediatric patients presenting with chilblains during the COVID-19 pandemic. Coronavirus particles were found in the cytoplasm of endothelial cells on electron microscopy. Besides microvascular thrombosis, some authors have proposed immune dysregulation, vasculitis, or neoangiogenesis as cause of dermatologic impairment. Reported prevalence of skin disease varies from 0.20% to 20% suggesting a potential underreporting of this complication. This is of crucial importance since it may be the first (even the only) manifestation of the infection. The time of the onset of cutaneous involvement is not well determined in literature ranging from 0 to 21 days (when described) after the onset of the typical symptoms. In our patient skin manifestations emerged 17 days after the diagnosis of the infection. He was treated although some authors describe complete recovery without any therapy. 
Erythemato-papular lesions found in our patient are not the most common in COVID-19, with an incidence around 20% as reported in different studies. Saglietto et al. published an incidence of cutaneous impairment of 20.4% in a cohort of 88 hospitalized COVID-19 patients in Lecco (Italy), with erythematous rash being the most frequent manifestation (15.9%). In a preliminary review by Jia et al. with a pooled total of 997 patients, chilblain-like lesions were the most common (40.1%), followed by maculopapular lesions (23.1%), urticarial lesions (21.8%), vesicular lesions (10.1%), and livedoid/necrotic lesions (2.3%). Seven patients from Wuhan (China) have been reported with acro-ischemia (toe cyanosis, skin bullae and gangrene), 4 of them with disseminated intravascular coagulation. Recalcati et al. observed 14 cases, including 11 children, with an acral eruption of erythemato-violaceous papules and macules. Lesions were localized on the feet in eight cases, on the hands in four cases, and on both sites in two. Bouaziz et al. published a case series of 14 COVID-19 patients from France. Inflammatory lesions were reported in 7 patients, exanthema in 4 cases, chickenpox-like vesicles in 2, and cold urticaria in 1 patient. Vascular lesions were reported in 7 patients (violaceous macules, livedo, necrotic/non-necrotic purpura, chilblain appearance, eruptive cherry angioma). Here we have described a case with two COVID-19-related complications not previously reported, neither alone nor together, in patients undergoing chronic dialysis (HD or PD). The temporal relationship with the COVID-19 pandemic, the rapid outbreak and clustering of unusual skin lesions, and the slight increase of ischemic strokes according to the growing evidence published from other affected areas strongly support the relationship. Considering that cutaneous manifestations may be the first or the only symptom of the infection, we should be aware of them to establish early infection control and quarantine measures. Heparin is a key treatment in SARS-CoV-2 infection; we know it saves lives from early infection to the hyperinflammation phase, but it may also help to reduce mortality risk in the post-infection period. We highlight the importance of defining an appropriate length of antithrombotic prophylaxis after successful recovery from COVID-19 pneumonia in individuals with preexisting cardiovascular conditions, especially in dialysis patients. Compliance with ethical standards Conflict of interest The authors have declared that no conflict of interest exists. Human and animal rights This article does not contain any studies with human participants or animals performed by any of the authors. Informed consent Informed consent was obtained from all individual participants included in the study. |
<reponame>sstanovnik/xopera-opera
from ..bool import Bool
from ..entity import Entity
from ..list import List
from ..map import Map
from ..reference import DataTypeReference
from ..string import String
from ..void import Void
from .constraint_clause import ConstraintClause
from .schema_definition import SchemaDefinition
from .status import Status
# TODO(@tadeboro): Unify ParameterDefinition and PropertyDefinition and add
# runtime checks. Refinement will be a PITA ...
class ParameterDefinition(Entity):
ATTRS = dict(
type=DataTypeReference("data_types"),
description=String,
required=Bool,
default=Void,
status=Status,
constraints=List(ConstraintClause),
key_schema=SchemaDefinition,
entry_schema=SchemaDefinition,
external_schema=String,
metadata=Map(String),
value=Void,
)
|
1. Field of the Invention
The present invention relates to a device and method for the prevention and/or treatment of osteoporosis, fractures of the hip, spine fractures, and/or spine fusions using inductively coupled electric fields generated by coils inserted into a patient's undergarments and powered by a portable power source.
2. Description of the Prior Art
As previously reported in U.S. Pat. No. 4,467,808, issued Aug. 28, 1984, and as reported in three published papers by Brighton et al (Bone, 6:87–97, 1985; J. Orthopaedic Research, 6:676–684, 1988; and J. Bone and Joint Surgery, 71A: 228–236, 1989) an appropriate capacitively coupled electric field prevented and/or reversed osteoporosis induced in the rat vertebra or tibia. It has also been previously reported in U.S. Pat. No. 4,535,775, issued Aug. 20, 1985, and in several published papers by Brighton et al (J. Trauma, 24:153–155, 1984; J. Orthopaedic Research, 3:331–340, 1985; and J. Bone and Joint Surgery, 67A: 577–585, 1985) that an appropriate capacitively coupled electric field increased the rate of healing in fresh fractures in the rabbit fibula and healed human fracture nonunions at a rate comparable to that of bone graft surgery (Clin. Orthop. and Related Research, 321: 223–234, 1995). Lastly, it was reported recently that an appropriate capacitively coupled electric field used as an adjunct to lumbar spinal fusion significantly increased the rate of fusion when compared to patients who had spinal fusion without electrical stimulation (Goodwin, Brighton, et al, Spine, 24:1349–1356, 1999).
All of the above studies used capacitive coupling, a method of noninvasively producing an electric field in tissues within the body such as bone and cartilage. Capacitive coupling, as used in those studies, requires the use of a pair of electrodes attached to the surface of the patient's skin adjacent to or near the location of treatment. Capacitive coupling is a very convenient, patient "friendly" method of applying electricity to the patient in the treatment of bone fractures, nonunions, bone defects, and localized lumbar spine fusions. However, capacitive coupling is not a practical way of treating multiple segment spine fusions or of treating a relatively large area. This limitation of capacitive coupling led the present inventors to invent a method and device for achieving the same internal electrical fields in vertebrae at multiple levels by using either multiple electrode pairs or by using strip electrodes, as described in U.S. Provisional Patent Application No. 60/302,846. The multiple electrode pairs or strip electrodes (one long electrode on either side of the spine) described therein are designed to be worn 24 hours per day and to be changed periodically for a treatment period of, e.g., 8–12 weeks.
The present inventors also set out to extend the techniques described in the above-mentioned patents and articles to the treatment of osteoporosis, fractures of the hip or spine, and/or spine fusions in humans. However, the present inventors soon discovered that transferring existing data to the application of electric signals to the human spine and hips in patients with osteoporosis, fractures of the hip or spine, and/or spine fusions was far from straightforward. A determination of the proper electric field amplitude and method of applying electricity to the patient for the treatment of osteoporosis, fractures of the hip or spine, and/or spine fusions needed to be developed. Capacitive coupling was substantially eliminated as a method of producing an electric field in the spine to treat osteoporosis because family assistance is required to apply the electrodes. Such assistance is frequently unavailable in this generally older patient population afflicted with osteoporosis and hip and spine injuries. Moreover, such patients typically will require the application of electric fields for months to years and possibly for the duration of the patient's life.
Accordingly, it is desired to develop an equivalent electric field in vertebrae and other bones and tissues, such as the hip, as achieved with capacitive coupling, except that it is desired to use only electromagnetic fields instead of capacitively coupled fields so that electrodes will not need to be applied to the patient. Inductive coupling devices will create the opportunity for the development of garments and the like that can be readily applied to the treatment area by the patient. The present invention addresses the features of such garments. |
Surface-enhanced Raman scattering nanodomes fabricated by nanoreplica molding We demonstrate a surface-enhanced Raman scattering substrate consisting of a closely spaced metal nanodome array fabricated on flexible plastic film. We used a low cost, large area replica molding process to produce a 2-dimensional periodic array of cylinders that is subsequently overcoated with SiO2 and silver thin films to form dome-shaped structures. Finite element modeling was used to investigate the electromagnetic field distribution of the nanodome array structure and the effect of the nanodome separation distance on the electromagnetic field enhancement. The SERS enhancement from the nanodome array substrates was experimentally verified using rhodamine 6G as the analyte. With a separation distance of 17 nm achieved between adjacent domes using a process that is precisely controlled during thin film deposition, a reproducible SERS enhancement factor of 1.37 × 10^8 was demonstrated. The nanoreplica molding process presented in this work allows for simple, low cost, high-throughput fabrication of uniform nanoscale SERS substrates over large surface areas without the requirement for high resolution lithography or defect-free deposition of spherical microparticle monolayer templates. |
Visualized Co-Simulation of Adaptive Human Behavior and Dynamic Building Performance: An Agent-Based Model (ABM) and Artificial Intelligence (AI) Approach for Smart Architectural Design Human (occupant) behavior has been a topic of active research in the study of architecture and energy. To integrate the work of architectural design with techniques of building performance simulation in the presence of responsive human behavior, this study proposes a computational framework that can visualize and evaluate space occupancy, energy use, and generative envelope design given a space outline. A design simulation platform based on the visual programming language (VPL) of Rhino Grasshopper (GH) and Python is presented so that users (architects) can monitor real-time occupant response to space morphology, environmental building operation, and the formal optimization of three-dimensional (3D) building space. For dynamic co-simulation, the Building Controls Virtual Test Bed, Energy Plus, and Radiance were interfaced, and the agent-based model (ABM) approach and Gaussian process (GP) were applied to represent agents self-learning adaptation, feedback, and impact on room temperature and illuminance. Hypothetical behavior scenarios of virtual agents with experimental building geometry were produced to validate the framework and its effectiveness in supporting dynamic simulation. The studys findings show that building energy and temperature largely depend on ABMs and geometry configuration, which demonstrates the importance of coupled simulation in design decision-making. Performance-Based Design and Challenges In recent years, a heightened awareness of energy and sustainability in built environments has remarkably changed the ways in which we design, construct, and think about building. A number of collective commitments to account for energy efficiency and high performance in architecture have fostered significant progress in terms of building production concepts, manufacturing, and detailing. In this architectural shift, the development of digital tools incorporates computerized performance simulation into the design process, which plays a vital role in bridging the conventional divide between design and engineering in architecture. Yi and Yi proposed an automated optimal three-dimensional space layout framework coordinated by performance simulation and simulated annealing. Lin and Gerber suggested the use of a multi-objective genetic algorithm and performance simulation for generative building form-making workflows. The visualized representation Although the uncertainties of PBD may be unavoidable, we should try to close the gap between simulation and reality-or even create more realistic virtual outcomes than reality-by leveraging tighter data networking of heterogeneous datasets generated from different sources or introducing high-level computational control of simulation processes. In this attempt, preliminary data can be collected to localize environmental inputs while missing information can be estimated using the appropriate statistical methods or machine-learning techniques. In terms of PBD, building is a highly complex and engineered environmental system; therefore, it is very challenging to evaluate all performance criteria simultaneously. The core engines of BPS are specialized for different purposes and require an understanding of heterogeneous algorithmic procedures. Thus, the use of a single simulator provides partial information. 
Co-simulation is a holistic approach that can overcome this issue via the modular composition of different simulators or the hybridization of algorithms. Co-simulation is not a new idea. There have been many efforts to combine heterogeneous BPS components, subsystems, and algorithms towards a unified simulation scheme, and several techniques to exploit the interoperability of multiple simulations exist in the context of PBD processes. Hansen first attempted to synchronize computational fluid dynamics (CFD) and energy simulation based on the Gauss-Seidel feedback pattern. Wetter developed a software environment named Building Controls Virtual Test Bed (BCVTB) that can activate user-driven external intervention in the simulation process. Hong et al. suggested a cyber-physical system programming method based on the functional mock-up interface (FMI) that can be used in building information modeling (BIM). The most straightforward of these techniques involves running existing simulators successively and making follow-ups on demand. This is more advantageous than real-time processing, but this method can become inefficient if it requires high-level expertise to interpret information and extra elaboration to put together different types of data in every iteration. An alternative approach is to create a design simulation platform, a sort of virtual environment enabling a collective exchange of data, models, criteria, and decisions based on a standard protocol for simulation interoperability. This does not intend to bifurcate the architectural process into design and analysis but rather to integrate information exchange through multipath input/output (I/O) control systems. This approach has the potential to widen the horizon of BPS' roles and functions extensively, increasing the flexibility of individual expert domains. For architects and early-stage PBD, a co-simulation platform must serve to enhance design pursuits and PBD's process-driven workflow, where visual narratives of design and simulation become a design product. However, two major issues arise in this scheme: data synchronization and the development of information feedback loops. BPS is often overwhelmed by large quantities of data streaming, and design decisions may be misleading unless system models are appropriately defined. Excessively complex procedures to preprocess/refine building data may significantly delay the prototyping of design alternatives. The resolution of performance analysis must be uniquely adjusted to intensify design power. Additionally, the organized reduction of complex variables to a set of parameters is necessary to quickly feed calculation results. These tactics must be able to synchronize all data flows in a unified automation process; thus, mathematical algorithms, data analytics, and parameters are blended in design pipelines. Similar design-integrated approaches to PBD have been studied under the names of building information modeling (BIM) and computer-assisted design (CAD) systems. However, most existing studies employ design simulation to retrofit existing buildings rather than to create new buildings from scratch. Funneling large amounts of data in different formats and streamlining algorithms and simulation engines into a few select design parameters as design agents are key in seamlessly interweaving design automation, rapid simulation, and optimal decision-making. 
The successful regulation of a synchrony of simulators through an agent-based architectural platform is promising for iterative PBD analysis and solution-seeking. Detailed performance analytics can enable the unified and rapid interconnection of design and analysis. Agent-Based Model (ABM) for PBD Known also as agent-based simulation (ABS) or individual-based modeling (IBM), the ABM is a computational method for modeling and analyzing complex systems and processes such as cellular structures, human cognition, or market networks of suppliers and buyers. "Agent" is anything that refers to a discrete model entity (or a rule/attribute-based actor) involved in changing a system's organization, appearance, or phenomena. Based on system design, this could be a single person, a cell, a product, or any other entity engaged in system dynamics. Each agent operates in a system with specific attributes and behavioral rules. ABM is a bottom-up approach because it is usually employed to examine overall system change by configuring autonomous individual actions and agents' decisions at a microscopic level. In particular, if a governing function or norm-authorizing system control is unclear and difficult to observe, this approach provides an effective solution to examine the collective impact on a system as a whole. An ABM comprises three components: (i) agents (humans), (ii) a set of rules defining agent behavior, and (iii) a set of model setup parameters (a framework for simulation). Each of these is interrelated to assist in the construction of different model layers or a network of functional modules. In this regard, ABM is a very natural representation of a real system because actual, real-world phenomena are often convoluted, heterogeneous, and challenging to define with some regulatory parameters. ABM has been applied widely across various areas, e.g., biology, urban planning, social science, and economics, to name a few. In architecture and related areas, the ABM or agent-based approach benefits the simulation of human-building interactions, as it is able to deal with many arbitrary interventions engaged in dynamic behavioral relationships in terms of effect-to-cause reasoning. Lee and Malkawi proposed a multi-agent model framework of energy-conscious building occupant behavior, and space syntax analysis (SSA) coupled with human agent simulation was found to be effective in characterizing the interaction between human behavior and the physical setting of built environments. Nevertheless, it is still challenging to incorporate ABM seamlessly into the architectural design process, especially in terms of the generative design of performance-based adaptive building geometry. It is noticeable that Gao and Gu suggested agent simulation at a building level, and Andrew et al. attempted to model spatial occupant behavior in a building plan regarding artificial lighting control. However, in most building studies, the use of ABM targets the application of mechanical systems and energy use. SSA is mainly adopted on urban scales, exposing some limits in that it supposes completely purposive human movements and their complete knowledge of where they traverse in a static environmental context. There are few or no ABM studies that attempt to address architects' concerns about PBD, i.e., generative geometry design, dynamic performance simulation, and visualized representation of process and outcomes.
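A minimal sketch of the three ABM ingredients listed above — (i) agents, (ii) behavioral rules, and (iii) model setup parameters — is given below; the occupant attributes and the comfort rule are hypothetical examples, not values taken from the study.

```python
import random
from dataclasses import dataclass

@dataclass
class Occupant:
    """(i) Agent: a discrete model entity with its own attributes."""
    name: str
    comfort_temp: float   # preferred air temperature [deg C] (illustrative)
    tolerance: float      # accepted deviation before the agent reacts [K]

    def step(self, room_temp: float) -> str:
        """(ii) Behavioral rule: adapt when the room leaves the comfort band."""
        if room_temp > self.comfort_temp + self.tolerance:
            return "open_window"
        if room_temp < self.comfort_temp - self.tolerance:
            return "raise_setpoint"
        return "no_action"

# (iii) Model setup parameters: population size, time steps, environment series.
agents = [Occupant(f"agent_{i}", random.uniform(21, 25), 1.5) for i in range(10)]
for hour, room_temp in enumerate([20.0, 23.0, 27.0]):
    actions = [a.step(room_temp) for a in agents]
    print(hour, room_temp, actions.count("open_window"), "agents open a window")
```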
Simulation of Human Behavior in PBD Human behavior is a moving target for architectural design and among the most immediate yet uncertain factors in PBD. Thus, many researchers have worked hard to incorporate behavioral influences and parameters into design and analysis. For instance, the work of Breslav et al. presented a visualization of occupants' spatial movements as well as their perception patterns through digital building modeling. Nagy et al. leveraged crowd simulation techniques and space syntax to develop a generative space layout method. Moreover, there is growing recognition in the study of BPS that occupant behavior is a dominant component of building energy use [26]. Unfortunately, owing to the gaps in research between the fields of architecture and engineering, few PBD studies focusing on a unified approach to human behavior for both design and simulation have been published. Although there has been significant agreement on the need for behavior-centric approaches to building sustainability in architecture, human behavior has yet to be fully elucidated in the practice of PBD. Architects primarily seek to represent human activities and their spatial positioning graphically, and their impact on building energy and thermal comfort has not been fully explored via simulation. In contrast, on the BPS side, only the operation schedules of mechanical systems account for human behavior. Engineers focus on non-visual and numerical representations of occupants, which requires a certain level of scientific expertise. At any rate, an adaptive behavioral response to improve thermal comfort and the variability of space geometry to environmental changes has not been fully considered for the general performance assessment process and formal design development. Therefore, human behavior modeling for PBD raises cross-disciplinary issues. PBD should be extended to be able to associate behavior-driven building system operation with geometry design and space allocation. Although it is challenging to predict bodily motion due to its random (unreasoned) nature, some habitual behavior scenarios of indoor space use can be established based on a situation-specific survey (of occupants' preferences, energy consumption patterns, and so on) for the purpose of design simulation. At the same time, the ambiguity of quantitative modeling must be clarified with performance-related behavioral definitions so that PBD results regarding sustainability are not biased. Visualized Simulation of Adaptive Building Geometry, Design Automation, and Optimization Generative adaptive form-making in PBD should be fueled by a clear strategy to build up robust data feedback for design automation and optimization. Performance criteria, environmental constraints, and design criteria must be coherently programmed/scripted in a visualized parametric design simulation framework. For the full integration and automation of this framework in architectural design practice, this study employs the visual programming language (VPL) of Grasshopper (GH) for Rhino® (Robert McNeel & Associates, USA), which is among the most popular and widely-used pieces of digital architecture software, to configure an automatic form-making process driven by BPS and optimization. Using this tool, adaptive design candidates are populated using built-in GH components and Python (IronPython) scripts, according to a predefined optimization convergence rule and fitness that represent BPS results.
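One possible way to encode such survey-based behavior scenarios so that both the design model and the simulator can consume them is a simple schedule table, sketched below; the agent groups and presence probabilities are placeholders rather than survey results from the study.

```python
# Hypothetical occupancy scenario: hourly presence probability per agent group,
# which in a real project would be derived from a situation-specific survey.
OCCUPANCY_SCENARIO = {
    "office_worker": {h: (0.9 if 9 <= h < 18 else 0.05) for h in range(24)},
    "shift_worker":  {h: (0.8 if h < 8 or h >= 20 else 0.1) for h in range(24)},
}

def expected_occupants(group_sizes: dict, hour: int) -> float:
    """Expected number of occupants at a given hour under the scenario."""
    return sum(n * OCCUPANCY_SCENARIO[g][hour] for g, n in group_sizes.items())

print(expected_occupants({"office_worker": 20, "shift_worker": 5}, hour=10))
```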
The VPL-based PBD offers a flexible design simulation mock-up for both architects and analysts because their functions and systems can be customized as necessary and compiled to create new language components. GH-VPL modules with scripted feedback loops to control data flow can offer a synchronous workflow of geometry modeling, simulation, and optimization. Results can then be used promptly to update design solutions and performance analysis reports. For large projects with heavy computational loads, data storage can be aided by sharing cloud servers, data networking, or adopting parallel computing techniques. Recent developments of GH-VPL provide an advanced programming environment through the Python editor, and scientific data processing techniques served by Python libraries and packages, such as multi-objective optimization or machine-learning algorithms, can be adapted to visualized design process, which will eventually make the design process more intelligent and produce high-quality PBD solutions. Figure 1 represents the early-phase architectural PBD framework proposed in this study. This scheme features an experimental design simulation integrated environment to support virtual real-time energy monitoring and reporting of BPS results. The developed computational platform is targeted at an initial phase of architectural projects, where users (architects) need to rapidly compare different building forms. Users can visually identify how dynamic formal changes of space potentially influence human behavior (space occupancy) that is actively engaged in energy-efficient operation and, conversely, how they work to optimize building forms interactively. This framework intends to support flexible and user-friendly building design practice through VPL, enabling the immediate visualization of the evolution of building geometry and behavior. Based on this workflow, GH-VPL can be further customized for different occupancy scenarios. In the suggested workflow, ABM and the artificial intelligence (AI) of the Gaussian process offer an algorithmic solution for behavior modeling that ensures the uninterrupted involvement of heterogeneous data sources in the streamlined form-finding process ( Figure 1). Development of a Visual User Interface (VUI) The VUI was made with a VPL and GH and integrated with a BPS engine, EnergyPlus (EP), and an ABM model (Figures 2-4). It features a dashboard, geometry generation component, and integration of AI modeling using Python. The GH, one of the most popular VPLs in architecture, provides a graphic icon editor for the parametric building design process. Since this is a plugin of the CAD tool that supports complex geometric manipulation, it enables the design-oriented modeling of complex building systems as well as customized specification of the form-making process according to parameters set by designers. It also makes it possible to establish a seamless comprehensive model of computation through the co-simulation of a number of distinct analysis tools. However, despite such GH versatility in tool integration, it essentially offers a single option when it comes to data flow control. A scripted process is executed a single time through a synchronous data pipeline whenever an input parameter is available. To facilitate data-loop communication and multithread processing among different modules, a data flow director was coded using Python. This data flow director can control data connection and pathways by scheduling the rate of exchange. 
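The scheduling role of the scripted data flow director described above — deciding when an input module re-reads results written by an external program — might look roughly like the following; the CSV layout, class name, and polling interval are assumptions for illustration and not the actual Grasshopper/BCVTB implementation.

```python
import csv
import time
from pathlib import Path

class DataFlowDirector:
    """Polls a CSV file written by an external simulator and hands new rows
    to a consumer callback at a fixed exchange rate (hypothetical layout)."""

    def __init__(self, csv_path: str, exchange_interval_s: float = 5.0):
        self.csv_path = Path(csv_path)
        self.exchange_interval_s = exchange_interval_s
        self._rows_seen = 0

    def poll_once(self, consumer) -> None:
        if not self.csv_path.exists():
            return
        with self.csv_path.open(newline="") as handle:
            rows = list(csv.DictReader(handle))
        for row in rows[self._rows_seen:]:   # forward only results not yet consumed
            consumer(row)
        self._rows_seen = len(rows)

    def run(self, consumer, n_cycles: int) -> None:
        for _ in range(n_cycles):
            self.poll_once(consumer)
            time.sleep(self.exchange_interval_s)

# Example consumer: push the latest zone temperature back into the design model.
# director = DataFlowDirector("ep_results.csv")
# director.run(lambda row: print("zone temp:", row.get("zone_temperature")), n_cycles=3)
```

Inside the GH environment the consumer callback would update component outputs rather than print, so downstream geometry and dashboard components refresh whenever new simulation rows arrive.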
Data from external programs are stored in a CSV format, and the director commands an input module indicating when and how to read the data.

BPS coupled with automatic form-testing is used to evaluate energy use and daylight space in buildings. Although it is a critical component for gaining environmental information on occupant feedback, there is a significant limitation in the use of EP for this purpose because EP's basic simulation setting is inflexible. More specifically, it runs all the way through once a simulation begins according to the initial setup. Thus, it informs design only once when each simulation run is done over a specific period, not responding to parameter variation during each time window. EP supports external intervention on some simulation parameters while in a run through an extended setting. Accordingly, the BCVTB was encoded in this VPL interface (Figure 2). Once the user draws a curve or a polyline in Rhino as a building footprint outline, which is the most primitive input in the earliest design phase, the interface recognizes it. A building form generated with random fenestration (windows and doors) over exterior walls is converted to an EP geometry input format. Then, the script automatically triggers the EP simulation and the BCVTB. This initialization generates human figures in different positions, and the process visualizes the adaptive evolution of a building form and occupant behavior to optimize the form and energy use. Finally, a graphic dashboard is installed to present real-time simulation results.

ABM Development for Space Occupancy and Cognitive Agent Behavior

In this study, agents represent rational human building occupants that randomly belong to one of three social categorizations, Group A, B, and C, and each agent group has its own behavioral characteristics according to predefined attributes influencing energy-related building operation and room occupancy. The studied ABM consists of four behavior-response modules: (i) social cognition, (ii) thermal response, (iii) positioning in space, and (iv) reaction. An agent's attributes are represented in two different condition-action (RA) layers that respectively characterize (i) preferences in terms of room occupancy and social interactions and (ii) thermal sensation and controllability. For the first RA layer, a set of specific agent rules (R.1~R.6) is defined to program agents' cognitive behavior. Agents think, act, and decide based on these reflex rules. Basically, they are self-cognitive, but those within the same group are not supposed to communicate with one another. Individual decisions of agents made by the condition-action rules take on collective behavior in space. For mathematical modeling, we find that the rule definitions involve some ambiguity in their language, such as "most" or "prefer". It is almost impossible, admittedly, to specify different personal preferences or mindsets uniformly without any uncertainty. That said, to make this design experiment as scientifically rigorous as possible, some rules are described with probabilistic formulas in the ABM encoding.
Assuming that the degree of compliance with the rules is normally distributed, our belief in specific behaviors depends on random variables. Per execution of these rules, a random number from 0 to 1 is sampled before positioning an agent. "Most" signifies that an agent will act if the random number is within the interquartile range (IQR, the middle 50%). Within this VPL interface, the behavior rules are integrated with geometry optimization. During BPS, all the information gained from the ABM is fed into a form optimization process. This experiment employed adaptive pattern-search hybridized simulated annealing (T-APSSA), which can locate a global optimum in a discrete search space within a smaller number of iterations compared to SA or a genetic algorithm (GA). The pseudocode (Figure 5) shows how the optimized geometry is obtained in conjunction with the simulation of the ABM-based human-building interaction.

[Figure 5. Pseudocode for the ABM-based geometry optimization algorithm. Input: building geometry G(p) with the numbers of windows and doors n_w and n_d, the number of agents in each group (A, B, C) a_n, b_n, c_n, and the initial heating and cooling set-point temperatures t_h, t_c. Output: geometry G(p)* such that total energy use (E*) is minimized.]

Moreover, for the second RA layer, we assumed that every agent has its own thermosensitivity. Each individual is likely to have a different optimal level of thermal comfort. To explore how agents' thermal senses influence space temperature change and energy use, we added the following thermal control scenarios (S.1~S.3):

(S.1) Air-conditioning systems operate from 8 a.m. to 6 p.m. During operation, building users have full access to thermostat control. The systems are designed to have dual set points for heating and cooling.

(S.2) Group A is sensitive to slight over-heating. If their skin temperature increases above 33 °C, and the ratio of the number of Group A agents to total occupants is greater than 0.5, the agents will change the set point temperature of the cooling equipment to 28 °C.

(S.3) Group C is sensitive to over-cooling. If their skin temperature drops below 32 °C, the cooling equipment will be turned off.

These scenarios were tested in a hot local climate (Miami, USA) using an electric heat pump with a seasonal energy efficiency ratio (SEER) of 14 (system sizing was automatically set by EP). Agent action along with the random fenestration design of a room shape produced different thermal environments. The adaptive space thermal environment was monitored every 15 min, which is the time step subject to the resolution of the EP simulation.
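As an illustration only (the authors' Grasshopper/Python scripts are not reproduced in the paper), the probabilistic reading of "most" and the scenario logic above could be encoded roughly as follows; the thresholds follow S.1-S.3, while the data structures and function names are assumptions.

```python
import random

def acts_like_most():
    """'Most' rule: act if a uniform draw falls in the middle 50% (the IQR)."""
    return 0.25 <= random.random() <= 0.75

def update_cooling_setpoint(agents, setpoint_c, hour):
    """Apply scenarios S.1-S.3 for one 15-minute time step.

    agents: list of dicts like {"group": "A", "skin_temp_c": 33.4}
    Returns (new_setpoint_c, cooling_on).
    """
    if not (8 <= hour < 18):              # S.1: systems operate 8 a.m.-6 p.m.
        return setpoint_c, False

    n_total = len(agents)
    group_a = [a for a in agents if a["group"] == "A"]
    group_c = [a for a in agents if a["group"] == "C"]

    # S.2: if Group A is over-heated and makes up more than half of the occupants,
    # the cooling set point is lowered to 28 C
    overheated_a = any(a["skin_temp_c"] > 33.0 for a in group_a)
    if overheated_a and n_total and len(group_a) / n_total > 0.5:
        setpoint_c = 28.0

    # S.3: any over-cooled Group C agent switches the cooling equipment off
    if any(a["skin_temp_c"] < 32.0 for a in group_c):
        return setpoint_c, False

    return setpoint_c, True
```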
Realistically estimating personal thermal sensation involves a highly complex physical relationship between various factors such as air velocity, humidity, and radiation, among others. As a design game, this experiment set a limit on complexity, focusing more intensely on the relationship between room geometry and occupant behavior. Skin temperature (T_skin) was estimated as an average temperature of local body parts and represented as a function of the personal clothing insulation level (clo) and the zone mean air temperature (T_room). To render this formulation as a stochastic characteristic of each agent, the clothing level was represented approximately with a probabilistic (normal) distribution, setting the other simulation parameters to constant values (Table 1).

Agent Positioning Model: Gaussian Process Classifier

In this ABM framework, we assumed that agents are active and intelligent enough to find the best spatial positions for themselves. They are short-memory entities characterized by the Markov property. Individuals remember their specific spatial positions at the present time step and use them to move to the next place. To make decisions about their own movements, agents use a Gaussian process classifier (GPC) to identify positioning suitability based on the information of geometrical room characteristics and other agents' space occupancy.

The Gaussian process (GP) is a machine-learning extension of the multivariate Gaussian in which the probability function values correspond to random output. The GP assumes that the output data are jointly normally distributed using a non-parametric Bayesian model. Posterior inference is made without prescribed model parameters but only by random functions drawn from the prior. The GP makes sense in cases where it is reasonable to assume normally distributed data association for inference, especially when predictions must be made with little information about the data. Suppose that we have a dataset D of n observations, D = {(x_p, f_p) | p = 1, ..., n}, where x_p is an m-variate input vector, X ∈ R^{n×m}, and f ∈ R^n denotes the target outputs of a function f(x). Given the training dataset, the GP makes a prediction for a new input x_*. The joint distribution of the training outputs, y, and the test outputs, y_*, is represented using the covariance matrix K such that

\[
\begin{bmatrix} y \\ y_* \end{bmatrix} \sim \mathcal{N}\!\left( \mathbf{0},\; \begin{bmatrix} K & K_*^{T} \\ K_* & K_{**} \end{bmatrix} \right),
\]

where y = f(X) and y_* = f(X_*), and K = K(X, X), K_* = K(X_*, X), K_{**} = K(X_*, X_*). For the inputs, the three types of covariance matrices are built element-wise from

\[
[K]_{pq} = k(x_p, x_q), \qquad [K_*]_{p} = k(x_*, x_p), \qquad K_{**} = k(x_*, x_*),
\]

where k is the covariance or kernel function used to evaluate the similarity between data points. The kernel is the core ingredient of a GP and what makes it different from the general multivariate normal distribution. Rather than relying on a linear product of the deviations of input pairs x and x_*, the GP calculates joint probability or covariance by mapping the inputs into an implicit feature space using kernel functions. One of the most frequently used GP kernels is the squared exponential, also known as the radial-basis function (RBF),

\[
k(x, x_*) = \sigma_f^2 \exp\!\left( -\frac{\lVert x - x_* \rVert^2}{2\ell^2} \right),
\]

where ‖x − x_*‖² is the squared Euclidean distance between the two feature (input) vectors, ℓ is the length scale, and σ_f² is the variance of the output f. In the kernel, σ_f is a scale factor by which the covariance is limited to σ_f² at maximum. For multivariate input, a kernel value between two vector points can be obtained by summing the squared differences over the m input dimensions,

\[
k(x, x_*) = \sigma_f^2 \exp\!\left( -\frac{1}{2\ell^2} \sum_{d=1}^{m} (x_d - x_{*,d})^2 \right).
\]

From the joint distribution and the kernel definitions above, the conditional probability of y_* given y is expressed as

\[
p(y_* \mid y) = \mathcal{N}\!\left( \bar{y}_*,\; \operatorname{var}(y_*) \right),
\]

with ȳ_* = K_* K^{-1} y and var(y_*) = K_{**} − K_* K^{-1} K_*^{T}. Based on the above GP regression scheme, a Gaussian process classifier (GPC) can be obtained by mapping f_* through a sigmoid function σ(·), such as the logistic function; i.e., π_* = σ(f_*) = σ(y_*).
The expected mean value of the probability of class membership is then expressed as

\[
\bar{\pi}_* = \int \sigma(f_*)\, p(f_* \mid X, y, x_*)\, df_*.
\]

The GPC provides a prediction to describe building occupant behavior probabilistically with little prior information. The GPC is also efficient at predicting continuous time-series data of unknown occupant activities. For supervised GP modeling, building room designs with different types of space occupancy were generated to create a dataset. As Figure 6 shows, seven features representing space geometry and distances between groups (x_p1-x_p7) were scaled to a common range and matched with binary classification labels (0: not occupiable, 1: occupiable).

On top of this, a simple space transition rule can then be applied. It is assumed that each agent decides whether to stay or to leave at any one time step. Then, we have a state space with two labels, {1: stay in the room, 2: leave the room}. Introducing the probability of a transition pair based on a hypothesis, a matrix for occupancy determination, A = (a_ij), can be prepared, where a_ij is the probability of an agent being in state i after being in state j. Combining the above rules and model parameters, this eventually characterizes a sort of simple "model-based reflex agent". According to spatial and environmental information, agents' perceptual activities to maintain specific occupancy states are driven by the GPC model. Unobserved aspects of occupancy are reflected with the establishment of the AI model. The whole ABM scheme is illustrated in Figure 7.
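Since the paper reports implementing the GPC with scikit-learn (see the results below), a stripped-down version of the positioning-suitability classifier can be sketched as follows. The seven geometric/occupancy features are reduced here to a synthetic two-feature toy set, and the labelling rule and all parameter values are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy training set: features scaled to [0, 1]
# x1 ~ normalized distance to the nearest window, x2 ~ local crowding level
X_train = rng.uniform(size=(200, 2))
# Illustrative labelling rule: a spot is "occupiable" (1) when it is
# reasonably close to a window and not too crowded
y_train = ((X_train[:, 0] < 0.6) & (X_train[:, 1] < 0.5)).astype(int)

# Squared-exponential (RBF) kernel with an output-scale factor, as in the equations above
kernel = ConstantKernel(1.0) * RBF(length_scale=0.3)
gpc = GaussianProcessClassifier(kernel=kernel, random_state=0)
gpc.fit(X_train, y_train)

# Candidate positions for one agent at the current time step
candidates = np.array([[0.1, 0.2],   # near a window, uncrowded
                       [0.8, 0.1],   # far from windows
                       [0.3, 0.9]])  # crowded
p_occupiable = gpc.predict_proba(candidates)[:, 1]
best = candidates[np.argmax(p_occupiable)]
print(p_occupiable.round(2), best)
```

In the full framework, the predicted suitability would then be combined with the stay/leave transition probabilities a_ij before an agent is actually moved.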
Gaussian Process Prediction Results

The GPC was integrated within the VUI through a Python editor, using the Python machine learning library scikit-learn 0.20.3. A GPC GH module is triggered every simulation hour through the BCVTB. To develop the GPC, a training dataset based on the agent rules was created and tested with a simple random building plan. Figure 8 presents the trained GPC's test results and the status of agents' space recognition visually mapped over the building plan. Figure 8a represents a situation with a single agent of each group in the space, while Figure 8b shows a crowded situation with the agents. These results reveal that the ABM behavior rule was successfully implemented through the GPC model.

[Figure 8. Predicted potential moves. Annotations: most of Group A (blue) would like to stay with B (pink); when the space gets hot, Group B would not like to stay with C (green); Group C prefers to enjoy daylight around the windows.]

Visualized Model Outcome: Generation of Building Form and Space Occupancy

In the visualization through GH, the geometry's surface color changes depending on room temperature; meanwhile, agents appear in Rhino in different colors according to their groups (blue: Group A; pink: Group B; green: Group C) for every discrete time step (Figures 9 and 10). Once a user draws a planar line/curve to design a space, a 3D building model with random window/door designs is generated and the ABM simulation is executed. During simulation and optimization, agent movement and geometry changes occur simultaneously, maintaining design constraints and ABM rules. At the initial stage, human figures are randomly spread out over the building floor within a space outline, and their positions are recorded in the ABM memory.
Analysis of BPS Results

The results of the EP simulation (space (room) temperature and energy use) are displayed on a graph dashboard in the VPL interface (Figure 2) to enable the user to monitor the dynamic ABM simulation process. As a pilot test, the ABM was simulated with the geometry shown in Figure 10. Like any public space, this space has no specific function, and the agent rules only govern how to use the space. The total floor area is 750.58 m², and the number of agents in each group was set to 25 so that the maximum occupancy load would not exceed 10 m² per person, which is the limit for a typical office. To test a space's adaptability under extreme conditions, the simulation period was set to 24 h on the day of the summer solstice (21 June). Figures 11 and 12 present the test results. Figure 11a shows large fluctuations of space occupancy due to the GPC and the agents' stochastic transition rule. The average outdoor temperature in the morning and during the nighttime was around 27.6 °C, while the unconditioned room temperature reached 4-5 °C higher than this. During system operation (S.1), it was possible to identify variations in the thermostat set points according to the agent rules (R.1~R.6) and scenarios. As a thermostat setting changes, room temperature dynamically changes at each time step. The ABM does not provide a single solution; instead, due to its randomness, there are as many solutions as the number of simulation runs. Figure 11b plots four different results of the ABM simulation. In Figure 12, compared to constant thermostat control (non-ABM), the ABM demonstrates that active occupant engagement can ensure energy-saving potential with flexible system operation. While the simulation result only considering variable occupancy (gray line in Figure 12) does not show a major difference, the end energy use of the ABM simulation noticeably increased from 0.24 GJ (0.32 MJ/m²) to 0.32 GJ (0.43 MJ/m²). Although the occupant-driven operation did not lead to energy reduction in this case, this result indicates that energy-conscious behavior would benefit building energy use. Moreover, the suggested interface will be helpful in testing dynamic energy performance in advance at any stage during design.
Concluding Remarks

Currently, advanced simulation techniques regarding human behavior call for a transformative approach to sustainable building design methodology. This is because conventional approaches to sustainable building design primarily emphasize time-tested evidence and in-depth engineering pursuits that are invisible to architects. Although early design phases must be sufficiently informed by real-world parameters and design concerns, the existing tools and methods are not fully supportive of architectural practice. An important way to make PBD more accurate is to couple the process of single-purpose simulation with real-world parameters to strengthen the responsive (feedback) mechanism between a digital environment and actual representation. Adding "real" properties to BPS will not only greatly improve the reliability of simulation outcomes but also help users to envision new possibilities of BPS application to environmental architectural design, for example, real-time accessibility and prediction of building conditions and behavior, building operation resilience, or preemptive building diagnosis.
In order to propose a more interactive and dynamic PBD process, this study suggested a design-oriented multi-dimensional PBD system using digital building modeling coupled with artificial behavior imitation and simulated data analytics. To this end, the study has pioneered the dynamic integration of ABM and BPS through the parametric interfacing of an automated design process using GH as a VPL, demonstrating that coupling PBD with human behavior can serve as a novel architectural design process. The study's results showed that this ABM framework can significantly make design more intelligent and strengthen the power of simulated worlds beyond mere technical merits. The ABM-based, decision-support interface proposed herein can help designers easily access building thermal information and quickly explore various design options through seamless automation. Moreover, the visualized thermal environment and occupant behavior in a given space can focus our attention on feedback loops between a built form and usage patterns as well as the importance of human-building interactive design to achieve sustainability. The study's proposed ABM is based on building users' realistic attributes, but the behavior rules employed here were hypothetical. Thus, the different agent scenarios must be validated through comparison with actual social human interaction. To improve data quality and outcome precision, an in-depth analysis of actual building use patterns is necessary. Further studies might employ questionnaire surveys or monitoring of human activities to develop a more robust agent model. Tether-free data communication, sensing of physical systems, remote visualization, and self-adjustment in data transmission related to cyber-physical system (CPS) integration can also be taken into consideration in future research. |
package org.dice_research.opal.doc;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;
/**
* Utilities.
*
* @author <NAME>
*/
public class Utils {
/**
* Reads JSON from URL.
*
* @see https://stackoverflow.com/a/4308662
*/
public static String readJsonFromUrl(String url) throws IOException {
InputStream is = null;
BufferedReader rd = null;
try {
is = new URL(url).openStream();
rd = new BufferedReader(new InputStreamReader(is, Charset.forName("UTF-8")));
StringBuilder sb = new StringBuilder();
int cp;
while ((cp = rd.read()) != -1) {
sb.append((char) cp);
}
return sb.toString();
} finally {
if (is != null)
is.close();
if (rd != null)
rd.close();
}
}
/**
* Writes string to file.
*/
public static void write(File file, String data) {
try {
FileUtils.write(file, data, StandardCharsets.UTF_8);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
} |
WASHINGTON, April 25 - Republican and Democratic Senate leaders promised on Tuesday to work together to revive the sweeping immigration bill that was killed nearly three weeks ago amid partisan bickering on procedural grounds.
Emerging from a meeting at the White House with President Bush late Tuesday afternoon, Senator Bill Frist, the Republican leader, and Senator Harry Reid, the Democratic leader, said they were confident they could resolve their differences and get a bill passed by Memorial Day.
But participants said that even at the meeting Mr. Frist and Mr. Reid argued over terms for breaking the stalemate, which essentially comes down to how many amendments can be added to the bill. And even should the Senate pass the compromise, which includes provisions for border security, a guest worker program and options for citizenship, it would face stiff opposition in the House.
Representative John A. Boehner, Republican of Ohio, the House majority leader, said Tuesday that he opposed the Senate legislation, which would put a vast majority of illegal immigrants on a path to citizenship.
"I don't think that would be supported by the American people," Mr. Boehner said of the Senate bill while speaking with reporters.
The senators who met with Mr. Bush did not address Mr. Boehner's comments, and said they remained confident that an immigration bill with the stiffer border protections that conservatives want and a path to legalization and even citizenship for illegal workers here would become law by the end of the year.
"In the very near future, we will bring that bill back on the floor of the United States Senate," said Mr. Frist, of Tennessee.
Mr. Reid, of Nevada, a regular critic of Mr. Bush, praised the president for bringing the two sides together and said, "Senator Frist and I have to work out a way to handle the procedural quagmire that the Senate is, and we're going to try to do that."
Senator Arlen Specter, Republican of Pennsylvania, one of about a dozen senators at the meeting, said the session was significant because Mr. Bush seemed to be talking about "not automatic citizenship but the path to citizenship," though Mr. Boehner said that he had no stomach for such a provision and that he hoped Mr. Bush would ultimately not support the Senate bill.
Though senators from both parties said afterward that Mr. Bush seemed to support the bill in principle, they said he did not flat out endorse it.
Senator Edward M. Kennedy, Democrat of Massachusetts, said in an interview that Mr. Bush would have to take a more forceful stand to clear up any ambiguity, but White House officials have said that he is trying to stay away from endorsing any one piece of legislation so he can help broker compromise with the House, which is pushing a bill focusing only on enforcement.
Under the Senate plan, illegal immigrants who have lived in the United States for five years or more would eventually be granted citizenship if they remained employed, had background checks, paid fines and back taxes and learned English.
Illegal immigrants who have lived here two to five years would have to travel to a United States border crossing and apply for a temporary work visa, but they would also be eligible for permanent residency and citizenship over time. Illegal immigrants who have been here less than two years would have to leave the country, though they could apply to be in a temporary worker program.
The Senate bill collapsed amid partisan feuding over whether conservatives should be allowed to offer amendments to the legislation.
Republicans, along with Mr. Kennedy, insisted that votes should be allowed on amendments. Mr. Reid blocked votes on the amendments, saying they were intended to gut the legislation.
The Democratic leadership has insisted on assurances that Democratic members of the Judiciary Committee could serve as negotiators with the House over a final bill, an approach that Mr. Frist has rejected.
Senator Mel Martinez, Republican of Florida, said Tuesday that Mr. Reid and Mr. Frist had a brief argument over the terms as Mr. Bush sat between them. A senior Senate aide who was briefed on the meeting afterward said the argument came to a close with Senator John McCain, Republican of Arizona, pointing to such partisan bickering as an example of what is wrong with Washington. Mr. Martinez said afterward that "they just were back and forth" and that "the meeting moved to a loftier discussion after that."
Mr. Martinez added that the president said, "You know, I don't know what you all do over there, but I understand the difficulty now." |
// /controls/parameters/FiO2Parameter.qml
namespace QmlCacheGeneratedCode {
namespace _controls_parameters_FiO2Parameter_qml {
extern const unsigned char qmlData alignas(16) [] = {
0x71,0x76,0x34,0x63,0x64,0x61,0x74,0x61,
0x20,0x0,0x0,0x0,0x8,0xc,0x5,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xb4,0xb,0x0,0x0,0x31,0x30,0x31,0x37,
0x39,0x39,0x66,0x38,0x61,0x63,0x64,0x62,
0x66,0x63,0x34,0x65,0x62,0x64,0x30,0x35,
0x66,0x33,0x37,0x61,0x37,0x63,0x30,0x39,
0x35,0x34,0x34,0x64,0x32,0x34,0x62,0x34,
0x31,0x61,0x66,0x31,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x23,0x0,0x0,0x0,
0x1c,0x0,0x0,0x0,0x60,0x3,0x0,0x0,
0x6,0x0,0x0,0x0,0xf8,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x10,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x10,0x1,0x0,0x0,
0x1,0x0,0x0,0x0,0x10,0x1,0x0,0x0,
0xb,0x0,0x0,0x0,0x14,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x40,0x1,0x0,0x0,
0x4,0x0,0x0,0x0,0x40,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x60,0x1,0x0,0x0,
0x1,0x0,0x0,0x0,0x60,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x70,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x70,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x70,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x70,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x70,0x1,0x0,0x0,
0xff,0xff,0xff,0xff,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x20,0xa,0x0,0x0,
0x70,0x1,0x0,0x0,0xb0,0x1,0x0,0x0,
0x0,0x2,0x0,0x0,0x50,0x2,0x0,0x0,
0xa0,0x2,0x0,0x0,0xf0,0x2,0x0,0x0,
0x50,0x3,0x0,0x0,0x53,0x1,0x0,0x0,
0x60,0x1,0x0,0x0,0x53,0x1,0x0,0x0,
0x73,0x1,0x0,0x0,0x61,0x1,0x0,0x0,
0x83,0x1,0x0,0x0,0x90,0x1,0x0,0x0,
0xa3,0x1,0x0,0x0,0xa3,0x0,0x0,0x0,
0xc3,0x0,0x0,0x0,0xb0,0x1,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0xc9,0xbf,
0x0,0x0,0x0,0x0,0x0,0x0,0xa5,0xbf,
0x0,0x0,0x0,0x0,0x0,0x0,0xe8,0xbf,
0x0,0x0,0x0,0x0,0x0,0x80,0x3,0x0,
0x7,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x0,0x0,0x0,0x0,
0x38,0x0,0x0,0x0,0x7,0x0,0x0,0x0,
0xe,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x30,0x0,0x0,0x0,0x30,0x0,0x0,0x0,
0x0,0x0,0x1,0x0,0xff,0xff,0xff,0xff,
0x0,0x0,0x7,0x0,0x0,0x0,0x7,0x0,
0xd,0x0,0x50,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0xd,0x0,0x0,0x0,
0x2e,0x0,0x3a,0x1,0x18,0x6,0x2,0x0,
0x38,0x0,0x0,0x0,0x13,0x0,0x0,0x0,
0x10,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x30,0x0,0x0,0x0,0x30,0x0,0x0,0x0,
0x0,0x0,0x1,0x0,0xff,0xff,0xff,0xff,
0x0,0x0,0x7,0x0,0x0,0x0,0x9,0x0,
0xe,0x0,0x50,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0xe,0x0,0x0,0x0,
0xc4,0x2e,0x2,0x18,0x7,0x2e,0x3,0x18,
0x8,0x3e,0x4,0x7,0x1a,0x8,0x6,0xce,
0x16,0x6,0x2,0x0,0x0,0x0,0x0,0x0,
0x40,0x0,0x0,0x0,0x5,0x0,0x0,0x0,
0x12,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x30,0x0,0x0,0x0,0x30,0x0,0x0,0x0,
0x0,0x0,0x2,0x0,0x3,0x0,0x0,0x0,
0x0,0x0,0x7,0x0,0x0,0x0,0x7,0x0,
0xf,0x0,0x50,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0xf,0x0,0x0,0x0,
0x4,0x0,0x0,0x0,0x11,0x0,0x0,0x0,
0x28,0x3,0x18,0x6,0x2,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x3c,0x0,0x0,0x0,0xf,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x1,0x0,0x1,0x0,
0x30,0x0,0x0,0x0,0x34,0x0,0x0,0x0,
0x0,0x0,0x1,0x0,0xff,0xff,0xff,0xff,
0x0,0x0,0x8,0x0,0x0,0x0,0xc,0x0,
0xf,0x0,0x0,0x2,0x0,0x0,0x0,0x0,
0x17,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x10,0x0,0x0,0x0,0xae,0x5,0x1,0x6,
0x18,0x8,0x14,0x3,0xb,0xa4,0x6,0x8,
0x1,0xb,0x2,0x0,0x0,0x0,0x0,0x0,
0x40,0x0,0x0,0x0,0x5,0x0,0x0,0x0,
0x14,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x30,0x0,0x0,0x0,0x30,0x0,0x0,0x0,
0x0,0x0,0x2,0x0,0x5,0x0,0x0,0x0,
0x0,0x0,0x7,0x0,0x0,0x0,0x7,0x0,
0x12,0x0,0x50,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x12,0x0,0x0,0x0,
0x4,0x0,0x0,0x0,0x14,0x0,0x0,0x0,
0x28,0x5,0x18,0x6,0x2,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x3c,0x0,0x0,0x0,0x1c,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x1,0x0,0x1,0x0,
0x30,0x0,0x0,0x0,0x34,0x0,0x0,0x0,
0x0,0x0,0x1,0x0,0xff,0xff,0xff,0xff,
0x0,0x0,0x8,0x0,0x0,0x0,0xf,0x0,
0x12,0x0,0x50,0x1,0x0,0x0,0x0,0x0,
0x17,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x13,0x0,0x0,0x0,0x2e,0x7,0x18,0x8,
0x2e,0x8,0x18,0xb,0x1a,0x6,0xd,0x1a,
0x6,0xe,0x2e,0x9,0x9a,0xe,0x9c,0xd,
0x18,0xc,0xa4,0xa,0x8,0x2,0xb,0x2,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x10,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xd0,0x3,0x0,0x0,0xf0,0x3,0x0,0x0,
0x18,0x4,0x0,0x0,0x58,0x4,0x0,0x0,
0x80,0x4,0x0,0x0,0xa0,0x4,0x0,0x0,
0xd8,0x4,0x0,0x0,0x10,0x5,0x0,0x0,
0x48,0x5,0x0,0x0,0x80,0x5,0x0,0x0,
0xa0,0x5,0x0,0x0,0xe0,0x5,0x0,0x0,
0x20,0x6,0x0,0x0,0x60,0x6,0x0,0x0,
0x98,0x6,0x0,0x0,0xf0,0x6,0x0,0x0,
0x30,0x7,0x0,0x0,0x88,0x7,0x0,0x0,
0xd8,0x7,0x0,0x0,0x48,0x8,0x0,0x0,
0x80,0x8,0x0,0x0,0xd8,0x8,0x0,0x0,
0x18,0x9,0x0,0x0,0x60,0x9,0x0,0x0,
0x88,0x9,0x0,0x0,0xb0,0x9,0x0,0x0,
0xd8,0x9,0x0,0x0,0x0,0xa,0x0,0x0,
0xff,0xff,0xff,0xff,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x7,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x51,0x0,0x74,0x0,0x51,0x0,0x75,0x0,
0x69,0x0,0x63,0x0,0x6b,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x10,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x51,0x0,0x74,0x0,0x51,0x0,0x75,0x0,
0x69,0x0,0x63,0x0,0x6b,0x0,0x2e,0x0,
0x43,0x0,0x6f,0x0,0x6e,0x0,0x74,0x0,
0x72,0x0,0x6f,0x0,0x6c,0x0,0x73,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x7,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x52,0x0,0x65,0x0,0x73,0x0,0x70,0x0,
0x69,0x0,0x72,0x0,0x61,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x2,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x2e,0x0,0x2e,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xf,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x50,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x42,0x0,0x75,0x0,0x74,0x0,
0x74,0x0,0x6f,0x0,0x6e,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xd,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x4e,0x0,0x61,0x0,0x6d,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xf,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x46,0x0,0x69,0x0,0x4f,0x0,0x3c,0x0,
0x73,0x0,0x75,0x0,0x62,0x0,0x3e,0x0,
0x32,0x0,0x3c,0x0,0x2f,0x0,0x73,0x0,
0x75,0x0,0x62,0x0,0x3e,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xd,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x55,0x0,0x6e,0x0,0x69,0x0,
0x74,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x1,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x25,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x11,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x4d,0x0,0x69,0x0,0x6e,0x0,
0x56,0x0,0x61,0x0,0x6c,0x0,0x75,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x11,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x4d,0x0,0x61,0x0,0x78,0x0,
0x56,0x0,0x61,0x0,0x6c,0x0,0x75,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x11,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x53,0x0,0x74,0x0,0x65,0x0,
0x70,0x0,0x53,0x0,0x69,0x0,0x7a,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xe,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x56,0x0,0x61,0x0,0x6c,0x0,
0x75,0x0,0x65,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x1d,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x65,0x0,0x78,0x0,0x70,0x0,0x72,0x0,
0x65,0x0,0x73,0x0,0x73,0x0,0x69,0x0,
0x6f,0x0,0x6e,0x0,0x20,0x0,0x66,0x0,
0x6f,0x0,0x72,0x0,0x20,0x0,0x70,0x0,
0x61,0x0,0x72,0x0,0x61,0x0,0x6d,0x0,
0x65,0x0,0x74,0x0,0x65,0x0,0x72,0x0,
0x56,0x0,0x61,0x0,0x6c,0x0,0x75,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x10,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x6f,0x0,0x6e,0x0,0x56,0x0,0x61,0x0,
0x6c,0x0,0x75,0x0,0x65,0x0,0x43,0x0,
0x6f,0x0,0x6e,0x0,0x66,0x0,0x69,0x0,
0x72,0x0,0x6d,0x0,0x65,0x0,0x64,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x1f,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x65,0x0,0x78,0x0,0x70,0x0,0x72,0x0,
0x65,0x0,0x73,0x0,0x73,0x0,0x69,0x0,
0x6f,0x0,0x6e,0x0,0x20,0x0,0x66,0x0,
0x6f,0x0,0x72,0x0,0x20,0x0,0x6f,0x0,
0x6e,0x0,0x56,0x0,0x61,0x0,0x6c,0x0,
0x75,0x0,0x65,0x0,0x43,0x0,0x6f,0x0,
0x6e,0x0,0x66,0x0,0x69,0x0,0x72,0x0,
0x6d,0x0,0x65,0x0,0x64,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x19,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x44,0x0,0x69,0x0,0x73,0x0,
0x70,0x0,0x6c,0x0,0x61,0x0,0x79,0x0,
0x46,0x0,0x6f,0x0,0x72,0x0,0x6d,0x0,
0x61,0x0,0x74,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x28,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x65,0x0,0x78,0x0,0x70,0x0,0x72,0x0,
0x65,0x0,0x73,0x0,0x73,0x0,0x69,0x0,
0x6f,0x0,0x6e,0x0,0x20,0x0,0x66,0x0,
0x6f,0x0,0x72,0x0,0x20,0x0,0x70,0x0,
0x61,0x0,0x72,0x0,0x61,0x0,0x6d,0x0,
0x65,0x0,0x74,0x0,0x65,0x0,0x72,0x0,
0x44,0x0,0x69,0x0,0x73,0x0,0x70,0x0,
0x6c,0x0,0x61,0x0,0x79,0x0,0x46,0x0,
0x6f,0x0,0x72,0x0,0x6d,0x0,0x61,0x0,
0x74,0x0,0x74,0x0,0x65,0x0,0x72,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0xe,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x70,0x0,0x61,0x0,0x72,0x0,0x61,0x0,
0x6d,0x0,0x65,0x0,0x74,0x0,0x65,0x0,
0x72,0x0,0x46,0x0,0x69,0x0,0x78,0x0,
0x75,0x0,0x70,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x1d,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x65,0x0,0x78,0x0,0x70,0x0,0x72,0x0,
0x65,0x0,0x73,0x0,0x73,0x0,0x69,0x0,
0x6f,0x0,0x6e,0x0,0x20,0x0,0x66,0x0,
0x6f,0x0,0x72,0x0,0x20,0x0,0x70,0x0,
0x61,0x0,0x72,0x0,0x61,0x0,0x6d,0x0,
0x65,0x0,0x74,0x0,0x65,0x0,0x72,0x0,
0x46,0x0,0x69,0x0,0x78,0x0,0x75,0x0,
0x70,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x11,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x47,0x0,0x75,0x0,0x69,0x0,0x53,0x0,
0x74,0x0,0x61,0x0,0x74,0x0,0x65,0x0,
0x43,0x0,0x6f,0x0,0x6e,0x0,0x74,0x0,
0x61,0x0,0x69,0x0,0x6e,0x0,0x65,0x0,
0x72,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x16,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x63,0x0,0x6f,0x0,0x6d,0x0,0x6d,0x0,
0x61,0x0,0x6e,0x0,0x64,0x0,0x65,0x0,
0x64,0x0,0x5f,0x0,0x66,0x0,0x69,0x0,
0x6f,0x0,0x32,0x0,0x5f,0x0,0x70,0x0,
0x65,0x0,0x72,0x0,0x63,0x0,0x65,0x0,
0x6e,0x0,0x74,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x5,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x76,0x0,0x61,0x0,0x6c,0x0,0x75,0x0,
0x65,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x6,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x4e,0x0,0x75,0x0,0x6d,0x0,0x62,0x0,
0x65,0x0,0x72,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x7,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x74,0x0,0x6f,0x0,0x46,0x0,0x69,0x0,
0x78,0x0,0x65,0x0,0x64,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x4,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x4d,0x0,0x61,0x0,0x74,0x0,0x68,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xff,0xff,0xff,0xff,0x3,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x18,0x0,0x0,0x0,
0x18,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x6d,0x0,0x61,0x0,0x78,0x0,0x0,0x0,
0x4,0x0,0x0,0x0,0x10,0x0,0x0,0x0,
0x1,0x0,0x0,0x0,0x70,0x0,0x0,0x0,
0x1,0x0,0x0,0x0,0x1,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x2,0x0,0x0,0x0,
0xb,0x0,0x0,0x0,0x1,0x0,0x10,0x0,
0x1,0x0,0x0,0x0,0x2,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x2,0x0,0x0,0x0,
0x4,0x0,0x0,0x0,0x2,0x0,0x10,0x0,
0x1,0x0,0x0,0x0,0x3,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x1,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x3,0x0,0x10,0x0,
0x2,0x0,0x0,0x0,0x4,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0xff,0xff,0xff,0xff,
0xff,0xff,0xff,0xff,0x4,0x0,0x10,0x0,
0x74,0x0,0x0,0x0,0x5,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0xff,0xff,
0xff,0xff,0xff,0xff,0x0,0x0,0x0,0x0,
0x44,0x0,0x0,0x0,0x44,0x0,0x0,0x0,
0x44,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x44,0x0,0x0,0x0,0x44,0x0,0x0,0x0,
0x0,0x0,0x9,0x0,0x44,0x0,0x0,0x0,
0x0,0x0,0x0,0x0,0x1c,0x1,0x0,0x0,
0x6,0x0,0x10,0x0,0x0,0x0,0x0,0x0,
0x13,0x0,0x0,0x0,0x0,0x2,0x6,0x0,
0x4,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x12,0x0,0x50,0x0,0x12,0x0,0x50,0x1,
0x11,0x0,0x0,0x0,0x0,0x2,0x6,0x0,
0x2,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xf,0x0,0x50,0x0,0xf,0x0,0x0,0x2,
0xf,0x0,0x0,0x0,0x0,0x0,0x6,0x0,
0x1,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xe,0x0,0x50,0x0,0xe,0x0,0x70,0x1,
0xd,0x0,0x0,0x0,0x0,0x0,0x6,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xd,0x0,0x50,0x0,0xd,0x0,0x50,0x1,
0xc,0x0,0x0,0x0,0x0,0x0,0x2,0x0,
0x2,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xc,0x0,0x50,0x0,0xc,0x0,0x80,0x1,
0xb,0x0,0x0,0x0,0x0,0x0,0x2,0x0,
0x1,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xb,0x0,0x50,0x0,0xb,0x0,0x80,0x1,
0xa,0x0,0x0,0x0,0x0,0x0,0x2,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0xa,0x0,0x50,0x0,0xa,0x0,0x80,0x1,
0x8,0x0,0x0,0x0,0x0,0x0,0x3,0x0,
0x0,0x0,0x0,0x0,0x9,0x0,0x0,0x0,
0x8,0x0,0x50,0x0,0x8,0x0,0x40,0x1,
0x6,0x0,0x0,0x0,0x0,0x0,0x4,0x0,
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,
0x7,0x0,0x50,0x0,0x7,0x0,0x40,0x1,
0x0,0x0,0x0,0x0
};
}
}
|
#include <bits/stdc++.h>
using namespace std;
string magic() {
map<char,unsigned> dp;
dp['A'] = 2;
dp['B'] = 2;
dp['C'] = 2;
dp['D'] = 3;
dp['E'] = 3;
dp['F'] = 3;
dp['G'] = 4;
dp['H'] = 4;
dp['I'] = 4;
dp['J'] = 5;
dp['K'] = 5;
dp['L'] = 5;
dp['M'] = 6;
dp['N'] = 6;
dp['O'] = 6;
dp['P'] = 7;
dp['R'] = 7;
dp['S'] = 7;
dp['T'] = 8;
dp['U'] = 8;
dp['V'] = 8;
dp['W'] = 9;
dp['X'] = 9;
dp['Y'] = 9;
string str;
getline(cin, str);
// remove blank spaces and '-' characters
str.erase(std::remove(str.begin(), str.end(), '-'),str.end());
str.erase(std::remove(str.begin(), str.end(), ' '),str.end());
// convert non-digit characters (keypad letters) to digits
for(int i = 0; i < str.size();i++)
if(!isdigit(str[i]))
str[i] = '0' + dp[str[i]];
str.insert(3, "-");
return str;
}
void simulation() {
int n;
cin >> n;
cin.ignore(256, '\n');
vector<string> str(n);
for(int i = 0; i < n; i++)
str[i] = magic();
sort(str.begin(), str.end());
  int p = 0;
  bool result = false;
  // the numbers are sorted, so duplicates are adjacent and can be counted in a single pass
  for (int i = 0; i < n - 1; i++) {
    if (str[i] == str[i + 1]) {
      p++;
      result = true;
    }
    else {
      if (p > 0) cout << str[i] << " " << p + 1 << endl;
      p = 0;
    }
  }
  // print the last group as well, in case the duplicated number sorts last
  if (p > 0) cout << str[n - 1] << " " << p + 1 << endl;
  if (!result) cout << "No duplicates." << endl;
}
int main() {
int n;
cin>>n;
while(n--){
simulation();
if(n) cout << endl;
}
}
|
package com.thebluealliance.androidclient.fragments.district;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import com.thebluealliance.androidclient.R;
import com.thebluealliance.androidclient.binders.ExpandableListViewBinder;
import com.thebluealliance.androidclient.fragments.ExpandableListViewFragment;
import com.thebluealliance.androidclient.helpers.DistrictHelper;
import com.thebluealliance.androidclient.models.DistrictRanking;
import com.thebluealliance.androidclient.models.NoDataViewParams;
import com.thebluealliance.androidclient.subscribers.TeamAtDistrictBreakdownSubscriber;
import rx.Observable;
public class TeamAtDistrictBreakdownFragment
extends ExpandableListViewFragment<DistrictRanking, TeamAtDistrictBreakdownSubscriber> {
public static final String DISTRICT = "districtKey", TEAM = "teamKey";
private String mTeamKey;
private String mDistrictKey;
public static TeamAtDistrictBreakdownFragment newInstance(String teamKey, String districtKey) {
TeamAtDistrictBreakdownFragment f = new TeamAtDistrictBreakdownFragment();
Bundle args = new Bundle();
args.putString(TEAM, teamKey);
args.putString(DISTRICT, districtKey);
f.setArguments(args);
return f;
}
@Override
public void onCreate(Bundle savedInstanceState) {
if (getArguments() != null) {
mTeamKey = getArguments().getString(TEAM);
String districtKey = getArguments().getString(DISTRICT);
if (!DistrictHelper.validateDistrictKey(districtKey)) {
throw new IllegalArgumentException("Invalid District Key " + districtKey);
}
mDistrictKey = districtKey;
}
super.onCreate(savedInstanceState);
mBinder.setExpandMode(ExpandableListViewBinder.MODE_EXPAND_NONE);
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
return super.onCreateView(inflater, container, savedInstanceState);
}
@Override
protected void inject() {
mComponent.inject(this);
}
@Override
protected Observable<DistrictRanking> getObservable(String cacheHeader) {
return mDatafeed.fetchTeamAtDistrictRankings(mTeamKey, mDistrictKey, cacheHeader);
}
@Override
protected String getRefreshTag() {
return String.format("teamAtDistrictBreakdown_%1$s_%2$s", mTeamKey, mDistrictKey);
}
@Override public NoDataViewParams getNoDataParams() {
return new NoDataViewParams(R.drawable.ic_assignment_black_48dp, R.string.no_team_district_breakdown);
}
}
|
Naive Bayesian Classification Approach in Healthcare Applications In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two of the well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and the Naive Bayesian (NB) classifier. Bayesian approaches are a fundamentally important data mining (DM) technique. Given the true probability distribution, the Bayes classifier provably achieves the optimal result. The Bayesian method is based on probability theory. Bayes' rule is applied here to calculate the posterior from the prior and the likelihood, because the latter two are generally easier to calculate from a probability model. Statistics provides a strong fundamental background for the quantification and evaluation of results. However, algorithms based on statistics need to be modified and scaled before they are applied to data mining.
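For clarity (this equation is not part of the original abstract), Bayes' rule combines the prior and the likelihood into the posterior, and the naive Bayes classifier then assigns the class with the largest posterior under a conditional-independence assumption on the features:

\[
P(C_k \mid x) = \frac{P(C_k)\, P(x \mid C_k)}{P(x)}, \qquad
\hat{y} = \arg\max_{k}\; P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k).
\]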
import * as indicators from "../";
import * as marketData from "../../data/market/";
export class BalanceOfPower
extends indicators.AbstractIndicator<marketData.PriceBar> {
static INDICATOR_NAME: string = "BOP";
static INDICATOR_DESCR: string = "Balance Of Power";
private highLow: number;
constructor() {
super(BalanceOfPower.INDICATOR_NAME, BalanceOfPower.INDICATOR_DESCR);
this.highLow = 0;
this.setLookBack(0);
}
receiveData(inputData: marketData.PriceBar): boolean {
this.highLow = inputData.high - inputData.low;
if (this.highLow === 0) {
this.setCurrentValue(0);
} else {
this.setCurrentValue((inputData.close - inputData.open) / (this.highLow));
}
return this.isReady;
}
}
export class BOP extends BalanceOfPower {
}
|
Multiple Introductions of Mycobacterium tuberculosis Lineage 2Beijing Into Africa Over Centuries The Lineage 2Beijing (L2Beijing) sub-lineage of Mycobacterium tuberculosis has received much attention due to its high virulence, fast disease progression, and association with antibiotic resistance. Despite several reports of the recent emergence of L2Beijing in Africa, no study has investigated the evolutionary history of this sub-lineage on the continent. In this study, we used whole genome sequences of 781 L2 clinical strains from 14 geographical regions globally distributed to investigate the origins and onward spread of this lineage in Africa. Our results reveal multiple introductions of L2Beijing into Africa linked to independent bacterial populations from East- and Southeast Asia. Bayesian analyses further indicate that these introductions occurred during the past 300 years, with most of these events pre-dating the antibiotic era. Hence, the success of L2Beijing in Africa is most likely due to its hypervirulence and high transmissibility rather than drug resistance. INTRODUCTION Tuberculosis (TB) is mainly caused by a group of closely related bacteria referred to as the Mycobacterium tuberculosis Complex (MTBC). The MTBC comprises seven phylogenetic lineages adapted to humans and several lineages adapted to different wild and domestic animal species. The human-adapted lineages of the MTBC show a distinct geographic distribution, with some "generalist" lineages such as Lineage (L)2 and L4 occurring all around the world and others being geographically restricted "specialist" that include L5, L6, and L7 ). Africa is the only continent which is home to all seven human-adapted lineages, including the three "specialist" lineages exclusively found on the continent. Current evidence suggests that the MTBC overall originated in Africa and subsequently spread around the globe following human migratory events (;). The broad distribution of some of the "generalist" lineages and their presence in Africa has been attributed to past exploration, trade, and conquest. For instance, an important part of the TB epidemics in sub-Saharan Africa is driven by the generalist Latin-American-Mediterranean (LAM) sublineage of L4, which is postulated to have been introduced to the continent post-European contact (;). Among the different human-adapted MTBC lineages, the L2-Beijing sublineage has been of particular interest (). L2-Beijing has expanded and emerged worldwide from East Asia; its most likely geographical origin (;). In some parts of the world, the recent emergence of L2-Beijing has been linked to increased transmission (;), high prevalence of multidrug-resistant TB (MDR-TB) (Borrell and Gagneux, 2009), and to social and political instability, resulting into displacement of people and poor health systems (). Increasingly, L2-Beijing is also being reported in Africa (;;;), and evidence suggests that L2-Beijing in African regions is becoming more prevalent over time (;van der ;). Some authors have hypothesized that the introduction of L2-Beijing into South Africa resulted from the importation of slaves from Southeast Asia during the 17th and 18th centuries and/or the Chinese labor forces arriving in the 1900s (van ). Alternatively, in West Africa, the presence of L2-Beijing was proposed to reflect more recent immigration from Asia (;). To a certain extent, the recent expansion of L2-Beijing in parts of Africa has been associated with drug resistance (;) and higher transmissibility (). 
In addition, a study in the Gambia showed a faster progression from latent infection to active TB disease in patient house-hold contacts exposed to L2-Beijing (de ). Whilst L2-Beijing seems to be expanding in several regions of Africa, no study has formally investigated the evolutionary history of L2-Beijing on the continent. In this study, we used whole genome sequencing data from a global collection of L2 clinical strains to determine the most likely geographical origin of L2-Beijing in Africa and its spread across the continent. Whole Genome Sequence Analysis and Phylogenetic Inference We used a customized pipeline previously described to map short sequencing reads with BWA 0.6.2 to a reconstructed hypothetical MTBC ancestor used as reference (). SAMtools 0.1.19 was used to call single nucleotide polymorphisms (SNPs), and these SNPs were annotated using ANNOVAR and customized scripts based on the M. tuberculosis H37Rv reference annotation (AL123456.2). For downstream analyses, we excluded SNPs in repetitive regions, those annotated in problematic regions such as "PE/PPE/PGRS" and SNPs in drug-resistance associated genes. Small insertions and deletions were also excluded from the analyses. Only SNPs with minimum coverage of 20x and minimum mapping quality of 30 were kept. All SNPs classified by Samtools as having frequencies of the major non-reference allele lower than 100% (AF1<1) within each genome were considered to be heterogeneous and were treated as ambiguities and excluded, and were otherwise considered fixed (AF1 = 1). Mixed infections or contaminations were discarded by excluding genomes with more than 1,000 heterogeneous positions and genomes for which the number of heterogeneous SNPs was higher than the number of fixed SNPs. In addition we excluded genomes for which the number of fixed SNPs would fall below Q1-1.5 IQR of all fixed SNPs considering all L2 genomes (Q1 being the first quantile and IQR the inner quantile range as calculated in R3.5.0). All genomes were typed for lineage and sub-lineage using SNP markers as defined in Steiner et al. plus Coll et al. and those showing simultaneously more than one marker were excluded. We concatenated fixed SNPs from the variable positions obtained, which yielded a 32,269 bp alignment. The alignment was then used to infer a maximum likelihood phylogeny using RAxML 8.3.2 with a general time reversible (GTR) model in RAxML and 1,000 rapid bootstrap inferences, followed by a thorough maximum-likelihood search. The topology was rooted using the reference strain, H37Rv which belongs to Lineage 4. Reconstruction of the Ancestral Geographic Range To investigate the likely geographic origin of L2-Beijing strains in Africa, we inferred the historical biogeography of L2 using the RASP software () on a representative subset of 422 genomes due to software's sample limitation. We achieved this by randomly removing clustered genomes (i.e., those with 12 or less SNP distances) from the same country of origin, using hierarchical clustering implemented in pvclust package in R (Suzuki and Shimodaira, 2006) on a distance matrix of the 781 genome sequences. We then applied a Bayesian binary based method (BBM) in RASP to reconstruct geographical states at the ancestral nodes on the best-scoring ML tree inferred with RAxML using the 422 L2 genomes. We used geographical regions (according to United Nations geoscheme) as proxy for origins of the L2 strains. In total 14 regions were loaded as geographic distributions (indicated in Table S1). 
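For readers who want the filtering logic spelled out, the following minimal Python sketch illustrates the variant- and genome-level rules described above (coverage of at least 20x, mapping quality of at least 30, exclusion of non-fixed calls, and the genome-level cut-offs of more than 1,000 heterogeneous positions, more heterogeneous than fixed SNPs, and fixed-SNP counts below Q1 - 1.5 x IQR). It is not the authors' pipeline, and the input format (a per-genome list of calls with cov, mq, and af fields) is an assumption made for illustration.

```python
import numpy as np

MIN_COV, MIN_MQ, MAX_HET = 20, 30, 1000

def classify_calls(calls):
    """Split one genome's SNP calls into fixed and heterogeneous positions.
    `calls` is a list of dicts with keys: pos, cov, mq, af (hypothetical format)."""
    fixed, het = set(), set()
    for c in calls:
        if c["cov"] < MIN_COV or c["mq"] < MIN_MQ:
            continue                      # low-quality call: ignore entirely
        (fixed if c["af"] >= 1.0 else het).add(c["pos"])
    return fixed, het

def filter_genomes(genomes):
    """Apply the genome-level exclusion rules described in the Methods."""
    summaries = {g: classify_calls(calls) for g, calls in genomes.items()}
    n_fixed = np.array([len(f) for f, _ in summaries.values()])
    q1, q3 = np.percentile(n_fixed, [25, 75])
    lower_bound = q1 - 1.5 * (q3 - q1)    # Q1 - 1.5*IQR threshold on fixed SNPs
    kept = {}
    for g, (fixed, het) in summaries.items():
        if len(het) > MAX_HET:            # likely mixed infection or contamination
            continue
        if len(het) > len(fixed):
            continue
        if len(fixed) < lower_bound:
            continue
        kept[g] = fixed
    return kept
```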
The ancestral reconstruction was performed with the Proto-Beijing clade as outgroup. We finally ran Bayesian analysis with 10 chains and 50,000 generations. Stochastic Character Mapping To determine the number of introduction events of L2-Beijing into African regions, we applied stochastic character mapping as implemented in SIMMAP on the 781 L2 phylogeny inferred from the best-scoring ML tree rooted on the Proto-Beijing clade, using the make.simmap function in phytools package 0.6.60 in R 3.5.0 (Revell, 2012;R Core Team, 2018). Geographical origin of the L2 strains was treated as a discrete trait and modeled onto the phylogeny using ARD model with 100 replicates. This model allows unequal rates of state transition permiting independent region-to-region transfers. We summarized the results of the 100 replicates using the function summary in phytools package 0.6.60 in R. We referred to the resulting introductions as migration events "M, " and discuss only those introductions with 5 or more genomes. Population Genetic Analyses Nucleotide diversity (pi) We calculated the mean pair-wise nucleotide diversity per site (Pi) measured by geographic region. We excluded geographic regions represented by <20 genomes. Confidence intervals were obtained by bootstrapping through resampling using the sample function in R with replacement and the respective lower and upper confidence levels by calculating 2.5th and 97.5th quartiles. Resampling was additionally done using the smallest size of the geographical regions to account for the effect of different sample sizes. Pairwise SNP distances We used dist.dna function of ape package implemented in R () to calculate pairwise SNP distances with raw mutation counts and pairwise deletions for gaps. Mean pairwise SNP distance to all strains of the same geographic population was calculated per strain and the distribution of the mean SNP pairwise distance for all strains plotted. The mean pairwise SNP distances were assumed not to be normally distributed and we therefore used Wilcoxon rank-sum test to test the differences among geographic regions. Additionally, we calculated pairwise SNP distances within African L2-Beijing populations for migration events with more than 10 genomes each. Drug Resistance To distinguish between drug-susceptible and drug-resistant strains, we used genotypic drug resistance molecular markers previously described (). We categorized strains into: susceptible as having no drug resistance specific mutations; monoresistant as having mutations conferring resistance to a single drug; MDR as having mutations conferring resistance to isoniazid and rifampicin; and extensively drug-resistant (XDR) as having mutations conferring resistance to fluoroquinolones and aminoglycosides in addition to being MDR (Table S2). Bayesian Molecular Dating Data preparation and preliminary analysis To estimate the historical period in which L2-Beijing was introduced to Africa, we performed a set of Bayesian phylogenetic analyses using tip-calibration (Rieux and Khatchikian, 2017). Among the 781 studied L2 strains, we had information on the year of sampling for 308. We performed all further analysis on this subset of 308 strains. We excluded all genomic positions that were invariable in this subset and all positions that were undetermined (missing data or deletions) in more than 25% of the strains, and obtained an alignment of 10,769 polymorphic positions. 
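The diversity summaries described above reduce to simple pairwise computations. The sketch below illustrates, in Python rather than the R packages used in the study, a raw pairwise SNP distance with pairwise deletion of gaps, the mean pairwise nucleotide diversity per site, and a percentile bootstrap (2.5th/97.5th quantiles) that resamples strains with replacement, optionally at the size of the smallest population; the alignment encoding is an assumption made for illustration.

```python
import itertools
import numpy as np

def pairwise_snp_distance(a, b):
    """Raw count of differing positions, skipping gaps/ambiguities ('-', 'N')."""
    return sum(1 for x, y in zip(a, b)
               if x != y and x not in "-N" and y not in "-N")

def mean_pairwise_diversity(seqs, n_sites):
    """Mean pairwise nucleotide diversity per site (pi) for one population."""
    pairs = list(itertools.combinations(seqs, 2))
    if not pairs:
        return 0.0
    return np.mean([pairwise_snp_distance(a, b) for a, b in pairs]) / n_sites

def bootstrap_pi(seqs, n_sites, n_boot=1000, sample_size=None, seed=0):
    """Percentile bootstrap CI for pi; strains are resampled with replacement.
    `sample_size` can be fixed to the smallest population to control for sample size."""
    rng = np.random.default_rng(seed)
    k = sample_size or len(seqs)
    reps = []
    for _ in range(n_boot):
        resampled = [seqs[i] for i in rng.integers(0, len(seqs), size=k)]
        reps.append(mean_pairwise_diversity(resampled, n_sites))
    return np.percentile(reps, [2.5, 97.5])
```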
In tip dating analysis it is important to test whether the dataset contains strong enough temporal signal (Rieux and Balloux, 2016). To do this, we performed a tip randomization test () as follows. We used BEAST2 v. 2.4.8 () to run a phylogenetic analysis with a HKY + GAMMA model (), a constant population size prior on the tree and a strict molecular clock. Additionally, we used the years in which the strains were sampled to time-calibrate the tree, and we modified the extensible markup language (xml) file to specify the number of invariant sites as indicated by the developers of BEAST2 here: https://groups. google.com/forum/#!topic/beast-(strict_preliminary.xml). We ran three independent runs (245 million generations in total), and we used Tracer 1.7 () to identify the burn-in (8 million generations), to assess that the different runs converged, and to estimate the effective sample size (ESS) for all parameters, the posterior and the likelihood (ESS > 110 for all parameters). We then used TipDatingBeast (Rieux and Khatchikian, 2017) to generate 20 replicates of the xml file in which the sampling dates were randomly reassigned to different strains. For each replication, we ran the same BEAST2 analysis as for the original (observed) dataset (one run per replicate, 50 million generations, 10% burn-in). We used TipDatingBeast to parse the log files output of BEAST2 and compare the clock rate estimates for the observed data and the randomized replications. The estimates of the molecular clock rate did not overlap between the observed and the randomized dataset, indicating that there is a clear temporal signal and that we could proceed with further analysis ( Figure S2). Model selection To identify the clock model that best fits the data, we estimated the marginal likelihood of three different clock models: UCED and UCLD (), assuming a coalescent constant population size tree prior and the HKY model of nucleotide substitution. We used the Model selection package of BEAST2 to run a path sampling analysis (Lartillot and Philippe, 2006) following the recommendations of the BEAST2 developers (http://www.beast2.org/path-sampling/). We used the following settings: 100 steps, 4 million generations per step, alpha = 0.3, pre-burn-in = 1 million generations, burn-in for each step = 40% ( * PS.xml). For these analyses, we used proper priors as suggested by (). UCLD analysis Since the model selection analysis indicated that the UCLD clock was the best fitting model, we repeated the analysis using the UCLD and the same settings used in the path sampling analysis, sampling every 10,000 generations. We ran three independent runs (800 million generations in total), we used Tracer 1.7 () to identify the burn-in (10 million generations), to assess that the different runs converged and to estimate the effective sample size (ESS) for all parameters, the posterior and the likelihood (ESS > 260 for all parameters) (UCLD_final.xml and Table S3). We checked the sensitivity to the priors by running one analysis of 250 million generation sampling from the prior, and compared the parameter estimates with the analysis using the data. We observed the posterior distribution and the prior distribution of all parameters are very distinct ( Table S4), indicating that the parameter estimates are influenced by the data and not by the priors (). 
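Schematically, the date-randomization test amounts to checking that the clock-rate interval estimated from the true sampling dates does not overlap the intervals estimated from replicates with reshuffled dates. The sketch below shows only this comparison step and assumes the posterior clock-rate samples have already been parsed from the BEAST2 log files (for example with TipDatingBeast); the 2.5th/97.5th percentile interval is used here as a stand-in for the 95% credibility interval.

```python
import numpy as np

def credibility_interval(samples, lo=2.5, hi=97.5):
    """Central 95% interval of a posterior sample of clock rates."""
    return tuple(np.percentile(samples, [lo, hi]))

def temporal_signal(observed_rates, randomized_rate_sets):
    """Declare a temporal signal if the observed clock-rate interval does not
    overlap the interval of any date-randomized replicate."""
    obs_lo, obs_hi = credibility_interval(observed_rates)
    for rep in randomized_rate_sets:
        rep_lo, rep_hi = credibility_interval(rep)
        if obs_lo <= rep_hi and rep_lo <= obs_hi:   # intervals overlap
            return False
    return True
```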
Additionally we repeated the dating analysis with a coalescent exponential population growth tree prior (UCLD + HKY + exponential growth) and with a GTR model of nucleotide substitution (UCLD + GTR + constant size). All these analyses resulted in similar estimates of the age of the introductions of L2 in Africa, thus showing that our results are not strongly influenced by the tree prior and the nucleotide substitution model (Table S6). We repeated the tip randomization test with the UCLD model as described above (20 replicates, one run per replicate, 105 million generations per replicate or more, burn-in 10%), and again we found a temporal signal ( Figure S3). To summarize the results, we sampled the trees from the three runs (5% burn-in corresponding to 10 million generations or more, sampling every 25,000 generation). We then summarized the 31,758 sampled trees, created a maximum clade credibility tree using the software TreeAnnotator from the BEAST2 package and used FigTree version 1.4.2 (http://tree.bio.ed.ac.uk/software/ figtree) for visualization ( Figure S4). Phylogenetic Inference of L2 Strains We analyzed a total of 781 L2 genomes originating from 14 geographical regions including Eastern and Southern Africa ( Figure S1 and Table S1). We focused on seven geographical regions that had more than 20 genomes each, and assigned the remainder to "Other, " including two genomes from Western Africa ( Figure 1A). The resulting phylogeny of L2 was divided into two main sublineages: the L2-proto-Beijing and L2-Beijing, supporting previous results (;). The L2-proto-Beijing was the most basal L2 sublineage and was restricted to East-and Southeast Asia. L2-Beijing, particularly the "modern" (also known as "typical") sublineage, was geographically widely distributed and included strains from Africa. We further characterized L2-Beijing using the recently described unified classification scheme for L2 (). The Population Structure of L2-Beijing in Eastern and Southern Africa Our findings showed the population of African L2-Beijing to be heterogeneous (Figures 1B, 2 and Table S5). Most of the African L2-Beijing strains were classified into several groups within the "modern" sublineage, which included primarily the "Asian-African" sublineages (L2.2.4, L2.2.5 and L2.2.7), consistent with previous findings (). We also identified the "ancient" (atypical) strains among the African L2-Beijing. Given that "ancient" L2-Beijing strains (L2.2.1-L2.2.3) are generally uncommon (), it is interesting to observe such strains in both African regions. In several instances, African L2-Beijing strains did not fall into any of the previously defined groups (Figure 2). Of the two African regions studied here, East Africa had higher proportion of previously uncharacterized L2-Beijing strains (43/92, 46.7%). In summary, our findings show that African regions harbored distinct L2-Beijing populations. This is unlike Eastern Europe and Central Asia, where L2-Beijing is dominated by a few highly similar strains (;). Of note, L2-Beijing strains typical Eastern Europe and Central Asia were completely absent from the African populations (Figure 2). Genetic Diversity of L2-Beijing Strains Across Geographic Regions The spatial distribution of L2-Beijing sublineages and the prevalence of "ancient" L2-Beijing strains observed in this study and previously (;), suggest that L2-Beijing has expanded worldwide from Asia. 
This view can further be supported by the measures of genetic diversity of L2-Beijing in the different geographical regions (Figure 3). As expected, East-and Southeast Asia contained the most genetically diverse L2 populations, which is consistent with previous results (). Conversely, L2 populations in other geographies were less genetically diverse, suggesting recent dissemination of L2 to these regions. Within Africa, Southern Africa showed a higher diversity in L2-Beijing populations compared to Eastern Africa. The genetic diversity within the African L2-Beijing populations not only reflects the number and variety of source populations but also local patterns of diversification that occurred after their introduction. Therefore, the higher genetic diversity of the L2-Beijing populations in Southern Africa compared to Eastern Africa likely reflects both aspects. Multiple Introductions of L2-Beijing From Asia Into Africa Based on our reconstructed phylogeny, African L2-Beijing strains clustered into several unrelated clades indicating multiple introductions into Africa (Figure 1B). We next investigated the most likely geographical origins of those introductions. As anticipated, our ancestral reconstruction using RASP estimated East Asia as the most likely origin of all L2 (posterior probability of 96.1%) and L2-Beijing (posterior probability 92.5%) ( Figure S5). Our data further indicate that L2-Beijing was introduced into Africa from East-and Southeast Asia on multiple occasions independently. Furthermore, we observed both direct introductions from Asia into Africa as well as subsequent dispersal within the continent (Figures S6, S7). Using stochastic mapping, we estimated a total of 13 introductions or migration events (M1-M13) into Africa (Figure 4). Eight of the African L2-Beijing introductions originated from East Asia and five from Southeast Asia. Out of the 13 introductions, three (M3, M10, and M13) were present in both African regions analyzed here, suggesting initial introductions from Asia followed by subsequent spread within Africa. Overall, our analysis inferred more independent introductions into Southern Africa (n = 7, M1, M4, M7-9, M11 and M12, all of them with extant strains sampled in South Africa) than Eastern Africa (n = 3, M2 sampled in Malawi and Tanzania; M5 and M6 sampled in Kenya and in Malawi, respectively). Taken together, our data suggest that multiple migration events have shaped the populations of L2-Beijing in Africa. Bayesian Molecular Dating Different hypotheses have been formulated on the possible timing of the introduction of L2-Beijing into Africa (van ). Here we used tip-calibration to date the phylogenetic tree of L2 and estimate the age of its introduction to Africa. For these analyses, we identified 308 strains among the 781 for which the sampling year was known. These strains were sampled during a period of 19 years; 1995-2014 (Figure S8), were evenly distributed on the complete phylogenetic tree ( Figure S9) and included 40% members of the African L2-Beijing strains ( Figure S10). Eleven of the 13 African introductions were represented in this dataset (M1-M3 and M6-M13). We performed a Date Randomization Test with a strict clock and with a relaxed clock. With both models we detected no overlap in the 95% credibility interval of the clock rate estimates of observed and randomized datasets indicating that there was sufficient temporal signal in the dataset to perform inference (see methods, Figures S2, S3). 
Further, We found that the UCLD clock had the highest marginal likelihood and a Bayes Factor of 27 with the second best fitting model, the strict clock (Table 1), indicating strong evidence in favor of the UCLD clock (Kass and Raftery, 1995). We performed a phylogenetic analysis with BEAST2 using the UCLD clock. Under the UCLD model, the coefficient of variation (COV), which is a summary of the branch rates distribution (standard deviation divided by the mean), gives an indication on the clock-likeness of the data (). A coefficient of variation of zero indicates that the data fit a strict clock, whilst a greater COV indicates a higher heterogeneity of rates through the phylogeny. We obtained a mean COV of 0.22 (95% credibility interval= 0.1732, 0.2732), indicating a moderate level of rate variation across different branches and thus supporting the results of the path sampling analysis that favored the UCLD model. Recent Origins of the African L2-Beijing Clades We used the UCLD clock model to infer the clock rate and divergent times of the 308 L2 strains with known sampling dates and estimated a mean substitution rate of 1.34 10 −7 . These estimates are in agreement with previously reported rates from epidemiological studies (;). However, the estimated rate by Eldholm et al. is relatively higher compared to our estimates, which likely reflects the fact that our dataset included the entire Lineage 2 as opposed to a only a single particular clade in the case of Eldholm et al.. We estimated the most recent common ancestor (MRCA) of the extant L2-Beijing of the 308 strains to the year 1225 (Figure S4). This estimate was slightly younger than the previous estimate for the whole of Lineage 2 by the study of (), likely reflecting the difference in methodology. The latter study used ancient M. pinipeddii DNA recovered from pre-Columbian human remains to calibrate the phylogeny as opposed to tip-dating based on contemporary sampling dates. For each African clade, we estimated the year of introduction using the 0.975 quantile of the HPD of the age of the MRCA as the upper limit (most recent possible year) and the 0.025 quantile of the HPD of the divergence time between the closest non-African L2-Beijing strain (the closest outgroup) and the African clade of interest as lowest limit (most ancient possible year). This approach produced conservative estimates, while relying only on the age of the MRCA of the African clades would systematically underestimate the age of the introductions. Our estimates placed the earliest introductions of the African L2-Beijing (M1, M3, M7, and M12) in the eighteenth and nineteenth century (Figure 5 and Table S6). Four additional migration events (M6, M9, M10, and M11) were estimated to have occurred between the beginning of the nineteenth century and the first half of the twentieth century. Finally, the three most recent introductions to Africa happened in the second half of the twentieth century (M2, M8, and M13). Diversity patterns of the African clades exclusive to Easternand Southern Africa could further provide support for the recent introductions of African L2-Beijing. We thus calculated the pairwise SNP distances within the individual introductions to explore the local patterns of diversification associated with regional epidemics after the introductions. Although strains within Southern African introductions were relatively more distantly related, L2-Beijing strains from both African regions were on average 20 to 40 SNPs apart (Figures S11-S13). 
The latter thresholds were proposed to correspond to strains involved in transmission clusters of estimated 50 to100 years (), supporting the relatively recent introductions of L2-Beijing into the African continent. Overall, these results indicate that the different introductions of L2-Beijing to Africa occurred over a period of 300 years. While the earliest introduction is unlikely to have happened after 1732-1874, the most recent is unlikely to have occurred before 1946-1980. However, our 95% HPDs show a wide range of uncertainty, which likely resulted from extrapolating several centuries back in time based on a comparably short calibration interval spanning only 19 years. Introductions of L2-Beijing Into Africa Unrelated to Drug Resistance Because of the repeated association of L2-Beijing with antibiotic resistance (Borrell and Gagneux, 2009), the emergence and dissemination of L2-Beijing strains has often been attributed to drug resistance. However, our estimated timing of these introductions suggest that African L2-Beijing strains were introduced prior the discovery of TB antibiotics, and thus must have involved drug-susceptible strains (Figure 5). To explore this question further, we assessed the drug resistance profiles of L2-Beijing strains linked to the various introduction events into the two African regions. We found that all the Eastern African populations contained only drug-susceptible strains and that approximately three-quarters of L2-Beijing strains in the Southern African populations were drug-susceptible, with the remaining being either mono-, multi-, or extensively drugresistant (Figure 6 and Figure S14). Taken together, these results suggest that the emergence of L2-Beijing in Africa, particularly in Eastern Africa, was not driven by drug resistance. Moreover, our data indicate independent acquisition of drug resistance for the resistant strains detected in the Southern African L2-Beijing population (Figure 6), which might partly contribute to the subsequent spread of L2-Beijing in Southern Africa but not in Eastern Africa. DISCUSSION This study investigated the most likely geographical origin of the L2-Beijing in Africa. In line with previous findings (;), we identified East Asia as the most likely place of origin of L2 and L2-Beijing. Our findings further revealed multiple independent introductions of L2-Beijing into Africa linked to separate populations originating from both Eastand Southeast Asia. Some of these introductions were followed by further onward spread of L2-Beijing within African regions. Finally, we demonstrate that most introductions of L2-Beijing on the continent occurred before the age of antibiotics. L2-Beijing has received much attention given its hypervirulence in infection models (;), faster progression to disease and higher transmission potential in humans (de ;), frequent association with drug resistance, and recent emergence in different regions of the world (;Borrell and Gagneux, 2009;). Several studies indicate L2-Beijing originated in Asia and spread from there to the rest of the world (;). Our results support this notion by identifying "Asia" as the most likely geographical origin of both L2 and L2-Beijing based on our ancestral reconstructions and the fact the L2-Beijing populations in Asia are much more diverse than in other regions. In addition, our findings show that L2-Beijing was introduced into Africa multiple times from both East-and Southeast Asia. 
The presence of L2-Beijing in South Africa has previously been proposed to be due to the importation of slaves from Southeastern Asia by Europeans in the seventeenth and eighteenth centuries followed by the import of Chinese labor-forces in the early 1900s (van ;). Our Bayesian dating estimates predicted the earliest introductions of L2-Beijing into Africa to have occurred in the eighteenth and nineteenth centuries, concurring with these proposed time periods. However, our findings also point to later introductions of L2-Beijing into the continent in the nineteenth and early twentieth centuries. The timings of the latest three introductions in the second half of the twentieth century coincide with the decolonization and post-colonial period in Africa when investments into infrastructure and other projects by Chinese enterprises substantially increased. These activities also brought many Chinese workers to Africa during a time when TB was still very prevalent in China. Hence, many of these workers were likely latently infected with L2-Beijing and might have later reactivated (). Overall, our findings suggest that L2-Beijing has emerged in Africa over the last 300 years and that these introductions have occurred sporadically ever since. The repeated association of L2-Beijing with drug resistance (Borrell and Gagneux, 2009) has led some to propose that drug resistance is another reason why this sublineage might successfully compete against and eventually replace other M. tuberculosis genotypes (). However, the underlying reason for the association of L2-Beijing with drug resistance remains unclear (Borrell and Trauner, 2017), and it is also far from universal, with several reports from e.g., China and other regions finding no such association (;). Our results show that most introduction events of L2-Beijing into Africa pre-date the antibiotic era, and because of that, these introductions were most likely caused by drug-susceptible strains. The notion that the initial emergence of L2-Beijing in Africa was not driven by drug resistance is further supported by our findings that none of L2-Beijing strains from Eastern Africa strains analyzed here were drug-resistant. Of note, our observations suggest that drug resistance in South Africa was acquired via independent events post initial introductions from Asia. This is in sharp contrast to the situation in Eastern Europe and Central Asia, where L2-Beijing is highly prevalent but dominated by few recently expanded drug-resistant clones, which account for up to 60% of the L2-Beijing populations in some of these countries (;). The association of L2-Beijing with drug resistance in these regions were likely favored by the economic and public health crises that followed the collapse of Soviet Union (;). Based on our finding that the original introductions of L2-Beijing into Africa involved drug-susceptible strains and that the prevalence of drug-resistant L2-Beijing in Africa overall is comparably low, we propose that some of the other characteristics of this sub-lineage, in particular its high virulence, high transmissibility and rapid progression from infection to disease, were responsible for the initial competitive success of L2-Beijing in Africa. Until recently, Africa was considered a Virgin Soil for TB and that TB was only introduced following European contact. However, this notion is incompatible with the current evidence supporting an African origin for the MTBC overall (;;). 
Indeed, Africa is the only continent to harbor all seven known human-adapted MTBC lineages, three of which are almost exclusively observed there, as well as several animal-adapted MTBC ecotypes, which affect several wild African mammals (). Moreover, one study showed that TB epidemics on the continent were caused by many different "native" genotypes prior to foreign contacts, suggesting that TB existed in Africa before the initial European contact. Perhaps the clinical and epidemiological picture of TB in Africa was different at the time, characterized by a low virulent and slow progressing disease, which would have escaped the attention of European colonial officials reporting the absence of TB in Africa, at a time when the TB epidemic was raging in the cities of Europe and North America, killing up to 20% of the adult population (Comas and Gagneux, 2011). We hypothesized previously that rapid co-expansion of MTBC L2, L3, and L4 with their respective human host populations in China, India and Europe might have selected for higher virulence and shorter latency in these "modern" lineages (). The emergence and expansion of these "foreign" genotypes including L2-Beijing into Africa as reported here, and as reported earlier for L4 (;), demonstrate the ability of these lineages to successfully compete against the existing genotypes on the continent, likely as a result of their high transmissibility and rapid progression to disease. Following their initial establishment, poor TB treatment programs subsequently selected for drug resistance in L2-Beijing but also in other MTBC lineages including L4, which might have facilitated their further spread in countries such as South Africa. Finally, the estimates of the TMRCA for L2 reported here are largely consistent with recent reports using similar tipdating analyses based on isolation dates (), and support the notion that the MTBC overall is younger than what has been proposed in earlier studies, based on a hypothesized co-divergence of the human-adapted MTBC and modern humans since their migration out of Africa (;). This study is limited by the fact that we analyzed a globally diverse collection of L2 genomes available in public repositories. Hence, these strains might not be fully representative of the respective geographical regions. Moreover, our African L2-Beijing dataset came from convenient sampling and comprised L2-Beijing mainly from Eastern and Southern Africa, as whole genome data of L2-Beijing from the other African regions were unavailable at the time of the study. However, the representation of African L2-Beijing in our sample reflects the overall prevalence of this sub-lineage as recently reported for the continent (;). Moreover, although regions outside of Eastern-and Southern Africa were underrepresented, this is unlikely to invalidate our findings regarding the multiple independents of L2-Beijing into Africa, except by underestimating the number of true introductions. In conclusion, this is the first study to address the geographical origins of L2-Beijing in Africa using whole genome sequencing data. Our findings indicate multiple independent introductions of L2-Beijing epidemics into Africa from East-and Southeast Asia during the last 300 years that were unrelated to drug resistance. The TB epidemics in Africa have remained fairly stable over the last few decades. 
However, Africa's population growth and increasing urbanization (driven by booming economies) are likely to have an impact on the future of TB in this continent, whether directly by e.g., facilitating transmission or indirectly by promoting new risk factors such as diabetes that increase TB susceptibility (Dye and Williams, 2010). It is therefore crucial to follow the TB epidemics in the continent very closely, especially those related to hypervirulent strains such as L2-Beijing, as these might take particular advantage of this expanding ecological niche (). DATA AVAILABILITY The xml files used for this study can be found here https://github. com/SwissTPH/TBRU_L2Africa.git. AUTHOR CONTRIBUTIONS LR, DB, FM, DS, LF, and SG planned the study. SL, BM, and JF performed the experiments. LR, DB, FM, SMG, SL, BM, CB, SB, KM, MB, LJ, KR, EJC, LD and LF contributed strains and prepared the data. LR, DB, FM, and SG analyzed the data. LR, DB, FM, and SG drafted the manuscript. All authors critically reviewed the manuscript. ACKNOWLEDGMENTS We would like to thank Sebastin Duchne and Yan Yu for their technical support and Linda-Gail Bekker for contributing strains. All bioinformatics analyses were performed at the scientific computing core facility of the University of Basel, sciCORE (http://scicore.unibas.ch/). This work was supported by the Swiss National Science Foundation (grants 310030_166687 to SG), the European Research Council (309540-EVODRTB to SG) and SystemsX.ch This research was also partially supported (strain collection) by a funding supplement from the National Institutes of Allergy and Infectious Diseases (NIAID) under award numbers U01 AI069924 (IeDEA Southern Africa) and U01 AI069911 (IeDEA East Africa). |
Technology for Predicting Particulate Matter Emissions at Construction Sites in South Korea : In recent years, particulate matter (PM) has emerged as a major social issue in various industries, particularly in East Asia. PM not only causes various environmental, social, and economic problems but also has a large impact on public health. Thus, there is an urgent requirement for reducing PM emissions. In South Korea, the PM generated at construction sites in urban areas directly or indirectly causes various environmental problems in surrounding areas. Construction sites are considered a major source of PM that must be managed at the national level. Therefore, this study aims to develop a technology for predicting PM emissions at construction sites. First, the major sources of PM at construction sites are determined. Then, PM emission factors are calculated for each source. Furthermore, an algorithm is developed for calculating PM emissions on the basis of an emission factor database, and a system is built for predicting PM emissions at construction sites. The reliability of the proposed technology is evaluated through a case study. The technology is expected to be used for predicting potential PM emissions at construction sites before the start of construction. Introduction The World Health Organization has recently classified particulate matter (PM) as a class 1 carcinogen because research has shown that it may cause lung, cardiovascular, and respiratory diseases. Thus, considerable effort is being made to reduce PM emissions in various industries worldwide. In East Asian countries, particularly South Korea and China, the emission of large amounts of PM due to radical industrialization is emerging as a major social problem. According to the statistical data of the Ministry of Environment (ME) of South Korea, fugitive emissions account for approximately 50% of PM10 and smaller PM emissions in South Korea. Furthermore, the fugitive emissions caused by construction work are the largest at 33%. PM is generated during construction mainly by the movement of construction equipment and construction activities. Accordingly, there is an urgent requirement for research on various PM emission sources associated with construction equipment. Currently, there is no standard for regulating PM emissions at construction sites in South Korea and no method for calculating PM emissions to set such a standard. The National Institute of Environmental Research (NIER) of South Korea uses the method provided by the US Environmental Protection Agency (EPA) to calculate the amount of PM10 in fugitive dust at construction sites using an equation. However, the equation calculates PM10 using only the construction period and the size of a construction area, and it cannot reflect the various construction conditions that generate PM. In South Korea, PM at construction sites is managed on the basis of the emission concentration through "PM Emergency Reduction Measures". However, this is a passive method that is applied after a high concentration of PM has already been generated. Therefore, a method to preventively manage PM emissions by controlling the quantity of emissions is needed for accurate PM management. As the South Korean government has recognized the need for quantitative management of dust emission, the "Business Site Total Air Pollution Management System" was implemented nationwide in South Korea in 2020. 
This is a preemptive system for managing air pollutants by presenting quantitative emission standards in advance. It has realized an active reduction in air pollution by setting the total amount of allowed emissions and then assigning an amount to each business site to maintain pollutants within the allotted range. However, this system only regulates the business site emissions and does not include construction sites as management targets. The South Korean government continuously discusses the importance of managing construction site emissions; however, there is no clear standard to evaluate PM emissions generated from construction sites.
Therefore, it is necessary to develop a systematic method for calculating PM emissions at construction sites quantitively by developing emission factors and calculating methods to expand the national air pollution management system to construction sites in the future. Therefore, this study develops a technology to predict PM emissions at construction sites by calculating the emission factors for PM10 and PM2.5 for major PM emission sources. PM emissions are calculated on the basis of an emission factor database (DB). Furthermore, a system is developed for predicting PM emissions. The reliability of the prediction technology is examined through a case study. Materials and Methods The PM emission factors at construction sites were calculated and used to construct a DB. Then, this DB was utilized to develop a method for calculating PM emissions. Finally, an algorithm was developed to predict PM emissions at actual construction sites. The PM emission factors were calculated by identifying the major sources of PM emissions and construction activities. The emission factor DB consisted of direct and fugitive emission factors for each construction activity and emission source. In addition, emission scenarios and calculation methods were proposed for each emission source. Finally, a system for predicting PM emissions at construction sites was developed on the basis of the emission factor DB and calculation method. Figure 1 shows the research method of this study. Major PM Emission Sources at Construction Sites Earthworks are considered the major source of PM emissions at construction sites. In addition, according to US EPA AP-42 and the ME of South Korea, the movement of construction equipment used for civil works and construction activities is a major Major PM Emission Sources at Construction Sites Earthworks are considered the major source of PM emissions at construction sites. In addition, according to US EPA AP-42 and the ME of South Korea, the movement of construction equipment used for civil works and construction activities is a major source of PM emissions at construction sites. Figure 2 shows the major PM emission sources selected in this study. These sources were selected on the basis of the construction-related air pollutant emission sources defined by the US EPA and ME of South Korea [17,20,. Among the large category of air pollutant emission sources defined in South Korea, the categories of on-road and nonroad mobile pollution sources were selected. The major dust generating sources defined in AP-42 (Heavy Construction Operations) of US EPA match with the nine types of construction equipment and four types of material transport equipment in the small category of South Korean air pollutant emission sources. As a result, 13 major PM emission sources were selected in this study. PM Emission Factor DB for Construction Sites Two types of PM emissions due to construction equipment were considered. The first type was direct emission, in which PM is released into the air through fuel combustion in construction equipment. The second type was fugitive emission, in which PM is released by the construction activities of construction equipment. Different methods were used to calculate the emission factors for each type of emission. For direct emission, the air pollutant emission factor data presented by the NIER of South Korea were applied mutatis mutandis. The emission factors for dump trucks and trailers were directly specified in the freight car category of on-road mobile pollution sources. 
The emission factors for the nine types of construction equipment were defined according to the rated output of each equipment type. This study directly calculated the emission factors using the average rated output data for construction equipment manufactured in South Korea in accordance with the National Air Pollutant Emission Calculation Method Manual of South Korea. Table 1 shows the direct emission factors for PM10 and PM2.5 for different types of construction equipment. For fugitive emission, the emission factors were calculated according to US EPA AP-42. However, forklifts, concrete pumps, and air compressors were excluded because the definition of fugitive emission activities was unclear. Table 2 shows the equations for calculating the emission factors for the construction activities of construction equipment. The information required for calculating the emission factors, such as the silt content, moisture content, and mean wind speed, was obtained from South Korean literature. However, according to the Korean national air pollutant emission factors, which are used for calculating the annual emissions in Korea, fugitive emissions are calculated using Equation (1) and the emission factors in Table 3: E = A × P × EF, (1) where E denotes the total fugitive emissions (kg/year), A denotes the annual construction area (m²), P denotes the annual earthwork construction period (month/year), and EF denotes the emission factor (kg/m²/month). The emission factors in Table 3 were derived from a NIER study that selected the most appropriate emission factors for construction sites among the emission factors developed by the US EPA. According to this research study, the fugitive emission factors for each type of construction equipment were calculated using AP-42, as in this study. However, the types of construction equipment considered in that research were limited compared with this study, which implies that the fugitive emission factor database of this study is more appropriate for calculating the fugitive emissions of construction sites than Equation (1). The comparison of the construction equipment considered in both studies is shown in Table 4. Calculation Method for PM Emissions at Construction Sites The PM emissions at construction sites in South Korea were calculated using Equation (2) of US EPA AP-42: E = A × EF × (1 − ER), (2) where E, A, EF, and ER denote the emissions, activity rate, emission factor, and overall emission reduction efficiency, respectively. This equation requires the activity rate, emission factor, and overall emission reduction efficiency, which quantify the degree and scope of construction activities. These variables should be presented as figures applicable to South Korea. The activity rate applicable to South Korea was obtained by deriving the construction activity scenarios for each type of construction equipment using the Standard of Construction Estimate. In particular, the activity rates for bulldozers and rollers were derived by selecting the most typical usage of each equipment type, and other scenarios were derived by following the use of each equipment type described in the standard. Table 5 shows the construction activity scenarios for the construction equipment. In addition, the emission factor applicable to South Korea was obtained using the emission factor DB described in Section 2.2. However, ER in this study was considered 0, as there is no standard for ER. Table 5. Scenarios for calculating the amount of construction activity.
Unloading external soil -Amount of construction activity: amount of earthwork -When the fill amount is greater than the cut amount, loading soil from the outside (dumping) Bulldozer Transporting external soil -Amount of construction activity: amount of earthwork -Exiting through the site main entrance after transporting external soil from the site main entrance to the site center Transporting waste soil -Amount of construction activity: amount of earthwork -When the fill amount is less than the cut amount, moving from the site main entrance to the site center to transport the waste soil from the site center to the main entrance (Exclusions) -Movement between the main entrance and the dumpsite of waste soil is not considered in this study as an activity outside the site -Loading or unloading of soil at the dumpsite of waste soil is not considered in this study as an activity outside the site Trailer -Amount of construction activity: rebar work -Exiting through the site main entrance after transporting materials from the site main entrance to the site center Concrete mixer -Amount of construction activity: amount of ready-mixed concrete work -Exiting through the site main entrance after transporting materials from the site main entrance to the site center Table 6 shows the emission factors obtained for major types of construction equipment. The construction equipment is divided into two categories, i.e., working and moving, according to the work type (construction activities). The direct and fugitive emission factors are obtained for PM10 and PM2.5 for each type of construction equipment. Hours, tons, and kilometers (or vehicle kilometers traveled (VKT)) are used as the functional units for the work time, amount of work, and distance traveled, respectively. According to Table 6, the direct emission factors for heavy construction equipment, such as bulldozers, loaders, and rollers, were higher than those for transportation equipment such as dump trucks. Furthermore, the fugitive emission factors for cranes, forklifts, bulldozers, and loaders were higher than those for other types of construction equipment. The above construction equipment is mainly used for cutting, excavating, and filling soil in earthworks, a large amount of fugitive dust is generated from the soil during these operations. In the case of transport equipment, the fugitive emission factors were higher than the direct emission factors. The direct emissions of PM due to the unloading of dump trucks were considered to be negligible and excluded from this study. Figure 3 shows the components and algorithm of the system for predicting PM emissions at construction sites. The total PM emissions at construction sites were the sum of the direct and fugitive emissions caused by construction equipment, which were calculated as described in Section 3.1. An Excel-based system was developed for predicting PM emissions at construction sites. Figure 4 shows the information input screen of the system, where a user enters a design statement and general information about a construction project, and a sheet for displaying the evaluation result. The information input sheet was divided into two parts for entering the general information and design statement. The general information part was configured to input general site information such as the name of the construction project, construction period, site area, gross floor area, underground depth, and ground height. 
The design statement part was configured to input the amount of earthwork and construction material, which determined the amount of activity of construction equipment, on the basis of design documents and detailed statements. Overview of Case Study The applicability of the developed system was examined by predicting PM emissions using actual construction site information, as shown in Table 7. This information was obtained from the design outline, elevation drawing, floor plan, and design details of the building. The duration of earthworks and general works and the number of concrete piles were not available; these were assumed based on the scale of construction. The amount of work was calculated according to the PM emission scenario at the construction site in the prediction system. Then, the final PM emissions were predicted using the calculated amount of work and the established emission factor DB. Table 8 shows the evaluation results of PM emissions by construction equipment. The PM10 emissions by the concrete mixer truck were the largest (50.36% of the total PM10 emissions), followed by the 25-ton dump truck, 8-ton dump truck, and excavator. The PM10 emissions from these types of equipment accounted for approximately 95% of the total emissions at the construction site. The PM2.5 emissions showed a similar trend, with a few differences. The concrete mixer truck generated the largest amount of PM2.5 emissions; nevertheless, it accounted for only 42.53% of the total emissions. Furthermore, the PM2.5 emissions decreased in the order of the 25-ton dump truck, excavator, and 8-ton dump truck. Table 9 shows the predicted total emissions, and Figure 5 shows the predicted PM2.5 and PM10 emissions for construction equipment. The concrete mixer truck and dump trucks (25 tons and 8 tons) emitted high amounts of PM. Thus, a plan for reducing PM emissions should focus on these types of equipment.
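To make the structure of the calculation concrete, the following sketch mirrors the summation performed by the system (Figure 3): for each piece of equipment, the amount of activity (hours, tons, or kilometres travelled, per the Table 5 scenarios) is multiplied by the direct and fugitive emission factors, with ER taken as 0, and the site total is the sum over all equipment. The equipment records and numbers below are placeholders for illustration, not values from Tables 6-9.

```python
from dataclasses import dataclass

@dataclass
class EquipmentActivity:
    name: str
    activity: float          # hours, tons, or km, depending on the scenario
    ef_direct: float         # direct emission factor per activity unit (placeholder)
    ef_fugitive: float       # fugitive emission factor per activity unit (placeholder)

def site_emissions(items, er=0.0):
    """Total PM emissions: sum over equipment of activity x EF x (1 - ER).
    ER (overall emission reduction efficiency) is taken as 0 in the study."""
    total_direct = sum(it.activity * it.ef_direct * (1 - er) for it in items)
    total_fugitive = sum(it.activity * it.ef_fugitive * (1 - er) for it in items)
    return total_direct, total_fugitive, total_direct + total_fugitive

# Placeholder example (illustrative values only):
fleet = [
    EquipmentActivity("concrete mixer truck", activity=1200.0, ef_direct=0.5, ef_fugitive=1.8),
    EquipmentActivity("25-ton dump truck",    activity=900.0,  ef_direct=0.6, ef_fugitive=1.5),
]
print(site_emissions(fleet))
```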
Moreover, while the direct emission factors applied in this study are reasonable, since they are based on the national air pollutant emission statistics data widely used in Korea, the accuracy of the fugitive emission factors needs to be verified. However, there is no proper comparison target for verification, and the detailed method of fugitive emission factor development needs to be studied further. Thus, in this study, the fugitive emissions obtained in the case study were compared with the fugitive emissions calculated by Equation (1) to provide a starting point for fugitive emission factor development. Results of Case Study According to the general information of the construction site used for the case study, the annual construction area is 2564 m² and the annual earthwork period is 6.67 months. Since the building is a non-residential building, the emission factors for Equation (1) are 0.0426 for PM10 and 0.00426 for PM2.5. Therefore, the total fugitive emissions calculated by Equation (1) are 7.28 × 10² kg/year for PM10 and 7.28 × 10¹ kg/year for PM2.5. Compared with the annual fugitive dust emissions derived in Table 7, the results of Equation (1) are approximately five times larger for both PM10 and PM2.5. This gap is mainly caused by the difference in the variables used in the fugitive emission factor equations; in particular, the silt content and the moisture content strongly affect fugitive emissions. Previous work found that PM emissions increase as the silt content increases and decrease as the moisture content increases. According to the NIER research underlying Equation (1), the moisture content used for the fugitive emission factor calculation was 0.7% and the silt content was 14.1%. However, the moisture content used in this study was 12% and the silt content was 9%, as shown in Table 2. Since the moisture content in this study is higher and the silt content is lower than in the NIER research, the fugitive PM emissions calculated in this study were lower. Thus, the moisture and silt content values should be supplemented in further studies to improve the accuracy of the PM emission evaluation. Discussion This study has developed an original technology for quantitatively predicting PM emissions at construction sites. The proposed technology overcomes the limitations of the existing concentration-based PM assessment and management method. Furthermore, this study is expected to provide a guideline for investigating PM emissions at construction sites nationwide. However, the variables applied for generating the fugitive PM emission factors in this study rely on a single study from GRI and still need further verification. Variables such as the silt content and moisture content strongly influence the value of the fugitive PM emission factors, yet how these variables should be selected was not considered in this study. According to the results of the case study, fugitive emissions make up the largest share of the total emissions; thus, updating the fugitive emission factors with well-justified variable values should be addressed in further studies.
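The Equation (1) figures quoted in the case study can be reproduced directly from the stated inputs (annual construction area of 2564 m², earthwork period of 6.67 months/year, and non-residential emission factors of 0.0426 and 0.00426 kg/m²/month). The short check below is arithmetic only; reproducing the roughly five-fold gap would additionally require the Table 7 values, which are not listed in the text.

```python
# Worked check of Equation (1): E = A * P * EF
A = 2564.0          # annual construction area (m^2)
P = 6.67            # annual earthwork construction period (month/year)
EF_PM10 = 0.0426    # kg/m^2/month, non-residential building
EF_PM25 = 0.00426   # kg/m^2/month, non-residential building

E_PM10 = A * P * EF_PM10    # ~728.5 kg/year, i.e. about 7.28 x 10^2
E_PM25 = A * P * EF_PM25    # ~72.9 kg/year,  i.e. about 7.28 x 10^1
print(f"PM10: {E_PM10:.1f} kg/year, PM2.5: {E_PM25:.1f} kg/year")
```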
Moreover, construction equipment considered in this study is limited to only 13 types. US EPA provides more equipment emission factors through AP-42, such as those for graders and batch plants. Considering the diversity of construction events in large-scale construction sites, additional emission factors for equipment and construction work should be continuously developed in future studies. The proposed technology is based on a limited amount of existing literature and data. Thus, further research is required to establish a more precise emission factor DB and calculate input scenarios for construction equipment. In addition, the data that are assumed in the calculation of emission factors, such as the topsoil silt content and soil moisture content, should be supplemented with geographic information.

Conclusions

This study aimed to develop a technology for predicting particulate matter (PM) emissions at construction sites. The primary findings of this study are as follows:
1. Thirteen types of construction equipment were selected as the main PM emission sources at construction sites. Then, an emission factor DB was established, which consisted of direct and fugitive emission factors for PM10 and PM2.5 for various types of construction equipment.
2. The PM emission activity scenarios for construction equipment were presented and used to develop a method for predicting PM emissions at construction sites.
3. A system for predicting PM emissions at construction sites was built using the emission factor DB and activity scenarios.
4. A case study was performed using the developed system, and the fugitive and direct emissions of PM2.5 and PM10 were calculated for construction equipment.
5. Moisture content and silt content values applied in fugitive factor development equations should be supplemented in further studies to improve the accuracy of PM emissions evaluation. |
Estimation of Discrete-Time Linear Systems with Structural Disturbances In this paper, we propose a regularized least-squares estimation method for discrete-time linear systems with structural disturbances. In practice, disturbances or system faults are usually described by step and ramp functions representing abrupt and incipient faults (bias or drift). Therefore, a difference operator matrix is introduced to the regularization term to capture such trends in disturbance signals. The regularized optimization problem is then solved by coordinate descent method, and the resultant estimator is able to simultaneously produce the state and disturbance estimates, eliminating the adverse influences of disturbances as well as system faults. A numerical example is given to demonstrate the effectiveness of the proposed approach. |
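As a rough, self-contained illustration of the regularization idea described in this abstract, the sketch below recovers a step-shaped (bias-type) disturbance from a noisy residual by penalizing its second difference through an explicit difference operator matrix. It is only a simplified stand-in: the penalty weight, the disturbance-only formulation, and the closed-form solve are assumptions of this sketch, whereas the paper's estimator jointly estimates states and disturbances and is solved by coordinate descent.

import numpy as np

# Simplified sketch: recover a step-like disturbance d from noisy residuals r by
# minimizing ||r - d||^2 + lam * ||D d||^2, where D is a second-order difference
# operator. Step (bias) and ramp (drift) shapes have small ||D d|| and are favored.
rng = np.random.default_rng(0)
T = 100
d_true = np.where(np.arange(T) >= 50, 2.0, 0.0)   # abrupt fault (bias) at t = 50
r = d_true + 0.1 * rng.standard_normal(T)         # noisy residual containing the disturbance

D = np.diff(np.eye(T), n=2, axis=0)               # (T-2) x T second-difference operator matrix
lam = 200.0                                       # regularization weight (illustrative)

d_hat = np.linalg.solve(np.eye(T) + lam * D.T @ D, r)            # closed-form regularized LS
print(round(float(d_hat[10]), 2), round(float(d_hat[90]), 2))    # ~0 before the fault, ~2 after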
The Mountain West has lost three coaches this offseason, all of them let go, but Boise State might be the first team in the conference to lose its coach to another school.
Oregon is narrowing in on its coaching search to replace Mark Helfrich and Bryan Harsin is in the mix. Fox Sports is reporting that Harsin and Duck officials have spoken to each other, and ESPN’s Brett McMurphy is reporting that he has already interviewed for the opening.
Another candidate in the mix is Willie Taggart out of South Florida and he met with Oregon athletics director Rob Mullens earlier this week on Thursday. Taggart has rebuilt the Bulls from two wins in his first year and has improved each year with four, eight and now 10 wins this year.
Western Michigan’s P.J. Fleck is also a name that is in the mix and deservedly so with a 13-0 record this year and a berth to the Cotton Bowl. He also has 30 wins in four seasons and similar to Taggart has gone through rebuilding a program.
Harsin has been a head coach for four seasons, one at Arkansas State and three at Boise State, compiling a combined record of 38-13 with a pair of conference titles in that time and a win in the 2014 Fiesta Bowl.
The Oregon job is going to be a rebuild, and Harsin has never had to coach in that environment like some of the other candidates. He took over the head job at Arkansas State from a team that won a conference title the year prior, and when he then came to Boise State he took over a Broncos program that was flourishing under Chris Petersen.
This may not be a deal breaker for Oregon to pick a coach who has not gone through those struggles, but it is also not easy to keep a good program at a high level. Plus, Harsin has had to deal with multiple coordinators, all while keeping Boise State ranked and in contention for a conference title every year.
Football Scoop took a deeper look at candidates and said, “Harsin is in a rhythm at Boise, one he can maintain for a long time. Coaches we have spoken with aren’t so sure the upside is worth the risk associated with this move.”
There is always a risk in taking a bigger job, which can come with a lot more pressure, but it is also worth considering that Harsin could be comfortable at Boise State, as he grew up there, played quarterback there and is now the head coach.
However, money is a big motivating factor for switching jobs, and going from the Mountain West to a Pac-12 school would more than double Harsin's salary: Oregon was paying Helfrich $3.3 million, while Harsin was being paid $1.3 million per year.
There is no word on whether Harsin is even a finalist, but as of now he and Taggart are the only known candidates who have interviewed for the open job in Eugene. |
package com.reworkplan.auth;
import com.github.toastshaman.dropwizard.auth.jwt.JsonWebTokenParser;
import com.github.toastshaman.dropwizard.auth.jwt.hmac.HmacSHA512Verifier;
import com.github.toastshaman.dropwizard.auth.jwt.parser.DefaultJsonWebTokenParser;
import com.reworkplan.mappers.UserMapper;
import com.reworkplan.models.User;
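/**
 * Builds the JWT-based authentication filter for {@link User} principals: wires together
 * token parsing, HMAC-SHA512 token verification, and the user authenticator/authorizer
 * backed by a {@link UserMapper}.
 */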
public class UserAuthenticationFilter {
private static final String REALM = "Parrot";
private static final String BEARER = "Bearer";
private UserAuthenticationFilter() {
}
public static CustomJWTAuthFilter<User> build(byte[] jwtTokenSecret, UserMapper userMapper) {
final JsonWebTokenParser tokenParser = new DefaultJsonWebTokenParser();
final HmacSHA512Verifier tokenVerifier = new HmacSHA512Verifier(jwtTokenSecret);
return new CustomJWTAuthFilter.Builder<User>()
.setTokenParser(tokenParser)
.setTokenVerifier(tokenVerifier)
.setRealm(REALM)
.setPrefix(BEARER)
.setAuthenticator(new UserAuthenticator(userMapper))
.setAuthorizer(new UserAuthorizer())
.buildAuthFilter();
}
}
|
Scottsdale already tops the Valley in gas prices, and the difference is becoming especially glaring as consumers watch unrest in oil-sensitive regions of the world drive up overall prices at the pump.
Motorists, especially those who travel in other Valley cities, are well aware of this difference.
Troy Coriell, a taxi driver who lives in Phoenix and works all over the Valley, doesn't mind paying a little more for gas, "but this is really gouging," he said of the higher Scottsdale prices.
He filled his tank at the Shell Superpumper on Frank Lloyd Wright Boulevard and 90th Street.
"I mean it's early in the game, but I don't think it needs to be this high," he said. "It's hard to give (the attendants) a smile when you go in there."
Andrea Martincic, executive director of the Arizona Petroleum Marketers Association, said it's not surprising that motorists pay more for gas in Scottsdale because it costs more for gas stations to operate in that market.
Her association represents anyone who markets petroleum products in Arizona, including independently owned retailers and distributors.
"Higher land and regulatory expenses in Scottsdale definitely affect the pump price," she said. "Scottsdale has some serious design requirements which can add to the cost of a new station. That's why most gas stations don't even look like gas stations."
Ongoing compliance with regulations also is costly, Martincic said.
"If you're going to locate in a more expensive area, you have to recover any of the costs to locate in that area," she said.
And when gas prices rise, as was the case during the unrest in Egypt and now in Libya, retailers in Scottsdale have to raise their pump prices to not only cover their fuel costs, but to ensure they make enough to cover their fixed costs, such as a mortgage, property maintenance, compliance with city, county and state regulations, and employees, Martincic said.
"If you own a retail gas station, you have to purchase your supply, and if all of a sudden the price goes up . . . each load of fuel is much more expensive," she said. "Each truckload can be $25,000."
Michelle Donati, public affairs supervisor for AAA Arizona, pointed out that Scottsdale trails Flagstaff and Yuma in having the highest gas prices in the state. Tucson usually has the lowest price in the state.
"If you look at Flagstaff, one of the things people pay for is the transportation costs of fuel . . . because there's not a pipeline that goes to Flagstaff, so it has to be trucked in," she said. "In terms of Scottsdale, location is likely a factor." |
#include <ctype.h>
#include <string.h>
#include "matrix_input.h"
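/* Validates the raw matrix string: rows must be bracketed, comma-separated groups of
   numbers, e.g. "[1,2][3,4]"; malformed input is reported through USAGE(). */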
void input_check(char *in) {
if (!strpbrk(in, "[]") || *in != '[') USAGE("invalid input");
int depth = 0;
// input correctness check
for (; *in; in++) {
if (*in == '[') depth++;
if (*in == ']') depth--;
if (isalpha(*in) || (!isdigit(*in) && *(in + 1) == ']')) {
USAGE("invalid input");
}
if (*(in + 1) == '[' && isdigit(*in)) USAGE("invalid input");
if (*in == ',' && *(in + 1) == ',') USAGE("invalid input");
if (*in == '[' && *(in + 1) == ']') USAGE("empty matrix inputted");
}
if (*(in - 1) != ']') USAGE("invalid input");
if (depth != 0) USAGE("inequal brackets");
}
double **parse_input(char *mat, double **data) {
int depth = -1;
char *delimit = mat;
double n;
// place contents of mat into data
while (*delimit) {
for (int i = 0; *delimit != ']'; i++) {
n = strtod(mat, &delimit);
if (*delimit == '[') {
delimit++;
depth++;
}
if (*delimit == ',') delimit++;
(*mat != '[') ? *(*(data + depth) + i) = n : i--;
mat = delimit;
}
mat = ++delimit;
}
return data;
}
matrix parse_data(char *mat) {
char *temp = mat;
matrix m;
m.rows = m.cols = 0;
// get rows and columns from input
for (; *temp; temp++) {
if (*temp == '[') m.rows++;
if (*temp == ',') m.cols++;
}
if (m.rows == 0) m.rows++;
m.cols = (m.cols + m.rows) / m.rows + ((m.cols + m.rows) % m.rows != 0);
m.data = parse_input(mat, set_data(m.rows, m.cols));
return m;
}
|
<reponame>mfateev/temporal-aws-sdk<gh_stars>1-10
// Generated by github.com/temporalio/temporal-aws-sdk-generator
// from github.com/aws/aws-sdk-go version 1.35.7
// Copyright (c) 2020 Temporal Technologies Inc. All rights reserved.
package worklinkstub
import (
"github.com/aws/aws-sdk-go/service/worklink"
"go.temporal.io/sdk/workflow"
"go.temporal.io/aws-sdk/clients"
)
// ensure that imports are valid even if not used by the generated code
var _ clients.VoidFuture
type Client interface {
AssociateDomain(ctx workflow.Context, input *worklink.AssociateDomainInput) (*worklink.AssociateDomainOutput, error)
AssociateDomainAsync(ctx workflow.Context, input *worklink.AssociateDomainInput) *WorkLinkAssociateDomainFuture
AssociateWebsiteAuthorizationProvider(ctx workflow.Context, input *worklink.AssociateWebsiteAuthorizationProviderInput) (*worklink.AssociateWebsiteAuthorizationProviderOutput, error)
AssociateWebsiteAuthorizationProviderAsync(ctx workflow.Context, input *worklink.AssociateWebsiteAuthorizationProviderInput) *WorkLinkAssociateWebsiteAuthorizationProviderFuture
AssociateWebsiteCertificateAuthority(ctx workflow.Context, input *worklink.AssociateWebsiteCertificateAuthorityInput) (*worklink.AssociateWebsiteCertificateAuthorityOutput, error)
AssociateWebsiteCertificateAuthorityAsync(ctx workflow.Context, input *worklink.AssociateWebsiteCertificateAuthorityInput) *WorkLinkAssociateWebsiteCertificateAuthorityFuture
CreateFleet(ctx workflow.Context, input *worklink.CreateFleetInput) (*worklink.CreateFleetOutput, error)
CreateFleetAsync(ctx workflow.Context, input *worklink.CreateFleetInput) *WorkLinkCreateFleetFuture
DeleteFleet(ctx workflow.Context, input *worklink.DeleteFleetInput) (*worklink.DeleteFleetOutput, error)
DeleteFleetAsync(ctx workflow.Context, input *worklink.DeleteFleetInput) *WorkLinkDeleteFleetFuture
DescribeAuditStreamConfiguration(ctx workflow.Context, input *worklink.DescribeAuditStreamConfigurationInput) (*worklink.DescribeAuditStreamConfigurationOutput, error)
DescribeAuditStreamConfigurationAsync(ctx workflow.Context, input *worklink.DescribeAuditStreamConfigurationInput) *WorkLinkDescribeAuditStreamConfigurationFuture
DescribeCompanyNetworkConfiguration(ctx workflow.Context, input *worklink.DescribeCompanyNetworkConfigurationInput) (*worklink.DescribeCompanyNetworkConfigurationOutput, error)
DescribeCompanyNetworkConfigurationAsync(ctx workflow.Context, input *worklink.DescribeCompanyNetworkConfigurationInput) *WorkLinkDescribeCompanyNetworkConfigurationFuture
DescribeDevice(ctx workflow.Context, input *worklink.DescribeDeviceInput) (*worklink.DescribeDeviceOutput, error)
DescribeDeviceAsync(ctx workflow.Context, input *worklink.DescribeDeviceInput) *WorkLinkDescribeDeviceFuture
DescribeDevicePolicyConfiguration(ctx workflow.Context, input *worklink.DescribeDevicePolicyConfigurationInput) (*worklink.DescribeDevicePolicyConfigurationOutput, error)
DescribeDevicePolicyConfigurationAsync(ctx workflow.Context, input *worklink.DescribeDevicePolicyConfigurationInput) *WorkLinkDescribeDevicePolicyConfigurationFuture
DescribeDomain(ctx workflow.Context, input *worklink.DescribeDomainInput) (*worklink.DescribeDomainOutput, error)
DescribeDomainAsync(ctx workflow.Context, input *worklink.DescribeDomainInput) *WorkLinkDescribeDomainFuture
DescribeFleetMetadata(ctx workflow.Context, input *worklink.DescribeFleetMetadataInput) (*worklink.DescribeFleetMetadataOutput, error)
DescribeFleetMetadataAsync(ctx workflow.Context, input *worklink.DescribeFleetMetadataInput) *WorkLinkDescribeFleetMetadataFuture
DescribeIdentityProviderConfiguration(ctx workflow.Context, input *worklink.DescribeIdentityProviderConfigurationInput) (*worklink.DescribeIdentityProviderConfigurationOutput, error)
DescribeIdentityProviderConfigurationAsync(ctx workflow.Context, input *worklink.DescribeIdentityProviderConfigurationInput) *WorkLinkDescribeIdentityProviderConfigurationFuture
DescribeWebsiteCertificateAuthority(ctx workflow.Context, input *worklink.DescribeWebsiteCertificateAuthorityInput) (*worklink.DescribeWebsiteCertificateAuthorityOutput, error)
DescribeWebsiteCertificateAuthorityAsync(ctx workflow.Context, input *worklink.DescribeWebsiteCertificateAuthorityInput) *WorkLinkDescribeWebsiteCertificateAuthorityFuture
DisassociateDomain(ctx workflow.Context, input *worklink.DisassociateDomainInput) (*worklink.DisassociateDomainOutput, error)
DisassociateDomainAsync(ctx workflow.Context, input *worklink.DisassociateDomainInput) *WorkLinkDisassociateDomainFuture
DisassociateWebsiteAuthorizationProvider(ctx workflow.Context, input *worklink.DisassociateWebsiteAuthorizationProviderInput) (*worklink.DisassociateWebsiteAuthorizationProviderOutput, error)
DisassociateWebsiteAuthorizationProviderAsync(ctx workflow.Context, input *worklink.DisassociateWebsiteAuthorizationProviderInput) *WorkLinkDisassociateWebsiteAuthorizationProviderFuture
DisassociateWebsiteCertificateAuthority(ctx workflow.Context, input *worklink.DisassociateWebsiteCertificateAuthorityInput) (*worklink.DisassociateWebsiteCertificateAuthorityOutput, error)
DisassociateWebsiteCertificateAuthorityAsync(ctx workflow.Context, input *worklink.DisassociateWebsiteCertificateAuthorityInput) *WorkLinkDisassociateWebsiteCertificateAuthorityFuture
ListDevices(ctx workflow.Context, input *worklink.ListDevicesInput) (*worklink.ListDevicesOutput, error)
ListDevicesAsync(ctx workflow.Context, input *worklink.ListDevicesInput) *WorkLinkListDevicesFuture
ListDomains(ctx workflow.Context, input *worklink.ListDomainsInput) (*worklink.ListDomainsOutput, error)
ListDomainsAsync(ctx workflow.Context, input *worklink.ListDomainsInput) *WorkLinkListDomainsFuture
ListFleets(ctx workflow.Context, input *worklink.ListFleetsInput) (*worklink.ListFleetsOutput, error)
ListFleetsAsync(ctx workflow.Context, input *worklink.ListFleetsInput) *WorkLinkListFleetsFuture
ListTagsForResource(ctx workflow.Context, input *worklink.ListTagsForResourceInput) (*worklink.ListTagsForResourceOutput, error)
ListTagsForResourceAsync(ctx workflow.Context, input *worklink.ListTagsForResourceInput) *WorkLinkListTagsForResourceFuture
ListWebsiteAuthorizationProviders(ctx workflow.Context, input *worklink.ListWebsiteAuthorizationProvidersInput) (*worklink.ListWebsiteAuthorizationProvidersOutput, error)
ListWebsiteAuthorizationProvidersAsync(ctx workflow.Context, input *worklink.ListWebsiteAuthorizationProvidersInput) *WorkLinkListWebsiteAuthorizationProvidersFuture
ListWebsiteCertificateAuthorities(ctx workflow.Context, input *worklink.ListWebsiteCertificateAuthoritiesInput) (*worklink.ListWebsiteCertificateAuthoritiesOutput, error)
ListWebsiteCertificateAuthoritiesAsync(ctx workflow.Context, input *worklink.ListWebsiteCertificateAuthoritiesInput) *WorkLinkListWebsiteCertificateAuthoritiesFuture
RestoreDomainAccess(ctx workflow.Context, input *worklink.RestoreDomainAccessInput) (*worklink.RestoreDomainAccessOutput, error)
RestoreDomainAccessAsync(ctx workflow.Context, input *worklink.RestoreDomainAccessInput) *WorkLinkRestoreDomainAccessFuture
RevokeDomainAccess(ctx workflow.Context, input *worklink.RevokeDomainAccessInput) (*worklink.RevokeDomainAccessOutput, error)
RevokeDomainAccessAsync(ctx workflow.Context, input *worklink.RevokeDomainAccessInput) *WorkLinkRevokeDomainAccessFuture
SignOutUser(ctx workflow.Context, input *worklink.SignOutUserInput) (*worklink.SignOutUserOutput, error)
SignOutUserAsync(ctx workflow.Context, input *worklink.SignOutUserInput) *WorkLinkSignOutUserFuture
TagResource(ctx workflow.Context, input *worklink.TagResourceInput) (*worklink.TagResourceOutput, error)
TagResourceAsync(ctx workflow.Context, input *worklink.TagResourceInput) *WorkLinkTagResourceFuture
UntagResource(ctx workflow.Context, input *worklink.UntagResourceInput) (*worklink.UntagResourceOutput, error)
UntagResourceAsync(ctx workflow.Context, input *worklink.UntagResourceInput) *WorkLinkUntagResourceFuture
UpdateAuditStreamConfiguration(ctx workflow.Context, input *worklink.UpdateAuditStreamConfigurationInput) (*worklink.UpdateAuditStreamConfigurationOutput, error)
UpdateAuditStreamConfigurationAsync(ctx workflow.Context, input *worklink.UpdateAuditStreamConfigurationInput) *WorkLinkUpdateAuditStreamConfigurationFuture
UpdateCompanyNetworkConfiguration(ctx workflow.Context, input *worklink.UpdateCompanyNetworkConfigurationInput) (*worklink.UpdateCompanyNetworkConfigurationOutput, error)
UpdateCompanyNetworkConfigurationAsync(ctx workflow.Context, input *worklink.UpdateCompanyNetworkConfigurationInput) *WorkLinkUpdateCompanyNetworkConfigurationFuture
UpdateDevicePolicyConfiguration(ctx workflow.Context, input *worklink.UpdateDevicePolicyConfigurationInput) (*worklink.UpdateDevicePolicyConfigurationOutput, error)
UpdateDevicePolicyConfigurationAsync(ctx workflow.Context, input *worklink.UpdateDevicePolicyConfigurationInput) *WorkLinkUpdateDevicePolicyConfigurationFuture
UpdateDomainMetadata(ctx workflow.Context, input *worklink.UpdateDomainMetadataInput) (*worklink.UpdateDomainMetadataOutput, error)
UpdateDomainMetadataAsync(ctx workflow.Context, input *worklink.UpdateDomainMetadataInput) *WorkLinkUpdateDomainMetadataFuture
UpdateFleetMetadata(ctx workflow.Context, input *worklink.UpdateFleetMetadataInput) (*worklink.UpdateFleetMetadataOutput, error)
UpdateFleetMetadataAsync(ctx workflow.Context, input *worklink.UpdateFleetMetadataInput) *WorkLinkUpdateFleetMetadataFuture
UpdateIdentityProviderConfiguration(ctx workflow.Context, input *worklink.UpdateIdentityProviderConfigurationInput) (*worklink.UpdateIdentityProviderConfigurationOutput, error)
UpdateIdentityProviderConfigurationAsync(ctx workflow.Context, input *worklink.UpdateIdentityProviderConfigurationInput) *WorkLinkUpdateIdentityProviderConfigurationFuture
}
func NewClient() Client {
return &stub{}
}
|
Budget 2016: With FM bouncer, PF gains retire hurt
The government on Monday changed the tax policy for employees' provident fund (EPF) contributions from April 2016 by restricting tax-free withdrawals to 40% of the corpus and mandating that the remaining amount be taxed unless it was being used to buy annuities.
Budget 2016: 6 ways to pay less tax, legally
Post-Budget, many salaried Indians are left wondering whether the FM has left them richer or poorer. But hidden in the tangle of provisions are many ways and means that can help you save a chunk of tax. The Times of India-EY Guide is designed to help you understand the key tax proposals of the Budget and minimize the pinch.
NEW DELHI: Under all-round attack, the government on Tuesday promised to consider demands for a rollback of the proposal to tax 60 per cent of withdrawals from provident fund and a ceiling on employers' contribution, but made it clear that PPF will continue to be exempt from tax.

Revenue secretary Hashmukh Adhia went a step further to say that only 60 per cent of interest on contributions made after April 1 will be taxed and that the principal amount of contribution will remain untouched at the time of withdrawal. However, a government press note issued on Tuesday made no mention about taxing only the interest. It claimed that the new tax proposal was aimed at taxing only the high-salaried individuals, totalling about 70 lakh people out of the 3.7 crore employee provident fund (EPF) members. About 3 crore individuals come under the statutory wage limit of Rs 15,000 per month and so will not be affected by the proposed changes.

Finance minister Arun Jaitley in his Budget for 2016-17 on Monday had proposed that 60 per cent of the withdrawal on contribution to employee PF made after April 1 this year will be subject to tax. This would apply to superannuation funds and recognised provident funds including EPF. He also proposed a monetary limit for contribution of employer in recognized PF and superannuation fund at Rs 1.5 lakh per annum for taking tax benefit.

The proposal came under immediate attack from various employees' unions including RSS-backed BMS, and political parties who termed it as "an attack on the working class and a clear case of double taxation."

The finance ministry issued a press note containing a clarification about the proposed changes in the tax treatment of recognized PFs and recognized pension schemes, noting that there seems to be some amount of lack of understanding about the changes made in the Budget on the issue.

"We have received representations today from various sections suggesting that if the amount of 60 per cent of corpus is not invested in the annuity products, the tax should be levied only on accumulated returns on the corpus and not on the contributed amount."

"We have also received representations asking for not having any monetary limit on the employer contribution under EPF, because such a limit is not there in NPS. The finance minister would be considering all these suggestions and taking a view on it in due course," the press note said.

All contributions and interest accrued to EPF before April 1, 2016, will not attract any tax on withdrawal. The press note said the purpose of this reform of the tax regime is to encourage more private sector employees to go for pension security after retirement instead of withdrawing the entire money from the Provident Fund account. Towards this objective, the government announced that 40 per cent of the total corpus withdrawn at the time of retirement will be tax exempt both under recognised Provident Fund and NPS.

"It is expected that the employees of private companies will place the remaining 60 per cent of the corpus in annuity, out of which they can get regular pension. When this 60 per cent of the remaining corpus is invested in annuity, no tax is chargeable. So what it means is that the entire corpus will be tax free, if invested in annuity," it said.

The government in the Budget also made another change to say that when the person investing in annuity dies and the original corpus goes into the hands of his heirs, then again there will be no tax. The idea behind this mechanism, it said, was to encourage people to invest in pension products rather than withdraw and use the entire corpus after retirement.

"However, in EPFO, there are about 60 lakh contributing members who have accepted EPF voluntarily and they are highly-paid employees of private sector companies. For this category of people, amount at present can be withdrawn without any tax liability. We are changing this," the press note said.

Such employees can withdraw without tax liability provided they contribute 60 per cent to an annuity product so that pension security can be created for them according to their earning level. However, if the employee chooses not to put any amount in an annuity product, the tax exemption would apply to 40 per cent only.

"There is no change in the existing tax treatment of Public Provident Fund (PPF)," it said.

Currently there are no monetary ceilings on the employer contribution under EPF, with the only ceiling being that it would be 12 per cent of the salary of the employee member. Similarly, there is no monetary ceiling on the employer contribution under NPS, except that it would be 10 per cent of salary. The Finance Bill 2016 provides that there would be a monetary ceiling of Rs 1.5 lakh on employer contribution, considered together with the ceiling of the 12 per cent rate of employer contribution, whichever is less.

In an interview to PTI, Adhia said withdrawal of the principal amount contributed to EPF after April 1 would remain exempt from any tax and it is only the interest on contributions made after April 1, 2016, which will be taxed. "The purpose (of the Budget proposal) is not to mobilise revenue. We want people to move towards a pension society. So we have given another incentive wherein the investment in annuity product will be tax exempt. Annuity product was always taxable. But here, even after death of a person when the money is transferred to legal heir, we have made it tax exempt," he said. |
Regional distribution of total lipids, free fatty acids and free carnitine in human heart. The concentration of total lipids, free fatty acids and free carnitine in seven different regions of human ventricular myocardium has been studied. Statistically significant differences were found in the concentration of these compounds in the studied regions, with a similar behaviour for free carnitine and free fatty acids (their highest concentrations were found in the lower zones of the left ventricle). The results obtained support the theory of a different metabolic requirement in relation to the intensity of muscular contractile activity. |
Liquid chromatographic determination of moxidectin residues in cattle tissues and confirmation in cattle fat by liquid chromatography/mass spectrometry. Moxidectin, a potent new endo- and ectoparasitic agent, is determined in cattle tissues by liquid chromatography (LC) with fluorescence detection. Moxidectin residues in cattle fat are confirmed with thermospray LC/mass spectrometry (MS). Moxidectin is extracted from the tissue with acetonitrile; the extract is partitioned with hexane, concentrated, and reacted with acetic anhydride, 1-methylimidazole, and dimethylformamide to produce a fluorescent product. The validated sensitivity of the LC/fluorescence method was 10 ppb, with a limit of detection typically between 1 and 2 ppb. Average recoveries from cattle fat, muscle, liver, and kidney were 99, 95, 89, and 92%, respectively. LC/MS confirmatory method determined the underivatized parent compound following the acetonitrile-hexane partitioning step, with an average recovery of 108% at the 250 ppb level in cattle fat. |
Keanu Reeves and Martin Scorsese appear in new documentary Side By Side, which addresses filmmaking in the digital era.
A common hysteria surrounds the inevitable switch from analog to digital, regardless of the industry: In journalism, it's the evolution from print to online; in photography and cinema, it's the switch from film to digital. In new documentary Side By Side, the filmmakers turn the lens on their own form, interviewing the world's most respected directors, cinematographers, editors and other Hollywood professionals to spark a meaningful dialog about the coming of a new era.
Side By Side begins with a simple question: Is this the end of film?
The answer, it seems, isn't so simple. Writer/director Christopher Kenneally's documentary, which made its North American premiere at the Tribeca Film Festival this week, takes us on a journey through the history of photochemical film, moving through time and tech, all while showing how the invention of the CCD chip in 1969 has altered the profession, and the art, of filmmaking.
Everyone has a different opinion, a different reason that they love making movies. George Lucas and James Cameron take the lead on the pro-digital side, while Christopher Nolan – one of digital filmmaking's more outspoken critics – warns against the immediacy of moving away from film.
"The manipulations that digital media allows you to do are seductive, but ultimately a little bit hollow," Nolan says in the film, comparing the allure of digital media to soft Chips Ahoy cookies. "Oh this is amazing, this is a soft cookie, but after a few months you realize it's this horrible chemical making it this way."
The interviews, conducted by Side By Side co-producer Keanu Reeves, are interspersed with clips from movies everyone knows. But this time we see these classic films differently – we discover what kind of camera they were shot with, how their memorable imagery was captured, and the implications of the filmmakers' decisions. To see the industry's technical transformation through the lens of the films we love keeps the average movie-loving audience member invested in the digital-versus-film debate.
The best part of this movie, however, is that everyone interviewed proves equally convincing – not to mention "whip-smart," as Side By Side co-producer Justin Szlasa told Wired. The apparent wisdom behind the various points of view makes the clash of ideas even more compelling.
"No matter who we interviewed we were convinced," said Szlasa. "After the Christopher Nolan interview, you step back and say, 'This is how it has to be done,' and then somebody else, like Robert Rodriguez or George Lucas who has a completely different take on it, you finish that interview and you are convinced."
Reeves' presence on-screen and familiarity with his interview subjects seems to encourage them to let down their guard. It wouldn't be absurd to be skeptical of the actor's interviewing skills, but it becomes clear that he is motivated by the same thing as the people he is speaking with: They are passionate about making movies.
Good stories come out of this relaxed setting, like an anecdote from The Girl With the Dragon Tattoo director David Fincher about Robert Downey Jr. peeing in Mason jars around a movie set to protest the speed and efficiency that shooting in digital allows (and the resulting time actors spend on their feet).
With the way we make and watch movies changing, what's at stake for you and me? Does it matter if the movies we watch are shot in film or done digitally? Side By Side director Kenneally made the point, as does the movie, that it's the medium of film that's at stake, and the question is, what does that mean for visual storytelling?
"The audience want to be entertained and they want good stories to be told to them," Kenneally told Wired. "Is this going to change that? Are we going to lose something? I don't care if something changes, but the more important questions are, 'What's the downside? What's the upside?' And I think that's still kind of being figured out."
It's comforting to know, however, that stories and the way they have been told are constantly in flux, and have been forever. Fincher puts it perfectly in the movie: "I don't believe for one second that digital imaging or tech will ever take away the humanity of storytelling, because storytelling in itself is a wholly human concern."
The point is that film, as of today, is not dead. The fear is out there, though, that the option to choose film over digital will soon disappear. Side By Side closes in a more complex fashion than it begins. Instead of giving us answers, it poses more questions.
The documentary is successful, though. It shows us why we should care about movies by taking an in-depth look at how they are made, as told by some of the people who love movies most.
Oh, and Side By Side was shot in digital. |
Electronic structure, dynamic stability, elastic, and optical properties of MgTMN2 (TM = Ti, Zr, Hf) ternary nitrides from first-principles calculations
Ternary nitride semiconductors with tunable electronic structure and charge transport properties have attracted increasing attention as optoelectronic materials. The recently discovered ternary MgTMN2 (TM = Ti, Zr, Hf) are predicted to be nondegenerate semiconductors with visible-range optical absorption onsets. In the present study, the electronic structure, elastic properties, optical absorption spectrum, and dynamic stability of the MgTMN2 system have been systematically studied by first-principles calculations based on the density functional theory. These compounds show semiconductor characteristics with a bandgap ranging from 1.0 to 1.5 eV predicted by the Heyd-Scuseria-Ernzerhof approach. Compared to the traditional semiconductors of Si and GaAs and III-V nitrides of GaN and AlN, these ternary nitrides have stronger resistance to external compression, shear strain, and deformation due to the larger elastic modulus. MgTiN2 shows a strong anisotropy characteristic along the xy plane and z axis, while for MgZrN2 and MgHfN2, a weak elastic anisotropy is predicted. The absorption regions of these compounds are mainly concentrated in the ultraviolet region, and MgTiN2 is more sensitive to visible light with respect to the other two compounds. The thermodynamic stability of MgTiN2, MgZrN2, and MgHfN2 is verified by the stable phonon dispersion relations. It is found that the most stable low Miller index surface is for MgTiN2 and for MgZrN2 and MgHfN2. |
<filename>app/src/main/java/Activity/RaceModeActivity.java
package Activity;
import android.Manifest;
import android.annotation.SuppressLint;
import android.content.Context;
import android.content.pm.PackageManager;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.SystemClock;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.ListView;
import android.widget.TextView;
import android.widget.Toast;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import com.example.speedometer.R;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import java.text.DecimalFormat;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.Collections;
import java.util.Date;
import java.util.List;
import java.util.Locale;
import java.util.Random;
import Other.LoggedAccHandler;
import Other.RaceTimesHandler;
public class RaceModeActivity extends AppCompatActivity implements LocationListener {
private LocationManager locationManager;
private TextView raceModeSpeedometerTextView;
private TextView raceModeInfo;
private TextView raceModeTimerTextView;
private TextView distanceMeter;
private ListView raceModeTimesListView;
private Button saveLapTimesButton;
FirebaseDatabase firebaseDatabase;
DatabaseReference databaseReference;
private Handler handler = new Handler();
private long startTime = 0L;
private long timeInMs = 0L;
private long updateTime = 0L;
private long timeSwapBuff = 0L;
private int sec;
private int min;
private int ms;
private double startPointLatitude;
private double startPointLongitude;
private double positionA;
private double positionB;
private double distanceBetweenAndB;
private double distance;
private double trackLength;
private String car;
private String user;
private String track;
private static final String TRACK_LENGTH = "TRACK_LENGTH";
private static final String CHOSEN_CAR = "CHOSEN_CAR";
private static final String CHOSEN_TRACK = "CHOSEN_TRACK";
private Context context = this;
String[] ListElements = new String[]{};
List<String> ListElementsArrayList;
ArrayAdapter<String> adapter;
Runnable updateTimeThread = new Runnable() {
@Override
public void run() {
timeInMs = SystemClock.uptimeMillis() - startTime;
updateTime = timeSwapBuff + timeInMs;
sec = (int) updateTime / 1000;
min = sec / 60;
sec %= 60;
ms = (int) updateTime % 1000;
raceModeTimerTextView.setText("" + String.format("%02d", min) + ":" + String.format("%02d", sec) + ":" + String.format("%03d", ms));
handler.postDelayed(this, 0);
}
};
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.racemode_activity);
Bundle bundle = getIntent().getExtras();
trackLength = Double.parseDouble(bundle.getString(TRACK_LENGTH));
car = bundle.getString(CHOSEN_CAR);
track = bundle.getString(CHOSEN_TRACK);
LoggedAccHandler lah = new LoggedAccHandler();
user = lah.getLoggedUserName();
initView();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
raceModeInfo.setTextSize(20f);
raceModeInfo.setText(R.string.gpsFail);
}
}
private void initView() {
raceModeSpeedometerTextView = findViewById(R.id.raceModeSpeedometerTextView);
raceModeTimesListView = findViewById(R.id.raceModeTimesListView);
raceModeTimerTextView = findViewById(R.id.raceModeTimerTextView);
raceModeInfo = findViewById(R.id.raceModeInfo);
distanceMeter = findViewById(R.id.distanceMeter);
saveLapTimesButton = findViewById(R.id.saveLapTimesButton);
ListElementsArrayList = new ArrayList<String>(Arrays.asList(ListElements));
adapter = new ArrayAdapter<String>(context,
android.R.layout.simple_list_item_1,
ListElementsArrayList
);
raceModeTimesListView.setAdapter(adapter);
saveLapTimesButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (ListElementsArrayList.isEmpty()){
Toast.makeText(context, "Waiting for complete Run!.....", Toast.LENGTH_LONG).show();
}else {
firebaseDatabase = FirebaseDatabase.getInstance();
databaseReference = firebaseDatabase.getReference("Users").child(user).child("RaceMode");
Collections.sort(ListElementsArrayList);
String bestTime = ListElementsArrayList.get(0);
int lapCounter = ListElementsArrayList.size(); // number of recorded lap times
SimpleDateFormat format = new SimpleDateFormat("dd.MM.yyyy HH:mm:ss", Locale.getDefault());
String currentDate = format.format(new Date());
RaceTimesHandler rth = new RaceTimesHandler(bestTime, car, user, track, String.valueOf(trackLength), currentDate, String.valueOf(lapCounter));
Random random = new Random();
int randomInt = random.nextInt(500000);
databaseReference.child(String.valueOf(randomInt)).setValue(rth);
Toast.makeText(context, "Times Saved!", Toast.LENGTH_LONG).show();
}
}
});
}
@Override
protected void onPause() {
super.onPause();
permissionGranted();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION) == PackageManager.PERMISSION_GRANTED) {
locationManager.removeUpdates(this);
}
}
@Override
protected void onResume() {
super.onResume();
permissionGranted();
}
@Override
protected void onStart() {
super.onStart();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION) == PackageManager.PERMISSION_GRANTED) {
checkGPSConnection();
}
}
@SuppressLint("MissingPermission")
private void checkGPSConnection() {
permissionGranted();
Toast.makeText(context, "Waiting for GPS connection...", Toast.LENGTH_LONG).show();
}
public void permissionGranted() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION) == PackageManager.PERMISSION_GRANTED) {
locationManager = (LocationManager) this.getSystemService(Context.LOCATION_SERVICE);
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, (LocationListener) this);
}
}
@SuppressLint("MissingPermission")
@Override
public void onLocationChanged(Location location) {
if (location != null) {
float speed = location.getSpeed();
float convertedSpeedToKmH = speed * 3600 / 1000;
int nCurrentSpeed = (int) convertedSpeedToKmH;
String nStrCurrentSpeed = String.valueOf(nCurrentSpeed);
raceModeSpeedometerTextView.setText(nStrCurrentSpeed);
double currentLatitude = location.getLatitude();
double currentLongitude = location.getLongitude();
traveledDistance(currentLatitude, currentLongitude);
if (convertedSpeedToKmH >= 10 && convertedSpeedToKmH <= 15) {
startTimer();
// TODO: change this to count and sum segments, because for now it measures a straight line from the start point to the current position
setStartPoint(location);
} else if (convertedSpeedToKmH >= 7 && convertedSpeedToKmH <= 14) {
stopTimer();
}
}
}
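// Great-circle distance (spherical law of cosines) from the recorded start point to the
// current location; the result is compared against trackLength to detect a completed lap.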
private double traveledDistance(double currentLatitude, double currentLongitude) {
if (startPointLatitude != 0 && startPointLongitude != 0){
if (distance < trackLength){
double theta = startPointLongitude - currentLongitude;
distance = Math.sin(deg2rad(startPointLatitude)) * Math.sin(deg2rad(currentLatitude))
+ Math.cos(deg2rad(startPointLatitude)) * Math.cos(deg2rad(currentLatitude))
* Math.cos(deg2rad(theta));
distance = Math.acos(distance);
distance = rad2deg(distance);
distance = distance * 60 * 1.1515 * 1000;
DecimalFormat decimalFormat = new DecimalFormat("#.##");
distanceMeter.setText(decimalFormat.format(distance) + " m");
if (distance >= trackLength){
lapTime();
distance = 0;
distanceMeter.setText(distance + "m");
}
}else {
distance = 0;
}
return distance;
}
return distance;
}
private void lapTime() {
Toast.makeText(context, "LAP!", Toast.LENGTH_LONG).show();
ListElementsArrayList.add(raceModeTimerTextView.getText().toString());
adapter.notifyDataSetChanged();
}
private void setStartPoint(Location location) {
startPointLatitude = location.getLatitude();
startPointLongitude = location.getLongitude();
}
private double deg2rad(double deg){
return (deg * Math.PI / 180.0);
}
private double rad2deg(double rad){
return (rad * 180.0 / Math.PI);
}
private void startTimer() {
startTime = SystemClock.uptimeMillis();
handler.postDelayed(updateTimeThread, 0);
}
private void stopTimer() {
timeSwapBuff += timeInMs;
handler.removeCallbacks(updateTimeThread);
clearData();
}
private void clearData() {
Toast.makeText(context, "RESET!", Toast.LENGTH_LONG).show();
startTime = 0L;
timeInMs = 0L;
timeSwapBuff = 0L;
updateTime = 0L;
sec = 0;
min = 0;
ms = 0;
distance = 0;
raceModeTimerTextView.setText("00:00:000");
raceModeSpeedometerTextView.setText("0");
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
}
@Override
public void onProviderEnabled(String provider) {
}
@Override
public void onProviderDisabled(String provider) {
}
}
|
Q:
What was the political significance of the investiture conflict?
I understand that Pope Gregory VII and Henry IV were heavily involved in determining who gained the power to appoint church officials. However, I'm a little unsure of the consequences of this conflict and what long term political and economic repercussions there was because of this.
A:
The investiture conflict was one very public aspect of a larger struggle for power. Precursors to it could be seen 300 years earlier at the coronation of Charlemagne by Pope Leo III. The larger questions were ones like:
Can the Pope tell a monarch what to do?
Should a monarch have complete control of what happens in his territory?
Is a king theocratic (ruling by the choice of God) or a human who happens to have power?
Should the church be involved with 'worldly' things like who is the ruler, how is a state organized?
Is the leadership of the church in the hands of the pope or the abbots of the wealthy monasteries?
How people chose to answer these questions had profound consequences for the development of Europe. Some possible answers:
Charlemagne, King of the Franks: "The Pope shouldn't tell me what to do. My father and I saved him from the Lombards. I am anointed by God like King David and I know what is best for my people."
Holy Roman Emperor Henry IV: "I should have the right to choose who is a bishop in my lands. Much of the church property in Saxony is held in vassalage to me. This right has been held by my predecessors for generations. For the last hundred years the papacy has been weak compared to the flourishing northern abbeys. What makes Pope Gregory think he can change the rules now?"
Pope Gregory VII (Hildebrand) with Cardinal Humbert: "I am the vicar of Christ on earth. The papal office is universal in authority and cannot err. I am the true successor to Constantine and my actions can only be judged by God. I have in my hand both the temporal and spiritual swords. I may allow some of these kings or emperors to have authority for the good of the Christian mission, but if they misuse that power, I will reclaim it and give it to whoever I think deserves it. I will not tolerate simony (selling of offices) or lay investiture (letting laymen choose bishops). Only by submitting to my will, as the bishop of Rome, can anyone achieve salvation."
Abbot Hugh of Cluny: "We have had a balanced approach of the church growing from the support of these theocratic monarchs for hundreds of years. Piety in society is increasing and our monastic orders are thriving. Gregory, in pursuit of his own power, is upsetting a system that works well and does not need to be reformed."
Cardinal Peter Damiani: "To bring God's love into this world, the church must lead by example. Bishops must stop cohabiting with women and siring bastards. We must show the laity how to act by having a change within our hearts toward charity and an emotional connection with God. In this way we will elevate Christendom, not by locking horns with the emperor."
Pope Paschal II: "We have seen how difficult it was for Gregory to keep laymen out of church affairs. My simple solution is that we must divest ourselves of our lands and our riches, turning them over to the local rulers, and reconstitute ourselves as a purely spiritual organization. In exchange, the kings and emperors must stay out of our affairs and allow us to work freely in their lands. Our bishops must not be prosecuted under their courts as all churchmen are answerable only to me. In constructing this new arrangement, we are returning to the original apostolic poverty. We will spread the gospel, not as rich princes, but as poor itinerant preachers." [Note: this policy was abandoned because everyone hated it except Paschal and the German emperor.]
All that as a preface, your question asked what the political repercussions of this many-sided debate were.
1) The papacy reached the apex of their power from about 1073 to 1300. They could call for crusades, decide who was or was not a legal choice for a throne, tell kings what to do (with varying degrees of success), and collect huge amounts of money into their coffers. Arguably the high-water mark for this powerful papal style was the rule of Innocent III (1198-1216). Even after the schisms of the 1300's, papal power continued to play an huge role in European life for centuries.
2) The power of states had already begun to rise as rulers like Otto the Great of Germany and William the Conqueror beat back the great barons and consolidated power around their courts. In spite of the efforts of the Gregorian reformers, this increase in state power would continue under effective rulers like Henry II of England and Louis IX of France. After the turmoil of the Gregorian revolution though, successful kings came to rely on effective bureaucracies to build a secular state rather than emphasizing the older ideals of theocratic kingship and the personal charisma of the ruler. One place these states failed to consolidate was in Germany, in part because the system of electors kept power in the hands of the nobles, but also because of the damage done to the Holy Roman Emperor's power after the assault by Gregory VII. Later emperors like Fredrick Barbarossa and Fredrick II Hohenstaufen burnt up much of their energy fighting with the pope over control of Italy. One could even say that the late dates of the unifications of Italy and Germany (both in 1871) even have their roots in the Gregorian revolution. The power of the pope was one of the things that prevented those countries from charting the course that France, England, and Spain followed in centralizing around a monarchical state.
3) The emotional shifts that occurred during the Gregorian revolution bore fruit for years as well. Clerical celibacy came to be enforced more strictly. The new piety was taken up by St. Francis and organized by Innocent III into the order of Franciscan Friars (actually named Friars Minor). The older monasteries of the Clunaic style dropped in influence since they were seen as old-fashioned and not prepared to cope with the new piety of the masses.
In short, this time period is referred to as the "Gregorian revolution" because of the major shift in thinking and its long-reaching political and ideological effects.
References: I strongly recommend The Civilization of the Middle Ages by Norman F Cantor. |
Let's Give 'Em Something to TOC About: Transforming the Table of Contents of Your PDF File In PDF files, the table of contents provides a map that helps your audience to navigate the document easily. However, the default table of contents that is generated by the SAS ® Output Delivery System (ODS) destination, while informative, is fairly utilitarian. Your procedures and DATA steps generate tables and graphs that have meaning to you and your audience. Likewise, the table of contents should also be as meaningful as possible by clarifying the contents of your PDF. This paper explains and demonstrates step by step how to use the following statement, options, and procedures to customize your table of contents: the ODS PROCLABEL statement the CONTENTS= and the DESCRIPTION= options the DOCUMENT destination and procedure the TEMPLATE procedure These tools provide you with the flexibility and the power to customize your table of contents so that you really leave your audience with something to TOC about! |
Prevalence and predictors of left ventricular diastolic dysfunction in a Hispanic patient population. Minimal data exist on attributes of diastolic dysfunction in the Hispanic population. The purpose of this study was to evaluate the prevalence and predictors of diastolic dysfunction in a Hispanic patient population. We performed a retrospective review of 166 consecutive echocardiograms in a southwestern Texas hospital that caters to a large Hispanic patient population. We identified all echocardiograms that met criteria for diastolic dysfunction and assessed baseline demographics and comorbidities in the cohort of Hispanic patients. A multivariate analysis was performed to identify the independent predictors of diastolic dysfunction. A total of 129 out of 166 patients (77.8%) were of Hispanic origin. Out of the 129 patients, 87 (67.4%) had some degree of diastolic dysfunction, suggesting a high prevalence in the study cohort. In the diastolic dysfunction group, the mean age was 64.5±13.9, 37% were male and 63% female, 78% had diabetes, 85% had hypertension, and 49% had some degree of renal insufficiency (stages 3-5). A logistic multivariate analysis showed that diabetes was an independent predictor of diastolic dysfunction with an odds ratio of 2.69 (95% confidence interval, 1.06-6.28; p=0.038). Similarly, age (per year increase) and chronic kidney disease were independent predictors of diastolic dysfunction. We demonstrated that older age, presence of diabetes, and renal dysfunction are independent predictors of diastolic dysfunction in the Hispanic patient population. Strategies geared toward reducing diabetes and preventing renal dysfunction are likely to decrease prevalence of diastolic dysfunction and heart failure in this community. |
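For context on how the reported effect size relates to the fitted model, the short sketch below back-derives the logistic-regression coefficient and its standard error from the published odds ratio and 95% confidence interval for diabetes; the Wald-interval form and the 1.96 normal quantile are standard choices but are assumptions about how the interval was constructed.

import math

# Back-derive the log-odds coefficient from the reported odds ratio and 95% CI
# (assumes a Wald-type interval: CI = exp(beta +/- 1.96 * SE)).
or_point, ci_low, ci_high = 2.69, 1.06, 6.28   # diabetes predictor, from the abstract
beta = math.log(or_point)                                   # ~0.99
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)    # ~0.45
print(round(beta, 2), round(se, 2))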
(CNN) — Puerto Rico’s true death toll from Hurricane Maria remains elusive as the storm’s one-year anniversary approaches.
The island government raised the official death toll to 2,975 on Tuesday after maintaining for months that 64 people had died as a result of the storm.
But the higher figure, based on the findings of researchers from George Washington University in a study commissioned by the US commonwealth’s government, is only an approximation, not a concrete list of names, according to Gov. Ricardo Rossello.
In the chaos after the storm, the island’s public safety director, Héctor M. Pesquera, said at least six people were killed.
Rossello told CNN two days after the storm hit that 13 people had died in the storm. That figure was based on reports from mayors on the island, but law enforcement authorities hadn’t confirmed the total, the government said.
Nearly two weeks after Maria, President Donald Trump touched down for the first time and downplayed the devastation.
“Every death is a horror,” Trump said in early October before comparing Puerto Rico’s official death toll of 16 at the time to “a real catastrophe, like Katrina,” in which more than 1,800 people perished from the 2005 storm that ravaged New Orleans.
After Trump departed, the governor announced the death toll had risen to 34.
One of the conclusions of the George Washington University study was that officials did nothing to respond to public criticism and concerns about political motivations that surged when the official tally jumped to 34 shortly after Trump’s visit.
In November, CNN reporters surveyed 112 funeral homes across the island, about half the total. They found that funeral home directors identified 499 deaths considered to be hurricane-related.
In December, public safety officials revised the official count to 64, adding some fatalities newly certified as indirect deaths related to the storm.
For instance, emergency personnel were unable to reach the home of a man who collapsed during the storm. Doctors had classified his death as natural, and it was not initially considered a storm-related death.
Other deaths later determined to be indirectly caused by the storm included a case of exposure to carbon monoxide, a suicide, a person run over by his own vehicle and a death from complications following a fall.
Also in December, The New York Times estimated 1,052 “excess deaths” occurred after Maria. Other media produced similar estimates.
In May, a team that included researchers from Harvard University published a study in the New England Journal of Medicine estimating 793 to 8,498 people died in Maria’s wake, a range that some academics have criticized as overly broad.
The study’s midpoint estimate — 4,645 deaths — became a rallying cry for activists upset by what they see as a lack of accountability for the scale of the catastrophe by officials in Puerto Rico and the United States.
A research letter published this month in the medical journal JAMA estimated that between 1,006 and 1,272 people died in connection to the storm.
The Puerto Rican government on August 8 quietly admitted the official toll was higher than its December tally.
In a report to Congress, the government said documents show that 1,427 more deaths occurred in the four months after the storm than “normal,” compared with deaths that occurred the previous four years.
The 1,427 figure also appeared in a draft of the report — “Transformation and Innovation in the Wake of Devastation” — which was published and opened for public comment July 9.
The revised figure was first “revealed” by the Puerto Rico government, according to the final report, on June 13, one day after officials were forced by a judge to release death records that CNN and the Centro de Periodismo Investigativo in Puerto Rico had sued to make public.
But officials at the time stopped short of updating the official death toll. |
def after_all(context):
    # Behave hook: runs once after the entire test run has finished.
    if context.running_locally:
        try:
            _teardown_system(context)
        except subprocess.CalledProcessError as e:
            raise Exception('Failed to teardown system. Command "{c}" failed:\n{o}'
                            .format(c=' '.join(e.cmd), o=e.output))
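For reference, the running_locally flag checked above is normally set in a matching before_all hook. A minimal sketch under that assumption (the CI environment-variable check is illustrative, not taken from the original project):
import os


def before_all(context):
    # Hypothetical convention: treat the run as local unless a CI marker
    # variable is present; adjust the variable name to the real project.
    context.running_locally = os.environ.get("CI") is None |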
// source/puma/private/internal/renderer/renderables/irenderable.h
#pragma once
#include <engine/utils/position.h>
#include <engine/utils/renderdefinitions.h>
#include <texturemanager/texture.h>
#include <texturemanager/texturedefinitions.h>
#include <utils/graphics/dimensions.h>
#include <utils/genericid.h>
namespace puma
{
class IRenderable
{
public:
virtual ~IRenderable() {}
virtual void render() = 0;
void setRenderLayer( RenderLayer _value ) { m_renderLayer = _value; }
RenderLayer getRenderLayer() const { return m_renderLayer; }
private:
RenderLayer m_renderLayer = RenderLayer(0);
};
} |
// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
package storagegateway
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/internal/awsutil"
)
// DescribeTapeRecoveryPointsInput
// Please also see https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeRecoveryPointsInput
type DescribeTapeRecoveryPointsInput struct {
_ struct{} `type:"structure"`
// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
// to return a list of gateways for your account and region.
//
// GatewayARN is a required field
GatewayARN *string `min:"50" type:"string" required:"true"`
// Specifies that the number of virtual tape recovery points that are described
// be limited to the specified number.
Limit *int64 `min:"1" type:"integer"`
// An opaque string that indicates the position at which to begin describing
// the virtual tape recovery points.
Marker *string `min:"1" type:"string"`
}
// String returns the string representation
func (s DescribeTapeRecoveryPointsInput) String() string {
return awsutil.Prettify(s)
}
// Validate inspects the fields of the type to determine if they are valid.
func (s *DescribeTapeRecoveryPointsInput) Validate() error {
invalidParams := aws.ErrInvalidParams{Context: "DescribeTapeRecoveryPointsInput"}
if s.GatewayARN == nil {
invalidParams.Add(aws.NewErrParamRequired("GatewayARN"))
}
if s.GatewayARN != nil && len(*s.GatewayARN) < 50 {
invalidParams.Add(aws.NewErrParamMinLen("GatewayARN", 50))
}
if s.Limit != nil && *s.Limit < 1 {
invalidParams.Add(aws.NewErrParamMinValue("Limit", 1))
}
if s.Marker != nil && len(*s.Marker) < 1 {
invalidParams.Add(aws.NewErrParamMinLen("Marker", 1))
}
if invalidParams.Len() > 0 {
return invalidParams
}
return nil
}
// DescribeTapeRecoveryPointsOutput
// Please also see https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeRecoveryPointsOutput
type DescribeTapeRecoveryPointsOutput struct {
_ struct{} `type:"structure"`
// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
// to return a list of gateways for your account and region.
GatewayARN *string `min:"50" type:"string"`
// An opaque string that indicates the position at which the virtual tape recovery
// points that were listed for description ended.
//
// Use this marker in your next request to list the next set of virtual tape
// recovery points in the list. If there are no more recovery points to describe,
// this field does not appear in the response.
Marker *string `min:"1" type:"string"`
// An array of TapeRecoveryPointInfos that are available for the specified gateway.
TapeRecoveryPointInfos []TapeRecoveryPointInfo `type:"list"`
}
// String returns the string representation
func (s DescribeTapeRecoveryPointsOutput) String() string {
return awsutil.Prettify(s)
}
const opDescribeTapeRecoveryPoints = "DescribeTapeRecoveryPoints"
// DescribeTapeRecoveryPointsRequest returns a request value for making API operation for
// AWS Storage Gateway.
//
// Returns a list of virtual tape recovery points that are available for the
// specified tape gateway.
//
// A recovery point is a point-in-time view of a virtual tape at which all the
// data on the virtual tape is consistent. If your gateway crashes, virtual
// tapes that have recovery points can be recovered to a new gateway. This operation
// is only supported in the tape gateway type.
//
// // Example sending a request using DescribeTapeRecoveryPointsRequest.
// req := client.DescribeTapeRecoveryPointsRequest(params)
// resp, err := req.Send(context.TODO())
// if err == nil {
// fmt.Println(resp)
// }
//
// Please also see https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeRecoveryPoints
func (c *Client) DescribeTapeRecoveryPointsRequest(input *DescribeTapeRecoveryPointsInput) DescribeTapeRecoveryPointsRequest {
op := &aws.Operation{
Name: opDescribeTapeRecoveryPoints,
HTTPMethod: "POST",
HTTPPath: "/",
Paginator: &aws.Paginator{
InputTokens: []string{"Marker"},
OutputTokens: []string{"Marker"},
LimitToken: "Limit",
TruncationToken: "",
},
}
if input == nil {
input = &DescribeTapeRecoveryPointsInput{}
}
req := c.newRequest(op, input, &DescribeTapeRecoveryPointsOutput{})
return DescribeTapeRecoveryPointsRequest{Request: req, Input: input, Copy: c.DescribeTapeRecoveryPointsRequest}
}
// DescribeTapeRecoveryPointsRequest is the request type for the
// DescribeTapeRecoveryPoints API operation.
type DescribeTapeRecoveryPointsRequest struct {
*aws.Request
Input *DescribeTapeRecoveryPointsInput
Copy func(*DescribeTapeRecoveryPointsInput) DescribeTapeRecoveryPointsRequest
}
// Send marshals and sends the DescribeTapeRecoveryPoints API request.
func (r DescribeTapeRecoveryPointsRequest) Send(ctx context.Context) (*DescribeTapeRecoveryPointsResponse, error) {
r.Request.SetContext(ctx)
err := r.Request.Send()
if err != nil {
return nil, err
}
resp := &DescribeTapeRecoveryPointsResponse{
DescribeTapeRecoveryPointsOutput: r.Request.Data.(*DescribeTapeRecoveryPointsOutput),
response: &aws.Response{Request: r.Request},
}
return resp, nil
}
// NewDescribeTapeRecoveryPointsRequestPaginator returns a paginator for DescribeTapeRecoveryPoints.
// Use Next method to get the next page, and CurrentPage to get the current
// response page from the paginator. Next will return false, if there are
// no more pages, or an error was encountered.
//
// Note: This operation can generate multiple requests to a service.
//
// // Example iterating over pages.
// req := client.DescribeTapeRecoveryPointsRequest(input)
// p := storagegateway.NewDescribeTapeRecoveryPointsRequestPaginator(req)
//
// for p.Next(context.TODO()) {
// page := p.CurrentPage()
// }
//
// if err := p.Err(); err != nil {
// return err
// }
//
func NewDescribeTapeRecoveryPointsPaginator(req DescribeTapeRecoveryPointsRequest) DescribeTapeRecoveryPointsPaginator {
return DescribeTapeRecoveryPointsPaginator{
Pager: aws.Pager{
NewRequest: func(ctx context.Context) (*aws.Request, error) {
var inCpy *DescribeTapeRecoveryPointsInput
if req.Input != nil {
tmp := *req.Input
inCpy = &tmp
}
newReq := req.Copy(inCpy)
newReq.SetContext(ctx)
return newReq.Request, nil
},
},
}
}
// DescribeTapeRecoveryPointsPaginator is used to paginate the request. This can be done by
// calling Next and CurrentPage.
type DescribeTapeRecoveryPointsPaginator struct {
aws.Pager
}
func (p *DescribeTapeRecoveryPointsPaginator) CurrentPage() *DescribeTapeRecoveryPointsOutput {
return p.Pager.CurrentPage().(*DescribeTapeRecoveryPointsOutput)
}
// DescribeTapeRecoveryPointsResponse is the response type for the
// DescribeTapeRecoveryPoints API operation.
type DescribeTapeRecoveryPointsResponse struct {
*DescribeTapeRecoveryPointsOutput
response *aws.Response
}
// SDKResponseMetdata returns the response metadata for the
// DescribeTapeRecoveryPoints request.
func (r *DescribeTapeRecoveryPointsResponse) SDKResponseMetdata() *aws.Response {
return r.response
}
|
The retired lieutenant general who oversaw the government’s Hurricane Katrina relief efforts said President Trump should have done more to prepare for Hurricane Maria.
"It’s kind of like Katrina: We got it. We got it. Oh, shit, send in the cavalry," Russel Honore told Bloomberg in an interview.
Honore said the White House should have prepared before Maria hit by sending in more personnel and equipment. He was also critical of Trump’s response to Hurricane Harvey earlier this year.
President George W. Bush faced widespread criticism for his response to Hurricane Katrina.
The Trump administration announced Wednesday that more Pentagon resources would be sent to the island to assist with aid and restoration for the U.S. territory's 3.5 million residents, nearly half of whom are without clean drinking water.
Trump said earlier this week that delays in sending supplies to Puerto Rico have been due to the difficulty in getting supplies to the island. The White House announced Thursday that it would temporarily lift shipping restrictions to help deliver aid. |
ABOVE: John Tory delivers “State of the City” address
TORONTO – John Tory says the city’s transit system is “reeling” after recent cuts and its housing infrastructure is in “crisis.”
But city taxpayers can’t pay for it alone. He wants the provincial and federal governments to step up and said he will open a dialogue with them that has recently been “basically non-existent.”
The mayor-elect made the comments during an afternoon press conference Thursday, a month to the day after he was elected mayor of Toronto. Since the election, Tory has been meeting with his transition team to get updates on a number of files including transit, housing, poverty and gridlock.
In short, the city’s transit infrastructure badly needs more funding, Tory said.
“Those issues, the issue of capital and operating funding, has not been the subject of any kind of consistent dialogue between the city and other levels of the government,” he said.
He pointed to council’s decision to freeze the city subsidy to the TTC in 2011 which led to 41 bus routes being cut.
“The system, the TTC system is still reeling from operating cuts imposed by council in 2011 and again in 2012 which resulted in 41 bus routes across the city being reduced,” he said.
He said he’s asked TTC CEO Andy Byford to look into restoring service on some of the affected routes but noted some of the buses have since been retired.
“It is something that we didn’t want to do at the time, and it is something we are keen to bring back,” Byford said. “We clearly have a mayor coming into office who believes in public transit.”
Byford said he hopes to restore crowding standards that were also cut in 2010.
Tory cautioned that all of his promises are contingent on being able to pay for them. But he added that if any division should have its funding restored, it should be the TTC.
A number of transit projects are underway: the implementation of new subway trains and streetcars, installation of Wi-Fi at subway stations, signal upgrades which Tory wants to see expedited, the Union Station revitalization, planned light rail transit projects and the Scarborough subway. Also, Tory said “detailed analysis” has started on his SmartTrack transit plan which he campaigned on through the election.
Tory also spoke about several other issues facing the city including finding a new police chief, fixing child poverty and the upcoming Pan Am Games which he said was “crucially important” for the city.
But the city also faces another crisis: housing.
“I don’t think it is an exaggeration to say we are in the midst of a housing crisis in the city of Toronto,” Tory said. “This problem will not be solved by the resources of city taxpayers alone. We need other levels of government to provide greater and broader assistance.”
Tory noted the city is facing a growing backlog of needed repairs at Toronto Community Housing which could reach $2.6 billion in the next decade. According to the TCHC website, the organization faces a $1.73 billion funding shortfall.
The number of uninhabitable TCHC units has risen from 56 in 2010 to an estimated 4,000 in 2018.
“If we don’t get the assistance from the provincial and federal governments, it will be that bad,” City manager Joe Pennachetti said at the afternoon press conference.
A recent report titled “The Hidden Epidemic: A Report on Child and Family Poverty in Toronto” revealed 29 per cent of Toronto children are part of low-income families.
“That is obviously unacceptable, I think, to every Torontonian and certainly its mayor-to-be,” he said.
Either 3D printing has totally jumped the shark, or it’s hitting the mainstream.
Maybe both.
In any case, Disney has definitely jumped — on the 3D printing bandwagon. You can now turn yourself into a real, live (sort of) Disney action figure: an Imperial Stormtrooper. Inspired, Disney says, by a scene from Star Wars: The Empire Strikes Back, the action figure will be 7.5 inches tall and, of course, have your smiling face.
Fortunately, the figurine’s helmet is off, so your face is visible.
It’s part of the “D-tech weekends” at Disney’s Star Wars celebrations, and you’ll need to be at the company’s Hollywood Studios to get the figurine. You need to book the experience, which involves a 10-minute photo shoot of your face by what Disney says is “the world’s highest-resolution, single-shot 3D face scanner,” created by Disney’s own Imagineers, working with Disney Research Labs.
Photo done, the shot is sent to Disney’s high-res 3D printers, and the completed mini-you, aka Imperial Stormtrooper, will be in the mail and at your home within 7-8 weeks.
For more details, and how to reserve your shoot, check Disney’s site.
Image credit: Disney; Hat tip: C|Net |
package com.zebrunner.reporting.web;
import com.zebrunner.reporting.domain.dto.integration.IntegrationGroupDTO;
import com.zebrunner.reporting.service.integration.IntegrationGroupService;
import com.zebrunner.reporting.web.documented.IntegrationGroupDocumentedController;
import lombok.RequiredArgsConstructor;
import org.dozer.Mapper;
import org.springframework.http.MediaType;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
import java.util.stream.Collectors;
@CrossOrigin
@RequestMapping(path = "api/integration-groups", produces = MediaType.APPLICATION_JSON_VALUE)
@RestController
@RequiredArgsConstructor
public class IntegrationGroupController extends AbstractController implements IntegrationGroupDocumentedController {
private final IntegrationGroupService integrationGroupService;
private final Mapper mapper;
@PreAuthorize("hasPermission('VIEW_INTEGRATIONS')")
@GetMapping()
@Override
public List<IntegrationGroupDTO> getAll() {
return integrationGroupService.retrieveAll().stream()
.map(integration -> mapper.map(integration, IntegrationGroupDTO.class))
.collect(Collectors.toList());
}
}
|
There are numerous control applications in which a reliable electrical signal representative of the speed of a rotating shaft is essential. For example, in a bus or other passenger vehicle it may be desirable to provide an interlock to prevent opening of the doors whenever the vehicle is moving above a given speed. Similarly, in any vehicle it may be desirable to provide an interlock control to prevent shifting the transmission into reverse gear while the vehicle is moving in a forward direction at even a very low speed. There are also numerous industrial applications which require active or preventive control for one part of a machine whenever some other part of the machine is operating either above or below a particular critical speed.
To meet the needs of these various applications, a signal generator that provides a useful signal amplitude over a broad speed range is highly desirable. This presents substantial problems, particularly at relatively low speeds. The signal generator must also afford a consistent relationship between some parameter of its signal output and the speed of the rotating member that it monitors. Thus, there should be a consistent relation between either the signal amplitude and the speed of the rotating member or between the signal frequency and the speed of the rotating member. Preferably, for maximum flexibility in various different applications, both the amplitude and the frequency of the output signal from the generator should be consistently related to the speed of the rotating shaft or other member being monitored, particularly at low speeds.
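For an idealized permanent-magnet alternator these requirements follow from standard machine-theory relations, stated here only for illustration and not taken from the patent text. With p pole pairs, shaft speed n in rev/min and angular speed \omega = 2\pi n/60:

f = \frac{p\,n}{60} \quad \text{(output frequency, Hz)}, \qquad \hat{E} = k\,\Phi\,\omega \propto n \quad \text{(open-circuit EMF amplitude)},

where \Phi is the flux per pole and k is a construction constant. Both frequency and amplitude therefore scale linearly with speed in the ideal case; the practical difficulties discussed next arise where this idealization breaks down.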
Sub-fractional permanent magnet signal generators have been employed in applications of the kind described above, but present substantial operating difficulties and cost problems. These devices are A.C. generators, and effective utilization of their output signals frequently requires a relatively consistent wave form. However, the wave form of the output signal is often subject to substantial variation due to lack of concentricity in the signal generator components. Furthermore, the same lack of concentricity may produce substantial variations in signal amplitude, at corresponding speeds, from unit to unit. These difficulties can be alleviated by adopting high precision manufacturing techniques and relatively expensive structures, but the resulting cost is frequently prohibitive, particularly for high volume applications in the automotive field.
In many applications, the speed signal generator is most conveniently used by being incorporated in an existing mechanism employed for other purposes. For example, in many automotive applications, where the speed of movement of the vehicle is the critical factor to be monitored, it is most convenient to connect the signal generator to the flexible rotating cable that drives the speedometer. In an application of this kind, the signal generator should provide for rapid and convenient connection to the speedometer cable, either at the transmission end or at the speedometer end, and should allow the speedometer drive cable to extend through the signal generator to maintain its original use.
In other applications, there may be two critical speeds for the same rotating member. Again referring to the automotive field, it may be desirable to provide for locking vehicle doors above one critical speed and to lock out the transmission from a shift into reverse at a substantially lower critical speed. To meet the needs of applications of this kind, the signal generators should be capable of being stacked in series with each other. |
package com.readme.app.view.activity;
import android.os.Bundle;
import android.text.InputType;
import android.text.TextUtils;
import android.text.method.PasswordTransformationMethod;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.EditText;
import android.widget.ImageButton;
import android.widget.Spinner;
import com.readme.app.R;
import com.readme.app.model.entity.User;
import com.readme.app.model.database.AppDatabase;
import com.readme.app.model.database.dao.UserDao;
import com.readme.app.model.util.Message;
import com.readme.app.model.util.Validator;
import androidx.appcompat.app.ActionBar;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.widget.Toolbar;
public class UserEditActivity extends AppCompatActivity {
private EditText nameEditText;
private EditText emailEditText;
private EditText passwordEditText;
private SessionManager sessionManager;
private UserDao userDao;
private User userToEdit;
private boolean editing;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_user_edit);
userDao = AppDatabase.getInstance(this).getUserDao();
sessionManager = SessionManager.getInstance(this);
Toolbar toolbar = findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
ActionBar actionBar = getSupportActionBar();
actionBar.setDisplayHomeAsUpEnabled(true);
nameEditText = findViewById(R.id.user_edit_name);
emailEditText = findViewById(R.id.user_edit_email);
passwordEditText = findViewById(R.id.user_edit_password);
// Receiving data from parent activity
Integer userIdToEdit = getIntent().getIntExtra(getString(R.string.intent_extra_user_id), -1);
String emailAdressFromIntent = getIntent().getStringExtra("user_email");
String passwordFromIntent = getIntent().getStringExtra("user_password");
editing = userIdToEdit != -1;
// Editing
if (editing) {
userToEdit = userDao.getById(userIdToEdit);
}
// Adding
else {
userToEdit = new User();
userToEdit.setEmail(emailAdressFromIntent);
userToEdit.setPassword(passwordFromIntent);
actionBar.setTitle(R.string.title_activity_user_add);
}
nameEditText.setText(userToEdit.getName());
passwordEditText.setText(userToEdit.getPassword());
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_edit_common, menu);
// Registering
if (!editing) {
MenuItem actionDelete = menu.findItem(R.id.action_delete);
actionDelete.setEnabled(false);
actionDelete.setVisible(false);
}
return super.onCreateOptionsMenu(menu);
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
switch (item.getItemId()) {
case R.id.action_save:
save();
return true;
case R.id.action_delete:
delete();
return true;
default:
return super.onOptionsItemSelected(item);
}
}
private void save(){
String name = nameEditText.getText().toString();
        String password = passwordEditText.getText().toString();
String email = emailEditText.getText().toString();
boolean cancel = false;
View focusView = null;
if (TextUtils.isEmpty(password)) {
passwordEditText.setError(getString(R.string.error_field_required));
focusView = passwordEditText;
cancel = true;
} else if (!Validator.isPasswordValid(password)) {
passwordEditText.setError(getString(R.string.error_invalid_password));
focusView = passwordEditText;
cancel = true;
}
User emailFound = userDao.getByEmail(email);
        // Compare against the found user's id (always non-null) to avoid an NPE when adding a new user
        if (emailFound != null && !emailFound.getId().equals(userToEdit.getId())) {
emailEditText.setError(getString(R.string.error_email_already_exist));
focusView = emailEditText;
cancel = true;
} else if (email.isEmpty()){
emailEditText.setError(getString(R.string.error_field_required));
focusView = emailEditText;
cancel = true;
} else if (!Validator.isEmailValid(email)) {
emailEditText.setError(getString(R.string.error_email_invalid));
focusView = emailEditText;
cancel = true;
}
if(cancel) {
focusView.requestFocus();
} else {
EditText confirmPasswordEditText = new EditText(this);
confirmPasswordEditText.setTransformationMethod(PasswordTransformationMethod.getInstance());
if(userToEdit.passwordMatch(password) || !editing) {
                // Password unchanged (or new registration): no confirmation needed
save(name,email,password);
} else {
// User changed password, must confirm
Message.showConfirmation(this, getString(R.string.dialog_confirm_password_title), getString(R.string.dialog_confirm_password_description),
(dialogInterface, i) -> {
if(password.equals(confirmPasswordEditText.getText().toString())){
save(name,email,password);
} else {
Message.show(this, "Confirmation password don't match");
}
}, confirmPasswordEditText).show();
}
}
}
private void save (String name, String emailAdress, String password) {
userToEdit.setName(name);
userToEdit.setEmail(emailAdress);
userToEdit.setPassword(password);
userDao.save(userToEdit);
// Editing
if (editing) {
Message.show(this, getString(R.string.message_user_updated));
}
// Adding
else {
Message.show(this, getString(R.string.message_user_registered));
}
finish();
}
private void delete () {
Message.showConfirmation(this,
getString(R.string.confirm_remove_user_title),
getString(R.string.confirm_remove_user_description),
(dialogInterface, i) -> {
userDao.delete(userToEdit);
Message.show(this, getString(R.string.message_user_deleted));
sessionManager.logout(this);
}, null).show();
}
}
|
Imagine the Fuel Cell that Keeps Running After the Hydrogen Runs Out
July 2nd, 2012 by Joshua S Hill
“This thin-film SOFC takes advantage of recent advances in low-temperature operation to incorporate a new and more versatile material,” explains principal investigator Shriram Ramanathan, Associate Professor of Materials Science at the Harvard School of Engineering and Applied Sciences (SEAS). “Vanadium oxide (VOx) at the anode behaves as a multifunctional material, allowing the fuel cell to both generate and store energy.”
This new discovery will be most important in situations where a compact and lightweight power supply is essential but where the fuel supply may be interrupted.
“Unmanned aerial vehicles, for instance, would really benefit from this,” says lead author Quentin Van Overmeere, a postdoctoral fellow at SEAS. “When it’s impossible to refuel in the field, an extra boost of stored energy could extend the device’s lifespan significantly.”
There isn’t much use my attempting to rewrite the wonderful explanation available over at the Harvard School of Engineering and Applied Sciences website. So head on over there now for more. |
Machine learning based detection of digital documents maliciously recaptured from displays We used to say seeing is believing": this is no longer true. The digitization is changing all aspects of life and business. One of the more noticeable impacts is in how business documents are being authored, exchanged and processed. Many documents such as passports and IDs are being at first created in paper form but are immediately scanned, digitized, and further processed in electronic form. Widely available photo editing software makes image manipulation quite literally a child's play increasing the number of forged contents tremendously. With the growing concerns over authenticity and integrity of scanned and image-based documents such as passports and IDs, it is more than urgent to be able to quickly validate scanned and photographic documents. The same machine learning that is behind some of the most successful content manipulation solutions can also be used as a counter measure to detect them. In this paper, we describe an efficient recaptured digital document detection based on machine learning. The core of the system is composed of a binary classification approach based on support vector machine (SVM), properly trained with authentic and recaptured digital passports. The detector informs when it encounters a digital document that is the result of photographic capture of another digital document displayed on an LCD monitor. To assess the proposed detector, a specific dataset of authentic and recaptured passports with a number of different cameras was created. Several experiments were set up to assess the overall performance of the detector as well as its efficacy for special situations, such as when the machine learning engine is trained on a specific type of camera or when it encounters a new type of camera for which it was not trained. Results show that the performance of the detector remains above 90 percent accuracy for the large majority of cases. |
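The abstract above describes the detection core only at a high level: a binary SVM trained on features from authentic versus recaptured documents and judged by accuracy. A minimal sketch of that classification stage, assuming scikit-learn and using synthetic feature vectors as stand-ins for whatever image features the authors actually extract:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-document feature vectors (e.g. texture or
# moire statistics); the paper's actual features are not specified here.
n_per_class, n_features = 500, 32
authentic = rng.normal(0.0, 1.0, size=(n_per_class, n_features))
recaptured = rng.normal(0.6, 1.2, size=(n_per_class, n_features))

X = np.vstack([authentic, recaptured])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 1 = recaptured

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RBF-kernel SVM with feature scaling; the kernel and C value are
# illustrative defaults, not the paper's settings.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy: {:.3f}".format(clf.score(X_test, y_test))) |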
On July 2, the International Maritime Organisation (IMO) – the United Nations specialised agency with responsibility for the safety and security of shipping and the prevention of marine and atmospheric pollution by ships and whose work supports the UN Sustainable Development Goals (SDGs) – released a new report evaluating its governance structure and considering whether it will help or hinder the development of policies, including an effective GHG strategy.
Commenting on the report, Transparency International identifies a number of critical governance flaws at the IMO and says that private influence and poor transparency and accountability policies put the IMO at risk of severely under-delivering on its targets.
"In order for the IMO to meet its ambitious goals to reduce shipping emissions, several things need to change," said Rueben Lifuka, vice chair of Transparency International. "Our biggest recommendation is to transform the IMO's accountability policies, which are currently hindering policy-making and leaving the agency susceptible to private influence. While the IMO’s initial strategy adopted in April is a big step forward for the international shipping sector, more must be done to ensure the agency meets its targets."
In April 2018, the IMO announced an initial strategy to reduce GHG emissions by at least 50 percent by 2050 compared with 2008 levels. The announcement was widely welcomed and expected to trigger some immediate decarbonisation measures. However, a revised, final strategy will not be adopted until 2023 and the next five years will see the IMO's Member States enter politically charged and technically complex negotiations to agree a final GHG deal.
According to Transparency International, the IMO's 2018 GHG strategy will probably need to be revised upwards in light of the findings of the forthcoming special report of the Intergovernmental Panel on Climate Change (IPCC) on the impacts of global warming of 1.5°C, in order to decarbonise the maritime sector in line with well below 2ºC and/or 1.5ºC temperature goals of the Paris Agreement.
An increasing body of research suggests that the shipping sector’s emissions must decline to zero by 2050 at the latest.
In its analysis of the IMO report, Transparency International points the finger at four key issues that could endanger achievement of climate goals: uneven influence of Member States, influence of open and private registries, disproportionate influence of industry, and lack of delegate accountability.
According to Transparency International, a small group of Member States has the power to exert undue influence over the IMO because of structural weaknesses in the organisation’s financing and policy-making processes that tip the scales in favour of states that have the most ships registered under their flags.
Under current rules, two-thirds of the IMO’s financial contributions come from just 10 countries, which provide contributions based on the size of their fleets (measured in deadweight tonnage). Nine of the IMO's top 10 contributors currently occupy elected positions on the Council, which is the organisation’s executive body.
The provision of funding does not necessarily equate to a seat on the Council or to influence within it. Yet, says the NGO, the Council – which publishes no substantive information about its regular activities or elections – lacks mechanisms to provide public assurance that the states that fund the IMO are not simply buying influence.
The same states that finance the IMO also have an advantage in the policy-making process. IMO policies do not become active until they have been ratified by Member States that collectively regulate a specified percentage of the world’s shipping fleet (also measured in deadweight tonnage).
The states with greater tonnages not only contribute more funding to the IMO, but also have a greater say, in proportion to their tonnage, on whether and when a policy comes into effect.
In practice, argues Transparency International, the risks of undue influence are exacerbated because tonnage is concentrated in the handful of states that operate open registries.
Also known as flags of convenience and international registries, open registries allow ship-owners of any nationality to register under their flag. They are controversial because they offer ship-owners extremely favourable regulatory environments that commonly include effective anonymity, a zero corporate tax rate and minimal implementation and enforcement of environmental and social regulations – all in exchange for the registration fees from ship-owners.
More than 50 percent of the world’s fleet sails under the flags of just five open registries: those of Panama, Liberia, the Marshall Islands, Malta and the Bahamas. These states, by virtue of their tonnage, can exercise influence over the IMO through the funding and ratification mechanisms – yet concerns remain about their commitment to regulation and enforcement.
For example, three of these five open registry states (Panama, the Marshall Islands and Bahamas) were recently classified as non-cooperative tax havens by the European Union.
There are an estimated 35 open registry states and, while their approach to regulation is not uniform, the Berlin-based NGO says that serious questions could be asked regarding their interest in formulating and implementing ambitious decarbonisation measures.
At least 17 open registries have outsourced the management of their registries to private companies, which suggests around 10 percent of delegates to the IMO may actually be drawn from the private sector. By allowing private companies to debate and vote on issues of transnational public interest, the IMO is said to undermine a basic premise of the UN system of international governance.
As individual companies and as a sector collectively, Transparency International notes that the shipping industry has a pervasive influence over the policy-making process and can access and submit documents and observe and speak at meetings at every level of IMO decision-making.
These privileges are available to other interest groups but attendance records of recent meetings of the IMO’s five committees show that industry representatives outnumbered civil society organisation (CSO) representatives by almost five to one (312 to 64) and labour organisation representatives by more than three to one (312 to 101).
Private interests also have other ways of exerting influence. There are no rules governing the appointment of national delegations and states appoint companies and representatives of ship-owners directly to their national delegations.
For example, a recent delegation from Brazil to an environmental meeting contained five advisors who were employees of the logistics multinational company Vale SA, which has substantial shipping interests.
Transparency International points out that an effective GHG strategy would require long-term investment in clean technologies, but should companies and trade associations want to resist these measures, they are well placed throughout the IMO to delay or dilute polices that promote such investment.
Finally, across the IMO, Member State delegates are shielded from public scrutiny. IMO reports of meetings do not reflect the positions taken by individual representatives, while journalists are forbidden from naming speakers at meetings without gaining their consent. The result is that the public do not know which delegates are arguing for which policies.
Delegates are also unaccountable to the IMO itself, says Transparency International. The organisation has no code of conduct to regulate how delegates are appointed or place restrictions on secondary employment, conflicts of interest, gifts and hospitality. Meanwhile, the organisation’s whistleblowing policy and complaints mechanism only apply to staff in the Secretariat, and the IMO’s oversight body has no jurisdiction to investigate the activities of delegates.
Nevertheless, concludes Transparency International, the IMO does perform more positively in some areas and, in particular, the transparency around the organisation’s governance framework is relatively high. |
// range/main.go
package main
import (
"fmt"
)
func main() {
drinks := []string{
"coke",
"pepsi",
"mountain dew",
"beer",
"wine",
"rum",
}
	// ranging with only the index; drinks[i] is the element
for i := range drinks {
fmt.Println("Would you like some", drinks[i])
}
	// the blank identifier discards the index; d is the element value
for _, d := range drinks {
fmt.Println(d)
}
menu := map[string]float32{
"coke": 3.44,
"pepsi": 4.52,
"mountain dew": 3.65,
"beer": 6.43,
"wine": 2.33,
"rum": 2.22,
}
	// note: iteration order over a map is not deterministic in Go
	for name, cost := range menu {
fmt.Println(name, "is", cost)
}
}
|
Magnetic state of the alpha 3 center of cytochrome c oxidase and some of its derivatives. The temperature dependence of the magnetic susceptibility was used to investigate the nature of the coupling between cytochrome alpha 3 and CuB in resting and oxidized cyanide- and formate-bound cytochrome oxidase. Resting and formate-bound enzymes were found to have strong antiferromagnetic coupling with an S = 5/2 cytochrome alpha 3, results that were independent of the dispersing detergent and the enzyme isolation method. The cyanide-bound enzyme was heterogeneous, with a minor fraction showing intermediate strength antiferromagnetic coupling. The magnitude of this coupling was independent of the enzyme isolation method and depended moderately on the identity of the dispersing detergent. The major fraction of the cyanide-bound enzyme had a lowest energy state of Ms = 0. The coupling constant for this fraction did not depend on the isolation technique or on the identity of the dispersing detergent. The use of glucose-glucose oxidase to deoxygenate samples influenced the susceptibility behavior of some preparations of both the resting and formate-bound enzymes, with results indicating an S = 3/2 cytochrome alpha 3 in the resting enzyme samples. Retention of a 417-nm Soret band for formate-bound enzyme concomitant with peroxide-induced changes in susceptibility behavior indicates different sites of enzyme interactions for the formate ion and hydrogen peroxide. |
Durban - A question in the matric dramatic art exam this week, where pupils were asked to describe how they would stage the rape of a baby using a loaf of bread and a broomstick as props, has horrified teachers, parents and pupils. The Department of Basic Education, however, stood by its decision to include the question, with its spokesman, Elijah Mhlanga, saying that a subject, like the rape of a 9-month-old baby, was “not new” to a Grade 12 pupil.
“By the time pupils are in matric, they have begun to be faced with the realities of adulthood, often beyond the security of their homes and the school system,” he said on Tuesday.
“They will, through media and cinema, have been exposed to many horrific images and reports,” said Mhlanga.
Drama allowed pupils to confront real matters “through the safety of story”, he added.
The question was in a compulsory section of the dramatic arts exam written by government schools on Monday, which also marked the start of the 16 Days of Activism for No Violence Against Women and Children campaign.
Pupils were given an unseen extract from South African playwright Lara Foot’s play Tshepang, which was inspired by the rape of a 9-month-old baby, known as “Baby Tshepang”, by her mother’s boyfriend, in Upington in the Northern Cape in 2001.
The extract the pupils were given included stage directions that read: “He (a character named Simon) acts out the rape, using the broomstick and the loaf of bread.”
The pupils were asked to describe how they would get the actor portraying Simon to perform this, to “maximise the horror of the rape”.
Not long after the exam finished, @kath_joubert tweeted: “Drama exam asking us how we’d rape a loaf of bread on stage, uhh #comeagain?”
On Tuesday night, Foot told The Mercury she found the question “problematic”. As it was presented, it indicated a misunderstanding on the part of the person who set it.
“I would really like to know what the proposed correct answer is.”
The question had missed her “stylistic choices”, Foot said.
“I have played Tshepang to 12 000 kids with very successful and rewarding question-and-answer sessions afterwards,” she said.
Asked to comment on Tuesday, drama teachers, school organisations and gender rights’ activists agreed the question was too “graphic” to ask high school pupils.
They were concerned that pupils who had experienced sexual abuse could have been further traumatised by having to answer the question.
A teacher at a government-run all-girls high school in Durban, who asked to remain anonymous, said it was “revolting”.
She knew of pupils at her school who had been sexually abused.
“How dare they (the department) make them (the pupils) confront something so horrific and graphic in an exam situation,” she said.
Some upmarket schools took issue with the question and indicated they would be addressing the Department of Basic Education on their “concerns” regarding the question.
The national head of the Governing Body Foundation, Tim Gordon, said the exam had been written by a diverse group of pupils and the question was “insensitive”.
He did not know how it had been allowed in the paper.
It could be argued that it was an acceptable question to be asked of university students, he said.
“They (pupils) are young adults, but some matric pupils are only 17,” he said.
The deputy chief executive of Gender Links, Kubi Rama, said she had read the question “with horror”.
She called for teachers to be trained to understand the causes and effects of gender violence and to understand gender.
“The insensitivity of the question raises alarm bells about the examiner’s attitudes and beliefs,” she said. “How can we have someone so gender-blind teaching young people?”
[email protected]
The Mercury |
Republican Alfred W. Boyer filed Wednesday to run for one of the five Hagerstown City Council seats up for grabs in the May 20 general election.
Boyer, of 1142 The Terrace, serves on the Hagerstown Planning Commission.
Two other candidates had filed to run for City Council seats as of Wednesday - incumbent William M. Breichner and former Councilman Larry A. Vaughn. Both are Democrats.
The deadline to file for a City Council seat or for the mayor's office is Friday, Jan. 24. |
CXCL8 Up-Regulated LSECtin through AKT Signal and Correlates with the Immune Microenvironment Modulation in Colon Cancer Simple Summary Patients with high expression of CXCL8 are not sensitive to immune checkpoint inhibitors (ICIs) treatment, but the mechanism is unclear. LSECtin is the immune checkpoint ligand of LAG3, and is considered as an important factor of ICIs resistance. This study confirmed the role of CXCL8 and LSECtin in immune microenvironment modulation of colon cancer. The expression of CXCL8 is positively correlated with more than 40 immune checkpoints. CXCL8 could up-regulate LSECtin through AKT signal and promoted the proliferation and invasion ability of colon cancer. These results may be important reasons for the primary drug resistance of ICIs in colon cancer. Abstract Background: The role of CXCL8 and LSECtin in colon cancer liver metastasis and immune checkpoint inhibitors (ICIs) treatment effect were widely recognized. However, the regulatory role of CXCL8 on LSECtin is still unclear. Methods: The expression of CXCL8 or LSECtin was analyzed by TCGA database, and verified by GES110225 and clinical samples. The relationship between the expression of CXCL8 or LSECtin and immune cells infiltration, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway, Gene Ontology (GO) items, stromal score, Estimation of STromal and Immune cells in MAlignant Tumours (ESTIMAT) immune score, tumor mutation burden (TMB), mismatch repair gene and immune checkpoints expression were analyzed by Spearman. The effects of CXCL8 on LSECtin expression, proliferation, and invasion ability were clarified by recombinant CXCL8 or CXCL8 interfering RNA. Results: In colon cancer, the expression of CXCL8 was higher, but LSECtin was lower than that in normal mucosa. The expression of CXCL8 or LSECtin was significantly positively correlated with immune cells infiltration, stromal score, ESTIMATE immune score, TMB, and immune checkpoints expression. The expression of LSECtin was closely related to the cytokine-cytokine receptor interaction pathway and response of chemokine function, such as CXCL8/CXCR1/2 pathway. There was a significant positive correlation between the expression of CXCL8 and LSECtin in colon cancer. CXCL8 up-regulated LSECtin through AKT signal and promoted the proliferation and invasion ability of colon cancer. Conclusions: CXCL8 up-regulated LSECtin by activating AKT signal and correlated with the immune microenvironment modulation in colon cancer. Introduction Immune checkpoints are a group of molecules expressed mainly on immune and tumor cells that can function as inhibitory mediators for immune escape in tumor. Immune checkpoint inhibitors (ICIs) can re-activate anti-tumor immune response and inhibit tumor growth, and have become the focus of tumor therapy. At present, ICIs targeting PD-1/PD-L1 or CTLA4 have been rapidly developed. Treatment with ICIs have improved the survival in a variety of cancer. However, not all patients can benefit from ICIs. Almost 90% of colorectal cancer (CRC) patients show primary resistance to anti-PD-1/PD-L1 or anti-CTLA4. Thus, understanding the mechanisms of ICIs primary resistance, as well as identifying novel and effective biomarkers, is crucial for improving the sensitivity of ICIs. Chemokine ligand 8 (CXCL8), also known as interleukin-8 (IL-8), plays a biological role by binding to CXCR1 and CXCR2. 
CXCL8 is an indispensable important inflammatory response factor and immunosuppressive factor in the tumor microenvironment and has been shown to be up-regulated in a variety of tumor tissues or tumor patient's serum. Our previous studies suggested that CXCL8 could promote liver metastasis by inducing epithelial mesenchymal transformation (EMT), promoting anoikis resistance and other mechanisms. High expression of CXCL8 in cancer cells suggests poor survival prognosis in colorectal cancer. Latest evidence suggested that CXCL8 has the ability to recruit immunosuppressive cells and inhibit anti-tumor immune response. Data from vitro and clinical studies also suggested that the activation of CXCL8/CXCR1/2 signal axis is an important factor in regulating the immune microenvironment of colon cancer and leading to less benefit of ICIs treatment in colon cancer. Therefore, CXCL8 is a key factor of ICIs primary resistance in colon cancer. However, the mechanism is still unclear. Liver sinusoidal endothelial cell lectin (LSECtin), a type II membrane protein encoded by CLEC4G gene, belongs to the C-type lectin receptor family. Available evidence suggested that LSECtin is an immune checkpoint ligand for lymphocyte activation gene 3 (LAG-3). Studies have revealed that the interaction between LSECtin and LAG-3 can inhibit the secretion of IFN- by T cells, and promote melanoma cell proliferation and immune escape. LAG-3 antibody C9B7W could block the binding of LAG-3 to the LSECtin, inhibit the proliferation of effector T cells and the growth of transplanted tumor. It has been determined that blocking LAG3/LSECtin signal can not only restore the cytotoxic activity of T lymphocytes, but also reduce the inhibitory function of regulatory T (Treg) cells. It is noteworthy that LSECtin can significantly enhance the adhesion, migration, and invasion of colon cancer cells. Results from Na et al. showed that the probability of liver metastasis in LSECtin knockout group was significantly lower than that in wildtype group, the level of serum LSECtin in patients with liver metastasis was significantly higher than that in patients without liver metastasis. Therefore, LSECtin is an important immune checkpoint ligand to promote liver metastasis of colon cancer, immune escape is a potential mechanism. In this study, we analyzed the correlation between CXCL8 and LSECtin through the cancer genome atlas (TCGA) and gene expression omnibus (GEO) public data resources and explored the potential role of the CXCL8 or LSECtin in the immune microenvironment modulation of colon cancer. Later, we demonstrated that CXCL8 up-regulated LSECtin expression by activating AKT signal in colon cancer cells. Our results provide a theoretical basis for clarifying the mechanism of primary drug resistance of ICIs, and might provide a new idea for solving the primary drug resistance of ICIs in colon cancer. Data Source The different expression of CXCL8 or LSECtin between cancer tissues and normal mucosa tissues of 20 tumor types were analyzed by TCGA. Combined with Genotype-Tissue Expression (GTEx) database, different expression of CXCL8 or LSECtin in 27 tumor types were also compared. GSE110225 data were download from the GEO (https://www. ncbi.nlm.nih.gov/geo/ (accessed on 12 May 2022) for verification. Immune Correlation Analyst Tumor IMmune Estimation Resource (TIMER) (https://cistrome.shinyapps.io/timer/ (accessed on 12 May 2022)) is an open online tool for systematic analysis of immune cell infiltration in different cancer types. 
The relationship between the expression of CXCL8 or LSECtin with the infiltration of six types of immune cells in colon cancer such as B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages, and dendritic cells were analyzed by TIMER. The correlation between the expression of CXCL8 or LSECtin and stromal score, Estimation of STromal and Immune cells in MAlignant Tumours (ESTIMAT) immune score, tumor mutation burden (TMB), microsatellite instability (MSI), five mismatch repair genes mutation (MLH1, MSH2, MSH6, PMS2, EpCAM), and more than 40 immune checkpoints expression were analyzed by Spearman correlation analysis. KEGG and GO Functional Enrichment Analysis Linkedomics (http://www.linkedomics.org/login.php (accessed on 12 May 2022)) is a public platform for analyzing and comparing different types of cancer multiomics data. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology (GO) terms of CXCL8 and LSECtin were enriched according to the standard of FDR < 0.05 and 1000 simulations through the function module of Linkedomics. Human Colon Cancer Tissues Ten pairs of colon cancer tissues and their corresponding normal mucosa tissues were obtained from Yunnan Cancer Hospital (The Third Affiliated Hospital of Kunming Medical University) and used in accordance with the policies of the institutional review board of the hospital (NO. KYCS2022072). All tissue diagnoses were confirmed histologically. Informed consent was obtained from all subjects involved in the study. RNA Interference CXCL8 knockdown in colon cancer cells was performed by siRNA transfection. According to the scheme of siRNA design company (Genepharma, Suzhou, China), CXCL8 siRNA, non-specific siRNA, and Lipofectamine 3000 (Invitrogen, Waltham, MA, USA) were diluted respectively, mixed gently, and incubated at room temperature for 20 min. siRNA Lipofectamine 2000 mixture was added to colon cancer cells grown in 6-well plates to achieve 80-90% confluence. The transfection reagent was changed after 6 h. Non-specific siRNA was used as normal control. Information about siRNA is described below. Proliferation and Invasion Assays MTT was used to evaluate the proliferation of colon cancer cells. Cells treated with CXCL8 or transfected with CXCL8 siRNA/non-specific siRNA were seeded into 96 well plates (2000 cells/well). MTT (20 L/well) was added to each plate at 0, 24, 48, 72 h, and incubated for 4 h in a cell incubator at 37 C and 5% CO 2, dissolved the crystallize by dimethyl sulfoxide (DMSO), and then detected the absorbance at 450 nm in the microplate reader. Transwell assay was used to evaluate the invasive ability of colon cancer cells after treating with CXCL8 or interference with CXCL8 siRNA. Matrigel (BD Biosciences, Haryana, India) was used to coat the basement membrane of Transwell chamber. Cells were diluted with serum-free medium and seeded into Transwell chamber (5000 cells/chamber). The lower chamber was filled with serum containing medium or CXCL8. The chamber was taken out, stained with crystal violet after 48 h, and 5 points were randomly taken under 200 times microscope for photographing and counting. Western Blot After treated with CXCL8 or transfected with CXCL8 siRNA/non-specific siRNA for 48 h, the cells were lysed by RIPA (Beyotime, Haimen, China) supplemented with protease inhibitor (Beyotime, Haimen, China). Electrophoresis was carried out on sodium dodecyl sulfate polyacrylamide gel according to the standard of 30 ug/lane. 
After electrophoresis, the protein was transferred to polyvinylidene fluoride membrane. The primary antibody was incubated overnight at 4 C and the secondary antibody labeled with horseradish peroxidase was incubated at room temperature for 1 h. Western Blot bands were detected by emitter coupled logic (ECL) solution (Beyotime, Haimen, China). The image was semi quantitatively analyzed by Image J software. RT-qPCR Total RNA was extracted from ten pairs of colon cancer tissues and normal mucosa pretreated with Trizol (servicebio, Wuhan, China). After the concentration and purity of RNA were detected, the RNA was reverse transcribed with cDNA synthesis Kit (servicebio, Wuhan, China). Refer to the instructions of the qPCR Kit (servicebio, Wuhan, China), qPCR was performed under the thermal cycle conditions: 40 cycles, 95 C for 60 s, 95 C for 20 s, 55 C for 20 s and 72 C for 30 s. The gene expression was calculated by 2 −∆∆Ct method. The primers are given in Table 1. Statistical Analysis The data are expressed as mean ± standard deviation. The different expression of CXCL8 and LSECtin in cancer tissues and normal mucosal tissues were compared by paired t-test. The correlation between CXCL8 and LSECtin was analyzed by Spearman correlation analysis. The comparison of proliferation and invasion ability was statistically analyzed by one-way analysis of variance (ANOVA). p < 0.05 means the difference is statistically significant. Higher Expression of CXCL8 Is Related to the Activation of Immune Related Function in Colon Cancer We evaluated the transcriptional level of CXCL8 in 20 tumor types through TCGA database. The results showed that the expression of CXCL8 in colon cancer was significantly higher than that in normal mucosa ( Figure 1A). These results were verified by TCGA and GTEX integrated data and GSE110225 data ( Figure 1B,C). In addition, we detected CXCL8 from 10 pairs of colon cancer tissues and adjacent normal mucosa by RT-qPCR and found the similar results (t = 3.312, p = 0.0091) ( Figure 1D). Immune cell infiltration analysis showed that the expression of CXCL8 was significantly positively correlated with the infiltration of CD8+ T lymphocytes, DC cells, macrophages and neutrophils (Figure 2A). GSEA function enrichment suggested that the immune related KEGG pathways ( Figure 2B) and GO terms ( Figure 2C) were significantly enriched in patients with high CXCL8 expression, such as KEGG pathways of cytokinecytokine receptor interaction, chemokine signaling pathway, intestinal immune network for IgA production and Toll-like receptor signaling pathway, GO terms of macrophage activation and leukocyte migration. Higher Expression of CXCL8 Is Positively Related to ICIs Efficacy Markers in Colon Cancer Next, we analyzed the correlation between CXCL8 and ICIs efficacy markers. The results showed that CXCL8 expression in colon cancer tissues was significantly positively correlated with stromal score (r = 0.416, p < 0.0001), ESTIMATE immune score (r = 0.465, p < 0.001) ( Figure 3A Higher Expression of LSECtin Is Related to the CXCL8/CXCR1/2 Axis Related Function in Colon Cancer Referring to CXCL8 analysis methods, we found that all of the results from TCGA data, TCGA and GTEX integrated data and GSE110225 data indicated that the expression of LSECtin in colon cancer tissues was significantly lower than that in normal mucosa tissues ( Figure 4A-C). These results were also verified by 10 pairs of clinical paired samples (t = 13.30, p < 0.0001) ( Figure 4D). 
Higher Expression of LSECtin Is Related to the CXCL8/CXCR1/2 Axis Related Function in Colon Cancer Referring to CXCL8 analysis methods, we found that all of the results from TCGA data, TCGA and GTEX integrated data and GSE110225 data indicated that the expression of LSECtin in colon cancer tissues was significantly lower than that in normal mucosa tissues ( Figure 4A-C). These results were also verified by 10 pairs of clinical paired samples (t = 13.30, p < 0.0001) ( Figure 4D). Immune cell infiltration analysis showed that the expression of LSECtin was significantly positively correlated with the infiltration of 6 types of immune cells, such as B cells, CD4+ T lymphocytes, CD8+ T lymphocytes, DC cells, macrophages, and neutrophils ( Figure 5A). GSEA function enrichment suggested that the immune-related KEGG pathway Figure 5B) and GO terms ( Figure 5C) were significantly enriched in patients with high expression of LSECtin, such as KEGG pathways of cytokine-cytokine receptor interaction, intestinal immune network for IgA production and allograft rejection, GO terms of response to chemokine, inflammatory response to antigen stimulus and myeloid dendritic cell activation. CXCL8/CXCR1/2 signal axis is the key pathway in the cytokine-cytokine receptor interaction ( Figure 5D) and response to chemokine ( Figure 5E) function. Higher Expression of LSECtin Is Positively Related to ICIs Efficacy Markers in Colon Cancer Similarly, we further analyzed the correlation between LSECtin and ICIs efficacy markers, such as TMB, stromal score, ESTIMAT immune score, MSI and mismatch repair gene mutation. The results showed that LSECtin expression in colon cancer tissues was Higher Expression of LSECtin Is Positively Related to ICIs Efficacy Markers in Colon Cancer Similarly, we further analyzed the correlation between LSECtin and ICIs efficacy markers, such as TMB, stromal score, ESTIMAT immune score, MSI and mismatch repair gene mutation. The results showed that LSECtin expression in colon cancer tissues was significantly positively correlated with stromal score (r = 0.547, p = 3.66 10 −37 ), ESTIMAT immune score (r = 0.608, p = 1.1 10 −42 ) ( Figure 6A The Expression of CXCL8 Is Positively Correlated with LSECtin in Colon Cancer In order to clarify the correlation between CXCL8 and LSECtin in colon cancer, we performed the analysis in the TIMER database and found that there was a significant positive correlation between CXCL8 and LSECtin expression (r = 0.357, p = 3.24 10 −15 ) ( Figure 7A). This result was also verified in the clinical paired sample test (r = 0.9578, p = 3.24 10 −15 ) ( Figure 7B). CXCL8 Regulates the Expression of LSECtin through AKT Signal in Colon Cancer To clarify the expression regulation relationship between CXCL8 and LSECtin, we designed three CXCL8 interfering RNAs. We found that CXCL8 RNAi-2 had the best inhibitory effect on CXCL8 expression ( Figure 7C). Therefore, we chose CXCL8 RNAi-2 to complete the relevant experiments. We performed cell proliferation and invasion experiments by recombinant human CXCL8 (100 ng/mL) and CXCL8 RNAi-2. Recombinant human CXCL8 can significantly promote the proliferation ( Figure 7D) and invasion of colon cancer cells ( Figure 7F,G). CXCL8 RNAi-2 can significantly inhibit the proliferation ( Figure 7E) and invasion of colon cancer cells ( Figure 7H,I). 
Through Western blot detection, we found that recombinant human CXCL8 significantly activated AKT signaling, accompanied by up-regulation of LSECtin (Figure 8A-C), whereas CXCL8 RNAi-2 significantly inhibited AKT signaling, accompanied by down-regulation of LSECtin (Figure 8D-F). After AKT signaling was blocked with MK2206, the ability of CXCL8 to promote the invasion of colon cancer cells was inhibited (Figure 8A-C), and the regulatory effect of CXCL8 on LSECtin also disappeared (Figure 8D-F).

[Partial Figure 7 caption: (A) Correlation between CXCL8 and LSECtin expression (TIMER). (B) Correlation between CXCL8 and LSECtin expression (clinical samples). (C) Screening of CXCL8 interfering RNAs. (D,E) MTT assays of the effect of CXCL8 (100 ng/mL) and CXCL8 siRNA on the proliferation of colon cancer cells. (F-I) Transwell assays of the effect of CXCL8 (100 ng/mL) and CXCL8 siRNA on the invasion ability of colon cancer cells.]

Discussion
Emerging clinical evidence suggests that the CXCL8/CXCR1/2 signal axis plays a key role in immune escape and that colon cancers with high CXCL8 expression seldom benefit from ICIs therapy, but the underlying mechanism is still unclear. Up-regulation of alternate immune checkpoints is one of the important mechanisms of primary resistance to ICIs. LSECtin, a ligand of the immune checkpoint LAG3, has been shown to be an important factor promoting liver metastasis of colon cancer. Our results showed that the expression of LSECtin was closely related to the activation of the CXCL8/CXCR1/2 signal axis in colon cancer. Moreover, our previous studies and existing evidence suggest that CXCL8/CXCR1/2 is an important signal promoting liver metastasis of colon cancer. Together, this evidence suggests that CXCL8 may be an important factor regulating the expression of LSECtin. To test this hypothesis, we carried out a preliminary exploration and found that CXCL8 can up-regulate LSECtin expression through AKT signaling. Activation of the CXCL8/AKT/LSECtin axis is positively related to regulation of the immune microenvironment and might be an important mechanism of primary resistance to ICIs in colon cancer.
First, we analyzed the differential expression of CXCL8 and LSECtin in colon cancer and the main biological functions involved. The expression of CXCL8 in cancer tissues was significantly higher than that in normal mucosal tissues, which is consistent with our previous results. GSEA functional enrichment analysis suggested that high expression of CXCL8 is mainly involved in immune-related KEGG pathways and GO terms, such as the KEGG pathways cytokine-cytokine receptor interaction, chemokine signaling pathway, intestinal immune network for IgA production and Toll-like receptor signaling pathway, and the GO terms macrophage activation and leukocyte migration. The expression of LSECtin in cancer tissues is significantly lower than that in normal mucosal tissues. Its high expression is also mainly involved in immune-related KEGG pathways and GO terms, such as the KEGG pathways cytokine-cytokine receptor interaction, intestinal immune network for IgA production and allograft rejection, and the GO terms response to chemokine, inflammatory response to antigen stimulus and myeloid dendritic cell activation. Previous studies have shown that cytokine-cytokine receptor interaction and chemokine signaling are important mechanisms in tumor immune microenvironment regulation, immune escape and immunotherapy tolerance, and CXCL8/CXCR1/2 is one of the key signals in these two pathways. Studies have shown that CXCL8 is closely related to immune cell infiltration in colon cancer: CXCL8 is a key chemokine of the innate immune system that recruits immunosuppressive cells such as neutrophils, myeloid-derived suppressor cells (MDSC) and monocytes. Our results showed that CXCL8 expression is positively correlated with LSECtin expression in colon cancer. Therefore, CXCL8/CXCR1/2 signaling may regulate the expression of LSECtin and play an important role in regulating the immune microenvironment of colon cancer.

To clarify the role of CXCL8 and LSECtin in the tumor immune microenvironment and in ICIs therapy, we carried out further analyses of immune cell infiltration and ICIs markers. The results suggest that the expression of CXCL8 or LSECtin in colon cancer is significantly positively correlated with the infiltration of CD8+ T lymphocytes, DC cells, macrophages and neutrophils, which is a key factor affecting the therapeutic efficacy of ICIs. Our results are consistent with those reported by Yang et al. CXCL8 can recruit neutrophils and MDSC around cancer cells to form neutrophil extracellular traps (NETs) and protect the tumor cells from cytotoxicity mediated by CD8+ T cells and natural killer (NK) cells, which in turn affects the efficacy of ICIs therapy. Zhang et al. suggested that TNF-α can inhibit the expression of CXCL8, suppress the recruitment of CXCR2+ CD68+ macrophages, and increase the sensitivity of pancreatic cancer to anti-PD-1 therapy. Five common markers used to predict immune checkpoint inhibitor efficacy (TMB, stromal score, ESTIMATE immune score, MSI and mismatch repair gene mutation) were analyzed, and the results showed that the expression of CXCL8 or LSECtin is positively correlated with these markers. Therefore, patients with high expression of CXCL8 or LSECtin might be expected to benefit from ICIs. However, the existing clinical data suggest that high serum CXCL8 indicates that patients do not benefit from anti-PD-1/PD-L1 or anti-CTLA4 treatment.
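A minimal sketch of the kind of expression-based grouping behind these observations is shown below: tumors are split into high and low CXCL8 groups at the median, and an immune-related score and a checkpoint gene are compared against CXCL8 expression. The expression table, column names and scores are hypothetical placeholders standing in for the TCGA-derived data used in the study.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-sample table: CXCL8 expression, an ESTIMATE-like immune score
# and one checkpoint gene (all values are simulated).
df = pd.DataFrame({"CXCL8": rng.lognormal(mean=2.0, sigma=0.8, size=n)})
df["immune_score"] = 0.6 * np.log2(df["CXCL8"]) + rng.normal(0.0, 0.5, size=n)
df["LAG3"] = 0.4 * np.log2(df["CXCL8"]) + rng.normal(0.0, 0.7, size=n)

# Spearman correlations with CXCL8 expression
for col in ["immune_score", "LAG3"]:
    rho, p = stats.spearmanr(df["CXCL8"], df[col])
    print(f"CXCL8 vs {col}: rho = {rho:.2f}, p = {p:.2e}")

# Median split into high/low CXCL8 groups and a rank-based group comparison
df["group"] = np.where(df["CXCL8"] >= df["CXCL8"].median(), "high", "low")
high = df.loc[df["group"] == "high", "immune_score"]
low = df.loc[df["group"] == "low", "immune_score"]
u_stat, p_u = stats.mannwhitneyu(high, low, alternative="two-sided")
print(f"immune_score, high vs low CXCL8: U = {u_stat:.0f}, p = {p_u:.2e}")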
Regulatory immune cell infiltration, disruption of the antigen presentation machinery, an immunosuppressive microenvironment and cancer stem cells are the main mechanisms of primary resistance to ICIs, and up-regulation of other immune checkpoints is another important mechanism. We analyzed the relationship between the expression of CXCL8 or LSECtin and more than 40 immune checkpoints. The results suggest that tumors with high expression of CXCL8 or LSECtin often also show high expression of immune checkpoints such as CTLA4, CD86, CD274, CD276, TIGIT, VSIR, LAG3, HAVCR2 and PDCD1. This evidence indirectly explains why patients with high CXCL8 expression do not benefit from anti-PD-1/PD-L1 or anti-CTLA4 treatment: the increased expression of other immune checkpoints may be a main reason for the weakened sensitivity to anti-PD-1/PD-L1 or anti-CTLA4 treatment in patients with CXCL8 overexpression.

Given the close correlation between CXCL8 and LSECtin expression described above, we added recombinant human CXCL8 to, or knocked down CXCL8 in, SW480 and SW620 cells and then measured proliferation, invasion and LSECtin expression. Recombinant human CXCL8 up-regulated LSECtin and enhanced the proliferation and invasion of SW480 and SW620 cells by activating the AKT signaling pathway. After AKT signaling was inhibited with MK2206, LSECtin expression and the proliferation and invasion of SW480 and SW620 cells were significantly suppressed. After interference with CXCL8 expression in SW480 and SW620 cells, we observed suppression of AKT signaling, down-regulation of LSECtin, and inhibition of cell proliferation and invasion. Therefore, CXCL8 can promote the expression of LSECtin through AKT activation. These results partly explain why patients with high CXCL8 expression do not benefit from anti-PD-1/PD-L1 or anti-CTLA4 treatment.

This study evaluated the role of CXCL8 and LSECtin in the immune microenvironment and in ICIs efficacy prediction in colon cancer, and clarified the mechanism by which CXCL8 regulates LSECtin through AKT activation. This mechanism partly explains the primary resistance to anti-PD-1/PD-L1 or anti-CTLA4 in colon cancer patients with high expression of CXCL8. Our research combines bioinformatics with basic experimental verification. However, whether blocking CXCL8/CXCR1/2 signaling or LAG3/LSECtin can increase the efficacy of anti-PD-1/PD-L1 or anti-CTLA4 in patients with colon cancer still lacks effective experimental verification. It also remains unknown whether CXCL8 regulates the expression of other immune checkpoints; this question is worth exploring.

Conclusions
In conclusion, this study is the first to clarify the regulatory effect of CXCL8 on LSECtin, which is an important mechanism by which CXCL8 regulates the immune microenvironment of colon cancer. The results provide a theoretical basis for clarifying the mechanism of primary resistance to anti-PD-1/PD-L1 or anti-CTLA4 in colon cancer and suggest a new approach to overcoming it.
Institutional Review Board Statement: Colon cancer tissues and their corresponding normal intestinal mucosa tissues were obtained from Yunnan Tumor Hospital (the Third Affiliated Hospital of Kunming Medical University) and were used in accordance with the policies of the hospital's institutional review board (No. KYCS2022072).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the patient(s) to publish this paper.

Data Availability Statement: The data presented in this study are available in this article.
OBJECTIVE To compare the process of wound healing at the skin donor site in an occlusive moist environment and in a dry environment. METHODS Wound healing at adult skin donor sites was studied by clinical observation and by histological and electron microscopic examination on the operative day and on the 1st, 2nd, 3rd, 4th and 7th days postoperatively; each skin donor site was divided into two parts, one kept in an occlusive moist environment and the other in a dry environment. RESULTS The wounds in the occlusive moist environment healed faster than those in the dry environment; the fibroblasts were more active and were activated earlier, and revascularization and re-epithelialization occurred earlier and proceeded more quickly. CONCLUSION In an occlusive environment, more active fibroblasts accelerate granulation growth, faster regenerating capillaries bring more nourishment, and quicker re-epithelialization accelerates wound healing.
import java.io.IOException;

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// EMFUtil, Clients and CustomUtilities are application classes assumed to be
// in the same package (or imported from it).

/**
 * Servlet implementation class ClientAccountServlet.
 * Updates the logged-in client's name and password.
 */
@WebServlet("/ClientAccountServlet")
public class ClientAccountServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private static final Logger logger = LoggerFactory.getLogger(ClientAccountServlet.class);

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        doPost(request, response);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Creates an EntityManager to query the database
        EntityManager em = EMFUtil.getEMFactory().createEntityManager();
        // A transaction is used to roll back changes in case of error and
        // ensure data integrity
        EntityTransaction trans = em.getTransaction();
        // Creates a session object and retrieves the logged-in client entity
        HttpSession session = request.getSession();
        int clientID = (int) session.getAttribute("clientID");
        Clients user = em.find(Clients.class, clientID);
        String password = request.getParameter("password");
        // Hashes the new password and updates the entity from the input parameters
        CustomUtilities custom = new CustomUtilities();
        String[] passwordArray = custom.hashPassword(password, null);
        user.setFirstName(request.getParameter("firstName"));
        user.setLastName(request.getParameter("lastName"));
        user.setPassword(passwordArray[0]);
        user.setSalt(passwordArray[1]);
        try {
            trans.begin();
            em.merge(user);
            trans.commit();
            response.sendRedirect("landing.jsp");
        } catch (IllegalStateException e) {
            logger.error("Client account update failed", e);
            if (trans.isActive()) {
                trans.rollback();
            }
        } finally {
            em.close();
        }
    }
}
DNA methylation is a common event in malignancies and is implicated in tumor initiation and progression. A battery of genes involved in critical cellular processes, such as DNA damage repair, is hypermethylated in prostate cancer. DNA methylation has also been found in premalignant lesions such as prostatic intraepithelial neoplasia, but to a lesser extent than in prostate cancer. Intensive study of DNA methylation would provide new opportunities for the early diagnosis, prognosis and treatment of prostate cancer.
// src/error.rs
use thiserror::Error;
#[derive(Error, Debug)]
pub enum RequestError {
#[error("payload too large, limit: {limit:?}")]
PayloadTooLarge { limit: usize },
}
|
Comparison of the psychometric properties of health-related quality of life measures used in adults with systemic lupus erythematosus: a review of the literature. OBJECTIVE A review of the literature was undertaken to evaluate the development and psychometric properties of health-related quality of life (HRQoL) measures used in adults with SLE. This information will help clinicians make an informed choice about the measures most appropriate for research and clinical practice. METHODS Using the key words lupus and quality of life, full original papers in English were identified from six databases: OVID MEDLINE, EMBASE, Allied and Complementary Medicine, PsycINFO, Web of Science, and Health and Psychosocial Instruments. Only studies describing the validation of HRQoL measures in adult SLE patients were retrieved. RESULTS Thirteen papers were relevant; five evaluated generic instruments and eight evaluated disease-specific measures. For the generic measures, there is moderate evidence of good content validity and internal consistency, whereas there is strong evidence for both these psychometric properties in the disease-specific measures. There is limited to moderate evidence to support the construct validity and test-retest reliability of the disease-specific measures. Responsiveness and floor/ceiling effects have not been adequately investigated in any of the measures. CONCLUSIONS Direct comparison of the psychometric properties was difficult because of the different methodologies employed in the development and evaluation of the different HRQoL measures. However, there is supportive evidence that multidimensional disease-specific measures are the most suitable, in terms of content and internal reliability, for use in studies of adult patients with SLE.
Swedish police said a Molotov cocktail was thrown at a mosque in Uppsala, on January 1, 2015 (AFP Photo/Pontus Lundahl)
Stockholm (AFP) - Swedish police launched a manhunt Thursday after the third arson attack against a mosque in a week, amid growing tensions over the rise of a far-right anti-immigration movement.
"People saw a man throwing something burning at the building," police in Uppsala said in a statement, adding that the mosque in eastern Sweden did not catch fire and that the suspect had left behind "a text on the door expressing contempt for religion."
A police spokesman told Swedish news agency TT that the burning object was a Molotov cocktail and that no one was in the building at the time.
Sweden's Islamic Association posted a photograph online of the main door of the mosque, which was emblazoned with the slogan "Go home Muslim shit".
The police were alerted by passers-by, who reportedly witnessed the attack at around 0430 GMT.
"The crime has been classed as attempted arson, vandalism and incitement to hatred," the police said, appealing for witnesses to come forward.
Thursday's attack in Sweden's fourth-largest city came just three days after a late-night blaze at a mosque in Esloev in the south, which police suspect was also arson.
On Christmas Day, five people were injured when a petrol bomb was thrown through the window of a mosque in Eskilstuna, east of the capital Stockholm.
Sweden's leftist Prime Minister Stefan Loefven led condemnation of the latest attack.
"The most important thing now is that everyone distances themselves from this," he told TT.
"In Sweden no one should have to be afraid when they practice their religion," he added, saying the government would increase funding for securing places of worship.
- 'People are afraid' -
According to the anti-racism magazine Expo, there have been at least a dozen confirmed attacks on mosques in Sweden in the last year and a far larger number are believed to have gone unreported.
"People are afraid, they fear for their safety," Mohammad Kharraki a spokesman for Sweden's Islamic Association told AFP.
"We've seen through history that people use violence as a way of polarising society against minorities."
The attacks come as debate intensifies in Sweden over immigration and the integration of asylum seekers in the traditionally tolerant Nordic country, which is expected to receive more than 100,000 asylum applications this year, breaking all previous records.
Last month the far-right Sweden Democrats -- which doubled their support to 13 percent in September elections -- came close to bringing down the left-green government over its liberal refugee policies. The party's support in opinion polls has risen to around 16 percent.
However, in a last-minute agreement on December 27, the government and centre-right opposition parties cut a deal effectively denying the Sweden Democrats influence over major policy -- including over immigration.
Kharraki said the arson attacks could be carried out by "Sweden Democrats people who are angry because they've been pushed aside."
"They think Muslims are the problem," he said, while "mainstream political parties have taken a stand against racism and Islamophobia."
However, a spokesman for the Sweden Democrats said there was no reason to consider the attacks to be politically motivated.
"This is not political, it's criminal. It's criminals doing this and it's a police matter, not a political question," said Henrik Vinge.
"This type of violence is something we take very seriously.... It's unacceptable of course."
Muslim groups have called on politicians to join vigils in several cities around the country Friday to show their opposition to racially-motivated violence. |
New York, NY — (July 13, 2011) — Days away from the release of their new EP Io, Sarah Masimore, lead singer and lyricist of alternative rock band FINDING JUPITER, reveals an in-depth portrait of the experiences that inspired the current singles "Roses" and "Stars." An active and proud member of the LGBT community, Masimore draws on her own experiences in her lyrical storytelling, capturing love, life, and heartache from a viewpoint outside the alt-rock norm. Paired with Dean Schaffer's quick fingers on guitar, Taylor Brady's mastery on the kit, and Peter McDonald's grounding bass, Masimore's insightful lyrics will surprise listeners with deliciously catchy pop sensibilities beneath an alt-rock exterior, sure to make these songs stick.
"I had just fallen for a girl like I had never fallen before. Not only was I madly in love, but for some strange reason she was madly in love with me too. Because of the unique complexity that exists within lesbian communities, there was a ton of backlash in our group of friends/ex's once we started dating. It actually got really intense... 'Roses' was a way for me to embrace this love that I knew was literally a once in a lifetime experience and to deal with the unabashed and thoughtless judgment I was getting from so many."
Masimore's relatable struggle with self-confidence provides the backdrop to Finding Jupiter's song "Stars." She says the song is about "being on the brink of doubt and failure, but making the conscious decision to make a change in your life. It's about living life fully… and choosing optimism. Things will get better. 'Leave all your fires lit, cause it won't always be like this' is probably my favorite line."
Io will be available for purchase and download Tuesday, July 19th, 2011. |