Modeling, simulation, and analysis of birefringent effects in plastic optics Plastic optics are widely used across many applications, and they are prone to birefringent effects introduced during manufacturing or in service. Finite element modeling of a plastic optic in a CAD interface is performed, along with experimental and theoretical comparison of the specimen using solid mechanics and image processing. A low-birefringence plastic optic is chosen for the experiment, and a varying load is applied to observe its characteristics in both experiment and simulation. A low-birefringence polariscope was used to measure the birefringence in the plastic specimen. Birefringence arises from several effects, such as stress-induced birefringence, temperature-induced birefringence due to thermal gradients, and pressure during manufacturing. Here, stress is induced in a low-birefringence specimen by two-point compression loading, and the results are compared on the basis of solid mechanics, finite element modeling, and image processing. The results from the three approaches were found to agree closely.
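For readers unfamiliar with stress-induced birefringence, the sketch below evaluates the standard stress-optic relation used in photoelasticity. It is a minimal illustration only: the wavelength, stress-optic coefficient, thickness, and stress values are assumed for the example and do not come from the abstract above.

# A minimal sketch of the stress-optic relation used in photoelasticity.
# All numeric values below are illustrative assumptions, not the paper's data.

WAVELENGTH = 589e-9        # light wavelength in m (sodium line, assumed)
C_STRESS_OPTIC = 65e-12    # stress-optic coefficient in 1/Pa (assumed)
THICKNESS = 5e-3           # specimen thickness in m (assumed)

def birefringence(sigma1: float, sigma2: float) -> float:
    """Stress-induced birefringence: dn = C * (sigma1 - sigma2)."""
    return C_STRESS_OPTIC * (sigma1 - sigma2)

def fringe_order(sigma1: float, sigma2: float) -> float:
    """Isochromatic fringe order: N = C * t * (sigma1 - sigma2) / lambda."""
    retardation = birefringence(sigma1, sigma2) * THICKNESS
    return retardation / WAVELENGTH

# Example: a 2 MPa principal stress difference under two-point compression.
print(f"dn = {birefringence(2e6, 0):.3e}")
print(f"N  = {fringe_order(2e6, 0):.2f}")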
// This file is auto-generated, don't edit it. Thanks.
package com.antgroup.antchain.openapi.bot.models;

import com.aliyun.tea.*;

public class ImportPeripheralRequest extends TeaModel {
    // Authorization token under OAuth mode
    @NameInMap("auth_token")
    public String authToken;

    @NameInMap("product_instance_id")
    public String productInstanceId;

    // Manufacturer name
    @NameInMap("corp_name")
    public String corpName;

    // Data model id
    @NameInMap("peripheral_data_model_id")
    @Validation(required = true)
    public String peripheralDataModelId;

    // Peripheral device ID
    @NameInMap("peripheral_id")
    @Validation(required = true)
    public String peripheralId;

    // Peripheral device name
    @NameInMap("peripheral_name")
    public String peripheralName;

    // Scene code
    @NameInMap("scene")
    @Validation(required = true)
    public String scene;

    // Device type code, required; corresponds to the device type in the asset management platform.
    //
    // Enum values:
    //
    // Vehicle 1000
    // Vehicle > four-wheeler 1001
    // Vehicle > four-wheeler > pure electric four-wheeler 1002
    // Vehicle > four-wheeler > hybrid four-wheeler 1003
    // Vehicle > four-wheeler > fuel four-wheeler 1004
    // Vehicle > two-wheeler 1011
    // Vehicle > two-wheeler > two-wheeled bicycle 1012
    // Vehicle > two-wheeler > two-wheeled moped 1013
    //
    // Battery-swap cabinet 2000
    // Battery-swap cabinet > two-wheeler battery-swap cabinet 2001
    //
    // Battery 3000
    // Battery > lithium iron phosphate battery 3001
    // Battery > ternary lithium battery 3002
    //
    // Recycling device 4000
    //
    // Waste-sorting recycling 4001
    //
    // Car washer 5000
    @NameInMap("device_type_code")
    public Long deviceTypeCode;

    // Device unit price, in cents
    @NameInMap("initial_price")
    public Long initialPrice;

    // Factory time
    @NameInMap("factory_time")
    @Validation(pattern = "\\d{4}[-]\\d{1,2}[-]\\d{1,2}[T]\\d{2}:\\d{2}:\\d{2}([Z]|([\\.]\\d{1,9})?[\\+]\\d{2}[\\:]?\\d{2})")
    public String factoryTime;

    // Deployment time
    @NameInMap("release_time")
    @Validation(pattern = "\\d{4}[-]\\d{1,2}[-]\\d{1,2}[T]\\d{2}:\\d{2}:\\d{2}([Z]|([\\.]\\d{1,9})?[\\+]\\d{2}[\\:]?\\d{2})")
    public String releaseTime;

    public static ImportPeripheralRequest build(java.util.Map<String, ?> map) throws Exception {
        ImportPeripheralRequest self = new ImportPeripheralRequest();
        return TeaModel.build(map, self);
    }

    public ImportPeripheralRequest setAuthToken(String authToken) {
        this.authToken = authToken;
        return this;
    }

    public String getAuthToken() {
        return this.authToken;
    }

    public ImportPeripheralRequest setProductInstanceId(String productInstanceId) {
        this.productInstanceId = productInstanceId;
        return this;
    }

    public String getProductInstanceId() {
        return this.productInstanceId;
    }

    public ImportPeripheralRequest setCorpName(String corpName) {
        this.corpName = corpName;
        return this;
    }

    public String getCorpName() {
        return this.corpName;
    }

    public ImportPeripheralRequest setPeripheralDataModelId(String peripheralDataModelId) {
        this.peripheralDataModelId = peripheralDataModelId;
        return this;
    }

    public String getPeripheralDataModelId() {
        return this.peripheralDataModelId;
    }

    public ImportPeripheralRequest setPeripheralId(String peripheralId) {
        this.peripheralId = peripheralId;
        return this;
    }

    public String getPeripheralId() {
        return this.peripheralId;
    }

    public ImportPeripheralRequest setPeripheralName(String peripheralName) {
        this.peripheralName = peripheralName;
        return this;
    }

    public String getPeripheralName() {
        return this.peripheralName;
    }

    public ImportPeripheralRequest setScene(String scene) {
        this.scene = scene;
        return this;
    }

    public String getScene() {
        return this.scene;
    }

    public ImportPeripheralRequest setDeviceTypeCode(Long deviceTypeCode) {
        this.deviceTypeCode = deviceTypeCode;
        return this;
    }

    public Long getDeviceTypeCode() {
        return this.deviceTypeCode;
    }

    public ImportPeripheralRequest setInitialPrice(Long initialPrice) {
        this.initialPrice = initialPrice;
        return this;
    }

    public Long getInitialPrice() {
        return this.initialPrice;
    }

    public ImportPeripheralRequest setFactoryTime(String factoryTime) {
        this.factoryTime = factoryTime;
        return this;
    }

    public String getFactoryTime() {
        return this.factoryTime;
    }

    public ImportPeripheralRequest setReleaseTime(String releaseTime) {
        this.releaseTime = releaseTime;
        return this;
    }

    public String getReleaseTime() {
        return this.releaseTime;
    }
}
/**
 * Finds the script file that was passed by the command line.
 *
 * @param pargs
 * @return the {@link File}
 */
private static File findScriptFile(Args pargs) {
    String fileName = pargs.getFileName();
    File file = new File(fileName);
    if (!file.exists()) {
        file = new File(System.getProperty("user.dir"), fileName);
        if (!file.exists()) {
            for (File dir : pargs.getIncludeDirectories()) {
                file = new File(dir, fileName);
                if (file.exists()) {
                    return file;
                }
            }
        }
    }
    if (!file.exists()) {
        System.out.println("Unable to find '" + fileName + "'");
        System.exit(1);
    }
    pargs.getIncludeDirectories().add(file.getParentFile());
    return file;
}
// Download performs a file download of the given url.
// This method provides no feedback to the system.
func Download(url string, downloadTofileName string) {
	log.Println("Downloading", url)
	log.Println("Destination", downloadTofileName)
	log.Println("This could take a few mins :)")

	// The original code discarded this error and never closed the file;
	// both are fixed here.
	output, err := os.Create(downloadTofileName)
	if err != nil {
		log.Println("Error while creating", downloadTofileName, "-", err)
		return
	}
	defer output.Close()

	response, err := http.Get(url)
	if err != nil {
		log.Println("Error while downloading", url, "-", err)
		return
	}
	defer response.Body.Close()

	n, err := io.Copy(output, response.Body)
	if err != nil {
		log.Println("Error while downloading", url, "-", err)
		return
	}
	log.Println(n, "bytes downloaded")
}
package platform;

import jade.core.Agent;

/**
 * Class helping with logging to standard output.
 *
 * @author <NAME>
 */
public class Log {
    /**
     * Prints to standard output a message emitted by the agent, formed of the objects in the output.
     *
     * @param agent
     *            - the agent logging the message.
     * @param output
     *            - components of the output.
     */
    public static void log(Agent agent, Object... output) {
        String out = agent.getLocalName() + ": ";
        for (Object o : output)
            out += (o != null ? o.toString() : "<null>") + " ";
        System.out.println(out);
    }
}
The above video is of two Russian men who decided to film themselves walking the streets of Russia holding hands. As many of you know, homosexuality is quite the hot-button issue in Russia, and when I say hot-button, I mean that you may get physically beaten if you express it out in the open. The two Russian men, to my knowledge, did not openly claim to be homosexual at any time, but two men simply walking down the street joined at the hands is enough to imply this in our gynocentric culture. Men, you see, are not supposed to touch each other, whether it be in friendship or otherwise. But the most telling aspect of the video is that it is not women, but men who are the most incensed at the sight of two other men holding hands. It's pure gynocentrism. The vast majority of people who seemed to have a problem with it were men; the women seem to have just been going about their business, while the men were threatening, verbally abusive, and even violent. As men we like to believe that we are more accepting of a "live and let live" mentality, that we prize freedom and self-sufficiency above all other things, but the truth is that in many respects we aren't. In many respects we are just as intent on forcing our way of life on others as women are. We do not like to see two men who aren't available for copulation with women and who openly tell the world they aren't. We have to beat them down and try to scare them into melting back into the shadows where they belong. How many of these Russian men enjoy a bit of lesbian porn in the privacy of their own homes, but hate that two men hold hands together? It's hypocritical as hell, but I guess you can't really blame men; they're just doing what they're good at, acting as the enforcers for the gynocentric machine.
from abc import ABCMeta, abstractmethod

from clean_architecture_example_application.core.usecases.usecase import UseCase


class Mapper(metaclass=ABCMeta):

    @abstractmethod
    def map(self, input: UseCase.OutputValues):
        pass
Clinical presentation and management of hypophysitis: An observational study of case series

Background: Hypophysitis is described as a rare chronic inflammatory affection of the pituitary gland. However, to date, its pathogenesis has not been completely elucidated. Clinical features are polymorphic, including symptoms related to inflammatory compression and/or hypopituitarism. Laboratory tests determine hormone deficiencies and orient the replacement therapy protocol. MRI of the hypothalamic-pituitary region is crucial in exhibiting major radiological signs such as homogeneous pituitary enlargement and thickening of the gland stalk. The etiological diagnosis is still challenging, though without affecting the management strategy. Corticosteroids have been widely used, but close follow-up without any treatment has also been approved.

Case Description: In this report, seven patients with hypophysitis were collected over a period of 6 years. The average age of our patients was 32.1 years ± 11.8, with a female predominance (71.4%). Panhypopituitarism was observed in 42.9% of cases, and a combined deficiency of the hypothalamic-pituitary thyroid, adrenal, and gonadal axes in 28.6% of cases. Central diabetes insipidus was noted in 42.9% of the patients. Idiopathic hypophysitis was the most common etiology. A long course of corticosteroids was required in 28.6% of cases, when compressive signs were reported.

Conclusion: Hypophysitis remains a rare disease with nonspecific clinical and radiological patterns. An autoimmune origin seems to be the most frequent etiology. No guidelines have been established for hypophysitis management, and the evolution is still unpredictable.

INTRODUCTION

Hypophysitis is described as a rare chronic inflammatory affection of the pituitary gland, which may then damage the pituitary tissue and be responsible for temporary or permanent endocrine disorders. However, to date, its pathophysiological mechanism has not been completely elucidated. Hypophysitis classification is mainly based on histological and etiological patterns. Therefore, four histological types have been described, starting with granulomatous hypophysitis, described in 1917, then lymphocytic or autoimmune hypophysitis as the most frequent variant, along with three other subtypes depending on the pituitary region involved. More recently, Folkerth et al. described xanthomatous hypophysitis, followed by the latest reported type, IgG4-related hypophysitis. Clinical features of hypophysitis are polymorphic, usually related to sellar and parasellar compression (illustrated clinically by headache, nausea, vomiting, and visual disturbances), pituitary hormone deficiencies, or diabetes insipidus. Several radiological signs may be very useful in the diagnostic discussion, especially homogeneous pituitary enlargement with intense contrast enhancement and loss of the bright spot of the neurohypophysis on T1-weighted images. Hypophysitis management revolves around the necessity of early hormone replacement therapy, the decision to initiate corticosteroid treatment, and the use of decompression surgery when indicated. In this report, we describe seven cases of hypophysitis diagnosed through clinical symptoms and imaging patterns without pathological examination, with good evolution under replacement therapy, and corticosteroids in one case. Regression of the clinical and radiological signs of hypophysitis was observed in most of our patients, even without corticosteroid therapy.
CLINICAL AND PARACLINICAL EVALUATION

All of the patients included in this study were over the age of 18 years, followed up in our center between July 2014 and January 2021 with the diagnosis of hypophysitis. They all had a complete, updated medical record according to the data exploitation features. Patients who did not match those criteria were excluded from the study. The mean age of the cases (±SD) was 32.1 years ± 11.8, with 71.4% women, and the diagnosis of hypophysitis was based on the analysis of all the data from the clinical, biological, and radiological assessment. Symptoms were variable, including headache, visual acuity disorders, nausea, and vomiting. Laboratory tests evaluated cortisol, gonadotropins, prolactin, thyroid hormone levels, and insulin-like growth factor 1. Imaging signs of hypophysitis were assessed by magnetic resonance imaging (MRI) of the hypothalamic-pituitary region using sagittal and coronal T1- and T2-weighted sections with gadolinium enhancement. Several radiological findings were suggestive of the diagnosis, including gland enlargement and increased stalk thickening, with the absence of the posterior pituitary bright spot reflected clinically by the presence of diabetes insipidus.

RESULTS

Each patient underwent a clinical examination, besides the evaluation of both anterior and posterior pituitary functions. Patients consulted for signs of sellar compression in 85.7% of cases (headache: 85.7%, vomiting: 71.4%, and visual disturbance: 57.1%) and a polyuria-polydipsia syndrome (PPS) in 42.9% of cases. Endocrine assessment of our patients showed a combined corticotropic, thyrotropic, and gonadotropic axis deficit in 71% of cases; the somatotropic axis was deficient in 28% of cases, and hyperprolactinemia was observed in one patient. Panhypopituitarism was found in 42.9% of cases, and central diabetes insipidus was observed in 42.9% of the patients. The most frequently observed radiological features were homogeneous and symmetric pituitary enlargement with intense contrast enhancement in 57.1% of cases, and a loss of the neurohypophysis bright spot without any involvement of the anterior pituitary [Figure 2]. In addition, patients with infundibulo-neurohypophysitis still had diabetes insipidus with the same radiological features (loss of hyperintensity of the neurohypophysis on T1-weighted images). However, the same deficiencies mentioned in the initial assessment were observed. Only one patient died, of acute respiratory distress related to an uninvestigated lung condition.

DISCUSSION

Chronic inflammation of the pituitary gland is a rare condition, explaining the low number of reported cases in the literature. We were able to compile seven cases over a period of 6 years, whereas Fedala et al. in Algeria reported a total of 15 cases over 16 years and Imber et al. reported 21 cases over 17 years. A clear predominance of women was noted by most authors throughout the literature. This was also observed in our study, which clearly demonstrates the significant female involvement in this pathology. Literature data demonstrated that hypophysitis has a nonspecific clinical presentation, including some usual symptoms related to inflammatory compression of sellar and parasellar structures or lymphocytic pleocytosis (headache and visual disturbances). Anterior hypopituitarism was seen in cases of adenohypophysis involvement by the inflammatory process. PPS is also frequently observed, due to neurohypophysis involvement. All these clinical findings were also observed in our study. Furthermore, Honegger et al.
observed a weight gain in 18% of cases, which is explained by the autoimmune involvement of the hypothalamic base leading to central leptin insensitivity; leptin plays an important role in satiety, and this insensitivity leads to hyperphagia and obesity. Biological assessment of hypophysitis aims to determine which axes are deficient and to confirm the etiology of the observed central diabetes insipidus responsible for PPS. The number of axes involved is variously reported in the literature depending on the authors. The adrenal axis seems to be the most frequently involved, followed by the thyroid axis and then the gonadal axis. Buxton and Robertson analyzed the chronology of anterior pituitary axis involvement to differentiate between hypophysitis and its main differential diagnosis. In fact, in pituitary adenoma, unlike hypophysitis, the gonadal and somatotropic axes are vulnerable; thus a conserved somatotropin secretion is more in favor of a primary hypophysitis diagnosis. A pronounced hypopituitarism contrasting with a small-sized lesion raises suspicion of hypophysitis; indeed, a panhypopituitarism or a combined deficit of the adrenal, thyroid, and gonadal axes was frequently observed in our study, concurring with the data of the literature. The presence of diabetes insipidus implies an autoimmune involvement of the infundibulo-neurohypophysis, specifically the vasopressin-secreting cells, by anti-hypothalamic antibodies. MRI was an efficient imaging tool in the assessment of the hypothalamic-pituitary region. Radiological diagnosis of hypophysitis can sometimes be difficult due to the polymorphic nature of the lesions; however, certain findings are of great orientation value. In particular: a pituitary enlargement, often homogeneous and symmetric; intense homogeneous enhancement post-gadolinium on T1WI and T2WI; thickening of the pituitary stalk; and loss of the bright spot of the neurohypophysis on T1WI, and potentially on a T1 Fat-Sat sequence, in the case of an infundibulo-neurohypophysitis. The real challenge for an endocrinologist or a radiologist is to differentiate between a hypophysitis and its main differential, the holosellar pituitary adenoma. For that purpose, we use the Gutenberg et al. radiological score, which takes into account: patient's age (<30 years old), relation to pregnancy, pituitary volume, signal intensity and homogeneity post-gadolinium, mass symmetry, presence or loss of the posterior pituitary bright spot, and stalk size. These various radiological findings were observed by several authors at different rates, as in our study. More specifically, the radiological lesions observed were a pituitary enlargement, intense and homogeneous enhancement post-gadolinium injection, pituitary stalk thickening, and loss of the bright spot of the neurohypophysis in case of posterior involvement, whether isolated or associated with anterior involvement. Throughout the literature, various etiologies were described by different authors. Our study reported a lymphocytic involvement in 80% of cases versus a granulomatous involvement in 20% of cases, in accordance with the data from the Guo et al. series. In the absence of any consensus regarding management, multiple therapeutic protocols were described throughout the literature, joining clinical, biological, and radiological monitoring to hormone replacement therapy and/or use of desmopressin in case of an associated central diabetes insipidus, or corticosteroids or immunosuppressive therapy and surgery, even radiation therapy in case of refractory hypophysitis.
A more conservative approach by Honegger et al. was promising (30/76), with MRI regression of the disease in 46% of cases and a stabilized pituitary size in 27%, with improvement of adenohypophysis function in 27% of cases and of neurohypophysis function in three patients. In our series of cases, the conservative approach was favored in the absence of compressive signs, to avoid the side effects of surgery, especially in young patients, and of long-term corticosteroid therapy. Multiple corticosteroid therapy protocols were reported throughout the literature [15,17], with different success and failure rates. Generally speaking, corticosteroid therapy was used in cases of compression signs of neighboring structures. Kristof et al. reported a normalization or an improvement of MRI results in 89% of patients, with a recovery of adenohypophysis function and no or minimal side effects, related to a standardized treatment with glucocorticoids over a 6-week period. In a review of the literature, Lupi et al. found a pituitary mass reduction in 87% of patients utilizing oral glucocorticoids, and in 75% of patients receiving IV glucocorticoids. In the Honegger et al. series, 32 patients received corticosteroid therapy at a dose varying between 20 and 500 mg/day, over a considerably varied period ranging from 4 days to a year, with a follow-up of up to 12 years. A good radiological evolution was noted in 65.5% of patients. In addition, an endocrine improvement was noticed in 15% of cases. This study was considered the first study providing clear evidence of a high recurrence rate in primary hypophysitis after glucocorticoid treatment, with a relapse observed in 38% of cases. However, this study showed no correlation between recurrence and the duration of high-dose glucocorticoid therapy. Furthermore, recurrence was not related to the initial dose of glucocorticoids. In our series, corticosteroid therapy was administered to only two patients, those with visual disturbances, at a dose of 1 mg/kg/day for 1 month, followed by gradual dose reduction. Pituitary volume regression was assessed in one patient, while stabilization was observed in the other one. Nonetheless, no endocrine improvement was noticed, which contradicts the findings in the previous literature. In the literature review carried out by Karaca and Kelestimur, surgical management of hypophysitis was indicated when a mass effect was determined, such as visual deterioration, ophthalmoplegia, severe symptoms, an uncertain differential diagnosis, or corticosteroid failure, with a significant clinical and radiological improvement rate between 68% and 100%. The risk of deterioration of pituitary functions after surgery has been reported at from 11% up to 40% in the literature. The recurrence rate is estimated at between 8% and 20% throughout the literature. Pituitary surgery, like all surgeries, is not without complications, which in this condition can occur in up to 10% of cases. These complications include postoperative meningitis and rhinorrhea, which require surgical revision. Decompression surgery was not required in our series of patients, given the success with corticosteroids along with hormone replacement therapies. After the analysis of this study and the available data in the literature, we found that the evolution of hypophysitis was unpredictable and inconstant. In fact, it could be either favorable, or the progression of hypophysitis can lead to an empty sella.
These experimental observations were also observed in humans, and more results might be required to assess this scientific hypothesis.

CONCLUSION

Hypophysitis remains a rare disease with a polymorphic and nonspecific clinical and radiological presentation. Primary damage to the pituitary gland of autoimmune origin seems to be the most frequent etiology, and its management should revolve around the need to correct hormonal deficits and to remove the compression on neighboring structures if present. However, the evolution varies from patient to patient, hence the need for multidisciplinary management.

Declaration of patient consent

Patients' consent not required, as the patients' identity is not disclosed or compromised.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
SPRINGFIELD -- A guide to the city's cultural attractions -- from the historic H.H. Richardson Courthouse to a new pop-up art gallery -- is now as close as your phone or mobile device. The Springfield Cultural District announced Tuesday that its Cultural Walking Tour Map is now available as a free app for devices running either Google's Android or Apple's iOS. The map, created last summer, was previously only available as a paper handout or as a download from the website. It will still be available in those formats, and as a video map at springfieldculture.org. The app can be found by searching "Springfield Cultural Tour" in Apple's App Store or in Google Play, according to a statement released Tuesday by Morgan Drewniany, executive director of the district. The app allows users to sort attractions by topic and highlight what they want to see. Listed attractions include the Springfield Museums, the H.H. Richardson Courthouse -- now home to the juvenile court and the Western Division of the Massachusetts Housing Court -- and Duryea Way. It also lists new or temporary attractions such as the downtown painted utility boxes and pop-up Art Stop Galleries. The navigation and display are based on Google Maps, according to the release. The app allows users to see themselves move through the city in real time, with Cultural District highlights and points of interest showing up in the moment. The project was funded by the Community Foundation of Western MA. Additional support came from the City of Springfield, Greater Springfield Convention and Visitor's Bureau, and the Springfield Business Improvement District, Drewniany said in the release.
Association of H19 promoter methylation with the expression of H19 and IGF-II genes in adrenocortical tumors. Low H19 and abundant IGF-II expression may have a role in the development of adrenocortical carcinomas. In the mouse, the H19 promoter area has been found to be methylated when transcription of the H19 gene is silent and unmethylated when it is active. We used PCR-based methylation analysis and bisulfite genomic sequencing to study the cytosine methylation status of the H19 promoter region in 16 normal adrenals and 30 pathological adrenocortical samples. PCR-based analysis showed higher methylation status at three HpaII-cutting CpG sites of the H19 promoter in adrenocortical carcinomas and in a virilizing adenoma than in their adjacent normal adrenal tissues. Bisulfite genomic sequencing revealed a significantly higher mean degree of methylation at each of 12 CpG sites of the H19 promoter in adrenocortical carcinomas than in normal adrenals (P < 0.01 for all sites) or adrenocortical adenomas (P < 0.01, except P < 0.05 for site 12 and P > 0.05 for site 11). The mean methylation degree of the 12 CpG sites was significantly higher in the adrenocortical carcinomas (mean +/- SE, 76 +/- 7%) than in normal adrenals (41 +/- 2%) or adrenocortical adenomas (45 +/- 3%; both P < 0.005). RNA analysis indicated that the adrenocortical carcinomas expressed less H19 but more IGF-II RNAs than normal adrenal tissues did. The mean methylation degree of the 12 H19 promoter CpG sites correlated negatively with H19 RNA levels (r = -0.550; P < 0.01), but positively with IGF-II mRNA levels (r = 0.805; P < 0.001). In the adrenocortical carcinoma cell line NCI-H295R, abundant IGF-II, but minimal H19, RNA expression was detected by Northern blotting. Treatment with a cytosine methylation inhibitor, 5-aza-2'-deoxycytidine, increased H19 RNA expression, whereas it decreased IGF-II mRNA accumulation dose- and time-dependently (both P < 0.005) and reduced cell proliferation to 10% in 7 d. Our results suggest that altered DNA methylation of the H19 promoter is involved in the abnormal expression of both H19 and IGF-II genes in human adrenocortical carcinomas.
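As a minimal illustration of the correlation analysis reported above, the sketch below computes a Pearson correlation between per-sample mean methylation of the 12 H19 promoter CpG sites and H19 RNA levels. All of the arrays are invented placeholders for illustration only; they are not the study's data.

# A hedged sketch: Pearson correlation between mean promoter methylation and
# RNA expression, in the spirit of the r = -0.550 result reported above.
# All numbers below are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

# One row per sample: fraction methylated at each of 12 CpG sites (hypothetical).
methylation = np.array([
    [0.8, 0.7, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8],  # carcinoma-like
    [0.4, 0.5, 0.4, 0.4, 0.3, 0.4, 0.5, 0.4, 0.4, 0.4, 0.5, 0.4],  # normal-like
    [0.5, 0.4, 0.5, 0.5, 0.4, 0.4, 0.5, 0.5, 0.4, 0.5, 0.4, 0.5],  # adenoma-like
])
h19_rna = np.array([0.2, 1.0, 0.9])  # relative H19 RNA level (hypothetical)

mean_methylation = methylation.mean(axis=1)  # mean degree over the 12 sites
r, p = stats.pearsonr(mean_methylation, h19_rna)
print(f"r = {r:.3f}, P = {p:.3g}")  # expect a negative r for these placeholders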
import { InvalidArgumentException, Serializable } from "@swindle/core";
import { State } from "@domeniere/state";
import { Identifier } from "@domeniere/value";
import { EntityInterface } from "./entity.interface";

/**
 * Entity
 *
 * An entity is a domain object with an established identity.
 */
export abstract class Entity implements EntityInterface, Serializable {

    private static ID_STATE_KEY = "__id__";

    public readonly __state__: State;

    /**
     * creates a new entity instance.
     * @param id The entity identifier.
     * @throws InvalidArgumentException when the id is undefined.
     */
    constructor(id: Identifier) {
        if (!id) {
            // id is undefined.
            throw new InvalidArgumentException("An entity's id cannot be undefined.");
        }
        this.__state__ = new State();
        this.__state__.initialize(Entity.ID_STATE_KEY, id);
    }

    /**
     * commitStateChanges()
     *
     * commitStateChanges() informs the entity that a state change has occurred.
     */
    protected commitStateChanges(): void {
        //
    }

    /**
     * confirmStateChanges()
     *
     * confirms the state changes.
     */
    public confirmStateChanges(): void {
        this.__state__.confirmChanges();
    }

    /**
     * equals()
     *
     * Compares the entity to the suspect, to determine if they are equal.
     * @param suspect The suspect to be compared.
     */
    public abstract equals(suspect: any): boolean;

    /**
     * id()
     *
     * id() gets the id value of the entity.
     */
    public id(): Identifier {
        return this.__state__.get(Entity.ID_STATE_KEY);
    }

    /**
     * rollbackStateChanges()
     *
     * rolls back the committed state changes.
     */
    public rollbackStateChanges(): void {
        this.__state__.discardChanges();
    }

    public serialize(): string {
        return JSON.stringify({
            id: this.id().id().toString(),
            data: this.serializeData(),
        });
    }

    /**
     * serializeData()
     *
     * serializes the data.
     */
    protected abstract serializeData(): string;

    public toString(): string {
        return this.id().toString();
    }

    /**
     * setId()
     *
     * setId() sets the entity id.
     * @param id The id to set.
     */
    protected setId(id: Identifier): void {
        this.__state__.set(Entity.ID_STATE_KEY, id);
    }
}
/// Add a [`Multiaddr`] to the collection.
///
/// Adding an existing address is interpreted as additional
/// confirmation and thus increases its score.
pub fn add(&mut self, a: Multiaddr) {
    for r in &mut self.registry {
        if r.addr == a {
            r.score = r.score.saturating_add(1);
            isort(&mut self.registry);
            return;
        }
    }
    if self.registry.len() == self.limit.get() {
        self.registry.pop();
    }
    let r = Record { score: 0, addr: a };
    self.registry.push(r)
}
Sikhing Answers

Why Did Sikhs Side With The British During The 1857 Mutiny? Sikhing Answers - XXVII

This is the 27th in our series of questions and answers where we seek your active participation. A question is posed to you, our readers, inviting you to provide your answers. That is, each one of you - young and old - is invited to share with us what YOU believe is the correct answer. There is no presumption of a right or wrong answer, and nothing is sacrosanct - that is, please feel free to tell us what you honestly think, believe or conjecture.

Each question will remain open for answers for ONE WEEK, at the end of which we'll close the question and have a moderator review all the answers, do some research as well, and collate it all in order to come up with a concise and definitive answer. Once the moderator formulates the "final answer", it'll be posted, and all the answers provided to date to that particular question will be deleted.

This is not an academic exercise. Sikhi being a layperson's religion, we encourage all to provide what they know through their personal knowledge and research. All we ask is that you:

1. steer away from academic or esoteric lingo;
2. not regurgitate what you unearth on google, wikipedia, etc.;
3. be very short, and to the point.

We'll fine-tune this process as we go along and, before long, hope to have several questions on the table at the same time, with their closing dates staggered so as to allow you to concentrate on one question at a time.

The answers are to be posted at the bottom of each question page, where space has been provided for "Comments". We suggest that you encourage each of your children to participate separately, as can each adult in a family or household. Thus, we will teach each other.

TODAY'S QUESTION - # 27

Why did Punjab's Sikhs side with the British in helping them quell the 1857 Indian Mutiny - a rag-tag series of events which has been inflated by modern-day Indian jingoism and given the overblown and fictitious description of a 'war of independence'? How does 1857 relate to the so-called Anglo-Sikh Wars - in which soldiers from the rest of the sub-continent outside Punjab helped the British conquer the Sikh Kingdom, the last free country in the region - which concluded a mere 8 years earlier with the annexation of Punjab to the Raj?

Posted on May 25, 2012
Closing Date: June 1, 2012
Cigarette smoking, body mass index, and physical fitness changes among male navy personnel. INTRODUCTION Cigarette smoking has been reported to be higher among deployed military men than among similarly aged civilian or nondeployed men, but the short-term effect of smoking on physical fitness among these young healthy men is unclear. This study examined self-reported smoking status and change in objectively measured fitness over 1-4 years while controlling for body mass index (BMI). METHODS This study included a large sample of male U.S. navy personnel who deployed to Iraq or Kuwait between 2005 and 2008. A mixed modeling procedure was used to determine factors contributing to longitudinal changes in both BMI and fitness (measured by run/walk times, curl-ups, and push-ups). RESULTS Of the total sample (n = 18,537), the 20% current smokers were more likely than nonsmokers to be enlisted, younger, and have lower BMI measurements at baseline. In addition, smokers had slower 1.5-mile run/walk times and could do fewer curl-ups and push-ups compared with nonsmokers. The run/walk time model indicated that over 4 years, smokers (compared with nonsmokers) experienced a significantly greater rate of decrease in cardiorespiratory fitness, even after controlling for changes in BMI. CONCLUSIONS These results call for continued attention to the problem of nicotine use among young healthy men.
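For readers unfamiliar with the mixed modeling procedure mentioned in the METHODS, the sketch below shows how such a longitudinal model could be specified with statsmodels. The data frame, file name, and column names are hypothetical placeholders, not the study's.

# A hedged sketch of a longitudinal mixed model in the spirit of the METHODS
# above. The CSV file and column names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

# df would hold repeated fitness measurements: one row per person-year, with
# columns run_time (1.5-mile run/walk time), smoker (0/1),
# years_since_baseline, bmi, and subject_id.
df = pd.read_csv("navy_fitness.csv")  # hypothetical file

# Random intercept per subject; the smoker:years interaction captures whether
# smokers' run times worsen faster over time, controlling for BMI.
model = smf.mixedlm(
    "run_time ~ smoker * years_since_baseline + bmi",
    data=df,
    groups=df["subject_id"],
)
result = model.fit()
print(result.summary())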
// tests/main.cpp
#define CATCH_CONFIG_RUNNER
#include <catch2/catch.hpp>

#include <sys/types.h>
#include <unistd.h>

#include "thread.h"

#pragma GCC diagnostic ignored "-Wwrite-strings"

// It's necessary to set up the sigsegv fatal handler and to set up our own
// process group, so it's not possible to use the default main function
// provided by Catch2.
int main(int argc, char* argv[]) {
    Catch::Session session;

    // Switch to our own process group to avoid propagating the signals to the parent.
    setpgid(getpid(), getpid());

    int returnCode = session.applyCommandLine(argc, argv);
    if (returnCode != 0) {
        return returnCode;
    }

    // Ensure that the current thread is pinned to core 0, otherwise some tests
    // can fail if the kernel shifts the main thread of the process around.
    thread_current_set_affinity(0);

    return session.run();
}
def feature_network(args):
    return setup_convolutional_network(
        (args.image_size ** 2) * args.number_of_channels,
        get_number_of_labels(generate_dict_from_directory(args.train_path), args),
        args,
    )
Light-directed generation of the actin-activated ATPase activity of caged heavy meromyosin. An understanding of the molecular mechanism of muscle contraction will require a complete description of the kinetics of the myosin motor in vitro and in vivo. To this end chemical relaxation studies employing light-directed generation of ATP from caged ATP have provided detailed kinetic information in muscle fibers. A more direct approach would be to trigger the actin-activated ATPase activity from a caged myosin, i.e., myosin whose activity is blocked upon derivatization with a photolabile protection group. Herein we report that a new type of caged reagent can be used to prepare a caged heavy meromyosin by modification of critical thiol groups, i.e., a chemically modified motor without activity that can be reactivated at will using a pulse of near-ultraviolet light. Heavy meromyosin modified at Cys-707 with the thiol reactive reagent 1-(bromomethyl)-2-nitro-4,5-dimethoxybenzene does not exhibit an actin-activated ATPase activity and may be viewed as a caged protein. Absorption spectroscopy showed that the thioether bond linking the cage group to Cys-707 is cleaved following irradiation (340-400 nm) via a transient aci-nitro intermediate which has an absorption maximum at 440 nm and decays with a rate constant of 45.6 s(-1). The in vitro motility assay showed that caged heavy meromyosin cannot generate the force necessary to move actin filaments although following irradiation of the image field with a 30 ms pulse of 340-400 nm light the caged group was removed with the concomitant movement of most filaments at a velocity of 0.5-2 micron/s compared to 3-4 micron/s for unmodified HMM. The specificity and simplicity of labeling myosin with the caged reagent should prove useful in studies of muscle contraction in vivo.
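The decay of the aci-nitro intermediate (absorbance at 440 nm, rate constant 45.6 s^-1) is the kind of transient that is typically characterized by fitting a single exponential. A minimal sketch follows; the data points are synthetic, generated for illustration, and only the rate constant comes from the text above.

# A minimal sketch of fitting a single-exponential decay to transient
# absorbance data, as one might do for the aci-nitro intermediate at 440 nm.
# The data below are synthetic; only k = 45.6 1/s comes from the abstract.
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a0, k, offset):
    """A(t) = a0 * exp(-k * t) + offset."""
    return a0 * np.exp(-k * t) + offset

# Synthetic transient: decay with k = 45.6 1/s plus a little noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 200)                    # seconds
a = single_exponential(t, 0.05, 45.6, 0.002)
a += rng.normal(scale=5e-4, size=t.size)

popt, pcov = curve_fit(single_exponential, t, a, p0=(0.04, 30.0, 0.0))
print(f"fitted k = {popt[1]:.1f} 1/s")          # should recover ~45.6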
/*
 * FRAGMENT SHADER
 * Copyright © 2014+ <NAME>
 *
 * Distributed under the Boost Software License, version 1.0
 * See documents/LICENSE.TXT or www.boost.org/LICENSE_1_0.txt
 *
 * <EMAIL>
 */

#ifndef OPENGL_TOOLKIT_FRAGMENT_SHADER_HEADER
#define OPENGL_TOOLKIT_FRAGMENT_SHADER_HEADER

#include <Shader.hpp>

namespace glt
{

    class Fragment_Shader : public Shader
    {
    public:

        Fragment_Shader(const Source_Code & source_code)
        :
            Shader(source_code, GL_FRAGMENT_SHADER)
        {
        }

    };

}

#endif
/*! \file process.cpp
 *  \brief MXF wrapping functions
 *
 *  \version $Id$
 */
/*
 * This software is provided 'as-is', without any express or implied warranty.
 * In no event will the authors be held liable for any damages arising from
 * the use of this software.
 *
 * Permission is granted to anyone to use this software for any purpose,
 * including commercial applications, and to alter it and redistribute it
 * freely, subject to the following restrictions:
 *
 *   1. The origin of this software must not be misrepresented; you must
 *      not claim that you wrote the original software. If you use this
 *      software in a product, you must include an acknowledgment of the
 *      authorship in the product documentation.
 *
 *   2. Altered source versions must be plainly marked as such, and must
 *      not be misrepresented as being the original software.
 *
 *   3. This notice may not be removed or altered from any source
 *      distribution.
 */

#include <stdio.h>
#include <iostream>
#include <string>
using namespace std;

#include "mxflib/mxflib.h"
using namespace mxflib;

#include "process.h"
#include "process_utils.h"
#include "productIDs.h"

FILE *hLogout = stdout;		// File handle to send all the informational output to

std::string GetVersionText()
{
	char VersionText[256];
	snprintf(VersionText, 255, "MXFWrap %s.%s.%s(%s)%s of %s %s",
	         PRODUCT_VERSION_MAJOR, PRODUCT_VERSION_MINOR, PRODUCT_VERSION_TWEAK,
	         PRODUCT_VERSION_BUILD, MXFLIB_VERSION_RELTEXT(PRODUCT_VERSION_REL),
	         __DATE__, __TIME__);
	return VersionText;
}

void SetUpIndex(int OutFileNum, ProcessOptions *pOpt, MetadataPtr MData,
                EssenceSourcePair *Source, EssenceParser::WrappingConfigList WrapCfgList,
                EssenceStreamInfo *EssStrInf)
{
	EssenceParser::WrappingConfigList::iterator WrapCfgList_it;

	// Find all essence container data sets so we can update "IndexSID"
	MDObjectPtr ECDataSets = MData[ContentStorageObject_UL];
	if(ECDataSets) ECDataSets = ECDataSets->GetLink();
	if(ECDataSets) ECDataSets = ECDataSets[EssenceDataObjects_UL];

	int PreviousFP = -1;		// The index of the previous file package used - allows us to know if we treat this as a sub-stream
	int iStream = -1;			// Stream index (note that it will be incremented to 0 in the first iteration)
	int iTrack = 0;
	bool IndexAdded = false;	// Set once we have added an index so we only add to the first stream in a frame-group

	WrapCfgList_it = WrapCfgList.begin();
	while(WrapCfgList_it != WrapCfgList.end())
	{
		// Move on to a new stream if we are starting a new file package
		if(Source[iTrack].first != PreviousFP) iStream++;

		// Only process the index for the first stream of a file package
		if((Source[iTrack].first != PreviousFP || pOpt->OPAtom) && (!(*WrapCfgList_it)->IsExternal))
		{
			// Write File Packages except for externally ref'ed essence in OP-Atom
			bool WriteFP = (!pOpt->OPAtom) || (iStream == OutFileNum);

			if(WriteFP)
			{
				// Only index it if we can
				// Currently we can only VBR index frame wrapped essence
				// FIXME: We enable the VBR mode twice doing it this way, which is not ideal - should we cache the result? Or do we even need to check?
				if( ((*WrapCfgList_it)->WrapOpt->CBRIndex && (Source[iTrack].second->GetBytesPerEditUnit() != 0))
				 || ((*WrapCfgList_it)->WrapOpt->CanIndex
				 || ((*WrapCfgList_it)->WrapOpt->ThisWrapType == WrappingOption::Frame
				 || Source[iTrack].second->EnableVBRIndexMode() )))
				{
					if( ( pOpt->OPAtom && iTrack == OutFileNum)
					 || (!pOpt->OPAtom && pOpt->FrameGroup && (!IndexAdded))
					 || (!pOpt->OPAtom && !pOpt->FrameGroup) )
					{
						UInt32 BodySID;		// Body SID for this essence stream
						UInt32 IndexSID;	// Index SID for the index of this essence stream

						IndexAdded = true;

						BodySID = EssStrInf[iStream].Stream->GetBodySID();
						IndexSID = BodySID + 128;
						EssStrInf[iStream].Stream->SetIndexSID(IndexSID);

						// Update IndexSID in essence container data set
						if(ECDataSets)
						{
							MDObject::iterator ECD_it = ECDataSets->begin();
							while(ECD_it != ECDataSets->end())
							{
								if((*ECD_it).second->GetLink())
								{
									if((*ECD_it).second->GetLink()->GetUInt(BodySID_UL) == BodySID)
									{
										(*ECD_it).second->GetLink()->SetUInt(IndexSID_UL, IndexSID);
										break;
									}
								}
								ECD_it++;
							}
						}
					}
				}
			}
		}

		// Record the file package index used this time
		PreviousFP = Source[iTrack].first;

		WrapCfgList_it++;
		iTrack++;
	}
}

//! Return the essence duration
Length ProcessEssence(int OutFileNum, ProcessOptions *pOpt, EssenceSourcePair *Source,
                      EssenceParser::WrappingConfigList WrapCfgList, BodyWriterPtr Writer,
                      Rational EditRate, MetadataPtr MData, EssenceStreamInfo *EssStrInf,
                      TimecodeComponentPtr MPTimecodeComponent)
{
#ifdef _WIN32
	LARGE_INTEGER start;
	QueryPerformanceCounter(&start);
#else
	struct timeval start;
	struct timezone tz;
	gettimeofday(&start, &tz);
#endif

	// Write the body
	if(pOpt->BodyMode == Body_None)
	{
		Writer->WriteBody();
	}
	else
	{
		while(!Writer->BodyDone())
		{
			if(pOpt->BodyMode == Body_Duration)
				Writer->WritePartition(pOpt->BodyRate, 0);
			else
				Writer->WritePartition(0, pOpt->BodyRate);
		}
	}

	// Work out the durations
	Length EssenceDuration;
	Int32 IndexBaseTrack;
	if( pOpt->OPAtom ) IndexBaseTrack = OutFileNum;
	else if( pOpt->FrameGroup ) IndexBaseTrack = 0;
	else IndexBaseTrack = 0;

	if(EssStrInf[IndexBaseTrack].Stream)
		EssenceDuration = (Length) EssStrInf[IndexBaseTrack].Stream->GetSource()->GetCurrentPosition();
	else
		EssenceDuration = -1;

#ifdef DEMO
	if(EssenceDuration > 20)
	{
		puts("Evaluation version cannot make file this long");
		return 0;
	}
#endif

#ifdef _WIN32
	LARGE_INTEGER end;
	QueryPerformanceCounter(&end);
	LARGE_INTEGER Freq;
	QueryPerformanceFrequency(&Freq);
	if(Freq.QuadPart != 0)
	{
		__int64 diff = end.QuadPart - start.QuadPart;
		float time = ((float)diff) / Freq.QuadPart;
		float fps = EssenceDuration / time;
		if(pOpt->ShowTiming)
			printf("Completed %s samples at %4.3f per second\n", Int64toString(EssenceDuration).c_str(), fps);
		else
			printf("Completed %s samples\n", Int64toString(EssenceDuration).c_str());
	}
#else
	struct timeval end;
	gettimeofday(&end, &tz);
	time_t secs = end.tv_sec - start.tv_sec;
	int usecs = end.tv_usec - start.tv_usec;
	float time = (float)secs + (float)usecs / 1000000.0;
	float fps = EssenceDuration / time;
	if(pOpt->ShowTiming)
		printf("Completed %s samples at %4.3f per second\n", Int64toString(EssenceDuration).c_str(), fps);
	else
		printf("Completed %s samples\n", Int64toString(EssenceDuration).c_str());
#endif

	// Update the modification time
	MData->SetTime();

	// Update all durations (Index Duration forced above)

	// Update Material Package Timecode Track Duration
	Length EditRateDuration = (Length) EssenceDuration * ( EditRate / (EssStrInf[IndexBaseTrack].Stream->GetSource()->GetEditRate()) );

	fprintf(hLogout, "EditRateDuration = %s\n", Int64toString(EditRateDuration).c_str());

	if(MPTimecodeComponent) MPTimecodeComponent->SetDuration(EditRateDuration);

	EssenceParser::WrappingConfigList::iterator WrapCfgList_it;
	int PreviousFP = -1;	// The index of the previous file package used - allows us to know if we treat this as a sub-stream
	int iStream = -1;		// Stream index (note that it will be incremented to 0 in the first iteration)
	int iTrack = 0;
	WrapCfgList_it = WrapCfgList.begin();
	while(WrapCfgList_it != WrapCfgList.end())
	{
		// Move on to a new stream if we are starting a new file package
		if(Source[iTrack].first != PreviousFP) iStream++;

		if(EssStrInf[iTrack].MPClip)
		{
			EssStrInf[iTrack].MPClip->SetDuration(EditRateDuration);

			// Set sub-track durations
			if(!EssStrInf[iTrack].MPSubTracks.empty())
			{
				TrackList::iterator m_it = EssStrInf[iTrack].MPSubTracks.begin();
				while(m_it != EssStrInf[iTrack].MPSubTracks.end())
				{
					if(!(*m_it)->Components.empty()) (*m_it)->Components.front()->SetDuration(EditRateDuration);
					m_it++;
				}
			}

			if( (!pOpt->OPAtom) || (iStream == OutFileNum) )
			{
				if(EssStrInf[iTrack].FPTimecodeComponent) EssStrInf[iTrack].FPTimecodeComponent->SetDuration(EditRateDuration);

				EssStrInf[iTrack].FPClip->SetDuration(EssenceDuration);
				// IDB july2012: this line was here negating the point of the logic above
				// - I assume there is no reason so we can delete in a bit if there are no consequences
				//EssStrInf[iTrack].FPClip->SetDuration(EssenceDuration);

				// Set sub-track durations
				if(!EssStrInf[iTrack].FPSubTracks.empty())
				{
					TrackList::iterator f_it = EssStrInf[iTrack].FPSubTracks.begin();
					while(f_it != EssStrInf[iTrack].FPSubTracks.end())
					{
						if(!(*f_it)->Components.empty()) (*f_it)->Components.front()->SetDuration(EssenceDuration);
						f_it++;
					}
				}

				// Set file descriptor durations
				(*WrapCfgList_it)->EssenceDescriptor->SetInt64(ContainerDuration_UL, EssenceDuration);
				if((*WrapCfgList_it)->EssenceDescriptor->IsA(MultipleDescriptor_UL))
				{
					MDObjectPtr FileDescriptors = (*WrapCfgList_it)->EssenceDescriptor->Child(FileDescriptors_UL);
					if(FileDescriptors)
					{
						MDObject::iterator it = FileDescriptors->begin();
						while(it != FileDescriptors->end())
						{
							if((*it).second->GetRef()) ((*it).second->GetRef())->SetInt64(ContainerDuration_UL, EssenceDuration);
							it++;
						}
					}
				}

				// If frame grouping, we will have added a manufactured Multiple Descriptor, so set its duration
				// DRAGONS: This will get called each time around the loop, so we set the duration multiple times, but this is not an issue
				if(pOpt->FrameGroup)
				{
					if(EssStrInf[iTrack].FPTrack->GetParent())
					{
						MDObjectPtr Descriptor = EssStrInf[iTrack].FPTrack->GetParent()->GetRef(Descriptor_UL);
						if(Descriptor) Descriptor->SetInt64(ContainerDuration_UL, EssenceDuration);
					}
				}

				// Update origin if required
				// DRAGONS: This is set in the File Package - the spec seems unclear about which Origin should be set!
				Position Origin = Source[iTrack].second->GetPrechargeSize();
				if(Origin)
				{
					TrackParent FPTrack = EssStrInf[iTrack].FPClip->GetParent();
					if(FPTrack) FPTrack->SetInt64(Origin_UL, Origin);
				}
			}
		}

		// Record the file package index used this time
		PreviousFP = Source[iTrack].first;

		WrapCfgList_it++;
		iTrack++;
	}

	// Return the finished length to the caller
	return EssenceDuration;
}

#include "process_metadata.h"

//! Process an output file
Length Process(int OutFileNum,
               MXFFilePtr Out,
               ProcessOptions *pOpt,
               EssenceParser::WrappingConfigList WrapCfgList,
               EssenceSourcePair *Source,
               Rational EditRate,
               UMIDPtr MPUMID,
               UMIDPtr *FPUMID,
               UMIDPtr *SPUMID,
               bool *pReadyForEssenceFlag /* =NULL */)
{
	TimecodeComponentPtr MPTimecodeComponent;

	Length Ret = 0;

	EssenceStreamInfo EssStrInf[ProcessOptions::MaxInFiles];
	// FP UMIDs are the same for all OutFiles, so they are supplied as a parameter
	PackagePtr FilePackage;

	/* Step: Create a set of header metadata */
	MetadataPtr MData = new Metadata();
	mxflib_assert(MData);
	mxflib_assert(MData->Object);

	// Build the body writer
	BodyWriterPtr Writer = new BodyWriter(Out);

#if defined FORCEGCMULTI
	// 377M MultipleDescriptor (D.5) requires an EssenceContainer label (D.1), which must be this
	// degenerate label (see mxfIG FAQ). Therefore the degenerate value must also appear in the
	// Header (A.1) and partition pack...
	// Also, explicitly required by AS-CNN sec 2.1.6
	// DRAGONS: Why is this here? It unconditionally adds "Used to describe multiple wrappings not
	//          otherwise covered under the MXF Generic Container node" to all MXF files!!

	// Assume we are doing GC
	ULPtr GCUL = new UL( mxflib::GCMulti_Data );
	MData->AddEssenceType( GCUL );

	// This appears to be acceptable to Avid XpressProHD 5.1.2
#endif

	// Process Metadata
	ProcessMetadata(OutFileNum, pOpt, Source, WrapCfgList, EditRate, Writer, MData,
	                MPUMID, FPUMID, SPUMID,
	                EssStrInf, FilePackage, MPTimecodeComponent	// OUT variables
	               );

	//
	// ** Set up IndexSID **
	//
	if(pOpt->UseIndex || pOpt->SparseIndex || pOpt->SprinkledIndex)
	{
		SetUpIndex(OutFileNum, pOpt, MData, Source, WrapCfgList, EssStrInf);
	}

	//
	// ** Set up the base partition pack **
	//

	PartitionPtr ThisPartition = new Partition(OpenHeader_UL);
	mxflib_assert(ThisPartition);
	ThisPartition->SetKAG(pOpt->KAGSize);	// Everything else can stay at default
	ThisPartition->SetUInt(BodySID_UL, 1);

	// Build an Ident set describing us and link into the metadata
	MDObjectPtr Ident = Metadata::MakeIdent(CompanyName, Product_UUID, ProductName,
	                                        ProductVersionString, ProductProductVersion);

	// Link the new Ident set with all new metadata
	// Note that this is done even for OP-Atom as the 'dummy' header written first
	// could have been read by another device. This flags that items have changed.
	MData->UpdateGenerations(Ident);

	ThisPartition->AddMetadata(MData);

	// Add the template partition to the body writer
	Writer->SetPartition(ThisPartition);

	//
	// ** Process Essence **
	//

	// Write the header (open and incomplete so far)

	// Set block alignment for Avid compatibility
	// with an extra -ve offset for essence to align the V rather than the K
	const UInt32 PartitionPackLength = 0x7c;
	const UInt32 AvidBlockSize = 0x60000;
	const UInt32 AvidKAGSize = 512;
	const UInt32 AvidIndexBERSize = 9;

	const UInt32 ULSize = 16;
	int DynamicOffset = 0 - ULSize;

	// Kludge to find the most likely BERSize
	EssenceSourcePtr Stream0 = EssStrInf[OutFileNum].Stream ? *(EssStrInf[OutFileNum].Stream->begin()) : EssenceSourcePtr(NULL);
	if( !Stream0 || Stream0->GetBERSize() == 0)
	{
		if(Stream0 && (EssStrInf[OutFileNum].Stream->GetWrapType() == ClipWrap) )
			DynamicOffset -= 8;
		else
			DynamicOffset -= 4;
	}
	else DynamicOffset -= Stream0->GetBERSize();

	if( pOpt->BlockSize )
	{
		// Set dynamic default if -ko=-1000
		if( pOpt->BlockOffset == -1000 ) pOpt->BlockOffset = DynamicOffset;

		Out->SetBlockAlign( pOpt->BlockSize, pOpt->BlockOffset, pOpt->BlockIndexOffset );
	}

	// Use padding per command line - even for block aligned files
	if(pOpt->HeaderPadding) Writer->SetPartitionFiller(pOpt->HeaderPadding);
	if(pOpt->HeaderSize) Writer->SetPartitionSize(pOpt->HeaderSize);

	// DRAGONS: would be nice to have an even length Header Partition
	//if(pOpt->HeaderSize) Writer->SetPartitionSize(pOpt->HeaderSize - PartitionPackLength);

	Writer->WriteHeader(false, false);

	// If we are writing OP-Atom update the OP label so that body partition packs claim to be OP-Atom
	// The header will remain as a generalized OP until it is re-written after the footer
	if( pOpt->OPAtom )
	{
		MData->SetOP(OPAtomUL);
	}

	if( pOpt->OPAtom )
	{
		// Set top-level file package correctly for OP-Atom
		// DRAGONS: This will need to be changed if we ever write more than one File Package for OP-Atom!
		if( FilePackage) MData->SetPrimaryPackage(FilePackage);
	}

	if(pReadyForEssenceFlag) *pReadyForEssenceFlag = true;

	Ret = ProcessEssence(OutFileNum, pOpt, Source, WrapCfgList, Writer, EditRate,
	                     MData, EssStrInf, MPTimecodeComponent);

	// Update SourcePackage Timecode Duration
	// DRAGONS: since we are assuming a 24 hour Source, don't need this
	// if( SPTimecodeComponent ) SPTimecodeComponent->SetDuration(EssenceDuration);

	// Update SourcePackage Edgecode Duration
	// DRAGONS: since we are assuming a 10000 foot Source, don't need this
	// if( SPEdgecodeComponent ) SPEdgecodeComponent->SetDuration(EssenceDuration);

	// Update the generation UIDs in the metadata to reflect the changes
	MData->UpdateGenerations(Ident);

	// Make sure any new sets are linked in
	ThisPartition->UpdateMetadata(MData);

	// Actually write the footer
	// Note: No metadata in OP-Atom footer
	if(pOpt->OPAtom) Writer->WriteFooter(false);
	else Writer->WriteFooter(true, true);

	//
	// ** Update the header **
	//
	// For generalized OPs update the value of "FooterPartition" in the header pack
	// For OP-Atom re-write the entire header
	//

	UInt64 FooterPos = ThisPartition->GetUInt64(FooterPartition_UL);
	Out->Seek(0);

	DataChunkPtr IndexData;

	if(pOpt->UpdateHeader)
	{
#ifndef WIN32
		static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
		pthread_mutex_lock(&mutex);
#endif
		// Read the old partition to allow us to keep the same KAG and SIDs
		PartitionPtr OldHeader = Out->ReadPartition();

		// Read any index table data
		IndexData = OldHeader->ReadIndexChunk();

		// If the header did not contain any index data, see if we can usefully add some - we will ditch it if the update fails
		bool AddingIndex = false;
		if(!IndexData)
		{
			// Search for the appropriate index table to add
			BodyStream *pStream = NULL;
			IndexManager *pManager = NULL;

			// If the header has essence - it must be indexed from that essence
			UInt32 BodySID = OldHeader->GetUInt(BodySID_UL);
			if(BodySID)
			{
				// Get the manager if the header-essence is indexed
				pStream = Writer->GetStream(BodySID);
				if(pStream) pManager = pStream->GetIndexManager();
			}
			else
			{
				// Scan all known streams...
				for(;;)
				{
					BodySID = Writer->GetNextUsedBodySID(BodySID);
					if(BodySID == 0) break;

					// ... looking for a CBR index
					pStream = Writer->GetStream(BodySID);
					if(pStream) pManager = pStream->GetIndexManager();

					if(pManager && pManager->IsCBR()) break;

					// TODO: We could try VBR if we know it will fit!
					if(pManager) break;
				}
			}

			// So here we either have the manager for the essence in the header,
			// or the manager for the first CBR essence stream,
			// or NULL

			// Read the index types and see what is requested
			BodyStream::IndexType IndexFlags = pStream->GetIndexType();
			if(pManager && ( IndexFlags & (BodyStream::StreamIndexSparseFooter) ) )
			{
				IndexTablePtr Index = pManager->MakeIndex();
				if(Index)
				{
					if(!pManager->IsCBR()) pManager->AddEntriesToIndex(Index);
					IndexData = new DataChunk();
					Index->WriteIndex(*IndexData);
					ThisPartition->SetUInt(IndexSID_UL, pManager->GetIndexSID());
					AddingIndex = true;
				}
			}
		}

		// Now update the partition we are about to write (the one with the metadata)
		ThisPartition->ChangeType(ClosedCompleteHeader_UL);
		ThisPartition->SetUInt64(FooterPartition_UL, FooterPos);
		ThisPartition->SetKAG(OldHeader->GetUInt(KAGSize_UL));

		// DRAGONS: We don't copy over the IndexSID if the code to add a new index table has added a new one in
		if(!AddingIndex) ThisPartition->SetUInt(IndexSID_UL, OldHeader->GetUInt(IndexSID_UL));

		ThisPartition->SetUInt64(BodySID_UL, OldHeader->GetUInt(BodySID_UL));

		Out->Seek(0);
		if(IndexData)
		{
			// Try and re-write with the index table; if this will not fit (and we have added the index in this update) remove it and try again
			bool Result = Out->ReWritePartitionWithIndex(ThisPartition, IndexData);
			if(AddingIndex && (!Result))
			{
				ThisPartition->SetUInt(IndexSID_UL, 0);
				Out->ReWritePartition(ThisPartition);
				fprintf(hLogout, "Note: An attempt was made to add a full index table to the Header.\n");
				fprintf(hLogout, "      This failed, but the header is still valid without the index table.\n");
			}
		}
		else
			Out->ReWritePartition(ThisPartition);
#ifndef WIN32
		pthread_mutex_unlock(&mutex);
#endif
	}
	else
	{
		ThisPartition = Out->ReadPartition();
		ThisPartition->SetUInt64(FooterPartition_UL, FooterPos);
		Out->Seek(0);
		Out->WritePartitionPack(ThisPartition);
	}

	return Ret;
}
class GlobalProperty:
    """Property which shares one value for all instances."""

    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner=None):
        return self.value

    def __set__(self, instance, value):
        self.value = value
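A short usage sketch (not part of the original file) showing the descriptor's shared-value behavior: because the value lives on the descriptor object itself rather than on each instance, assigning through any instance changes it for all instances. The Config class below is a hypothetical example.

# Usage sketch (illustrative only): the value is stored on the descriptor,
# so every instance of the owning class shares it.
class Config:
    debug = GlobalProperty(False)

a, b = Config(), Config()
a.debug = True      # goes through GlobalProperty.__set__
print(b.debug)      # True - the change is visible from every instance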
import re

from pygfa.graph_element.parser import line, field_validator as fv


class Path(line.Line):

    def __init__(self):
        super().__init__('P')

    REQUIRED_FIELDS = {
        'path_name': fv.GFA1_NAME,
        'seqs_names': fv.GFA1_NAMES,
        'overlaps': fv.GFA1_CIGARS,
    }

    PREDEFINED_OPTFIELDS = {}

    @classmethod
    def from_string(cls, string):
        """Extract the path fields from the string.

        The string can contain the P character at the beginning, or can
        just contain the fields of the path directly.
        """
        if len(string.split()) == 0:
            raise line.InvalidLineError("Cannot parse the empty string.")

        fields = re.split('\t', string)
        pfields = []
        if fields[0] == 'P':
            fields = fields[1:]

        if len(fields) < len(cls.REQUIRED_FIELDS):
            raise line.InvalidLineError("The minimum number of fields for "
                                        "a Path line is not reached.")

        path = Path()

        path_name = fv.validate(fields[0], cls.REQUIRED_FIELDS['path_name'])
        sequences_names = [fv.validate(label, cls.REQUIRED_FIELDS['seqs_names'])
                           for label in fields[1].split(",")]
        overlaps = fv.validate(fields[2], cls.REQUIRED_FIELDS['overlaps'])

        pfields.append(line.Field('path_name', path_name))
        pfields.append(line.Field('seqs_names', sequences_names))
        pfields.append(line.Field('overlaps', overlaps))

        for field in fields[3:]:
            pfields.append(line.OptField.from_string(field))

        for field in pfields:
            path.add_field(field)

        return path


if __name__ == '__main__':  # pragma: no cover
    pass
/**
 * Created by xiaoshan on 2016/2/18. 19:53
 */
public class ThreadPoolFactory {

    // volatile is required for safe double-checked locking: without it, another
    // thread may observe a reference to a partially constructed ThreadPoolProxy.
    private static volatile ThreadPoolProxy mNormalThreadPool;
    private static volatile ThreadPoolProxy mDownLoadThreadPool;

    public static ThreadPoolProxy getNormalThreadPool() {
        if (mNormalThreadPool == null) {
            synchronized (ThreadPoolProxy.class) {
                if (mNormalThreadPool == null) {
                    mNormalThreadPool = new ThreadPoolProxy(5, 5, 3000);
                }
            }
        }
        return mNormalThreadPool;
    }

    public static ThreadPoolProxy getDownLoadThreadPool() {
        if (mDownLoadThreadPool == null) {
            synchronized (ThreadPoolProxy.class) {
                if (mDownLoadThreadPool == null) {
                    mDownLoadThreadPool = new ThreadPoolProxy(3, 3, 3000);
                }
            }
        }
        return mDownLoadThreadPool;
    }
}
import { Epic } from "redux-observable"; import { of } from "rxjs"; import { ajax } from "rxjs/ajax"; import { catchError, filter, map, mergeMap } from "rxjs/operators"; import {AddTodoAction, FetchTodoAction, RemoveTodoAction, ToggleTodoAction } from "../ducks/todo"; let API_BASE_URL = "http://localhost:3000/todo"; const PATH = { ADD: "/add", ALL: "/all", TOGGLE: "/toggle", }; // TODO: move this to duck files // TODO: rename duck to store // TODO: create mapper files // TODO: try out Visual Studio // TODO: checkout prettier linting export const FetchTodoEpic: Epic = (actions$) => actions$.pipe( filter(FetchTodoAction.started.match), mergeMap((action) => ajax.get( API_BASE_URL + PATH.ALL).pipe( map((response) => FetchTodoAction.done({params: action.payload, result: response.response})), catchError((error, caught) => of(FetchTodoAction.failed({error})))))); export const AddTodoEpic: Epic = (actions$) => actions$.pipe( filter(AddTodoAction.started.match), mergeMap((action) => ajax.post( API_BASE_URL + PATH.ADD, action.payload).pipe( map((response) => AddTodoAction.done({params: action.payload, result: response.response})), catchError((error, caught) => of(AddTodoAction.failed({params: action.payload, error})))))); export const RemoveTodoEpic: Epic = (actions$) => { return actions$.pipe( filter(RemoveTodoAction.started.match), mergeMap((action) => ajax.delete(API_BASE_URL + "/" + action.payload.id).pipe( map((response) => RemoveTodoAction.done({params: action.payload, result: action.payload})), catchError((error, caught) => of(RemoveTodoAction.failed({params: action.payload, error})))))); }; export const ToggleTodoEpic: Epic = (actions$) => actions$.pipe( filter(ToggleTodoAction.started.match), mergeMap((action) => ajax.post( API_BASE_URL + "/" + action.payload.id + PATH.TOGGLE, action.payload).pipe( map((response) => ToggleTodoAction.done({params: action.payload, result: response.response})), catchError((error, caught) => of(ToggleTodoAction.failed({params: action.payload, error}))))));
# Date: 02/03/2019
# Author: Mohamed
# Description: Main file

from os import urandom

import requests as urlrequest
from urllib.parse import urlparse

from lib.database import Database
from flask import Flask, render_template, request, jsonify, redirect, abort


class Webserver:

    def __init__(self):
        self.database = Database()
        self.app = Flask(__name__)
        self.app.secret_key = urandom(0x200)

    @property
    def server_url(self):
        parse = urlparse(request.url)
        return '{}://{}/'.format(parse.scheme, parse.netloc)

    def add_paths(self):
        self.app.add_url_rule('/', 'index', self.index, defaults={'link_id': ''})
        self.app.add_url_rule('/<path:link_id>', 'index', self.index)
        self.app.add_url_rule('/create', 'create', self.create, methods=['POST'])

    def index(self, link_id):
        if link_id:
            if self.database.link_id_exists(link_id):
                url = self.database.get_link_url(link_id)
                return redirect(url)
            return abort(404)
        return render_template('index.html')

    def parser_url(self, url):
        """Normalize a submitted URL, preferring https when the scheme is missing."""
        parse = urlparse(url)
        link1 = '{}://{}{}{}{}{}'.format(
            'https' if not parse.scheme else parse.scheme,
            parse.netloc.lower(),
            parse.path,
            ';' + parse.params if parse.params else '',
            '?' + parse.query if parse.query else '',
            '#' + parse.fragment if parse.fragment else ''
        )
        # Only downgrade the scheme, never other occurrences of "https" in the URL.
        link2 = link1.replace('https://', 'http://', 1)

        try:
            # Probe the https variant; fall back to plain http if it is unreachable.
            urlrequest.get(link1, timeout=5)
            link = link1
        except Exception:
            link = link2

        # Reject empty URLs and links that point back at this shortener itself.
        return link if ((parse.netloc or parse.path) and urlparse(request.url).netloc != parse.netloc) else ''

    def get_link_id(self, link_url):
        url = urlparse(request.url).netloc
        link_id = self.database.generate_link_id(url)
        self.database.add_link(link_url, link_id)
        return link_id

    def create(self):
        if 'link' not in request.form:
            return jsonify({'resp': ''})

        link_url = request.form['link']
        link_url = self.parser_url(link_url)

        if not link_url:
            return jsonify({'resp': ''})

        if self.database.link_url_exists(link_url):
            return jsonify({'resp': self.server_url + self.database.get_link_id(link_url)})

        link_id = self.get_link_id(link_url)
        return jsonify({'resp': self.server_url + link_id})

    def start(self):
        self.add_paths()
        self.database.start()
        self.app.run(debug=False)


if __name__ == '__main__':
    webserver = Webserver()
    webserver.start()
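A minimal client-side sketch against a locally running instance (hypothetical host and port; Flask's development server defaults to 127.0.0.1:5000):

import requests

resp = requests.post('http://127.0.0.1:5000/create',
                     data={'link': 'example.com/some/page'})
print(resp.json()['resp'])  # the shortened URL, or '' if the link was rejected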
Deep-Sea Oil Plume Enriches Indigenous Oil-Degrading Bacteria

Diving into Deep Water

The Deepwater Horizon oil spill in the Gulf of Mexico was one of the largest oil spills on record. Its setting at the bottom of the sea floor posed an unanticipated risk as substantial amounts of hydrocarbons leaked into the deepwater column. Three separate cruises identified and sampled deep underwater hydrocarbon plumes that existed in May and June 2010, before the wellhead was ultimately sealed. Camilli et al. (p. 201; published online 19 August) used an automated underwater vehicle to assess the dimensions of a stabilized, diffuse underwater plume of oil that was 22 miles long and estimated the daily quantity of oil released from the well, based on the concentration and dimensions of the plume. Hazen et al. (p. 204; published online 26 August) also observed an underwater plume at the same depth and found that hydrocarbon-degrading bacteria were enriched in the plume and were breaking down some parts of the oil. Finally, Valentine et al. (p. 208; published online 16 September) found that natural gases, including propane and ethane, were also present in the hydrocarbon plumes. These gases were broken down quickly by bacteria but primed the system for biodegradation of larger hydrocarbons, including those comprising the leaking crude oil. Differences were observed in dissolved oxygen levels in the plumes (a proxy for bacterial respiration), which may reflect differences in the location of sampling or the aging of the plumes.

Cold-loving bacteria biodegrade hydrocarbons in the oil plume faster than expected. The biological effects and expected fate of the vast amount of oil in the Gulf of Mexico from the Deepwater Horizon blowout are unknown owing to the depth and magnitude of this event. Here, we report that the dispersed hydrocarbon plume stimulated deep-sea indigenous γ-Proteobacteria that are closely related to known petroleum degraders. Hydrocarbon-degrading genes coincided with the concentration of various oil contaminants. Changes in hydrocarbon composition with distance from the source and incubation experiments with environmental isolates demonstrated faster-than-expected hydrocarbon biodegradation rates at 5°C. Based on these results, the potential exists for intrinsic bioremediation of the oil plume in the deep-water column without substantial oxygen drawdown.
Cortical Involvement in Schizophrenics

The nature of schizophrenia has long been debated. The purpose of the present study was to investigate the hypothesis that typical/atypical schizophrenia (process/reactive) entails dysfunction in the frontal and temporal areas of the brain, respectively. Rather than the conventional method of group mean analysis, the inverse factor analytic procedure of profile analysis was used to isolate clusters of individual profiles whose performance over 53 neuropsychological variables was similar. Results did not substantiate this hypothesis but did suggest a possible brain-damage component in typical schizophrenics that was not present in atypical schizophrenics. These results represent the first time that a process/reactive continuum has been suggested by an inductive approach to analyzing the performance of schizophrenics on an extensive battery of psychological tests sensitive to brain damage.
Cardiac Catheterization After CABG With BIMA Grafting: Independent Predictors and Mid-term Bypass Viability.

INTRODUCTION Coronary artery bypass graft (CABG) patency is an important variable, but it is rarely studied as the main outcome. The best use of bilateral internal mammary artery (BIMA) grafting, regarding configuration type or combination with a saphenous vein graft (SVG), is still debated.

PURPOSE To find independent predictors of the need for cardiac catheterization and of significant lesions during CABG follow-up.

METHODS Retrospective cohort including all patients who underwent isolated CABG with BIMA grafts between 2004 and 2013 in a tertiary center. Preoperative, surgical, and postoperative data were collected from clinical files and informatics databases. Kaplan-Meier curves, Cox regression, and logistic regression were used to find predictors of the need for catheterization and of significant angiographic lesions after CABG. The secondary end-points studied were mid-term survival and the need for re-revascularization, either surgical or percutaneous.

RESULTS We included 1030 patients in this analysis. Median follow-up time was 5.5 years, and 150 (15%) patients were re-catheterized in that period. Most of these procedures were performed for suspected ischemia (74%), and 61 (41%) were positive for significant angiographic lesions of the conduits (IMA: 3.2% and SVG: 3.8%, p=0.488). In multivariate analysis, SVG use was found to be an independent predictor of cardiac catheterization on follow-up (HR: 1.610, 95% CI: 1.038-2.499, p=0.034). On the other hand, independent predictors of graft lesions were younger age (OR: 0.951, 95% CI: 0.921-0.982, p=0.002), female gender (OR: 2.231, 95% CI: 1.038-4.794, p=0.040), arterial hypertension (OR: 1.968, 95% CI: 1.022-3.791, p=0.043), and 3-vessel disease (OR: 2.820, 95% CI: 1.155-6.885, p=0.023). Among the patients with significant angiographic lesions, 48 underwent repeat revascularization (44 PCI and 4 CABG). Arterial hypertension and younger age were independent predictors of re-revascularization.

CONCLUSION In BIMA patients, the addition of SVG predicts the need for catheterization; however, the prevalence of significant angiographic lesions was similar for IMA and SVG. Our results suggest that arterial hypertension is an independent predictor of graft lesions and of the re-revascularization rate.
The student teacher and the school community of practice: an exploration of the contribution of the legitimate peripheral participant

Relentless reform and increased accountability in education in England have led to increasing attention on the effectiveness of teachers' professional development (PD). A shift away from top-down approaches to PD has placed more emphasis on in-house, collaborative models. This paper reports on qualitative research conducted in the south of England, which explored the notion of postgraduate certificate in education (PGCE) student teachers on school placements as legitimate peripheral participants in communities of practice. It focuses on the benefits to the old-timer of training a newcomer, rather than the original approach of examining how the community shapes the apprentice. It was found that school stakeholders recognized the positive contribution made to teachers' PD by the student teachers. This paper suggests that schools should be encouraged to build upon communities of practice to realize the benefits to themselves of engaging in training student teachers.
// src/pages/baseLayout/index.tsx (repo: ybuiw/ahws)
import BasicLayout, { SiderRouterProps } from '../../components/BasicLayout';
import { HomeOutlined } from '@ant-design/icons';

const route: SiderRouterProps = {
  routes: [
    {
      path: '/',
      redirect: '/welcome',
    },
    {
      path: '/welcome',
      isHideMenu: true,
    },
    {
      path: '/home3',
      name: 'Admin page',
      icon: <HomeOutlined />,
    },
    {
      path: '/home',
      name: 'Admin page',
      icon: <HomeOutlined />,
      routes: [
        {
          path: '/home/1',
          name: 'Test 1',
        },
        {
          path: '/home/2',
          name: 'Test 2',
          icon: 'CopyOutlined',
        }
      ]
    },
    {
      path: '/list',
      name: 'List page',
      icon: <HomeOutlined />,
      routes: [
        {
          path: '/list/1',
          name: 'Test 1',
          icon: 'CopyOutlined',
        },
        {
          path: '/list/2',
          name: 'Test 2',
          icon: 'CopyOutlined',
        }
      ]
    },
  ]
}

const Base = () => {
  return (
    <BasicLayout
      route={route}
      headerConfig={{
        title: "Admin System",
        logo: 'https://gw.alipayobjects.com/zos/rmsportal/KDpgvguMpGfqaHPjicRK.svg'
      }}
    >aaaa</BasicLayout>
  )
}

export default Base;
Amplification of single-molecule translocation signal using β-strand peptide functionalized nanopores.

Changes in ionic current flowing through nanopores due to binding or translocation of single biopolymer molecules enable their detection and characterization. It is, however, much more challenging to detect small molecules due to their rapid and small signal signature. Here we demonstrate the use of de novo designed peptides for functionalization of nanopores that enables the detection of small analytes at the single-molecule level. The detection relies on a cooperative peptide conformational change that is induced by the binding of the small molecule to a receptor domain on the peptide. This change alters the effective diameter of the nanopore and hence induces a current perturbation signal. On the basis of this approach, we demonstrate here the detection of diethyl 4-nitrophenyl phosphate (paraoxon), a poisonous organophosphate molecule. Paraoxon binding is induced by the incorporation of the catalytic triad of acetylcholine esterase in the hydrophilic domain of a short amphiphilic peptide and promotes β-sheet assembly of the peptide both in solution and for peptide molecules immobilized on solid surfaces. Nanopores coated with this peptide allowed the detection of paraoxon at the single-molecule level, revealing two binding arrangements. This unique approach hence provides the ability to study interactions of small molecules with the corresponding engineered receptors at the single-molecule level. Furthermore, the suggested versatile platform may be used for the development of highly sensitive small-analyte sensors.
Comparative stages of expression of human squamous carcinoma cells and carcinogen transformed keratinocytes.

The mouse monoclonal antibody OSU 22-3 was prepared using cells from a squamous cell carcinoma (SCC) as an immunogen. This antibody reacts with an antigen found on squamous cell carcinomas but does not react with normal keratinocytes. This antibody and two antibodies that react with normal keratinocytes were used as markers of the malignant and normal phenotypes. These markers were used to evaluate several spontaneous and carcinogen-initiated SCC tumors and to identify the expression of an antigen associated with a malignant phenotype. A variety of subpopulations in carcinogen-initiated tumors and spontaneous SCC tumors were noted. The subpopulations that reacted only with MoAb OSU 22-3 exhibited features of anchorage-independent growth and cellular invasiveness, and formed progressively growing tumors in nude mice. Other spontaneous SCC tumor cell subpopulations reacted with the antibodies associated with normal keratinocytes. These cells did not proliferate in vitro and did not form tumors in the nude mouse. There were other carcinogen-transformed cells that reacted with MoAb OSU 22-3 but not with the antibodies associated with normal keratinocytes. These cells exhibited anchorage-independent growth and cellular invasiveness but did not form tumors in nude mice. We conclude from this work that human SCC tumors contain multiple cell populations. These cell populations have varied growth properties and express surface antigens that may indicate their malignant vigor. Carcinogen-transformed keratinocytes do exhibit some of the characteristics of SCC tumor phenotypes, but not the property of malignant, progressively growing cells on a routine and consistent basis. This feature is transiently and inconsistently expressed in a surrogate host by populations prepared from spontaneous SCC tumors.
<gh_stars>1-10 export class MapboxMarkerHandler { private map: any; private mode = 'add'; private modes = ['add', 'select', 'remove']; private source = 'point'; private customizableProperties = ['style']; private geojson: any = false; private isCursorOverPoint = false; private isDragging = false; private clickEvent = true; private clickEventsReady = false; private selectEventsReady = false; private markers = []; private selected: number; private markerClickedFunc = this._markerClickedFunc.bind(this); private mouseDownFunc = this._mouseDown.bind(this); private mouseMoveFunc = this._onMove.bind(this); private mouseUpFunc = this._onUp.bind(this); private addMarkerOnMapFunc = this._addMarker.bind(this); private style = { 'id': 'point', 'type': 'circle', 'source': this.source, 'paint': { 'circle-radius': 10, 'circle-color': '#3887be' } }; initialized = false; constructor() {} init(map: any, properties: any = false) { this.map = map; if (properties) { for (const propName in properties) { if ( properties[propName] && this[propName] && this.customizableProperties.indexOf(propName) !== -1 ) { this[propName] = properties[propName]; } } this.customPropertiesConstraints(); } this.setMode('add'); this.initialized = true; this.defaultValues(); } defaultValues() { this.clickEvent = true; this.clickEventsReady = false; this.selectEventsReady = false; } setMap(map) { if (this.map) { this.map.off('click', this.style.id, this.markerClickedFunc); this.map.off('click', this.addMarkerOnMapFunc); } this.map = map; } getMode() { return this.mode; } setMode(mode: any, markerId: any = false) { if (this.modes.indexOf(mode) === -1) { return; } this.mode = mode; if (this.mode === 'select' && markerId) { if (this.markers.find(c => c.properties.id !== markerId)) { this.selected = markerId; } } else if (mode === 'add') { this.clickEvent = true; } this.handleMode(); } getModes() { return this.modes; } getStyle() { return this.style; } setStyle(style: any) { this.style = style; } trash() { if (this.markers.find(m => m.properties.id === this.selected)) { this.map.fire('latitudeMarkers:remove', this.markers.find(m => m.properties.id === this.selected)); this.markers = this.markers.filter(m => m.properties.id !== this.selected); this.geojson.features = this.markers; this.selected = -1; this.updateData(); // After a deletion if (!this.markers.length) { this.setMode('add'); } } } getSelected() { if (this.selected === -1) { return null; } return this.markers.length > 0 ? 
this.markers.find(m => m.properties.id === this.selected) : null; } getMarkers() { return this.markers; } add(m) { const geojson = this._addMarker({lngLat: {lng: m[0], lat: m[1]}}); return geojson; } remove(id: number) { if (this.markers.find(m => m.properties.id === id)) { this.markers = this.markers.filter(m => m.properties.id !== id); this.geojson.features = this.markers; this.updateData(); } } private getId() { if (!this.markers.length) { return 1; } return this.markers[this.markers.length - 1].properties.id + 1; } private customPropertiesConstraints() { // Style / Source - Relation if (this.style['source'] === undefined) { this.style['source'] = this.source; } else if (this.style['source'] && this.style['source'] !== this.source) { this.source = this.style['source']; } } private handleMode() { if (this.mode === 'add') { this.handleAdd(); } else if (this.mode === 'select') { this.handleSelect(); } else if (this.mode === 'remove') { this.handleRemoval(); } } private handleRemoval() { this.trash(); } private updateData() { const source = this.map.getSource(this.source); if (source) { source.setData(this.geojson); } } private handleAdd() { if (this.clickEventsReady) { return; } this.clickEventsReady = true; this.map.on('mouseenter', this.style.id, () => { this.map.getCanvas().style.cursor = 'pointer'; this.isCursorOverPoint = true; this.map.dragPan.disable(); }); this.map.on('mouseleave', this.style.id, () => { this.map.getCanvas().style.cursor = ''; this.isCursorOverPoint = false; this.map.dragPan.enable(); }); this.map.on('click', this.style.id, this.markerClickedFunc); this.map.on('click', this.addMarkerOnMapFunc); } private handleSelect() { if (!this.selectEventsReady) { this.map.on('mousedown', this.style.id, this.mouseDownFunc); } this.selectEventsReady = true; this.clickEvent = false; } private _mouseDown(e) { if (!this.isCursorOverPoint) { return; } this.selected = e.features[0].properties.id; this.isDragging = true; // Mouse events this.map.on('mousemove', this.mouseMoveFunc); this.map.once('mouseup', this.mouseUpFunc); } private _onMove(e) { if (!this.isDragging) return; const coords = e.lngLat; for (const m of this.markers) { if (m.properties.id === this.selected) { m.geometry.coordinates = [coords.lng, coords.lat]; } } this.updateData(); } private _onUp(e) { if (!this.isDragging) return; this.isDragging = false; this.map.off('mousemove', this.mouseMoveFunc); this.map.off('mouseup', this.mouseUpFunc); this.map.off('mousedown', this.mouseDownFunc); this.map.fire('latitudeMarkers:move', JSON.parse(JSON.stringify(this.markers.find(m => m.properties.id === this.selected)))); } private _markerClickedFunc(e) { if ( e.features && e.features[0] && e.features[0].properties && e.features[0].properties.id) { const id = e.features[0].properties.id; if (this.markers.find(m => m.properties.id === id)) { if (this.getMode() === 'select') { if (id === this.selected) { this.selected = -1; } else { this.selected = id; } } else if (this.getMode() === 'remove') { this.selected = id; this.trash(); } } } } private _addMarker(e) { if (!this.clickEvent) { return; } const geojson = { 'type': 'Feature', 'geometry': { 'type': 'Point', 'coordinates': [e.lngLat.lng, e.lngLat.lat] }, 'properties': { 'id': this.getId() } }; this.markers.push(geojson); if (!this.geojson) { this.geojson = { 'type': 'FeatureCollection', 'features': this.markers }; } else { this.updateData(); } if (!this.map.getSource(this.source)) { this.map.addSource(this.source, { 'type': 'geojson', 'data': this.geojson }); 
this.map.addLayer(this.style); } // Select the last marker this.selected = geojson.properties.id; this.setMode('select', this.selected); this.map.fire('latitudeMarkers:add', JSON.parse(JSON.stringify(geojson))); return geojson; } destroy() { this.initialized = false; if (this.map) { this.map.off('click', this.style.id, this.markerClickedFunc); this.map.off('click', this.addMarkerOnMapFunc); this.markers = []; if (this.geojson) { this.geojson.features = this.markers; } this.selected = -1; if (this.map.getLayer(this.style.id)) { this.map.removeLayer(this.style.id); } if (this.map.getSource(this.source)) { this.map.removeSource(this.source); } } } }
# repo: captnswing/webscreenshots
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models


class Migration(SchemaMigration):

    def forwards(self, orm):
        # Adding field 'WebSite.title'
        db.add_column(u'main_website', 'title',
                      self.gf('django.db.models.fields.CharField')(default='', max_length=250, blank=True),
                      keep_default=False)

    def backwards(self, orm):
        # Deleting field 'WebSite.title'
        db.delete_column(u'main_website', 'title')

    models = {
        u'main.website': {
            'Meta': {'ordering': "['url']", 'object_name': 'WebSite'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'title': ('django.db.models.fields.CharField', [], {'max_length': '250', 'blank': 'True'}),
            'url': ('django.db.models.fields.CharField', [], {'max_length': '250'})
        }
    }

    complete_apps = ['main']
Evaluating a Cultural Competency Curriculum: Changes in Dental Students' Perceived Awareness, Knowledge, and Skills.

In response to current and projected demographic changes in the United States, many dental schools have taken steps to increase the cultural competence of their students through various educational methods. The aim of this study was to evaluate the effectiveness of the cultural competency curriculum at Boston University Henry M. Goldman School of Dental Medicine (GSDM). The curriculum was evaluated using a pre/post design, utilizing an instrument developed for pharmacy students and modified for dental students. The questionnaire comprised 11 items designed to assess changes in students' awareness, knowledge, and skills in providing culturally competent care. Data were collected for two classes of second-year DMD students and first-year Advanced Standing students. The total number of returned surveys was 485, for a response rate of 79.5%. The students' post-curriculum mean scores were all higher than their pre-curriculum scores for overall cultural competence (pre 26.5±6.3 to post 29.8±7.2) and for the individual subscores on awareness (pre 5.3±1.4 to post 5.5±1.5), knowledge (pre 7.2±1.9 to post 8.1±2.1), and skills (pre 14.1±4.4 to post 16.2±4.4). The improvements on all scores were statistically significant (p<0.0001), with the exception of the awareness component. This evaluation suggests that the cultural competency curriculum at GSDM has been effective in improving these students' cultural competence in the domains of knowledge and skills.
# repo: GunpreetAhuja/StaticBugCheckers
'''
Created on Nov. 30, 2017

@author <NAME>
'''
import json
import os
import sys

from Util import load_parsed_diffs, load_parsed_sb, find_msg_by_proj_and_cls, \
    LineMatchesToMessages, CustomEncoder


def match_diff_sb(d, sb_list):
    """Return the warnings whose lines intersect the diff, plus the matched lines."""
    matches = []
    lines_matches = []
    for inst in sb_list:
        sb_lines = inst.unrollLines()
        if d.lines.intersection(sb_lines):
            matches.append(inst)
            lines_matches.extend(d.lines.intersection(sb_lines))
    return matches, set(lines_matches)


def get_hits_diffs_sb(diffs, sb_res):
    sb_count = 0
    sb_all_matches = []
    diffs_match_sb = []

    for d in diffs:
        proj = d.proj
        cls = d.cls
        sb_list = find_msg_by_proj_and_cls(proj, cls, sb_res)
        diff_sb, lines = match_diff_sb(d, sb_list)
        if diff_sb:
            sb_count += len(diff_sb)
            sb_all_matches.append(LineMatchesToMessages(lines, diff_sb))
            diffs_match_sb.extend(diff_sb)

    # print(sb_count)
    # return sb_all_matches
    return diffs_match_sb


if __name__ == '__main__':
    """Get line matches between each tool's warnings and bug-fix diffs."""
    diffs_file = os.path.join(os.getcwd(), sys.argv[1])
    diffs = load_parsed_diffs(diffs_file)

    sb_file = os.path.join(os.getcwd(), sys.argv[2])
    sb_res = load_parsed_sb(sb_file)

    diffs_sb = get_hits_diffs_sb(diffs, sb_res)

    output_file_name = "sb_diffs_warnings.json"
    with open(output_file_name, "w") as file:
        json.dump(diffs_sb, file, cls=CustomEncoder, indent=4)
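The matching rule in match_diff_sb reduces to a set intersection between the lines a fix touched and the lines a warning covers; a self-contained toy illustration (stand-in values, not the real diff/warning classes):

diff_lines = {10, 11, 12, 40}     # lines changed by the bug-fixing commit
warning_lines = {12, 13}          # lines flagged by the static checker
overlap = diff_lines & warning_lines
if overlap:
    print("warning overlaps the fix at line(s):", sorted(overlap))  # [12]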
Independent Country Program Review Peru 2017-2021 This Independent Country Program Review (ICPR) analyzes the Inter-American Development Bank (IDB) Group's strategy and program in Peru during the 2017-2021 period. ICPRs assess the relevance of the Bank's Country Strategy (CS) and provide additional information on the alignment and execution of the program. If the available information allows, ICPRs also report on progress toward the objectives set by the IDB Group in its Country Strategy. With this product, OVE seeks to provide the Boards of Executive Directors of the IDB and IDB Invest with useful information prior to their consideration of the new Country Strategy.
Peculiarities of the psychological training of military personnel for actions in extreme situations

The history of mankind offers many examples of crises and catastrophes. Almost the entire spectrum of natural disasters is possible in Kazakhstan, in particular earthquakes, floods, forest and steppe fires, snowstorms, and others. In all mountain and foothill zones there is a danger of landslides and the threat of snow drifts. In addition, there are situations that arise from man-made causes; these catastrophes are the result of human activity. Such extreme situations require the concentration of all of a person's physical and psychological capabilities. This is especially important for military personnel, who often operate in extreme or critical conditions. Psychological readiness to resolve such situations gives the individual confidence in the correctness of their own actions and ultimately leads to a successful result. The purpose of this article is to present the results of research carried out within the framework of a master's thesis. The main focus is on the analysis of the features of the psychological training of military personnel to act in an extreme situation. The authors approach the problem from the standpoint of modern approaches to the psychological training of military personnel in critical situations. The research methods used (analysis, generalization, experiment) allowed the authors to reveal the depth of a problem that is highly relevant to military psychology. The research draws on the results of modern studies by Kazakh and foreign scientists. Approaches to interpreting the essence of the concept "extreme situation" are considered, and the factors that determine the specifics of the psychological training of military personnel to act in a crisis are highlighted. The concept of the "psychological readiness of military personnel to work in critical (extreme) situations" is defined. Statistical results of the experiment are presented. The experimental work carried out has high practical significance, as it was successfully tested in military unit 3176 "K" in Pavlodar and can be used in the psychological training of military personnel. The article addresses a topical problem in psychology, and the results presented may be useful for military psychologists.
The newspaper has some harsh words for the incumbent president.

The New York Post has endorsed Gov. Mitt Romney for president. In an editorial Thursday, the newspaper cited what it calls "America's woeful economy and the demonstrated inability of President Obama to cope with it." It says Obama claims he inherited the mess, but the Post says the president has "done nothing to fix it." The paper says the debates showed Romney has the experience and temperament to address America's economic woes instead of "just blaming others." On foreign policy, the paper said the unrest in the Middle East "testifies to Obama's inability to get the job done."
export enum ProjectType { 'react' = 'react', 'vue' = 'vue', 'taro' = 'taro', 'uniapp' = 'uniapp', 'nest-prisma-restful' = 'nestjs-prisma-restful', 'nest-prisma-graphql' = 'nestjs-prisma-graphql', 'react+nestjs-prisma-restful' = 'react + nestjs-prisma-restful (monorepo)', 'react+nestjs-prisma-graphql' = 'react + nestjs-prisma-graphql (monorepo)' }
An Analysis of the Efficiency-Profitability Relationship

One of the aims of the liberalization, privatization, and globalization of the insurance market was to improve the performance of the public general insurers. More than a decade after the introduction of the reforms, it is essential to study whether they have produced the desired results. The present study is an endeavour in this direction. It is unique in that it explores the relationship between technical efficiency (TE) and profitability in the Indian public general insurers before and after the Liberalization, Privatization and Globalization (LPG) era. Four public general insurance companies are taken as the sample, and the study covers a time period of 21 years, that is, 1991-1992 to 2011-2012. The entire time period is further segregated into two parts: 1991-1992 to 1999-2000 as the pre-reform era and 2000-2001 to 2011-2012 as the post-reform era. The results of the efficiency-profitability matrix of the Indian public general insurers reveal that the position of these insurers was comparatively better in the pre-reform period, as the percentage of wastage of resources was lower in this period.
/******************************************************************************* * Copyright 2006 - 2012 Vienna University of Technology, * Department of Software Technology and Interactive Systems, IFS * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * This work originates from the Planets project, co-funded by the European Union under the Sixth Framework Programme. ******************************************************************************/ package eu.scape_project.planning.model; import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.persistence.CascadeType; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.Lob; import javax.persistence.OneToMany; import javax.persistence.OneToOne; import org.hibernate.annotations.Fetch; import org.hibernate.annotations.FetchMode; /** * This entity bean contains all information defined in the workflow step * 'Define Sample Records'. * * @author <NAME> */ @Entity public class SampleRecordsDefinition implements Serializable, ITouchable { private static final long serialVersionUID = 2022932652305694008L; @Id @GeneratedValue private int id; /** * Hibernate note: standard length for a string column is 255 validation is * broken because we use facelet templates (issue resolved in Seam 2.0) * therefore allow "long" entries */ @Lob private String samplesDescription; public SampleObject getFirstSampleWithFormat() { for (SampleObject sample : records) { if (sample.isFormatDefined()) { return sample; } } return null; } /** * The list of representative samples. * * Note: * - retaining the order of these samples is critical, as each value in {@link Values} * correspond to the sample with the same index * - Per default Hibernate uses the id of the objects to determine the position. * - This object owns the samples, only newly created samples are added * -> no index is required to retain the order */ @OneToMany(cascade = CascadeType.ALL, mappedBy = "sampleRecordsDefinition", fetch = FetchType.EAGER, orphanRemoval = true) @Fetch(value = FetchMode.SELECT) private List<SampleObject> records = new ArrayList<SampleObject>(); @OneToOne(cascade = CascadeType.ALL) private CollectionProfile collectionProfile = new CollectionProfile(); @OneToOne(cascade = CascadeType.ALL) private ChangeLog changeLog = new ChangeLog(); public String getSamplesDescription() { return samplesDescription; } public void setSamplesDescription(String samplesDescription) { this.samplesDescription = samplesDescription; } /** * Remember Hibernate might call a getter method multiple times during a * session. Also, make sure that a call to an accessor method couldn't do * anything weird ... like initialize a lazy collection or proxy. For some * classes it can be worthwhile to provide two get/set pairs for certain * properties - one pair for business code and one for Hibernate. 
*/
    public List<SampleObject> getRecords() {
        return records;
    }

    public void setRecords(List<SampleObject> records) {
        this.records = records;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public ChangeLog getChangeLog() {
        return changeLog;
    }

    public void setChangeLog(ChangeLog value) {
        changeLog = value;
    }

    public boolean isChanged() {
        return changeLog.isAltered();
    }

    public void touch() {
        changeLog.touch();
    }

    /**
     * @see ITouchable#handleChanges(IChangesHandler)
     */
    public void handleChanges(IChangesHandler h) {
        h.visit(this);
        // call handleChanges of all child elements
        for (SampleObject record : records) {
            record.handleChanges(h);
        }
    }

    /**
     * Adds the given record to the list of SampleRecords. Used for importing by
     * the digester.
     *
     * We have to ensure referential integrity!
     *
     * @param record
     */
    public void addRecord(SampleObject record) {
        // to ensure referential integrity
        record.setSampleRecordsDefinition(this);
        records.add(record);
    }

    public void removeRecord(SampleObject record) {
        records.remove(record);
    }

    public CollectionProfile getCollectionProfile() {
        return collectionProfile;
    }

    public void setCollectionProfile(CollectionProfile collectionProfile) {
        this.collectionProfile = collectionProfile;
    }

    public String getPuids() {
        ArrayList<String> puids = new ArrayList<String>();
        for (SampleObject r : records) {
            if (r.getFormatInfo().getPuid() == null || "".equals(r.getFormatInfo().getPuid())) {
                continue;
            }
            String puid = r.getFormatInfo().getPuid();
            if (!puids.contains(puid)) {
                puids.add(puid);
            }
        }
        StringBuffer puidsBuffer = new StringBuffer();
        for (String puid : puids) {
            puidsBuffer.append(puid).append(":");
        }
        return puidsBuffer.toString();
    }
}
AT&T Senior Executive Randall Stephenson (R) explains to President Donald Trump how 5G will be deployed in cities during the American Leadership in Emerging Technology Event at the White House, on June 22, 2017.

The Trump administration's ambitious plan to speed up 5G deployment across the country, namely by curbing the decision-making power of local governments over its construction, is headed for its first legal hurdle. Several U.S. cities are poised to seek relief from the courts, arguing, in part, that the regulations passed by the FCC last week are burdensome, unfairly limiting the cities' ability to recoup fees from telecom providers.

The Federal Communications Commission (FCC) voted in September to place the rollout of 5G, the next generation of wireless technology, in the hands of the federal government, stripping away the power of local officials to negotiate directly with telecoms such as AT&T over the cost and placement of 5G equipment. The effect of the change is this: cities will take in considerably less revenue from fees charged to telecom providers for mounting wireless equipment on city property. What's more, the amount of time local officials have to dispute where the equipment is placed has been reduced to a 60- to 90-day window.

5G connection speeds are reputed to be up to 100 times faster than the current generation of cellular service. But to accomplish this, 5G relies on high-frequency waves that cannot travel the same distance as current cellular technology allows. To achieve 5G speeds, cities and towns will inevitably do away with the massive cell towers recognizable to most Americans and replace them with thousands of smaller towers mounted primarily on utility poles throughout the city. Higher estimates place the number of new cell sites needed at roughly 100 times what's currently in place.

Talk of similar suits has sprung up across Massachusetts, the Boston Globe reported Tuesday.

The counter-argument offered by proponents of the FCC rules is that local governments would impede 5G deployment, amid a global race for higher internet speeds, by politicizing the process or charging telecom providers exorbitant fees. The move was praised, for example, by the Wall Street Journal editorial board, which argued that cities are likely to pit telecom companies against one another in an effort to jack up utility-pole rental fees beyond what the newspaper determined are fair market rates.

In San Jose, California, the WSJ noted, rental fees accrued from utility-pole placement are used by the city to help boost internet access in low-income neighborhoods. Los Angeles, too, has sought to balance the number of 5G permits handed out between wealthy and poorer areas, a move the newspaper painted as unnecessary and political because, it argued, "there will be sparse demand for 5G in low-income neighborhoods" over the next few years.
package com.lt.hm.wovideo.utils;

import android.util.Pair;

import java.util.Arrays;

/**
 * Array utils
 * <ul>
 * <li>{@link #isEmpty(Object[])} is null or its length is 0</li>
 * <li>{@link #getLast(Object[], Object, Object, boolean)} get the element before the first one that matches the
 * target element, front to back</li>
 * <li>{@link #getNext(Object[], Object, Object, boolean)} get the element after the first one that matches the
 * target element, front to back</li>
 * <li>{@link #getLast(Object[], Object, boolean)}</li>
 * <li>{@link #getLast(int[], int, int, boolean)}</li>
 * <li>{@link #getLast(long[], long, long, boolean)}</li>
 * <li>{@link #getNext(Object[], Object, boolean)}</li>
 * <li>{@link #getNext(int[], int, int, boolean)}</li>
 * <li>{@link #getNext(long[], long, long, boolean)}</li>
 * </ul>
 *
 * @author <a href="http://www.trinea.cn" target="_blank">Trinea</a> 2011-10-24
 */
public class ArrayUtils {

    private ArrayUtils() {
        throw new AssertionError();
    }

    /**
     * is null or its length is 0
     *
     * @param <V>
     * @param sourceArray
     * @return
     */
    public static <V> boolean isEmpty(V[] sourceArray) {
        return (sourceArray == null || sourceArray.length == 0);
    }

    /**
     * get the element before the first one that matches the target element, front to back
     * <ul>
     * <li>if the array is empty, return defaultValue</li>
     * <li>if the target element does not exist in the array, return defaultValue</li>
     * <li>if the target element exists in the array and its index is not 0, return the previous element</li>
     * <li>if the target element exists in the array and its index is 0, return the last one in the array if
     * isCircle is true, else return defaultValue</li>
     * </ul>
     *
     * @param <V>
     * @param sourceArray
     * @param value value of the target element
     * @param defaultValue default return value
     * @param isCircle whether to wrap around
     * @return
     */
    public static <V> V getLast(V[] sourceArray, V value, V defaultValue, boolean isCircle) {
        if (isEmpty(sourceArray)) {
            return defaultValue;
        }

        int currentPosition = -1;
        for (int i = 0; i < sourceArray.length; i++) {
            if (ObjectUtils.isEquals(value, sourceArray[i])) {
                currentPosition = i;
                break;
            }
        }
        if (currentPosition == -1) {
            return defaultValue;
        }

        if (currentPosition == 0) {
            return isCircle ? sourceArray[sourceArray.length - 1] : defaultValue;
        }
        return sourceArray[currentPosition - 1];
    }

    /**
     * get the element after the first one that matches the target element, front to back
     * <ul>
     * <li>if the array is empty, return defaultValue</li>
     * <li>if the target element does not exist in the array, return defaultValue</li>
     * <li>if the target element exists in the array and is not the last one, return the next element</li>
     * <li>if the target element exists in the array and is the last one, return the first one in the array if
     * isCircle is true, else return defaultValue</li>
     * </ul>
     *
     * @param <V>
     * @param sourceArray
     * @param value value of the target element
     * @param defaultValue default return value
     * @param isCircle whether to wrap around
     * @return
     */
    public static <V> V getNext(V[] sourceArray, V value, V defaultValue, boolean isCircle) {
        if (isEmpty(sourceArray)) {
            return defaultValue;
        }

        int currentPosition = -1;
        for (int i = 0; i < sourceArray.length; i++) {
            if (ObjectUtils.isEquals(value, sourceArray[i])) {
                currentPosition = i;
                break;
            }
        }
        if (currentPosition == -1) {
            return defaultValue;
        }

        if (currentPosition == sourceArray.length - 1) {
            return isCircle ? sourceArray[0] : defaultValue;
        }
        return sourceArray[currentPosition + 1];
    }

    /**
     * @see {@link ArrayUtils#getLast(Object[], Object, Object, boolean)} defaultValue is null
     */
    public static <V> V getLast(V[] sourceArray, V value, boolean isCircle) {
        return getLast(sourceArray, value, null, isCircle);
    }

    /**
     * @see {@link ArrayUtils#getNext(Object[], Object, Object, boolean)} defaultValue is null
     */
    public static <V> V getNext(V[] sourceArray, V value, boolean isCircle) {
        return getNext(sourceArray, value, null, isCircle);
    }

    /**
     * @see {@link ArrayUtils#getLast(Object[], Object, Object, boolean)} Object is Long
     */
    public static long getLast(long[] sourceArray, long value, long defaultValue, boolean isCircle) {
        if (sourceArray.length == 0) {
            throw new IllegalArgumentException("The length of source array must be greater than 0.");
        }

        Long[] array = ObjectUtils.transformLongArray(sourceArray);
        return getLast(array, value, defaultValue, isCircle);
    }

    /**
     * @see {@link ArrayUtils#getNext(Object[], Object, Object, boolean)} Object is Long
     */
    public static long getNext(long[] sourceArray, long value, long defaultValue, boolean isCircle) {
        if (sourceArray.length == 0) {
            throw new IllegalArgumentException("The length of source array must be greater than 0.");
        }

        Long[] array = ObjectUtils.transformLongArray(sourceArray);
        return getNext(array, value, defaultValue, isCircle);
    }

    /**
     * @see {@link ArrayUtils#getLast(Object[], Object, Object, boolean)} Object is Integer
     */
    public static int getLast(int[] sourceArray, int value, int defaultValue, boolean isCircle) {
        if (sourceArray.length == 0) {
            throw new IllegalArgumentException("The length of source array must be greater than 0.");
        }

        Integer[] array = ObjectUtils.transformIntArray(sourceArray);
        return getLast(array, value, defaultValue, isCircle);
    }

    /**
     * @see {@link ArrayUtils#getNext(Object[], Object, Object, boolean)} Object is Integer
     */
    public static int getNext(int[] sourceArray, int value, int defaultValue, boolean isCircle) {
        if (sourceArray.length == 0) {
            throw new IllegalArgumentException("The length of source array must be greater than 0.");
        }

        Integer[] array = ObjectUtils.transformIntArray(sourceArray);
        return getNext(array, value, defaultValue, isCircle);
    }

    /**
     * Get the dimension (nesting depth) of the array, e.g. 2 for int[][].
     *
     * @param objects
     * @return
     */
    public static int getArrayDimension(Object objects) {
        int dim = 0;
        for (int i = 0; i < objects.toString().length(); ++i) {
            if (objects.toString().charAt(i) == '[') {
                ++dim;
            } else {
                break;
            }
        }
        return dim;
    }

    public static Pair<Pair<Integer, Integer>, String> arrayToObject(Object object) {
        StringBuilder builder = new StringBuilder();
        int cross = 0, vertical = 0;
        if (object instanceof int[][]) {
            int[][] ints = (int[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (int[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof byte[][]) {
            byte[][] ints = (byte[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (byte[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof short[][]) {
            short[][] ints = (short[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (short[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof long[][]) {
            long[][] ints = (long[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (long[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof float[][]) {
            float[][] ints = (float[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (float[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof double[][]) {
            double[][] ints = (double[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (double[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof boolean[][]) {
            boolean[][] ints = (boolean[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (boolean[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else if (object instanceof char[][]) {
            char[][] ints = (char[][]) object;
            cross = ints.length;
            vertical = cross == 0 ? 0 : ints[0].length;
            for (char[] ints1 : ints) {
                builder.append(arrayToString(ints1).second + "\n");
            }
        } else {
            Object[][] objects = (Object[][]) object;
            cross = objects.length;
            vertical = cross == 0 ? 0 : objects[0].length;
            for (Object[] objects1 : objects) {
                builder.append(arrayToString(objects1).second + "\n");
            }
        }
        return Pair.create(Pair.create(cross, vertical), builder.toString());
    }

    /**
     * Convert an array to its string representation, returning (length, string).
     *
     * @param object
     * @return
     */
    public static Pair arrayToString(Object object) {
        StringBuilder builder = new StringBuilder("[");
        int length = 0;
        if (object instanceof int[]) {
            int[] ints = (int[]) object;
            length = ints.length;
            for (int i : ints) {
                builder.append(i + ",\t");
            }
        } else if (object instanceof byte[]) {
            byte[] bytes = (byte[]) object;
            length = bytes.length;
            for (byte item : bytes) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof short[]) {
            short[] shorts = (short[]) object;
            length = shorts.length;
            for (short item : shorts) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof long[]) {
            long[] longs = (long[]) object;
            length = longs.length;
            for (long item : longs) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof float[]) {
            float[] floats = (float[]) object;
            length = floats.length;
            for (float item : floats) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof double[]) {
            double[] doubles = (double[]) object;
            length = doubles.length;
            for (double item : doubles) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof boolean[]) {
            boolean[] booleans = (boolean[]) object;
            length = booleans.length;
            for (boolean item : booleans) {
                builder.append(item + ",\t");
            }
        } else if (object instanceof char[]) {
            char[] chars = (char[]) object;
            length = chars.length;
            for (char item : chars) {
                builder.append(item + ",\t");
            }
        } else {
            Object[] objects = (Object[]) object;
            length = objects.length;
            for (Object item : objects) {
                builder.append(BaseUtil.objectToString(item) + ",\t");
            }
        }
        return Pair.create(length, builder.replace(builder.length() - 2, builder.length(), "]").toString());
    }

    /**
     * Whether the given object is an array.
     *
     * @param object
     * @return
     */
    public static boolean isArray(Object object) {
        return object.getClass().isArray();
    }

    /**
     * Get the JVM type code of the array's components, e.g. 'I' for an int array, 'L' for an object array.
     *
     * @param object
     * @return
     */
    public static char getType(Object object) {
        if (isArray(object)) {
            String str = object.toString();
            return str.substring(str.lastIndexOf("[") + 1, str.lastIndexOf("[") + 2).charAt(0);
        }
        return 0;
    }

    /**
     * Recursively traverse an array of any dimension, appending its contents to result.
     *
     * @param result
     * @param object
     */
    private static void traverseArray(StringBuilder result, Object object) {
        if (!isArray(object)) {
            result.append(object.toString());
            return;
        }
        if (getArrayDimension(object) == 1) {
            switch (getType(object)) {
                case 'I':
                    result.append(Arrays.toString((int[]) object)).append("\n");
                    return;
                case 'D':
                    result.append(Arrays.toString((double[]) object)).append("\n");
                    return;
                case 'Z':
                    result.append(Arrays.toString((boolean[]) object)).append("\n");
                    return;
                case 'B':
                    result.append(Arrays.toString((byte[]) object)).append("\n");
                    return;
                case 'S':
                    result.append(Arrays.toString((short[]) object)).append("\n");
                    return;
                case 'J':
                    result.append(Arrays.toString((long[]) object)).append("\n");
                    return;
                case 'F':
                    result.append(Arrays.toString((float[]) object)).append("\n");
                    return;
                case 'C':
                    result.append(Arrays.toString((char[]) object)).append("\n");
                    return;
                case 'L':
                    result.append(Arrays.toString((Object[]) object)).append("\n");
                default:
                    return;
            }
        }
        for (int i = 0; i < ((Object[]) object).length; i++) {
            traverseArray(result, ((Object[]) object)[i]);
        }
    }

    public static String traverseArray(Object object) {
        StringBuilder result = new StringBuilder();
        traverseArray(result, object);
        return result.toString();
    }
}
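The circular next/last lookup above is language-independent; a minimal Python sketch of the same idea (hypothetical function name, default handling simplified):

def get_next(items, value, default=None, circle=False):
    # Mirrors ArrayUtils.getNext: find the first match, then step forward,
    # wrapping to the start only when circle is True.
    try:
        i = items.index(value)
    except ValueError:
        return default
    if i == len(items) - 1:
        return items[0] if circle else default
    return items[i + 1]

assert get_next([1, 2, 3], 3, circle=True) == 1
assert get_next([1, 2, 3], 3, circle=False) is None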
import chai, { expect } from 'chai' import { Contract, utils, BigNumber, constants} from 'ethers' import { solidity, MockProvider, createFixtureLoader } from 'ethereum-waffle' import { expandTo18Decimals, encodePrice } from '../shared/utilities' import { pairFixture_rEqualsPoint1 } from '../shared/fixtures' import { mineBlock } from '../utils' import { toType, TypeOutput } from 'ethereumjs-util' const MINIMUM_LIQUIDITY = BigNumber.from(10).pow(3) chai.use(solidity) const overrides = { gasLimit: 9999999 } describe('HamSwapV2Pair works well with r = 0.1', () => { const provider = new MockProvider({ ganacheOptions: { hardfork: 'istanbul', mnemonic: 'horn horn horn horn horn horn horn horn horn horn horn horn', gasLimit: 99999999, }, }) const [wallet, other] = provider.getWallets() const loadFixture = createFixtureLoader([wallet], provider) let factory: Contract let token0: Contract let token1: Contract let pair: Contract let virt: BigNumber let base: BigNumber beforeEach(async () => { const fixture = await loadFixture(pairFixture_rEqualsPoint1) factory = fixture.factory token0 = fixture.token0 token1 = fixture.token1 pair = fixture.pair virt = fixture.virt base = BigNumber.from(10000) }) it('mint:hamm', async () => { const token0Amount = expandTo18Decimals(1) const token1Amount = expandTo18Decimals(4) await token0.transfer(pair.address, token0Amount) await token1.transfer(pair.address, token1Amount) const expectedLiquidity = expandTo18Decimals(2).mul(virt.add(base)).div(base) const v0 = token0Amount.mul(virt).div(base); const v1 = token1Amount.mul(virt).div(base); const reserve0 = token0Amount.add(v0); const reserve1 = token1Amount.add(v1); await expect(pair.mint(wallet.address, overrides)) .to.emit(pair, 'Transfer') .withArgs(constants.AddressZero, constants.AddressZero, MINIMUM_LIQUIDITY) .to.emit(pair, 'Transfer' ) .withArgs(constants.AddressZero, wallet.address, expectedLiquidity.sub(MINIMUM_LIQUIDITY)) .to.emit(pair, 'Sync') .withArgs(reserve0, reserve1) .to.emit(pair, 'Mint') .withArgs(wallet.address, token0Amount, token1Amount) expect(await pair.totalSupply()).to.eq(expectedLiquidity) expect(await pair.balanceOf(wallet.address)).to.eq(expectedLiquidity.sub(MINIMUM_LIQUIDITY)) expect(await token0.balanceOf(pair.address)).to.eq(token0Amount) expect(await token1.balanceOf(pair.address)).to.eq(token1Amount) const reserves = await pair.getReserves() expect(reserves[0]).to.eq(reserve0) expect(reserves[1]).to.eq(reserve1) }) async function addLiquidity(token0Amount: BigNumber, token1Amount: BigNumber) { await token0.transfer(pair.address, token0Amount) await token1.transfer(pair.address, token1Amount) await pair.mint(wallet.address, overrides) } let calcExpectedOutputAmount = function (v0: BigNumber, v1: BigNumber, r0: BigNumber, r1: BigNumber, input0: BigNumber) { let output: BigNumber let reserve0 = v0.add(r0) let reserve1 = v1.add(r1) output = getAmountOut(input0, reserve0, reserve1) return output; } let getAmountOut = function (amountIn: BigNumber, reserveIn: BigNumber, reserveOut: BigNumber) { let output: BigNumber const amountInWithFee = amountIn.mul(BigNumber.from(997)) const numerator = amountInWithFee.mul(reserveOut) const denominator = reserveIn.mul(BigNumber.from(1000)).add(amountInWithFee) output = numerator.div(denominator) return output } const swapTestCases_1000: BigNumber[][] = [ [1, 5, 10, '1662497915624478906'], [1, 10, 5, '453305446940074565'], [2, 5, 10, '2851015155847869602'], [2, 10, 5, '831248957812239453'], [1, 10, 10, '906610893880149131'], [1, 100, 100, 
'987158034397061298'], [1, 1000, 1000, '996006981039903216'] ].map(a => a.map(n => (typeof n === 'string' ? BigNumber.from(n) : expandTo18Decimals(n)))) swapTestCases_1000.forEach((swapTestCase_1000, i) => { it(`getInputPrice:hamm:${i}`, async () => { const [swapAmount, token0Amount, token1Amount, ] = swapTestCase_1000 await addLiquidity(token0Amount, token1Amount) const v0 = token0Amount.mul(virt).div(base) const v1 = token1Amount.mul(virt).div(base) const expectedOutputAmount = calcExpectedOutputAmount( v0, v1, token0Amount, token1Amount, swapAmount ) await token0.transfer(pair.address, swapAmount) await expect(pair.swap(0, expectedOutputAmount.add(1), wallet.address, '0x', overrides)).to.be.revertedWith( 'HamSwapV2: K' ) await pair.swap(0, expectedOutputAmount, wallet.address, '0x', overrides) }) }) let getAmountIn = function (amountOut: BigNumber, reserveIn: BigNumber, reserveOut: BigNumber) { const numerator = reserveIn.mul(amountOut).mul(BigNumber.from(1000)) const denominator = reserveOut.sub(amountOut).mul(BigNumber.from(997)) let output: BigNumber = numerator.div(denominator) return output.add(BigNumber.from(1)) } const optimisticTestCases: BigNumber[][] = [ ['997000000000000000', 5, 10, 1], // given amountIn, amountOut = floor(amountIn * .997) ['997000000000000000', 10, 5, 1], ['997000000000000000', 5, 5, 1], [1, 5, 5, '1003009027081243732'] // given amountOut, amountIn = ceiling(amountOut / .997) ].map(a => a.map(n => (typeof n === 'string' ? BigNumber.from(n) : expandTo18Decimals(n)))) optimisticTestCases.forEach((optimisticTestCase, i) => { it(`optimistic:hamm:${i}`, async () => { const [amount0Out, token0Amount, token1Amount, ] = optimisticTestCase await addLiquidity(token0Amount, token1Amount) const amount1Input = getAmountIn( amount0Out, token1Amount.add(token1Amount.mul(virt).div(base)), token0Amount.add(token0Amount.mul(virt).div(base)) ) await token1.transfer(pair.address, amount1Input) await expect(pair.swap(amount0Out.add(1), 0, wallet.address, '0x', overrides)).to.be.revertedWith( 'HamSwapV2: K' ) await pair.swap(amount0Out, 0, wallet.address, '0x', overrides) }) }) it('swap:token0:hamm', async () => { const token0Amount = expandTo18Decimals(5) const token1Amount = expandTo18Decimals(10) await addLiquidity(token0Amount, token1Amount) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const swapAmount = expandTo18Decimals(1) const expectedOutputAmount = getAmountOut(swapAmount, r0, r1) await token0.transfer(pair.address, swapAmount) await expect(pair.swap(0, expectedOutputAmount, wallet.address, '0x', overrides)) .to.emit(token1, 'Transfer') .withArgs(pair.address, wallet.address, expectedOutputAmount) .to.emit(pair, 'Sync') .withArgs(r0.add(swapAmount), r1.sub(expectedOutputAmount)) .to.emit(pair, 'Swap') .withArgs(wallet.address, swapAmount, 0, 0, expectedOutputAmount, wallet.address) // const res = await pair.getReserves(); // console.log("res0: ", res[0].toString(), ", r0: ", r0.add(swapAmount).toString()) // console.log("res1: ", res[1].toString(), ", r1: ", r1.sub(expectedOutputAmount).toString()) const reserves = await pair.getReserves() expect(reserves[0]).to.eq(r0.add(swapAmount)) expect(reserves[1]).to.eq(r1.sub(expectedOutputAmount)) expect(await token0.balanceOf(pair.address)).to.eq(token0Amount.add(swapAmount)) expect(await token1.balanceOf(pair.address)).to.eq(token1Amount.sub(expectedOutputAmount)) const totalSupplyToken0 = await token0.totalSupply() const totalSupplyToken1 = await 
token1.totalSupply() expect(await token0.balanceOf(wallet.address)).to.eq(totalSupplyToken0.sub(token0Amount).sub(swapAmount)) expect(await token1.balanceOf(wallet.address)).to.eq(totalSupplyToken1.sub(token1Amount).add(expectedOutputAmount)) }) it('swap:token1:hamm', async () => { const token0Amount = expandTo18Decimals(5) const token1Amount = expandTo18Decimals(10) await addLiquidity(token0Amount, token1Amount) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const swapAmount = expandTo18Decimals(1) const expectedOutputAmount = getAmountOut(swapAmount, r1, r0) await token1.transfer(pair.address, swapAmount) await expect(pair.swap(expectedOutputAmount, 0, wallet.address, '0x', overrides)) .to.emit(token0, 'Transfer') .withArgs(pair.address, wallet.address, expectedOutputAmount) .to.emit(pair, 'Sync') .withArgs(r0.sub(expectedOutputAmount), r1.add(swapAmount)) .to.emit(pair, 'Swap') .withArgs(wallet.address, 0, swapAmount, expectedOutputAmount, 0, wallet.address) const reserves = await pair.getReserves() expect(reserves[0]).to.eq(r0.sub(expectedOutputAmount)) expect(reserves[1]).to.eq(r1.add(swapAmount)) expect(await token0.balanceOf(pair.address)).to.eq(token0Amount.sub(expectedOutputAmount)) expect(await token1.balanceOf(pair.address)).to.eq(token1Amount.add(swapAmount)) const totalSupplyToken0 = await token0.totalSupply() const totalSupplyToken1 = await token1.totalSupply() expect(await token0.balanceOf(wallet.address)).to.eq(totalSupplyToken0.sub(token0Amount).add(expectedOutputAmount)) expect(await token1.balanceOf(wallet.address)).to.eq(totalSupplyToken1.sub(token1Amount).sub(swapAmount)) }) it('swap:gas:hamm', async () => { const token0Amount = expandTo18Decimals(5) const token1Amount = expandTo18Decimals(10) await addLiquidity(token0Amount, token1Amount) // ensure that setting price{0,1}CumulativeLast for the first time doesn't affect our gas math await mineBlock(provider, (await provider.getBlock('latest')).timestamp + 1) await pair.sync(overrides) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const swapAmount = expandTo18Decimals(1) const expectedOutputAmount = getAmountOut(swapAmount, r1, r0) await token1.transfer(pair.address, swapAmount) await mineBlock(provider, (await provider.getBlock('latest')).timestamp + 1) const tx = await pair.swap(expectedOutputAmount, 0, wallet.address, '0x', overrides) const receipt = await tx.wait() expect(receipt.gasUsed).to.eq(73462) }) let sqrt = function(y: BigNumber) { let z: BigNumber = BigNumber.from(0) if (y.gt(BigNumber.from(3))) { z = y let x: BigNumber = y.div(2).add(1) while(x.lt(z)) { z = x x = y.div(x).add(x).div(2) } } else if (!y.eq(0)) { z = BigNumber.from(1) } return z; } let calcExpectedLiquidity = function (r0: BigNumber, r1: BigNumber, input0: BigNumber, input1: BigNumber, l: BigNumber) { let inc: BigNumber if (!r0.eq(constants.Zero) && !r1.eq(constants.Zero)) { let inc0 = input0.mul(r0).div(l) let inc1 = input1.mul(r1).div(l) inc = inc0.gt(inc1) ? 
inc1 : inc0 } else if (r0.eq(constants.Zero) && !r1.eq(constants.Zero)) { inc = input1.mul(r1).div(l) } else if (!r0.eq(constants.Zero) && r1.eq(constants.Zero)) { inc = input0.mul(r0).div(l) } else { inc = sqrt(input0.mul(input1)).mul(virt.add(base)).div(base) // including MINIMUM_LIQUIDITY } return inc } it('burn:hamm', async () => { const token0Amount = expandTo18Decimals(3) const token1Amount = expandTo18Decimals(3) const expectedLiquidity = calcExpectedLiquidity(constants.Zero, constants.Zero, token0Amount, token1Amount, constants.Zero) await addLiquidity(token0Amount, token1Amount) expect(await pair.balanceOf(wallet.address)).to.eq(expectedLiquidity.sub(MINIMUM_LIQUIDITY)) await pair.transfer(pair.address, expectedLiquidity.sub(MINIMUM_LIQUIDITY)) const v0 = token0Amount.mul(virt).div(base) const left0 = /* real_left0 + virtual_left0 */ token0Amount.sub( token0Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity) ).add( v0.sub(v0.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity)) ) const v1 = token1Amount.mul(virt).div(base) const left1 = token1Amount.sub( token1Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity) ).add( v1.sub(v1.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity)) ) await expect(pair.burn(wallet.address, overrides)) .to.emit(pair, 'Transfer') .withArgs(pair.address, constants.AddressZero, expectedLiquidity.sub(MINIMUM_LIQUIDITY)) .to.emit(token0, 'Transfer') .withArgs(pair.address, wallet.address, token0Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity)) .to.emit(token1, 'Transfer') .withArgs(pair.address, wallet.address, token1Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity)) .to.emit(pair, 'Sync') .withArgs(left0, left1) .to.emit(pair, 'Burn') .withArgs(wallet.address, token0Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity), token1Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity), wallet.address) expect(await pair.balanceOf(wallet.address)).to.eq(0) expect(await pair.totalSupply()).to.eq(MINIMUM_LIQUIDITY) expect(await token0.balanceOf(pair.address)).to.eq( token0Amount.sub( token0Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity) ) ) expect(await token1.balanceOf(pair.address)).to.eq(token1Amount.sub(token1Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity))) const totalSupplyToken0 = await token0.totalSupply() const totalSupplyToken1 = await token1.totalSupply() expect(await token0.balanceOf(wallet.address)).to.eq(totalSupplyToken0.sub( token0Amount.sub( token0Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity) ) )) expect(await token1.balanceOf(wallet.address)).to.eq(totalSupplyToken1.sub( token1Amount.sub( token1Amount.mul(expectedLiquidity.sub(MINIMUM_LIQUIDITY)).div(expectedLiquidity) ) )) }) it('cumulativeLast:hamm', async () => { const token0Amount = expandTo18Decimals(3) const token1Amount = expandTo18Decimals(3) await addLiquidity(token0Amount, token1Amount) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const blockTimestamp = (await pair.getReserves())[2] await mineBlock(provider, blockTimestamp + 1) await pair.sync(overrides) const initialPrice = encodePrice(r0, r1) expect(await pair.price0CumulativeLast()).to.eq(initialPrice[0]) expect(await pair.price1CumulativeLast()).to.eq(initialPrice[1]) expect((await 
pair.getReserves())[2]).to.eq(blockTimestamp + 1) const swapAmount = expandTo18Decimals(3) await token0.transfer(pair.address, swapAmount) await mineBlock(provider, blockTimestamp + 10) // swap to a new price eagerly instead of syncing const out1Amount = expandTo18Decimals(1) await pair.swap(0, out1Amount, wallet.address, '0x', overrides) // make the price nice expect(await pair.price0CumulativeLast()).to.eq(initialPrice[0].mul(10)) expect(await pair.price1CumulativeLast()).to.eq(initialPrice[1].mul(10)) expect((await pair.getReserves())[2]).to.eq(blockTimestamp + 10) await mineBlock(provider, blockTimestamp + 20) await pair.sync(overrides) const newPrice = encodePrice(r0.add(swapAmount), r1.sub(out1Amount)) expect(await pair.price0CumulativeLast()).to.eq(initialPrice[0].mul(10).add(newPrice[0].mul(10))) expect(await pair.price1CumulativeLast()).to.eq(initialPrice[1].mul(10).add(newPrice[1].mul(10))) expect((await pair.getReserves())[2]).to.eq(blockTimestamp + 20) }) it('feeTo:off:hamm', async () => { const token0Amount = expandTo18Decimals(1000) const token1Amount = expandTo18Decimals(1000) await addLiquidity(token0Amount, token1Amount) const expectedLiquidity = calcExpectedLiquidity(constants.Zero, constants.Zero, token0Amount, token1Amount, constants.Zero) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const swapAmount = expandTo18Decimals(1) const expectedOutputAmount = getAmountOut(swapAmount, r1, r0) await token1.transfer(pair.address, swapAmount) await pair.swap(expectedOutputAmount, 0, wallet.address, '0x', overrides) await pair.transfer(pair.address, expectedLiquidity.sub(MINIMUM_LIQUIDITY)) await pair.burn(wallet.address, overrides) expect(await pair.totalSupply()).to.eq(MINIMUM_LIQUIDITY) }) let calLiquidityFee = function(reserve0: BigNumber, reserve1: BigNumber, kLast: BigNumber, supply: BigNumber) { let output: BigNumber = BigNumber.from(0) const liquidityFee = BigNumber.from(1) const liquidityFeeBase = BigNumber.from(6) let rootK = sqrt(reserve0.mul(reserve1)) let rootKLast = sqrt(kLast) if (rootK.gt(rootKLast)) { let numerator = supply.mul(rootK.sub(rootKLast)) let denominator = liquidityFeeBase.sub(liquidityFee).mul(rootK).add( liquidityFee.mul(rootKLast) ) output = numerator.div(denominator) } return output } let calcPairRemaingsAfterRemove = function (r0: BigNumber, r1: BigNumber, supply: BigNumber, remove: BigNumber) { let remain0: BigNumber let remain1: BigNumber remain0 = r0.sub(r0.mul(remove).div(supply)) remain1 = r1.sub(r1.mul(remove).div(supply)) return {remain0, remain1} } it('feeTo:on:hamm', async () => { await factory.setFeeTo(other.address) const token0Amount = expandTo18Decimals(1000) const token1Amount = expandTo18Decimals(1000) await addLiquidity(token0Amount, token1Amount) const expectedLiquidity = calcExpectedLiquidity(constants.Zero, constants.Zero, token0Amount, token1Amount, constants.Zero) const r0 = token0Amount.add(token0Amount.mul(virt).div(base)) const r1 = token1Amount.add(token1Amount.mul(virt).div(base)) const swapAmount = expandTo18Decimals(1) const expectedOutputAmount = getAmountOut(swapAmount, r1, r0) await token1.transfer(pair.address, swapAmount) await pair.swap(expectedOutputAmount, 0, wallet.address, '0x', overrides) const feeLiquidity = calLiquidityFee(r0.sub(expectedOutputAmount), r1.add(swapAmount), r0.mul(r1), expectedLiquidity) await pair.transfer(pair.address, expectedLiquidity.sub(MINIMUM_LIQUIDITY)) await pair.burn(wallet.address, overrides) expect(await 
pair.totalSupply()).to.eq(MINIMUM_LIQUIDITY.add(feeLiquidity)) expect(await pair.balanceOf(other.address)).to.eq(feeLiquidity) // using 1000 here instead of the symbolic MINIMUM_LIQUIDITY because the amounts only happen to be equal... // ...because the initial liquidity amounts were equal let res = calcPairRemaingsAfterRemove(token0Amount.sub(expectedOutputAmount), token1Amount.add(swapAmount), expectedLiquidity.add(feeLiquidity), expectedLiquidity.sub(MINIMUM_LIQUIDITY)) expect(await token0.balanceOf(pair.address)).to.eq(res.remain0) expect(await token1.balanceOf(pair.address)).to.eq(res.remain1) }) })
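The arithmetic these tests exercise is Uniswap-V2-style constant-product pricing with a 0.3% fee (the 997/1000 factors above), applied to reserves inflated by a virtual component (reserve + reserve*virt/base). A minimal Python sketch of that math follows; getAmountOut is defined earlier in the suite and is assumed here to take the standard V2 form, and the virt/base values are placeholders rather than the suite's actual configuration.

def get_amount_out(amount_in, reserve_in, reserve_out):
    # out = floor(997*in*Rout / (1000*Rin + 997*in)); a 0.3% fee on the input
    fee_adjusted = amount_in * 997
    return fee_adjusted * reserve_out // (reserve_in * 1000 + fee_adjusted)

def get_amount_in(amount_out, reserve_in, reserve_out):
    # inverse of the above; the trailing +1 rounds up, matching getAmountIn in the tests
    numerator = reserve_in * amount_out * 1000
    denominator = (reserve_out - amount_out) * 997
    return numerator // denominator + 1

def virtual_reserve(real, virt, base):
    # the suite prices swaps against real + virtual liquidity
    return real + real * virt // base

# Example with 18-decimal fixed point, mirroring expandTo18Decimals(5) / (10):
E18 = 10 ** 18
virt, base = 1, 2  # placeholder values; the suite's virt/base are configured elsewhere
r0 = virtual_reserve(5 * E18, virt, base)
r1 = virtual_reserve(10 * E18, virt, base)
print(get_amount_out(1 * E18, r0, r1))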
def stop_details(self, stop):
    response = {
        ATTR_STATUS: 'n/a',
        ATTR_TRAMS: []
    }
    # Copy the defaults so repeated calls don't mutate the module-level dict.
    luas_params = dict(DEFAULT_PARAMS)
    selected_stop = self._stops.stop(stop)
    if selected_stop is None:
        _LOGGER.error("Stop '%s' is not valid", stop)
        return response
    luas_params[ATTR_STOP_VAL] = selected_stop[ATTR_ABBREV]
    if self._use_gzip:
        self._session.headers.update({'Accept-Encoding': 'gzip'})
    api_response = self._session.get(self._api_endpoint, params=luas_params)
    if api_response.status_code == 200:
        _LOGGER.debug('Response received for %s', stop)
        try:
            tree = ElementTree.fromstring(api_response.content)
            status = tree.find(XPATH_STATUS).text.strip()
            trams = []
            result = tree.findall(XPATH_DIRECTION_INBOUND)
            if result is not None:
                for tram in result:
                    if tram.attrib[ATTR_DESTINATION_VAL] != ATTR_NO_TRAMS:
                        trams.append({
                            ATTR_DUE: tram.attrib[ATTR_DUE_VAL],
                            ATTR_DIRECTION: ATTR_INBOUND_VAL,
                            ATTR_DESTINATION: tram.attrib[ATTR_DESTINATION_VAL]
                        })
            result = tree.findall(XPATH_DIRECTION_OUTBOUND)
            if result is not None:
                for tram in result:
                    if tram.attrib[ATTR_DESTINATION_VAL] != ATTR_NO_TRAMS:
                        trams.append({
                            ATTR_DUE: tram.attrib[ATTR_DUE_VAL],
                            ATTR_DIRECTION: ATTR_OUTBOUND_VAL,
                            ATTR_DESTINATION: tram.attrib[ATTR_DESTINATION_VAL]
                        })
            response[ATTR_STATUS] = status
            response[ATTR_TRAMS] = trams
        except ParseError as parse_err:
            _LOGGER.error(
                'There was a problem parsing the Luas API response %s',
                parse_err
            )
            _LOGGER.error('Entire response: %s', api_response.content)
        except AttributeError as attrib_err:
            _LOGGER.error(
                'There was a problem parsing the Luas API response %s',
                attrib_err)
            _LOGGER.error('Entire response: %s', api_response.content)
    else:
        _LOGGER.error(
            'HTTP error processing Luas response %s',
            api_response.status_code
        )
    return response
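For context, here is a hedged, standalone sketch of the kind of XML payload this parser walks. The element and attribute names below are illustrative assumptions; the real tag names live in the module's XPATH_* and ATTR_* constants, which are not shown here.

from xml.etree import ElementTree

# Hypothetical payload shaped like a Luas forecasting response.
SAMPLE = """<stopInfo stop="Ranelagh">
  <message>Green Line services operating normally</message>
  <direction name="Inbound"><tram dueMins="5" destination="Broombridge"/></direction>
  <direction name="Outbound"><tram dueMins="8" destination="Bride's Glen"/></direction>
</stopInfo>"""

tree = ElementTree.fromstring(SAMPLE)
status = tree.find('message').text.strip()
inbound = [t.attrib for t in tree.findall(".//direction[@name='Inbound']/tram")]
print(status, inbound)  # status string plus a list of due/destination dicts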
def Retrieve_DICU_DssRecord(dss_file):
    selector_B = "BBID"
    selector_C = "DIV-FLOW"
    logging.info("Retrieving selector_B={}, selector_C={} from \n {}"
                 .format(selector_B, selector_C, dss_file))
    dss_file_obj = pyhecdss.DSSFile(dss_file)
    catalog_df = dss_file_obj.read_catalog()
    selection = catalog_df.loc[(catalog_df['B'] == selector_B) &
                               (catalog_df['C'] == selector_C)]
    pathnames_lst = dss_file_obj.get_pathnames(selection)
    if not len(pathnames_lst) == 1:
        # Triggers on zero matches as well as on more than one.
        logging.error('Expected exactly one dss record for {},{} but found {} \n {}'
                      .format(selector_B, selector_C, len(pathnames_lst), pathnames_lst))
        sys.exit(0)
    else:
        logging.info("Success: single record isolated for {},{}"
                     .format(selector_B, selector_C))
    temp_df, temp_unit, temp_type = dss_file_obj.read_rts(pathnames_lst[0])
    dss_file_obj.close()
    return temp_df
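A brief usage sketch, assuming the module above is importable and logging is configured; the DSS file path is a placeholder, not a real file.

import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical DSS file path; replace with a real DICU output file.
df = Retrieve_DICU_DssRecord("dicu_201203.dss")
print(df.head())  # regular time series indexed by timestamp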
package com.mabiao.mall.order.constant;

public class PayConstant {
    public static final Integer ALIPAY = 1;
    public static final Integer WXPAY = 2;
}
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
import numpy as np

BATCH_SIZE = 128
NUM_CLASSES = 10
EPOCHS = 20


def get_dataset():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(60000, 784)
    x_test = x_test.reshape(10000, 784)
    x_train = x_train.astype(np.float32) / 255
    x_test = x_test.astype(np.float32) / 255
    # create one hot vector
    y_train = keras.utils.to_categorical(y_train, NUM_CLASSES)
    y_test = keras.utils.to_categorical(y_test, NUM_CLASSES)
    return x_train, y_train, x_test, y_test


def define_model():
    model = Sequential()
    model.add(Dense(512, activation='relu', input_shape=(784,)))
    model.add(Dropout(0.2))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(NUM_CLASSES, activation='softmax'))
    model.summary()
    model.compile(loss='categorical_crossentropy',
                  optimizer=RMSprop(),
                  metrics=['accuracy'])
    return model


def main():
    x_train, y_train, x_test, y_test = get_dataset()
    model = define_model()
    history = model.fit(x_train, y_train,
                        batch_size=BATCH_SIZE,
                        epochs=EPOCHS,
                        verbose=1,
                        validation_data=(x_test, y_test))
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])


if __name__ == '__main__':
    main()
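A short follow-on sketch showing how the trained model might be persisted and queried. The file name is an arbitrary placeholder, and model / x_test are assumed to still be in scope from the script above.

model.save('mnist_mlp.h5')  # hypothetical output path

from keras.models import load_model
import numpy as np

restored = load_model('mnist_mlp.h5')
probs = restored.predict(x_test[:5])   # class probabilities, shape (5, 10)
print(np.argmax(probs, axis=1))        # predicted digit for each sample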
Bedside Imaging of Intracranial Hemorrhage in the Neonate Using Light: Comparison with Ultrasound, Computed Tomography, and Magnetic Resonance Imaging Medical optical imaging (MOI) uses light emitted into opaque tissues to determine the interior structure. Previous reports detailed a portable time-of-flight and absorbance system emitting pulses of near infrared light into tissues and measuring the emerging light. Using this system, optical images of phantoms, whole rats, and pathologic neonatal brain specimens have been tomographically reconstructed. We have now modified the existing instrumentation into a clinically relevant headband-based system to be used for optical imaging of structure in the neonatal brain at the bedside. Eight medical optical imaging studies in the neonatal intensive care unit were performed in a blinded clinical comparison of optical images with ultrasound, computed tomography, and magnetic resonance imaging. Optical images were interpreted as correct in six of eight cases, with one error attributed to the age of the clot, and one small clot not seen. In addition, one disagreement with ultrasound, not reported as an error, was found to be the result of a mislabeled ultrasound report rather than because of an inaccurate optical scan. Optical scan correlated well with computed tomography and magnetic resonance imaging findings in one patient. We conclude that light-based imaging using a portable time-of-flight system is feasible and represents an important new noninvasive diagnostic technique, with potential for continuous monitoring of critically ill neonates at risk for intraventricular hemorrhage or stroke. Further studies are now underway to further investigate the functional imaging capabilities of this new diagnostic tool.
import logging

import telegram

from util.const import (
    BREAKFAST,
    DINNER
)
from util.messages import no_menu_msg, menu_msg, failed_to_parse_date_msg
from util.util import parse_menu, localized_date_today
from database.database import get_raw_menu, get_hidden_cuisines
from util.kb_mark_up import start_button_kb
from dateparser import parse


def handle_menu(meal):
    assert meal == BREAKFAST or meal == DINNER, "Meal input is incorrect."

    # in this function, parsed_date returns date in Singapore time.
    # As such, no conversion is required.
    def get_breakfast_or_dinner_menu(update, context):
        chat_id = update.effective_chat.id
        # send the user menu
        entered_date = ''
        if update.callback_query is None:
            entered_date = ' '.join(context.args)
        parsed_date = get_menu_query_date(entered_date)
        if parsed_date is None:
            context.bot.send_message(chat_id=chat_id,
                                     text=failed_to_parse_date_msg(entered_date))
            return
        menu = get_raw_menu(meal, parsed_date)
        hidden_cuisines = get_hidden_cuisines(update.effective_chat.id)
        if menu is None:
            # if no menu, reply with no menu message
            if update.callback_query is not None:
                context.bot.edit_message_text(chat_id=chat_id,
                                              message_id=update.callback_query.message.message_id,
                                              text=no_menu_msg(meal),
                                              reply_markup=start_button_kb())
            else:
                context.bot.send_message(chat_id=chat_id,
                                         text=no_menu_msg(meal),
                                         reply_markup=start_button_kb())
        else:
            # else reply user of the menu
            menu = menu_msg(parsed_date, meal, parse_menu(menu, hidden_cuisines))
            # send formatted menu to client
            if update.callback_query is not None:
                context.bot.edit_message_text(chat_id=chat_id,
                                              message_id=update.callback_query.message.message_id,
                                              text=menu,
                                              parse_mode=telegram.ParseMode.HTML,
                                              reply_markup=start_button_kb())
            else:
                context.bot.send_message(chat_id=chat_id,
                                         text=menu,
                                         parse_mode=telegram.ParseMode.HTML,
                                         reply_markup=start_button_kb())
        logging.info(f"{chat_id}: {meal} menu sent to chat")
        if update.callback_query is not None:
            context.bot.answer_callback_query(update.callback_query.id)

    def get_menu_query_date(entered_date):
        if entered_date == '':
            return localized_date_today()
        parsed_date = parse(entered_date)
        if parsed_date is None:
            return None
        return parsed_date.date()

    return get_breakfast_or_dinner_menu
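A hedged wiring sketch based on python-telegram-bot's v12-style API (Updater plus dispatcher), which the (update, context) handler signature suggests; the token and command names are placeholders.

from telegram.ext import Updater, CommandHandler

from commands.meal import handle_menu
from util.const import BREAKFAST, DINNER

updater = Updater(token='BOT-TOKEN-PLACEHOLDER', use_context=True)
dp = updater.dispatcher

# handle_menu is a factory: it returns a callback bound to one meal.
dp.add_handler(CommandHandler('breakfast', handle_menu(BREAKFAST)))
dp.add_handler(CommandHandler('dinner', handle_menu(DINNER)))

updater.start_polling()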
/* * Copyright (c) 2018, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. * * WSO2 Inc. licenses this file to you under the Apache License, * Version 2.0 (the "License"); you may not use this file except * in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.wso2.mb.integration.tests.amqp.functional; import org.apache.commons.lang3.StringUtils; import org.testng.Assert; import org.testng.annotations.AfterClass; import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; import org.wso2.carbon.andes.stub.AndesAdminServiceBrokerManagerAdminException; import org.wso2.carbon.andes.stub.admin.types.Queue; import org.wso2.carbon.authenticator.stub.LoginAuthenticationExceptionException; import org.wso2.carbon.authenticator.stub.LogoutAuthenticationExceptionException; import org.wso2.carbon.automation.engine.FrameworkConstants; import org.wso2.carbon.automation.engine.context.AutomationContext; import org.wso2.carbon.automation.engine.context.TestUserMode; import org.wso2.carbon.integration.common.utils.LoginLogoutClient; import org.wso2.carbon.integration.common.utils.exceptions.AutomationUtilException; import org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException; import org.wso2.mb.integration.common.clients.AndesClient; import org.wso2.mb.integration.common.clients.configurations.AndesJMSConsumerClientConfiguration; import org.wso2.mb.integration.common.clients.configurations.AndesJMSPublisherClientConfiguration; import org.wso2.mb.integration.common.clients.exceptions.AndesClientConfigurationException; import org.wso2.mb.integration.common.clients.exceptions.AndesClientException; import org.wso2.mb.integration.common.clients.operations.clients.AndesAdminClient; import org.wso2.mb.integration.common.clients.operations.utils.AndesClientConstants; import org.wso2.mb.integration.common.clients.operations.utils.AndesClientUtils; import org.wso2.mb.integration.common.clients.operations.utils.ExchangeType; import org.wso2.mb.integration.common.clients.operations.utils.JMSAcknowledgeMode; import org.wso2.mb.integration.common.utils.backend.MBIntegrationBaseTest; import org.xml.sax.SAXException; import javax.jms.JMSException; import javax.naming.NamingException; import javax.xml.stream.XMLStreamException; import javax.xml.xpath.XPathExpressionException; import java.io.IOException; import java.net.URISyntaxException; import java.rmi.RemoteException; /** * This test case contains test to check if messages goes to correct * tenants dead letter channel. */ public class TenantDeadLetterChannelTestCase extends MBIntegrationBaseTest { /** * The default andes acknowledgement wait timeout. */ private String defaultAndesAckWaitTimeOut = null; /** * Name of tenant's dlc queue */ private String tenantDlcQueueName = "dlctenant1.com/DeadLetterChannel"; /** * Name of super tenant's dlc queue */ private String superTenantDlcQueueName = "DeadLetterChannel"; /** * Initializes the test case. 
* * @throws XPathExpressionException * @throws java.rmi.RemoteException * @throws org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException */ @BeforeClass(alwaysRun = true) public void init() throws XPathExpressionException, RemoteException, UserAdminUserAdminException { super.init(TestUserMode.SUPER_TENANT_USER); // Get current "AndesAckWaitTimeOut" system property. defaultAndesAckWaitTimeOut = System.getProperty(AndesClientConstants. ANDES_ACK_WAIT_TIMEOUT_PROPERTY); // Setting system property "AndesAckWaitTimeOut" for andes System.setProperty(AndesClientConstants.ANDES_ACK_WAIT_TIMEOUT_PROPERTY, "0"); } /** * Set default properties after test case. */ @AfterClass() public void tearDown() { // Setting system property "AndesAckWaitTimeOut" to default value. if (StringUtils.isBlank(defaultAndesAckWaitTimeOut)) { System.clearProperty(AndesClientConstants.ANDES_ACK_WAIT_TIMEOUT_PROPERTY); } else { System.setProperty(AndesClientConstants.ANDES_ACK_WAIT_TIMEOUT_PROPERTY, defaultAndesAckWaitTimeOut); } } /** * This test case will test functionality of tenant dead letter channel in a queue scenario. * 1. Publish 1 queue message to tenant. * 2. Add consumer for the queue message. * 3. Consumer do not acknowledge for the queue message. * 4. Message will put into tenant dlc after retry sending queue message 10 times. * 5. Number of messages in tenant dlc should be equal to 1. * 6. Number of messages in super tenant dlc should be equal to 0. * * @throws JMSException * @throws IOException * @throws NamingException * @throws AndesClientConfigurationException * @throws AndesClientException * @throws LoginAuthenticationExceptionException * @throws XPathExpressionException * @throws AndesAdminServiceBrokerManagerAdminException * @throws URISyntaxException * @throws SAXException * @throws LogoutAuthenticationExceptionException * @throws XMLStreamException */ @Test(groups = "wso2.mb", description = "Tenant dead letter channel test case for queues") public void performTenantDeadLetterChannelQueueTestCase() throws JMSException, IOException, NamingException, AndesClientConfigurationException, AndesClientException, LoginAuthenticationExceptionException, XPathExpressionException, AndesAdminServiceBrokerManagerAdminException, URISyntaxException, SAXException, LogoutAuthenticationExceptionException, XMLStreamException, AutomationUtilException { int sendMessageCount = 1; Queue tenantUserDlcQueue; Queue superAdminDlcQueue; String destinationName = "dlctenant1.com/tenantQueue"; // Get the automation context for the dlctenant1 AutomationContext tenantContext = new AutomationContext("MB", "mb001", "dlctenant1", "dlctenantuser1"); LoginLogoutClient loginLogoutClient = new LoginLogoutClient(tenantContext); String sessionCookie = loginLogoutClient.login(); AndesAdminClient andesClient = new AndesAdminClient(super.backendURL, sessionCookie); loginLogoutClient.logout(); // purge if there are any dlc messages in dlctenant1 user andesClient.purgeQueue(tenantDlcQueueName); // Get the automation context for the superTenant AutomationContext superTenantContext = new AutomationContext("MB", "mb001", FrameworkConstants.SUPER_TENANT_KEY, FrameworkConstants.SUPER_TENANT_ADMIN); LoginLogoutClient loginLogoutSuperTenant = new LoginLogoutClient(superTenantContext); String SuperTenantSessionCookie = loginLogoutSuperTenant.login(); AndesAdminClient andesAdminClient = new AndesAdminClient(super.backendURL, SuperTenantSessionCookie ); loginLogoutSuperTenant.logout(); // purge if there are any dlc messages in super tenant admin 
andesClient.purgeQueue(superTenantDlcQueueName); // Create a consumer client configuration AndesJMSConsumerClientConfiguration consumerConfig = new AndesJMSConsumerClientConfiguration(getAMQPPort(), "dlctenantuser1!dlctenant1.com", "dlctenantuser1", ExchangeType.QUEUE, destinationName); // Add manual client acknowledgement in configuration consumerConfig .setAcknowledgeMode(JMSAcknowledgeMode.CLIENT_ACKNOWLEDGE); // Acknowledge a message only after 200 messages are received consumerConfig .setAcknowledgeAfterEachMessageCount(200L); consumerConfig.setPrintsPerMessageCount(sendMessageCount); consumerConfig.setAsync(false); // Create consumer client with given consumerConfig AndesClient consumerClient = new AndesClient(consumerConfig, true); // Start consumer client consumerClient.startClient(); // Create a publisher client configuration AndesJMSPublisherClientConfiguration tenantPublisherConfig = new AndesJMSPublisherClientConfiguration(getAMQPPort(), "dlctenantuser1!dlctenant1.com", "dlctenantuser1", ExchangeType.QUEUE, destinationName); tenantPublisherConfig.setNumberOfMessagesToSend(sendMessageCount); tenantPublisherConfig.setPrintsPerMessageCount(sendMessageCount); // Create a publisher client with given configuration AndesClient tenantPublisherClient = new AndesClient(tenantPublisherConfig, true); // Start publisher client tenantPublisherClient.startClient(); AndesClientUtils.waitForMessagesAndShutdown(consumerClient, AndesClientConstants.DEFAULT_RUN_TIME); // Get tenant's dlc queue tenantUserDlcQueue = andesClient.getDlcQueue(); // Get super tenant dlc queue superAdminDlcQueue = andesAdminClient.getDlcQueue(); // Evaluating Assert.assertEquals(tenantUserDlcQueue.getMessageCount(), sendMessageCount, "failure on tenant dlc queue path"); Assert.assertEquals(superAdminDlcQueue.getMessageCount(), 0, "failure on super tenant dlc queue path"); } /** * This test case will test the functionality of messages being moved to tenant dead letter channel in a durable * topic subscription scenario. * 1. Add a durable subscription for a topic in tenant. * 1. Publish 1 message to the topic. * 3. Consumer do not acknowledge for the message. * 4. Message will put into tenant dlc after retry sending queue message 10 times. * 5. Number of messages in tenant dlc should be equal to 1. * 6. Number of messages in super tenant dlc should be equal to 0. 
* * @throws JMSException * @throws IOException * @throws NamingException * @throws AndesClientConfigurationException * @throws AndesClientException * @throws LoginAuthenticationExceptionException * @throws XPathExpressionException * @throws AndesAdminServiceBrokerManagerAdminException * @throws URISyntaxException * @throws SAXException * @throws LogoutAuthenticationExceptionException * @throws XMLStreamException */ @Test(groups = "wso2.mb", description = "Tenant dead letter channel test case for durable subscriptions") public void performTenantDeadLetterChannelDurableTopicSubscriptionTestCase() throws JMSException, IOException, NamingException, AndesClientConfigurationException, AndesClientException, LoginAuthenticationExceptionException, XPathExpressionException, AndesAdminServiceBrokerManagerAdminException, URISyntaxException, SAXException, LogoutAuthenticationExceptionException, XMLStreamException, AutomationUtilException { int sendMessageCount = 1; String topicName = "dlctenant1.com/tenantTopic"; String subscriptionId = "dlctenant1.com/tenantSub"; // Get the automation context for the dlctenant1 AutomationContext tenantContext = new AutomationContext("MB", "mb001", "dlctenant1", "dlctenantuser1"); LoginLogoutClient loginLogoutClient = new LoginLogoutClient(tenantContext); String sessionCookie = loginLogoutClient.login(); AndesAdminClient andesClient = new AndesAdminClient(super.backendURL, sessionCookie); loginLogoutClient.logout(); // purge if there are any dlc messages in dlctenant1 user andesClient.purgeQueue(tenantDlcQueueName); // Get the automation context for the superTenant AutomationContext superTenantContext = new AutomationContext("MB", "mb001", FrameworkConstants.SUPER_TENANT_KEY, FrameworkConstants.SUPER_TENANT_ADMIN); LoginLogoutClient loginLogoutSuperTenant = new LoginLogoutClient(superTenantContext); String SuperTenantSessionCookie = loginLogoutSuperTenant.login(); AndesAdminClient andesAdminClient = new AndesAdminClient(super.backendURL, SuperTenantSessionCookie); loginLogoutSuperTenant.logout(); // purge if there are any dlc messages in super tenant admin andesClient.purgeQueue(superTenantDlcQueueName); // Create a consumer client configuration AndesJMSConsumerClientConfiguration consumerConfig = new AndesJMSConsumerClientConfiguration(getAMQPPort(), "dlctenantuser1!dlctenant1.com", "dlctenantuser1", ExchangeType.TOPIC, topicName); // Add manual client acknowledgement in configuration consumerConfig.setAcknowledgeMode(JMSAcknowledgeMode.CLIENT_ACKNOWLEDGE); consumerConfig.setDurable(true, subscriptionId); consumerConfig.setSubscriptionID(subscriptionId); // Acknowledge a message only after 200 messages are received consumerConfig.setAcknowledgeAfterEachMessageCount(200L); consumerConfig.setPrintsPerMessageCount(sendMessageCount); consumerConfig.setAsync(false); // Create consumer client with given consumerConfig AndesClient consumerClient = new AndesClient(consumerConfig, true); // Start consumer client consumerClient.startClient(); // Create a publisher client configuration AndesJMSPublisherClientConfiguration tenantPublisherConfig = new AndesJMSPublisherClientConfiguration(getAMQPPort(), "dlctenantuser1!dlctenant1.com", "dlctenantuser1", ExchangeType.TOPIC, topicName); tenantPublisherConfig.setNumberOfMessagesToSend(sendMessageCount); tenantPublisherConfig.setPrintsPerMessageCount(sendMessageCount); // Create a publisher client with given configuration AndesClient tenantPublisherClient = new AndesClient(tenantPublisherConfig, true); // Start publisher client 
tenantPublisherClient.startClient(); AndesClientUtils.waitForMessagesAndShutdown(consumerClient, AndesClientConstants.DEFAULT_RUN_TIME); // Get tenant's dlc queue Queue tenantUserDlcQueue = andesClient.getDlcQueue(); // Get super tenant dlc queue Queue superAdminDlcQueue = andesAdminClient.getDlcQueue(); // Evaluating Assert.assertEquals(tenantUserDlcQueue.getMessageCount(), sendMessageCount, "failure on tenant dlc durable topic subscription path"); Assert.assertEquals(superAdminDlcQueue.getMessageCount(), 0, "failure on super tenant dlc durable topic subscription path"); } }
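The two tests above rest on one broker rule: an unacknowledged message is redelivered up to a maximum count and then routed to the dead letter channel of the tenant that owns the destination, never the super tenant's. A minimal, broker-agnostic Python sketch of that routing rule; the attempt limit of 10 comes from the test descriptions, and everything else (including the 'carbon.super' fallback) is illustrative.

MAX_DELIVERY_ATTEMPTS = 10  # per the test descriptions above

def deliver(message, consumer_acks, dlc_by_tenant):
    """Route one message: redeliver until acked or the limit is hit."""
    for attempt in range(1, MAX_DELIVERY_ATTEMPTS + 1):
        if consumer_acks(message, attempt):
            return 'acked'
    # Never acknowledged: park it in the owning tenant's DLC, not the super tenant's.
    dest = message['destination']
    tenant = dest.split('/')[0] if '/' in dest else 'carbon.super'
    dlc_by_tenant.setdefault(tenant, []).append(message)
    return 'dlc'

dlc = {}
msg = {'destination': 'dlctenant1.com/tenantQueue', 'body': 'hello'}
status = deliver(msg, consumer_acks=lambda m, n: False, dlc_by_tenant=dlc)
assert status == 'dlc' and len(dlc['dlctenant1.com']) == 1 and 'carbon.super' not in dlc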
# Reads two integers m and n, then counts how many times 2 can be
# subtracted from m*n before the value drops to 1 or below; the loop
# computes c = (m * n) // 2, just less directly.
m, n = input().split(" ")
c = 0
x = int(m) * int(n)
while x > 1:
    x -= 2
    c += 1
print(c)
package com.cnpc.activiti.listener;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.ExecutionListener;

/**
 * Created by billJiang on 2017/7/5.
 * e-mail:<EMAIL> qq:475572229
 * Fired when a user manually ends a task.
 */
public class TrainExecutionEndByUserListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Intentionally empty: no action is taken when the user ends the task.
    }
}
//
// MIT License
//
// Copyright (c) 2019 <NAME>
// Based on original code by <NAME>
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
//

#include <algorithm> // min_element
#include <iterator>  // distance
#include <iostream>
#include <valarray>
#include <vector>
#include <utility>
#include <cmath>

using namespace std;

class kMeans {
public:
    kMeans() {}
    ~kMeans() {}

    kMeans(int k, vector<pair<double,double>> & data_)
        : m_k(k)
        , m_means(k)
        , m_data(k)
    {
        m_data[0] = data_; // this just assigns the first label to all data
    }

    void clusterData(valarray<pair<double,double>> & init_means_, int num_iters_ = 10)
    {
        // initialise data
        cout << "Initialising labels\r\n";
        m_means = init_means_;
        this->assignLabels();
        int i = 0;
        while (i < num_iters_ && !this->computeMeans()) {
            cout << "Running iteration " << i << "\r\n";
            this->assignLabels();
            i++;
        }
    }

    void printClusters() const
    {
        for (int k = 0; k < m_k; k++) {
            cout << "Cluster: " << k << "\r\n";
            for (auto const & feature : m_data[k]) {
                cout << " [" << get<0>(feature) << "," << get<1>(feature) << "] ";
            }
            cout << "\r\n";
        }
    }

private:
    bool computeMeans()
    {
        // return true if means are the same
        bool res = true;
        cout << "Mean: ";
        for (int k = 0; k < m_k; k++) {
            pair<double,double> mean(0,0);
            int num_features_for_k = m_data[k].size();
            for (auto const & it : m_data[k]) {
                get<0>(mean) += get<0>(it);
                get<1>(mean) += get<1>(it);
            }
            get<0>(mean) /= num_features_for_k;
            get<1>(mean) /= num_features_for_k;
            res = (m_means[k] == mean && res == true) ? true : false;
            cout << "Converged? " << res << "\r\n";
            m_means[k] = mean;
            cout << "cluster " << get<0>(mean) << " , " << get<1>(mean) << "\t";
        }
        cout << "\r\n";
        return res;
    }

    void assignLabels()
    {
        valarray<vector<pair<double,double>>> new_data(m_k);
        for (auto const & clust : m_data) {
            for (auto const & feature : clust) {
                int closest_mean = this->computeClosestCentroid(feature);
                new_data[closest_mean].push_back(feature);
            }
        }
        m_data = new_data;
    }

    int computeClosestCentroid(const pair<double,double> & point_) const
    {
        valarray<double> distances(m_k);
        for (int k = 0; k < m_k; k++) {
            double del_x = get<0>(point_) - get<0>(m_means[k]);
            double del_y = get<1>(point_) - get<1>(m_means[k]);
            double dist = sqrt((del_x * del_x) + (del_y * del_y));
            distances[k] = dist;
        }
        auto closest = distance(begin(distances), min_element(begin(distances), end(distances)));
        return closest;
    }

    int m_k;
    int m_features;
    valarray<pair<double,double>> m_means; // is of length equal to k, the mean is a 2d vector
    valarray<vector<pair<double,double>>> m_data; // array is of length k and holds the vectors of the
                                                  // data points classified as that label
};

int main (int argc, char ** argv)
{
    vector<pair<double, double>> data = {
        { 1.1, 1 }, { 1.4, 2 }, { 3.8, 7 }, { 5.0, 8 }, { 4.3, 6 },
        { 8, 5.0 }, { 6, 8.5 }, { 3, 2.0 }, { 9, 6 }, { 9.1, 4 }
    };
    kMeans km(3, data);
    valarray<pair<double, double>> init_means = { { 1, 1 }, { 3, 4 }, { 8, 8 } };
    km.clusterData(init_means);
    km.printClusters();
    return 0;
}
import * as React from 'react'; import './river.scss'; import * as d3 from 'd3'; import { requestRiver } from '../../utils/requests'; import { str2Date } from '../../global'; import dateFormat from 'dateformat'; interface IProps { start: Date; end: Date; dayWidth: number; onLineClick: (list: string[]) => void; } interface IState { stack: d3.Stack<any, any, any> | null; } export default class River extends React.Component<IProps, IState> { private _container: HTMLDivElement | null = null; private _layer_total: number = 12; private _ms1Day: number = 24*60*60*1000; private _datas: any[] = []; private _dates: string[] = []; private _riverData: d3.Series<{[key: number]: number;}, any>[] = []; private _svg: d3.Selection<d3.BaseType, unknown, HTMLElement, null> | null = null; private _area: d3.Area<any> | null = null; private _x: d3.ScaleLinear<number, number> | null = null; private _y: d3.ScaleLinear<number, number> | null = null; constructor(props: IProps) { super(props); this.state = { stack: null } let date: Date = props.start; while(date <= props.end) { this._dates.push(dateFormat(date, 'yyyy-mm-dd')); date = new Date(date.getTime() + this._ms1Day); } this.handleClickLine = this.handleClickLine.bind(this); } componentDidMount() { if(this._container) { this.requestRiver(); } } private flatten(items: any[], timelines: any[]) { items.forEach((item: any) => { if(item.type == 'multi-timelines') { this.flatten(item.items, timelines); }else if(item.type == 'timeline') { timelines.push(item) } }) } private requestRiver() { requestRiver("sm").then(data => { console.log(data); if(data && data.length) { let timelines: any[] = []; this.flatten(data, timelines); timelines.sort((a:any, b: any) => b.influence - a.influence); timelines = timelines.slice(0, this._layer_total); this._datas = []; timelines.forEach((d: any) => { let line: any[] = this._dates.map(dd => {return {date: dd, value: 0, ids: []}}); d.items.forEach((value:any) => { let arr: string[] = value.split(" "); let date: Date = str2Date(arr[0]); if(date <= this.props.end && date >= this.props.start) { let id: string = arr[arr.length-1]; let dateStr: string = dateFormat(date, 'yyyy-mm-dd'); let item: any = line.find(dd => dd.date == dateStr); if(item) { item.value += 1; item.ids.push(id); } } }) this._datas.push({ values: line.map(d => d.value), ids: line.reduce((pre, cur) => pre = pre.concat(cur.ids), []), }); }) console.log(this._datas); this.draw(); } }) } private draw() { if(this._container) { let datas: any[] = this._datas.map(d => d.values); let smoothed: any[] = datas.map(d => this.smooth(d)); let stack: d3.Stack<any, {[key: number]: number}, any> = d3.stack().keys(Array.from({length: this._layer_total}, (_, i) => i.toString())).offset(d3.stackOffsetWiggle); this._riverData = stack(d3.transpose(smoothed)); this._svg = d3.select('#river_svg'); this._x = d3.scaleLinear().domain([0, this._dates.length-1]).range([0, (this._dates.length * this.props.dayWidth)]); this._y = d3.scaleLinear().domain([d3.min(this._riverData, this.stackMin)!, d3.max(this._riverData, this.stackMax)!]).range([this._container.offsetHeight, 0]); let z = d3.interpolateBlues; this._area = d3.area() .x((_, i) => this._x!(i)) .y0(d => this._y!(d[0])) .y1(d => this._y!(d[1])); this._svg.selectAll("path") .data(this._riverData) .enter().append('path') .attr('d', this._area) .attr('fill', (_, i) => z(0.2 + i%3*0.15)) .on('mouseover', function(d, i) { d3.select(this) .style('filter', 'url(#drop-shadow)') .raise() }) .on('mouseout', function(d, i) { d3.select(this) 
.style('filter', 'none') }) .on('click', (_, i) => { this.handleClickLine(i); }) .style('cursor', 'pointer') var defs = this._svg.append("defs"); var filter = defs.append("filter") .attr("id", "drop-shadow") .attr("height", "130%"); filter.append("feGaussianBlur") .attr("in", "SourceAlpha") .attr("stdDeviation", 3) .attr("result", "blur"); filter.append("feOffset") .attr("in", "blur") .attr("dx", 1) .attr("dy", 1) .attr("result", "offsetBlur"); var feMerge = filter.append("feMerge"); feMerge.append("feMergeNode") .attr("in", "offsetBlur") feMerge.append("feMergeNode") .attr("in", "SourceGraphic"); } } private handleClickLine(index: number) { if(this._datas && index < this._datas.length) { let data: any = this._datas[index]; this.props.onLineClick && this.props.onLineClick(data.ids); } } private smooth(values: number[]) { let smoothed: number[] = []; values.forEach((value, i) => { let curr:number = value; let prev:number = i ? smoothed[i - 1] : 0; let next:number = i == values.length-1 ? values[values.length-1] : values[i+1]; let improved = this.average([prev, curr, next]); smoothed.push(improved); }); return smoothed; } private average(data: number[]) { let sum: number = data.reduce(function(sum, value) { return sum + value; }, 0); let avg: number = sum / data.length; return avg; } private stackMin(layer: any[]): number { return d3.min(layer, d => d[0]); } private stackMax(layer: any[]): number { return d3.max(layer, d => d[1]); } render() { return ( <div className='river' ref={r => this._container = r} style={{width: `${(this._dates.length * this.props.dayWidth)}px`}}> <svg className='river_svg' id='river_svg' width={`${(this._dates.length * this.props.dayWidth)}px`}/> </div> ) } }
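The component's smooth()/average() pair is a three-point moving average over each timeline's daily counts, with one subtlety: the "previous" term is taken from the already-smoothed output. A standalone Python restatement:

def smooth(values):
    """Three-point moving average matching the component's smooth():
    prev is 0 before the series starts, next repeats the last value."""
    out = []
    for i, curr in enumerate(values):
        prev = out[i - 1] if i else 0                      # uses the *smoothed* prev
        nxt = values[i + 1] if i < len(values) - 1 else values[-1]
        out.append((prev + curr + nxt) / 3)
    return out

print(smooth([0, 3, 0, 0, 6, 0]))  # peaks bleed into neighbouring days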
/*
 * cloudbeaver - Cloud Database Manager
 * Copyright (C) 2020 DBeaver Corp and others
 *
 * Licensed under the Apache License, Version 2.0.
 * you may not use this file except in compliance with the License.
 */
import { useObserver } from 'mobx-react';
import { useEffect } from 'react';

import { useService } from '@cloudbeaver/core-di';

import { ThemeService } from './ThemeService';

/**
 * Must be observed from mobx
 */
export function useTheme() {
  const themeService = useService(ThemeService);
  const className = useObserver(() => themeService.currentTheme.className);

  useEffect(() => {
    document.body.classList.add(className);
    return () => document.body.classList.remove(className);
  }, [className]);
}
Connecting Health and Environment with an Agroecological Europe

The relationship between the climate crisis, the ongoing pandemic, and our food systems is increasingly apparent. Not just an environmentalist argument, it's a reality which farmers are facing, scientists are finding evidence for, and consumers are recognising for themselves. Xavier Poux from AScA and IDDRI sees an agroecological Europe as the answer to a food system that's bad for our health and damaging to our environment. How to win the support of farmers and overcome the flaws of the EU's Common Agricultural Policy remain formidable obstacles to this future.
#ifndef __AW_STRIPE_STRIPE_FACTORY_H__
#define __AW_STRIPE_STRIPE_FACTORY_H__

namespace Stripe {
	class IStripe;

	class CStripeFactory {
	public:
		static IStripe* create();
		static void destroy(IStripe* stripe);
	};
}

#endif
Simple and Effective Primary Assessment of Emergency Patients in a COVID-19 Outbreak Area: A Retrospective, Observational Study

Background
The rapid spread of COVID-19 has expanded into a pandemic, for which the main containment strategies to reduce transmission are social distancing and isolation of ill persons. Thousands of medical staff have been infected worldwide. Coronavirus testing kits have been in short supply, and early diagnostic reagents did not have high sensitivity. The aim of this study was to describe the characteristics of patients requiring emergency surgery in a COVID-19 outbreak area.

Methods
We assessed medical data regarding all patients who underwent emergency surgery at the main campus of Wuhan Union Hospital from January 23, 2020, to February 15, 2020. We classified patients based on suspicion of COVID-19 infection (suspected vs not suspected) before they were admitted to the operating room. We used descriptive statistics to analyze the data. Outcomes included the incidence of confirmed COVID-19 infection and length of stay, which were followed until March 25, 2020.

Results
Among the 88 emergency patients included in this study, the mean age was 37 years. Twenty-five patients presented with abnormalities observed on chest CT scans and 16 presented with fever. The median wait time for surgery was one day. The median preparation time and median time until short orientation memory concentration test (SOMCT) recovery from anesthesia were 44.0 min and 23.0 min, respectively. The median postoperative length of stay was five days. Six patients in the suspected group, and none in the not-suspected group, were confirmed to be infected with COVID-19. No health care workers were infected during this study period.

Conclusion
Simple identification using temperature screening of patients, respiratory symptoms, and chest CT scans before being admitted for emergency surgery was rapid and effective. Shortened contact times might reduce the risk of infection. Additional investigations with larger samples and improved designs are needed to confirm these observations.

Introduction
In December 2019, a series of pneumonia cases with clinical presentations that resembled severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) emerged in Wuhan, Hubei, China. 1,2 A Chinese scientific team subsequently discovered a virus in the subgenus Sarbecovirus of the genus Betacoronavirus in those patients. 3 The disease is now named coronavirus disease 2019 (COVID-19). The rapid spread of COVID-19 expanded into a pandemic in March 2020. 4 Previous studies have described the epidemiological and clinical characteristics of patients with COVID-19 pneumonia. Currently, the main containment strategies to reduce transmission are social distancing and isolating ill persons. 9 However, infection with COVID-19 has been reported in health care workers in hospitals. 8,10 Thus, all public health care systems are facing evolving critical challenges globally. Many guidelines have been drafted to prevent the nosocomial spread of COVID-19. 11,12 However, in the early period of the infection, patients often showed only mild symptoms or were asymptomatic, which increases the difficulty of identifying patients with COVID-19 infections. This was especially true when identifying patients who needed emergency surgery in the hospital located in the initial outbreak area.
Moreover, the positive diagnosis rates using qRT-PCR did not exhibit high sensitivity in clinical settings due to the use of different inactivation methods. 13 The aim of this study was to describe the clinical, laboratory, and radiological characteristics of patients who underwent emergency surgery and to compare the clinical features of patients suspected of being infected with COVID-19 with those of patients who were not suspected of infection. In addition, with consideration of possible shortages in medical supplies, we designed protective measures based on suspicion of COVID-19 infection to prevent cross-infection in the operating room.

Study Design
This was a single-center, retrospective, observational study that was conducted on the main campus of Wuhan Union Hospital (Wuhan, China), one of the major tertiary teaching hospitals in the endemic area of COVID-19. We analyzed patients who underwent emergency surgery at the main hospital campus from January 23, 2020, to February 15, 2020. Emergency surgery was defined as surgery required to deal with an acute threat to life, organ, limb, or tissue, which needed to be addressed immediately. All of Wuhan's public transportation was suspended on January 23, 2020. Subsequently, the main campus of Wuhan Union Hospital suspended all elective surgeries. Before February 15, 2020, we did not perform throat swab tests on all emergency patients to screen for COVID-19 infection at the time of admission, due to a lack of diagnostic reagents. The qRT-PCR results were obtained after 12 hours. Because the patients' conditions were urgent, the surgeries could not be postponed until the qRT-PCR results were received. The outcomes of this study included confirmed COVID-19 cases and the length of stay in the hospital, which were monitored until March 25, 2020. During the outbreak, the emergency department of the main hospital campus was responsible for receiving all patients in Wuhan who required emergency surgery. All study methods were conducted following the relevant regulations and guidelines of the institutional ethics committee of Tongji Medical College, Huazhong University of Science and Technology. Written informed consent was waived due to the rapidly emerging infectious disease.

Data Collection
The electronic medical records of patients were analyzed by members of the anesthesiology department at Wuhan Union Hospital. We reviewed clinical, laboratory, and radiological characteristics and nursing records. The information that was recorded included demographic data (age and sex), medical history (chronic medical histories), symptoms, clinical signs (temperature), laboratory findings (white blood cell count, lymphocyte count, lymphocyte percentage, C-reactive protein (CRP), and high-sensitivity C-reactive protein (hs-CRP)), chest computed tomographic (CT) scans, and anesthesia records (ASA scores, type of anesthesia, type of surgery, and perioperative time points). Skilled anesthesiologists determined scores based on criteria from the American Society of Anesthesiologists (ASA) scoring system. Abnormalities observed on chest CT scans included ground-glass opacity, patchy local shadowing, or other inflammatory signs present in a patient's lungs. The wait time for surgery was calculated from admission to the day of surgery. The surgical preparation time was defined as the duration between the time the patient entered the operating room and the time of the initial incision.
The time until short orientation memory concentration test (SOMCT) recovery was defined as the duration between the closure of the incision and the time the patient exited the operating room. We classified patients based on suspicion of COVID-19 infection (suspected vs not suspected). Suspicion of COVID-19 infection was defined as a temperature higher than 37.2°C or an abnormal chest CT before entering the operating room. We defined a patient as not suspected of COVID-19 infection when their temperature and CT scans (if available) were normal before entering the operating room.

Statistical Analysis
Continuous variables, including age, preparation time, wait times for surgery, time for SOMCT recovery, postoperative CRP, postoperative hs-CRP, and length of stay, were expressed as medians (and interquartile ranges) or means (and standard deviations). The remaining data were categorical variables, which were summarized as counts and percentages. Differences between the suspected and not-suspected COVID-19 groups were assessed using the Mann-Whitney-Wilcoxon test or a two-sample t-test for continuous variables and the chi-square test or Fisher's Exact test for categorical variables. A two-sided P value of less than 0.05 was considered to be statistically significant. No imputation was made for missing data. The data analysis was performed using R software, version 3.6.3 (R Foundation for Statistical Computing).

Results
Between January 23, 2020, and February 15, 2020, the surgery department admitted 88 patients with indications for emergency surgery. Fourteen patients suffered from external trauma, 12 presented with acute abdomen, four with aortic dissection, two with critical coronary heart disease (CHD), four with esophageal foreign bodies, 27 with fetal distress, 14 with placental abruption, five with hemorrhage in placenta previa, four with globe rupture, one with pneumothorax, and one with testicular torsion. Five patients exhibited a mild cough, but none of the patients presented obvious acute respiratory infection symptoms, such as a continuous cough, nasal congestion, sore throat, or headache. Thus, all 88 patients were included in this study.

Preoperative Characteristics
Of the 88 patients, 11 (12.5%) were older than 60 years, and seven (7.9%) were less than 16 years. The mean patient age was 37 years (SD, 17.8 years). Sixty-three (71.6%) patients were female, and 16 (18.2%) patients had a body temperature higher than 37.2°C before entering the operating room. Thirty-five patients had chest CT scans available; of those 35 patients, four presented images with ground-glass opacity, eight presented with patchy local shadowing, 13 presented with patchy bilateral shadowing, and 10 presented with normal CT scans. Five patients exhibited a mild cough, and their CT scans were abnormal. Leucocytes were below the normal range in four (4.7%) patients and above the normal range in 27 (31.8%) patients. Lymphocytopenia was present in 22 (25.9%) patients. Twenty-five patients exhibited elevated levels of CRP or hs-CRP (Table 1).

Surgery Characteristics
Of the 88 cases, nine emergency surgeries were repeat surgeries for disease progression in patients who had been hospitalized before January 23, 2020. Thirty-two (36.4%) patients received surgery on the same day as they were admitted, and 24 (27.3%) patients went to surgery on the second day after admission. The medical conditions for the majority of patients (64.8%) were classified as ASA Ⅰ and Ⅱ.
The median surgical preparation time was less than one hour (44 minutes). More than half of the patients (55.7%) underwent non-general anesthesia and did not undergo tracheal intubation. Of the 31 patients who were extubated and recovered from general anesthesia in the operating room, the median time for SOMCT recovery was less than half an hour (23 minutes). Obstetric surgery accounted for 52.3% of the surgical cases. Eight patients were transferred to the intensive care unit due to poor postoperative conditions (Table 2).

Postoperative Characteristics
Within seven days postoperatively, the highest temperature observed in 18 patients (20.5%) was higher than 37.2°C. Over half of the patients (59.3%) in this study exhibited leukocytosis, and 66.7% of the patients exhibited lymphocyte counts in the normal range. In patients with elevated CRP and hs-CRP levels, which are biomarkers associated with infection, the mean CRP and hs-CRP levels were 82.2 and 78, respectively. Of the patients suspected of viral infection, six were confirmed to be positive for COVID-19 during the postoperative period. No patients who were initially classified as not suspected of viral infection tested positive for COVID-19. No anesthesiologists or nurses who participated in the emergency surgeries during this study showed respiratory symptoms in the 14 days following contact with any of the patients. Seven patients were still hospitalized on the final day of the follow-up period used in this study. The median duration of the postoperative length of stay was five days (Table 3).

Discussion
COVID-19 can be transmitted through aerosols and small droplets from normal breathing, coughing, sneezing, or fluids from human secretions. 7 The virus is highly contagious and has led to a global pandemic. The ability to screen and identify patients infected with COVID-19 who were admitted to the emergency department at the main campus of Wuhan Union Hospital was complicated by the variability in the clinical presentation of infected individuals. However, fever, cough, and radiologic abnormalities have been identified as the dominant symptoms in patients infected with COVID-19. 6 During this study, no chest CT scans or throat swabs were taken as routine COVID-19 screening measures for emergency surgery patients due to the shortage of medical resources. Therefore, we designed a simple management process to prevent nosocomial infections. We rapidly and effectively classified patients through the use of chest CT scans and monitoring of their body temperature. Because the hospital was located in the original outbreak area, it was likely that asymptomatic patients and patients in the viral incubation period (without fever or radiologic abnormalities) were admitted to the hospital. 2 Therefore, all medical staff wore N95 respirators as a necessary protective measure. Additional high-level protection measures were implemented when treating patients suspected to be infected, including wearing a face shield, medical goggles, and disposable protective clothing. Although it has been reported that there is no difference between wearing N95 respirators and surgical masks, 14 the potential for respiratory tract infection by COVID-19 was considered to be greater than that of other viruses. Therefore, N95 respirators were fit-tested to prevent virus transmission effectively. In our study, more female patients were involved because of the number of emergency cesarean sections (52.3%) that were needed.
Based on the preoperative laboratory findings, patients suspected of COVID-19 infection exhibited abnormal leukocyte counts, lymphocyte counts, and CRP (hs-CRP) levels compared to patients who were not suspected to be infected. Many previous reports have revealed that lymphocytopenia is common in COVID-19 cases. 2,5 However, most emergency patients present with fever and leukocytosis, which made it more challenging to screen and identify atypical patients, asymptomatic patients, and patients who were still in the incubation period for COVID-19 infection. All patients wore standard surgical masks before they were transported into the preoperative preparation room. Airway assessment was done by inquiry instead of inspection with the mask removed. At least two different tracheal intubation devices were provided to avoid difficulty during tracheal intubation. Fortunately, no patient needed secondary tracheal intubation in this study. The method of anesthesia was selected depending on the specific surgical requirements and the condition of the patient. Non-general anesthesia patients wore a surgical mask throughout the surgery. We then placed an anesthetic face mask, connected to the anesthesia machine, to deliver oxygen through the patient's surgical mask. For general anesthesia, video laryngoscopes were used, which maximized the distance between the operator's and patient's faces to reduce the risk of viral transmission. Extubation was performed when the patient was fully awake. After extubation, we placed an anesthetic face mask on the patient's face to deliver oxygen and replaced the patient's surgical mask immediately once the recovery conditions met the standards needed to allow the patient to leave the operating room. Emergence was managed in the operating room instead of the postanesthesia care unit. In our study, the median preparation time and median time until SOMCT recovery were 44 min and 23 min, respectively. Therefore, minimizing contact time with patients should reduce the risk of exposure. The anesthesia machine and operating room were disinfected immediately after each surgery. Effective disinfection of the anesthesia machine and operating room was essential to ensure that the next surgical patient was not infected by virus possibly left by the previous patient. All patients were placed in separate rooms postoperatively to prevent any cross-infection in the ward after surgery. When patients suspected to be infected with COVID-19 were compared to non-suspected patients, more suspected patients presented with fever and lymphocytopenia. However, the CRP or hs-CRP values were not significantly different between the two patient groups. Fever is commonly observed during the immediate and early postoperative periods following surgery, and nonsteroidal anti-inflammatory drugs are routine postoperative medications used to adjust the patient's body temperature. 15 Therefore, postoperative medications and surgical wound inflammation might influence some laboratory results, even if the patient had a viral infection. The COVID-19 cases in this study were confirmed in the postoperative period using chest CT scans and throat swabs that detected viral nucleic acid using qRT-PCR assays. Six patients were confirmed to be infected with COVID-19, and all six were in the suspected group; four of them presented with ground-glass opacity in the preoperative period.
This result indicated that the preoperative classification system used with emergency patients, based on body temperature measurements and chest CT scans, was effective for patients who presented mild symptoms or were in the incubation period. It has been reported that the incubation period of COVID-19 is estimated to be five days. 16 However, no patients presented any acute respiratory symptoms in the non-suspected group. Not all patients underwent throat swab screening, so we could not confirm the existence of any asymptomatic patients. Fortunately, no fever or respiratory symptoms appeared in any of the anesthesiologists and nurses who collaborated in the emergency surgeries conducted during this study. Seven patients were still hospitalized due to surgical complications at the end of this study. All confirmed cases of COVID-19 viral infection were transferred to a COVID-19 isolation ward. Although this was a single-center study with a relatively small number of cases, the procedures implemented to identify patients suspected of COVID-19 infection and the application of appropriate protective measures were effective. However, we note that every hospital has developed its own management strategy to deal with the viral outbreak. We suggest that throat swabs or other COVID-19 tests be performed as soon as possible after patients are admitted to the emergency department, to optimize the identification of COVID-19 infection. Some laboratory results might be influenced or masked by the patient's perioperative condition and postoperative medications. In Wuhan, the epicenter of the viral outbreak, we lacked many of the necessary protective resources at the beginning of the disease outbreak. Our established screening methods allowed us to reserve scarce protective resources for the situations that truly required them. The rational use of protective resources after patients are classified based on suspicion of viral infection might alleviate shortages of protective equipment.

This study had several limitations. First, some cases had incomplete laboratory testing due to the short length of stay of the patient. For example, cases involving a removal procedure carried out under painless gastrointestinal endoscopy did not require lengthy post-surgical stays. Second, preoperative chest CT scans were not obtained for all surgical patients due to the severe nature of some patients' conditions. Third, throat swabs to detect viral nucleic acid were not obtained from all patients included in this study; there was a shortage of diagnostic reagents during the study period, so we could not assess asymptomatic cases. Fourth, this was a single-center study with a small sample size. A larger sample size might reveal more pronounced differences between the two groups.

Conclusion

Under the circumstances of patients' emergency conditions and a lack of sufficient diagnostic reagents for COVID-19, identification of patients with possible viral infection using temperature screening, the presence or absence of respiratory symptoms, and chest CT scans was rapid and effective. Reducing the patients' respiratory exposure time and contact time with other individuals might reduce the risk of cross-infection between the medical staff and patients.

Ethics Approval and Consent to Participate

The Tongji Medical College Ethics Committee approved our observational study.
We complied with the Declaration of Helsinki with respect to the ethical treatment of patients in this retrospective clinical study. Because this study used only anonymized data, and because of the rapid emergence of this infectious disease, the requirement for written informed consent was waived.
Foreword to Special Issue on Education and Outreach

As I started to understand the seismologist's words, it became attractive.... I will never forget that day. Letisia, age 12

There are many reasons to undertake education and outreach in seismology, but the above quote (see Burrato et al.) illustrates one of the better ones. As it implies, seismology isn't just for graduate school anymore. Hundreds of high schools record seismic data on state-of-the-art seismometers, museum displays of seismic data attract millions of visitors, and Web-based earthquake maps pop up on thousands of browsers every day. These education and outreach activities result in tangible benefits, both to the general public and to the scientific community. For the public, the general level of scientific literacy and knowledge about earthquake hazards is elevated. For the scientific community, these activities increase the visibility of geophysical research among the public and provide a focal point for involvement by the next generation of students. These efforts may also help reduce earthquake risk overall. This issue highlights these recent education and outreach efforts in the U.S. and across the globe. A major theme is the development of schoolyard seismology efforts, but also discussed are descriptions of museum displays, new technical developments, innovative college classes, and two major seismology-oriented public outreach programs. We hope that this issue will provide a resource and inspiration for future action from a larger number of scientists. As the Earth scientists of today, it is our responsibility to share our passion with the potential Earth scientists of tomorrow, and to ignite a similar
// Shared/src/main/java/me/zowpy/emerald/shared/SharedEmerald.java
package me.zowpy.emerald.shared;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonObject;
import io.github.zowpy.jedisapi.JedisAPI;
import io.github.zowpy.jedisapi.redis.RedisCredentials;
import me.zowpy.emerald.shared.server.EmeraldServer;
import lombok.Getter;
import lombok.Setter;
import me.zowpy.emerald.shared.jedis.SharedJedisSubscriber;
import me.zowpy.emerald.shared.manager.GroupManager;
import me.zowpy.emerald.shared.manager.ServerManager;
import me.zowpy.emerald.shared.server.ServerProperties;
import org.bukkit.Bukkit;
import org.bukkit.entity.Player;

import java.util.Collection;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

/**
 * This Project is property of Zowpy © 2021
 * Redistribution of this Project is not allowed
 *
 * @author Zowpy
 * Created: 8/10/2021
 * Project: Emerald
 */
@Getter
public class SharedEmerald {

    public static Gson GSON = new GsonBuilder().serializeNulls().create();

    private final UUID uuid;
    private final JedisAPI jedisAPI;
    private final ServerManager serverManager;
    private final GroupManager groupManager;

    @Setter
    private ServerProperties serverProperties;

    public SharedEmerald(UUID uuid, RedisCredentials credentials) {
        this.uuid = uuid;
        this.serverManager = new ServerManager(this);
        this.groupManager = new GroupManager();
        (this.jedisAPI = new JedisAPI(credentials)).registerSubscriber(new SharedJedisSubscriber(this));
    }

    public void executeCommand(EmeraldServer server, String command) {
        JsonObject jsonObject = new JsonObject();
        jsonObject.addProperty("command", command);
        jsonObject.addProperty("uuid", server.getUuid().toString());

        jedisAPI.getJedisHandler().write("command###" + jsonObject);
    }

    /**
     * Get all online admin users.
     */
    public List<Player> getAdminUsers() {
        return Bukkit
                .getServer()
                .getOnlinePlayers()
                .stream()
                .filter(p -> p.hasPermission("emerald.admin"))
                .collect(Collectors.toList());
    }
}
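A minimal usage sketch for the class above. The SharedEmerald constructor and the executeCommand signature are taken from the source; how the RedisCredentials and the target EmeraldServer are obtained is an assumption and will depend on the surrounding plugin.

// Hypothetical caller; only SharedEmerald(UUID, RedisCredentials) and
// executeCommand(EmeraldServer, String) are confirmed by the class above.
import io.github.zowpy.jedisapi.redis.RedisCredentials;
import me.zowpy.emerald.shared.SharedEmerald;
import me.zowpy.emerald.shared.server.EmeraldServer;

import java.util.UUID;

public class SharedEmeraldUsage {

    public static void broadcastRestart(RedisCredentials credentials, EmeraldServer target) {
        // One SharedEmerald per plugin instance; the UUID identifies this node.
        SharedEmerald emerald = new SharedEmerald(UUID.randomUUID(), credentials);

        // Serialized as {"command": ..., "uuid": ...} and published over Redis
        // with the "command###" prefix, as implemented in executeCommand.
        emerald.executeCommand(target, "broadcast Server restarting in 5 minutes");
    }
}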
The revelation that Donald Trump Jr. was in contact with WikiLeaks throughout the 2016 U.S. presidential campaign is just the latest in a long catalog of leaks and discoveries about previously hidden contacts between surrogates of both the Trump campaign and Russia. A media frenzy naturally follows each new disclosure, as observers look to dissect the details in a search for greater meaning. Each piece of correspondence is analyzed, placed in a timeline and parsed for relevance. What does it tell us about possible collusion? From the correspondence made public so far about the contact with WikiLeaks, there are a few distinct concerns. Most importantly, it is clear that Donald Trump Jr. was willing to engage with WikiLeaks even though the Director of National Intelligence and the Department of Homeland Security had only recently implicated the organization in aiding the dissemination of stolen material from U.S. persons and institutions. Trump Jr. informed numerous senior campaign officials of the contact, none of whom apparently were troubled by the relationship. Of most interest, several observers have noted that then-candidate Trump tweeted about WikiLeaks only 15 minutes after his son had received a message from the organization asking that he do just that. From the WikiLeaks side, the correspondence shows an organization that, although it claims to be an open, non-partisan platform for whistleblowers, actively engaged in soliciting information and offering advice. One of the direct messages to Trump Jr. suggested that the campaign promote a narrative of a rigged election in the case that Hillary Clinton won the election. While fomenting such chaos would be of clear interest to an adversary like Russia, it is hardly the expected behavior of a neutral whistleblower website. Certainly, FBI investigators will marry the correspondence with their existing timelines and information to help build a coherent narrative of the campaign's relationship with Russia and Russians. It's another piece in the bigger puzzle. However, each of these bombshell revelations has the effect of taking our attention away from the bigger picture. It is too easy to focus on the individual trees and not the forest. Worse, since much of the Russian effort to influence the U.S. election was secret, not every new piece of information is necessarily relevant. Over the past months, we have scrutinized the arrest of George Papadopoulos, the travels of Carter Page, information that Trump lawyer Michael Cohen was involved in negotiations with Moscow, Jared Kushner's discussions to arrange a back-channel to Moscow, the indictment of former campaign manager Paul Manafort, and a variety of Russian contacts with Attorney General Jeff Sessions and former National Security Advisor Michael Flynn. Also, the dossier produced by former British spy Christopher Steele has received attention from journalists, politicians and investigators alike. There is no shortage of information on which to speculate. However, from my perspective, the importance of the latest disclosure is straightforward. The WikiLeaks contact fits the same pattern as the reported interaction with Russia. Over the many weeks and months of the campaign, nobody on the Trump team ever chose to do the right thing. Engaging with an organization dedicated to harming the United States was a bad idea. WikiLeaks has published hundreds of thousands of pages of classified reports from the CIA, NSA, State Department and U.S. military. Their odious reputation was no secret.
CIA Director and former Republican Congressman Mike Pompeo has characterized WikiLeaks as a “non-State hostile intelligence service that is often abetted by state actors like Russia.” Someone in the campaign surely should have suggested that they contact the FBI or, at the very least, discontinue contact. Of course, this failure to do the right thing was even more pronounced as it relates to Russia. Russia is a hostile country that seeks to harm U.S. interests around the world. Did nobody on the campaign team ever think that abetting the theft and disclosure of stolen material from American citizens was a bad idea? Did they even bother to ask their lawyers or security personnel? More fundamentally, why would any Presidential campaign feel the need to have regular and sustained contacts with Russia – a hostile power? How does Russia help to win votes in Iowa and New Hampshire? Did any other campaign have similar contacts? Did the Trump campaign have similar contacts with officials from China, India, Britain or Japan? At the very least, it would seem to be a distraction or waste of time for a busy campaign trying to attract party delegates, media support and developing policies to appeal to voters. If there was an innocent reason, why hasn’t the Administration ever even tried to explain its rationale? At best, they simply tried to hide the contact and, when exposed, explained that such contact was normal. It was not. The failure to justify or disclose their activities all but ensured that we would seek alternative explanations. From my perspective, while collusion or evidence of a conspiratorial relationship with Russian intelligence is yet unproven, the public disclosures at the very least display a willingness to collude. They signal intent. It is hard to imagine that Donald Trump Jr. was savvy enough to pull off a secret relationship with Russia but less hard to contemplate Paul Manafort doing so. He had years of operating in the corrupt and quasi-legal world of Russian money and espionage. If you accept ill intent, it is easy to see the WikiLeaks platform as a useful place for both the Russians and the Trump campaign to disclose information in a deniable manner. So, while the revelation of a contact between WikiLeaks and Donald Trump Jr. may not yet be a smoking gun, it is yet another tree in the forest of deception. It shows a willingness to break rules and fits into the narrative of a cover-up. We will have to leave it to the professional investigators to determine if the WikiLeaks correspondence is truly relevant to the larger narrative of Russian efforts to attack the election of 2016. Nonetheless, the actions of the Trump campaign, taken together with the multi-faceted Russian attack that included cyber-attacks, cyber theft, propaganda, disinformation, attempted espionage, use of trolls, bots and non-attributable advertising and content creation, suggest that we need more than just a legal or partisan approach to face the challenge. Instead, we need to work with our foreign partners to develop a response, and a non-partisan 9/11-type Commission to look at how we can be better prepared in the future. We also need those seeking our highest offices to have a better conception of right and wrong.
Blue laser diodes (LDs) have the potential for increasing the storage capacity of optical disks over the densities currently available in compact disk systems based on red laser diodes. Increased storage capacity will open new markets for compact disks in motion picture distribution. One class of blue-emitting elements is based on group III-V nitride films such as GaN epilayers grown on sapphire substrates. To fabricate a laser, a ridge structure is constructed to provide an appropriate optical cavity having parallel mirrors at each end of the cavity. The laser cavity is typically formed by sandwiching an active gain layer between two layers of GaN doped to form n-type and p-type semiconductors. The GaN layers are constructed so as to form a waveguide by depositing the various layers and then etching the stack to form a ridge structure whose vertical walls provide the waveguide. The ends of the waveguide are mirrors that reflect the light generated in the active region back and forth. In GaN-based LDs the mirrors are typically formed by cleaving or etching the ends of the waveguide to provide the reflecting surface of the mirror. The ridge structure discussed above has two problems. First, the structure has poor heat dissipation. The heat generated in the active region must be dissipated either through the substrate or through the walls of the ridge structure. The path to the substrate is restricted by the width of the ridge structure; hence, removing heat by transferring it to the substrate, which is typically in thermal contact with a heat sink, is difficult. The second problem with ridge-structured devices is the high voltages needed to operate the devices. The p-contact is typically an ohmic contact on the top of the ridge. The resistance of this contact must be overcome to drive the device. To reduce this resistance, the contact needs to have as large an area as possible. However, the available area is limited by the area on the top of the ridge. Broadly, it is the object of the present invention to provide an improved edge emitting laser diode. It is a further object of the present invention to provide an edge emitting diode that does not utilize a ridge structure, and hence, avoids the above-described problems. These and other objects of the present invention will become apparent to those skilled in the art from the following detailed description of the invention and the accompanying drawings.
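The area argument can be made concrete with the standard relation R = ρc / A, where ρc is the specific contact resistivity: shrinking the ridge-top contact area directly raises the contact resistance and hence the drive voltage. A quick illustration with purely illustrative numbers, not taken from any particular device:

public class ContactResistance {
    public static void main(String[] args) {
        // Illustrative values only; real devices vary widely.
        double rhoC = 1e-3;           // specific contact resistivity, ohm*cm^2
        double wideArea = 2e-5;       // 20 um x 100 um ridge top, in cm^2
        double narrowArea = wideArea / 4;

        // R = rhoC / A: quartering the contact area quadruples the resistance.
        System.out.printf("R (wide ridge)   = %.1f ohm%n", rhoC / wideArea);   // 50.0 ohm
        System.out.printf("R (narrow ridge) = %.1f ohm%n", rhoC / narrowArea); // 200.0 ohm
    }
}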
package com.journal.nn.school123.rest.info.journal; import androidx.annotation.NonNull; import com.journal.nn.school123.pojo.Data; import com.journal.nn.school123.rest.AbstractPostRequest; import com.journal.nn.school123.rest.RequestParameters; import java.text.DateFormat; import java.text.SimpleDateFormat; import java.util.Calendar; import java.util.HashMap; import java.util.Locale; import java.util.Map; import static com.journal.nn.school123.util.CurrentPeriodUtil.clearCalendar; public class JournalInfo extends AbstractPostRequest { public JournalInfo(@NonNull RequestParameters requestParameters, @NonNull JournalInfoListener listener) { super("/act/GET_STUDENT_DAIRY", requestParameters, listener ); listener.setCalendar(getCalendar()); } protected Map<String, String> getParams() { DateFormat dateFormat = new SimpleDateFormat("dd.MM.yyyy", Locale.getDefault()); Data data = requestParameters.getData(); Calendar calendar = getCalendar(); calendar.set(Calendar.DAY_OF_WEEK, Calendar.MONDAY); Map<String, String> params = new HashMap<>(); params.put("cls", data.getClassId()); params.put("pClassesIds", ""); params.put("begin_dt", dateFormat.format(calendar.getTime())); calendar.set(Calendar.DAY_OF_WEEK, Calendar.SUNDAY); params.put("end_dt", dateFormat.format(calendar.getTime())); params.put("student", requestParameters.getStudentId()); return params; } protected Calendar getCalendar() { Calendar calendar = Calendar.getInstance(); calendar.set(Calendar.DAY_OF_WEEK, Calendar.MONDAY); clearCalendar(calendar); return calendar; } }
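getParams above derives the begin_dt/end_dt pair by pinning one Calendar first to Monday and then to Sunday of the current week. A standalone sketch of that derivation follows. Note that java.util.Calendar resolves DAY_OF_WEEK within the week defined by the locale's first day of week, so in a Sunday-first locale the second set() jumps back to the Sunday that starts the same week rather than forward to the one that ends it, inverting the range; this pitfall is worth testing in the locales the app actually runs under.

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;

public class WeekRange {
    public static void main(String[] args) {
        SimpleDateFormat fmt = new SimpleDateFormat("dd.MM.yyyy", Locale.getDefault());
        Calendar calendar = Calendar.getInstance();

        calendar.set(Calendar.DAY_OF_WEEK, Calendar.MONDAY);
        String begin = fmt.format(calendar.getTime());

        // With a Monday-first locale this advances to the following Sunday;
        // with a Sunday-first locale it jumps BACK to the Sunday that starts
        // the same week, producing an inverted range.
        calendar.set(Calendar.DAY_OF_WEEK, Calendar.SUNDAY);
        String end = fmt.format(calendar.getTime());

        System.out.println(begin + " .. " + end);
    }
}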
// src/som/primitives/arrays/DoPrim.java
package som.primitives.arrays;

import com.oracle.truffle.api.CompilerDirectives;
import com.oracle.truffle.api.dsl.GenerateNodeFactory;
import com.oracle.truffle.api.dsl.Specialization;

import bd.primitives.Primitive;
import som.interpreter.nodes.ExpressionNode;
import som.interpreter.nodes.dispatch.BlockDispatchNode;
import som.interpreter.nodes.dispatch.BlockDispatchNodeGen;
import som.interpreter.nodes.nary.BinaryComplexOperation;
import som.interpreter.nodes.specialized.SomLoop;
import som.vm.constants.Nil;
import som.vmobjects.SArray;
import som.vmobjects.SArray.PartiallyEmptyArray;
import som.vmobjects.SBlock;

@GenerateNodeFactory
@Primitive(selector = "do:", receiverType = SArray.class, disabled = true)
public abstract class DoPrim extends BinaryComplexOperation {
  @Child private BlockDispatchNode block = BlockDispatchNodeGen.create();

  // TODO: tag properly, it is a loop and an access
  private void execBlock(final SBlock block, final Object arg) {
    this.block.executeDispatch(new Object[] {block, arg});
  }

  @Specialization(guards = "arr.isEmptyType()")
  public final SArray doEmptyArray(final SArray arr, final SBlock block) {
    int length = arr.getEmptyStorage();
    try {
      if (SArray.FIRST_IDX < length) {
        execBlock(block, Nil.nilObject);
      }
      for (long i = SArray.FIRST_IDX + 1; i < length; i++) {
        execBlock(block, Nil.nilObject);
      }
    } finally {
      if (CompilerDirectives.inInterpreter()) {
        SomLoop.reportLoopCount(length, this);
      }
    }
    return arr;
  }

  @Specialization(guards = "arr.isPartiallyEmptyType()")
  public final SArray doPartiallyEmptyArray(final SArray arr, final SBlock block) {
    PartiallyEmptyArray storage = arr.getPartiallyEmptyStorage();
    int length = storage.getLength();
    try {
      if (SArray.FIRST_IDX < length) {
        execBlock(block, storage.get(SArray.FIRST_IDX));
      }
      for (long i = SArray.FIRST_IDX + 1; i < length; i++) {
        execBlock(block, storage.get(i));
      }
    } finally {
      if (CompilerDirectives.inInterpreter()) {
        SomLoop.reportLoopCount(length, this);
      }
    }
    return arr;
  }

  @Specialization(guards = "arr.isObjectType()")
  public final SArray doObjectArray(final SArray arr, final SBlock block) {
    Object[] storage = arr.getObjectStorage();
    int length = storage.length;
    try {
      if (SArray.FIRST_IDX < length) {
        execBlock(block, storage[SArray.FIRST_IDX]);
      }
      for (long i = SArray.FIRST_IDX + 1; i < length; i++) {
        execBlock(block, storage[(int) i]);
      }
    } finally {
      if (CompilerDirectives.inInterpreter()) {
        SomLoop.reportLoopCount(length, this);
      }
    }
    return arr;
  }

  @Specialization(guards = "arr.isLongType()")
  public final SArray doLongArray(final SArray arr, final SBlock block) {
    long[] storage = arr.getLongStorage();
    int length = storage.length;
    try {
      if (SArray.FIRST_IDX < length) {
        execBlock(block, storage[SArray.FIRST_IDX]);
      }
      for (long i = SArray.FIRST_IDX + 1; i < length; i++) {
        execBlock(block, storage[(int) i]);
      }
    } finally {
      if (CompilerDirectives.inInterpreter()) {
        SomLoop.reportLoopCount(length, this);
      }
    }
    return arr;
  }

  @Specialization(guards = "arr.isDoubleType()")
  public final SArray doDoubleArray(final SArray arr, final SBlock block) {
    double[] storage = arr.getDoubleStorage();
    int length = storage.length;
    try {
      if (SArray.FIRST_IDX < length) {
        execBlock(block, storage[SArray.FIRST_IDX]);
      }
      for (long i = SArray.FIRST_IDX + 1; i < length; i++) {
        execBlock(block, storage[(int) i]);
      }
    } finally {
      if (CompilerDirectives.inInterpreter()) {
        SomLoop.reportLoopCount(length, this);
      }
    }
    return arr;
  }

  @Specialization(guards = "arr.isBooleanType()")
  public final SArray
doBooleanArray(final SArray arr, final SBlock block) { boolean[] storage = arr.getBooleanStorage(); int length = storage.length; try { if (SArray.FIRST_IDX < length) { execBlock(block, storage[SArray.FIRST_IDX]); } for (long i = SArray.FIRST_IDX + 1; i < length; i++) { execBlock(block, storage[(int) i]); } } finally { if (CompilerDirectives.inInterpreter()) { SomLoop.reportLoopCount(length, this); } } return arr; } @Override public boolean isResultUsed(final ExpressionNode child) { return false; } }
The quickening of the national spirit: Cecil Sharp and the pioneers of the folk-dance revival in English state schools

In 1910, Cecil Sharp (1859–1924), as an Inspector of Training for Teachers at the Board of Education, wrote, "In order that a boy or girl may become a good Englishman, or a good Englishwoman, training in English characteristics must be a prominent feature in education ... English History, English games, English ideals are of the utmost importance. A wholly national and, at the same time, a wholly spontaneous expression is found in folk-dances and songs." Sharp became the driving force behind the English folk-dance revival which commenced during the early years of the twentieth century and emanated from the work of the Folk-Song Society, which had been founded in 1898. When the English Folk-Dance Society was established in 1911 its purpose was to disseminate a knowledge of English Folk-Dances, Singing Games and Folk-Songs and to encourage the practice of them in their traditional forms. The Society, with its staff of qualified teachers, local correspondents and several regional branches, provided instruction, resources and vocational courses at the Shakespeare Memorial Theatre, Stratford-upon-Avon. The Society promoted folk-dance in educational and recreative contexts and gave practical help by arranging classes, demonstrations and competitions. The quickening of the national spirit through dance did not become a living reality until a systematic effort to recover what had virtually become an extinct dance heritage was undertaken by Cecil Sharp, who based his theories on those of Sir James Frazer, an anthropologist and folklorist. Sharp had been collecting folk songs in Oxfordshire in 1899 when he encountered a group of morris dancers from Headington. Although five years elapsed between this meeting and the eventual reconstruction, demonstration and teaching of the dances to others, the role of folk-dance collector and disseminator soon emerged as the major commitment in his life. He searched the countryside, befriended village elders and persuaded them to whistle or play the tunes to him and to show or describe dance steps and figures. He
// SPDX-FileCopyrightText: 2019-present Open Networking Foundation <<EMAIL>> // // SPDX-License-Identifier: Apache-2.0 package set import ( api "github.com/atomix/atomix-api/go/atomix/primitive/set" "github.com/stretchr/testify/assert" "testing" ) func TestOptions(t *testing.T) { request := &api.EventsRequest{} assert.False(t, request.Replay) WithReplay().beforeWatch(request) assert.True(t, request.Replay) }
package com.me.ui.sample.pattern.behavior.strategy;

import com.me.ui.util.LogUtils;

/**
 * @author tangqi on 17-5-15.
 */
public class BusStrategy implements TrafficStrategy {

    @Override
    public void run() {
        LogUtils.d(BusStrategy.class, "Take the bus"); // originally logged "坐大巴" ("take the bus")
    }
}
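A typical use of the strategy pattern pairs an interchangeable strategy with a context object that delegates to it. The context class below is a hypothetical sketch; only TrafficStrategy and BusStrategy come from the source, and the sketch assumes it lives in the same package.

public class TravelContext {

    private final TrafficStrategy strategy;

    public TravelContext(TrafficStrategy strategy) {
        this.strategy = strategy;
    }

    public void go() {
        strategy.run(); // delegates to whichever strategy was injected
    }

    public static void main(String[] args) {
        new TravelContext(new BusStrategy()).go(); // logs "Take the bus"
    }
}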
# Repository: BuiltonDev/python-sdk
from builton_sdk.api_models import Event


def test_rest_decorators():
    event = Event("request", "props")
    methods = ['get', 'get_all', 'refresh', 'search']
    for m in methods:
        assert hasattr(event, m)


def test_init_sets_api_path():
    event = Event("request", "props")
    assert event.api_path == "events"
A plurality of lower-power inverters can be connected in parallel to form a higher-power inverter. In order to connect a plurality of inverters to form a parallel inverter, the main problem to be solved is how to reduce the circulating current among the modules. It is necessary not only to scale the load-carrying capacity by integer multiples, but also to distribute the load evenly, so that all of the inverters have the same MTBF (Mean Time Between Failures) in theory, thereby maximizing the MTBF of the parallel system. In order to achieve such an object, there are the following solutions in the art. The first solution adopts a method of master-slave control in order to connect a plurality of inverters in parallel; that is, a single control unit is used to control all of the power modules. All the power modules utilize the same SPWM (sine pulse width modulation) driving signal to obtain substantially the same output, and its control block diagram is shown in FIG. 1. This control solution resolves the synchronization of the output voltages effectively, and adding a means of regulating the bus voltage can achieve higher preciseness of current sharing. However, the disadvantage is the centralized form of the control unit: a fault occurring in the control unit may paralyze the whole system. Therefore, after the system is connected in parallel, the improvement of its MTBF is rather small because of this fault bottleneck. To overcome the disadvantage of the first solution, the second solution is provided. In this solution, every inverter is provided with a control unit, but only one control unit is turned on at any time by way of intelligent selection. If any fault occurs in the active control unit, the system will jump to some other control unit automatically. Although the second solution resolves the problem of the fault bottleneck, the complexity and the cost of such a system are increased. Also, the switching of the driving wave is technically dangerous, and is likely to lead to damage of the power tube. Moreover, switching the control unit causes the amplitude or phase of the output voltage to jump to some extent, and reduces the purity of the output voltage. Meanwhile, only a few power modules can be connected in parallel, since the load-carrying capacity of the control circuit is limited. Another disadvantage of this solution is that a logic control unit must be added because of the necessity of controlling the switches centrally; thus it not only increases the cost but also adds a new fault bottleneck. The third solution is provided to reduce the fault rate of the master control unit and to prevent the danger brought by switching of the driving wave. In this solution, the parallel point is moved forward. The control block diagram of the improved parallel inverter is shown in FIG. 2; that is, the parallel point is moved forward to the output point of the voltage regulation. At any moment, only one of the selection switches K1–Kn can turn on; that is, only one voltage regulating loop is selected to work, and the other voltage loops are in a state of thermal backup. In comparison with the second solution, the third solution overcomes not only the fault bottleneck of the control unit but also the danger of switching the driving wave. And since the commonly shared units are fewer, the reliability is enhanced. However, the complexity of the system switching still exists.
Switching may also cause the amplitude and phase of the output voltage to jump to some extent, and the load-carrying capacity of the control circuit cannot be improved. Further, only a few power modules can be connected in parallel. This is essentially still a kind of centralized control: it cannot overcome the problem of switching centralized control switches, and it must also add a logic control unit, which increases the cost and introduces a new fault bottleneck. Moreover, the user is likely to pull out the inverter being used as the master module to meet hot-plug requirements, such that the problem caused by master-slave switching becomes even more serious.
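The intelligent selection described in the second and third solutions reduces, at its core, to keeping exactly one healthy control unit driving the shared loop and failing over when it faults. The following is a conceptual sketch only, not inverter firmware; every name in it is hypothetical:

import java.util.List;

public class ControlUnitSelector {

    interface ControlUnit {
        boolean healthy();
        void enable();   // close this unit's selection switch (K1..Kn)
        void disable();  // open it
    }

    private final List<ControlUnit> units;
    private ControlUnit active;

    ControlUnitSelector(List<ControlUnit> units) {
        this.units = units;
    }

    /** Keep exactly one healthy unit driving the shared voltage loop. */
    void tick() {
        if (active != null && active.healthy()) {
            return; // current master is fine
        }
        if (active != null) {
            active.disable(); // isolate the faulted master
            active = null;
        }
        for (ControlUnit u : units) { // promote the first healthy standby
            if (u.healthy()) {
                u.enable();
                active = u;
                break;
            }
        }
    }
}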
/** * The VirtualPCIPassthrough.VmiopBackingInfo data object type contains information about the plugin * that emulates the virtual device via the VMIOP plugin interface. * At present, this interface is only used to implement vGPU. * * @author Stefan Dilk <stefan.dilk@freenet.ag> * @version 7.0.2 * @since 6.0 */ @SuppressWarnings("unused") public class VirtualPCIPassthroughVmiopBackingInfo extends VirtualPCIPassthroughPluginBackingInfo { private String vgpu; private Boolean migrateSupported; @Override public String toString() { return "VirtualPCIPassthroughVmiopBackingInfo{" + "vgpu='" + vgpu + '\'' + ", migrateSupported=" + migrateSupported + "} " + super.toString(); } public String getVgpu() { return vgpu; } public void setVgpu(final String vgpu) { this.vgpu = vgpu; } public Boolean getMigrateSupported() { return migrateSupported; } public void setMigrateSupported(final Boolean migrateSupported) { this.migrateSupported = migrateSupported; } }
In order to improve the performance of organic electronic (OE) devices, such as organic field effect transistors (OFETs) or organic light emitting diodes (OLEDs), it is desirable to be able to deposit, from a solution, the individual functional layers, for example the semiconductor layer, in a specific, confined place on a substrate. Bank structures, and methods of forming them, are known to be used for defining such confined places on a substrate. For example, US 2007/0023837 A1, WO 2008/117395 A1, EP 1 933 393 A1, GB 2,458,454 A, GB 2,462,845 A, US 2003/017360 A1, US 2007/190673 A1, WO 2007/023272 A1 and WO 2009/077738 A1 individually and collectively are representative disclosures of such known structures and methods. However, none of these disclosures provides a manufacturing process or material that is compatible with ink-jet printing or photolithography, or a solution-processable material that is usable without harmful reactive or migrating chemicals or methods. Thus it would be desirable to provide structure-defining materials for use in forming bank structures that are compatible with ink-jet printing or photolithography and are essentially free of the aforementioned harmful reactive or migrating chemicals. Additionally it would be desirable to provide methods of forming such bank structures that are compatible with ink-jet printing or photolithography and do not require the use of processes such as halocarbon reactive ion etching. Still further it would be desirable to provide OE devices manufactured using such desirable structure-defining materials and structure-forming methods.
GRAND RAPIDS, MI—Steeling himself against brutal market conditions and an unforgiving fiscal climate, fearless local man Calvin Ordway boldly set out into the U.S. economy this week, sources close to the 32-year-old confirmed. Clad in a dress shirt and khakis, armed with only his wits and basic computer skills, Ordway reportedly showed no hesitation as he opened his front door and strode through the breach into a bleak economic landscape where there likely exists absolutely no demand for any task he can perform or product he can create. “Does this man have no fear of the financial ruin that almost certainly awaits him?” said economist Carol F. Weiss, describing the U.S. economy as “entirely inhospitable to humankind.” “He has ventured into a dark and treacherous place. Where he will emerge, whether he will emerge, is impossible to say. We can only hope against hope that he remains solvent long enough to make it out with his assets intact.” “God help him,” Weiss added. According to sources, the college-educated Ordway entered the economy despite knowing it to be almost entirely devoid of revenue streams—and knowing that while his chances of finding profit in the barren, sparse wasteland were exceedingly small, the likelihood he would lose his way and fall victim to financial exposure was quite high. Ordway was last seen behind the wheel of a 2001 Toyota Camry with 200,000 miles on its odometer, driving in the direction of a job believed to be located in a particularly tempestuous and unpredictable economic sector. “This poor soul has left himself at the mercy of the economy of the United States of America,” said former SEC Chairman Arthur Levitt, appearing shocked and incredulous as he spoke to reporters. “Good Lord, I wouldn’t go anywhere near that place on a good day, let alone in times like these. To make it in this economy you need contacts, you need political alliances, you need to know how to game the system in your favor.” “One false move could bankrupt him or, God forbid, something worse,” he added. “Tax codes, mortgage lenders, health insurers—if he loses his footing for even a moment, he could plunge into bottomless debt.” Agreeing that each transaction he makes puts him at further risk and brings him closer and closer to financial oblivion, leading economists nonetheless acknowledged a grudging respect for the single-minded courage of Ordway’s “outright suicide mission.” “He’s a damn fool, but you have to admire him,” economist Paul Krugman said. “To go straight into the belly of the beast, to willfully forsake the comfort of his home and family, to throw himself into the nightmarish heart of fiscal danger so willingly. Call him crazy if you want. The man has brass balls.” Ordway’s wife, Louisa, meanwhile, expressed different concerns. “I just hope he makes it back and doesn’t leave me and the children here alone,” she told reporters. “He’s a brave man. A stupid man, maybe. But a brave man, and I love him for it.” While the motivation for Ordway’s daring trek remained uncertain, reports indicated he may have set forth on a quest for the fabled treasure of middle-class respectability said to lie hidden somewhere in the nation’s deepest economic recesses.
// src/utils/loadAndBundleSpec.ts
import type { Source, Document } from '@redocly/openapi-core';
// eslint-disable-next-line import/no-internal-modules
import { bundle } from '@redocly/openapi-core/lib/bundle';
// eslint-disable-next-line import/no-internal-modules
import { Config } from '@redocly/openapi-core/lib/config/config';
/* tslint:disable-next-line:no-implicit-dependencies */
import { convertObj } from 'swagger2openapi';
import { OpenAPISpec } from '../types';
import { IS_BROWSER } from './dom';

export async function loadAndBundleSpec(specUrlOrObject: object | string): Promise<OpenAPISpec> {
  const config = new Config({});
  const bundleOpts = {
    config,
    base: IS_BROWSER ? window.location.href : process.cwd(),
  };

  if (IS_BROWSER) {
    config.resolve.http.customFetch = global.fetch;
  }

  if (typeof specUrlOrObject === 'object' && specUrlOrObject !== null) {
    bundleOpts['doc'] = {
      source: { absoluteRef: '' } as Source,
      parsed: specUrlOrObject,
    } as Document;
  } else {
    bundleOpts['ref'] = specUrlOrObject;
  }

  const {
    bundle: { parsed },
  } = await bundle(bundleOpts);

  return parsed.swagger !== undefined ? convertSwagger2OpenAPI(parsed) : parsed;
}

export function convertSwagger2OpenAPI(spec: any): Promise<OpenAPISpec> {
  console.warn('[ReDoc Compatibility mode]: Converting OpenAPI 2.0 to OpenAPI 3.0');
  return new Promise<OpenAPISpec>((resolve, reject) =>
    convertObj(spec, { patch: true, warnOnly: true, text: '{}', anchors: true }, (err, res) => {
      // TODO: log any warnings
      if (err) {
        return reject(err);
      }
      resolve(res && (res.openapi as any));
    }),
  );
}
William Proxmire of Wisconsin retired from Congress almost 30 years ago, but he would fit right in as a senator today. An avowed opponent of government waste, he famously created the “Golden Fleece Award” to draw attention to whatever he deemed to be frivolous Federal spending. Some of the awards still hold as much crowd appeal as they did back then—the fourth award, in 1975, went to the U.S. Congress for “living high off the hog while much of the rest of the country is suffering economic disaster.” But many of his awards went to the National Science Foundation, NASA, and scientific agencies, targeting what he saw as pointless scientific research. These misguided awards reflect a widespread but wrongheaded understanding of how scientific progress and breakthroughs are made. One man whose work was somehow spared a Golden Fleece Award was Thomas Brock, but had Proxmire heard about it he would surely have been tempted to remedy this oversight. Brock studied how bacteria live in the hot springs of Yellowstone National Park. One can easily imagine Proxmire demanding to know why the American taxpayer should pay for such a boondoggle. “Perhaps we need to cure the hot springs of an infection?” he might have chortled. But in fact, Brock’s real motivation probably would not have placated Proxmire. Brock was a scientist, and like most scientists he was driven first and foremost by curiosity. Brock wanted to understand how life could survive in boiling water, an environment in which, according to everything known at the time, there could be no life. Through clever experiments he discovered an organism that could thrive in extreme environments. And his discovery triggered a revolution in biology, medicine, criminal justice and beyond. The organism he discovered produces an enzyme that is at the core of a technique, known as PCR (polymerase chain reaction), which enables the amplification and identification of infinitesimal quantities of DNA. Even a single molecule can be enough. PCR provides the most sensitive test for HIV/AIDS and many other diseases. It is also familiar to fans of CSI as the forensic method used for matching trace amounts of hair, blood or other bodily fluids to the perpetrators or victims of crimes. PCR is the engine under the hood of the Innocence Project, which has so far exonerated 343 wrongfully convicted people (including 20 who spent time on death row), and has helped convict 147 real perpetrators. And it is because of PCR that we know about the hanky-panky going on 50,000 years ago between our human ancestors and their Neanderthal cousins. The fortuitous development of a transformative technology from curiosity-driven research is the rule, not the exception. Most of modern technology—the good, the bad, and the ugly—builds on scientific discoveries made by scientists who just wanted to understand nature. Nuclear weapons, which transformed world geopolitics after World War II, were an unexpected by-product of perhaps the most fundamental scientific discovery of the twentieth century, Albert Einstein’s iconic E=mc². Medical diagnosis by X-rays and MRI were both applications of earlier fundamental discoveries in physics. There are few technologies today that do not owe their existence to the curiosity and persistence of a scientist five, 10 or even 50 years ago. Much has been written in the business sections about how companies that seek only to maximize short-term profits lose in the long run. The problem is that companies must satisfy their shareholders, and shareholders want immediate results.
But an increasing number of companies are trying to push back by justifying their focus on long-term results. In scientific research the consequences of such short-sightedness are even more severe, yet government funding for basic biomedical research is increasingly tied to the delivery of short-term goals. Brock was not included in the Nobel Prize awarded for PCR (that honor went solely to Kary Mullis). But, fittingly, in 2013 Brock was awarded the Golden Goose Award, established just a year earlier to officially recognize scientists whose federally funded basic research has led to innovations or inventions with a significant impact on humanity or society. Anthony Zador, MD, PhD, is Professor and Chair of Neuroscience at Cold Spring Harbor Laboratory. He studies the brain circuits underlying thoughts, feelings and memories, and hopes that the insights from real brains will help in the design of next generation artificial brains.
We need to circle back to CNN’s epic failure last Friday. The network reported that a man named Michael Erickson had emailed Donald Trump, Donald Trump, Jr., and other Trump officials with a decryption key to a trove of emails from the DNC and John Podesta that was taken by Wikileaks. The “bombshell” aspect is that this email was sent before the documents were made public. Yeah—wrong. All of those documents were already public and the news organization screwed up the dates. They’ve since corrected the report: Statement from CNN PR pic.twitter.com/H7XZ8Fuzdi — Oliver Darcy (@oliverdarcy) December 8, 2017 Candidate Donald Trump, his son Donald Trump Jr. and others in the Trump Organization received an email in September 2016 offering a decryption key and website address for hacked WikiLeaks documents, according to an email provided to congressional investigators. The September 14 email was sent during the final stretch of the 2016 presidential race. CNN originally reported the email was released September 4 -- 10 days earlier -- based on accounts from two sources who had seen the email. The new details appear to show that the sender was relying on publicly available information. The new information indicates that the communication is less significant than CNN initially reported. Yet, prior to this, The Washington Post published a story that cleaned up CNN’s mess [emphasis mine]: A 2016 email sent to candidate Donald Trump and top aides pointed the campaign to hacked documents from the Democratic National Committee that had already been made public by the group WikiLeaks a day earlier. The email — sent the afternoon of Sept. 14, 2016 — noted that “Wikileaks has uploaded another (huge 678 mb) archive of files from the DNC” and included a link and a “decryption key,” according to a copy obtained by The Washington Post. The writer, who said his name was Michael J. Erickson and described himself as the president of an aviation management company, sent the message to the then-Republican nominee as well as his eldest son, Donald Trump Jr., and other top advisers. The day before, WikiLeaks had tweeted links to what the group said was 678.4 megabytes of DNC documents. But if 2 sources independently confirm to you that it's a banana..... https://t.co/eWWYEb96m8 — Larry O'Connor (@LarryOConnor) December 9, 2017 It's amazing to watch CNN stand up and defend the anonymous sources who burned their credibility like this. Must be pretty important and well connected sources then, definitely not part of the House intel committee. https://t.co/8lxIRBdP8R — Stephen Miller (@redsteeze) December 9, 2017 I'm seeing lots of tweets saying CNN should out the sources that misinformed @MKRaju. But @CNNPR says the network does not believe that the sources *intended* to deceive... https://t.co/wKl9rX7Ibc — Brian Stelter (@brianstelter) December 9, 2017 CNN's initial reporting of the date on an email sent to members of the Trump campaign about Wikileaks documents, which was confirmed by two sources to CNN, was incorrect. We have updated our story to include the correct date, and present the proper context for the timing of email — CNN Communications (@CNNPR) December 8, 2017 They got the dates wrong. ABC News stepped on a rake with dates when they reported that Donald Trump directed Michael Flynn to make contact with the Russians during the campaign. Actually, this was done after Trump had won the election. This was part of the transition. Nothing new. 
Brian Ross, who reported this on air, has been suspended for four weeks without pay. So, will CNN do the same? Nope. They’ve decided that since the sources didn’t intend to deceive, no disciplinary action will be taken. Also, CNN peddled this story for hours before correcting it. One CNN reporter is quoted as describing the whole affair as a “colossal f**k up.” That statement is accurate (via Washington Examiner): The story claimed the 2016 GOP nominee, his son Donald Trump Jr., and various campaign advisers received an email in September 2016 offering them advance access to an impending WikiLeaks dump of emails stolen from Democratic National Committee staffers and Hillary Clinton’s campaign chairman, John Podesta. The CNN report hinged entirely on an email that was supposedly sent on Sept. 4. The September email to Trump and his team included a “decryption key and website address” for the WikiLeaks dump, the article added. There’s a major, glaring error in this story, which CNN promoted all Friday morning and into the afternoon. […] “What a colossal f**k up,” one CNN reporter told the Examiner Friday in response to the story’s unraveling. So what went wrong? For starters, let’s look at CNN’s source material. The Sept. 14 email was uncovered by the House Intelligence Committee, which is investigating Russia’s alleged meddling in the 2016 election. Federal investigators are currently poring over thousands of emails sent and received by Trump officials and family members, which means they’re also looking into spam and junk emails. Donald Trump Jr., who was copied on the Sept. 14 email, was asked about the note this week during a closed-door session with committee members. How the contents of that email got into media hands is anybody’s guess, but it’s not the best look for the committee. Now, regarding the bogus CNN report: Its authors, Manu Raju and Jeremy Herb, claimed the email had been “described” by “multiple sources” and “verified” by Trump Jr.'s attorney. CBS News also independently misreported that the email was dated September 4. The Post and the Wall Street Journal, both of which acquired a copy of the Sept. 14 email, handled the story a little differently. Where CNN and CBS saw a major scoop, the Post and the WSJ saw an email lacking in credibility. The Journal’s Rebecca Ballhaus, for example, noted Friday that there were serious problems with the note. “The Sept. 14 email to Trump campaign advertising WikiLeaks emails promoted publicly available info, was riddled with typos and came from a Trump backer who had given $40 to the campaign months earlier, per email viewed by @WSJ,” she tweeted. And there were other stories that were misreported as well; one, from The Wall Street Journal, alleged that Robert Mueller had subpoenaed President Trump’s and his family’s financial records from Deutsche Bank, when it was actually the records of “people or entities close to Trump.” Donald Trump Jr. doesn’t expect an apology from the network. The only question that remains is whether CNN is an apple or a banana.
Detection of trace disulfur decafluoride in sulfur hexafluoride by gas chromatography/mass spectrometry

The method utilizes a gas chromatograph/mass spectrometer (GC/MS) equipped with a heated jet separator. S2F10 is converted to SOF2 on the hot surfaces of the low-pressure portions of the jet separator at temperatures above 150 °C by a surface-catalyzed reaction involving H2O. By this method, a direct analysis of SF6 for S2F10 content can be performed with greater sensitivity than conventional gas chromatographic methods, with a higher degree of reliability, and in a time much shorter than that required for chromatographic methods that use enrichment procedures.
// packages/leaa-www/src/pages/index/_components/Home/Home.tsx
import React from 'react';
import { SwiperImage } from '@leaa/www/src/components/SwiperImage';
import { Ax } from '@leaa/common/src/entrys';
import style from './style.less';

interface IProps {
  ax: Ax;
}

export default (props: IProps) => {
  return (
    <div className={style['wrapper']}>
      {props.ax &&
        props.ax.attachments &&
        props.ax.attachments.bannerMbList &&
        props.ax.attachments.bannerMbList.length !== 0 && (
          <SwiperImage
            lazy
            attachmentList={props.ax.attachments.bannerMbList}
            centerMode
            height={props.ax.attachments.bannerMbList[0].height}
          />
        )}
    </div>
  );
};
/* * ExplainQuery - * print out the execution plan for a given query * */ void ExplainQuery(Query *query, bool verbose, bool analyze, CommandDest dest) { List *rewritten; List *l; if (IsAbortedTransactionBlockState()) { elog(NOTICE, "(transaction aborted): %s", "queries ignored until END"); return; } if (query->commandType == CMD_UTILITY) { elog(NOTICE, "Utility statements have no plan structure"); return; } rewritten = QueryRewrite(query); if (rewritten == NIL) { elog(NOTICE, "Query rewrites to nothing"); return; } foreach(l, rewritten) ExplainOneQuery(lfirst(l), verbose, analyze, dest); }
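ExplainQuery is the backend entry point behind PostgreSQL's EXPLAIN command; the query is first rewritten, and each rewritten query gets its own plan output. The same path can be exercised from a client over JDBC; a minimal sketch follows, assuming a reachable PostgreSQL instance and a table named my_table (connection details are placeholders). Note that in the era of the snippet above the plan was emitted as NOTICE messages, whereas modern servers return it as result rows, which is what this sketch reads.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             Statement st = conn.createStatement();
             // On modern servers each plan line comes back as a result row.
             ResultSet rs = st.executeQuery("EXPLAIN SELECT * FROM my_table")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}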
May 2, 2018

Boy Scouts of America announced its new campaign “Scout Me In” on Wednesday. This builds on the program it launched in fall to allow girls to join Boy Scouts at the Cub Scouts level and then eventually at the Boy Scout level (sixth grade and up). The marketing campaign’s goal is to let girls know that they can join, too, as well as boys that might not have felt welcome in the past. One big change is that the Boy Scout level will soon be called Scouts BSA rather than Boy Scouts. The organization name won’t change.

RELATED: Will local girls become Boy Scouts?

RELATED: What does Boy Scouts accepting transgender scouts mean to Austin kids?
def read_var_int(mv, offset):
    """Read a Bitcoin-style CompactSize (varint) from memoryview `mv`.

    Returns (value, new_offset). Values below 0xfd fit in a single byte;
    the prefixes 0xfd, 0xfe and 0xff announce 2-, 4- and 8-byte
    little-endian unsigned integers respectively. The read_ule* helpers
    (defined elsewhere) are assumed to return (value, new_offset) as well.
    """
    b0 = mv[offset]
    if b0 < 0xfd:
        return b0, offset+1              # single-byte value
    elif b0 == 0xfd:
        return read_ule2(mv, offset+1)   # uint16, little-endian
    elif b0 == 0xfe:
        return read_ule4(mv, offset+1)   # uint32, little-endian
    elif b0 == 0xff:
        return read_ule8(mv, offset+1)   # uint64, little-endian
Differential responses to UVB irradiation in human keratinocytes and epidermoid carcinoma cells.

OBJECTIVE To examine UVB-induced responses in normal human keratinocytes (HaCaT) and epidermoid carcinoma cells (A431) at the cellular and molecular level, and to investigate the protective effect of salidroside.

METHODS Cells were irradiated with UVB at various doses; their viability was assessed by MTT assays, and the cell cycle was analysed by flow cytometry. The expression of NF-κB, BCL-2, and CDK6 after 50 J/m² UVB irradiation was detected by RT-PCR and western blotting.

RESULTS Our results confirmed a greater tolerance of A431 cells to UVB-induced damage in terms of cell viability and cell cycle arrest, which was accompanied by differential expression changes in NF-κB, BCL-2, and CDK6. UVB exposure resulted in HaCaT cells undergoing G1-S phase arrest. When treated with salidroside, HaCaT survival was significantly enhanced following exposure to UVB, suggesting great therapeutic potential for this compound.

CONCLUSION Taken together, our study suggests that A431 cells respond differently to UVB than normal HaCaT cells, and supports a role for NF-κB, CDK6, and BCL-2 in UVB-induced G1-S phase arrest. Furthermore, salidroside can effectively protect HaCaT cells from UVB irradiation.
Metallic Metal-Organic Frameworks Predicted by the Combination of Machine Learning Methods and Ab Initio Calculations. Emerging applications of metal-organic frameworks (MOFs) in electronic devices will benefit from the design and synthesis of intrinsically, highly electronically conductive MOFs. However, very few are known to exist. It is a challenging task to search for electronically conductive MOFs within the tens of thousands of reported MOF structures. Using a new strategy (i.e., transfer learning) of combining machine learning techniques, statistical multivoting, and ab initio calculations, we screened 2932 MOFs and identified 6 MOF crystal structures that are metallic at the level of semilocal DFT band theory: Mn24 (X = S, Se,Te), Mn, Hg4Co4, and CdC4. Five of these structures have been synthesized and reported in the literature, but their electrical characterization has not been reported. Our work demonstrates the potential power of machine learning in materials science to aid in down-selecting from large numbers of potential candidates and provides the information and guidance to accelerate the discovery of novel advanced materials.
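The statistical multivoting step described in the abstract can be pictured as an ensemble filter: several independently trained models score each candidate structure, and only candidates that reach a vote quorum proceed to the expensive ab initio (DFT) check. The sketch below is schematic, with hypothetical interfaces throughout; it is not the authors' code.

import java.util.ArrayList;
import java.util.List;

public class MultivotingScreen {

    interface Model {
        boolean predictsMetallic(String mofId); // hypothetical trained classifier
    }

    /** Keep candidates that at least `quorum` members of the ensemble vote for. */
    static List<String> downselect(List<String> candidates, List<Model> ensemble, int quorum) {
        List<String> shortlist = new ArrayList<>();
        for (String mof : candidates) {
            long votes = ensemble.stream().filter(m -> m.predictsMetallic(mof)).count();
            if (votes >= quorum) {
                shortlist.add(mof); // passes on to ab initio (DFT) verification
            }
        }
        return shortlist;
    }
}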
Joint Depth Estimation and Camera Shake Removal from Single Blurry Image

Camera shake during the exposure time often results in a spatially variant blur effect in the image. This non-uniform blur is caused not only by the camera motion but also by the depth variation of the scene: objects close to the camera sensor are likely to appear blurrier than those at a distance. However, recent non-uniform deblurring methods either do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single-image non-uniform deblurring is a challenging problem, the blurry result in fact contains depth information that can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only a single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve the problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth estimation and camera shake removal can be well addressed within the unified framework.
#include <JSFML/JNI/org_jsfml_graphics_CircleShape.h>

#include <JSFML/Intercom/NativeObject.hpp>
#include <JSFML/JNI/org_jsfml_internal_ExPtr.h>

#include <SFML/Graphics/CircleShape.hpp>
#include <SFML/Graphics/RenderTarget.hpp>

/*
 * Class:     org_jsfml_graphics_CircleShape
 * Method:    nativeCreate
 * Signature: ()J
 */
JNIEXPORT jlong JNICALL Java_org_jsfml_graphics_CircleShape_nativeCreate
    (JNIEnv *env, jobject obj) {

    return (jlong)new sf::CircleShape();
}

/*
 * Class:     org_jsfml_graphics_CircleShape
 * Method:    nativeSetExPtr
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_org_jsfml_graphics_CircleShape_nativeSetExPtr
    (JNIEnv *env, jobject obj) {

    JSFML::NativeObject::SetExPointer(env, obj, org_jsfml_internal_ExPtr_DRAWABLE,
        dynamic_cast<sf::Drawable*>(THIS(sf::CircleShape)));

    JSFML::NativeObject::SetExPointer(env, obj, org_jsfml_internal_ExPtr_TRANSFORMABLE,
        dynamic_cast<sf::Transformable*>(THIS(sf::CircleShape)));

    JSFML::NativeObject::SetExPointer(env, obj, org_jsfml_internal_ExPtr_SHAPE,
        dynamic_cast<sf::Shape*>(THIS(sf::CircleShape)));
}

/*
 * Class:     org_jsfml_graphics_CircleShape
 * Method:    nativeDelete
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_org_jsfml_graphics_CircleShape_nativeDelete
    (JNIEnv *env, jobject obj) {

    delete THIS(sf::CircleShape);
}

/*
 * Class:     org_jsfml_graphics_CircleShape
 * Method:    nativeSetRadius
 * Signature: (F)V
 */
JNIEXPORT void JNICALL Java_org_jsfml_graphics_CircleShape_nativeSetRadius
    (JNIEnv *env, jobject obj, jfloat radius) {

    THIS(sf::CircleShape)->setRadius(radius);
}

/*
 * Class:     org_jsfml_graphics_CircleShape
 * Method:    nativeSetPointCount
 * Signature: (I)V
 */
JNIEXPORT void JNICALL Java_org_jsfml_graphics_CircleShape_nativeSetPointCount
    (JNIEnv *env, jobject obj, jint count) {

    THIS(sf::CircleShape)->setPointCount(count);
}
/* CgToml
 *
 * Copyright © 2019 Collabora Ltd.
 * Copyright © 2021 <NAME>
 *
 * SPDX-License-Identifier: MIT
 */

#ifndef __CG_TOML_PRIVATE_H__
#define __CG_TOML_PRIVATE_H__

#include <glib-object.h>

G_BEGIN_DECLS

/* Forward declarations */
struct _CgTomlArray;
typedef struct _CgTomlArray CgTomlArray;

struct _CgTomlTable;
typedef struct _CgTomlTable CgTomlTable;

CgTomlArray * cg_toml_array_new (gconstpointer data);

CgTomlTable * cg_toml_table_new (gconstpointer data);

G_END_DECLS

#endif
Comparison of Simulation Applications Used for Energy Consumption in Green Building

The Green Building philosophy is based on providing a comfortable living environment for residents while keeping negative impacts on the environment low, and on applying resource-efficient methodologies throughout the life cycle of the building, including the efficient use of energy resources. To achieve these goals, software applications can be used to analyze and simulate energy consumption in Green Buildings. This paper compares the most common applications for energy consumption analysis and simulation in terms of their usage in Building Information Modeling (BIM).
Re: "Manitou Springs City Council OKs new agreement with Cog Railway": The only winners I see here are Phil Anschutz's heirs. Even then it's questionable whether they will keep the Cog in the future. Heirs have a habit of liquidating dear old gramp's estate. Another point of view is that the Oklahoma Publishing Co. could be spun off or sold, go out of business or file for bankruptcy. There goes The Broadmoor. Finally, into the future, AEG and all its subsidiaries could be a shell of its former self or a faint memory 50 years down the road. Dynasties eventually die. Let's not forget the people of Manitou. What will they get out of the 50-year deal? Not much other than two generations of Manitoids asking the question, "What were these people thinking?" Crickets. Anschutz can well afford to rebuild the Cog on his dime. The citizens of Manitou should not be forced to support a private business. The citizens should begin looking around for replacements of the existing elected officials and doing it soon! You sure as heck would not want this existing crew jamming the people of Manitou with another bad deal. There are too many developers with their hands out for taxpayer-supported ventures with no guarantee of a reasonable rate of return for the citizens. Just pie in the sky projects that are long on blue-sky projections and come up short on results. The citizens get stuck — again. It's time to say NO!
A Recommendation Model Based on Multi-Emotion Similarity in Social Networks

This paper proposes a recommendation model called RM-MES, which is based on multi-emotion similarity analysis in social networks. In the RM-MES scheme, the recommendation values of goods are primarily derived from the probabilities calculated from a similar existing store during the initiation stage of the recommendation system. First, the behaviors of users are divided into three aspects: browsing goods, buying goods only, and purchasing and evaluating goods. Then, the characteristics of goods and the emotional information of users are considered to determine similarities between users and between stores. We chose the most similar shop as the reference existing shop in the experiment. The recommendation probability matrices of both the existing store and the new store are then computed based on the similarities between users and the randomly selected target user. Finally, we used co-purchasing metadata from Amazon, together with the corresponding review comments, to verify the effectiveness and performance of the RM-MES scheme through comprehensive experiments. The final results showed that precision, recall, and F1-measure were increased by 19.07%, 20.73%, and 21.02%, respectively.

Introduction

In recent years, people have been doing more and more online shopping on sites such as Amazon, Taobao, and Jingdong. As a result, how to build an effective recommendation model has become a crucial research topic. Recommendation systems in social networks were first proposed by Resnick and Varian in 1997 and are used to provide personalized and intelligent information services to users of online shopping sites. A variety of recommendation systems have since been proposed. The main types include: content-based recommendation systems, which recommend goods a user is interested in based on the user's historical behaviors; collaborative filtering recommendation systems, which use the similarities of users' historical purchasing behaviors to better represent the recommendation process in social networks; and hybrid recommendation systems. Online shopping websites often take advantage of multiple methods to improve their recommendation ability. To the shop owner, it is important that a recommendation system can effectively introduce products with potential purchasing power to users. Although various methods have been proposed in previous studies, some significant difficulties still need to be overcome. For example, the "cold start" problem persists in recommendation schemes and is not easy to solve effectively. When a new shop opens, although it does not yet have purchase records, the relationships among goods can be established by referring to identical products in a selected existing shop, i.e., one that has been running for some time and whose historical purchase records are rich. For newly opened stores with nonexistent or sparse transaction records, it is difficult for a recommendation system to make effective recommendations. Therefore, the existing store that is most similar to the target store is chosen as the reference existing store in this paper. The similarity is calculated as the number of goods in the new store divided by that in the existing store.
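As a concrete illustration of this reference-store selection, here is a minimal Python sketch. The function name, the data layout, and the restriction to stores that actually share goods with the new store are assumptions for illustration; the paper only specifies the ratio just described and that the maximizing store is chosen:

def reference_store(new_goods, existing_stores):
    # new_goods: set of good IDs sold by the new store.
    # existing_stores: dict mapping store ID -> set of good IDs it sells.
    # R = num(S) / num(S_i): the size of the new store's catalogue divided
    # by the size of each existing store's catalogue. Candidates are
    # restricted to stores sharing at least one good with the new store
    # (an assumption the fragment leaves implicit).
    candidates = {sid: goods for sid, goods in existing_stores.items()
                  if new_goods & goods}
    return max(candidates, key=lambda sid: len(new_goods) / len(candidates[sid]))

stores = {"s1": {"i1", "i2", "i3", "i4"},
          "s2": {"i2", "i5", "i7", "i8", "i9"}}
print(reference_store({"i1", "i2", "i3"}, stores))  # -> "s1" (R = 3/4 beats 3/5)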
The reference existing store shares the maximum number of goods with the target store; therefore, we can recommend goods to target users according to the reference existing store. Emotion, as an indispensable psychological activity in social networks, always affects users' daily lives and their decision-making processes in shopping. This paper proposes a recommendation model (RM-MES) based on multi-emotion similarity in social networks. The problem studied here targets a particular store: how to effectively recommend goods to users so as to maximize the benefit to the shop owner, and how to improve the performance indexes of the recommendation scheme, such as precision, recall, and F1-measure. The main contributions of this scheme are as follows:

1. To solve the "cold start" problem, the RM-MES scheme uses the historical purchase records of an existing store to guide a recently opened store, forming a recommendation probability matrix of both the existing store and the new store for the target users.
2. To improve the accuracy of recommendation results, we propose a scheme based on multi-emotion analysis. The LDA topic model is used to subdivide user evaluations into six indexes. By considering user preferences for different aspects of goods, the similarity of users is analyzed in depth, and the resulting similarity measure shows its advantages.
3. Taking the different behaviors of users into consideration, users' behaviors are divided into three aspects: browsing goods, buying goods only, and purchasing and evaluating goods. According to these three categories, the browsing similarity, purchasing similarity, and emotional similarity among users can be identified.
4. We adopt the metadata of Amazon goods to verify the effectiveness and performance of the RM-MES scheme through comprehensive experiments. In addition, we analyze the impact of the transition probability influence factor through the experiments.

Related Works

Generally, recommendation systems use an algorithm based on user behavior data or item data to recommend items that users need. According to the differences among the recommendation algorithms, recommendation systems can be divided into the following categories:

Content-based recommendation systems. According to the items that users have liked, a content-based system recommends similar items to users. Such systems were developed from information retrieval and filtering, using the historical purchase records of target users or analyzing the characteristics of purchase information via statistics and machine learning. Chen et al. proposed a probabilistic approach based on TrueSkill for content-based recommendation systems. This system is useful for handling high uncertainty because it is based only on available goods and the ratings given by users. Disadvantages remain, however, such as limited content analysis and the new-user problem.

Collaborative filtering recommendation systems, which are among the most widely used methods in practice, with applications including Amazon, Taobao, and Digg. These schemes recommend products based on other users whose relationships are similar to those of the target user. Li et al. designed a trust-aware recommender system, which fully extracts the influence of trust information and contextual information on ratings to improve precision. Wang et al. designed a combination model composed of the recommender and the similarity measure. He et al.
proposed a novel model for the one-class collaborative filtering setting, which combines high-level visual features extracted from a deep convolutional neural network, users' past feedback, and evolving trends within the community to uncover the complex and evolving visual factors that people consider when evaluating products. Sun et al. proposed a time-sensitive collaborative filtering method to discover the latest preferences of customers and improve the accuracy of the recommendation system without complicating the training phase. As a typical recommendation approach, collaborative filtering systems still have problems that need to be addressed, such as data sparsity and the "cold start" problem.

Hybrid recommendation systems, which combine the advantages of the individual recommendation schemes. As no single recommendation scheme is perfect, hybrid recommendation systems are frequently used in practical applications. Not all combination methods are effective in practice, however; it is important to avoid or compensate for the weaknesses of the combined recommenders. For the combination step of hybrid recommendation systems, researchers have proposed seven ideas: weight, switch, mixed, feature combination, cascade, feature augmentation, and meta-level. Song et al. researched how to obtain better recommendations from traditional recommendation models on the basis of the relationship information in social networks between customers and shops, and proposed a matrix decomposition framework that integrates relationship information in social networks.

Emotion, as an indispensable psychological activity in social networks, always affects the daily lives and decision-making processes of users. Recommendation based on emotion has received much attention from researchers in the field of personalized recommendation. Guo-Qiang et al. built a collaborative filtering recommendation algorithm based on user emotion, combining user ratings and emotional comments through topic extraction and sentiment analysis of users' item reviews. Wijayanti et al. proposed an ensemble machine learning approach to detect sentiment polarity in user-generated text. Vagliano et al. proposed a recommendation method based on the semantic annotation of entities recorded in customer comments, with the entities considered as candidate recommendations. Musto et al. designed a multi-criteria collaborative filtering method, which uses aspect-based sentiment analysis of users' reviews to obtain sentiment scores as item ratings from users. Contratres et al. proposed a recommendation process that applies sentiment analysis to textual data extracted from Facebook and Twitter and presented the results of an experiment in which this algorithm was used to reduce the cold start issue. Seo et al. proposed a friendship-strength-based personalized recommender system, which weights users who are closely connected in the social circle in order to recommend the topics or activities in which users might be interested. Meng et al. provided a principled, mathematical way to exploit both positive and negative emotion in reviews and proposed a novel framework, MIRROR, exploiting emotion in reviews for recommender systems from both global and local perspectives. The schemes above have further improved the effectiveness of recommendation algorithms.
However, these recommendation methods still have some problems that need to be overcome:

1. Most recommendation schemes only consider the "cold start" problem of new users and do not consider the "cold start" problem of a recently opened store, which degrades the quality of the recommendation system.
2. Some recommendation schemes search for user preferences by extracting Facebook and Twitter data. However, it is difficult to extract a user's personal information due to issues such as permissions and technology; moreover, because information carrying user emotions is often incomplete and fuzzy, it is not easy to analyze emotions directly from Facebook and Twitter data.
3. Recommendation systems based on emotion usually consider only positive and negative emotions and ignore users' preferences along other dimensions.
4. When calculating the similarities of users' behaviors, most recommendation schemes do not take the correlations between items into consideration.
5. Most recommendation schemes fail to consider the trust factor of each piece of merchandise, which may cause the recommendation system to present distrusted items to target users.

The RM-MES Algorithm

In the RM-MES scheme, the set of users is defined as C = (c_1, c_2, ..., c_n), the set of shops as S = (s_1, s_2, ..., s_n), the set of goods in the new target shop as I = (I_1, I_2, ..., I_n), the set of goods in the reference existing shop as ref = (r_1, r_2, ..., r_n), and the set of reviews of user c_i as C_i = (c_i1, c_i2, ..., c_in). The relevant notations are shown in Table 1.

Table 1. Notations.
Sim_c: the set of users similar to the target user c_a
SMX_l: the purchase matrices of similar users
s_{i,j}: the correlation relationship between good I_i and good I_j
h: the proportion of the mean recommendation probability
n: the number of final purchases in the new shop
List_i: the number of recommended goods in each round
w: the length of the time window
y: the proportion of the influence factor of trust
z: the proportion of the influence factor of the latent factor
H^e_{c_a}: the recommendation matrix of the target user based on the similarity of users
S: the recommendation matrix of the target user based on the correlation relationships among goods
A^e_{c_a}: the recommendation probability matrix of the target user based on both of the above
trust_i: the value of trust for good I_i
rep_i: the reputation of good I_i
fre_i: the purchase frequency of good I_i
influence factor: the proportion of the recommendation probability for the new shop
recall: the probability that users purchase what they like in the recommendation list
F1-measure: the standard measurement of the classification accuracy of a recommendation algorithm
B_i: the number of goods that user i likes
N_i: the number of goods that user i has purchased from the recommendation list

First, the RM-MES scheme searches for the existing store that is most similar to the target store, i.e., the one sharing the largest number of goods with the target store. Candidate stores are scored with the equation

R = num(S) / num(S_i)

where num(S_i) represents the number of goods that existing shop S_i has and num(S) represents the number of goods that the new target store has. In the experiment, we chose the most similar store for reference, i.e., the store S_i maximizing R.

Emotional Analysis of User Reviews

The first step is data preprocessing: the reviews of users are first categorized on the basis of their attributes.
Latent Dirichlet allocation (LDA) has been employed as a technique to identify and annotate large text corpora with concepts, to track changes in topics over time, and to assess the similarity between documents. LDA topic models identify the core topics of a given text collection. By analyzing an LDA topic model of 5000 online reviews, we found that most consumers pay attention to six indicators: quality, price, appearance, configuration, service, and express delivery. We therefore classified user reviews into these six categories.

The second step is to extract emotional information from the reviews. This includes the extraction and discrimination of evaluation words, the extraction of evaluation objects, the extraction of combined evaluation units, the extraction of evaluation phrases, and the extraction of evaluation collocations. Then, based on an emotional lexicon, we analyze the emotional polarity of each user and obtain emotional values. To distinguish words with the same emotional tendency but different emotional polarity, we obtain the emotional scores of emotional words from the public emotional vocabulary of HowNet (http://www.keenage.com/download/sentiment.rar), which is popular due to its context-specific lexicons. There are three categories of words: emotional words, degree words, and negation words. Negation words determine whether the polarity of a comment is reversed, degree words assign different strengths to emotional words, and emotional words are divided into positive and negative words. If an emotional word is not in HowNet or has no emotional value, we find its synonyms on the basis of TongYiCi CiLin (proposed by Mei et al. in 1983) and compute the relevant emotional score. The text grading formula is

Score(i) = (-1)^t × k × Σ_j word(j)

where Score(i) represents the score of comment i, the exponent t of -1 depends on polarity reversal, k represents the degree of the degree words, and word(j) is the original score of each word. Finally, we compute the reputation value of the commodity as

Rep = Σ_i ω_i × Score_i

where Rep represents the reputation of the commodity, Score_i represents the emotional score of the evaluation under index i, and ω_i represents the weight of each index.
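To make the scoring concrete, here is a small Python sketch of Score(i) and Rep as just defined. The lexicon entries and index weights are toy assumptions, not values from HowNet or the paper:

import math

# Toy lexicons standing in for the HowNet resources described above.
EMOTION = {"good": 1.0, "bad": -1.0, "sturdy": 0.8}
DEGREE = {"very": 2.0, "slightly": 0.5}
NEGATION = {"not", "never"}

def comment_score(tokens):
    # Score(i) = (-1)^t * k * sum_j word(j): t counts negation words,
    # k multiplies the degree weights seen in the comment.
    t, k, total = 0, 1.0, 0.0
    for tok in tokens:
        if tok in NEGATION:
            t += 1
        elif tok in DEGREE:
            k *= DEGREE[tok]
        else:
            total += EMOTION.get(tok, 0.0)
    return ((-1) ** t) * k * total

def reputation(index_scores, weights):
    # Rep = sum_i w_i * Score_i over the six review indexes.
    return sum(w * s for w, s in zip(weights, index_scores))

print(comment_score("not very good".split()))  # -> -2.0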
The Calculation Method for Similar Users

According to the flow of information in social networks, the target user is randomly selected by the RM-MES scheme, and we then need to find users similar to the target user. When considering the similarity of user behavior, most schemes ignore the different kinds of behavior that users exhibit, so the precision of their recommendation results may not be satisfactory. Taking these differences into account, we divide users' behaviors into browsing goods, buying goods, and purchasing and evaluating goods, and we compute the similarity of browsing behavior, the similarity of purchasing behavior, and the similarity of emotional feelings between two users.

To obtain the similarity between user c_a and target user c_b, we first compute their browsing similarity from the overlap of Browse(c_a) and Browse(c_b), where Browse(c_a) represents the goods that user c_a has browsed and Browse(c_b) represents the goods that user c_b has browsed. The purchasing similarity between user c_a and target user c_b is obtained in the same way from their purchase sets.

The similarity of emotional feelings Simi_{a,b} between user c_a and target user c_b is a weighted correlation of their ratings, where rep_k and fre_k respectively represent the reputation and purchase frequency of good I_k; S_{a,b} represents the set of goods purchased by both user c_a and user c_b; r̄_a and r̄_b represent the mean ratings of user c_a and user c_b; r_{a,k} and r_{b,k} are the ratings of users c_a and c_b for good I_k; and σ_a and σ_b represent the standard deviations of the two users' ratings.

Finally, the similarity degree between users c_a and c_b is obtained as a weighted combination of the three components, where ω_i represents the weight of each similarity.
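Since the similarity equations themselves did not survive in this copy, the following Python sketch is a reconstruction under stated assumptions: plain set overlap (Jaccard) for the browsing and purchasing components, and a rep/fre-weighted, Pearson-style correlation for the emotional component, combined with weights that sum to 1. Names and weights are illustrative only:

import math

def overlap_sim(a, b):
    # Assumed Jaccard overlap for browsing or purchasing histories (sets of good IDs).
    return len(a & b) / len(a | b) if (a | b) else 0.0

def emotional_sim(ratings_a, ratings_b, rep, fre):
    # rep/fre-weighted Pearson-style correlation over goods both users rated.
    shared = set(ratings_a) & set(ratings_b)
    if not shared:
        return 0.0
    mean_a = sum(ratings_a.values()) / len(ratings_a)
    mean_b = sum(ratings_b.values()) / len(ratings_b)
    sd_a = math.sqrt(sum((r - mean_a) ** 2 for r in ratings_a.values()) / len(ratings_a))
    sd_b = math.sqrt(sum((r - mean_b) ** 2 for r in ratings_b.values()) / len(ratings_b))
    if sd_a == 0 or sd_b == 0:
        return 0.0
    num = sum(rep[k] * fre[k] * (ratings_a[k] - mean_a) * (ratings_b[k] - mean_b)
              for k in shared)
    den = sum(rep[k] * fre[k] for k in shared) * sd_a * sd_b
    return num / den if den else 0.0

def user_similarity(browse_a, browse_b, buy_a, buy_b,
                    ratings_a, ratings_b, rep, fre, w=(0.3, 0.3, 0.4)):
    # Weighted combination of the three components; the weights w_i sum to 1.
    return (w[0] * overlap_sim(browse_a, browse_b)
            + w[1] * overlap_sim(buy_a, buy_b)
            + w[2] * emotional_sim(ratings_a, ratings_b, rep, fre))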
The Recommendation Probability for Each Good According to the Historical Purchase Records

To recommend goods to users more effectively, we need to calculate the recommendation probability of each piece of merchandise on the basis of the users' purchase records. Suppose that the past states are V_0 = x_0, V_1 = x_1, ..., V_{t-1} = x_{t-1} and that the present state is V_t = x_t, where V_t = x_t means the state is x_t at time t and each x takes the value 0 or 1. Under the Markov assumption, the probability p of the state at the next time step depends only on the current state:

p = P(V_{t+1} = x_{t+1} | V_t = x_t, ..., V_0 = x_0) = P(V_{t+1} = x_{t+1} | V_t = x_t)

Therefore, we can obtain the probability recommendation matrices of both the reference existing store and the new store for the target user. For example, to obtain the recommendation probability matrix of the reference existing store for target user c_a, the transfer matrix H^e_{c_a} = (g^e_{i,j}) is built, where H^e_{c_a} represents the recommendation probability matrix of each piece of merchandise for the target user and e represents the purchase records of similar users and the target user in the reference store. The entry g^e_{i,j} represents the probability that the target user will purchase good I_j at the next time instant t + 1, given the historical purchase records e and that the target user has purchased good I_i at the current time t. It is calculated as

g^e_{i,j} = num(c(t → t + 1)) / num(c(t))

where B^{c_a}_{t+1} represents the set of goods that user c_a will purchase at time t + 1, B^{c_a}_t indicates the set of goods that user c_a has purchased at time t, num(c(t)) represents the number of users that have purchased good I_i at time t, and num(c(t → t + 1)) represents the number of users that have purchased good I_i at time t and good I_j at time t + 1.

At the beginning of the experiment, the newly opened store has almost no historical purchase records, so it is difficult to find similar users for the target user c_a. As the experiment proceeds, the purchase records of the new store gradually increase, and similar users for the target user can then be found with the same calculation as above.

For instance, assume a target user c_a whose similar users are first obtained from the purchase records in the reference existing store; after that, we can compute the recommendation probability matrix of target user c_a. Suppose the threshold n = 4, the time window w = 3, and 5 goods. Within the time window, the historical purchase records of the similar users in the reference existing store are recorded in matrices k_i, with one row per good and one column per time step: if a user purchased good I_3 at time 2, the entry in row 3, column 2 of k_i is 1, and otherwise 0. The records of the target user form the matrix k in the same way, and the recommendation probability matrix of c_a follows from the equation above.

To explain the result, consider g^e_{3,1}. The number of users among the similar users and the target user that purchased I_3 at time t is 5, so the denominator is 5. Among the users who purchased good I_3, the number who purchased good I_1 at time t + 1 is 1, so the numerator is 1. Therefore, the result for g^e_{3,1} is 0.2.
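The estimate g^e_{i,j} above can be computed directly from purchase histories. The sketch below assumes histories are stored as one list of time-indexed purchase sets per user (a layout the paper does not specify); with five user-steps buying I_3 at some time t and exactly one of them buying I_1 at t + 1, it reproduces the g^e_{3,1} = 0.2 of the worked example:

from collections import defaultdict

def transition_matrix(histories, goods):
    # histories: list of per-user purchase sequences, each a list of sets of
    # good IDs bought at each time step. Estimates P(buy j at t+1 | bought i at t).
    bought_i = defaultdict(int)    # num(c(t)): user-steps that bought i at time t
    bought_ij = defaultdict(int)   # num(c(t -> t+1)): ... and also j at time t+1
    for seq in histories:
        for t in range(len(seq) - 1):
            for i in seq[t]:
                bought_i[i] += 1
                for j in seq[t + 1]:
                    bought_ij[(i, j)] += 1
    return {(i, j): bought_ij[(i, j)] / bought_i[i]
            for i in goods for j in goods if bought_i[i]}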
The Calculation Method for the Correlation Relationships between Goods

It is also necessary to consider the correlation relationships among goods in recommendation systems. According to the characteristics of goods and the categories they belong to, the relationships among goods are modelled on the basis of the information flow on the Internet: the larger the flow of information, the closer the correlation relationship between items. We define S = (s_{i,j}) as the correlation-relationship matrix of goods, where s_{i,j} represents the probability of a correlation relationship between good I_i and good I_j; by definition, s_{i,j} lies in the interval [0, 1].

Each s_{i,j} is calculated from B_{i,j}, defined as the number of users that have bought both good I_i and good I_j. If there is a relationship between I_i and I_j, s_{i,j} is set to h(B_{i,j}), where h is a logistic-style function that confines the result to the interval [0, 1]; otherwise, s_{i,j} is 0. The result is symmetric, meaning that the value of s_{i,j} equals that of s_{j,i}.

Suppose there is a correlation relationship between I_1 and I_2 and between I_3 and I_5. If B_{1,2} = 4 and B_{3,5} = 2, then s_{1,2} = s_{2,1} = 0.892, s_{3,5} = s_{5,3} = 0.889, and all other entries of the correlation matrix S are 0.
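The exact squashing function h used in the paper did not survive in this copy, so the sketch below assumes a standard logistic curve; under that assumption the toy values come out as about 0.982 and 0.881 rather than the paper's 0.892 and 0.889:

import math

def h(b):
    # Assumed squashing function: logistic curve mapping counts into (0, 1).
    return 1.0 / (1.0 + math.exp(-b))

def correlation_matrix(copurchases, related_pairs, n_goods):
    # copurchases[(i, j)] = B_{i,j}: number of users who bought both goods.
    # related_pairs: set of pairs (i, j) considered related; all others stay 0.
    s = [[0.0] * n_goods for _ in range(n_goods)]
    for (i, j) in related_pairs:
        v = h(copurchases.get((i, j), copurchases.get((j, i), 0)))
        s[i][j] = s[j][i] = v   # the matrix is symmetric
    return s

# 0-based indices: B_{1,2} = 4 and B_{3,5} = 2 from the example above.
s = correlation_matrix({(0, 1): 4, (2, 4): 2}, {(0, 1), (2, 4)}, 5)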
The Mean Recommendation Probability Matrix of Goods

Then, the combination recommendation probability matrix A^e_{c_a} for the reference existing store is obtained by combining H^e_{c_a} with the goods-correlation matrix S through the influence factor h. Its entries b^e_{i,j} give the recommendation probability of each good after the correlation relationships between goods are folded into H^e_{c_a}. On the basis of the matrices above, we then obtain the mean transition probability of each good in the reference existing store by averaging over the B^{c_a}_t goods that the target user has bought at time t. Based on the worked example, the final recommendation probabilities of each good in the existing store follow. When a new store opens, although it has no purchase records of its own, the relationships between goods can be determined by referring to those of the reference existing store.

The Trust Factor of Goods in the RM-MES Scheme

In traditional recommendation schemes, there exists a dependency among users in social networks: if two users behave similarly, the trust level between them is clearly high. Therefore, the trust factor is added to the RM-MES scheme to improve the accuracy of the recommendation results. In the RM-MES scheme, the trust factor of a good is built from its reputation, sales rank, and purchase frequency. In the trust formula, trust_i represents the trust degree of good I_i, rep_i its reputation, fre_i its purchase frequency, and rank_i its sales rank; Fre is a constant chosen so that fre_i/Fre lies in the interval [0, 1], and α and β respectively represent the scale factor of the reputation and the influence factor of the sales rank of good I_i. Because the historical purchase records of the reference existing store are rich, the values of rep_i, rank_i, and fre_i are fixed there, whereas in the recently opened store the reputation, sales rank, and purchase frequency change from one time cycle to the next as the RM-MES scheme runs.

The Latent Factors of Users in the RM-MES Scheme

For the reference existing store, it is easy to determine the target user's transition probability matrix from the browsing histories and the trust factors of goods if the target user is not new. A recently opened store, however, has few browsing histories and trust degrees of goods, which is the "cold start" situation. We define L_a = (age, gender, location, browse) as the attribute set of latent factors. If the target user is new and therefore has no historical purchase records, it is not easy to recommend accurate goods directly; however, the set of latent similar users can be used to compute the latent goods that the target user may like. The set of similar users for the target user c_a is defined through Sim(Latent(c_a), Latent(c_b)), computed from the four attributes as

Sim(Latent(c_a), Latent(c_b)) = ω_1·Sim(Age(c_a), Age(c_b)) + ω_2·Sim(Gender(c_a), Gender(c_b)) + ω_3·Sim(Location(c_a), Location(c_b)) + ω_4·Sim(Browse(c_a), Browse(c_b))

where each ω represents the influence factor of the corresponding attribute similarity and ω_1 + ω_2 + ω_3 + ω_4 = 1.

The Establishment of Combination Calculation

Based on the methods shown above, the RM-MES scheme combines the mean recommendation probability matrix of goods for the target user, the trust degree of the selected goods, and the latent factor of the target user. In this combination, R^{I_j}_{c_a} represents the recommendation probability, in the reference existing store, of recommending good I_j to target user c_a; x and y represent, respectively, the weights of the mean probability matrix of recommended goods for user c_a and of the trust degree of good I_j; and z is the weight applied when the target user c_a is new. If the selected target user is not new, the latent factor is not needed, and therefore x + y = 1 and z = 0.

In the RM-MES scheme, we then calculate the final probability R^{I_j}_{f,c_a} of providing good I_j to user c_a by blending the probability above with the recently opened store's own probability through the influence factor of the new store's historical purchase records. Because the historical purchase records of a recently opened store are sparse, the new store's own contribution is almost 0 at first; accordingly, the influence factor starts at zero and grows as the recently opened store accumulates records. It can be calculated as the ratio of Σ_{i=1}^{n} fre_i, the sum of the purchase frequencies of goods I_1 to I_n in a time period, to the constant Fre defined above. As the recently opened store operates, its historical purchase records increase, so the influence factor becomes larger; when Σ_{i=1}^{n} fre_i reaches the threshold total purchase number n, the store can recommend goods to users on the basis of its own historical purchase records.

Experimental Settings

To evaluate the effectiveness and performance of the RM-MES scheme, the purchasing network metadata of Amazon products (http://jmcauley.ucsd.edu/data/amazon/) and user review information were used in our experiments. First, we closely compared the RM-MES scheme with the classic trust-based scheme under a fixed influence factor across different time periods. Then, we examined the effect of the influence factor x in the RM-MES scheme. In addition, we compared the trust degree of each selected good across different time periods. Finally, we compared and analyzed various detailed results of the experiment.

The dataset was obtained by querying the dataset on the Amazon website. We chose the metadata and reviews of the health and personal care category, which contains approximately 263,032 different goods. For each user, the following information could be obtained: the ID of the product bought, the review ID, the comment on the product, and the review time. For each piece of merchandise, the following information could be obtained: ID, sales rank, categories, description, and a list of similar goods. From the health and personal care catalogue, we chose several kinds of goods with higher purchase rankings. We used 3/4 of the selected purchase records as the training set and the rest as the test set.

To evaluate the effectiveness and performance of the RM-MES scheme, we compared precision, recall, and F1-measure, three standard measurements of the effectiveness of a recommendation scheme (the higher, the better). The precision is computed as

precision = (1/H) Σ_a N_a / List_a

where H represents the total number of users considered (the target user and the similar users), N_a indicates the number of goods that user c_a purchased from the recommendation list, and List_a indicates the number of goods in the recommendation list. The recall is computed as

recall = (1/H) Σ_a N_a / B_a

where B_a represents the number of goods that user c_a likes on the basis of the comments given by user c_a; recall thus relates the number of recommended goods that target user c_a likes to the total number of goods that user c_a likes. The bigger the value of recall, the better. Because the F1-measure combines these two indicators, it can comprehensively verify the effectiveness of the RM-MES scheme:

F1-measure = 2 × precision × recall / (precision + recall)

The higher the F1-measure of a recommendation scheme, the better its performance.
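As a quick reference, here is a Python sketch of the three measures as defined above. The per-user averaging over H follows the definitions; the dict-of-sets data layout is an assumption:

def precision_recall_f1(recommended, purchased, liked):
    # recommended/purchased/liked: dicts mapping user -> set of good IDs.
    # N_a = goods from the list the user bought; precision = N_a / |list|;
    # recall = N_a / |goods the user likes|; F1 combines the two.
    users = list(recommended)
    p = r = 0.0
    for u in users:
        n_a = len(recommended[u] & purchased[u])
        p += n_a / len(recommended[u])
        r += n_a / len(liked[u])
    p, r = p / len(users), r / len(users)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1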
Experimental Results

In this section, the performance of the RM-MES scheme is compared with that of the trust-based scheme from the literature, which recommends appropriate goods to users based on the trust factor of goods. We chose the health and personal care shop as the reference for the recently opened shop. The users selected in the experiments were not new, so the latent factor of users was not considered; in other words, x + y = 1 and z = 0.

As shown in Figure 1a, the precision of the trust-based scheme was lower than that of the RM-MES scheme on average. When a new store opens, historical purchase records are likely to be sparse, and therefore the precision is low (cold start). When time is 8, the precision of the newly opened store exceeds that of the existing store, which means the store can by then recommend goods to users from its own historical purchase records.

This is because the RM-MES scheme first refers to the correlation of goods between the existing store and the new store and then guides the new store in recommending goods to users. Thus, even though the newly opened store has few purchase records, the RM-MES scheme still performs better than the other methods, showing that it can mitigate the "cold start" problem. As the newly opened store runs, its own purchase records accumulate and it becomes more effective to rely on them; after a period of time, the precision of the recommendation model settles at a constant level, similar to that of schemes that use only their own purchase records.

The comparison of percentage improvements in precision when time < 6 is shown in Figure 1b. The improvement is very large at the beginning: a newly opened shop has few historical purchase records, so it is difficult to recommend goods to target users appropriately, whereas the proposed method combines the RM-MES scheme with the trust-based recommendation model to recommend goods to target users.

The recall of the two schemes over time is shown in Figure 2a, and Figure 2b shows the percentage improvements in recall for the RM-MES scheme. The RM-MES scheme is clearly more effective than the trust-based scheme at the beginning: when a new store opens, its historical purchase records start near zero (cold start), so it is difficult to recommend goods to target users appropriately, while the reference existing store has enough purchase records to do so. Therefore, the recall obtained from the combination of the existing store and the new store is higher than the recall obtained from the new store's own purchase records alone. The fluctuations visible in Figure 2a stem from uncertainties in online social networks, which can cause the recall of the recommendation to fluctuate.

The F1-measure results of the two schemes, which comprehensively evaluate the performance of the proposed recommendation scheme, are shown in Figure 3a, with the percentage improvements in Figure 3b. The proposed method performs better than the trust-based scheme at the beginning because both its precision and recall are higher while purchase records are still sparse. As purchase records accumulate in the newly opened store over time, the F1-measure of the recommendation model also settles at a constant level.

The precisions of the two schemes under different recommendation thresholds are compared in Figure 4a. The proposed method is more effective than the trust-based model across thresholds: the trust-based model relies only on its own purchase records, and the purchase matrices are likely to be empty at the beginning, making it difficult to recommend goods to target users appropriately. Precision is highest at a threshold of 0.5, decreases beyond that, and remains at zero at a threshold of 0.9. A comparison of the percentage improvements in precision under different thresholds is shown in Figure 4b; the RM-MES scheme clearly improves precision further.

The recalls of the RM-MES scheme and the trust-based scheme under different recommendation thresholds are shown in Figure 5a. The recall of the RM-MES scheme exceeds that of the trust-based model across thresholds because the RM-MES scheme combines the historical purchase records of the existing store with those of the target store. When the threshold rises above 0.4, the recall of the RM-MES scheme gradually decreases, and the recalls of both schemes remain at zero at a threshold of 0.9. The percentage improvements in recall under different thresholds are illustrated in Figure 5b.

To comprehensively evaluate the performance of the proposed scheme, the F1-measures of the two schemes under different recommendation thresholds are illustrated in Figure 6a. The F1-measure of the RM-MES scheme is greater than that of the trust-based scheme: because the RM-MES scheme combines the historical purchase records of the reference existing store and the newly opened store, it can recommend goods more accurately at the beginning, and it is also more stable than the trust-based scheme. When the recommendation threshold exceeds 0.8, the recall and precision of both schemes are zero, so the F1-measures of both schemes reach zero as well. The percentage improvements in F1-measure under different thresholds are illustrated in Figure 6b.

To further evaluate the performance of the RM-MES scheme, after completing the experiment on the health and personal care category, we ran an experiment with a newly opened baby products store. The baby products category contains about 71,317 different kinds of products. We selected the most purchased products on the basis of sales rank, used 3/4 of the selected historical purchase records as the training set, and used the rest as the test set. Table 2 summarizes the experimental results.

From Table 2, we can conclude that the RM-MES scheme is more effective than the other schemes, and that the best measurement results are obtained when the influence factor of transition probability x is 0.3. At the initiation stage, precision, recall, and F1-measure are improved by approximately 19.07%, 20.73%, and 21.02%, respectively, compared to the previous schemes.

Conclusions

In this paper, building on the trust-based recommendation model, we proposed a new recommendation model (RM-MES) based on multi-emotion similarity to improve the performance of the recommendation scheme and overcome the "cold start" problem. First, we divided users' behaviors into browsing goods, buying goods, and purchasing and evaluating goods. Then, the recommendation attributes of goods were considered to obtain similarities between users and shops, and the most similar store was selected as the reference existing store in our experiment. Next, the recommendation probability matrices of both the existing store and the new store were calculated according to the similarity between each user and the target user. Finally, we adopted the Amazon product co-purchasing network metadata and commentary information to evaluate the effectiveness and performance of the RM-MES scheme through comprehensive experiments. We obtained the best measurement results when the influence factor of transition probability x was 0.3; under this setting we compared the RM-MES scheme with the trust-based scheme in detail and analyzed the impact of the transition-probability influence factor through experiments. We can therefore conclude that the RM-MES scheme performs better than the other recommendation schemes.

For goods with high recommendation probability, the RM-MES scheme further raises the probability of goods that have already been recommended, while non-recommended goods suffer further reductions in their recommendation probabilities. This tendency can cause the recommendation system to lose the opportunity to recommend otherwise well-suited goods. In future studies, we will research how to also recommend goods with small probabilities to users, to bring higher profit to the system.
package jec;

import java.nio.ByteBuffer;

public class Galois {

    private static final int NONE = 10;
    private static final int TABLE = 11;
    private static final int SHIFT = 12;
    private static final int LOGS = 13;
    private static final int SPLITW8 = 14;

    private static int[] multType = { NONE,
            /* 1 */ TABLE, /* 2 */ TABLE, /* 3 */ TABLE, /* 4 */ TABLE,
            /* 5 */ TABLE, /* 6 */ TABLE, /* 7 */ TABLE, /* 8 */ TABLE,
            /* 9 */ TABLE, /* 10 */ LOGS, /* 11 */ LOGS, /* 12 */ LOGS,
            /* 13 */ LOGS, /* 14 */ LOGS, /* 15 */ LOGS, /* 16 */ LOGS,
            /* 17 */ LOGS, /* 18 */ LOGS, /* 19 */ LOGS, /* 20 */ LOGS,
            /* 21 */ LOGS, /* 22 */ LOGS, /* 23 */ SHIFT, /* 24 */ SHIFT,
            /* 25 */ SHIFT, /* 26 */ SHIFT, /* 27 */ SHIFT, /* 28 */ SHIFT,
            /* 29 */ SHIFT, /* 30 */ SHIFT, /* 31 */ SHIFT, /* 32 */ SPLITW8 };

    private static int[] primPoly = { 0,
            /* 1 */ 1, /* 2 */ 07, /* 3 */ 013, /* 4 */ 023,
            /* 5 */ 045, /* 6 */ 0103, /* 7 */ 0211, /* 8 */ 0435,
            /* 9 */ 01021, /* 10 */ 02011, /* 11 */ 04005, /* 12 */ 010123,
            /* 13 */ 020033, /* 14 */ 042103, /* 15 */ 0100003, /* 16 */ 0210013,
            /* 17 */ 0400011, /* 18 */ 01000201, /* 19 */ 02000047, /* 20 */ 04000011,
            /* 21 */ 010000005, /* 22 */ 020000003, /* 23 */ 040000041, /* 24 */ 0100000207,
            /* 25 */ 0200000011, /* 26 */ 0400000107, /* 27 */ 01000000047, /* 28 */ 02000000011,
            /* 29 */ 04000000005, /* 30 */ 010040000007, /* 31 */ 020000000011,
            /* 32 */ 00020000007 }; /* Really 40020000007, but we're omitting the high order bit */

    private static int[] nw = { 0, (1 << 1), (1 << 2), (1 << 3), (1 << 4),
            (1 << 5), (1 << 6), (1 << 7), (1 << 8), (1 << 9), (1 << 10),
            (1 << 11), (1 << 12), (1 << 13), (1 << 14), (1 << 15), (1 << 16),
            (1 << 17), (1 << 18), (1 << 19), (1 << 20), (1 << 21), (1 << 22),
            (1 << 23), (1 << 24), (1 << 25), (1 << 26), (1 << 27), (1 << 28),
            (1 << 29), (1 << 30), (1 << 31), -1 };

    private static int[] nwm1 = { 0, (1 << 1) - 1, (1 << 2) - 1, (1 << 3) - 1,
            (1 << 4) - 1, (1 << 5) - 1, (1 << 6) - 1, (1 << 7) - 1, (1 << 8) - 1,
            (1 << 9) - 1, (1 << 10) - 1, (1 << 11) - 1, (1 << 12) - 1, (1 << 13) - 1,
            (1 << 14) - 1, (1 << 15) - 1, (1 << 16) - 1, (1 << 17) - 1, (1 << 18) - 1,
            (1 << 19) - 1, (1 << 20) - 1, (1 << 21) - 1, (1 << 22) - 1, (1 << 23) - 1,
            (1 << 24) - 1, (1 << 25) - 1, (1 << 26) - 1, (1 << 27) - 1, (1 << 28) - 1,
            (1 << 29) - 1, (1 << 30) - 1, 0x7fffffff, 0xffffffff };

    private static int[][] logTables = new int[33][];
    private static int[][] multTables = new int[33][];
    private static int[][] divTables = new int[33][];
    private static int[][] ilogTables = new int[33][];
    private static int[] iLogTablesIndex = new int[33];

    /* Special case for w = 32 */
    private static int[][] splitW8 = new int[7][];

    /**
     * @param r1     Region 1
     * @param r2     Region 2
     * @param r3     Sum region (r3 = r1 ^ r2) -- can be r1 or r2
     * @param nbytes Number of bytes in region
     */
    public static void regionXor(byte[] r1, byte[] r2, byte[] r3, int nbytes) {
        for (int i = 0; i < nbytes; i++) {
            r3[i] = (byte) (r1[i] ^ r2[i]);
        }
    }

    /**
     * @param region Region to multiply
     * @param multby Number to multiply by
     * @param nbytes Number of bytes in region
     * @param r2     If r2 != null, products go here
     * @param add    If true, products are XORed into r2
     * @throws Exception if the multiplication tables cannot be built
     */
    public static void regionMultiplyW08(byte[] region, int multby, int nbytes, byte[] r2, boolean add) throws Exception {
        byte[] ur1 = region;
        byte[] ur2 = (r2 == null) ? ur1 : r2;
        int prod;
        int srow;
        if (multTables[8] == null) {
            if (createMultTables(8) < 0) {
                throw new Exception("galois_08_region_multiply -- couldn't make multiplication tables");
            }
        }
        srow = multby * nw[8];
        if (r2 == null || !add) {
            for (int i = 0; i < nbytes; i++) {
                prod = multTables[8][srow + (ur1[i] & 0xFF)];
                ur2[i] = (byte) prod;
            }
        } else {
            for (int i = 0; i < nbytes; i++) {
                prod = multTables[8][srow + (ur1[i] & 0xFF)];
                ur2[i] = (byte) (ur2[i] ^ prod);
            }
        }
    }

    /**
     * @param region Region to multiply (pairs of bytes, assumed big-endian)
     * @param multby Number to multiply by
     * @param nbytes Number of bytes in region
     * @param r2     If r2 != null, products go here
     * @param add    If true, products are XORed into r2
     * @throws Exception if the log tables cannot be built
     */
    public static void regionMultiplyW16(byte[] region, int multby, int nbytes, byte[] r2, boolean add) throws Exception {
        byte[] ur1 = region;
        byte[] ur2 = (r2 == null) ? ur1 : r2;
        int nwords = nbytes / 2;

        if (multby == 0) {
            if (!add) {
                for (int i = 0; i < nbytes; i++) ur2[i] = 0;
            }
            return;
        }
        if (logTables[16] == null) {
            try {
                createLogTables(16);
            } catch (Exception e) {
                throw new Exception("galois_16_region_multiply -- couldn't make log tables");
            }
        }
        int log1 = logTables[16][multby];
        // NOTE: the original indexed the log table with single (sign-extended) bytes;
        // each 16-bit word must be assembled from a byte pair. Big-endian layout is assumed.
        for (int i = 0; i < nwords; i++) {
            int word = ((ur1[2 * i] & 0xFF) << 8) | (ur1[2 * i + 1] & 0xFF);
            int prod;
            if (word == 0) {
                prod = 0;
            } else {
                prod = ilogTables[16][iLogTablesIndex[16] + logTables[16][word] + log1];
            }
            if (add && r2 != null) {
                prod ^= ((ur2[2 * i] & 0xFF) << 8) | (ur2[2 * i + 1] & 0xFF);
            }
            ur2[2 * i] = (byte) (prod >> 8);
            ur2[2 * i + 1] = (byte) prod;
        }
    }

    /**
     * @param region Region to multiply (groups of four bytes)
     * @param multby Number to multiply by
     * @param nbytes Number of bytes in region
     * @param r2     If r2 != null, products go here
     * @param add    If true, products are XORed into r2
     * @throws Exception if the split multiplication tables cannot be built
     */
    public static void regionMultiplyW32(byte[] region, int multby, int nbytes, byte[] r2, boolean add) throws Exception {
        int nwords = nbytes / 4;
        // NOTE: the original called asIntBuffer().array(), which throws
        // UnsupportedOperationException on a view buffer; copy the ints out instead.
        int[] ur1 = new int[nwords];
        ByteBuffer.wrap(region).asIntBuffer().get(ur1);
        int[] ur2;
        if (r2 == null) {
            ur2 = ur1;
        } else {
            ur2 = new int[nwords];
            ByteBuffer.wrap(r2).asIntBuffer().get(ur2);
        }
        if (splitW8[0] == null) {
            if (createSplitTablesW8() < 0) {
                throw new Exception("Galois.regionMultiplyW32 -- couldn't make split multiplication tables");
            }
        }
        int[] acache = new int[4];
        int i8 = 0;
        for (int i = 0; i < 4; i++) {
            acache[i] = ((multby >> i8) & 255) << 8;
            i8 += 8;
        }
        for (int k = 0; k < nwords; k++) {
            int accumulator = 0;
            for (int i = 0; i < 4; i++) {
                int a = acache[i];
                int j8 = 0;
                for (int j = 0; j < 4; j++) {
                    int b = (ur1[k] >> j8) & 255;
                    accumulator ^= splitW8[i + j][a | b];
                    j8 += 8;
                }
            }
            ur2[k] = add ? (ur2[k] ^ accumulator) : accumulator;
        }
        // Write the products back into the destination byte array
        // (the original reassigned the r2 parameter, which has no effect on the caller).
        byte[] dest = (r2 == null) ? region : r2;
        ByteBuffer.wrap(dest).asIntBuffer().put(ur2);
    }

    public static int singleDivide(int a, int b, int w) throws Exception {
        if (multType[w] == TABLE) {
            if (divTables[w] == null) {
                try {
                    createMultTables(w);
                } catch (Exception e) {
                    throw new Exception("ERROR -- cannot make multiplication tables for w=" + w);
                }
            }
            return divTables[w][(a << w) | b];
        } else if (multType[w] == LOGS) {
            if (b == 0) return -1;
            if (a == 0) return 0;
            if (logTables[w] == null) {
                if (createLogTables(w) < 0) {
                    throw new Exception("ERROR -- cannot make log tables for w=" + w);
                }
            }
            int sumJ = logTables[w][a] - logTables[w][b];
            return ilogTables[w][iLogTablesIndex[w] + sumJ];
        } else {
            if (b == 0) return -1;
            if (a == 0) return 0;
            int inv = inverse(b, w);
            return singleMultiply(a, inv, w);
        }
    }

    public static int inverse(int y, int w) throws Exception {
        if (y == 0) return -1;
        if (multType[w] == SHIFT || multType[w] == SPLITW8) return shiftInverse(y, w);
        return singleDivide(1, y, w);
    }

    public static int shiftInverse(int y, int w) throws Exception {
        int[] mat2 = new int[32];
        int[] inv2 = new int[32];

        for (int i = 0; i < w; i++) {
            mat2[i] = y;
            if ((y & nw[w - 1]) != 0) {
                y = y << 1;
                y = (y ^ primPoly[w]) & nwm1[w];
            } else {
                y = y << 1;
            }
        }
        invertBinaryMatrix(mat2, inv2, w);
        return inv2[0];
    }

    public static void invertBinaryMatrix(int[] mat, int[] inv, int rows) throws Exception {
        int cols = rows;
        int i, j, tmp;

        for (i = 0; i < rows; i++) inv[i] = (1 << i);

        /* First -- convert into upper triangular */
        for (i = 0; i < cols; i++) {
            /* Swap rows if we have a zero [i][i] element. If we can't swap,
             * then the matrix was not invertible. */
            if ((mat[i] & (1 << i)) == 0) {
                for (j = i + 1; j < rows && (mat[j] & (1 << i)) == 0; j++)
                    ;
                if (j == rows) {
                    throw new Exception("galois.invertBinaryMatrix: Matrix not invertible!!");
                }
                tmp = mat[i]; mat[i] = mat[j]; mat[j] = tmp;
                tmp = inv[i]; inv[i] = inv[j]; inv[j] = tmp;
            }

            /* Now for each j > i, add A_ji * Ai to Aj */
            for (j = i + 1; j != rows; j++) {
                if ((mat[j] & (1 << i)) != 0) {
                    mat[j] ^= mat[i];
                    inv[j] ^= inv[i];
                }
            }
        }
        /* Now the matrix is upper triangular. Start at the top and
         * back-substitute down. */
        for (i = rows - 1; i >= 0; i--) {
            for (j = 0; j < i; j++) {
                if ((mat[j] & (1 << i)) != 0) {
                    inv[j] ^= inv[i];
                }
            }
        }
    }

    public static int singleMultiply(int x, int y, int w) throws Exception {
        if (x == 0 || y == 0) return 0;

        if (multType[w] == TABLE) {
            if (multTables[w] == null) {
                if (createMultTables(w) < 0) {
                    throw new Exception("ERROR -- cannot make multiplication tables for w=" + w);
                }
            }
            return multTables[w][(x << w) | y];
        } else if (multType[w] == LOGS) {
            if (logTables[w] == null) {
                if (createLogTables(w) < 0) {
                    throw new Exception("ERROR -- cannot make log tables for w=" + w);
                }
            }
            int sumJ = logTables[w][x] + logTables[w][y];
            return ilogTables[w][iLogTablesIndex[w] + sumJ];
        } else if (multType[w] == SPLITW8) {
            if (splitW8[0] == null) {
                if (createSplitTablesW8() < 0) {
                    throw new Exception("ERROR -- cannot make split_w8_tables for w=" + w);
                }
            }
            return splitMultiplyW08(x, y);
        } else if (multType[w] == SHIFT) {
            return shiftMultiply(x, y, w);
        }
        throw new Exception("Galois_single_multiply - no implementation for w=" + w);
    }

    public static int createLogTables(int w) throws Exception {
        if (w > 30) return -1;
        if (logTables[w] != null) return 0;

        logTables[w] = new int[nw[w]];
        ilogTables[w] = new int[nw[w] * 3];
        for (int i = 0; i < nw[w]; i++) {
            logTables[w][i] = nwm1[w];
            ilogTables[w][i] = 0;
        }

        // Walk the powers of the primitive element, filling the log and antilog tables.
        int b = 1;
        for (int i = 0; i < nwm1[w]; i++) {
            if (logTables[w][b] != nwm1[w]) {
                throw new Exception("Galois.createLogTables Error: i=" + i + ", b=" + b
                        + ", B->J[b]=" + logTables[w][b] + ", J->B[i]=" + ilogTables[w][i]
                        + " (0" + ((b << 1) ^ primPoly[w]) + ")");
            }
            logTables[w][b] = i;
            ilogTables[w][i] = b;
            b = b << 1;
            if ((b & nw[w]) != 0) b = (b ^ primPoly[w]) & nwm1[w];
        }

        // Triplicate the antilog table so sums and differences of logs index directly.
        for (int i = 0; i < nwm1[w]; i++) {
            ilogTables[w][i + nwm1[w]] = ilogTables[w][i];
            ilogTables[w][i + nwm1[w] * 2] = ilogTables[w][i];
        }
        iLogTablesIndex[w] = nwm1[w];
        return 0;
    }

    public static int shiftMultiply(int x, int y, int w) {
        int[] scratch = new int[33];
        int prod = 0;

        // scratch[i] = y * 2^i reduced by the primitive polynomial.
        for (int i = 0; i < w; i++) {
            scratch[i] = y;
            if ((y & (1 << (w - 1))) != 0) {
                y = y << 1;
                y = (y ^ primPoly[w]) & nwm1[w];
            } else {
                y = y << 1;
            }
        }

        // XOR together scratch[i] for every set bit i of x.
        for (int i = 0; i < w; i++) {
            if (((1 << i) & x) != 0) {
                prod ^= scratch[i];
            }
        }
        return prod;
    }

    public static int splitMultiplyW08(int x, int y) {
        int a, b, accumulator, i8, j8;
        accumulator = 0;
        i8 = 0;
        for (int i = 0; i < 4; i++) {
            a = (((x >> i8) & 255) << 8);
            j8 = 0;
            for (int j = 0; j < 4; j++) {
                b = ((y >> j8) & 255);
                accumulator ^= splitW8[i + j][a | b];
                j8 += 8;
            }
            i8 += 8;
        }
        return accumulator;
    }

    public static int createMultTables(int w) throws Exception {
        if (w >= 14) return -1;
        if (multTables[w] != null) return 0;

        multTables[w] = new int[nw[w] * nw[w]];
        divTables[w] = new int[nw[w] * nw[w]];
        if (logTables[w] == null) {
            try {
                createLogTables(w);
            } catch (Exception e) {
                multTables[w] = null;
                divTables[w] = null;
                throw e;
            }
        }

        /* Set mult/div tables for x = 0 */
        int j = 0;
        multTables[w][j] = 0; /* y = 0 */
        divTables[w][j] = -1;
        j++;
        for (int y = 1; y < nw[w]; y++) { /* y > 0 */
            multTables[w][j] = 0;
            divTables[w][j] = 0;
            j++;
        }

        for (int x = 1; x < nw[w]; x++) { /* x > 0 */
            multTables[w][j] = 0; /* y = 0 */
            divTables[w][j] = -1;
            j++;
            for (int y = 1; y < nw[w]; y++) { /* y > 0 */
                int index1 = logTables[w][x] + logTables[w][y];
                multTables[w][j] = ilogTables[w][iLogTablesIndex[w] + index1];
                int index2 = logTables[w][x] - logTables[w][y];
                divTables[w][j] = ilogTables[w][iLogTablesIndex[w] + index2];
                j++;
            }
        }
        return 0;
    }

    public static int createSplitTablesW8() {
        if (splitW8[0] != null) return 0;

        try {
            createMultTables(8);
        } catch (Exception e) {
            return -1;
        }

        for (int i = 0; i < 7; i++) {
            splitW8[i] = new int[(1 << 16)];
        }

        int zElt, yElt, index, ishift, jshift;
        for (int i = 0; i < 4; i += 3) {
            ishift = i * 8;
            for (int j = ((i == 0) ? 0 : 1); j < 4; j++) {
                jshift = j * 8;
                index = 0;
                for (int z = 0; z < 256; z++) {
                    zElt = (z << ishift);
                    for (int y = 0; y < 256; y++) {
                        yElt = (y << jshift);
                        splitW8[i + j][index] = shiftMultiply(zElt, yElt, 32);
                        index++;
                    }
                }
            }
        }
        return 0;
    }
}
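A minimal smoke test for the class above, assuming it compiles as shown; the demo class name is hypothetical and not part of the original source, and it exercises only the public API declared above.

package jec;

// Hypothetical demo class; exercises the public Galois API.
public class GaloisDemo {
    public static void main(String[] args) throws Exception {
        int w = 8;
        int a = 0x57;
        int b = 0x83;

        int product = Galois.singleMultiply(a, b, w);    // product in GF(2^8)
        int quotient = Galois.singleDivide(product, b, w);
        // quotient should equal a: division undoes multiplication in GF(2^w)
        System.out.printf("a=%#x b=%#x a*b=%#x (a*b)/b=%#x%n", a, b, product, quotient);

        byte[] r1 = { 1, 2, 3, 4 };
        byte[] r2 = { 5, 6, 7, 8 };
        byte[] sum = new byte[4];
        Galois.regionXor(r1, r2, sum, 4);                // byte-wise XOR of two regions
    }
}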
// App is where all routes and middleware for buffalo
// should be defined. This is the nerve center of your
// application.
func App() (*buffalo.App, error) {
	if app == nil {
		store, err := GetStorage()
		if err != nil {
			err = fmt.Errorf("error getting storage configuration (%s)", err)
			return nil, err
		}
		mf := module.NewFilter()
		if err := store.Connect(); err != nil {
			err = fmt.Errorf("error connecting to storage (%s)", err)
			return nil, err
		}
		worker, err := getWorker(store, mf)
		if err != nil {
			return nil, err
		}
		lggr := log.New(env.CloudRuntime(), env.LogLevel())

		app = buffalo.New(buffalo.Options{
			Env: ENV,
			PreWares: []buffalo.PreWare{
				cors.Default().Handler,
			},
			SessionName: "_athens_session",
			Worker:      worker,
			Logger:      log.Buffalo(),
		})
		app.Use(ssl.ForceSSL(secure.Options{
			SSLRedirect:     ENV == "production",
			SSLProxyHeaders: map[string]string{"X-Forwarded-Proto": "https"},
		}))

		if ENV == "development" {
			app.Use(middleware.ParameterLogger)
		}
		initializeTracing(app)
		initializeAuth(app)

		if env.EnableCSRFProtection() {
			csrfMiddleware := csrf.New
			app.Use(csrfMiddleware)
		}

		if T, err = i18n.New(packr.NewBox("../locales"), "en-US"); err != nil {
			app.Stop(err)
		}
		app.Use(T.Middleware())

		if err := addProxyRoutes(app, store, mf, lggr); err != nil {
			err = fmt.Errorf("error adding proxy routes (%s)", err)
			return nil, err
		}

		app.ServeFiles("/", assetsBox)
	}
	return app, nil
}
NEW DELHI: The highest offer at the Indian Institute of Management (IIM) Indore this placement season stood at Rs 89.25 lakh per annum, an increase of 41% from last year. The highest domestic offer on campus also increased, by 23%, to Rs 40.5 lakh per annum. The average CTC for the top 50 offers was Rs 30.04 lakh per annum, and Rs 28.47 lakh per annum for the top 100. The median salary for the graduating batch was Rs 19.4 lakh per annum. IIM Indore achieved 100% placement, with offer letters in the hands of 607 students.

“We are delighted by the faith that the top companies of the country and the MNCs have reposed in our students. In the years to come, we will continue to strengthen our engagement with the industry and ensure that we continue to create socially conscious responsible leaders,” said Himanshu Rai, Director, IIM Indore.

More than 200 recruiters, including 57 first-time recruiters, visited for several roles across multiple sectors, for both the Post Graduate Programme and the 5-Year Integrated Programme in Management. Finance, Consulting and Sales & Marketing were the most preferred domains. Recruiters in the finance domain included Aditya Birla Capital, Axis Bank, Barclays, Bank of America Continuum, Credit Suisse, CRISIL, Deutsche Bank, DHFL Pramerica, Edelweiss and Fidelity Investments, offering roles to 26% of the batch, the institute shared. Consulting continued to be a much sought-after domain, with 27% of the batch opting for consulting and strategy roles with recruiters like Avalon Consulting, Bain Capability Centre, Boston Consulting Group, Cognizant Business Consulting, Deloitte USI and Ernst & Young lining up. McKinsey also visited the campus after skipping placements in the previous years.
import tensorflow as tf
from core import utils, yolov3
from core.dataset import dataset, Parser
from basicNet.mobilenetV2 import MobilenetV2
from config.config import *
from read_tfrecord import load_train_val_data

sess = tf.Session()

trainset, testset = load_train_val_data(TRAIN_TFRECORD, TEST_TFRECORD)
# NOTE: the original printed image_h twice; image_w is assumed to exist on the parser
# alongside image_h.
print("trainset image size: {}, {}".format(trainset.parser.image_h, trainset.parser.image_w))
print("testset image size: {}, {}".format(testset.parser.image_h, testset.parser.image_w))

is_training = tf.placeholder(tf.bool)
example = tf.cond(is_training, lambda: trainset.get_next(), lambda: testset.get_next())
images, *y_true = example

model = yolov3.yolov3(NUM_CLASSES, ANCHORS, basic_net=MobilenetV2)
with tf.variable_scope('yolov3'):
    model.set_anchor(images)
    pred_feature_map = model.forward(images, is_training=is_training)
    loss = model.compute_loss(pred_feature_map, y_true)
    y_pred = model.predict(pred_feature_map)

tf.summary.scalar("loss/coord_loss", loss[1])
tf.summary.scalar("loss/sizes_loss", loss[2])
tf.summary.scalar("loss/confs_loss", loss[3])
tf.summary.scalar("loss/class_loss", loss[4])

global_step = tf.Variable(0, trainable=False)
write_op = tf.summary.merge_all()
writer_train = tf.summary.FileWriter("./data/train", tf.get_default_graph())
writer_test = tf.summary.FileWriter("./data/test")

saver_to_restore = tf.train.Saver(var_list=tf.contrib.framework.get_variables_to_restore(include=["yolov3"]))
update_vars = tf.contrib.framework.get_variables_to_restore(include=["yolov3/yolo-v3"])

learning_rate = tf.train.exponential_decay(LR, global_step, decay_steps=DECAY_STEPS,
                                           decay_rate=DECAY_RATE, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)

# set dependencies for BN ops
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss[0], var_list=update_vars, global_step=global_step)

sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
# saver_to_restore.restore(sess, "./checkpoint/yolov3.ckpt-12000")
saver = tf.train.Saver(max_to_keep=2)

for step in range(STEPS):
    run_items = sess.run([train_op, write_op, y_pred, y_true] + loss + [global_step, learning_rate],
                         feed_dict={is_training: True})

    if (step + 1) % EVAL_INTERNAL == 0:
        train_rec_value, train_prec_value = utils.evaluate(run_items[2], run_items[3])
        writer_train.add_summary(run_items[1], global_step=step)
        writer_train.flush()  # Flushes the event file to disk

    if (step + 1) % SAVE_INTERNAL == 0:
        saver.save(sess, save_path="./checkpoint/yolov3.ckpt", global_step=step + 1)

    print("=> LR %7.4f \tGLOBAL STEP %10d \tSTEP %10d [TRAIN]:\tloss_xy:%7.4f \tloss_wh:%7.4f \tloss_conf:%7.4f \tloss_class:%7.4f"
          % (run_items[10], run_items[9], step + 1, run_items[5], run_items[6], run_items[7], run_items[8]))

    run_items = sess.run([write_op, y_pred, y_true] + loss, feed_dict={is_training: False})
    if (step + 1) % EVAL_INTERNAL == 0:
        test_rec_value, test_prec_value = utils.evaluate(run_items[1], run_items[2])
        print("\n=======================> evaluation result <================================\n")
        print("=> STEP %10d [TRAIN]:\trecall:%7.4f \tprecision:%7.4f" % (step + 1, train_rec_value, train_prec_value))
        print("=> STEP %10d [VALID]:\trecall:%7.4f \tprecision:%7.4f" % (step + 1, test_rec_value, test_prec_value))
        print("\n=======================> evaluation result <================================\n")
        writer_test.add_summary(run_items[0], global_step=step)
        writer_test.flush()  # Flushes the event file to disk
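For reference, with staircase=True the schedule above decays the learning rate in discrete steps rather than continuously. A plain-Python sketch of the same rule, with parameter names mirroring the config constants used above:

def staircase_lr(lr0, global_step, decay_steps, decay_rate):
    # Equivalent to tf.train.exponential_decay(..., staircase=True):
    # the exponent is the whole number of completed decay periods.
    return lr0 * decay_rate ** (global_step // decay_steps)

# e.g. with LR=1e-3, DECAY_STEPS=1000, DECAY_RATE=0.9:
# steps 0..999 give 1e-3, steps 1000..1999 give 9e-4, and so on.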
import json
import random

from asgiref.sync import async_to_sync
from django.db import transaction
from channels.layers import get_channel_layer

from connectquatro.models import Board
from connectquatro import tasks as cq_tasks
from lobby.models import Player, Game, CompletedGame, GameFeedMessage
from lobby import lib as lobby_lib


class ColumnIsFullError(Exception):
    pass

class ColumnOutOfRangeError(Exception):
    pass

class SerializedDataMismatchedError(Exception):
    pass


# sync database functions

def board_state_to_obj(board: Board) -> dict:
    return json.loads(board.board_state)

def board_obj_to_serialized_state(board: dict) -> str:
    return json.dumps(board)

def get_active_player_id_from_board(board: Board):
    board_state = board_state_to_obj(board)
    return board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT]

def get_next_player_turn(board: Board):
    game_data = board_state_to_obj(board)
    return game_data[Board.STATE_KEY_NEXT_PLAYER_TO_ACT]

def get_game_over_state(board: Board) -> tuple:
    if board.game.is_over:
        winning_player = board.game.completedgame.winners.first()
        return True, winning_player

    winner = get_winning_player(board)
    if winner:
        return True, winner

    game = board.game
    if game.players.count() == 1:
        return True, game.players.first()

    return False, None

def get_winning_player(board: Board) -> Player:
    board.refresh_from_db()
    game_data = board_state_to_obj(board)
    board_list = game_data[Board.STATE_KEY_BOARD_LIST]

    for player in board.game.archived_players.all():

        # Check for horizontal N in a rows
        for row_ix, row in enumerate(board_list):
            in_a_row = 0
            for col_ix, player_in_slot in enumerate(row):
                if player_in_slot == player.id:
                    in_a_row += 1
                else:
                    in_a_row = 0
                if in_a_row == board.max_to_win:
                    return player

        # Check for vertical N in a rows
        for col_ix in range(len(board_list[0])):
            column = [board_list[row_ix][col_ix] for row_ix in range(len(board_list))]
            in_a_row = 0
            for row_ix, player_in_slot in enumerate(column):
                if player_in_slot == player.id:
                    in_a_row += 1
                else:
                    in_a_row = 0
                if in_a_row == board.max_to_win:
                    return player

        # check for diagonal N in a row
        number_of_rows = len(board_list)
        for row_ix, row in enumerate(board_list):
            for col_ix, player_in_slot in enumerate(row):
                if player_in_slot != player.id:
                    continue

                # check for diagonal down right
                if (
                        col_ix <= (len(row) - board.max_to_win)  # we have space to the right
                        and row_ix <= (number_of_rows - board.max_to_win)):  # we have space below
                    in_a_row = 1
                    for offset in range(1, board.max_to_win):
                        next_chip = board_list[row_ix + offset][col_ix + offset]
                        if next_chip == player.id:
                            in_a_row += 1
                        else:
                            break  # a gap breaks the run (the original kept counting non-consecutive chips)
                    if in_a_row >= board.max_to_win:
                        return player

                # check for diagonal down left
                # NOTE: fixed bound; the original tested col_ix >= len(row) - max_to_win,
                # which is only correct when the board is exactly (2 * max_to_win - 1) wide.
                if (
                        col_ix >= board.max_to_win - 1  # we have space to the left
                        and row_ix <= (number_of_rows - board.max_to_win)):  # we have space below
                    in_a_row = 1
                    for offset in range(1, board.max_to_win):
                        next_chip = board_list[row_ix + offset][col_ix - offset]
                        if next_chip == player.id:
                            in_a_row += 1
                        else:
                            break  # a gap breaks the run
                    if in_a_row >= board.max_to_win:
                        return player

def cycle_player_turn(board: Board) -> tuple:
    board_state = board_state_to_obj(board)
    current_player_id = board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT]
    player_ids = list(board.game.players.order_by('turn_order').values_list('id', flat=True))
    current_player_position = player_ids.index(current_player_id)
    restart_order = current_player_position == len(player_ids) - 1
    if restart_order:
        new_player_to_act = player_ids[0]
    else:
        new_player_to_act = player_ids[current_player_position + 1]

    board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT] = new_player_to_act
    board.board_state = board_obj_to_serialized_state(board_state)
    board.save(update_fields=['board_state'])
    return board, new_player_to_act


@transaction.atomic
def drop_chip(board: Board, player: Player, column_ix: int):
    game_data = board_state_to_obj(board)
    board_list = game_data[Board.STATE_KEY_BOARD_LIST]

    try:
        target_column = [row[column_ix] for row in board_list]
    except IndexError:
        raise ColumnOutOfRangeError()

    if target_column[0]:
        raise ColumnIsFullError()

    next_row_ix = None
    for last_ix, chip in enumerate(target_column):
        if chip:
            next_row_ix = last_ix - 1
            break

    # This column is empty
    if next_row_ix is None:
        next_row_ix = len(target_column) - 1

    board_list[next_row_ix][column_ix] = player.id
    game_data[Board.STATE_KEY_BOARD_LIST] = board_list
    board.board_state = board_obj_to_serialized_state(game_data)
    board.save(update_fields=['board_state'])
    return board


@transaction.atomic
def start_game(game):
    # Set game flags.
    game.is_started = True
    game.save(update_fields=['is_started'])
    game.archived_players.set(game.players.all())

    # Set player turn order and color.
    colors = [c[0] for c in Player.COLOR_CHOICES]
    colors.sort(key=lambda c: random.random())
    player_ids = game.players.values_list("id", flat=True)
    random_order_player_ids = sorted(player_ids, key=lambda v: random.random())
    for ix, player_id in enumerate(random_order_player_ids):
        turn_order = ix + 1
        player_color = colors.pop(0)
        Player.objects.filter(id=player_id).update(
            turn_order=turn_order,
            color=player_color,
            lobby_status=Player.LOBBY_STATUS_JOINED)

    # Set board state
    board = Board.objects.get(game=game)
    board_state = {
        Board.STATE_KEY_NEXT_PLAYER_TO_ACT: random_order_player_ids[0],
        Board.STATE_KEY_BOARD_LIST: [
            [None for i in range(board.board_length_x)]
            for j in range(board.board_length_y)
        ],
    }
    board.board_state = board_obj_to_serialized_state(board_state)
    board.save(update_fields=['board_state'])

    # Fire off websocket events
    alert_game_lobby_game_started(game)  # TODO: clean code, move to diff abstraction
    lobby_lib.update_lobby_list_remove_game(game)


def get_game_state(board, requesting_player=None) -> tuple:
    game = board.game
    board_state = board_state_to_obj(board)
    board_list = board_state[Board.STATE_KEY_BOARD_LIST]
    data = {
        'board_list': board_list,
        'players': [],
        'winner': None,
        'game_over': False,
        'active_player': None,
        'next_player_slug': None,
    }

    game_over, winning_player = get_game_over_state(board)
    if game_over:
        data['game_over'] = True
        data['winner'] = {
            'handle': winning_player.handle,
            'slug': winning_player.slug,
        }

    if not winning_player:
        players = game.players.all()
        next_player_id_to_act = board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT]
        player_to_move = players.filter(id=next_player_id_to_act).first()
        data['next_player_slug'] = player_to_move.slug
        if requesting_player:
            data['active_player'] = player_to_move == requesting_player

    data['players'] = [p for p in board.game.players.values('slug', 'id', 'color')]

    game_over = winning_player is not None
    return data, game_over  # NOTE: returns a (data, game_over) tuple
@transaction.atomic
def remove_player_from_active_game(player):
    game = player.game
    if not game.is_started:
        raise TypeError("game is not started")
    if game.is_over:
        raise TypeError("game already over")

    player.game = None
    player.is_lobby_owner = False
    player.save(update_fields=['game', 'is_lobby_owner'])

    game_over_gfm = None
    gfm = GameFeedMessage.objects.create(
        game=game,
        message_type=GameFeedMessage.MESSAGE_TYPE_PLAYER_QUIT,
        message=f"{player.handle} quit")

    board = game.board
    board_state = board_state_to_obj(board)
    players_left = game.players.all()
    players_left_count = players_left.count()

    if players_left_count > 1:
        # Still players left. The game continues.
        current_player_turn_id = board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT]
        if current_player_turn_id == player.id:
            # Adjust active player turn. Active player just left.
            next_turn_player_id = players_left.order_by('turn_order').first().id
            board_state[Board.STATE_KEY_NEXT_PLAYER_TO_ACT] = next_turn_player_id
            board.board_state = board_obj_to_serialized_state(board_state)
            board.save(update_fields=['board_state'])
            game.tick_count = game.tick_count + 1
            game.save(update_fields=['tick_count'])
            cq_tasks.cycle_player_turn_if_inactive.delay(
                game.id, next_turn_player_id, game.tick_count)

    elif players_left_count == 1:
        # 1x player left. End the game
        game.is_over = True
        game.save()
        cg = CompletedGame.objects.create(game=game)
        last_player = game.players.first()
        cg.winners.set([last_player])
        game_over_gfm = GameFeedMessage.objects.create(
            game=game,
            message_type=GameFeedMessage.MESSAGE_TYPE_GAME_STATUS,
            message=f"{last_player.handle} wins")

    game_state, is_over = get_game_state(board)
    alert_game_players_to_new_move(game, game_state)
    push_new_game_feed_message(gfm)
    if game_over_gfm:
        push_new_game_feed_message(game_over_gfm)


@transaction.atomic
def remove_player_from_completed_game(player):
    game = player.game
    if not game.is_over:
        raise TypeError("game is not over")

    player.game = None
    player.is_lobby_owner = False
    player.save(update_fields=['game', 'is_lobby_owner'])


# async channel layer functions

@async_to_sync
async def alert_game_lobby_game_started(game):
    channel_layer = get_channel_layer()
    await channel_layer.group_send(
        game.channel_layer_name, {"type": "game.started"})


@async_to_sync
async def alert_game_players_to_new_move(game, game_state):
    channel_layer = get_channel_layer()
    await channel_layer.group_send(
        game.channel_layer_name,
        {
            "type": "game.move",
            "game_state": game_state,
        })


@async_to_sync
async def update_count_down_clock(game, player_slug, seconds_left):
    channel_layer = get_channel_layer()
    await channel_layer.group_send(
        game.channel_layer_name,
        {
            "type": "countdown.update",
            "player_slug": player_slug,
            "seconds": seconds_left,
        })


@async_to_sync
async def push_new_game_feed_message(game_feed_message: GameFeedMessage):
    channel_layer = get_channel_layer()
    await channel_layer.group_send(
        game_feed_message.game.channel_layer_name,
        {
            "type": "new.game.feed.message",
            "message": game_feed_message.message,
            "message_type": game_feed_message.message_type,
            "font_awesome_classes": game_feed_message.font_awesome_classes,
            "created_at": game_feed_message.created_at.isoformat(),
        })
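For orientation, the serialized board_state built in start_game and consumed by the helpers above has the following shape. The literal key strings live on Board (STATE_KEY_NEXT_PLAYER_TO_ACT and STATE_KEY_BOARD_LIST); the key names shown here are illustrative placeholders, and the player ids are hypothetical.

# Hypothetical 3-row, 4-column game between players 7 and 9.
example_board_state = {
    # Board.STATE_KEY_NEXT_PLAYER_TO_ACT -> id of the player whose turn it is
    "next_player_to_act": 7,
    # Board.STATE_KEY_BOARD_LIST -> rows listed top to bottom; None marks an empty
    # slot, otherwise the slot holds the id of the player whose chip occupies it
    "board_list": [
        [None, None, None, None],
        [None, 9,    None, None],
        [7,    9,    7,    None],
    ],
}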
/*
 * This header is generated by classdump-dyld 1.5
 * on Wednesday, October 27, 2021 at 3:16:52 PM Mountain Standard Time
 * Operating System: Version 13.5.1 (Build 17F80)
 * Image Source: /System/Library/Frameworks/PhotosUI.framework/PhotosUI
 * classdump-dyld is licensed under GPLv3, Copyright © 2013-2016 by <NAME>. Updated by <NAME>.
 */

@class NSArray, PUSessionInfo;

@protocol PUPhotosGridViewSupplementalToolbarDataSource <NSObject>

@property (getter=isAnyAssetSelected,nonatomic,readonly) BOOL anyAssetSelected;
@property (nonatomic,readonly) NSArray * selectedAssets;
@property (nonatomic,readonly) PUSessionInfo * sessionInfo;
@property (getter=isAnyAssetDownloading,nonatomic,readonly) BOOL anyAssetDownloading;

@optional
-(BOOL)isAnyAssetDownloading;

@required
-(PUSessionInfo *)sessionInfo;
-(NSArray *)selectedAssets;
-(BOOL)isAnyAssetSelected;

@end
    /** Add a commercial information frame to this tag. Multiple WCOM frames can be added to a single tag, but each
     *  must have a unique URL.
     *
     * @param oWCOMUrlLinkID3V2Frame the frame to be added
     * @throws ID3Exception if this tag already contains a WCOM frame with the same URL
     */
    public void addWCOMUrlLinkFrame(WCOMUrlLinkID3V2Frame oWCOMUrlLinkID3V2Frame) throws ID3Exception {
        if (oWCOMUrlLinkID3V2Frame == null) {
            throw new NullPointerException("Attempt to add null WCOM frame to tag.");
        }
        if (ID3Tag.usingStrict()
                && m_oWCOMUrlToFrameMap.containsKey(oWCOMUrlLinkID3V2Frame.getCommercialInformationUrl())) {
            throw new ID3Exception("Tag already contains WCOM frame with matching URL.");
        }
        m_oWCOMUrlToFrameMap.put(oWCOMUrlLinkID3V2Frame.getCommercialInformationUrl(), oWCOMUrlLinkID3V2Frame);
        oWCOMUrlLinkID3V2Frame.addID3Observer(this);
        oWCOMUrlLinkID3V2Frame.notifyID3Observers();
    }
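A small usage sketch for the method above. The enclosing tag class and the frame constructor are assumptions, since only the method itself is shown here; the method requires only that the frame exposes its commercial-information URL.

// Hypothetical usage; "tag" stands in for whatever ID3 v2 tag class declares
// addWCOMUrlLinkFrame, and the single-argument frame constructor is assumed.
WCOMUrlLinkID3V2Frame frame =
        new WCOMUrlLinkID3V2Frame("http://example.com/store");
try {
    tag.addWCOMUrlLinkFrame(frame);   // also registers the tag as an observer of the frame
} catch (ID3Exception e) {
    // in strict mode, thrown when a WCOM frame with the same URL already exists
}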
Two new Florida Department of Environmental Protection-approved test pilot programs are in the works to potentially remedy the condition of canal waters in Cape Coral. One of the programs began earlier this week in the Cabot Canal; the other is in the queue, waiting for the right partner to join forces with, according to city officials.

"We're excited about this," said Jeff Pearson, City of Cape Coral Utilities director, who is overseeing the two projects. "We are cautiously optimistic that this will help get rid of the algae."

On Tuesday, workers from Ecological Laboratories (ELI), an international biotechnology company in Cape Coral that specializes in the natural treatment of environmental issues associated with water, took samples from the dead-end Cabot Canal off of Everest Parkway to test toxicity levels in the water. Residents were notified and signs were posted to give homeowners a heads-up that the project was taking place. "Readings from preliminary testing Tuesday showed high levels of toxicity," said Matthew Richter, vice president and chief information officer with ELI.

A "water fence" was installed to section off a 620-foot-long, 80-foot-wide stretch of the canal to separate treated water from untreated water. Workers will test both and compare the results. Thursday was the first day ELI's Microbe-Lift product was applied to the sectioned-off area; the product is designed to reduce the waste organics, nutrients and pathogens that have caused the water to deteriorate in quality and color. "The product is one hundred percent safe and all organic," said Richter.

ELI will test its results in its own lab and in an outside lab, with additional testing by the City of Cape Coral. Treatment, which will be applied by Mettauer Environmental, a licensed local environmental treatment company in Southwest Florida, will be done every three days for the first five treatments, then every two weeks for two months, followed by once a month over the 180-day program period. "Our hope and goal is to show a positive change both physically and scientifically," Richter said.

ELI offered the trial program to the city at no cost after Pearson sat in on a presentation about the product and how it has benefitted other bodies of water and aquariums around the world. This is the first time ELI has been DEP-approved to use its methods on state waters.

Doug Dent, senior vice president of ELI, said the eutrophication of Florida waters over 50 years is the reason these green-water events occur. His plan to fix it? "We can convert nitrate compounds in water, called denitrification, that reduces nitrates and bubble off into the atmosphere," Dent said. Denitrification is a key capability of ELI's technologies and of its water management efforts for the restoration and control of green-water events. Besides an increase in water clarity, the process should also reduce the foul, nausea-inducing smell wafting off of the Nickelodeon-slime-colored water. "We want to restore the ecosystem through a natural, biological process," added Dent.

Essentially, they are treating the water with a group of microorganisms designed to enhance water quality. These microorganisms out-compete the harmful bacteria for food, starving them off, and work to restore a state of normalcy to the ecosystem. Dent emphasized that though this may provide somewhat of a solution, Floridians have to implement better practices. ELI is hoping this trial run will lead it to make a bigger difference for Southwest Florida waters.
"Ultimately, we want to get to the source, which is Lake O," said Richter. "The first step is this test trial phase." The second pilot project in the works is a bubble curtain to be placed at a canal opening to try and filter out any floating materials coming though the waters. "We would implement this to keep floating debris or algae clumps from coming down the river into the canal system," Pearson said. This curtain would potentially be placed at the mouth of the Mandolin Canal, the first canal opening north of the Cape Coral bridge, but nothing is set in stone, according to Pearson. "Everything for this project is still preliminary, we are investigating further," he said. "We still have some homework to do." These curtains have been used in the Florida Keys to some success. As for the "vacuuming" project that started a few weeks ago, it will continue to try and relieve some heavily effected areas, including Cape Coral, this week, said county spokesperson Betsy Clayton. "Lee County and the state Department of Environmental Protection continue to evaluate and assess the pilot program for blue-green algae cleanup and processing," she said. "Staff also have discussions daily with the contractor, AECOM. The plan is for the pilot project to continue while state funds are available. The pilot cleanup project provides temporary relief for some residents, but the real solution lies in the billions of dollars being spent on statewide water-quality projects."
package com.challenge.repository;

import com.challenge.entity.LogEvent;
import com.challenge.entity.QLogEvent;
import com.challenge.service.dto.LevelErrorEnum;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimpleExpression;
import com.querydsl.core.types.dsl.StringExpression;
import com.querydsl.core.types.dsl.StringPath;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.querydsl.QuerydslPredicateExecutor;
import org.springframework.data.querydsl.binding.QuerydslBinderCustomizer;
import org.springframework.data.querydsl.binding.QuerydslBindings;
import org.springframework.data.querydsl.binding.SingleValueBinding;
import org.springframework.stereotype.Repository;

import java.time.LocalDateTime;
import java.util.Iterator;
import java.util.Optional;

@Repository
public interface LogEventRepository extends BaseRepository<LogEvent, Long>,
        QuerydslPredicateExecutor<LogEvent>, QuerydslBinderCustomizer<QLogEvent> {

    Optional<LogEvent> findByLevelErrorEnumAndEventDescriptionIgnoreCaseAndStatus(
            LevelErrorEnum levelErrorEnum, String eventDescription, String status);

    Optional<Page<LogEvent>> findAllByStatusIgnoreCase(String status, Pageable pageable);

    Optional<LogEvent> findByIdAndStatusIgnoreCase(Long id, String status);

    Page<LogEvent> findAll(Predicate predicate, Pageable pageable);

    @SuppressWarnings("NullableProblems")
    @Override
    default void customize(QuerydslBindings bindings, QLogEvent logevent) {
        bindings.excluding(logevent.id);
        // Every string property matches with a case-insensitive "contains".
        bindings.bind(String.class)
                .first((SingleValueBinding<StringPath, String>) StringExpression::containsIgnoreCase);
        // One eventDate value means "on or after"; two values mean "between".
        bindings.bind(logevent.eventDate).all((path, value) -> {
            Iterator<? extends LocalDateTime> it = value.iterator();
            LocalDateTime from = it.next();
            if (value.size() >= 2) {
                LocalDateTime to = it.next();
                return Optional.of(path.between(from, to));
            } else {
                return Optional.of(path.goe(from));
            }
        });
        bindings.bind(logevent.eventCount).first(SimpleExpression::eq);
    }
}
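A sketch of how these bindings are typically consumed from a Spring MVC controller. The controller class and endpoint path are hypothetical, but @QuerydslPredicate is the standard Spring Data Querydsl web integration: two eventDate parameters trigger the between binding defined above, and a single one triggers goe.

// Hypothetical controller, e.g.
// GET /logs?status=active&eventDate=2020-01-01T00:00:00&eventDate=2020-02-01T00:00:00
import com.challenge.entity.LogEvent;
import com.challenge.repository.LogEventRepository;
import com.querydsl.core.types.Predicate;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.querydsl.binding.QuerydslPredicate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogEventController {

    private final LogEventRepository repository;

    public LogEventController(LogEventRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/logs")
    public Page<LogEvent> search(
            @QuerydslPredicate(root = LogEvent.class) Predicate predicate,
            Pageable pageable) {
        // The predicate is assembled from the request parameters via customize().
        return repository.findAll(predicate, pageable);
    }
}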
package se.l4.otter.operations;

import org.eclipse.collections.api.list.ImmutableList;
import org.eclipse.collections.api.list.ListIterable;

/**
 * Default implementation of {@link CompoundOperation}.
 *
 * @author <NAME>
 *
 * @param <Handler>
 */
public class DefaultCompoundOperation<Handler> implements CompoundOperation<Handler> {

    private final ImmutableList<Operation<Handler>> operations;

    public DefaultCompoundOperation(ImmutableList<Operation<Handler>> operations) {
        this.operations = operations;
    }

    @Override
    public void apply(Handler target) {
        for (Operation<Handler> op : operations) {
            op.apply(target);
        }
    }

    @Override
    public ListIterable<Operation<Handler>> getOperations() {
        return operations;
    }

    @Override
    public Operation<Handler> invert() {
        // Inversion is not implemented for compound operations.
        return null;
    }

    @Override
    public String toString() {
        return getClass().getSimpleName() + operations;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((operations == null) ? 0 : operations.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        DefaultCompoundOperation<?> other = (DefaultCompoundOperation<?>) obj;
        if (operations == null) {
            if (other.operations != null) return false;
        } else if (!operations.equals(other.operations)) return false;
        return true;
    }
}
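A minimal usage sketch, assuming Operation<Handler> declares only the apply and invert methods used above; the Printer handler type and the demo class are hypothetical, while Lists.immutable.of is the standard Eclipse Collections factory.

import org.eclipse.collections.api.list.ImmutableList;
import org.eclipse.collections.impl.factory.Lists;
import se.l4.otter.operations.CompoundOperation;
import se.l4.otter.operations.DefaultCompoundOperation;
import se.l4.otter.operations.Operation;

public class CompoundDemo {
    // Hypothetical handler type for the demo.
    interface Printer {
        void print(String s);
    }

    // Assumes Operation<Handler> declares apply(Handler) and invert().
    static Operation<Printer> printing(final String s) {
        return new Operation<Printer>() {
            @Override
            public void apply(Printer p) {
                p.print(s);
            }

            @Override
            public Operation<Printer> invert() {
                return this; // no-op inversion, for illustration only
            }
        };
    }

    public static void main(String[] args) {
        ImmutableList<Operation<Printer>> ops =
                Lists.immutable.of(printing("hello"), printing("world"));
        CompoundOperation<Printer> compound = new DefaultCompoundOperation<>(ops);
        compound.apply(System.out::println); // applies each contained operation in order
    }
}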
Three deadly crashes in three months. Three fatal crashes of Amtrak trains in as many months are raising serious concerns about rail safety nationwide. Sunday's deadly crash of a Miami-bound Amtrak train into the back of a CSX freight train in South Carolina is the second fatal crash in a week. A chartered train carrying Republican members of Congress to a retreat collided with a garbage truck Wednesday in rural Virginia, killing the truck driver. Investigators are still examining what caused a Dec. 18 derailment on an overpass between Seattle and Portland, Ore., that killed three.

Amtrak CEO Richard Anderson acknowledged the safety concerns Sunday and said he hoped to instill a culture similar to that of airlines: "Amtrak is fully committed and values safety as its highest priority."

Authorities say Sunday's crash happened when the southbound passenger train from New York somehow got switched from the main track onto a siding, where it rear-ended parked freight cars, killing the train's engineer and conductor. The Amtrak train was passing through an area owned and controlled by CSX at the time. The crash is the latest in a series of high-profile incidents involving passenger trains, including at least three people killed by high-speed Brightline trains in Florida since that service began testing last year and launched in January. In all three of those incidents, police said those struck did not heed the warning lights and crossing gates positioned at the intersections.

While all three of Amtrak's most recent crashes appear to have different causes, some critics are calling for changes to the organization's approach to safety. "The company needs to take bold action, possibly even pause operations, to show that they're taking these failures seriously," said Brian Fielkow, author of the book Leading People Safely, which argues that companies can ultimately save money by operating safely.

Amtrak remains popular with riders and with lawmakers from both rural and urban areas. More than 31 million riders last year reached more than 500 destinations in 46 states and three Canadian provinces, the company said. Amtrak's trains along the Northeast corridor are its busiest, but the company also serves dozens of small towns across the West, including transporting tens of thousands of Boy Scouts annually to the Philmont Scout Ranch in northern New Mexico. Still, Amtrak has been only slowly attracting more riders and is under the gun. President Trump has proposed slashing Amtrak's annual subsidy and forcing it to cut unprofitable long-haul trains in favor of services like the Acela in New York, Boston and Washington, D.C.

• Dec. 18: A train traveling 80 mph entered a 30-mph curve and derailed, sending cars plunging off a bridge onto Interstate 5 below, on the Cascades route between Seattle and Portland, Ore.

• April 3, 2016: A train going 99 mph near Philadelphia slammed into a backhoe on the track, killing two workmen and injuring 39 passengers. Investigators said a series of safety lapses caused the collision.

Railroad advocates point out that trains remain far safer than cars, which killed 37,000 people last year across the country. Jim Mathews, president and CEO of the non-profit Rail Passengers Association, pointed out that in the vast majority of train crashes, a vehicle or person on the tracks was at fault, even though Amtrak or another railroad gets blamed.
In the U.S., a person or vehicle gets hit by a train every three hours, accounting for 96% of rail fatalities, according to the Rail Passengers Association, which has been pushing Congress to boost safety funding. "It's easy, when these things happen, to lose perspective. But despite these incidents, it really does remain a very safe way to travel," Mathews said. "The facts are that Amtrak's trains don't crash a lot, and people don’t die a lot in those crashes." The National Transportation Safety Board was at Sunday's crash site and will investigate the cause of the wreck. Train safety expert Richard Beall said the cause is likely one of three things: a track problem, a fault with the train itself or a crew error. Most passenger trains, he said, are run by a single engineer in the locomotive working a shift that could be as long as 12 hours. Beall, a longtime engineer who retired last summer, said railroads have invested significantly in improving crossings and signals. They are also working to adopt technology mandated by Congress called Positive Train Control, which experts say would reduce crashes by tracking and controlling a train's location and speed. "They can't get it to work," Beall said. Beall said the risk of crew fatigue is very real, especially at the time of Sunday's collision, about 2:35 a.m. "I've run trains for 47 years. That’s a tough time of the morning," Beall said. "You're out there in the dark, looking out at two shiny rails in the headlight. You can get hypnotized by what's in front of you."