Outpatient Civil Commitment in Texas for Management and Treatment of Sexually Violent Predators: A Preliminary Report

In 1999, Texas established outpatient civil commitment for sexually violent predators discharged from prison with or without parole. These individuals suffer from a behavioral abnormality, have been convicted of two or more sexually violent crimes, and are deemed likely to reoffend. Civilly committed individuals are managed by a team composed of a case manager (supervision), a treatment provider, a public safety officer (global positioning satellite monitoring), and other professionals. Treatment consists of individual and group therapy using a standard workbook. Of 21 committed individuals, 7 are in treatment, 1 died, 10 are in custody after breaking conditions of commitment that constitute a felony, and 3 await release from prison. Outpatient civil commitment costs less than $20,000 per person per year, compared with more than $100,000 for inpatient commitment in other states. Texas has found outpatient civil commitment to be an effective and relatively low-cost way to protect the public and treat the offender.
# repo: jgpattis/Desres-sars-cov-2-apo-mpro
import pyemma.coordinates as coor
import numpy as np
import enspara.msm as msm
import matplotlib.pyplot as plt

sys = 'back'
n_clusters = 300

# Discrete trajectories produced by the clustering step
dtrajs = coor.load(f'cluster_data/{sys}_{n_clusters}_cluster_dtrajs.h5')

# Coerce to 1-D integer arrays, as expected by enspara
dt2 = [i.astype(np.int_) for i in dtrajs]
dt3 = [i.reshape((i.shape[0])) for i in dt2]

lags = [2, 4, 6, 8, 10, 12, 16, 24, 32, 48, 64, 80]

def norm_pseudo(C, prior_counts=1 / n_clusters, calculate_eq_probs=True):
    # Normalize the count matrix C with a small pseudocount prior
    return msm.builders.normalize(C, prior_counts=prior_counts,
                                  calculate_eq_probs=calculate_eq_probs)

# Implied timescales of the 8 slowest processes at each lag time
its1 = msm.timescales.implied_timescales(np.array(dt3), lags, norm_pseudo, n_times=8)

fig, ax = plt.subplots()
for i in range(8):
    ax.plot(lags, np.absolute(its1[:, i]), linewidth=2, marker='o')
# Processes below the dashed diagonal are faster than the lag time and cannot be resolved
ax.plot(lags, lags, linestyle='dashed', linewidth=2, color='k')
ax.set_yscale('log')
ax.set_xlabel('Lag time (ns)')
ax.set_ylabel('Implied timescales (ns)')
fig.savefig(f'implied_timescales_12/{sys}_implied_timescale_enspara_norm_pseudo_more3.pdf')
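Once the implied-timescale curves level off, the usual follow-up is to rebuild the Markov state model at the chosen lag. Below is a minimal sketch of that step, assuming enspara's msm.MSM fit interface and reusing the norm_pseudo builder from the script above; the lag of 24 is an illustrative assumption, not a value chosen by the original analysis.

# Hedged follow-up sketch: fit the final MSM at a lag read off the ITS plot.
# chosen_lag = 24 is an assumed example value, not from the original script.
chosen_lag = 24
model = msm.MSM(lag_time=chosen_lag, method=norm_pseudo)
model.fit(np.array(dt3))  # dt3: the discrete trajectories prepared above

print(model.tprobs_)    # transition probability matrix
print(model.eq_probs_)  # equilibrium populations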
In the week-long siege battle between Raya Sarkar, a law student in America, and the roughly 60 Indian academic dons she targeted on Facebook with unsubstantiated allegations of being sexual predators, the first cogent reply has come from Partha Chatterjee, a political scientist and author of considerable national and international repute. Earlier, Chatterjee's request, made to Sarkar through TheWire.in, for "the allegation against me (to) be made known to me so that I could make a specific response to it", appeared somewhat anaemic to some of his admirers, who would have liked him to take legal recourse. It emboldened those behind the "list", with Sarkar pronouncing magisterially, "the list will stay for students to be wary". But instead of calling up his lawyer, Chatterjee has, in a new response, deftly pushed the ball into Sarkar's court. "As far as I understand it, Raya Sarkar's post in response to my statement suggests that no further information will be made available on the allegation against me... It is justified to conclude that the alleged complaint against me has no substance". In an earlier article in DailyO, I narrated women students' problems in being identified as complainants against influential professors as long as those professors hold the key to the students' career development. The alleged victims cannot be named because their predators would then block their "road to Oxford". Chatterjee, in his polite rebuff, has reminded the so-called student activists of the two sides of the bargain: either spell out the charges, or admit you have no case. That is where Raya Sarkar's campaign differs from the #MeToo disclosures against Hollywood movie producer Harvey Weinstein and other film celebrities. Implicit in the hashtag is the "Me" word, with its concomitant accountability. Anonymous complaints often form the basis for investigation of alleged tax swindles and the like, but even those require some particulars of the offence, which Sarkar and her followers are refusing to part with, assuming of course that they are privy to them. More disconcerting is Sarkar's repeated assertion that opposition to her action originates from the "savarna" (non-Dalit Hindu) class, and her taunt at the older, left-wing feminists (Nivedita Menon, Kavita Krishnan, et al) who are apprehensive about her methods is that they would probably sing a different tune if the academics on her list were ideologically pro-BJP. In other words, she wants all to know that she is holding the fort for the Dalit feminist cause within the student community against an "exploitative Left establishment". In today's student politics, JNU and Jadavpur University have been successful, in varying degrees, in arresting the saffron tide. It is quite a curious happenstance that yet another list of academics and activists alleged to be sexual harassers has surfaced, under the name of Malati Kumari. This list, again, is similarly tilted against JNU, with about half the names drawn from its faculty and student unions. Malati Kumari's list includes the renowned sociologist Dipankar Gupta, listed not under JNU, with which he had a long association, but under Shiv Nadar University, allegedly for "verbal and emotional harassment" in 2013. This list is a shade more comprehensive than Sarkar's, as it generally mentions the nature of the offence under the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act 2013, and the year concerned.
Still, it will be left to the accused person to decide if he would like to respond, if at all, to the charges in the digital space, before the institution's Internal Complaints Committee (ICC), or in a court of law as a final recourse. However, Malati Kumari's exercise too has an obvious political subtext. "We", she writes, "who have compiled the list are Dalit-Bahujan and, like us, many of the survivors have come from small towns/villages/marginalised communities to these big university and urban spaces with a lot of hopes". Significantly, prominently placed in Malati Kumari's list is Professor Kancha Ilaiah, director of the Centre for Study of Social Exclusion and Inclusive Policy at Maulana Azad National University in Hyderabad. The offence is not stated; only the year, 2012, is mentioned. In the Dalit intellectual spectrum, Ilaiah is among the brightest and most uncompromising in his opposition to the politics of Hindutva. Assuming that his alleged victim never appears on the scene, his name being on the list alone can put a permanent question mark on his acceptability in the Dalit community. Many of those arguably slandered by Raya Sarkar and Malati Kumari — Partha Chatterjee, historian Dipesh Chakrabarty (on Sarkar's list), Dipankar Gupta — are towering figures in their respective fields of study. Ilaiah has brought a rich intellectual content to India's Dalit movement. They should certainly be held accountable if charges of harassment of their students or colleagues are made to stand before the ICC or any other forum. But if this is an onslaught by "saffron" forces to capture some "difficult" academic enclaves, those who feel they have been wrongfully smeared should approach the court without delay.
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { axe } from "jest-axe";
import { DefineStepFunction } from "jest-cucumber";

class Then {
  static iShouldSeeXOnThePage(then: DefineStepFunction): void {
    then(/^I should see "(.+)" on the page$/, async (content: string) => {
      const element = await browser!.findElement({ tagName: "body" });
      await expect(element.getText()).resolves.toContain(content);
    });
  }

  static thePageShouldBeAccessible(then: DefineStepFunction): void {
    then("the page should be accessible", async () => {
      const document = await browser!.getPageSource();
      await expect(axe(document)).resolves.toHaveNoViolations();
    });
  }

  static thePageTitleShouldBeX(then: DefineStepFunction): void {
    then(/^the page title should be "(.+)"$/, async (title: string) => {
      await expect(browser!.getTitle()).resolves.toEqual(title);
    });
  }
}

export default Then;
A man who survived New Zealand's mosque attacks told a crowd of about 20,000 that he forgave the gunman who killed his wife and 49 other people. Farid Ahmed was speaking at a national remembrance service held on Friday in Christchurch to commemorate those who died in the attacks two weeks ago. It was the third major memorial held in the city since the attacks and a more formal occasion, with dozens of dignitaries from other countries attending, including Australian Prime Minister Scott Morrison. The memorial featured musical guest Yusuf Islam, also known as Cat Stevens, who performed his song Peace Train. Thousands stood in silence in Christchurch as the names of the 50 people shot dead in two mosques were read, with speakers calling for the legacy of the tragedy to be a kinder, more tolerant New Zealand. Prime Minister Jacinda Ardern, who wore a Maori cloak during the service, said the world had to end the vicious cycle of "extremism". "Our challenge now is to make the very best of us a daily reality, because we are not immune to the viruses of hate, of fear. We never have been," said Ardern at the service in Hagley Park, near the Al Noor mosque where more than 40 of the victims were killed by a white supremacist during Friday prayers on March 15. "The answer to them lies in a simple concept that is not bound by domestic borders, that isn't based on ethnicity, power-base or even forms of governance. The answer lies in our humanity," she said. Security was tight around the service and New Zealand remains on high security alert. Police Commissioner Mike Bush said it was one of the largest security events ever conducted by the police in New Zealand. Ahmed, whose wife Husna was one of the 50 killed, told the crowd that as a man of faith he had forgiven his wife's killer because he did not want to have "a heart that is boiling like a volcano". "I want a heart that will be full of love and care and full of mercy and will forgive easily, because this heart doesn't want any more lives to be lost," he said to applause. He called for people to work together for peace and to change attitudes to see everyone as part of one family, using Christchurch's nickname of the Garden City to make his point. "I may be from one culture, you may come from another culture, I may have one faith, you may have one faith, but together we are a beautiful garden," Ahmed said. Kelly Smith, 52, from Auckland, New Zealand's largest city, said she found Ahmed's speech beautiful. "I loved what he said: we're all different flowers, but we all look pretty together and that's so true," she said. Mohamed Mohideen, president of the Islamic Council of Victoria in Australia, said Ardern's response to the attack helped provide comfort and thanked her for her support of the Muslim community. The massacre in Christchurch was carried out by a lone gunman who live-streamed the rampage on Facebook. Australian Brenton Tarrant, 28, has been charged with one count of murder and is likely to face more charges when he reappears in court next Friday. Morrison said he has been working closely with Ardern to look at issues such as gun laws and blocking hate-filled content on social media. "There are the laws we need now, to ensure that social media is not weaponised," Morrison told reporters after the service. The memorial was broadcast throughout New Zealand. Muslim volunteers, some of whom had travelled from Australia and Asia, handed out pamphlets with information about Islam as crowds left the park after the service.
package com.zw.api2.swaggerEntity;

import lombok.Data;
import lombok.EqualsAndHashCode;

/**
 * @Author gfw
 * @Date 2019/1/30 14:28
 * @Description Paged query parameters for the alarm-settings list
 * @version 1.0
 */
@Data
@EqualsAndHashCode(callSuper = false)
public class SwaggerAlarmSettingQuery {

    /** Page number */
    private Long page;

    /** Items per page */
    private Long limit;

    /** Query condition */
    private String simpleQueryParam;

    /** Organization id */
    private String groupId;

    /** Assigned group */
    private String assignmentId;

    /** Device type */
    private String deviceType;
}
package es.amplia.oda.datastreams.adc.datastreams;

import es.amplia.oda.core.commons.adc.*;
import es.amplia.oda.core.commons.interfaces.EventPublisher;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.internal.util.reflection.Whitebox;
import org.mockito.runners.MockitoJUnitRunner;

import static org.junit.Assert.*;
import static org.mockito.Matchers.*;
import static org.mockito.Mockito.verify;
import static org.powermock.api.mockito.PowerMockito.*;

@RunWith(MockitoJUnitRunner.class)
public class AdcDatastreamsEventTest {

    private static final String TEST_DATASTREAM = "testDatastream";
    private static final int TEST_INDEX = 1;
    private static final float TEST_VALUE = 99.99f;
    private static final float TEST_MIN = 0.0f;
    private static final float TEST_MAX = 100.f;
    private static final String ADC_DEVICE_EXCEPTION_SHOULD_BE_CAUGHT = "ADC Device Exception should be caught";
    private static final String CHANNEL_FIELD_NAME = "channel";

    @Mock
    private AdcService mockedService;
    @Mock
    private EventPublisher mockedEventPublisher;

    private AdcDatastreamsEvent testEvent;

    @Mock
    private AdcChannel mockedChannel;
    @Mock
    private AdcEvent mockedEvent;
    @Captor
    private ArgumentCaptor<AdcChannelListener> listenerCaptor;

    @Before
    public void prepareForTest() {
        testEvent = new AdcDatastreamsEvent(TEST_DATASTREAM, TEST_INDEX, mockedService,
                mockedEventPublisher, TEST_MIN, TEST_MAX);
    }

    @Test
    public void testRegisterToEventSource() {
        when(mockedService.getChannelByIndex(TEST_INDEX)).thenReturn(mockedChannel);

        testEvent.registerToEventSource();

        verify(mockedService).getChannelByIndex(eq(TEST_INDEX));
        verify(mockedChannel).addAdcPinListener(listenerCaptor.capture());
        AdcChannelListener capturedListener = listenerCaptor.getValue();
        when(mockedEvent.getScaledValue()).thenReturn(TEST_VALUE);
        capturedListener.channelValueChanged(mockedEvent);
        verify(mockedEventPublisher).publishEvents(eq(""), eq(new String[0]), any());
    }

    @Test
    public void testRegisterToEventSourceAdcDeviceExceptionIsCaught() {
        when(mockedService.getChannelByIndex(TEST_INDEX)).thenReturn(mockedChannel);
        doThrow(new AdcDeviceException("")).when(mockedChannel).addAdcPinListener(any());

        testEvent.registerToEventSource();

        assertTrue(ADC_DEVICE_EXCEPTION_SHOULD_BE_CAUGHT, true);
    }

    @Test
    public void testUnregisterFromEventSource() {
        Whitebox.setInternalState(testEvent, CHANNEL_FIELD_NAME, mockedChannel);

        testEvent.unregisterFromEventSource();

        verify(mockedChannel).removeAllAdcPinListener();
    }

    @Test
    public void testUnregisterFromEventSourceWithAdcExceptionIsCaught() {
        Whitebox.setInternalState(testEvent, CHANNEL_FIELD_NAME, mockedChannel);
        doThrow(new AdcDeviceException("")).when(mockedChannel).removeAllAdcPinListener();

        testEvent.unregisterFromEventSource();

        assertTrue(ADC_DEVICE_EXCEPTION_SHOULD_BE_CAUGHT, true);
    }

    @Test
    public void testUnregisterFromEventSourceWithException() {
        Whitebox.setInternalState(testEvent, CHANNEL_FIELD_NAME, mockedChannel);
        doThrow(new ArrayIndexOutOfBoundsException("")).when(mockedChannel).removeAllAdcPinListener();

        testEvent.unregisterFromEventSource();

        assertTrue(ADC_DEVICE_EXCEPTION_SHOULD_BE_CAUGHT, true);
    }
}
Long-term outcomes after anal fistula surgery: results from two university hospitals in Thailand

Purpose: This study aimed to evaluate long-term outcomes after anal fistula surgery in university hospitals in Thailand.

Methods: A prospectively collected database of patients with cryptoglandular anal fistula undergoing surgery from 2011 to 2017 in 2 university hospitals was reviewed. Outcomes were treatment failure (persistent or recurrent fistula), fecal continence status, and chronic postsurgical pain.

Results: This study included 247 patients: 178 (72.1%) with new anal fistula and 69 (27.9%) with recurrent fistula. One hundred twenty-one patients (49.0%) had complex fistula: 53 semi-horseshoe (21.5%), 41 high transsphincteric (16.6%), 24 horseshoe (9.7%), and 3 suprasphincteric (1.2%). Ligation of intersphincteric fistula tract (LIFT) was the most common operation performed (n = 88, 35.6%), followed by fistulotomy (n = 79, 32.0%). With a median follow-up of 23 months (interquartile range, 12-45 months), there were 18 persistent fistulas (7.3%) and 33 recurrent fistulas (13.4%), accounting for 20.6% overall failure. All recurrences occurred within 24 months postoperatively. Complex fistula was the only significant predictor of recurrent fistula, with a hazard ratio of 4.81 (95% confidence interval, 1.82-12.71). There was no significant difference in healing rates of complex fistulas among seton staged fistulotomy (85.0%), endorectal advancement flap (72.7%), and LIFT (65.9%) (P = 0.239). Four patients (1.6%) experienced chronic postsurgical pain. Seventeen patients (6.9%) reported worse fecal continence.

Conclusion: Overall failure after anal fistula surgery was 20.6%. Complex fistula was the only predictor of recurrent fistula. A follow-up period of at least 2 years is suggested for detecting recurrent disease and assessing patient-reported outcomes such as chronic pain and continence status.

INTRODUCTION
An anal fistula is one of the most common benign anal diseases requiring surgical intervention. Its pathogenesis is closely related to chronic bacterial infection of the anal glands, known as cryptoglandular infection. The disease represents a wide spectrum of complexity owing to various degrees of anal sphincter complex involvement and its unpredictable or multiple tracts, thus leading to a high rate of recurrent or persistent (unhealed) fistula after surgery. Ultimately, the goals of anal fistula surgery are to achieve complete healing of the fistula tract by means of closure or removal of the tract and, more importantly, to preserve anal sphincter function. Although sphincter-preserving operations, including ligation of intersphincteric fistula tract (LIFT), have gained popularity in the last decade, the best surgery for anal fistula remains inconclusive because no single procedure is entirely effective. Since the results of anal fistula surgery require a long period of follow-up to determine both clinical outcomes (i.e., recurrence rate and pattern of recurrence) and patient-reported outcomes (i.e., fecal continence status and chronic postsurgical pain), relatively few large studies (more than 200 cases) have examined these long-term outcomes [4]. Moreover, to the best of our knowledge, no such large-scale study has reported these results in a comprehensive manner. The aim of this study was therefore to evaluate long-term clinical and patient-reported outcomes after anal fistula surgery from 2 large referral university hospitals in Thailand.
Factors influencing recurrent fistula were also determined.

METHODS

Patients
This study was approved by the Institutional Ethics Committee of the Faculty of Medicine Siriraj Hospital (No. Si 752/2017) and the Faculty of Medicine, Khon Kaen University (No. HE621468), with a waiver of informed consent. A prospectively collected database of patients with cryptoglandular anal fistula undergoing curative-intent surgery from January 2011 to November 2017 by 2 Thai board-certified colorectal surgeons in 2 large university hospitals (Faculty of Medicine Siriraj Hospital, Mahidol University in Bangkok and Srinagarind Hospital, Khon Kaen University in Khon Kaen) was reviewed. Patients with tuberculosis-associated anal fistula, Crohn-related fistula, and fistula with malignant transformation were excluded. Patients who had never attended the follow-up clinic and could not be contacted by any means were also excluded.

Fistula classification and surgery
The type of anal fistula was classified based on its relationship to the anal sphincter complex, determined by intraoperative findings in conjunction with preoperative radiological imaging (if any), as intersphincteric, transsphincteric, suprasphincteric, extrasphincteric, or semi-horseshoe or horseshoe fistula. Fistulas were then divided into 2 groups based on the American Society of Colon and Rectal Surgeons practice parameters for the management of anal fistula: 'simple' fistula (consisting of intersphincteric fistula and low transsphincteric fistula) and 'complex' fistula (defined as transsphincteric fistulas involving more than 30% of the external sphincter, suprasphincteric, extrasphincteric, and semi-horseshoe or horseshoe fistulas). If a patient had more than 1 fistula tract, the most complex type of fistula was used as the representative for that patient. Patients were operated on by a board-certified colorectal surgeon, mostly in the prone position. Preoperative intravenous antibiotics covering gram-negative bacilli and anaerobic bacteria were given only to patients with complex types of anal fistula. Depending on the planned operation and the patient's preference, operations were performed under 1 of the following anesthetic techniques: perianal block (with or without total intravenous sedation), spinal anesthesia, or general anesthesia. Fistulotomy (with or without marsupialization) and fistulectomy were usually performed for 'simple' fistula, whereas the operation for 'complex' fistula, including seton staged fistulotomy, LIFT, and endorectal advancement flap (ERAF), was determined by the anatomy or complexity of the fistula, the continence status of the patient, the type of previous surgery (if any), and agreement between patient and surgeon. Standard postoperative care was provided to every patient, including opioid-sparing multimodal analgesia and laxatives. Patients receiving perianal block might not require hospitalization, whereas those given the other anesthetic techniques were routinely admitted for 1 or 2 days after surgery. If a patient underwent more than 1 operation at the same time, the main operation (especially for complex fistula) was used as the representative for that patient.

Primary outcome and data collection
Primary outcomes were the rate of treatment failure, which included persistent and recurrent fistula. Persistent fistula was defined as unhealed fistula after surgery. Recurrence was defined as a fistula that recurred after clinically complete healing or full epithelization of the wound or external opening of the fistula.
Factors influencing recurrent fistula were also determined. Secondary outcomes included changes in fecal continence status after surgery and the rate of chronic postsurgical pain. Fecal continence status was evaluated using the Wexner score. Chronic postsurgical pain was defined as pain lasting more than 3 months after an operation without another etiology of pain such as acute or chronic abscess formation. During the index operation, demographic data and operative details were recorded. Patients' demographics included age, sex, onset of the disease, previous treatment, and preoperative imaging (if any). Notably, preoperative radiological studies of anal fistula may or may not have been performed, at the discretion of the surgeons. Operative details included fistula type, the number of primary fistula tracts, operative time, and correspondence to Goodsall's rule (if the external opening of a fistula is located in the posterior half of the anus, its tract will follow a curved course to the posterior midline of the anal canal, whereas if the opening is located in the anterior half of the anus, its tract will follow a straight radial course to the dentate line).

Follow-up protocol
Patients visited a follow-up clinic every 4 to 8 weeks after an operation until the fistula clinically healed. Thereafter, they were advised to visit the clinic every 6 to 12 months or whenever they had any symptoms suggestive of recurrence. Patient-reported outcomes (fecal continence status and chronic postsurgical pain) were also assessed during follow-up. For those missing the follow-up schedule, a telephone interview or telemedicine was used.

Statistical analysis
Stata ver. 13.1 (Stata Corp., College Station, TX, USA) was used for statistical analysis. Continuous data are reported as mean ± standard deviation or median (interquartile range, IQR). Categorical data are described as number (percentage). Kaplan-Meier survival analysis was used to plot the survival curve. The univariate relation between each variable and recurrent fistula was analyzed by binary logistic regression. Factors potentially associated with recurrent fistula (P < 0.2) in the univariate analysis were included in a multivariate model of logistic regression. Hazard ratio (HR) is presented as number (95% confidence interval [CI]). A P-value of < 0.05 was considered statistically significant.

RESULTS

Study population
During the study period, 257 anal fistula surgeries were performed by the 2 colorectal surgeons in the 2 university hospitals. According to our exclusion criteria, 10 patients were excluded: 3 with tuberculosis-associated anal fistula, 2 with adenocarcinoma arising in anal fistula, 1 with Crohn-related fistula, and 4 with no follow-up data. Finally, 247 patients with cryptoglandular anal fistula were included; their clinical characteristics are summarized in Table 1.

Characteristics of anal fistula
Sixty-nine patients (27.9%) underwent surgery for recurrent fistula following previous surgery elsewhere. The others (72.1%) had a new diagnosis of anal fistula and underwent surgery in our institutes. Preoperative radiological studies of anal fistula were performed in 180 patients (72.9%), including hydrogen peroxide-enhanced 3-dimensional endoanal ultrasonography (3D-EAUS) and magnetic resonance imaging (MRI) of anal fistula (Table 1).

Surgical outcomes
With a median follow-up time of 23 months (IQR, 12-45 months), there were 18 (7.3%) persistent unhealed fistulas and 33 (13.4%) recurrent fistulas, accounting for an overall failure rate of 20.6%.
All recurrent fistulas clinically presented within 24 months after an operation (Fig. 2A). The rates of treatment failure for each operation, classified by subtype of anal fistula, are shown in Table 2. [Table 2 footnotes: (a) Intersphincteric fistula and low transsphincteric fistula were classified as 'simple' fistula, and the others were classified as 'complex' fistula. (b) Fistulotomy with marsupialization was grouped as fistulotomy; other procedures included core-out distal fistulectomy, simple closure of the internal opening, and video-assisted anal fistula treatment. (c) Failure cases included persistent fistula and recurrent fistula.] Notably, there was no significant difference in the rate of treatment failure between preoperative imaging modalities (Table 3).

Regarding patient-reported outcomes, 17 patients (6.9%) experienced a worse continence score after surgery (median Wexner score change of 3; range, 1-8). Details of patients with worse postoperative incontinence scores and their association with anal fistula type and operative method are summarized in Table 4 and Table 5, respectively. Four patients (1.6%) reported chronic pain lasting up to 6 months after the operation, but the severity of pain was quite mild (average numerical pain scale, 2 out of 10) and could be controlled with oral analgesia. The characteristics of these 4 patients were as follows: 1 with semi-horseshoe fistula undergoing ERAF, 1 with semi-horseshoe fistula undergoing fistulectomy, and 2 with high transsphincteric fistula undergoing seton staged fistulotomy.

Factors influencing recurrent fistula
In the univariate analysis, complex anal fistula, initial recurrence status, and operative time of more than 45 minutes were the 3 significant factors for recurrent disease. However, in the multivariate analysis, complex anal fistula was the only independent factor for recurrent fistula (HR, 4.81; 95% CI, 1.82-12.71) (Table 6, Fig. 2B).

DISCUSSION
This study of 247 patients with cryptoglandular anal fistula (27.9% recurrent fistula and 49.0% complex type) demonstrated that sphincter-preserving operations, including LIFT and ERAF, were used in 44.1% of patients in this cohort. With a median follow-up of nearly 2 years, the overall rate of treatment failure was approximately 21%, mainly from recurrent disease. Notably, all recurrent fistulas occurred within 24 months postoperatively. In this study, we divided treatment failure into persistent fistula and recurrent fistula because they are different entities. The former is mainly related to incomplete removal or closure of the primary fistula tract or its internal opening, whereas the latter can be caused by several surgical and disease-related factors. In addition to a higher likelihood of overall failure, complex anal fistula was the only significant predictor of recurrent fistula. Interestingly, seton staged fistulotomy, ERAF, and LIFT had comparable healing rates in complex fistula surgery. Last but not least, functional disability after fistula surgery exists even in the hands of a proctologist, with a 6.9% rate of worse continence score and a 1.6% rate of chronic postsurgical pain. These real-world data indicate that about half of the cryptoglandular anal fistulas presenting in daily practice are classified as complex fistula, which was an independent risk factor for recurrent disease.
Our findings are consistent with 2 recent reviews of factors associated with recurrent anal fistula, in which complex fistula, including a high fistula tract (high transsphincteric and suprasphincteric fistula) and curved fistula (semi-horseshoe and horseshoe fistula), were strong predictors of recurrence. Some investigators have also suggested that recurrent fistulas are more likely to remain unhealed or recur than newly formed anal fistulas. However, initial recurrence status was associated with disease recurrence in our univariate analysis but not in the multivariate analysis. Our results also indicate that the success rate of anal fistula surgery should be evaluated at least at postoperative year 2, because some recurrent fistulas presented late, though none beyond 24 months after an operation in our study.

It is known that preoperative radiological imaging can help delineate and define the course of an anal fistula, especially recurrent or complex ones, which could lead to more appropriate surgical decisions and better outcomes. In our study, preoperative radiological imaging was performed in about three-quarters of the studied patients. [Table 3 footnote: values are presented as number only, number (%), or mean ± standard deviation; P = 0.584; other procedures included core-out distal fistulectomy, simple closure of the internal opening, and video-assisted anal fistula treatment.] As shown in this study, hydrogen peroxide-enhanced 3D-EAUS was used more frequently than MRI of anal fistula in Thailand because it is cheaper and more available as an office-based investigation. Also, it is evident that both modalities have comparable sensitivity (about 87%) for detecting anal fistula, although MRI has higher specificity. Notably, our analysis did not find an association between preoperative imaging modality and the failure rate of fistula surgery. This study demonstrated that 17.4% of patients had 2 or more primary fistula tracts, but multiple tracts were not a risk factor for recurrence. It is worth noting that only 70.4% of the fistula tracts followed Goodsall's rule. Recently, the predictive value of Goodsall's rule has been challenged because it was shown to be accurate only when applied to simple fistula (intersphincteric or low transsphincteric fistula), whereas its accuracy was less than 70% in complex fistulas.

Complex anal fistula remains a challenging problem for colorectal surgeons, as reflected in the complete healing rate of 64.2% in this study. The healing rates of complex fistulas in our study were comparable among seton staged fistulotomy (85.0%), ERAF (72.7%), and LIFT (65.9%). Although there is no direct comparison of the clinical effectiveness of these 3 procedures in the literature, seton staged fistulotomy appears to have the highest rate of complex fistula healing, followed by ERAF and LIFT, which was also demonstrated in this study. However, staged fistulotomy may have more adverse effects on anal sphincter function than the other 2 sphincter-preserving procedures (ERAF and LIFT). Since high-quality studies determining the best or standard procedure for complex anal fistula are lacking, the operative technique will mainly depend on the anatomy of the fistula and surgeon expertise, which may require a stepwise approach with preferential choice of sphincter-preserving operations or performing multiple procedures at the same time.
Apart from clinical outcomes, patient-reported outcomes have gained increasing interest in surgical practice because they affect patients' quality of life. In the case of anal fistula surgery, the 2 main patient-reported outcomes are fecal continence status and chronic postsurgical pain. Unfortunately, both of these functional outcomes (especially chronic pain after anal surgery) have hardly been reported in a comprehensive manner in the literature. In this study, 6.9% of the studied patients experienced worse postoperative continence scores (median Wexner score change of 3) and 1.6% had chronic postsurgical pain. The reported incidence of new-onset fecal incontinence after fistula surgery varies in the literature, ranging from 8% to 52%, depending on fistula characteristics, surgical technique, and measurement tool. Patients with simple fistula and those having sphincter-preserving operations have been reported to have a lower risk of fecal incontinence than their counterparts. However, our analysis did not find a significant difference in the incidence of fecal incontinence among the various surgical procedures or among the different subtypes of anal fistula.

Since chronic pain after fistula surgery is not well studied or described, its incidence is largely unknown, but it can be disturbing for patients. In this study, 1.6% of the studied patients reported chronic anal pain beyond 3 months after the operation without an identified etiology of pain. Their chronic postsurgical pain was mild and controllable with oral analgesia, and it lasted up to 6 months postoperatively. The causes of chronic pain after fistula surgery could include occult infection, nonhealing fistula, trauma to the anal sphincter complex, and peripheral nerve injury. The possibility of chronic postsurgical pain highlights the importance of meticulous technique and proper anatomical knowledge in anal fistula surgery.

Fundamentally, this large-scale study reports comprehensive clinical and patient-reported outcomes after anal fistula surgery with a sufficient period of follow-up. However, some limitations should be acknowledged. First, this study was conducted in 2 referral tertiary university hospitals, and all operations were performed by colorectal surgeons. Hence, fistula characteristics may differ from those seen in primary and secondary hospitals, as may the outcomes of operations performed by non-proctologists. In fact, some investigators have suggested that colorectal surgeons tend to perform sphincter-preserving operations and have fewer recurrences than general surgeons. Second, this study included only cryptoglandular anal fistulas. Therefore, surgical techniques and their results (both clinical and patient-reported outcomes) may differ for fistulas related to Crohn disease and tuberculosis. It is well known that patients with non-cryptoglandular fistulas are more difficult and complicated to treat because of the greater complexity of fistula characteristics and the possibility of rectal or extensive perineal involvement. Third, 37.2% of the studied patients were classified without any preoperative imaging study, which may have led to misclassification of anal fistula and possible bias. Last, we did not perform any incontinence tests (e.g., manometry) other than the Wexner clinical score.

In conclusion, this prospective audit showed a high proportion of complex fistulas and sphincter-preserving operations in 2 university hospitals in Thailand.
Despite satisfactory outcomes in the vast majority of studied patients, treatment failure occurred in 20.6%, worse postoperative continence in 6.9%, and chronic postsurgical pain in 1.6%. Complex fistula was a strong predictor of recurrent fistula, which always presented within 24 months after surgery. Hence, a follow-up period of at least 2 years is suggested for detecting any recurrence and measuring functional outcomes, including fecal continence status and chronic postsurgical pain. Patient-reported outcomes should routinely be evaluated together with other clinical outcomes. These findings could also inform counseling of patients about potential outcomes and adverse effects before they undergo surgery for anal fistula.

CONFLICT OF INTEREST
No potential conflict of interest relevant to this article was reported.

FUNDING
None.
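The statistical workflow described in Methods — a Kaplan-Meier curve for recurrence, univariate screening of candidate predictors at P < 0.2, then a multivariate model reported as hazard ratios — can be sketched in code. The study used Stata ver. 13.1, so the Python version below is an assumed re-expression, not the authors' analysis: it uses the lifelines package, a hypothetical dataframe layout (column names such as months_followup and recurrence are invented for illustration), and Cox regression, chosen here because the paper reports hazard ratios even though Methods mentions logistic regression.

# Hedged sketch of the paper's survival workflow (the study itself used Stata).
# All column names below are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv('fistula_cohort.csv')  # hypothetical file: one row per patient

# Kaplan-Meier curve for recurrence-free survival (analogue of Fig. 2A)
km = KaplanMeierFitter()
km.fit(durations=df['months_followup'], event_observed=df['recurrence'])
km.plot_survival_function()

# Univariate screening: keep predictors with P < 0.2 for the multivariate model
candidates = ['complex_fistula', 'initial_recurrence', 'optime_gt45min', 'multiple_tracts']
keep = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[['months_followup', 'recurrence', var]],
            duration_col='months_followup', event_col='recurrence')
    if cph.summary.loc[var, 'p'] < 0.2:
        keep.append(var)

# Multivariate model; the exp(coef) column gives hazard ratios with 95% CIs
cph = CoxPHFitter()
cph.fit(df[['months_followup', 'recurrence'] + keep],
        duration_col='months_followup', event_col='recurrence')
cph.print_summary()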
# scripts.py
# encoding: utf-8
import argparse
import csv
import logging
import os
import sys
import time
from io import StringIO
from datetime import (
    datetime,
    timedelta,
)
from enum import Enum

from sqlalchemy import (
    or_,
)

from api.adobe_vendor_id import (
    AuthdataUtility,
)
# Assumed import location: Axis360BibliographicCoverageProvider is used by
# AvailabilityRefreshScript below but was never imported in the original file.
from api.axis import Axis360BibliographicCoverageProvider
from api.bibliotheca import (
    BibliothecaCirculationSweep
)
from api.config import (
    CannotLoadConfiguration,
    Configuration,
)
from api.controller import CirculationManager
from api.lanes import create_default_lanes
from api.local_analytics_exporter import LocalAnalyticsExporter
from api.marc import LibraryAnnotator as MARCLibraryAnnotator
from api.novelist import (
    NoveListAPI
)
from api.nyt import NYTBestSellerAPI
from api.odl import (
    ODLImporter,
    ODLImportMonitor,
    SharedODLImporter,
    SharedODLImportMonitor,
)
from api.onix import ONIXExtractor
from api.opds_for_distributors import (
    OPDSForDistributorsImporter,
    OPDSForDistributorsImportMonitor,
    OPDSForDistributorsReaperMonitor,
)
from api.overdrive import (
    OverdriveAPI,
)
from core.entrypoint import EntryPoint
from core.external_list import CustomListFromCSV
from core.external_search import ExternalSearchIndex
from core.lane import Lane
from core.lane import (
    Pagination,
    Facets,
    FeaturedFacets,
)
from core.marc import MARCExporter
from core.metadata_layer import (
    CirculationData,
    FormatData,
    ReplacementPolicy,
    LinkData,
)
from core.metadata_layer import MARCExtractor
from core.mirror import MirrorUploader
from core.model import (
    CachedMARCFile,
    CirculationEvent,
    Collection,
    ConfigurationSetting,
    Contribution,
    CustomList,
    DataSource,
    DeliveryMechanism,
    Edition,
    ExternalIntegration,
    get_one,
    Hold,
    Hyperlink,
    Identifier,
    LicensePool,
    Loan,
    Representation,
    RightsStatus,
    SessionManager,
    Subject,
    Timestamp,
    Work,
    EditionConstants,
)
from core.model.configuration import ExternalIntegrationLink
from core.opds import (
    AcquisitionFeed,
)
from core.opds_import import (
    MetadataWranglerOPDSLookup,
    OPDSImporter,
)
from core.scripts import OPDSImportScript, CollectionType
from core.scripts import (
    Script as CoreScript,
    DatabaseMigrationInitializationScript,
    IdentifierInputScript,
    LaneSweeperScript,
    LibraryInputScript,
    PatronInputScript,
    TimestampScript,
)
from core.util import LanguageCodes
from core.util.opds_writer import (
    OPDSFeed,
)
from core.util.datetime_helpers import utc_now


class Script(CoreScript):
    def load_config(self):
        if not Configuration.instance:
            Configuration.load(self._db)


class CreateWorksForIdentifiersScript(Script):
    """Do the bare minimum to associate each Identifier with an Edition
    with title and author, so that we can calculate a permanent work ID.
    """
    to_check = [Identifier.OVERDRIVE_ID, Identifier.THREEM_ID,
                Identifier.GUTENBERG_ID]
    BATCH_SIZE = 100
    name = "Create works for identifiers"

    def __init__(self, metadata_web_app_url=None):
        if metadata_web_app_url:
            self.lookup = MetadataWranglerOPDSLookup(metadata_web_app_url)
        else:
            self.lookup = MetadataWranglerOPDSLookup.from_config(self._db)

    def run(self):
        # We will try to fill in Editions that are missing
        # title/author and as such have no permanent work ID.
        #
        # We will also try to create Editions for Identifiers that
        # have no Edition.
        either_title_or_author_missing = or_(
            Edition.title == None,
            Edition.sort_author == None,
        )
        edition_missing_title_or_author = self._db.query(Identifier).join(
            Identifier.primarily_identifies).filter(
                either_title_or_author_missing)

        no_edition = self._db.query(Identifier).filter(
            Identifier.primarily_identifies==None).filter(
                Identifier.type.in_(self.to_check))

        for q, descr in (
                (edition_missing_title_or_author,
                 "identifiers whose edition is missing title or author"),
                (no_edition, "identifiers with no edition")):
            batch = []
            self.log.debug("Trying to fix %d %s", q.count(), descr)
            for i in q:
                batch.append(i)
                if len(batch) >= self.BATCH_SIZE:
                    self.process_batch(batch)
                    batch = []
            # Process any remaining identifiers in a final, partial batch.
            if batch:
                self.process_batch(batch)

    def process_batch(self, batch):
        response = self.lookup.lookup(batch)

        if response.status_code != 200:
            raise Exception(response.text)

        content_type = response.headers['content-type']
        if content_type != OPDSFeed.ACQUISITION_FEED_TYPE:
            raise Exception("Wrong media type: %s" % content_type)

        importer = OPDSImporter(
            self._db, response.text,
            overwrite_rels=[Hyperlink.DESCRIPTION, Hyperlink.IMAGE])
        imported, messages_by_id = importer.import_from_feed()

        self.log.info("%d successes, %d failures.",
                      len(imported), len(messages_by_id))
        self._db.commit()


class MetadataCalculationScript(Script):
    """Force calculate_presentation() to be called on some set of Editions.

    This assumes that the metadata is already in the database and will
    fall into place if we just call Edition.calculate_presentation()
    and Edition.calculate_work() and Work.calculate_presentation().

    Most of these will be data repair scripts that do not need to be run
    regularly.
    """
    name = "Metadata calculation script"

    def q(self):
        raise NotImplementedError()

    def run(self):
        q = self.q()
        search_index_client = ExternalSearchIndex(self._db)
        self.log.info("Attempting to repair metadata for %d works" % q.count())

        success = 0
        failure = 0
        also_created_work = 0

        def checkpoint():
            self._db.commit()
            self.log.info("%d successes, %d failures, %d new works.",
                          success, failure, also_created_work)

        i = 0
        for edition in q:
            edition.calculate_presentation()
            if edition.sort_author:
                success += 1
                work, is_new = edition.license_pool.calculate_work(
                    search_index_client=search_index_client)
                if work:
                    work.calculate_presentation()
                    if is_new:
                        also_created_work += 1
            else:
                failure += 1
            i += 1
            if not i % 1000:
                checkpoint()
        checkpoint()


class FillInAuthorScript(MetadataCalculationScript):
    """Fill in Edition.sort_author for Editions that have a list of
    Contributors, but no .sort_author.

    This is a data repair script that should not need to be run
    regularly.
""" name = "Fill in missing authors" def q(self): return self._db.query(Edition).join( Edition.contributions).join(Contribution.contributor).filter( Edition.sort_author==None) class UpdateStaffPicksScript(Script): DEFAULT_URL_TEMPLATE = "https://docs.google.com/spreadsheets/d/%s/export?format=csv" def run(self): inp = self.open() tag_fields = { 'tags': Subject.NYPL_APPEAL, } integ = Configuration.integration(Configuration.STAFF_PICKS_INTEGRATION) fields = integ.get(Configuration.LIST_FIELDS, {}) importer = CustomListFromCSV( DataSource.LIBRARY_STAFF, CustomList.STAFF_PICKS_NAME, **fields ) reader = csv.DictReader(inp, dialect='excel-tab') importer.to_customlist(self._db, reader) self._db.commit() def open(self): if len(sys.argv) > 1: return open(sys.argv[1]) url = Configuration.integration_url( Configuration.STAFF_PICKS_INTEGRATION, True ) if not url.startswith('https://') or url.startswith('http://'): url = self.DEFAULT_URL_TEMPLATE % url self.log.info("Retrieving %s", url) representation, cached = Representation.get( self._db, url, do_get=Representation.browser_http_get, accept="text/csv", max_age=timedelta(days=1)) if representation.status_code != 200: raise ValueError("Unexpected status code %s" % representation.status_code) if not representation.media_type.startswith("text/csv"): raise ValueError("Unexpected media type %s" % representation.media_type) return StringIO(representation.content) class CacheRepresentationPerLane(TimestampScript, LaneSweeperScript): name = "Cache one representation per lane" @classmethod def arg_parser(cls, _db): parser = LaneSweeperScript.arg_parser(_db) parser.add_argument( '--language', help='Process only lanes that include books in this language.', action='append' ) parser.add_argument( '--max-depth', help='Stop processing lanes once you reach this depth.', type=int, default=None ) parser.add_argument( '--min-depth', help='Start processing lanes once you reach this depth.', type=int, default=1 ) return parser def __init__(self, _db=None, cmd_args=None, testing=False, manager=None, *args, **kwargs): """Constructor. :param _db: A database connection. :param cmd_args: A mock set of command-line arguments, to use instead of looking at the actual command line. :param testing: If this method creates a CirculationManager object, this value will be passed in to its constructor as its value for `testing`. :param manager: A mock CirculationManager object, to use instead of creating a new one (creating a CirculationManager object is very time-consuming). :param *args: Positional arguments to pass to the superconstructor. :param **kwargs: Keyword arguments to pass to the superconstructor. """ super(CacheRepresentationPerLane, self).__init__(_db, *args, **kwargs) self.parse_args(cmd_args) if not manager: manager = CirculationManager(self._db, testing=testing) from api.app import app app.manager = manager self.app = app self.base_url = ConfigurationSetting.sitewide(self._db, Configuration.BASE_URL_KEY).value def parse_args(self, cmd_args=None): parser = self.arg_parser(self._db) parsed = parser.parse_args(cmd_args) self.languages = [] if parsed.language: for language in parsed.language: alpha = LanguageCodes.string_to_alpha_3(language) if alpha: self.languages.append(alpha) else: self.log.warn("Ignored unrecognized language code %s", alpha) self.max_depth = parsed.max_depth self.min_depth = parsed.min_depth # Return the parsed arguments in case a subclass needs to # process more args. 
        return parsed

    def should_process_lane(self, lane):
        if not isinstance(lane, Lane):
            return False

        language_ok = False
        if not self.languages:
            # We are considering lanes for every single language.
            language_ok = True

        if not lane.languages:
            # The lane has no language restrictions.
            language_ok = True

        for language in self.languages:
            if language in lane.languages:
                language_ok = True
                break
        if not language_ok:
            return False

        if self.max_depth is not None and lane.depth > self.max_depth:
            return False
        if self.min_depth is not None and lane.depth < self.min_depth:
            return False

        return True

    def cache_url(self, annotator, lane, languages):
        raise NotImplementedError()

    def generate_representation(self, *args, **kwargs):
        raise NotImplementedError()

    # The generated document will probably be an OPDS acquisition
    # feed.
    ACCEPT_HEADER = OPDSFeed.ACQUISITION_FEED_TYPE

    cache_url_method = None

    def process_library(self, library):
        begin = time.time()
        client = self.app.test_client()
        ctx = self.app.test_request_context(base_url=self.base_url)
        ctx.push()
        super(CacheRepresentationPerLane, self).process_library(library)
        ctx.pop()
        end = time.time()
        self.log.info(
            "Processed library %s in %.2fsec", library.short_name, end - begin
        )

    def process_lane(self, lane):
        """Generate a number of feeds for this lane.

        One feed will be generated for each combination of Facets and
        Pagination objects returned by facets() and pagination().
        """
        cached_feeds = []
        for facets in self.facets(lane):
            for pagination in self.pagination(lane):
                extra_description = ""
                if facets:
                    extra_description += " Facets: %s." % facets.query_string
                if pagination:
                    extra_description += " Pagination: %s." % pagination.query_string
                self.log.info(
                    "Generating feed for %s.%s",
                    lane.full_identifier, extra_description
                )
                a = time.time()
                feed = self.do_generate(lane, facets, pagination)
                b = time.time()
                if feed:
                    cached_feeds.append(feed)
                    self.log.info(
                        "Took %.2f sec to make %d bytes.", (b - a), len(feed.data)
                    )
        total_size = sum(len(x.data) for x in cached_feeds)
        return cached_feeds

    def facets(self, lane):
        """Yield a Facets object for each set of facets this script
        is expected to handle.

        :param lane: The lane under consideration. (Different lanes may
            have different available facets.)
        :yield: A sequence of Facets objects.
        """
        yield None

    def pagination(self, lane):
        """Yield a Pagination object for each page of a feed this
        script is expected to handle.

        :param lane: The lane under consideration. (Different lanes may
            have different pagination rules.)
        :yield: A sequence of Pagination objects.
        """
        yield None


class CacheFacetListsPerLane(CacheRepresentationPerLane):
    """Cache the first two pages of every relevant facet list for this lane."""

    name = "Cache paginated OPDS feed for each lane"

    @classmethod
    def arg_parser(cls, _db):
        parser = CacheRepresentationPerLane.arg_parser(_db)

        available = Facets.DEFAULT_ENABLED_FACETS[Facets.ORDER_FACET_GROUP_NAME]
        order_help = 'Generate feeds for this ordering. Possible values: %s.' % (
            ", ".join(available)
        )
        parser.add_argument(
            '--order', help=order_help, action='append', default=[],
        )

        available = Facets.DEFAULT_ENABLED_FACETS[Facets.AVAILABILITY_FACET_GROUP_NAME]
        availability_help = 'Generate feeds for this availability setting. Possible values: %s.' % (
            ", ".join(available)
        )
        parser.add_argument(
            '--availability', help=availability_help, action='append', default=[],
        )

        available = Facets.DEFAULT_ENABLED_FACETS[Facets.COLLECTION_FACET_GROUP_NAME]
        collection_help = 'Generate feeds for this collection within each lane. Possible values: %s.' % (
            ", ".join(available)
        )
        parser.add_argument(
            '--collection', help=collection_help, action='append', default=[],
        )

        available = [x.INTERNAL_NAME for x in EntryPoint.ENTRY_POINTS]
        entrypoint_help = 'Generate feeds for this entry point within each lane. Possible values: %s.' % (
            ", ".join(available)
        )
        parser.add_argument(
            '--entrypoint', help=entrypoint_help, action='append', default=[],
        )

        default_pages = 2
        parser.add_argument(
            '--pages',
            help="Number of pages to cache for each facet. Default: %d" % default_pages,
            type=int,
            default=default_pages
        )
        return parser

    def parse_args(self, cmd_args=None):
        parsed = super(CacheFacetListsPerLane, self).parse_args(cmd_args)
        self.orders = parsed.order
        self.availabilities = parsed.availability
        self.collections = parsed.collection
        self.entrypoints = parsed.entrypoint
        self.pages = parsed.pages
        return parsed

    def facets(self, lane):
        """This script covers a user-specified combination of facets, but it
        defaults to using every combination of available facets for
        the given lane with a certain sort order. This means every
        combination of availability, collection, and entry point.

        That's a whole lot of feeds, which is why this script isn't
        actually used -- by the time we generate all of them, they've
        expired.
        """
        library = lane.get_library(self._db)
        default_order = library.default_facet(Facets.ORDER_FACET_GROUP_NAME)
        allowed_orders = library.enabled_facets(Facets.ORDER_FACET_GROUP_NAME)
        chosen_orders = self.orders or [default_order]

        allowed_entrypoint_names = [
            x.INTERNAL_NAME for x in library.entrypoints
        ]
        default_entrypoint_name = None
        if allowed_entrypoint_names:
            default_entrypoint_name = allowed_entrypoint_names[0]
        chosen_entrypoints = self.entrypoints or allowed_entrypoint_names

        default_availability = library.default_facet(
            Facets.AVAILABILITY_FACET_GROUP_NAME
        )
        allowed_availabilities = library.enabled_facets(
            Facets.AVAILABILITY_FACET_GROUP_NAME
        )
        chosen_availabilities = self.availabilities or [default_availability]

        default_collection = library.default_facet(
            Facets.COLLECTION_FACET_GROUP_NAME
        )
        allowed_collections = library.enabled_facets(
            Facets.COLLECTION_FACET_GROUP_NAME
        )
        chosen_collections = self.collections or [default_collection]

        top_level = (lane.parent is None)
        for entrypoint_name in chosen_entrypoints:
            entrypoint = EntryPoint.BY_INTERNAL_NAME.get(entrypoint_name)
            if not entrypoint:
                logging.warn("Ignoring unknown entry point %s" % entrypoint_name)
                continue
            if not entrypoint_name in allowed_entrypoint_names:
                logging.warn("Ignoring disabled entry point %s" % entrypoint_name)
                continue
            for order in chosen_orders:
                if order not in allowed_orders:
                    logging.warn("Ignoring unsupported ordering %s" % order)
                    continue
                for availability in chosen_availabilities:
                    if availability not in allowed_availabilities:
                        logging.warn("Ignoring unsupported availability %s" % availability)
                        continue
                    for collection in chosen_collections:
                        if collection not in allowed_collections:
                            logging.warn("Ignoring unsupported collection %s" % collection)
                            continue
                        facets = Facets(
                            library=library,
                            collection=collection,
                            availability=availability,
                            entrypoint=entrypoint,
                            entrypoint_is_default=(
                                top_level and
                                entrypoint.INTERNAL_NAME == default_entrypoint_name
                            ),
                            order=order,
                            order_ascending=True
                        )
                        yield facets

    def pagination(self, lane):
        """This script covers a user-specified number of pages."""
        page = Pagination.default()
        for pagenum in range(0, self.pages):
            yield page
            page = page.next_page
            if not page:
                # There aren't enough books to fill `self.pages`
                # pages. Stop working.
                break

    def do_generate(self, lane, facets, pagination, feed_class=None):
        feeds = []
        title = lane.display_name
        library = lane.get_library(self._db)
        annotator = self.app.manager.annotator(lane, facets=facets)
        url = annotator.feed_url(lane, facets=facets, pagination=pagination)
        feed_class = feed_class or AcquisitionFeed
        return feed_class.page(
            _db=self._db, title=title, url=url, worklist=lane,
            annotator=annotator, facets=facets, pagination=pagination,
            max_age=0
        )


class CacheOPDSGroupFeedPerLane(CacheRepresentationPerLane):

    name = "Cache OPDS grouped feed for each lane"

    def should_process_lane(self, lane):
        # OPDS grouped feeds are only generated for lanes that have sublanes.
        if not lane.children:
            return False
        if self.max_depth is not None and lane.depth > self.max_depth:
            return False
        return True

    def do_generate(self, lane, facets, pagination, feed_class=None):
        title = lane.display_name
        annotator = self.app.manager.annotator(lane, facets=facets)
        url = annotator.groups_url(lane, facets)
        feed_class = feed_class or AcquisitionFeed

        # Since grouped feeds are only cached for lanes that have sublanes,
        # there's no need to consider the case of a lane with no sublanes,
        # unlike the corresponding code in OPDSFeedController.groups()
        return feed_class.groups(
            _db=self._db, title=title, url=url, worklist=lane,
            annotator=annotator, max_age=0, facets=facets
        )

    def facets(self, lane):
        """Generate a Facets object for each of the library's enabled
        entrypoints.

        This is the only way grouped feeds are ever generated, so there is
        no way to override this.
        """
        top_level = (lane.parent is None)
        library = lane.get_library(self._db)

        # If the WorkList has explicitly defined EntryPoints, we want to
        # create a grouped feed for each EntryPoint. Otherwise, we want
        # to create a single grouped feed with no particular EntryPoint.
        #
        # We use library.entrypoints instead of lane.entrypoints
        # because WorkList.entrypoints controls which entry points you
        # can *switch to* from a given WorkList. We're handling the
        # case where you switched further up the hierarchy and now
        # you're navigating downwards.
        entrypoints = list(library.entrypoints) or [None]
        default_entrypoint = entrypoints[0]
        for entrypoint in entrypoints:
            facets = FeaturedFacets(
                minimum_featured_quality=library.minimum_featured_quality,
                uses_customlists=lane.uses_customlists,
                entrypoint=entrypoint,
                entrypoint_is_default=(
                    top_level and entrypoint is default_entrypoint
                )
            )
            yield facets


class CacheMARCFiles(LaneSweeperScript):
    """Generate and cache MARC files for each input library."""

    name = "Cache MARC files"

    @classmethod
    def arg_parser(cls, _db):
        parser = LaneSweeperScript.arg_parser(_db)
        parser.add_argument(
            '--max-depth',
            help='Stop processing lanes once you reach this depth.',
            type=int,
            default=0,
        )
        parser.add_argument(
            '--force',
            help="Generate new MARC files even if MARC files have already been generated recently enough",
            dest='force',
            action='store_true',
        )
        return parser

    def __init__(self, _db=None, cmd_args=None, *args, **kwargs):
        super(CacheMARCFiles, self).__init__(_db, *args, **kwargs)
        self.parse_args(cmd_args)

    def parse_args(self, cmd_args=None):
        parser = self.arg_parser(self._db)
        parsed = parser.parse_args(cmd_args)
        self.max_depth = parsed.max_depth
        self.force = parsed.force
        return parsed

    def should_process_library(self, library):
        integration = ExternalIntegration.lookup(
            self._db, ExternalIntegration.MARC_EXPORT,
            ExternalIntegration.CATALOG_GOAL, library)
        return (integration is not None)

    def process_library(self, library):
        if self.should_process_library(library):
            super(CacheMARCFiles, self).process_library(library)
            self.log.info("Processed library %s" % library.name)

    def should_process_lane(self, lane):
        if isinstance(lane, Lane):
            if self.max_depth is not None and lane.depth > self.max_depth:
                return False
            if lane.size == 0:
                return False
        return True

    def process_lane(self, lane, exporter=None):
        # Generate a MARC file for this lane, if one has not been
        # generated recently enough.
        if isinstance(lane, Lane):
            library = lane.library
        else:
            library = lane.get_library(self._db)

        annotator = MARCLibraryAnnotator(library)
        exporter = exporter or MARCExporter.from_config(library)

        update_frequency = ConfigurationSetting.for_library_and_externalintegration(
            self._db, MARCExporter.UPDATE_FREQUENCY, library, exporter.integration
        ).int_value
        if update_frequency is None:
            update_frequency = MARCExporter.DEFAULT_UPDATE_FREQUENCY

        last_update = None
        files_q = self._db.query(CachedMARCFile).filter(
            CachedMARCFile.library==library
        ).filter(
            CachedMARCFile.lane==(lane if isinstance(lane, Lane) else None),
        ).order_by(CachedMARCFile.end_time.desc())

        if files_q.count() > 0:
            last_update = files_q.first().end_time
        if not self.force and last_update and (
                last_update > utc_now() - timedelta(days=update_frequency)):
            self.log.info(
                "Skipping lane %s because last update was less than %d days ago" % (
                    lane.display_name, update_frequency))
            return

        # To find the storage integration for the exporter, first find the
        # external integration link associated with the exporter's external
        # integration.
        integration_link = get_one(
            self._db, ExternalIntegrationLink,
            external_integration_id=exporter.integration.id,
            purpose=ExternalIntegrationLink.MARC
        )
        # Then use the "other" integration value to find the storage integration.
        storage_integration = get_one(
            self._db, ExternalIntegration,
            id=integration_link.other_integration_id
        )
        if not storage_integration:
            self.log.info("No storage External Integration was found.")
            return

        # First update the file with ALL the records.
records = exporter.records( lane, annotator, storage_integration ) # Then create a new file with changes since the last update. start_time = None if last_update: # Allow one day of overlap to ensure we don't miss anything due to script timing. start_time = last_update - timedelta(days=1) records = exporter.records( lane, annotator, storage_integration, start_time=start_time ) class AdobeAccountIDResetScript(PatronInputScript): @classmethod def arg_parser(cls, _db): parser = super(AdobeAccountIDResetScript, cls).arg_parser(_db) parser.add_argument( '--delete', help="Actually delete credentials as opposed to showing what would happen.", action='store_true' ) return parser def do_run(self, *args, **kwargs): parsed = self.parse_command_line(self._db, *args, **kwargs) patrons = parsed.patrons self.delete = parsed.delete if not self.delete: self.log.info( "This is a dry run. Nothing will actually change in the database." ) self.log.info( "Run with --delete to change the database." ) if patrons and self.delete: self.log.warn( """This is not a drill. Running this script will permanently disconnect %d patron(s) from their Adobe account IDs. They will be unable to fulfill any existing loans that involve Adobe-encrypted files. Sleeping for five seconds to give you a chance to back out. You'll get another chance to back out before the database session is committed.""", len(patrons) ) time.sleep(5) self.process_patrons(patrons) if self.delete: self.log.warn("All done. Sleeping for five seconds before committing.") time.sleep(5) self._db.commit() def process_patron(self, patron): """Delete all of a patron's Credentials that contain an Adobe account ID _or_ connect the patron to a DelegatedPatronIdentifier that contains an Adobe account ID. """ self.log.info( 'Processing patron "%s"', patron.authorization_identifier or patron.username or patron.external_identifier ) for credential in AuthdataUtility.adobe_relevant_credentials(patron): self.log.info( ' Deleting "%s" credential "%s"', credential.type, credential.credential ) if self.delete: self._db.delete(credential) class AvailabilityRefreshScript(IdentifierInputScript): """Refresh the availability information for a LicensePool, direct from the license source. """ def do_run(self): args = self.parse_command_line(self._db) if not args.identifiers: raise Exception( "You must specify at least one identifier to refresh." ) # We don't know exactly how big to make these batches, but 10 is # always safe. start = 0 size = 10 while start < len(args.identifiers): batch = args.identifiers[start:start+size] self.refresh_availability(batch) self._db.commit() start += size def refresh_availability(self, identifiers): provider = None identifier = identifiers[0] if identifier.type==Identifier.THREEM_ID: sweeper = BibliothecaCirculationSweep(self._db) sweeper.process_batch(identifiers) elif identifier.type==Identifier.OVERDRIVE_ID: api = OverdriveAPI(self._db) for identifier in identifiers: api.update_licensepool(identifier.identifier) elif identifier.type==Identifier.AXIS_360_ID: provider = Axis360BibliographicCoverageProvider(self._db) provider.process_batch(identifiers) else: self.log.warn("Cannot update coverage for %r" % identifier.type) class LanguageListScript(LibraryInputScript): """List all the languages with at least one non-open access work in the collection. """ def process_library(self, library): print(library.short_name) for item in self.languages(library): print(item) def languages(self, library): ":yield: A list of output lines, one per language." 
for abbreviation, count in library.estimated_holdings_by_language( include_open_access=False ).most_common(): display_name = LanguageCodes.name_for_languageset(abbreviation) yield "%s %i (%s)" % (abbreviation, count, display_name) class CompileTranslationsScript(Script): """A script to combine translation files for circulation, core and the admin interface, and compile the result to be used by the app. The combination step is necessary because Flask-Babel does not support multiple domains yet. """ def run(self): languages = Configuration.localization_languages() for language in languages: base_path = "translations/%s/LC_MESSAGES" % language if not os.path.exists(base_path): logging.warn("No translations for configured language %s" % language) continue os.system("rm %(path)s/messages.po" % dict(path=base_path)) os.system("cat %(path)s/*.po > %(path)s/messages.po" % dict(path=base_path)) os.system("pybabel compile -f -d translations") class InstanceInitializationScript(TimestampScript): """An idempotent script to initialize an instance of the Circulation Manager. This script is intended for use in servers, Docker containers, etc, when the Circulation Manager app is being installed. It initializes the database and sets an appropriate alias on the ElasticSearch index. Because it's currently run every time a container is started, it must remain idempotent. """ name = "Instance initialization" TEST_SQL = "select * from timestamps limit 1" def run(self, *args, **kwargs): # Create a special database session that doesn't initialize # the ORM -- this could be fatal if there are migration # scripts that haven't run yet. # # In fact, we don't even initialize the database schema, # because that's the thing we're trying to check for. url = Configuration.database_url() _db = SessionManager.session( url, initialize_data=False, initialize_schema=False ) results = None try: # We need to check for the existence of a known table -- # this will demonstrate that this script has been run before -- # but we don't need to actually look at what we get from the # database. # # Basically, if this succeeds, we can bail out and not run # the rest of the script. results = list(_db.execute(self.TEST_SQL)) except Exception as e: # This did _not_ succeed, so the schema is probably not # initialized and we do need to run this script.. This # database session is useless now, but we'll create a new # one during the super() call, and use that one to do the # work. _db.close() if results is None: super(InstanceInitializationScript, self).run(*args, **kwargs) else: self.log.error("I think this site has already been initialized; doing nothing.") def do_run(self, ignore_search=False): # Creates a "-current" alias on the Elasticsearch client. if not ignore_search: try: search_client = ExternalSearchIndex(self._db) except CannotLoadConfiguration as e: # Elasticsearch isn't configured, so do nothing. pass # Set a timestamp that represents the new database's version. db_init_script = DatabaseMigrationInitializationScript(_db=self._db) existing = get_one( self._db, Timestamp, service=db_init_script.name, service_type=Timestamp.SCRIPT_TYPE ) if existing: # No need to run the script. We already have a timestamp. return db_init_script.run() # Create a secret key if one doesn't already exist. ConfigurationSetting.sitewide_secret(self._db, Configuration.SECRET_KEY) class LoanReaperScript(TimestampScript): """Remove expired loans and holds whose owners have not yet synced with the loan providers. 
This stops the library from keeping a record of the final loans and holds of a patron who stopped using the circulation manager. If a loan or (more likely) hold is removed incorrectly, it will be restored the next time the patron syncs their loans feed. """ name = "Remove expired loans and holds from local database" def do_run(self): now = utc_now() # Reap loans and holds that we know have expired. for obj, what in ((Loan, 'loans'), (Hold, 'holds')): qu = self._db.query(obj).filter(obj.end < now) self._reap(qu, "expired %s" % what) for obj, what, max_age in ( (Loan, 'loans', timedelta(days=90)), (Hold, 'holds', timedelta(days=365)), ): # Reap loans and holds which have no end date and are very # old. It's very likely these loans and holds have expired # and we simply don't have the information. older_than = now - max_age qu = self._db.query(obj).join(obj.license_pool).filter( obj.end == None).filter( obj.start < older_than).filter( LicensePool.open_access == False ) explain = "%s older than %s" % ( what, older_than.strftime("%Y-%m-%d") ) self._reap(qu, explain) def _reap(self, qu, what): """Delete every database object that matches the given query. :param qu: The query that yields objects to delete. :param what: A human-readable explanation of what's being deleted. """ counter = 0 print("Reaping %d %s." % (qu.count(), what)) for o in qu: self._db.delete(o) counter += 1 if not counter % 100: print(counter) self._db.commit() self._db.commit() class DisappearingBookReportScript(Script): """Print a TSV-format report on books that used to be in the collection, or should be in the collection, but aren't. """ def do_run(self): qu = self._db.query(LicensePool).filter( LicensePool.open_access==False).filter( LicensePool.suppressed==False).filter( LicensePool.licenses_owned<=0).order_by( LicensePool.availability_time.desc()) first_row = ["Identifier", "Title", "Author", "First seen", "Last seen (best guess)", "Current licenses owned", "Current licenses available", "Changes in number of licenses", "Changes in title availability", ] print("\t".join(first_row)) for pool in qu: self.explain(pool) def investigate(self, licensepool): """Find when the given LicensePool might have disappeared from the collection. :param licensepool: A LicensePool. :return: a 3-tuple (last_seen, title_removal_events, license_removal_events). `last_seen` is the latest point at which we knew the book was circulating. If we never knew the book to be circulating, this is the first time we ever saw the LicensePool. `title_removal_events` is a query that returns CirculationEvents in which this LicensePool was removed from the remote collection. `license_removal_events` is a query that returns CirculationEvents in which LicensePool.licenses_owned went from having a positive number to being zero or a negative number. """ first_activity = None most_recent_activity = None # If we have absolutely no information about the book ever # circulating, we act like we lost track of the book # immediately after seeing it for the first time. last_seen = licensepool.availability_time # If there's a recorded loan or hold on the book, that can # push up the last time the book was known to be circulating. for l in (licensepool.loans, licensepool.holds): for item in l: if not last_seen or item.start > last_seen: last_seen = item.start # Now we look for relevant circulation events. First, an event # where the title was explicitly removed is pretty clearly # a 'last seen'. 
base_query = self._db.query(CirculationEvent).filter( CirculationEvent.license_pool==licensepool).order_by( CirculationEvent.start.desc() ) title_removal_events = base_query.filter( CirculationEvent.type==CirculationEvent.DISTRIBUTOR_TITLE_REMOVE ) if title_removal_events.count(): candidate = title_removal_events[-1].start if not last_seen or candidate > last_seen: last_seen = candidate # Also look for an event where the title went from a nonzero # number of licenses to a zero number of licenses. That's a # good 'last seen'. license_removal_events = base_query.filter( CirculationEvent.type==CirculationEvent.DISTRIBUTOR_LICENSE_REMOVE, ).filter( CirculationEvent.old_value>0).filter( CirculationEvent.new_value<=0 ) if license_removal_events.count(): candidate = license_removal_events[-1].start if not last_seen or candidate > last_seen: last_seen = candidate return last_seen, title_removal_events, license_removal_events format = "%Y-%m-%d" def explain(self, licensepool): edition = licensepool.presentation_edition identifier = licensepool.identifier last_seen, title_removal_events, license_removal_events = self.investigate( licensepool ) data = ["%s %s" % (identifier.type, identifier.identifier)] if edition: data.extend([edition.title, edition.author]) if licensepool.availability_time: first_seen = licensepool.availability_time.strftime(self.format) else: first_seen = '' data.append(first_seen) if last_seen: last_seen = last_seen.strftime(self.format) else: last_seen = '' data.append(last_seen) data.append(licensepool.licenses_owned) data.append(licensepool.licenses_available) license_removals = [] for event in license_removal_events: description ="%s: %s→%s" % ( event.start.strftime(self.format), event.old_value, event.new_value ) license_removals.append(description) data.append(", ".join(license_removals)) title_removals = [event.start.strftime(self.format) for event in title_removal_events] data.append(", ".join(title_removals)) print("\t".join([str(x).encode("utf8") for x in data])) class NYTBestSellerListsScript(TimestampScript): name = "Update New York Times best-seller lists" def __init__(self, include_history=False): super(NYTBestSellerListsScript, self).__init__() self.include_history = include_history def do_run(self): self.api = NYTBestSellerAPI.from_config(self._db) self.data_source = DataSource.lookup(self._db, DataSource.NYT) # For every best-seller list... names = self.api.list_of_lists() for l in sorted(names['results'], key=lambda x: x['list_name_encoded']): name = l['list_name_encoded'] self.log.info("Handling list %s" % name) best = self.api.best_seller_list(l) if self.include_history: self.api.fill_in_history(best) else: self.api.update(best) # Mirror the list to the database. 
customlist = best.to_customlist(self._db) self.log.info( "Now %s entries in the list.", len(customlist.entries)) self._db.commit() class OPDSForDistributorsImportScript(OPDSImportScript): """Import all books from the OPDS feed associated with a collection that requires authentication.""" IMPORTER_CLASS = OPDSForDistributorsImporter MONITOR_CLASS = OPDSForDistributorsImportMonitor PROTOCOL = OPDSForDistributorsImporter.NAME class OPDSForDistributorsReaperScript(OPDSImportScript): """Get all books from the OPDS feed associated with a collection to find out if any have been removed.""" IMPORTER_CLASS = OPDSForDistributorsImporter MONITOR_CLASS = OPDSForDistributorsReaperMonitor PROTOCOL = OPDSForDistributorsImporter.NAME class DirectoryImportScript(TimestampScript): """Import some books into a collection, based on a file containing metadata and directories containing ebook and cover files. """ name = "Import new titles from a directory on disk" @classmethod def arg_parser(cls, _db): parser = argparse.ArgumentParser() parser.add_argument( '--collection-name', help='Titles will be imported into a collection with this name. The collection will be created if it does not already exist.', required=True ) parser.add_argument( '--collection-type', help='Collection type. Valid values are: OPEN_ACCESS (default), PROTECTED_ACCESS, LCP.', type=CollectionType, choices=list(CollectionType), default=CollectionType.OPEN_ACCESS ) parser.add_argument( '--data-source-name', help='All data associated with this import activity will be recorded as originating with this data source. The data source will be created if it does not already exist.', required=True ) parser.add_argument( '--metadata-file', help='Path to a file containing MARC or ONIX 3.0 metadata for every title in the collection', required=True ) parser.add_argument( '--metadata-format', help='Format of the metadata file ("marc" or "onix")', default='marc', ) parser.add_argument( '--cover-directory', help='Directory containing a full-size cover image for every title in the collection.', ) parser.add_argument( '--ebook-directory', help='Directory containing an EPUB or PDF file for every title in the collection.', required=True ) RS = RightsStatus rights_uris = ", ".join(RS.OPEN_ACCESS) parser.add_argument( '--rights-uri', help="A URI explaining the rights status of the works being uploaded. Acceptable values: %s" % rights_uris, required=True ) parser.add_argument( '--dry-run', help="Show what would be imported, but don't actually do the import.", action='store_true', ) parser.add_argument( '--default-medium-type', help='Default medium type used in the case when it\'s not explicitly specified in a metadata file. 
' 'Valid values are: {0}.'.format(', '.join(EditionConstants.FULFILLABLE_MEDIA)), type=str, choices=EditionConstants.FULFILLABLE_MEDIA ) return parser def do_run(self, cmd_args=None): parser = self.arg_parser(self._db) parsed = parser.parse_args(cmd_args) collection_name = parsed.collection_name collection_type = parsed.collection_type data_source_name = parsed.data_source_name metadata_file = parsed.metadata_file metadata_format = parsed.metadata_format cover_directory = parsed.cover_directory ebook_directory = parsed.ebook_directory rights_uri = parsed.rights_uri dry_run = parsed.dry_run default_medium_type = parsed.default_medium_type return self.run_with_arguments( collection_name=collection_name, collection_type=collection_type, data_source_name=data_source_name, metadata_file=metadata_file, metadata_format=metadata_format, cover_directory=cover_directory, ebook_directory=ebook_directory, rights_uri=rights_uri, dry_run=dry_run, default_medium_type=default_medium_type ) def run_with_arguments( self, collection_name, collection_type, data_source_name, metadata_file, metadata_format, cover_directory, ebook_directory, rights_uri, dry_run, default_medium_type=None ): if dry_run: self.log.warn( "This is a dry run. No files will be uploaded and nothing will change in the database." ) collection, mirrors = self.load_collection(collection_name, collection_type, data_source_name) if not collection or not mirrors: return self.timestamp_collection = collection if dry_run: mirrors = None self_hosted_collection = collection_type in (CollectionType.OPEN_ACCESS, CollectionType.PROTECTED_ACCESS) replacement_policy = ReplacementPolicy.from_license_source(self._db) replacement_policy.mirrors = mirrors metadata_records = self.load_metadata(metadata_file, metadata_format, data_source_name, default_medium_type) for metadata in metadata_records: _, licensepool = self.work_from_metadata( collection, collection_type, metadata, replacement_policy, cover_directory, ebook_directory, rights_uri ) licensepool.self_hosted = True if self_hosted_collection else False if not dry_run: self._db.commit() def load_collection(self, collection_name, collection_type, data_source_name): """Locate a Collection with the given name. If the collection is found, it will be associated with the given data source and configured with existing covers and books mirror configurations. :param collection_name: Name of the Collection. :type collection_name: string :param collection_type: Type of the collection: open access/proteceted access. :type collection_name: CollectionType :param data_source_name: Associate this data source with the Collection if it does not already have a data source. A DataSource object will be created if necessary. :type data_source_name: string :return: A 2-tuple (Collection, list of MirrorUploader instances) :rtype: Tuple[Collection, List[MirrorUploader]] """ collection, is_new = Collection.by_name_and_protocol( self._db, collection_name, ExternalIntegration.LCP if collection_type == CollectionType.LCP else ExternalIntegration.MANUAL ) if is_new: self.log.error( "An existing collection must be used and should be set up before running this script." 
) return None, None mirrors = dict(covers_mirror=None, books_mirror=None) types = [ ExternalIntegrationLink.COVERS, ExternalIntegrationLink.OPEN_ACCESS_BOOKS if collection_type == CollectionType.OPEN_ACCESS else ExternalIntegrationLink.PROTECTED_ACCESS_BOOKS ] for type in types: mirror_for_type = MirrorUploader.for_collection(collection, type) if not mirror_for_type: self.log.error( "An existing %s mirror integration should be assigned to the collection before running the script." % type ) return None, None mirrors[type] = mirror_for_type data_source = DataSource.lookup( self._db, data_source_name, autocreate=True, offers_licenses=True ) collection.external_integration.set_setting( Collection.DATA_SOURCE_NAME_SETTING, data_source.name ) return collection, mirrors def load_metadata(self, metadata_file, metadata_format, data_source_name, default_medium_type): """Read a metadata file and convert the data into Metadata records.""" metadata_records = [] if metadata_format == 'marc': extractor = MARCExtractor() elif metadata_format == 'onix': extractor = ONIXExtractor() with open(metadata_file) as f: metadata_records.extend(extractor.parse(f, data_source_name, default_medium_type)) return metadata_records def work_from_metadata(self, collection, collection_type, metadata, policy, *args, **kwargs): """Creates a Work instance from metadata :param collection: Target collection :type collection: Collection :param collection_type: Collection's type: open access/protected access :type collection_type: CollectionType :param metadata: Book's metadata :type metadata: Metadata :param policy: Replacement policy :type policy: ReplacementPolicy :return: A 2-tuple of (Work object, LicensePool object) :rtype: Tuple[core.model.work.Work, LicensePool] """ self.annotate_metadata(collection_type, metadata, policy, *args, **kwargs) if not metadata.circulation: # We cannot actually provide access to the book so there # is no point in proceeding with the import. return edition, new = metadata.edition(self._db) metadata.apply(edition, collection, replace=policy) [pool] = [x for x in edition.license_pools if x.collection == collection] if new: self.log.info("Created new edition for %s", edition.title) else: self.log.info("Updating existing edition for %s", edition.title) work, ignore = pool.calculate_work() if work: work.set_presentation_ready() self.log.info( "FINALIZED %s/%s/%s" % (work.title, work.author, work.sort_author) ) return work, pool def annotate_metadata( self, collection_type, metadata, policy, cover_directory, ebook_directory, rights_uri): """Add a CirculationData and possibly an extra LinkData to `metadata` :param collection_type: Collection's type: open access/protected access :type collection_type: CollectionType :param metadata: Book's metadata :type metadata: Metadata :param policy: Replacement policy :type policy: ReplacementPolicy :param cover_directory: Directory containing book covers :type cover_directory: string :param ebook_directory: Directory containing books :type ebook_directory: string :param rights_uri: URI explaining the rights status of the works being uploaded :type rights_uri: string """ identifier, ignore = metadata.primary_identifier.load(self._db) data_source = metadata.data_source(self._db) mirrors = policy.mirrors circulation_data = self.load_circulation_data( collection_type, identifier, data_source, ebook_directory, mirrors, metadata.title, rights_uri ) if not circulation_data: # There is no point in contining. 
return if metadata.circulation: circulation_data.licenses_owned = metadata.circulation.licenses_owned circulation_data.licenses_available = metadata.circulation.licenses_available circulation_data.licenses_reserved = metadata.circulation.licenses_reserved circulation_data.patrons_in_hold_queue = metadata.circulation.patrons_in_hold_queue circulation_data.licenses = metadata.circulation.licenses metadata.circulation = circulation_data # If a cover image is available, add it to the Metadata # as a link. cover_link = None if cover_directory: cover_link = self.load_cover_link( identifier, data_source, cover_directory, mirrors ) if cover_link: metadata.links.append(cover_link) else: logging.info( "Proceeding with import even though %r has no cover.", identifier ) def load_circulation_data( self, collection_type, identifier, data_source, ebook_directory, mirrors, title, rights_uri): """Loads an actual copy of a book from disk :param collection_type: Collection's type: open access/protected access :type collection_type: CollectionType :param identifier: Book's identifier :type identifier: core.model.identifier.Identifier, :param data_source: DataSource object :type data_source: DataSource :param ebook_directory: Directory containing books :type ebook_directory: string :param mirrors: Dictionary containing mirrors for books and their covers :type mirrors: Dict[string, MirrorUploader] :param title: Book's title :type title: string :param rights_uri: URI explaining the rights status of the works being uploaded :type rights_uri: string :return: A CirculationData that contains the book as an open-access download, or None if no such book can be found :rtype: CirculationData """ ignore, book_media_type, book_content = self._locate_file( identifier.identifier, ebook_directory, Representation.COMMON_EBOOK_EXTENSIONS, "ebook file", ) if not book_content: # We couldn't find an actual copy of the book, so there is # no point in proceeding. return book_mirror = mirrors[ ExternalIntegrationLink.OPEN_ACCESS_BOOKS if collection_type == CollectionType.OPEN_ACCESS else ExternalIntegrationLink.PROTECTED_ACCESS_BOOKS ] if mirrors else None # Use the S3 storage for books. if book_mirror: book_url = book_mirror.book_url( identifier, '.' + Representation.FILE_EXTENSIONS[book_media_type], open_access=collection_type == CollectionType.OPEN_ACCESS, data_source=data_source, title=title ) else: # This is a dry run and we won't be mirroring anything. book_url = identifier.identifier + "." + Representation.FILE_EXTENSIONS[book_media_type] book_link_rel = \ Hyperlink.OPEN_ACCESS_DOWNLOAD \ if collection_type == CollectionType.OPEN_ACCESS \ else Hyperlink.GENERIC_OPDS_ACQUISITION book_link = LinkData( rel=book_link_rel, href=book_url, media_type=book_media_type, content=book_content ) formats = [ FormatData( content_type=book_media_type, drm_scheme=DeliveryMechanism.LCP_DRM if collection_type == CollectionType.LCP else DeliveryMechanism.NO_DRM, link=book_link, ) ] circulation_data = CirculationData( data_source=data_source.name, primary_identifier=identifier, links=[book_link], formats=formats, default_rights_uri=rights_uri, ) return circulation_data def load_cover_link(self, identifier, data_source, cover_directory, mirrors): """Load an actual book cover from disk. :return: A LinkData containing a cover of the book, or None if no book cover can be found. 
""" cover_filename, cover_media_type, cover_content = self._locate_file( identifier.identifier, cover_directory, Representation.COMMON_IMAGE_EXTENSIONS, "cover image" ) if not cover_content: return None cover_filename = ( identifier.identifier + '.' + Representation.FILE_EXTENSIONS[cover_media_type] ) # Use an S3 storage mirror for specifically for covers. if mirrors and mirrors[ExternalIntegrationLink.COVERS]: cover_url = mirrors[ExternalIntegrationLink.COVERS].cover_image_url( data_source, identifier, cover_filename ) else: # This is a dry run and we won't be mirroring anything. cover_url = cover_filename cover_link = LinkData( rel=Hyperlink.IMAGE, href=cover_url, media_type=cover_media_type, content=cover_content, ) return cover_link @classmethod def _locate_file(cls, base_filename, directory, extensions, file_type="file", mock_filesystem_operations=None): """Find an acceptable file in the given directory. :param base_filename: A string to be used as the base of the filename. :param directory: Look for a file in this directory. :param extensions: Any of these extensions for the file is acceptable. :param file_type: Human-readable description of the type of file we're looking for. This is used only in a log warning if no file can be found. :param mock_filesystem_operations: A test may pass in a 2-tuple of functions to replace os.path.exists and the 'open' function. :return: A 3-tuple. (None, None, None) if no file can be found; otherwise (filename, media_type, contents). """ if mock_filesystem_operations: exists_f, open_f = mock_filesystem_operations else: exists_f = os.path.exists open_f = open success_path = None media_type = None attempts = [] for extension in extensions: for ext in (extension, extension.upper()): if not ext.startswith('.'): ext = '.' + ext filename = base_filename + ext path = os.path.join(directory, filename) attempts.append(path) if exists_f(path): media_type = Representation.MEDIA_TYPE_FOR_EXTENSION.get( ext.lower() ) content = None with open_f(path) as fh: content = fh.read() return filename, media_type, content # If we went through that whole loop without returning, # we have failed. logging.warn( "Could not find %s for %s. Looked in: %s", file_type, base_filename, ", ".join(attempts) ) return None, None, None class LaneResetScript(LibraryInputScript): """Reset a library's lanes based on language configuration or estimates of the library's current collection.""" @classmethod def arg_parser(cls, _db): parser = LibraryInputScript.arg_parser(_db) parser.add_argument( '--reset', help="Actually reset the lanes as opposed to showing what would happen.", action='store_true' ) return parser def do_run(self, output=sys.stdout, **kwargs): parsed = self.parse_command_line(self._db, **kwargs) libraries = parsed.libraries self.reset = parsed.reset if not self.reset: self.log.info( "This is a dry run. Nothing will actually change in the database." ) self.log.info( "Run with --reset to change the database." ) if libraries and self.reset: self.log.warn( """This is not a drill. Running this script will permanently reset the lanes for %d libraries. Any lanes created from custom lists will be deleted (though the lists themselves will be preserved). Sleeping for five seconds to give you a chance to back out. 
You'll get another chance to back out before the database session is committed.""", len(libraries) ) time.sleep(5) self.process_libraries(libraries) new_lane_output = "New Lane Configuration:" for library in libraries: new_lane_output += "\n\nLibrary '%s':\n" % library.name def print_lanes_for_parent(parent): lanes = self._db.query(Lane).filter(Lane.library==library).filter(Lane.parent==parent).order_by(Lane.priority) lane_output = "" for lane in lanes: lane_output += " " + (" " * len(list(lane.parentage))) + lane.display_name + "\n" lane_output += print_lanes_for_parent(lane) return lane_output new_lane_output += print_lanes_for_parent(None) output.write(new_lane_output) if self.reset: self.log.warn("All done. Sleeping for five seconds before committing.") time.sleep(5) self._db.commit() def process_library(self, library): create_default_lanes(self._db, library) class NovelistSnapshotScript(TimestampScript, LibraryInputScript): def do_run(self, output=sys.stdout, *args, **kwargs): parsed = self.parse_command_line(self._db, *args, **kwargs) for library in parsed.libraries: try: api = NoveListAPI.from_config(library) except CannotLoadConfiguration as e: self.log.info(str(e)) continue if (api): response = api.put_items_novelist(library) if (response): result = "NoveList API Response\n" result += str(response) output.write(result) class ODLImportScript(OPDSImportScript): """Import information from the feed associated with an ODL collection.""" IMPORTER_CLASS = ODLImporter MONITOR_CLASS = ODLImportMonitor PROTOCOL = ODLImporter.NAME class SharedODLImportScript(OPDSImportScript): IMPORTER_CLASS = SharedODLImporter MONITOR_CLASS = SharedODLImportMonitor PROTOCOL = SharedODLImporter.NAME class LocalAnalyticsExportScript(Script): """Export circulation events for a date range to a CSV file.""" @classmethod def arg_parser(cls, _db): parser = argparse.ArgumentParser() parser.add_argument( '--start', help="Include circulation events that happened at or after this time.", required=True, ) parser.add_argument( '--end', help="Include circulation events that happened before this time.", required=True, ) return parser def do_run(self, output=sys.stdout, cmd_args=None, exporter=None): parser = self.arg_parser(self._db) parsed = parser.parse_args(cmd_args) start = parsed.start end = parsed.end exporter = exporter or LocalAnalyticsExporter() output.write(exporter.export(self._db, start, end))
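# ---------------------------------------------------------------------------
# Illustrative usage sketch (an assumption, not part of the original module):
# how two of the script classes above might be driven from a shell wrapper or
# cron job. This assumes the `run()` entry point inherited from the core
# Script base class dispatches to `do_run()`; the dates and argument values
# below are hypothetical.

from io import StringIO

def example_export_analytics():
    # Export circulation events for January 2020 into an in-memory buffer.
    output = StringIO()
    LocalAnalyticsExportScript().do_run(
        output=output,
        cmd_args=["--start", "2020-01-01", "--end", "2020-02-01"],
    )
    return output.getvalue()

def example_cache_marc_files():
    # Regenerate MARC files for top-level lanes only, even if fresh files
    # were generated recently.
    CacheMARCFiles(cmd_args=["--max-depth", "1", "--force"]).run()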
import "./index.scss";
import * as React from "react";
import * as cx from "classnames";
import { Dispatch } from "redux";
import { Workspace } from "./workspace/view.pc";
import { Welcome } from "./welcome/view.pc";
import {
  RootState,
  isUnsaved,
  getBuildScriptProcess,
  RootReadyType
} from "../../state";
import { Chrome } from "./chrome.pc";

export type RootOuterProps = {
  root: RootState;
  dispatch: Dispatch<any>;
};

export class RootComponent extends React.PureComponent<RootOuterProps> {
  render() {
    const { root, dispatch } = this.props;

    // TODO - add loading state here
    if (root.readyType === RootReadyType.LOADING) {
      return null;
    }

    let content;

    const buildScriptProcess = getBuildScriptProcess(root);

    if (!root.projectInfo) {
      content = (
        <Welcome
          key="welcome"
          dispatch={dispatch}
          selectedDirectory={root.selectedDirectoryPath}
        />
      );
    } else {
      content = (
        <div key="workspace-root" className="m-root">
          <Workspace root={root} dispatch={dispatch} />
        </div>
      );
    }

    if (root.customChrome) {
      content = (
        <Chrome
          content={content}
          unsaved={isUnsaved(root)}
          projectInfo={root.projectInfo}
          dispatch={dispatch}
          buildButtonProps={{
            dispatch,
            buildScriptProcess,
            hasBuildScript: Boolean(
              root.projectInfo &&
                root.projectInfo.config.scripts &&
                root.projectInfo.config.scripts.build
            ),
            hasOpenScript: Boolean(
              root.projectInfo &&
                root.projectInfo.config.scripts &&
                root.projectInfo.config.scripts.openApp
            )
          }}
        />
      );
    }

    return content;
  }
}
// Identifies ranges of dispatchable ops and moves them into dispatch regions. LogicalResult identifyBlockDispatchRegions(Block *block, Dispatchability &dispatchability) { bool didFindAnyNewRegions; do { didFindAnyNewRegions = false; for (auto &rootOp : llvm::reverse(*block)) { LLVM_DEBUG(llvm::dbgs() << "-> EVALUATING OP FOR ROOT FUSION: " << rootOp.getName() << "\n"); if (!isDispatchableOp(&rootOp, dispatchability)) { LLVM_DEBUG(llvm::dbgs() << " -SKIP NON DISPATCHABLE OP-\n"); continue; } if (isNonFusionRootOp(&rootOp)) { LLVM_DEBUG(llvm::dbgs() << " -SKIP NON FUSION ROOT OP-\n"); continue; } auto fusedSubgraph = findFusionSubgraphFromRoot(&rootOp, dispatchability); auto workload = calculateWorkload(&rootOp, rootOp.getResult(0)); if (!workload) { return failure(); } if (failed(buildDispatchRegion(block, workload, fusedSubgraph))) { return failure(); } didFindAnyNewRegions = true; break; } } while (didFindAnyNewRegions); return success(); }
Role of Abscisic Acid in Drought-Induced Freezing Tolerance, Cold Acclimation, and Accumulation of LTI78 and RAB18 Proteins in Arabidopsis thaliana

To study the role of abscisic acid (ABA) in the development of freezing tolerance of Arabidopsis thaliana, we exposed wild-type plants, the ABA-insensitive mutant abi1, and the ABA-deficient mutant aba-1 to low temperature (LT), exogenous ABA, and drought. Exposure of A. thaliana to drought stress resulted in an increase in freezing tolerance similar to that achieved by ABA treatment or the initial stages of acclimation, suggesting overlapping responses to these environmental cues. ABA appears to be involved in both LT- and drought-induced freezing tolerance, since both ABA mutants were impaired in their responses to these stimuli. To correlate enhanced freezing tolerance with the presence of stress-specific proteins, we characterized the accumulation of RAB18 and LTI78 in two ecotypes, Landsberg erecta and Coimbra, and in the ABA mutants during the stress response. LT- and drought-induced accumulation of RAB18 coincided with the increase in freezing tolerance and was blocked in the cold-acclimation-deficient ABA mutants. In contrast, LTI78 accumulated in all genotypes in response to LT and drought and was always present when the plants were freezing tolerant. This suggests that development of freezing tolerance in A. thaliana requires ABA-controlled processes in addition to ABA-independent factors.
class TransactionV210:
    """
    TransactionV210 defines the decoding and signing logic for an ERC20
    Address Registration Transaction (v210).
    Creation of this Transaction from the JS API is not supported,
    as such this implementation is only meant to allow to keep your balance
    up to date, should you also use your seed on other platforms where you
    might have interacted with the ERC20 network.
    """

    def __init__(self):
        """
        Initializes a new v210 transaction
        """
        self._coin_inputs = []
        self._refund_coin_output = None
        self._tx_fee = None
        self._erc20_reg_fee = None
        self._erc20_pub_key = None
        self._erc20_tft_address = ''
        self._erc20_address = ''
        self._erc20_signature = ''
        self._id = None
        self._specifier = bytearray(b"erc20 addrreg tx")

    @property
    def version(self):
        return ERC20_ADDRESS_REGISTRATION_TRANSACTION_VERSION

    @property
    def id(self):
        """
        Gets transaction id
        """
        return self._id

    @id.setter
    def id(self, txn_id):
        """
        Sets transaction id
        """
        self._id = txn_id

    @property
    def coin_inputs(self):
        """
        Retrieves the coin inputs
        """
        return self._coin_inputs or []

    @property
    def coin_outputs(self):
        """
        Retrieves the coin outputs
        """
        if self._refund_coin_output:
            return [self._refund_coin_output]
        return []

    @property
    def miner_fees(self):
        """
        Retrieves the miner fees
        """
        # TODO: include the registration fee as miner fee
        if self._tx_fee:
            return [self._tx_fee]
        return []

    @property
    def data(self):
        """
        Gets the arbitrary data of the transaction
        """
        return bytearray()

    @property
    def data_type(self):
        """
        Gets the optional type of the arbitrary data of the transaction
        """
        return 0

    @property
    def json(self):
        """
        Returns a json version of the TransactionV210 object
        """
        result = {
            'version': self.version,
            'data': {
                'pubkey': self._erc20_pub_key.json if self._erc20_pub_key else '',
                'tftaddress': self._erc20_tft_address or '',
                'erc20address': self._erc20_address or '',
                'signature': self._erc20_signature or '',
                'regfee': str(self._erc20_reg_fee),
                'txfee': str(self._tx_fee),
                'coininputs': [input.json for input in self._coin_inputs]
            }
        }
        if self._refund_coin_output:
            result['data']['refundcoinoutput'] = self._refund_coin_output.json
        return result

    def from_dict(self, data):
        """
        Populates this TransactionV210 object from a data (JSON-decoded) dictionary
        """
        self._erc20_pub_key = data.get('pubkey', None)
        if self._erc20_pub_key:
            self._erc20_pub_key = tftsig.SiaPublicKey.from_string(self._erc20_pub_key)
        self._erc20_tft_address = data.get('tftaddress', '')
        self._erc20_address = data.get('erc20address', '')
        self._erc20_signature = data.get('signature', '')
        if 'regfee' in data:
            self._erc20_reg_fee = int(data['regfee'])
        else:
            self._erc20_reg_fee = 0
        if 'txfee' in data:
            self._tx_fee = int(data['txfee'])
        else:
            self._tx_fee = 0
        if 'coininputs' in data:
            for ci_info in data['coininputs']:
                ci = CoinInput.from_dict(ci_info)
                self._coin_inputs.append(ci)
        else:
            self._coin_inputs = []
        if 'refundcoinoutput' in data:
            co = CoinOutput.from_dict(data['refundcoinoutput'])
            self._refund_coin_output = co
        else:
            self._refund_coin_output = None

    def add_coin_input(self, parent_id, pub_key):
        """
        Adds a new input to the transaction
        """
        key = Ed25519PublicKey(pub_key=pub_key)
        fulfillment = SingleSignatureFulfillment(pub_key=key)
        self._coin_inputs.append(CoinInput(parent_id=parent_id, fulfillment=fulfillment))

    def set_transaction_fee(self, fee):
        self._tx_fee = fee

    def set_registration_fee(self, fee):
        self._erc20_reg_fee = fee

    @property
    def erc20_public_key(self):
        return self._erc20_pub_key

    def set_erc20_public_key(self, pubkey):
        self._erc20_pub_key = pubkey
    def set_erc20_signature(self, signature):
        self._erc20_signature = signature

    def set_refund_coin_output(self, value, recipient):
        """
        Set a coin output as refund coin output of this tx

        @param value: Amount of coins
        @param recipient: The recipient address
        """
        unlockhash = UnlockHash.from_string(recipient)
        condition = UnlockHashCondition(unlockhash=unlockhash)
        self._refund_coin_output = CoinOutput(value=value, condition=condition)

    def get_input_signature_hash(self, extra_objects=None):
        """
        Builds a signature hash for an input
        """
        if extra_objects is None:
            extra_objects = []

        buffer = bytearray()
        # encode the transaction version
        buffer.extend(tfbinary.IntegerBinaryEncoder.encode(self.version))
        # encode the specifier
        buffer.extend(self._specifier)
        # encode the public key
        buffer.extend(tfbinary.BinaryEncoder.encode(self._erc20_pub_key))
        # extra objects if any
        for extra_object in extra_objects:
            buffer.extend(tfbinary.BinaryEncoder.encode(extra_object))
        # encode the number of coin inputs
        buffer.extend(tfbinary.IntegerBinaryEncoder.encode(len(self._coin_inputs), _kind='int'))
        # encode inputs parent_ids
        for coin_input in self._coin_inputs:
            buffer.extend(tfbinary.BinaryEncoder.encode(coin_input.parent_id, type_='hex'))
        # encode fees
        buffer.extend(tfbinary.BinaryEncoder.encode(self._erc20_reg_fee, type_='currency'))
        buffer.extend(tfbinary.BinaryEncoder.encode(self._tx_fee, type_='currency'))
        # encode refund coin output
        if self._refund_coin_output:
            buffer.extend([1])
            buffer.extend(tfbinary.BinaryEncoder.encode(self._refund_coin_output))
        else:
            buffer.extend([0])
        # now we need to return the hash value of the binary array
        # return bytes(buffer)
        return hash(data=buffer)
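# ---------------------------------------------------------------------------
# Illustrative usage sketch (an assumption, not part of the original module):
# building a v210 transaction by hand and computing the hash that each coin
# input's fulfillment must sign. The parent id, public key, and fee values
# are placeholders.

def example_v210_signature_hash(parent_id, pub_key):
    tx = TransactionV210()
    tx.add_coin_input(parent_id=parent_id, pub_key=pub_key)
    tx.set_transaction_fee(1000000000)    # placeholder fee, in base units
    tx.set_registration_fee(10000000000)  # placeholder registration fee
    # Hash to be signed by the coin input's fulfillment.
    sig_hash = tx.get_input_signature_hash()
    # `json` serializes the transaction back to its wire structure.
    return tx.json, sig_hash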
Hepatoprotective effect of ethanol extract from Berchemia lineata against CCl4-induced acute hepatotoxicity in mice

Abstract Context: The roots of Berchemia lineata (L.) DC. (Rhamnaceae) have long been used as a remedy for the treatment of some diseases in Guangxi Province, China. Objective: The present study investigates the hepatoprotective effect of Berchemia lineata ethanol extract (BELE) on CCl4-induced acute liver damage in mice. Materials and methods: The effect of BELE administered for 7 consecutive days was evaluated in mice by serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), total bilirubin (TBIL), albumin (ALB), globulin (GLB), and total protein (TP) levels, as well as liver superoxide dismutase (SOD) activity and malondialdehyde (MDA) level. Moreover, histopathological examinations were also performed. Results: Compared with the model group, administration of 400 mg/kg BELE for 7 days significantly decreased serum ALT (56.25 U/L), AST (297.67 U/L), ALP (188.20 U/L), and TBIL (17.90 μmol/L), along with an elevation of TP (64.67 g/L). In addition, BELE-treated mice (100, 200, and 400 mg/kg, i.g.) recorded a dose-dependent increment of SOD (291.17, 310.32, and 325.67 U/mg prot) and reduction of MDA (7.27, 6.77, and 5.33 nmol/mg prot) levels. Histopathological examinations also confirmed that BELE can ameliorate CCl4-induced liver injuries, characterized by extensive hepatocellular degeneration/necrosis, inflammatory cell infiltration, congestion, and sinusoidal dilatation. Discussion and conclusion: The results indicated that BELE possessed a remarkable protective effect against acute hepatotoxicity and oxidative injury induced by CCl4, and that the hepatoprotective effects of BELE may be due to both the inhibition of lipid peroxidation and an increase in antioxidant activity.
Exercise and High-Fat Diet in Obesity: Functional Genomics Perspectives of Two Energy Homeostasis Pillars

The heavy impact of obesity on both the general health of the population and the economy makes clarifying the underlying mechanisms, identifying pharmacological targets, and developing efficient therapies for obesity highly important. The main struggle facing obesity research is that the underlying mechanistic pathways are yet to be fully revealed. This limits both our understanding of pathogenesis and therapeutic progress toward treating the obesity epidemic. The current anti-obesity approaches are mainly a controlled diet and exercise, which can have limitations. For instance, the classical anti-obesity approach of exercise might not be practical for patients suffering from disabilities that prevent them from routine exercise. Therefore, therapeutic alternatives are urgently required. Within this context, pharmacological agents could be relatively efficient in association with an adequate diet, which remains the most efficient approach in such situations. Herein, we put a spotlight on potential therapeutic targets for obesity identified through differential gene expression-based studies that aim to find genes differentially expressed under diverse conditions depending on physical activity and diet (mainly high-fat), two key factors influencing obesity development and prognosis. Such functional genomics approaches help elucidate the molecular mechanisms that both control obesity development and switch the genetic, biochemical, and metabolic pathways toward a specific energy balance phenotype. It is important to clarify that by gene-related pathways we refer to the genes, the corresponding proteins and their potential receptors, and the enzymes and molecules, both within cells and in the intercellular space, that are related to the activation, regulation, or inactivation of the gene or its corresponding protein or pathways. We believe that this emerging area of functional genomics-related exploration will lead not only to novel mechanisms but also to new applications and implications, along with a new generation of treatments for obesity and the related metabolic disorders, especially with the modern advances in pharmacological drug targeting and functional genomics techniques.

Obesity as a Health Problem in Need of Novel Approaches

Obesity is defined as abnormal or excessive fat accumulation resulting from a broken energy homeostasis. Its epidemiological profile shows a continuously increasing trend worldwide. In the United States of America, at least 78.6 million people suffer from obesity. Obesity is also linked to diabetes development (diabesity). In addition, not only can many risk factors increase obesity prevalence, but the obesity epidemic also has a major impact on health due to the complexity of its mechanisms, pathophysiology, and metabolic consequences. Obesity has also been reported to increase the risk and incidence of diseases and disorders such as advanced colorectal neoplasm, malnutrition, and mortality, in addition to decreasing life expectancy, among other diverse health impacts that could justify classifying obesity as a disease. Diet control (caloric restriction), exercise, or the combination of both are the main anti-obesity approaches. For persons with morbid obesity, bariatric surgery can be an option, and medications are prescribed in some cases as well.
Although body weight management is a multibillion-dollar market, only a few Food and Drug Administration-approved drugs are available for long-term obesity treatment, and all have undesirable side effects. In addition, some disabilities or heart diseases might limit the ability of individuals with obesity to exercise. In spite of the efforts of diverse local, national, and international organizations in collaboration with health professionals and decision makers, obesity remains a major challenge with heavy consequences for the quality of life of the population and for healthcare budgets, especially as patients with obesity might require specific or adapted therapeutic care for some diseases compared to patients not suffering from obesity. Therefore, there is an urgent need to further explore obesity-related pathways in order to understand the underlying mechanisms and identify potential therapeutic targets. Herein, we focus on exercise and high-fat (HF) diet as they represent key factors for obesity prevention, development, and treatment. We highlight how functional genomics allows these factors to be explored, via illustrative examples, along with the possible research, pharmacological, and clinical outcomes and implications.

Exercise and Health

Along with resting energy expenditure, exercise-induced energy expenditure represents a key component of total energy expenditure. In addition to its place within the energy balance as its most variable part, exercise has benefits at different levels, even for the older population. Regular exercise contributes to reduced body weight, blood pressure, low-density lipoprotein, and total cholesterol, and increases high-density lipoprotein cholesterol, muscular function and strength, as well as insulin sensitivity. This makes exercise an important therapy both to prevent and to manage obesity. Although the purpose remains to create a cumulative negative caloric balance leading to weight loss, the intensity, regularity, and duration of an exercise define its type and the related outcomes and benefits. The choice of exercise type depends on what we want to achieve in terms of muscle strength, fat mass loss, mitochondrial function enhancement, etc., as well as on the ability of the individual, depending on factors like age, cardiovascular health, and disability. For instance, an elderly person with cardiovascular disease would go for a walk to burn calories because of their limited exercise capacity. The key metabolic tissue used during exercise is the skeletal muscle, and its health represents a key factor both for improved metabolic performance and for healthy ageing, the impairment of which constitutes a risk factor for obesity. Exercise has a crucial role in maintaining skeletal muscle homeostasis, especially for the older population. The biochemical profile of muscle is highly determined by protein synthesis (muscle contraction) and energy metabolism (energy expenditure), which govern the ability to use energy via locomotion, a principal component of anti-obesity therapy involving exercise. Importantly, both body size and body composition, which are shaped by exercise, are determinants of resting energy expenditure. This shows that the benefits of exercise in terms of caloric use go beyond exercise-related energy expenditure. In addition, the benefits of exercise are not limited to energy metabolism, the lipoprotein profile, or obesity treatment.
Indeed, studies have shown how exercise could help improve the prognosis and therapy, or prevent (i.e., reduce the risk of) the onset, of diverse diseases and conditions such as cancers, cancer-induced cardiac cachexia, multiple sclerosis, stroke, and breast cancer-related lymphedema, as well as counteract some treatment side effects; it can even be prescribed as a complementary therapy (e.g., exercise oncology).

Exercise Impacts Gene Expression

Identifying genes that are regulated by exercise (exercise-induced genes, especially in the skeletal muscle) has been a focus of different research groups that have already identified a number of key exercise-related transcriptomes. For instance, numerous studies have obtained data that define the effects of exercise on genes related to exercise benefits at the biochemical and metabolic levels. Indeed, they have shown that exercise induces the expression of genes that regulate or are related to mitochondrial biogenesis, oxidative phosphorylation (OXPHOS), antioxidant defense mechanisms, cell proliferation, and the amelioration of insulin resistance, which indicates links between exercise outcomes and transcriptome modifications. Furthermore, other gene expression-based studies, mainly comparative and under different conditions including exercise and rest, have allowed the collation of data and increased our understanding of the skeletal muscle transcriptome and its functions in diverse contexts and depending on the population category. This contributes to a more precise mechanistic understanding of the genetic and biochemical changes at the molecular level. Thus, it could guide the development of muscle-targeting therapies for obesity by defining the pathway associations of genes, optimize other therapies, and even improve pharmacovigilance based on genetic profiling. Beyond that, identifying exercise-induced genes would support further progress in understanding and treating diseases other than those depending only on energy homeostasis, which would expand the benefits of "exercise pills".

Gene Expression Patterns Underlie Muscular Adaptation to Exercise

Exploring such exercise-induced genes and pathways contributes to understanding the molecular profiles that govern the adaptive responses of muscles to exercise. In addition, advances in the epigenetics of muscle in relation to exercise, diet, and aging would further strengthen this field beyond genomics and put each of these pillars within a complementary network of data via which we can investigate potential therapies. For instance, exercise during pregnancy induces changes in the offspring, indicating that maternal physical activity (intensity and frequency) impacts the health of the unborn child, which opens an area in molecular pediatrics research. Our team has also focused on gene expression in the skeletal muscle of endurance athletes compared to sedentary men and identified 33 genes that are differentially expressed. This study, which supports the data reported above, highlights global muscle gene expression, including genes mostly related to muscle contraction and energy metabolism (two parameters improved by exercise). Moreover, these data further support our previous characterization of the global gene expression profile of sprinters' muscle, which shows transcripts mainly involved in contraction and energy metabolism as the most expressed in the muscles of sprinters.
Such a gene expression pattern reflects a functional and metabolic adaptation of athletes toward increased muscle contractile function along with enhanced energy expenditure, in the context of exercise training-induced muscle adaptations. Furthermore, another study, involving healthy men, shows that moderate-intensity exercise at the lactate threshold induces the expression of transcriptomes involved in the tricarboxylic acid cycle, β-oxidation, antioxidant enzymes, the contractile apparatus, and electron transport in the skeletal muscle. Following the same line of thought, it was demonstrated that after 6 weeks of endurance training at lactate threshold intensity, the regulation of the skeletal muscle transcriptome in elderly men includes increased expression of genes related to OXPHOS. All these changes reflect an increase in energy expenditure capacity via enhanced mitochondrial activity with increased usage of biofuels, which, combined with reduced energy storage, would lead to protection from obesity. This study also highlighted the importance of mitochondrial OXPHOS and extracellular matrix (ECM) remodeling in skeletal muscle adaptation, which correlates with previously reported work in which genes of both ECM and calcium binding are upregulated and those related to diabetes are modulated in human skeletal muscle following 6 weeks of aerobic training. We note that the exercise-induced genes are associated with a profile that counteracts the ageing process. Indeed, whereas ageing (a risk factor for obesity) decreases metabolic performance (e.g., mitochondrial dysfunction) and muscle strength and increases oxidative stress, exercise improves those biological patterns in the muscle. One of the mild endurance training-induced genes that draws particular attention is the secreted protein acidic and rich in cysteine (SPARC). This gene was characterized as an exercise-induced gene, as well as a gene induced by electrical pulse stimulation (considered the in vitro form of exercise) in C2C12 myoblasts. In addition, studies have shown that SPARC increases in the skeletal muscle during training. This same protein plays diverse roles in energy metabolism (especially in the muscle), ECM remodeling and myoblast differentiation, inflammation, and cancer development, which would indicate that SPARC plays a role in exercise-induced benefit-related processes involving inflammation, cancer, and tissue remodeling. All these gene expression changes help us to understand, at least in part, exercise-induced pathways of mitochondrial biogenesis and mitochondrial biochemistry, as well as muscle adaptation and how exercise can reverse the impacts of ageing on skeletal muscle. Such genomics studies are supported and complemented by proteomics studies that have explored the variations in protein expression in muscle depending on physical activity [66] and reflect an adaptation of the protein profile comparable to the transcriptomic changes. This includes an increase in the expression of the peroxisome proliferator-activated receptor γ coactivator 1α isoform PGC-1α4, which is involved in the regulation of skeletal muscle hypertrophy and reflects an aspect of the correlation and complementarity between functional genomics and functional proteomics. Moreover, studies of exercise-related genes can be categorized depending on exercise type, e.g., endurance-based exercise and resistance-based exercise.
The transcriptomic signature of exercised muscle is also variable depending on muscle fiber type and age. This indicates a need for a classification strategy based on the variables (age, muscle fibers, exercise type, etc.) that modify the gene expression response to exercise. Such a classification could also be extrapolated to therapeutic target identification depending on the desired pharmacological effects (enhancing metabolism, increasing muscle strength, etc.).

Implications

Such exercise-related gene expression patterns explain some of the exercise benefits, including those seen even after detraining, such as increased muscle contraction and improved energy metabolism, thereby providing molecular and mechanistic links between the exercise benefits and the genes (over)expressed with or following exercise, which could potentially be used for drug development toward an "exercise pill" (Figure 1). Importantly, the exercise benefits and their clinical outcomes are precisely what clinicians hope to observe in their patients (with obesity, diabetes, etc.), such as an improved blood lipoprotein profile, increased usage of lipids and glucose, ameliorated insulin resistance, as well as enhanced energy expenditure. Obtaining these effects is exactly what functional genomics-based therapies aim to achieve via pharmacological agents. Indeed, identifying exercise-specific genes and exploring the pathways they control would allow the development of exercise pills. Such pills could therapeutically mimic the effects of exercise by targeting these "exercise-gene" pathways through pharmacological agents and thus obtain the benefits of exercise without intensive training. This is of particular importance for elderly individuals (including those suffering from heart disease) or individuals with disabilities who have a limited ability to exercise but who therapeutically require its benefits. Therefore, such an "exercise pill" would make it possible to overcome this limitation of applying exercise as a therapy for obesity.

High-Fat Diet Particularities in Obesity Context

As diet is the other pillar in obesity research and represents the energy intake side as well as a key part of anti-obesity therapy, it is also an important factor for gene expression studies in the context of obesity. The diverse properties and impacts the diet has on metabolic patterns and biochemical adaptations have made the identification and exploration of associated specific gene expression patterns an important element in molecular obesity research. The effect of diet on obesity development is well known, especially for HF diet. The reason behind the focus on fat, beyond the concept of excess caloric intake, is that this nutrient, compared to both carbohydrates and proteins, has a limited effect on satiety, is associated with high palatability, and has a high caloric density. In addition, the lipid content of the modern Western diet increases fat consumption and is part of an unhealthy lifestyle. Indeed, following HF meal ingestion, both caloric intake and energy expenditure favor weight gain because of the palatability, high caloric density, and low satiety effect of HF nutrients, as well as the weak potency for fat oxidation and energy expenditure associated with elevated fat intake. The other pattern associated with HF diet is that the offspring have an obesity risk and gene expression alterations as a consequence of the maternal HF diet.
This highlights the need to focus on the HF diet, especially as it impacts gene expression and the epigenetic profile, as exemplified by studies showing that epigenetic changes can be consequences of the maternal HF diet. The control of food intake represents a major determinant in the etiology of obesity, especially with HF meals, which acutely disrupt energy balance. Feeding behavior is controlled by short-term circulating nutrients and hormones as well as signals derived from peripheral tissues in response to a meal and to changes in energy stores. Within this context, the hypothalamus is a key brain center upon which all these peripheral signals converge to regulate feeding behavior and energy intake; it thus controls short-term as well as long-term energy balance and steady-state body weight. Therefore, screening the changes in gene expression following acute HF meal ingestion would reveal new elements within the gut-brain axis, leading to the development of novel approaches for the understanding and control of energy homeostasis; in particular, the identification of transcriptomic changes induced by the HF diet in digestive and peripheral tissues as well as within the central energy metabolism control centers in the brain.
Digestive System (First Food "Receptors")

Differentially expressed genes in the stomach and intestine are key elements, since these two tissues represent the sites of most of the digestive processes and are where the nutrients first become available in their simplest forms (which interact with the endocrine system and different receptors). Thus, the stomach and intestine represent the starting point of the signals controlling energy balance (including food intake). Importantly, variations in gene expression within the digestive system may reflect changes in the digestive process that could impact the availability, the absorption rate, and the biochemical and endocrine effects of dietary nutrients. Since HF diet-induced transcriptomes would require more attention than low-fat (LF) diet-induced genes, it is of great importance to identify and precisely distinguish between HF- and LF-specific genes. Therefore, the particularity of the selected studies we report first herein is that the fasting status was the reference (control) used to study both HF- and LF-specific genes. In fact, numerous previous studies that investigated HF-specific changes used the LF condition as a reference and, therefore, were not able to characterize LF-specific genes nor to distinguish HF-specific from LF-specific transcriptomes. We first report a transcriptomic study that identified peripheral signals of appetite and satiety in the mouse duodenum by investigating the transcriptomic changes in the duodenal mucosa 30 min, 1 h, and 3 h (to explore the acute impact rather than chronic gene expression modifications) following HF and LF meal ingestion. This study reveals that meal-induced changes in the expression of transcripts related to energy, protein, and fat intake were greater in the HF groups than in the LF groups. These data correlate with an intestinal mucosal mRNA analysis that demonstrates changes in the expression of genes related to anabolic and catabolic lipid metabolism pathways, and with a recent paper showing that the expression of genes related to the uptake and transport of lipids and cholesterol, as well as glucose storage, is upregulated in the duodenum. These changes represent patterns specific to the HF diet compared with the LF diet. The digestive mucosa is the first tissue that interacts with nutrients during the first digestive processes and has the ability to produce signal molecules that can act as hormones within the gut-brain axis. Therefore, the key concept behind identifying digestive mucosal diet-induced genes is to eventually identify new signals and responses to nutrient ingestion controlling food intake and energy expenditure. As an example of a potential signal molecule, trefoil factor 2 (Tff2) has been identified as a newly found HF-specific gene whose deficiency in mice leads to protection from HF diet-induced obesity. Among the hundreds of genes that are modulated after HF or LF meal ingestion, we put a spotlight on Tff2 and its pathway as a potentially targetable pathway for obesity molecular therapies. Indeed, this gene is upregulated by the HF (and not the LF) diet, which suggests it is a specific acute HF-induced signal that may impact food intake regulation.
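To make the study-design point concrete, the short sketch below shows the selection logic in Python. It is illustrative only: the data layout, the column names, and the simple fold-change threshold are hypothetical, and a real analysis would use replicate-aware differential expression statistics rather than plain cuts.

import pandas as pd

def diet_specific_genes(expr: pd.DataFrame, lfc_cut: float = 1.0):
    # expr: rows = genes; columns = mean log2 expression for the assumed
    # conditions 'fasting', 'hf_3h' and 'lf_3h' (hypothetical names).
    hf = expr["hf_3h"] - expr["fasting"]   # HF meal vs fasting reference
    lf = expr["lf_3h"] - expr["fasting"]   # LF meal vs fasting reference
    hf_specific = expr.index[(hf.abs() >= lfc_cut) & (lf.abs() < lfc_cut)]
    lf_specific = expr.index[(lf.abs() >= lfc_cut) & (hf.abs() < lfc_cut)]
    shared = expr.index[(hf.abs() >= lfc_cut) & (lf.abs() >= lfc_cut)]
    return hf_specific, lf_specific, shared

Using the fasting state as the common reference is what allows the first two gene sets to be separated at all; with an LF-fed control, only the HF-versus-LF contrast would be available.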
At the peripheral level, the HF diet decreases the expression of genes involved in glucose metabolism in porcine perirenal and subcutaneous adipose tissues, which would indicate a switch (as an adaptation) of the metabolism toward less glucose usage in the presence of lipid intake, probably in favor of lipid metabolism. In addition, it has been shown that in mesenteric adipose tissue, only the LF meal upregulated transcripts implicated in lipid biosynthesis, whereas transcripts involved in lipid utilization and glucose production were downregulated by both HF and LF meals 3 h after meal ingestion, also pointing to a metabolic adaptation of lipid metabolism depending on the lipid ratio of the diet.

Adipose Tissue (Energy-Stocking Tissue) and Skeletal Muscle (Energy-Usage Tissue)

The HF diet induces an increase in the expression of genes related to inflammation, whereas it downregulates genes related to lipid metabolism, adipocyte differentiation markers, detoxification processes, and cytoskeletal structural components in mouse adipose tissue. These observations highlight how the metabolic function reacts to the HF diet in terms of adaptation and, at the same time, emphasize health problems associated with obesity, such as inflammation. These results, which further indicate that metabolism is shifted toward the usage of lipids rather than glucose, are in agreement with other studies showing that the HF diet enhances the expression of genes related to lipid catabolism in the skeletal muscle. Such data illustrate how the metabolic cellular system can adapt to the type and quantity of nutrients received through different diets, with the activated metabolic processes chosen depending on such factors. Exploring such "diet-oriented" metabolic pathways might allow the development of pharmacological approaches that mimic these pathways in order to increase the usage of lipid stores by tissues as part of anti-obesity therapies. Importantly, knowing the metabolism-related genes regulated by diet could optimize pharmacotherapies and diet-based therapies by selecting the type and quantity of specific nutrients that could drive a metabolic phenotype suitable for a specific patient. Herein, it is worth emphasizing that in order to design a study correctly, selecting the control group remains critical. Indeed, to study the HF or LF diet, it is important to define the reference, whether it is the fasting status or a fed control. In the case of a fed control, not only the caloric content but also the fat type and its chemical nature are to be taken into account when reaching conclusions.

Brain (Energy Balance-Control Centers)

Besides identifying diet-related peripheral signals, changes induced by the diet at the central level have also been studied. For instance, the study of HF and LF meal ingestion-induced changes in the hypothalamic transcriptome reveals that 3 h after the beginning of meal ingestion, 12 transcripts were regulated by food intake, including two involved in mitochondrial functions. This work also reveals the increased expression of the major urinary protein 1 (Mup1) gene in the hypothalamus of LF-fed mice compared to fasting mice. MUP1 is a protein involved in metabolic profile improvement, including shifting the energy balance toward skeletal muscle with increased mitochondrial function and energy expenditure in diabetic mice. These MUP1 effects on metabolic regulation, including glucose and lipid metabolism, might explain the benefits of the LF diet. Such benefits are not only explained by the limited caloric intake of the LF diet compared to the HF diet but also result from the switch of the metabolic profile toward more fuel usage and energy expenditure.
In addition, we might also suggest that Mup1, with biochemical effects protecting from obesity, is involved in pathways that are blunted during obesity, which would further increase energy storage and decrease energy expenditure. Indeed, in another study, an 8-12 day dietary restriction in LF-diet groups of mice led to a downregulation of Mup1 in adipose tissue, which could be an adaptation to the dietary restriction in order to conserve energy stores and limit energy usage while the organism is under caloric privation. This further highlights the importance of Mup1 in energy balance, in both energy expenditure and energy conservation, and presents its function as a potential molecular target for obesity as well. Furthermore, regarding the hypothalamic (center of energy homeostasis control) transcriptome, a high-fructose diet fed to Wistar rats throughout development led to the remodeling of 966 genes and enhanced both depressive-like and anxiety-like behaviors, which could lead individuals to manifest either an increase or a loss of appetite. In addition, the hypothalamic transcriptome pattern under HF diet conditions (over 2 weeks), explored through the neuropeptides involved in energy balance, explains how ingesting a HF meal contributes to remodeling the expression of neuropeptide Y, agouti-related protein, and proopiomelanocortin over time. This last element is extremely important for understanding the establishment and development of obesity by studying key molecular signals at different steps to reveal the underlying pathways. Importantly, the data generated on genes preferentially expressed in the hypothalamus and pituitary gland improve our understanding of the central control of energy metabolism and of the impact of diet on gene expression.

Potential Applications

The characterization of novel fat-specific genes may contribute to the development of new therapeutic targets for appetite and satiety control. Herein, it is worth mentioning that the existence of two levels of diet-dependent energy metabolism control (peripheral and central) provides wider therapeutic options and further choices depending on the patient's physiological or pathophysiological status. For instance, a patient with obesity suffering from a functional gastrointestinal disease might not respond well to an obesity therapy targeting the peripheral signals and would require targeting of the central pathways. Mapping how the metabolic profiles (governed by selected genes) change according to the type of diet and the time between meal ingestion and gene expression analysis (and, eventually, the time at which the meal is ingested) would allow the identification of selected signals that are specific and/or time dependent (Figure 2). Such data could help improve precise, personalized therapies for individuals. Additional studies have examined the interaction between diet and gene expression regulation. The HF and high-cholesterol (HFHC) diet, and the HFHC plus high-sucrose diet, have been explored within the context of differentially expressed genes. Unlike the previous examples, blood RNA analysis was performed and revealed differential hyperlipidemia gene expression profiles, even though the levels of fasting plasma lipids and glucose corresponding to these two diets were similar. This indicates that gene expression might not reflect phenotypic changes and that corresponding in vivo metabolic and biochemical exploration is required to understand gene expression modifications.
In addition to studying the effects of diet itself, it is highly relevant to explore the impacts of drugs that modify the effects and distribution of nutrients in vivo. For example, Salomäki et al. showed that administering metformin (prescribed to regulate blood glucose levels) to pregnant female mice that were on a HF diet resulted in offspring transcriptome changes related to mitochondrial ATP production and adipocyte differentiation, resulting in an improved metabolic phenotype. From a therapeutic viewpoint (pharmacology and nutrition), understanding the pathways stimulated or deactivated depending on the type of diet would allow nutritionists and clinicians to adapt the diet for their patients based on the therapy they are following or on their lifestyle, to avoid possible adverse interactions between diets, therapies, and activated pathways (genes, enzymes, etc.). This would help mitigate therapeutic failure or pharmacotoxicity, for instance when reduced drug clearance (metabolism) leads to a toxic accumulation. The goal herein remains to meet and adapt to the clinical and therapeutic needs. Finally, the main potential application behind focusing on HF diet-induced genes remains the fact that lipid metabolism-related feedback hormones (mainly leptin) do not have an acute effect. In fact, their effects develop after a relatively long period of time compared to carbohydrate-induced hormones (for instance, insulin) that are stimulated immediately following carbohydrate intake. This highlights the importance of elucidating changes that are both acute and specific to HF diet intake in order to identify acute signals of lipid intake, based on which therapies (hormonal or pharmacological) can be developed. In addition, the HF diet changed the expression of genes related to neurogenesis, calcium signaling, and synapses in the brain cortex. Such an ability of the diet to impact neuron-specific gene patterns could explain how diet and the establishment of obesity affect the ability of the brain to control energy balance, and would require comparable studies in the hypothalamic region, the center of metabolic homeostasis control. Combining the study of changes in the intestinal mucosa (the first tissue that comes into contact with food) with those in the brain (the centers that receive peripheral signals and control food intake) would provide the best combination to identify acute HF-specific signals of food intake regulation and, therefore, optimize the therapies based on these axes.
Conclusions, Discussion, and Perspectives

Overall, identifying such differentially expressed genes related to exercise and the high-fat diet, and their related pathways, could suggest potential novel therapeutic targets for obesity treatment once the mechanisms linking those genes to the diverse energy metabolism phenotypes are elucidated. Functional genomics would, therefore, lead to a new generation of therapeutic approaches that, by targeting selected energy balance pathways, would mimic the benefits and outcomes of physical activity, suitable diets, or even hormones. For the diet, due to the properties of lipids (high caloric density, low satiety effect, etc.), we believe that one of the best strategies to develop pharmacotherapies for obesity would be to target HF intake at appetizer time. Therefore, one of the primary strategies is to identify and study the HF diet-induced satiety hormones, usually transcriptionally regulated 30 min to 3 h after a HF meal, and to deliver them at appetizer time in order to control HF intake, obesity, and the related complex diseases and conditions. Herein, it is important to emphasize that adequate diet control is the key solution for obesity (especially if combined with exercise) and that pharmacological options remain complementary in selected cases. Regarding exercise, identifying the pathways of the exercise-induced genes is important for the development of exercise pills (a long-term objective) that could therapeutically mimic the effects of exercise by targeting these "exercise-gene" pathways with pharmacological agents and thus obtain the benefits of exercise without intensive training. This is of great importance for individuals who are not able to exercise because of a physical handicap or diseases like heart failure. Importantly, the data generated by functional genomics, especially if combined with functional proteomics and dynamics-dependent studies of the diverse related pathways, will provide new insight not only into therapeutic options and research applications but also into clinical implications. Such implications will cover exercise and the HF diet, but also other obesity-related factors, such as hormones, which are worth exploring within the functional genomics context.
<reponame>IstvanOri/HTML2MD<filename>html2md/commands/TableBuffer.py class TableBuffer: def __init__(self): self._cells = [] self._rows = [] @property def rows(self): return self._rows def cell_feed(self, content): self._cells.append(content) def row_feed(self): self._rows.append(self._cells) self._cells = [] def clear(self): self._cells = [] self._rows = []
Mayor Rahm Emanuel and Chicago aldermen are offering a compromise to religious leaders who object to paying a water bill after decades of a blanket exemption. A new ordinance on the table would mean free water again for non-profits with net assets of less than $1 million. Those with up to $250 million in assets would receive a discount. Some 4,000 eligible non-profits are currently billed for 60 percent of their water use. If the proposed changes do not go through, all non-profits will be charged for 80 percent of their water use beginning next year.
Measurement of the quenching factor of Na recoils in NaI(Tl)

Measurements of the quenching factor for sodium recoils in a 5 cm diameter NaI(Tl) crystal at room temperature have been made at a dedicated neutron facility at the University of Sheffield. The crystal was exposed to 2.45 MeV mono-energetic neutrons generated by a Sodern GENIE 16 neutron generator, yielding nuclear recoils of energies between 10 and 100 keVnr. A cylindrical BC501A detector was used to tag neutrons that scatter off sodium nuclei in the crystal. Cuts on pulse shape and time of flight were performed on pulses recorded by an Acqiris DC265 digitiser with a 2 ns sampling time. Measured quenching factors of Na nuclei range from 19% to 26%, in good agreement with other experiments, and a value of 25.2 ± 6.4% has been determined for 10 keV sodium recoils. From pulse shape analysis, the mean times of pulses from electron and nuclear recoils have been compared down to 2 keVee. The experimental results are compared to those predicted by Lindhard theory, simulated by the SRIM Monte Carlo code, and a preliminary curve calculated by Prof. Akira Hitachi.

Introduction

Astronomical observations, such as galactic rotation curves and gravitational lensing, combined with measurements of the temperature fluctuations in the Cosmic Microwave Background and the abundances of light nuclei, point to the striking conclusion that the majority of matter in the Universe does not consist of the stars, planets and gas that are visible in the images from telescopes. A possible solution to this is the presence of a more elusive particle population of 'Dark Matter' that contributes most of the mass of galaxies. Earth-based detectors for dark matter particles passing through the Earth typically utilise large masses of ultra-radiopure target materials, in what is referred to as the direct method. Of the many possible candidates, the Weakly Interacting Massive Particle (WIMP) has the most direct search experiments dedicated to its discovery. Direct searches for WIMPs detect the elastic recoil of an incident WIMP off a target nucleus. Such an interaction deposits a recoil energy E_R in the detector. A variety of approaches to the direct detection of dark matter are adopted by various international collaborations; a recent comprehensive review is available in the literature. Inorganic crystal scintillators are popular choices as target materials for direct dark matter search experiments. The high light yield and the pulse shape differences between nuclear and electron recoils explain why thallium-activated sodium iodide (NaI(Tl)) crystals are the oldest scintillators used in such experiments. They still remain among the best detectors for determining spin-dependent WIMP-nucleon limits, and the ANAIS, DAMA/NaI and ELEGANT-V direct search experiments utilise them. The DAMA/NaI experiment is the only one that has claimed to witness the annual modulation of a WIMP signal and, until recently, NAIAD held the best spin-dependent limit on WIMP-proton interactions. DAMA/LIBRA is a next-generation NaI(Tl)-based detector currently taking data at Gran Sasso. Hence, NaI(Tl) remains an important detector material in non-baryonic dark matter searches. Energy scale calibration is performed by exposing the detector to radiation from a gamma-ray emitting radioisotope. Unlike neutrons and WIMPs, detectable energy from gamma-rays is a result of collisions with target electrons rather than nuclei.
The energy deposited by nuclear recoils is less than that for electron recoils of the same E_R, which is known as ionisation quenching. In other words, E_vis = Q E_R, where E_vis is the visible energy and Q is the measurable quantity showing the degree of quenching for nuclear recoils with respect to electron interactions, also known as the quenching factor. When calculating the WIMP-nucleon differential cross-section to derive a limit, this effect can be corrected for by multiplying the detected energy by the reciprocal of the quenching factor. It is necessary to determine the quenching factor for each scintillating dark matter target independently. Additionally, the scintillation efficiency changes depending on the recoil energy, and, combined with form factor corrections to the WIMP-nucleon differential cross-section that favour low energy recoils, it is important to conduct measurements at energies relevant to dark matter searches (below 50 keV). Quenching factors of Na recoils in NaI(Tl) have previously been measured down to a minimum recoil energy of 15 keV. The experiment described here has probed the quenching factor to a lower recoil energy of 10 keV, and has achieved the highest accuracy above 20 keV. In the energy range 10 to 100 keV, it provides the most detailed measurement of the quenching factor to date.

Theoretical overview

After a nuclear interaction, a recoiling nucleus will lose energy as it moves through a target material, through collisions with electrons (hereafter called electronic energy loss) and with other nuclei. As most detectors, including scintillators, are sensitive to electronic energy loss only, the quenching factor can be calculated through an understanding of these mechanisms. In other words, scintillation light can be understood as the result of the electronic energy loss mechanisms, while non-radiative transfers, such as heat, are due to collisions with other nuclei. The Lindhard theory attempts to quantify these interactions from first principles, and the points relevant to the theoretical determination of the quenching factor are briefly outlined here. The energy loss mechanisms through the electronic and nuclear channels can be understood as the electronic and nuclear stopping powers, respectively. These can be defined by rescaling the range R and energy E_R of a recoiling nucleus to the respective non-dimensional variables ρ and ε. In such a way, the nuclear energy loss (dε/dρ)_n can be defined as a universal function f(ε) that can be calculated numerically. When the penetrating particle and the atoms of the medium are the same, ε becomes

ε = 11.5 Z^(-7/3) E_R,    (2.1)

where Z is the atomic number of the target nuclei and E_R is the deposited energy in keV. The electronic energy loss is defined by (dε/dρ)_e = k √ε. If the penetrating particle is identical to the atoms of the medium, the constant k is given by

k = 0.133 Z^(2/3) A^(-1/2),    (2.2)

where A is the mass number of the target nuclei and the electronic-stopping factor ξ_e ≈ Z^(1/6) has been folded in. Assuming that the electronic and nuclear collisions are uncorrelated, the total energy given to electrons and that given to atoms can be expressed as the two separate quantities η̄ and ν̄, respectively. The non-dimensional variable ε can now be written in terms of these:

ε = η̄ + ν̄.    (2.3)

For large ε, the mean energy given to atoms of the medium is inversely proportional to ε. However, this does not hold when ε < 1, in which case ν̄ ≈ ε. A single formula that combines these results is

ν̄ = ε / (1 + k g(ε)),    (2.4)

where the function g(ε) is well fitted by

g(ε) = 3 ε^0.15 + 0.7 ε^0.6 + ε.    (2.5)

From Eq. (2.3), the mean energy given to electrons in terms of ε can be written as η̄ = ε − ν̄.
Therefore, an expression for the quenching factor can be obtained, using its definition as given previously, by dividing η̄ by ε through a combination of Eq. (2.3) and Eq. (2.4):

q_n = η̄ / ε = k g(ε) / (1 + k g(ε)).    (2.6)

By substituting Eq. (2.1), Eq. (2.2) and Eq. (2.5) into Eq. (2.6), the quenching factor can be expressed as a function of nuclear recoil energy. The theoretical dependence of the quenching factor for sodium recoils in ²³Na is shown in Figure 1. As ²³Na and ¹²⁷I have significantly different mass and atomic numbers, the quenching factor of sodium recoils in sodium and that of sodium recoils in iodine are not similar. In the case of sodium recoils in iodine, the evaluation of the quenching factor is more complicated, and Eq. (2.6) can no longer be used; Lindhard theory can only approximate the quenching factor in such cases at very low energies. A key requirement of the Lindhard theory is that electronic and nuclear collisions can be separated. However, the repulsion between two interacting nuclei makes part of the parameter range unavailable for transferring energy to electrons. As a result, the electronic stopping power is suppressed when ε ≪ 1, leading to the non-proportionality of (dε/dρ)_e with √ε in this energy range. This can be corrected for by replacing ξ_e in Eq. (2.2) with a function of ε, Z₁ and Z₂, where Z₁ and Z₂ are the atomic numbers of the penetrating and target nuclei, respectively. The impact of this correction on light nuclei, such as sodium, is very small, and as such it is not evaluated here. [Figure 1: Theoretical quenching factors; the preliminary curve of the quenching factor of sodium recoils in NaI(Tl) from Hitachi is illustrated by the dashed line, and the result derived from TRIM for Na recoils in NaI(Tl) is shown by the dotted line.] In semiconductors, the measured quenching factor agrees well with that given by Eq. (2.6). For scintillators, however, some degree of quenching also affects the electronic energy loss by ions (energy loss due to excitation and ionisation) at high Linear Energy Transfer (LET). The absolute quenching factor for nuclear recoils q_n given by Eq. (2.6) is not the required correction factor to the differential WIMP-nucleon event rate. The quenching factor of nuclear recoils relative to that of electron interactions, Q, can be approximated by

Q = q_n q_e / S,    (2.7)

where q_e is the electronic quenching factor (the quenching factor for the electronic energy loss of ions) and S is the scintillation efficiency for electron recoils. If the quenching factor for sodium recoils in Na from Eq. (2.6) is defined as q_n in Eq. (2.7), then an overestimate of the quenching factor of sodium recoils in NaI(Tl) will result. The response of NaI(Tl) to photons is known to be non-linear with energy. Therefore, the choice of gamma source for detector calibration plays some role in the final quenching factor, as a linear energy distribution is assumed in dark matter experiments. Using the published response curve, S is equal to 0.9 for 122 keV gamma-rays. The preliminary theoretical curve of the quenching factor of Na recoils in NaI(Tl) from Hitachi is shown in Figure 1.
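For illustration, Eqs. (2.1)-(2.6) as reconstructed above can be evaluated numerically with a few lines of Python. This is a sketch only; it assumes the standard Lindhard parameterisation quoted here, with ξ_e folded into the constant of Eq. (2.2).

import numpy as np

def lindhard_qn(E_R_keV, Z=11, A=23):
    # q_n for a nucleus recoiling in a medium of the same species,
    # Eqs. (2.1)-(2.6); Z = 11, A = 23 correspond to sodium.
    eps = 11.5 * E_R_keV * Z ** (-7.0 / 3.0)        # Eq. (2.1)
    k = 0.133 * Z ** (2.0 / 3.0) * A ** (-0.5)      # Eq. (2.2)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps  # Eq. (2.5)
    return k * g / (1.0 + k * g)                    # Eq. (2.6)

for e in (10.0, 50.0, 100.0):
    print(f"E_R = {e:5.1f} keVnr -> q_n = {lindhard_qn(e):.2f}")

As discussed around Eq. (2.7), q_n obtained this way overestimates Q for NaI(Tl), since electronic quenching (q_e) and the electron-recoil scintillation efficiency (S) are not yet included.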
The Stopping and Range of Ions in Matter (SRIM) package simulates the process of ions impinging onto various target materials. The program calculates the stopping power and range of ions in matter using a quantum mechanical treatment of ion-atom collisions. These parameters are used by the TRansport of Ions in Matter (TRIM) program to calculate the final distribution of the ions. All the energy loss mechanisms associated with ion-atom collisions, such as target damage, sputtering, ionisation and phonon production, are also evaluated. The quenching factors for various materials have been simulated with these programs, and that for sodium recoils in NaI(Tl) is determined here using the same technique. A NaI(Tl) crystal of density 3.67 g/cm³ is defined as the solid target. Sodium ions are given an initial energy (in other words, a recoil energy) and propagated through the crystal at a normal incidence angle. Recoil energies are varied between 1 and 100 keV, in 1 keV steps, and 4,000 ions are simulated at each energy. The percentage energy loss from the original ions and from the recoiling atoms induced by ion-atom collisions is calculated by TRIM. This is then subdivided into energy losses from ionisation, from vacancies (unfilled holes left behind after a recoil atom moves from its original site), and from phonon emission. Light emission is a result of ionisation, and hence the sum of the percentage energy loss due to ionisation from the original ion and the recoiling atoms is the quenching factor. The mean of these contributions over 4,000 events is evaluated by TRIM, and the results are shown in Figure 1. Unlike the prediction of the quenching factor from Hitachi, the result from TRIM follows the shape of the Lindhard curve. However, although they display similar values at low energies, the quenching factor from Lindhard theory rises faster with increasing energy. At 10 keVnr, all three results are in good agreement, and it is after this point that they start to diverge. The most comprehensive treatment of the evaluation of the quenching factor is that given by Eq. (2.7). Therefore, it is reasonable to assume that the measurements of Q will more closely match the shape of this curve, although their values should lie below it. The quenching factor can be measured by inducing nuclear recoils of a known energy in the target material. In this way, the ratio of the measured energy through electronic energy losses to the known recoil energy can be determined. In the case of NaI(Tl), iodine recoils will also occur. However, it is clear from Eq. (2.6) that the degree of quenching for heavy nuclei such as iodine is significantly greater than that for lighter nuclei such as sodium. This means that a low energy threshold is required to witness iodine recoils. As the purpose of this paper is the measurement of Na recoils in NaI(Tl), such a threshold is not attained, and hence iodine recoils will not be visible.

Experimental apparatus

Two Sodern GENIE 16 neutron generators are housed within a dedicated neutron laboratory at the University of Sheffield. The deuterium-deuterium and deuterium-tritium accelerators produce an isotropic distribution of 2.45 MeV and 14.0 MeV mono-energetic neutrons, respectively. All electronics and data acquisition equipment are located in the control room, which is isolated from the experimental hall by 3 ft of concrete shielding. During operation, the beam is placed into a concrete castle to provide additional shielding. A schematic view of the detector arrangement is shown in Figure 2. Only the deuterium-deuterium neutron beam is used for these measurements. Neutrons of energy 2.45 MeV pass through a hole in the concrete castle. They travel 50 cm before reaching the centre of the NaI(Tl) crystal.
The energy deposited, E_R, as a function of the scattering angle is given by

E_R = 2 E_n (m_A m_n / (m_A + m_n)²) (1 − cos θ),    (3.1)

where m_A is the mass of the target nucleus, E_n is the energy of the incident neutrons, m_n is the mass of the neutron and θ is the scattering angle. Scattered neutrons are detected by a secondary BC501A liquid scintillator detector, which is placed at the angle corresponding to the recoil energy of interest, E_R. NaI(Tl) crystals are hygroscopic and need to be encased within an air-tight container. The 5 cm diameter, 5.4 cm long, cylindrical NaI(Tl) crystal used here is encased within a hollow aluminium cylinder of wall thickness 2.5 mm. A glass window, of thickness 2.5 mm and diameter 5 cm, is optically coupled to the crystal with silicon oil to improve light collection. The reflection of light off the inner walls is increased by wrapping 1 mm thick PTFE tape around the crystal. A 3-inch ETL 9265KB photomultiplier tube (PMT) is optically coupled to the glass window. As the energy of the calibration source is significantly higher than the nuclear recoil energies in this experiment, a tapered voltage divider network was chosen and constructed for the PMT. Such a system reduces space charge effects, which lead to a non-linear response where high-energy pulses appear smaller than they actually are. The secondary detector consists of a cylindrical aluminium vessel of diameter 7.8 cm and height 8.0 cm filled with BC501A liquid scintillator. The active volume is viewed by an ETL 9288B PMT at an operating voltage of -1300 V.
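Given Eq. (3.1), the placement angle of the BC501A detector for a desired recoil energy follows from simple kinematics. The sketch below inverts the formula; the mass values are approximate numbers inserted by us for illustration.

import numpy as np

M_N = 939.57     # neutron mass [MeV/c^2]
M_NA = 21414.8   # approximate 23Na nuclear mass [MeV/c^2]

def angle_for_recoil(E_R_keV, E_n_keV=2450.0):
    # Invert Eq. (3.1) for the neutron scattering angle in degrees.
    cos_theta = 1.0 - E_R_keV * (M_N + M_NA) ** 2 / (2.0 * E_n_keV * M_N * M_NA)
    return np.degrees(np.arccos(cos_theta))

print(f"10 keVnr  -> theta ~ {angle_for_recoil(10.0):.0f} deg")
print(f"100 keVnr -> theta ~ {angle_for_recoil(100.0):.0f} deg")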
Only events with a single interaction in the crystal contribute to the recoil energy E_R at a given scattering angle in Eq. (3.1). Multiple interactions lead to neutrons depositing a range of possible energies in the crystal before being detected by the secondary BC501A detector, thus contributing to the background. Due to the cylindrical geometry of the crystal, a Monte Carlo simulation is required to obtain an accurate probability for multiple interactions. The geometry of the experiment, illustrated in Figure 2, is replicated within the GEANT4 framework, where 2.45 MeV neutrons are generated at the face of the neutron source and fired towards the NaI(Tl) crystal. A total of 10⁸ events are generated at the scattering angles associated with 10 and 100 keVnr sodium recoil energies, as given by Eq. (3.1). Only events that deposit energy in both the crystal and the BC501A detector are recorded. Approximately 0.13% of 10 keVnr and 0.05% of 100 keVnr events satisfy this condition; the reason for this asymmetry is the non-isotropic cross-section for neutron scattering at higher recoil energies and for heavier nuclei. Simulation results at 10 keVnr nuclear recoil energy are shown in Figure 3. [Figure 3: The deposited energy spectrum for events resulting from two or more nuclear recoils, represented by the shaded area in (b), is featureless, implying that their contribution to the background does not interfere with the signal peak position at approximately 10 keVnr. The peak at approximately 2 keVnr in (b) is from iodine recoils, which are not visible in real data at this scattering angle due to the higher energy threshold. Features on either side of the recoil peaks in (b) are due to neutrons scattering off nuclei within the wax shielding before entering the secondary detector, and to those that escape through gaps between the BC501A cylinder and the wax walls; the contribution to the background from these interactions at other deposited energies is featureless.] Although a significant proportion of events undergo multiple scattering in the crystal, the deposited energy from these interactions, represented by the shaded histogram, is featureless in comparison with the total recoil energy spectrum. Therefore, there is no preferential energy deposition, and the number of multiple interactions should make no difference to the final result. [Figure 4: Hardware trigger electronics for the quenching factor experiment. Analogue photomultiplier signals from the BC501A detector and the NaI(Tl) crystal are split with a 50 Ω power divider, and sent to a discriminator and an input channel on the DAQ. The discriminator is set at a threshold of 5 mV, and a 100 ns wide NIM pulse is sent to a 2-fold coincidence unit. If the signals are coincident, a NIM pulse provides the external trigger to the DAQ.] The inclusion of nuclear recoils off iodine nuclei also results in a low energy peak at approximately 2 keVnr from single-scattered neutrons, as shown in Figure 3(b). From Eq. (3.1), the change in energy with scattering angle is far more pronounced for lighter nuclei, and as iodine has a significantly higher mass number than sodium, such a result is expected. For the reasons outlined in Section 2, such a peak would not be visible in the measured data, and will not interfere with the sodium peak. The configuration of electronics for the hardware trigger is shown in Figure 4. Analogue photomultiplier signals from the NaI(Tl) crystal and the BC501A detector are split with a 50 Ω power divider. The signal from each PMT is then sent simultaneously to a discriminator, set at a threshold of 5 mV, and to a channel of the data acquisition system (DAQ). The hardware trigger is two signals coincident within a 100 ns time window. The analogue pulses are converted to digitised waveforms by an 8-bit, 2-channel Acqiris digitiser with a 500 MHz sampling rate. Data acquisition software running on a Linux computer, similar to that used by the ZEPLIN-II experiment, reads out the digitised waveforms and writes them to disk. An analysis program reads the binary data output of the digitiser. The program goes through each event, extracting the amplitude at each 2 ns sampling point and placing the values into an array. To assign a baseline, the mean and standard deviation of the first 200 ns of a waveform are determined. This process is then repeated over the same time window, excluding bins with amplitudes greater than three standard deviations from the mean. The baseline is calculated in this manner on an event-by-event basis, and this procedure results in an improvement to its estimation. An event viewer is implemented within the ROOT framework. Waveform parameters are extracted and stored in a ROOT tree for later analysis. The total pulse area, which is proportional to the deposited energy, is the sum of the digitised bin contents within a range:

A = Σ_{i=s₁}^{s₂} V_i(t) Δt,    (3.2)

where s₁ is the first and s₂ the second sampling point over which the summation is performed. The value of s₁ is the first point at which the pulse reaches 10% of its maximum amplitude. The amplitude of each bin i is denoted by V_i(t), and with a 500 MHz sampling rate, Δt = 2 ns. The start t₁ and stop t₂ times are defined as t_{1,2} = s_{1,2} Δt.
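The baseline and pulse-area procedure just described translates directly into code. The following sketch is ours, simplified to a single waveform array; it mirrors the 3-sigma baseline pass and Eq. (3.2).

import numpy as np

DT = 2.0  # ns per sample (500 MHz digitiser)

def baseline(wave):
    # First 200 ns = 100 samples; recompute after excluding >3 sigma bins.
    head = wave[:100]
    mu, sd = head.mean(), head.std()
    kept = head[np.abs(head - mu) <= 3.0 * sd]
    return kept.mean()

def pulse_area(wave, t2_ns=2000.0):
    # Eq. (3.2): sum baseline-subtracted samples from s1 (first bin at 10%
    # of the maximum) to the fixed stop time t2, times Delta-t; the 2 us
    # default matches the NaI(Tl) choice discussed below.
    v = wave - baseline(wave)
    s1 = int(np.argmax(v >= 0.1 * v.max()))
    s2 = min(len(v), s1 + int(t2_ns / DT))
    return v[s1:s2].sum() * DT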
Gamma-ray calibration

A constant value for t₂ in Eq. (3.2) needs to be defined to obtain the area under NaI(Tl) pulses. It is difficult to determine t₂ from a single pulse, due to the sparse distribution of photons in the tail region. Instead, a scintillation pulse is built from the sum of 10,000 pulses detected when the crystal is irradiated by 30 keV X-rays from a ¹²⁹I source, as shown in Figure 5. The amplitudes are normalised to reproduce the mean shape of one pulse. Photons from scintillation light continue well beyond 3 μs. However, electronic noise in the tail region makes it difficult to choose a relatively large value for t₂. A compromise of 2 μs after the start of the pulse is used, which is equivalent to about 92% of the total waveform. The crystal is exposed to gamma-rays from a variety of radioactive sources, between 30 keV (X-rays from ¹²⁹I) and 662 keV (¹³⁷Cs gamma-rays). A decrease in photon response is observed at the iodine K-shell absorption edge at 33.2 keV, consistent with other studies. Therefore, determination of the energy scale must be performed in a region where a linear response is observed. Calibration is performed with the 122 keV gamma line from a ⁵⁷Co source to establish an electron equivalent energy scale (labelled keVee, as opposed to keVnr for nuclear recoil energies), as shown in Figure 5. A light yield of 5.1 photoelectrons/keV is found. The procedure is repeated approximately every 3 hours to analyse any drift in the light yield, and if significant degradation is witnessed the crystal is recoupled to the PMT. An attenuation coefficient of around 1.01 cm²/g for 122 keV ⁵⁷Co gamma-rays traversing NaI(Tl) translates to a mean free path length of 2.7 mm. Therefore, most interactions will occur near the surface of the crystal, and hence may be affected by defects. To check for deformities, the light yield at 30 angles around the crystal is checked. All but one point lie within one standard deviation of the mean light yield. Therefore, the use of ⁵⁷Co for calibration is acceptable, as no major surface defects are present. A suitable full scale (range) over which to digitise the signal needs to be chosen. Using the Lindhard curve for sodium recoils in Na from Figure 1, a 50 keV nuclear recoil will be quenched by 48%, resulting in a roughly 25 keV electron equivalent pulse. This is equivalent to a pulse close to that from the 30 keV X-rays from ¹²⁹I. With an amplitude of just over 11 mV, a range of 50 mV is adequate for this experiment. Low level cuts include removing pulses that saturate the digitiser and those that are not in coincidence; the latter are caused when the DAQ triggers on the end of an event.

Event selection by pulse shape discrimination in BC501A

A secondary detector is required to identify neutrons that scatter off the target nuclei at the nuclear recoil energies given by Eq. (3.1). As the main background is from gamma-rays, a detector material with a high discrimination power is a major requirement. Enhanced emission of the slow component and a high hydrogen-to-carbon ratio make the BICRON Corporation BC501A organic liquid scintillator (C₆H₄(CH₃)₂, equivalent to Nuclear Enterprises NE213) well-suited for this purpose. As with the NaI(Tl) pulses described previously, a suitable value for t₂ in Eq. (3.2) needs to be determined. From the typical 600 keVee pulses in Figure 6, a value of 100 ns after the position of the maximum bin is defined as the end of the waveform. The intensity of these pulses, I(t), can be written to good approximation as a function of four exponentials:

I(t) = A [e^(−(t−t₀)/τ₁) − e^(−(t−t₀)/θ_e)] + B [e^(−(t−t₀)/τ₂) − e^(−(t−t₀)/θ_e)],    (4.1)

where θ_e is the RC time constant of the data acquisition electronics, τ₁ and τ₂ are the decay time constants of the fast and slow components, respectively, and A and B are their respective intensities. Due to the loose definition of the pulse start time, an additional parameter for the time reference, t₀, is included.
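As a companion to Eq. (4.1), a small helper for fitting digitised BC501A pulses could look as follows. This is a sketch: the exact arrangement of the RC term in the reconstructed equation above is our assumption, and the parameter values would come from the fit itself.

import numpy as np

def bc501a_pulse(t, A, B, t0, tau1, tau2, theta_e):
    # Four-exponential model of Eq. (4.1): fast (A, tau1) and slow (B, tau2)
    # scintillation components, each combined with the electronics response
    # of RC time constant theta_e; t0 is the time reference.
    dt = np.clip(t - t0, 0.0, None)
    fast = A * (np.exp(-dt / tau1) - np.exp(-dt / theta_e))
    slow = B * (np.exp(-dt / tau2) - np.exp(-dt / theta_e))
    return fast + slow

In practice such a function would be handed to a least-squares fitter, with some of the six free parameters fixed to average values to speed up convergence, as the text notes next.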
The ratio B/A in Eq. (4.1) provides a measure of the discrimination power, making use of the characteristic enhanced emission of the slow component in BC501A. The fitting of each pulse is a time-consuming procedure due to the six free parameters in Eq. (4.1). The time taken for a fit to converge successfully can be improved by restricting parameters, or by deriving average values for some and fixing them. However, if the discriminating factor is the ratio of the intensities of the slow to fast components, the same should hold true for the ratio of the slow component to the total intensity of the pulse, as B ≪ A. Therefore, by integrating over the tail of a pulse and dividing by the total area, neutron and gamma events are separated. The ratio of the partial pulse area in the tail, P_nr, to the total pulse area, A_nr, will be closer to unity when compared with electron recoils of the same energy, for which the B/A ratio is smaller. In other words:

P_nr / A_nr > P_er / A_er.    (4.2)

The discrimination technique defined in Eq. (4.2) is tested by exposing the BC501A detector to 662 keV gamma-rays from a ¹³⁷Cs source and to 2.45 MeV neutrons from the deuterium-deuterium beam. The hardware trigger is identical to that shown in Figure 4, with the exception of the NIM pulse from the discriminator acting as the external trigger to the digitiser. Offline saturation cuts are performed on the recorded data. The results are shown in Figure 7: a clear neutron band is visible in the data from the neutron run, and this band is absent from the gamma-ray data. A large gamma background is apparent in data from the 2.45 MeV deuterium-deuterium beam, emphasising the need for good neutron-gamma discrimination. By changing the partial pulse area integration boundaries, it may be possible to increase the resolution between neutron and gamma events, and hence decrease the energy threshold for discrimination. The start of the tail is varied between 10 and 50 ns after the maximum peak position, in steps of 10 ns. One-dimensional histograms of the partial-to-total pulse area ratio result in two peaks. By fitting Gaussian functions to these peaks, a figure of merit M is used to quantify the neutron-gamma discrimination power:

M = (x̄_n − x̄_γ) / (Γ_n + Γ_γ),    (4.3)

where x̄_n and x̄_γ are the mean positions of the neutron and gamma peaks, respectively, and the full widths at half maximum of the neutron and gamma peaks are given by Γ_n and Γ_γ, respectively. Applying Eq. (4.3), M-factors are calculated as shown in Table 1. A lower limit on the partial pulse area integral of 20 ns after the position of the peak provides the best resolution power. Scintillation light from nuclear recoils is quenched in all materials. BC501A is not an exception, and its quenching factor has been found to be non-linear. An electron equivalent scale is established by calibrating the BC501A detector with the 662 keV gamma line from ¹³⁷Cs. Cutting at total pulse areas greater than 16 nVs, as shown in Figure 8, yields a minimum energy threshold of 280 keVee. Using the published non-linear function for the proton quenching factor, this corresponds to a proton recoil energy of 910 keVnr. Reducing the threshold further does not affect the resulting quenching factor. [Figure 8: Implementation of the pulse shape discrimination cut in the BC501A detector for neutrons from the 2.45 MeV deuterium-deuterium neutron beam. The best separation between gammas and neutrons occurs when the lower limit of the partial pulse area integral is set to 20 ns after the maximum peak position, as shown in (a). The implementation of this cut for the data taken at 10 keVnr is shown in (b), where events that lie within the black box are accepted. A minimum energy threshold of 280 keVee is attained.]
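The two discrimination quantities just defined, the partial-to-total ratio of Eq. (4.2) and the figure of merit of Eq. (4.3), are straightforward to compute from a digitised waveform. The sketch below is ours; the window lengths follow the values quoted above.

import numpy as np

def psd_ratio(wave, dt=2.0, tail_start_ns=20.0, end_ns=100.0):
    # Partial-to-total pulse area ratio: the tail starts 20 ns after the
    # peak (the best-M choice in Table 1); the waveform ends 100 ns after
    # the position of the maximum bin.
    peak = int(np.argmax(wave))
    stop = min(len(wave), peak + int(end_ns / dt))
    tail = wave[peak + int(tail_start_ns / dt):stop].sum()
    total = wave[:stop].sum()
    return tail / total   # closer to unity for nuclear recoils, Eq. (4.2)

def figure_of_merit(x_n, x_g, fwhm_n, fwhm_g):
    # Eq. (4.3): separation of the Gaussian peak positions over the sum
    # of their full widths at half maximum.
    return abs(x_n - x_g) / (fwhm_n + fwhm_g)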
Event selection by time of flight

As the rest mass energy of the incident neutrons is far greater than their kinetic energy of 2.45 MeV, they are non-relativistic. With reference to Figure 2, after interacting with the crystal, the non-relativistic neutrons will take longer to reach the secondary BC501A detector than gamma-rays. This time of flight, t, can be quantified with

t = s √(m_n / (2 E_n)),    (4.4)

where s is the distance travelled by the neutron. As E_R ≪ E_n, the recoil energy is ignored, to good approximation, in Eq. (4.4). For E_n = 2.45 MeV, Eq. (4.4) yields a value of 38 ns for the time of flight. In Figure 9, prior to the cut on BC501A pulse shapes outlined previously, two peaks are visible at approximately 0 and 40 ns, corresponding to gamma-rays and neutrons, respectively. The measured time of flight differs from the expected value due to a time delay in the cables and the possibility for a neutron to interact anywhere along the 8 cm depth of liquid scintillator that it traverses. From Figure 2, s = 80 cm is the distance from the centre of the crystal to the face of the secondary detector. Substituting a value of s = 88 cm into Eq. (4.4) results in an upper limit of 42 ns. This is confirmed by the simulated time of flight distributions for the scattering angles associated with 10 and 100 keVnr energy depositions in Figure 10: a sharp decline is witnessed in the number of events that contribute to the simulated neutron peaks after 42 ns. Due to the large background from gamma-rays, it is difficult to fit a Gaussian function to the measured neutron peak in Figure 9a. However, after cutting on neutron events in the BC501A detector, as shown in Figure 9b, the neutron peak becomes clearly visible.

Event selection by pulse shape discrimination in NaI(Tl)

A variety of pulse shape discrimination techniques can be employed to discriminate low energy nuclear and electron recoils in inorganic crystal scintillators. Discrimination using mean time, neural networks and log likelihood has been investigated in CsI(Tl) crystals; no significant difference in efficiencies between the techniques was observed at energy scales relevant to dark matter searches. The scintillation mechanism of CsI(Tl) is similar to that of NaI(Tl), so there is no reason to believe that the same result would not hold here. Therefore, the mean time is used for nuclear-electron recoil discrimination, as it is the easiest to implement with digitised waveforms. The reduction code calculates the mean time ⟨t⟩ for each event with

⟨t⟩ = Σ_i A_i t_i / Σ_i A_i,    (4.5)

where A_i is the amplitude of the digitised pulse and t_i is the time relative to the start of the pulse at sampling point i. The vast majority of gamma events have been rejected by performing the lower level cuts on PSD in BC501A and on time of flight, as outlined above. Additional improvement is achieved by plotting the mean time distributions of events and removing those which lie more than half a standard deviation from the peak position of the Gaussian fit (see Figure 11a). This is illustrated by the clear peak present in the resultant electron equivalent energy distribution for 10 keVnr Na recoils in Figure 11b.
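Eq. (4.5) is equally compact in code. The sketch below assumes a baseline-subtracted waveform whose first sample coincides with the pulse start.

import numpy as np

def mean_time(wave, dt=2.0):
    # Eq. (4.5): amplitude-weighted mean time of the pulse; in NaI(Tl),
    # nuclear recoils give systematically shorter mean times than
    # electron recoils of the same energy.
    v = np.clip(wave, 0.0, None)   # amplitudes A_i
    t = np.arange(len(v)) * dt     # times t_i [ns] from the pulse start
    return float((v * t).sum() / v.sum())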
To assess the discrimination power of NaI(Tl), mean time distributions from gamma-rays are compared with those from neutrons. The 511 keV gamma-ray from a ²²Na source has a number of properties that make it attractive for such a measurement. Neutrons will interact throughout the bulk of the crystal, and gamma-rays of this energy have a typical penetration depth of 29 mm. Additionally, the interaction cross-section in NaI(Tl) is dominated by Compton scattering at this energy, meaning that a significant number of gamma-rays will deposit a fraction of their energy in the crystal before escaping. Finally, their back-to-back emission is exploited by placing the source between the crystal and the BC501A detector, and operating them in coincidence using the electronics shown in Figure 4. The width of the NIM pulses from the discriminator is reduced to 10 ns, the minimum setting, as coincident gammas will arrive at each detector at the same time. This enables the discriminator threshold to be decreased to 2 mV without noisy events polluting the data. Mean time values for electron recoils are evaluated in 2 keVee wide energy bins, each containing approximately 6,000 events. Values for nuclear recoils come from the data taken with the neutron beam at each scattering angle (see Table 2). The results, shown in Figure 12, indicate that the mean time values for sodium recoils stay roughly constant with energy (only a small increase is seen above 15 keV), whereas those from Compton scatters (electron recoils) increase significantly, in agreement with previous work. Additionally, it becomes difficult to distinguish electron and nuclear recoils at energies below 4 keVee, hampering the sensitivity of NaI(Tl) dark matter detectors.

Results

Distributions similar to that shown in Figure 11b are constructed for different energy bins on the keVnr scale. The areas around the peaks are fitted with Gaussian functions, the peak positions of the fits being associated with the measured (or electron equivalent) energy. The quenching factors for each energy bin are determined as the ratio of the electron equivalent energy to the recoil energy. Values of the measured energies and the resultant quenching factors at each scattering angle are given in Table 2. The quenching factor of sodium recoils in NaI(Tl) varies between 19% and 26% in the range 10 to 100 keVnr, in agreement with previous experimental results, as shown in Figure 13. A scintillation efficiency of 25.2 ± 6.4% has been determined for 10 keVnr Na recoils. Systematic errors in the measurement of the scattering angle start to dominate at energies less than 20 keVnr. Although it may be possible to take a measurement at 5 keVnr, especially as the light yield seems to increase at lower nuclear recoil energies, the magnitude of the systematic error at such a scattering angle would be too large to obtain a sensible result. Therefore, the limiting factor in this experiment is not the light yield, but the error associated with the scattering angle. There are a number of features in Figure 13, including a dip in the quenching factor around a nuclear recoil energy of 40 keVnr, and a subsequent rise at lower energies. This is the first time such a dip has been observed. The measurement performed here is the most comprehensive study of the quenching factor of sodium recoils in NaI(Tl) to date in the low energy regime. It is therefore possible that such a feature could have been hidden from other experiments, as fewer data points were available to witness this pattern.
A similar trend has been seen in liquid xenon at energies below 10 keVnr, indicating that there is some underlying process responsible for these observations. Quenching factors for silicon recoils in Si, argon in Ar, germanium in Ge and xenon in Xe have been derived from SRIM in previous work, and compared with predictions from Lindhard theory and with experimental data where available. Those results indicate that the nuclear stopping powers predicted by Lindhard theory and calculated by SRIM differ by 15% at most, although bigger discrepancies are present for the electronic stopping power. When compared with experimental data, the original Lindhard theory is closest to giving an accurate prediction for these media. [Figure 13: Quenching factor of Na recoils in NaI(Tl). Experimental results from this work (filled black squares), Spooner et al. (open squares), Tovey et al. (open triangles), Gerbier et al. (open circles) and Simon et al. (open diamond) are shown. Additionally, the preliminary theoretical estimation of the quenching factor from Hitachi is represented by the solid black line.] Neither Lindhard theory nor the results from SRIM reproduce the shape of the experimental results for Na recoils in NaI(Tl). Unlike the prediction from Hitachi, which provides a better resemblance to the observed pattern, they do not consider the effect of electronic quenching due to the high LET of ions. However, the appearance of the dip remains unexplained.

Conclusion

Quenching factor measurements have been performed for sodium recoils in a 5 cm diameter, cylindrical NaI(Tl) crystal. The results show an average quenching factor of 22.1% at energies less than 50 keVnr, in agreement with other measurements. Results from simulations confirm that the contribution from multiple scattering events provides a featureless background and can be neglected. The results do not reproduce the shape of the curves predicted by Lindhard theory or by SRIM and TRIM. However, the predicted quenching factor from Hitachi, which takes electronic quenching into account, compares favourably with the experimental results. The presence of a dip in the quenching factor at around 40 keVnr is observed. [Table 2: Quenching factors of Na nuclear recoils relative to those of gamma-rays of the same energy. The average scattering angle is given by θ (column 1), the average recoil energy of the Na nucleus is given by E_R (column 2) and the measured energy is given by E_vis (column 3). The fractional contribution from statistical errors remains constant at ≈ 0.05 over the full energy range. The systematic error is dominated by uncertainties in the determination of the scattering angle. A marked increase in the contribution from the systematic error is seen at low θ, where E_R < 20 keVnr. At higher recoil energies, a reduction in the relative contribution of the systematic error is seen, as it decreases by just under an order of magnitude over the full energy range. Systematic and statistical errors are added in quadrature to obtain the uncertainty on the quenching factor.]
Phonon Rabi-assisted tunneling in diatomic molecules

We study electronic transport in diatomic molecules connected to metallic contacts in the regime where both electron-electron and electron-phonon interactions are important. We find that the competition between these interactions results in unique resonant conditions for interlevel transitions and polaron formation: the Coulomb repulsion requires additional energy when electrons attempt phonon-assisted interlevel jumps between fully or partially occupied levels. We apply the equations of motion approach to calculate the electronic Green's functions. The density of states and conductance through the system are shown to exhibit interesting Rabi-like splitting of Coulomb blockade peaks and strong temperature dependence under the interacting resonant conditions.

A significant current effort in nanoscopic systems is the study of electron transport in natural and quantum-dot molecules. Much of the interest lies in being able to investigate different regimes of competing electron-electron and electron-phonon interactions. It is typically the case, due to the spatial confinement, that electron-electron interactions (EEI) play a more important role than electron-phonon interactions (EPI) in determining electronic transport properties in low-dimensional systems. Different geometries of quantum-dot molecules (constructed with interconnected quantum dots) have been studied in the literature, and the role of phonons on electron transport has been analyzed in these systems. 1 For instance, it is known that phonons are a relatively weak perturbation, responsible for the broadening of Coulomb blockade peaks in the conductance and for the appearance of satellite features in the nonlinear transport regime. 1 More recently, the field of "molecular electronics," 2,3 where electrons and/or holes are injected directly into molecules attached to metal electrodes, has seen intense activity and progress. 4,5 It is interesting to note that EPI become more important in molecular electronics, since local molecule deformations produce significant electronic level shifts, as has been observed in experiments. 6 In fact, vibrational and torsional modes play prominent roles in electron transport, producing sidebands in the voltage-dependent differential conductance 7,8,9 and/or polaronic shifts of the electronic levels. 10 Furthermore, in a molecular system with discrete electronic energy levels, vibrations produce important effects when the energy of the vibrational modes matches the energy difference between electronic levels. 11 As a result, EPI provide relaxation mechanisms (inelastic scattering) that affect the conductance of the system. 12 The simplest model of EPI is perhaps the independent boson model, 13 where localized electrons interact with a phonon system. Phonons introduce a shift of the electronic levels and create a series of phonon replica peaks in the density of states (DOS) of the electronic system. In a double barrier heterostructure interacting with phonons, as electrons have access to an energy continuum in the region outside the barriers, the inelastic scattering strongly affects the resonant tunneling regime through the heterostructure. 14 The effects of EPI on a double-level quantum dot have also been studied recently, 15 although the combined effects of inelastic scattering and EEI were not considered. In order to study a more realistic molecular system it is important to include both interactions, EEI and EPI, simultaneously.
The interplay between the competing interactions is likely to result in unique conditions for phonon emission and absorption, as well as in unexpected polaron behavior. 10 In this work we study the effect that EEI and EPI have on charge transport through a diatomic molecule, envisioned as two atomic sites (or quantum dots) directly coupled to leads, as shown schematically in Fig. 1. We study this system using the equations of motion method, which allows us to obtain the DOS and electronic occupation, as well as the conductance through the system. We exploit the fact that the EPI strength is small compared to the phonon energy, and thus include self-energy terms up to second order in this parameter. Standard considerations, similar to those applied in the Hubbard approximation, 16,17 are used to evaluate the equations. A significant result is the identification of occupation-dependent resonant conditions for phonon absorption and emission in the presence of Coulomb repulsion. More importantly, we find a unique type of Rabi splitting in the DOS, arising from the mixing between a doubly occupied low-energy level (boosted by the Coulomb repulsion) and a higher-energy state. This Rabi splitting is mediated by thermally regulated phonon emission and absorption in the molecule. The effect is shown to dramatically modify the transport properties of the system, since the resulting polaron formation competes with resonant tunneling. The effect is remarkably noticeable even for weak EPI, since the phonon-assisted transport is magnified by the virtual emission and absorption processes in the interacting resonant regime.

Fig. 1. Schematic representation of the model system. The electron-phonon interaction connects the local sites (εβ < εα) via phonons of frequency ω0 with coupling constant λ.

For concreteness we consider a two-level diatomic molecule with local energies εα^0 and εβ^0 (we assume εα^0 > εβ^0), as shown in Fig. 1. Each dot or atomic site is connected independently to two external current leads. The system as described can be mapped to that studied previously (there for the one-electron case) and is designed to model various experimental geometries. The total Hamiltonian is written as H_T = H_mol + H_leads + H_mol-leads. Each lead is modeled as a semi-infinite tight-binding chain, H_leads = Σ_{σ,⟨j,j′⟩} t c†_{jσ} c_{j′σ}, where the site index sum is over nearest neighbors, c†_{jσ} (c_{jσ}) creates (annihilates) a fermion at the j-th site with spin σ, and the left (right) lead is defined for j, j′ ≤ −1 (j, j′ ≥ 1). The Hamiltonian for the molecule is given, in the rotating-wave approximation, 13 by a two-level interacting form (a reconstruction is sketched below), where b† (b) creates (annihilates) a phonon with energy ω0, εi = εi^0 − eVg, and i = α, β. The gate voltage Vg controls the particle number by shifting both localized energies with respect to the Fermi energies of the left and right electrodes. We assume here the same gate on both sites, as is likely the case in molecules. 6 H_mol-leads connects the diatomic molecule to the leads, where we will consider tα, tβ ≪ t. Away from the Kondo regime the effect of the leads is to broaden the energy levels of the dots through the tunnel couplings tα and tβ. 18 To determine the dynamics of the electrons in the molecule, we calculate the local retarded Green's functions, together with the nonlocal functions. The latter are needed since they describe electron propagation due to the EPI, and are associated with the absorption and emission of phonons.
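The display equation for H_mol is not reproduced in the text above. A plausible reconstruction, consistent with the symbols defined there (levels α and β, Coulomb repulsion U, phonon frequency ω0, rotating-wave coupling λ) and therefore an assumption rather than a quotation, reads

H_{\mathrm{mol}} = \sum_{i=\alpha,\beta}\sum_{\sigma} \varepsilon_i\, n_{i\sigma}
 + U \sum_{i=\alpha,\beta} n_{i\uparrow}\, n_{i\downarrow}
 + \omega_0\, b^{\dagger} b
 + \lambda \sum_{\sigma} \left( c^{\dagger}_{\alpha\sigma} c_{\beta\sigma}\, b
 + c^{\dagger}_{\beta\sigma} c_{\alpha\sigma}\, b^{\dagger} \right),

with \varepsilon_i = \varepsilon_i^0 - eV_g and n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}. The rotating-wave EPI term transfers an electron between the two levels while absorbing or emitting a single phonon, which is exactly the interlevel process responsible for the resonances discussed below.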
The corresponding equations of motion, evaluated up to second order in the EPI coupling strength, yield the local Green's functions for i = α, β, together with explicit expressions for the corresponding self-energies. We have considered here the paramagnetic case with n_{i↑} = n_{i↓} ≡ n_i. The broadening due to the leads is Γ_i = π t_i² ρ, where ρ is the DOS in the leads. 1 We take Γ_α = Γ_β = Γ(ε), for simplicity. 19 The nonlocal Green's functions have simple physical interpretations. At low temperature T, ⟨b†b⟩ ≈ 0; the emission function thus vanishes as n → 0, i.e., there is no phonon emission if the level is empty. It is also interesting to note that if the dot is not completely full, n < 1, the process of phonon absorption remains possible (even at small T). In the limit U = 0 the local Green's functions G_ii have two poles, one at ε_i and another at the pole of the self-energy Σ_i. In this case there is a single phonon resonance condition for both levels (i = α, β), achieved when the phonon energy ω0 matches the energy difference between the two localized electron energies, ∆ ≡ ε_α − ε_β, i.e., at ∆ = ω0. Thus, the resonance effectively couples both sites. Such a phonon resonance condition on transport has been recently explored by Tasai and Eto. 15 They find a sharp dip in the conductance as the result of destructive interference between bonding and antibonding states (see the dashed line in Fig. 2). We will see below that electron repulsion greatly affects this behavior. In the limit λ = 0 we recover the Hubbard I approximation. 16 As expected, each local Green's function then has poles at ε_i and ε_i + U. In contrast, when both U and λ are nonzero, we find different resonant conditions for α and β. The Coulomb repulsion requires extra energy for electrons to tunnel into a fully or partially occupied state. This extra energy depends on the occupation fraction of the two sites, which can be controlled by the gate voltage. The self-consistent charge of each site is obtained by integrating the DOS, ρ_ii(ε) = (−1/π) Im G_ii(ε), for different gate voltages. With appropriate parameter values we find that the effects brought about by the EPI and EEI are emphasized under resonant conditions. Hereafter all energy quantities are given in units of the level spacing ∆. Figure 2 shows the total (for both sites) DOS of the molecular system as a color map for different values of gate voltage V_g and energy ε, measured with respect to the lowest level. The temperature is set at k_B T = 0.015, with U = 0.4 and ω0 = 0.6. For low-voltage values, the molecule is empty and the DOS shows two main features at the noninteracting energies ε ≃ ε_β, ε_α, slightly shifted and broadened by the coupling to the phonons and leads. As the gate voltage increases, the lowest level approaches the Fermi level, and as its occupancy grows a unique feature in the DOS develops. This is shown in Fig. 2 at energy ε_β + U ≈ 0.4 for V_g ≳ 1. Notice that at the same time, a weak phonon-related feature (a "phonon replica") appears at energy ε ≃ ε_β + ω0 = 0.6, as can be seen from the expression for the self-energy. For V_g ≳ 1.4, the lowest level is almost fully occupied, n_β ≃ 1, and the resonant absorption/emission condition is achieved when ε_β + U n_β + ω0 ≃ ε_α. The increasing n_β occupation shifts the phonon replica feature, moving it closer to ε_α. The resonant condition results in the effective mixing of the ε_β + U and ε_α levels due to phonon-assisted transitions. The mixing produces a near degeneracy and, as a consequence, an effective phonon Rabi splitting of spectral features appears. This is the origin of the doublet appearing at ε − ε_β ≃ 1, with nearly equal-size peaks, in the range V_g ≈ 1.6-2.4.
Note that the lower level participates in the process as a result of the Coulomb repulsion between electrons. Notice further that the resonant condition disappears once the higher level becomes fully (doubly) occupied (V_g ≳ 2.4 in Fig. 2). The DOS then returns to the standard Hubbard peaks at ε_β + U and ε_α + U, which dominate the spectrum of the molecule (weak phonon-related features near 0 and 0.8 are present for finite T and λ). As we will show below, the Rabi resonance in the presence of EEI also affects the conductance through the system, providing an experimentally accessible signature of the effect. Figure 3 shows the conductance G vs gate voltage for different temperatures. As the gate voltage increases, the conductance exhibits the anticipated Coulomb blockade (CB) peaks. Notice that the first two are associated here with the lower level ε_β, and show nearly full e²/h conductance (limited by the finite temperatures). The dominant effect of higher temperatures is to produce important changes in the dot occupancy due to phonon emission (occupancy changes due to thermal excitation of electrons are negligible). The third CB peak, appearing at V_g − ε_β^0 ≃ 1, is clearly split, indicating the phonon-mediated transitions between the ε_β and ε_α levels. The temperature dependence of this peak is pronounced, as the thermal variation in electron and phonon occupations will not only make it weaker but will also rapidly detune the resonance condition. A higher phonon presence results in stronger EPI and a drop in the conductance, a typical signature of interference between resonant tunneling and polaron formation processes. The splitting of the third CB peak increases linearly with the EPI strength, as expected in Rabi splitting phenomena. Notice that once the ε_α level is nearly full, the effect disappears, and the Coulomb blockade peak at V_g − ε_β^0 ≃ U + ∆ = 1.4 is essentially T independent. The appearance of a shoulder in the second CB peak at higher temperatures is also a consequence of EPI, but one that is inelastic in nature: the inset of Fig. 3 shows the contribution of inelastic processes (proportional to the nonlocal Green's functions) to the conductance. The highest peak, at V_g − ε_β^0 ≃ 0.4, is due to phonon emission processes: electrons thermally excited to level ε_α (above the Fermi level here) fall to level ε_β + U by the emission of phonons (the inset of Fig. 4 illustrates the levels involved). These processes do not enhance the conductance of the resonant level, but rather reduce it, as one sees in the main panel. The suppression is produced by the destructive interference of the two different conducting processes that electrons undergo. The resonant condition of phonon-assisted tunneling through the molecule in the presence of EEI is a function of the interaction parameter U and the occupation of the molecule. In order to explore the dependence of the Rabi splitting on the interaction parameters, we show in Fig. 4 a color map of the conductance as a function of U/∆ and (V_g − ε_β^0)/∆. It is clear that the conductance returns to the non-phonon-assisted resonant single CB peak whenever U/∆ is far below or above the resonant value U ≃ ∆ − ω0 = 0.4. The lowest level is nearly doubly occupied in this V_g regime, i.e., (V_g − ε_β^0)/∆ ≃ 1. At the resonance, U ≃ 0.4, the conductance Rabi-splits into two peaks of equal height, demonstrating the effect of polaron formation on the conductance. We have shown that the competition of EEI and EPI in a diatomic molecule produces unexpected Rabi splitting phenomena in the DOS, with observable effects on the conductance.
This phenomenon involves states produced by the Coulomb repulsion, and it is enhanced at higher temperatures, a direct consequence of the thermal nature of the phonon bath involved. We would like to thank CAPES (Brazil) and NSF-IMC Grant No. 0336431 for support, and C. Büsser for helpful discussions.
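For reference, the occupation-dependent resonance discussed above can be summarized compactly; this restatement is assembled from the parameter values quoted in the text (U = 0.4, ω0 = 0.6, energies in units of ∆) rather than copied from the original equations:

\varepsilon_\beta + U \langle n_\beta \rangle + \omega_0 \;\simeq\; \varepsilon_\alpha
\quad\Longrightarrow\quad
U \simeq \Delta - \omega_0 = 0.4 \quad (\langle n_\beta \rangle \to 1),

with the width of the resulting doublet growing linearly in the EPI coupling λ, as in conventional Rabi splitting.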
// models/listener.go
// Code generated by go-swagger; DO NOT EDIT.

// Copyright (c) 2016, 2017, 2018 <NAME> <<EMAIL>>.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above copyright
//       notice, this list of conditions and the following disclaimer in the
//       documentation and/or other materials provided with the distribution.
//     * Neither the name of the <organization> nor the
//       names of its contributors may be used to endorse or promote products
//       derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
// DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

package models

// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command

import (
	"encoding/json"

	strfmt "github.com/go-openapi/strfmt"

	"github.com/go-openapi/errors"
	"github.com/go-openapi/swag"
	"github.com/go-openapi/validate"
)

// Listener listener
// swagger:model Listener
type Listener struct {

	// id
	ID ULID `json:"id,omitempty"`

	// name
	// Required: true
	Name *string `json:"name"`

	// port
	// Required: true
	Port *int64 `json:"port"`

	// protocol
	// Required: true
	Protocol *string `json:"protocol"`
}

// Validate validates this listener
func (m *Listener) Validate(formats strfmt.Registry) error {
	var res []error

	if err := m.validateID(formats); err != nil {
		// prop
		res = append(res, err)
	}

	if err := m.validateName(formats); err != nil {
		// prop
		res = append(res, err)
	}

	if err := m.validatePort(formats); err != nil {
		// prop
		res = append(res, err)
	}

	if err := m.validateProtocol(formats); err != nil {
		// prop
		res = append(res, err)
	}

	if len(res) > 0 {
		return errors.CompositeValidationError(res...)
	}
	return nil
}

func (m *Listener) validateID(formats strfmt.Registry) error {

	if swag.IsZero(m.ID) { // not required
		return nil
	}

	if err := m.ID.Validate(formats); err != nil {
		if ve, ok := err.(*errors.Validation); ok {
			return ve.ValidateName("id")
		}
		return err
	}

	return nil
}

func (m *Listener) validateName(formats strfmt.Registry) error {

	if err := validate.Required("name", "body", m.Name); err != nil {
		return err
	}

	return nil
}

func (m *Listener) validatePort(formats strfmt.Registry) error {

	if err := validate.Required("port", "body", m.Port); err != nil {
		return err
	}

	return nil
}

var listenerTypeProtocolPropEnum []interface{}

func init() {
	var res []string
	if err := json.Unmarshal([]byte(`["tcp","udp"]`), &res); err != nil {
		panic(err)
	}
	for _, v := range res {
		listenerTypeProtocolPropEnum = append(listenerTypeProtocolPropEnum, v)
	}
}

const (
	// ListenerProtocolTCP captures enum value "tcp"
	ListenerProtocolTCP string = "tcp"
	// ListenerProtocolUDP captures enum value "udp"
	ListenerProtocolUDP string = "udp"
)

// prop value enum
func (m *Listener) validateProtocolEnum(path, location string, value string) error {
	if err := validate.Enum(path, location, value, listenerTypeProtocolPropEnum); err != nil {
		return err
	}
	return nil
}

func (m *Listener) validateProtocol(formats strfmt.Registry) error {

	if err := validate.Required("protocol", "body", m.Protocol); err != nil {
		return err
	}

	// value enum
	if err := m.validateProtocolEnum("protocol", "body", *m.Protocol); err != nil {
		return err
	}

	return nil
}

// MarshalBinary interface implementation
func (m *Listener) MarshalBinary() ([]byte, error) {
	if m == nil {
		return nil, nil
	}
	return swag.WriteJSON(m)
}

// UnmarshalBinary interface implementation
func (m *Listener) UnmarshalBinary(b []byte) error {
	var res Listener
	if err := swag.ReadJSON(b, &res); err != nil {
		return err
	}
	*m = res
	return nil
}
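A minimal consumer of the generated model might look as follows; the module path and payload are hypothetical, but the Validate call against strfmt.Default follows the usual go-openapi pattern:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/go-openapi/strfmt"

	"example.com/yourapp/models" // hypothetical import path for this package
)

func main() {
	payload := []byte(`{"name": "web", "port": 8080, "protocol": "tcp"}`)

	var l models.Listener
	if err := json.Unmarshal(payload, &l); err != nil {
		panic(err)
	}
	// Validate enforces the required name/port/protocol fields and the tcp/udp enum.
	if err := l.Validate(strfmt.Default); err != nil {
		fmt.Println("invalid listener:", err)
		return
	}
	fmt.Printf("listener %s on port %d/%s\n", *l.Name, *l.Port, *l.Protocol)
}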
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */

/*
 * File:   FileLogger.h
 * Author: <NAME>
 *
 * Created on June 30, 2017, 3:11 PM
 */

#ifndef FILELOGGER_H
#define FILELOGGER_H

#include <Common/Logger/AbstractLogger.h>

/**
 * @brief FileLogger is a singleton class that provides logging infrastructure to the file system.
 */
class FileLogger : public AbstractLogger {
    Q_OBJECT
public:
    static AbstractLogger* GetBaseInstance();
    static FileLogger* GetInstance();

public slots:
    /**
     * @brief Log the message with the given details to the proper file. This
     * slot is to be connected to a logging signal.
     *
     * @param type - The string name of the type of message (DEBUG, INFO, WARN, CRITICAL, or FATAL)
     * @param context - Information pertaining to the context in which the message was created.
     * @param msg - The text of the message to be output to the given medium.
     */
    void Log(QString type, const QMessageLogContext &context, const QString &msg);

    /**
     * @brief Configure this logger with settings read from the given configuration manager.
     * @param config - The configuration manager holding the logger settings.
     */
    void Config(ConfigurationManager &config);

    /**
     * @brief Change the directory in which logs are stored.
     * @param log_directory - The string containing the full or relative path to
     * the log directory.
     */
    void SetLogDirectory(QString log_directory) { log_directory_ = log_directory; }

    /**
     * @brief Get the currently set log directory.
     * @return The currently set log directory.
     */
    QString GetLogDirectory() { return log_directory_; }

private:
    /**
     * Hidden constructor for singleton.
     */
    FileLogger() { }

    QString log_directory_;

    static FileLogger* instance_;
};

#endif /* FILELOGGER_H */
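Typical wiring for this class might look like the sketch below; it assumes the Qt types pulled in via AbstractLogger.h and uses only the methods declared above (the setup function itself is hypothetical):

// Hypothetical usage sketch; not from the original source tree.
#include <Common/Logger/FileLogger.h>

void SetupFileLogging() {
    FileLogger* logger = FileLogger::GetInstance();
    logger->SetLogDirectory("logs/"); // relative paths are allowed per the docs

    // Messages can also be routed to the logger directly:
    QMessageLogContext context; // default-constructed context
    logger->Log("INFO", context, "File logging initialised");
}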
In some scenarios, it may be more economical to repair metal components that have suffered cracks and damage rather than to replace them completely. One such repair technique is referred to in the art as narrow groove welding or narrow gap welding. In this type of welding, the welding operation is typically performed in deep and recessed portions of a work piece. Narrow groove welding may also be beneficial when joining work pieces with thick walls. Normally, narrow groove welding is carried out by an arc welding process. In a conventional arc welding process, a continuous length of welding wire is fed into an arc welding torch; the torch passes the welding wire, and a contact tip located at one end of the torch guides the wire to the weld joint. The welding wire acts as a consumable electrode that is fused in an electric arc. The electric arc is created between the welding wire and the base material and melts the metals at the weld joint. There have been attempts to make narrow groove welds in the past; however, almost all such attempts have faced severe challenges. One such challenge is shorting of the welding wire when it comes into contact with the sidewall of the narrow groove. To avoid this problem, the welding wire has to be fed extremely straight into the contact tip so that it does not touch the sidewalls, which is difficult to achieve in practice, especially with fine welding wires. Another challenge is that the exposed length of the welding wire (commonly referred to as stick out) can be rather long, e.g., exceeding three inches, causing the welding wire to bend. A welding wire that is bent can be very cumbersome and difficult to control during a welding operation. Furthermore, a long stick out increases the tendency for spurious arcing to occur between the welding wire being fed down into the narrow groove and the side walls of the groove. To make matters worse, conventional contact tips are short in length and made of a good electrical conductor, typically copper. This creates a two-fold problem when welding in deep and narrow recessed environments. The first problem is that the inadequate length of the contact tip does not allow it to reach into the narrow and recessed portion of a weld joint of a work piece; this problem becomes much more severe when the weld joint is narrower than the contact tip. The second problem is that, because the contact tip is constructed from a material that is a good conductor of electricity, any contact with the surrounding wall of the recessed portion creates a short in the system. For these reasons, there exists a need for an assembly that allows for welding in deep and narrow recessed gaps within a work piece.
package data;

import java.io.FileNotFoundException;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

import entities.Relation;

/**
 * Abstract class for a synthetic data generator.
 * Produces a list of database relations.
 * The generated database can be accessed by the getter method {@link #get_database}
 * or written to an output stream by {@link #print_database}.
 * For specific queries and patterns,
 * subclasses need to implement {@link #populate_database} appropriately.
 * @author <NAME>
 */
public abstract class Database_Query_Generator {

    /** An output stream where the database can be written to. */
    protected PrintStream out;

    /** The sizes of the relations of the database. */
    protected List<Integer> n_list;

    /** The number of relations in the database. */
    protected int l;

    /** The generated database as a list of relations. */
    protected List<Relation> database;

    /**
     * Constructor for when all the relations have the same size n.
     */
    public Database_Query_Generator(int n, int l) {
        this.l = l;
        this.n_list = new ArrayList<Integer>();
        for (int i = 0; i < l; i++)
            this.n_list.add(n);
        this.out = System.out;
    }

    /**
     * Constructor that allows for a different size per relation.
     */
    public Database_Query_Generator(List<Integer> n_list, int l) {
        this.l = l;
        this.n_list = n_list;
        this.out = System.out;
    }

    /**
     * Set the output file that will be created. If no file is specified, the default
     * output (stdout) will be used.
     * @param fileName The path of the output file.
     */
    public void setOutputFile(String fileName) {
        try {
            this.out = new PrintStream(fileName);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    /**
     * Populates the relations of the database according to a subclass logic.
     */
    protected abstract void populate_database();

    /**
     * Creates the database.
     */
    public void create() {
        database = new ArrayList<Relation>();
        populate_database();
    }

    /**
     * Prints the database to the specified output stream.
     */
    public void print_database() {
        for (int i = 0; i < l; i++)
            printRelation(database.get(i));
        if (out != System.out)
            this.out.close();
    }

    /**
     * Prints a relation to the specified output stream.
     */
    private void printRelation(Relation r) {
        this.out.print(r.toString());
    }

    /**
     * Returns the database as a list of relations.
     */
    public List<Relation> get_database() {
        return this.database;
    }

    /**
     * Returns a random number in a specific range (uniform distribution).
     * Used whenever tuple costs are generated uniformly at random.
     * Hardcoded maximum cost = 10,000.
     */
    protected double get_uniform_tuple_weight() {
        return ThreadLocalRandom.current().nextDouble(0.0, 10000.0);
    }

    /**
     * Returns the weight of the tuple according to the weight distribution specified.
     * @param weight_distr Specifies the distribution of input weights.
     * @param tup_no The index of the tuple (starting from 1).
     * @param relation_no The index of the relation (starting from 1).
     */
    protected double get_tuple_weight(String weight_distr, int tup_no, int relation_no) {
        Double res = null;
        if (weight_distr.equals("uniform")) {
            // Uniform random weight; hardcoded maximum cost = 10,000
            res = ThreadLocalRandom.current().nextDouble(0.0, 10000.0);
        } else if (weight_distr.equals("lex")) {
            int max_n = Collections.max(n_list);
            res = tup_no * Math.pow(max_n, 2 * (l - relation_no));
        } else if (weight_distr.equals("revlex")) {
            int max_n = Collections.max(n_list);
            res = tup_no * Math.pow(max_n, 2 * (relation_no - 1));
        } else {
            System.err.println("Unknown weight distribution");
            System.exit(1);
        }
        return res;
    }

    /**
     * Utility method for help messages.
     * @return String
     */
    protected static String getName() {
        String className = Thread.currentThread().getStackTrace()[2].getClassName();
        return className;
    }
}
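To use the generator, a subclass supplies populate_database. The sketch below is hypothetical (no such subclass appears in this file), and the relation-building step is a placeholder because the entities.Relation API is not shown here:

package data;

import entities.Relation;

// Hypothetical example subclass; buildRelation is a placeholder.
public class ExampleGenerator extends Database_Query_Generator {

    public ExampleGenerator(int n, int l) {
        super(n, l);
    }

    @Override
    protected void populate_database() {
        for (int i = 1; i <= l; i++) {
            database.add(buildRelation(i, n_list.get(i - 1)));
        }
    }

    private Relation buildRelation(int relationNo, int size) {
        // Placeholder: construct `size` tuples, weighting each with
        // get_tuple_weight("lex", tupleNo, relationNo).
        throw new UnsupportedOperationException("illustrative sketch only");
    }

    public static void main(String[] args) {
        Database_Query_Generator gen = new ExampleGenerator(100, 3);
        gen.setOutputFile("example_db.txt");
        gen.create();
        gen.print_database();
    }
}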
package com.bytelaw.bytesstructures.world.gen.feature;

import net.minecraft.util.math.BlockPos;
import net.minecraft.world.ISeedReader;
import net.minecraft.world.gen.ChunkGenerator;
import net.minecraft.world.gen.feature.Feature;
import net.minecraft.world.gen.feature.structure.StructureManager;

import java.util.Random;

public class LargeRockFeature extends Feature<LargeRockConfig> {

    public LargeRockFeature() {
        super(LargeRockConfig.CODEC);
    }

    // func_230362_a_ is the obfuscated name of the feature placement hook;
    // this implementation is currently a stub that reports success without
    // placing any blocks.
    @Override
    public boolean func_230362_a_(ISeedReader p_230362_1_, StructureManager p_230362_2_, ChunkGenerator p_230362_3_, Random p_230362_4_, BlockPos p_230362_5_, LargeRockConfig p_230362_6_) {
        return true;
    }
}
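A feature class like this is typically exposed to the game through a DeferredRegister; the snippet below is a guess at the wiring (registry name and mod id are assumptions, and the repository may do this differently):

// Hypothetical registration sketch for Forge 1.16.x; names are illustrative.
package com.bytelaw.bytesstructures.world.gen.feature;

import net.minecraft.world.gen.feature.Feature;
import net.minecraftforge.fml.RegistryObject;
import net.minecraftforge.registries.DeferredRegister;
import net.minecraftforge.registries.ForgeRegistries;

public final class ModFeatures {
    public static final DeferredRegister<Feature<?>> FEATURES =
            DeferredRegister.create(ForgeRegistries.FEATURES, "bytesstructures");

    public static final RegistryObject<LargeRockFeature> LARGE_ROCK =
            FEATURES.register("large_rock", LargeRockFeature::new);

    private ModFeatures() {}
}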
Numerical Prediction of Background Buildup of Salinity Due to Desalination Brine Discharges into the Northern Arabian Gulf

Brine discharges from desalination plants into low-flushing water bodies are challenging from the point of view of dilution, because of the possibility of background buildup effects that decrease the overall achievable dilution. To illustrate the background buildup effect, this paper uses the Arabian (Persian) Gulf, a shallow, reverse tidal estuary with only one outlet available for exchange flow. While desalination does not significantly affect the long-term average Gulf-wide salinity, due to the mitigating effect of the Indian Ocean Surface Water inflow, its resulting elevated salinities, as well as elevated concentrations of possible contaminants (such as heavy metals and organophosphates), can affect marine environments on a local and regional scale. To analyze the potential effect of background salinity buildup on dilutions achievable from discharge locations in the northern Gulf, a 3-dimensional hydrodynamic model (Delft3D) was used to simulate brine discharges from a single hypothetical source location along the Kuwaiti shoreline, about 900 km from the Strait of Hormuz. Using nested grids with a horizontal resolution comparable to a local tidal excursion (250 m), far field dilutions of about 28 were computed for this discharge location. With this far field dilution, to achieve a total dilution of 20, the near field dilution (achievable using a submerged diffuser) would need to be increased to approximately 70. Conversely, the background buildup means that a near field dilution of 20 yields a total dilution of only about 12.

Introduction

Marine impacts associated with brine discharge are mainly judged on the brine and contaminant concentrations after initial mixing (dilution). Dilution is generally obtained through judicious choice of outfall parameters, including location, orientation, number of ports, discharge flow rate, etc., using a combination of analytical and physical models. The effectiveness of any discharge design depends on the relative magnitude of the near and far field dilution. In general, near field mixing is driven by turbulent entrainment processes, while the far field is dominated by advection and diffusion processes. The far field dilution is defined as

S_F = (c_o − c_a) / (c_F − c_a),   (1)

where c_o, c_a, and c_F are the pollutant concentrations in the discharge, the ambient (water not influenced by the discharge), and the far field (the region beyond roughly 100 m surrounding the discharge), and the near field dilution is defined as

S_N = (c_o − c_a) / (c_N − c_a),   (2)

where c_N is the concentration in the near field (less than about 100 m from the source). The combined or total local dilution at a given location, defined as

S_T = (c_o − c_a) / (c_T − c_a),   (3)

with c_T the local concentration after both stages of mixing, can be obtained algebraically by combining Equations (1)-(3); the result is approximately the harmonic sum of S_N and S_F:

1/S_T ≈ 1/S_N + 1/S_F.   (4)

In addition, over time, it is useful to calculate a harmonic mean far field dilution, which is equivalent to taking the arithmetic mean of the concentration:

S̄_F = [ (1/N) Σ_{i=1..N} 1/S_{F,i} ]^(−1).   (5)

While recent numerical efforts and computational advances have been able to refine the horizontal resolution to tens of meters (and, therefore, reduce the distance between the near and far field), multiport diffusers with port diameters on the order of 10 cm would require sub-meter resolution, plus additional physics (e.g., entrainment, bubble dynamics or stratification), to properly model the near field mixing processes.
It is therefore more computationally economical to adopt two separate models for the near and far field and to couple them to yield the overall dilution of interest. The primary objective of this paper is to investigate the potential background buildup in a low-flushing water body on a local scale. It begins with a brief description of the oceanographic conditions of the paper's focus area, the Arabian or Persian Gulf (hereafter referred to as the Gulf), a body of water with the potential for increased background buildup, and an analysis of long-term Gulf-wide salinity changes due to desalination. The study considers a single hypothetical desalination brine discharge located in the northern Gulf and presents predicted far field dilutions using a Gulf-wide hydrodynamic model (Delft3D). The paper discusses the importance of grid resolution for resolving the background buildup concentrations, and illustrates the tradeoffs which occur when the current fields are not well resolved. Finally, although this paper does not directly provide a diffuser outfall design, it discusses the impact of the predicted background buildup on the target dilutions that would need to be accomplished when designing a diffuser outfall.

Case Study: Gulf Scale Environmental Impact

The Gulf is a shallow (mean depth of about 35 m) reverse tidal estuary with only one outlet available for exchange flow (the Strait of Hormuz), located roughly 1000 km downcoast from the head of the Gulf. It has a minimum width of about 65 km, a maximum width of about 340 km, a maximum length of about 990 km, a total surface area of 239,000 km2, and a total volume of about 9000 km3 (Figure 1). The Gulf consistently carries about a third of the world's total seawater and brackish water desalination capacity. Desalination has been conducted predominantly via multi-stage flash distillation (MSF) technology since the 1950s, and, due to the heat energy required, this has always been an energy-intensive process. MSF plants were often located near power stations to take advantage of the pre-heated water from the power plants' cooling water stream as feed water for the MSF plant, in order to reduce the overall energy cost of the desalination process. In recent decades, reverse osmosis (RO) has been adopted because of the reduced cost of desalination and its scalability (it may be implemented for anything from individual buildings up to the largest plants, e.g., Ashkelon, Israel, with a capacity of 320,000 m3/day). Brine discharges from reverse osmosis (RO) desalination plants, located mostly along the Arabian coast, can contain excess salinity (up to 35,000 ppm greater than ambient) and contaminants such as heavy metals and organophosphates. Discharges from multistage flash desalination plants additionally have an excess temperature in the range of about 5-15 degrees Celsius warmer than the ambient seawater. Several studies on Gulf-wide circulation patterns exist in the literature. This paper focuses on the drivers of the Gulf-wide circulation that are responsible for observed residual currents at a given location in the northern Gulf. The following section presents the spatial, seasonal and long-term trends in Gulf-wide salinity, which contribute to overall circulation patterns in the Gulf.
Gulf-Wide Circulation

The qualitative description of the circulation pattern in the Arabian Gulf presented here is based on field observations by Reynolds, 1993. The Gulf itself is shallower near the Arabian coast (typically only about 20 m depth), and it deepens into a trough that runs parallel to the Iranian coast in the north. The bathymetry of the Gulf is very shallow in the southern portion, with typical bottom slopes of about 4 m over 10 km. The circulation in the Gulf is dominated by the exchange flows in and out of the Strait of Hormuz (see Figure 2). A lower salinity surface current, known as the Indian Ocean Surface Water (IOSW), flows into the Gulf year-round (T1 in Figure 2), initially flowing northward along the Iranian coast. While a small part of the surface current flows back out along the southern part of the Strait (shown as T2 in Figure 2), the bulk of the flow intrudes into the Gulf and mixes with the existing hypersaline water. The prevailing wind, called the Shamal, is from the northwest and can reach velocities of up to 18 m/s in the winter, compared with less than 10 m/s in the summer. Additionally, the northern Gulf receives freshwater river inflows along the Iranian coast and at the Shatt al-Arab, which contribute to the circulation (with two branches of freshwater flowing southward along the Arabian and Iranian coasts). The intense evaporation of the shallow water in the northern Gulf and along the UAE coast creates a dense brine that spills into the trough to the north and leaves the Strait as a subsurface gravity current (shown as T3 in Figure 2). Throughout the Gulf, the tide and wind induce shear that is responsible for the dispersion of tracers. Residence times calculated for tracer sources within the Gulf are depicted in Figure 3. Tracer sources in shallow regions of the Arabian coast (e.g., the Kuwaiti, Bahraini and UAE coasts) may experience residence times of 2 to 3 years.
Gulf-Wide Salinity

Xue and Eltahir, 2015 provided estimates of the Gulf water balance, expressed as a Gulf-averaged precipitation rate (Table 1).

Table 1. Gulf water balance (based on Xue and Eltahir, 2015). Columns: Flow | Annual flow (expressed as equivalent Gulf-wide precipitation rate, m/yr). The entry for desalination is up to −0.04 m/yr (this may be smaller in magnitude, due to the return of some of the freshwater back into the Gulf after domestic/industrial use).

As seen above, on a basin-wide basis, desalination amounts to an equivalent of about 2% of the evaporative loss of freshwater from the Gulf, and thus is not a major contributor to freshwater loss or increased salinity. The salinity of the Gulf is typically about 38-42 practical salinity units (psu), and it is clear from the water balance above that the high salinity of the Gulf is due to its large evaporative output compared with its river and rain inputs. Salinity values taken at different locations over the Gulf over the period 1955-2012 were obtained from the World Ocean Atlas (statistical mean fields on a 1/4° grid). Figure 4 shows that the interdecadal variability of the salinity in the Gulf is less than the seasonal variability. The lack of variability over the decades could be attributed to the mitigating effect of the fresher inflows from the Indian Ocean via the Hormuz strait, as confirmed by modeling studies on Gulf equilibria conditions by Ibrahim, 2017.
As observed on a smaller scale between two bodies of water of differing density separated by a narrow slot (analogous to the narrow Hormuz strait), the larger the density difference between the two water bodies, the larger the magnitude of the mitigating exchange flow.

Delft3D Model

This paper used a 3-dimensional finite difference hydrodynamic model coupled with a water quality module (Delft3D-FLOW) as a tool to determine the far field dilution of various contaminants, as well as to quantify a background far field concentration that may affect near field outfalls. The basis for this model was the Gulf Community Model (see www.agmcommunity.org), which has been adjusted for use in the current study. A combination of measured bathymetry data, meteorological and tidal forcings, as well as freshwater riverine inflows into the Gulf, were input into the model to simulate circulation patterns in the Gulf. Details of the model are presented below. The basic Arabian Gulf model used a 4 km square grid (lat/lon) plus 10 vertical sigma layers (Figure 5). Our model used a 4-year hydrodynamic spin-up with a time step of 5 minutes, because contaminants discharged at Kuwait Bay may take 3 years to exit the Gulf. External forcings include gridded wind and meteorological data, and four river inputs. Tidal forcings (expressed as a time series of water elevations) were imposed along the external boundary, a transect across the Gulf of Oman at the southeastern edge of the domain (shown in Figure 5). A bottom roughness (Manning's coefficient n = 0.03) was used throughout the Gulf.

Current Speed Calibration

As the study focuses on the northern Arabian Gulf, close to Kuwaiti waters, the modeled velocity at one grid cell location (Umm al Maradim Island) was compared with Acoustic Doppler Current Profiler (ADCP) measurements provided by the Kuwait Institute for Scientific Research (KISR) during the summer of 2011, for a location about 25 km offshore and about 90 km south of Kuwait City (28° 40.153' N, 48° 38.760' E). These data provided a sense of the tidal conditions present near the southern Kuwaiti shore, as well as data for model calibration. Figure 6 shows a good comparison between the measured current speeds (eastward and northward) and the Delft3D modeled speeds. The observed current is mainly tidal in the southeast and northwest directions, consistent with the shore-parallel direction. There is also a mean residual current of 4 cm/s in the south-southeast direction (bearing about 170 degrees). As shown in Figure 6, there is a slight mismatch in the orientation of the currents, which could be a result of the current meter's location near an island (of dimension 800 by 300 m), whose bathymetry may not be resolved from the available depth data and model grid resolution (250 m). While monthly data were available for some dissolved chemicals (KEPA, personal communication, 2017) for about 13 onshore and offshore locations, the time resolution (one reading a month) is insufficient for model calibration or validation. Additionally, the monitoring locations may experience contaminants originating from multiple sources along the Kuwaiti coastline, which again cannot be resolved with the space and time resolutions available. The model calibration using tidal data is discussed in the section below.

Figure 6. (a) Northward and eastward velocities measured by the ADCP at 10 m depth (red) and depth averaged predictions by Delft3D for the same times (blue); (b) longshore velocities: Delft3D predictions (blue) versus measured velocities (red).
Tidal Response Calibration

While matching current speeds is an important aspect of calibration, it is also important for the model to match the tidal response at the Gulf scale. Figure 7 shows the locations in the Gulf with tidal gage data available as harmonic components. Figure 8 shows the correlation plots of the M2, K1, O1, and S2 tidal components for amplitude and phase, compared with those modeled by the calibrated Delft3D model. These show that the Gulf-wide Manning's friction coefficient of n = 0.03 has resulted in good agreement with the Gulf-wide tidal amplitudes; the Gulf-wide modeled and observed tidal phases were also mostly in agreement.

Hormuz Strait Calibration

The Delft3D model was run for the entire year of 2010 and was used to compute temperature and salinity along the cross section (Figure 9), as well as the flux out of the Hormuz Strait in different months (Figure 10). Figure 10 compares the model results with those predicted by another numerical model, FVCOM. The behavior shown in Figures 9 and 10 corroborates the qualitative circulation behavior reported in the literature, namely: the increased salinity stratification, coupled with an influx of fresher water into the Gulf during February, and the outflow of saltier water along the surface, as well as in the deeper part of the strait, in October (consistent with the flow pattern shown in Figure 2).

Figure 10. Positive fluxes (red) represent flow out of the Gulf. Top row indicates Ic1, the high-salinity equilibrium attained from high initial salinity conditions (Gulf-wide salinity = 40 g/kg), predicted by the FVCOM model. Middle row indicates Ic2, the low-salinity equilibrium attained from low initial salinity conditions (Gulf-wide salinity = 25 g/kg). Bottom row indicates model predictions from the current Delft3D model.
Nested Models

The calibrated Delft3D model was used to investigate the effect of horizontal grid resolution on the predicted far field dilution near a hypothetical brine discharge located close to the Al-Zour power plant (a source of desalination brine). The location was chosen as it is a coastal region close to the current meter data; similarly, the model time period matched that of the current meter observations (March 2010). Figure 11a shows the inner nested model grids in relation to the outer model. The model's horizontal resolution was increased in the area offshore of the southern coast of Kuwait, based on the description of potential adverse effects to the marine environment shown in Figure 11a. The four nested grid levels used were as follows:

Outer = ~4 km grid (0.05 degrees), entire Gulf;
Mid = ~1 km grid (0.01 degrees), offshore of Kuwait to ~40 km;
Fine = ~500 m grid (0.005 degrees), offshore of Kuwait to ~25 km;
Finest = 250 m grid (0.0025 degrees), offshore of Kuwait to ~10 km.

Using the model results from the nested grids, it is possible to test the mesh sensitivity of the dilution computed for the source shown in Figure 11b. To do this, model-predicted concentration time series were obtained for the following locations (shown in Figure 11b):

1. Locations ~4 km away (horizontal grid resolution of the outer model), shown in red;
2. Locations ~1 km away (resolution of the mid-scale model), shown in green;
3. Locations ~500 m away (resolution of the fine model), shown in blue; and
4. Locations ~250 m away (resolution of the finest model), shown in yellow.

Table 2 shows the harmonic time-averaged dilutions (defined in Equation (5)) computed at the various locations indicated in Figure 11b, using each of the nested model outputs. Harmonic mean dilution values at locations that are sub-grid-scale for a particular nested model were spatially interpolated (shown in the table with grey shading). It can be seen in Table 2 that the predicted dilutions are sensitive to the horizontal resolution. For locations within ~250 m of the source (from the finest resolution model), the model predicted lower far field dilutions (S_F ~ 14-59, with a harmonic average of 28). With a near field dilution of S_N = 20, this harmonic average far field dilution (combined using Equation (4)) would result in a total dilution of about S_T = 12. It is worth noting that there is a balance between the flow field resolution and the predicted dispersion. This was explored using a reverse Gaussian puff model, which simulates the discharge over multiple tidal cycles using puffs of conservative contaminant that grow in size. The puff model used a constant depth equal to the local depth of 3 m at the yellow points indicated in Figure 11b, and assumed a spatially unvarying flow field (which has a similar effect to using a coarser grid for velocities).
The Gaussian puff model's assumption of a spatially uniform velocity under-predicted the dilutions, and only matched the Delft3D predictions when the puff model diffusivity was increased by a factor of about 1.5 compared with the prescribed value. This difference may be attributed to the puff model's use of a simplified flow field, while the Delft3D model exhibits higher dispersion of the contaminant plume by capturing the spatial variation in the velocity field. The far field (background) buildup has significant implications for near field diffuser design. For a discharge excess salinity of ∆S ~ 40 and a target excess salinity of ∆S ~ 2, a total dilution of S_T ~ 20 is required. With zero background buildup, a near field dilution of S_N ~ 20 would suffice. However, the modeling result here indicates that S_F ~ 28, and, in order to achieve a total dilution of S_T ~ 20, the near field dilution would now need to be S_N ~ 70. Other water bodies have far higher flushing potential for contaminants than the Gulf. For example, brine discharges from an outfall from a desalination plant sited in Tuticorin, along the southeastern Indian coast, are expected to achieve a dilution of over 1600 within less than 1 km downstream of the outfall. Desalination discharges from nearby Omani desalination plants, situated on the coastline of the Gulf of Oman, are also diluted by a factor of 35-100 at about 50 m downstream of the discharge, and by over 2000 about 2 km downstream.

Conclusions

Far field dilutions were computed using the Delft3D model at about one tidal excursion from the source, as a measure of the background concentrations experienced by the source at an offshore discharge location in the northern Arabian Gulf. By comparing the results of nested models at different horizontal resolutions, it was determined that the far field dilutions are only accurately captured when the Delft3D horizontal resolution is on the order of the tidal excursion. Also, the computed harmonic mean dilutions for the far field approach the near field dilutions (S_N ~ 20), indicating that far field contaminants do "double back" onto the source, and near field diffusers would have to be designed to produce higher dilutions to satisfy target total dilutions. A higher/lower brine discharge, coupled with smaller/larger tidal excursions and smaller/larger residual velocities, would result in a smaller/larger far field (background) dilution.
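The harmonic-sum relation between near field, far field, and total dilution is easy to check against the numbers quoted above; this short script (plain Python, no external dependencies) reproduces S_T ≈ 12 for S_N = 20 and the required S_N ≈ 70 for a target S_T = 20:

def total_dilution(s_n, s_f):
    """Equation (4): 1/S_T = 1/S_N + 1/S_F."""
    return 1.0 / (1.0 / s_n + 1.0 / s_f)

def required_near_field(s_t, s_f):
    """Near field dilution needed for a target total dilution S_T."""
    return 1.0 / (1.0 / s_t - 1.0 / s_f)

s_f = 28.0  # harmonic mean far field dilution from the nested Delft3D runs
print(total_dilution(20.0, s_f))       # -> 11.67, i.e. about 12
print(required_near_field(20.0, s_f))  # -> 70.0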
t = int(input())
for _ in range(t):
    a, b = (int(i) for i in input().split())
    # Greedily divide the larger value by 8, then 4, then 2 until it
    # reaches the smaller one; count the operations, or report -1 if
    # the two values can never be made equal. E.g. a=96, b=3 -> 2
    # (96 // 8 = 12, 12 // 4 = 3).
    count = 0
    k1, k2 = max(a, b), min(a, b)
    while k1 > k2:
        if k1 % 8 == 0 and k1 // 8 >= k2:
            count += 1
            k1 //= 8
        elif k1 % 4 == 0 and k1 // 4 >= k2:
            count += 1
            k1 //= 4
        elif k1 % 2 == 0 and k1 // 2 >= k2:
            count += 1
            k1 //= 2
        else:
            count = -1
            break
    if k1 != k2:
        count = -1
    print(count)
import { Component, ChangeDetectionStrategy, ViewEncapsulation, Type } from '@angular/core';
import { FieldTypeConfig, FormlyFieldConfig } from '@ngx-formly/core';
import { FieldType, FormlyFieldProps } from '@ngx-formly/kendo/form-field';

interface CheckboxProps extends FormlyFieldProps {}

export interface FormlyCheckboxFieldConfig extends FormlyFieldConfig<CheckboxProps> {
  type: 'checkbox' | Type<FormlyFieldCheckbox>;
}

@Component({
  selector: 'formly-field-kendo-checkbox',
  template: `
    <input type="checkbox" kendoCheckBox [formControl]="formControl" [formlyAttributes]="field" />
    <label [for]="id" class="k-checkbox-label">
      {{ props.label }}
      <span *ngIf="props.required && props.hideRequiredMarker !== true" aria-hidden="true" class="k-required">*</span>
    </label>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
  encapsulation: ViewEncapsulation.None,
  styleUrls: ['./checkbox.type.scss'],
})
export class FormlyFieldCheckbox extends FieldType<FieldTypeConfig<CheckboxProps>> {
  override defaultOptions = {
    props: {
      hideLabel: true,
    },
  };
}
/**
 * Created by Neal on 16/4/8.
 */
@Entity(table = "company")
public class Company {

    @Id
    public Long id;

    @Column("name")
    public String name;

    @Column("type")
    public Integer type;

    @Column("status")
    public Integer status;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getType() {
        return type;
    }

    public void setType(Integer type) {
        this.type = type;
    }

    public Integer getStatus() {
        return status;
    }

    public void setStatus(Integer status) {
        this.status = status;
    }
}
import argparse
import time

import networkx as nx
from gensim.models import Word2Vec

import HuGE_pel as huge


def parse_args():
    '''
    Parses the HuGE arguments.
    '''
    parser = argparse.ArgumentParser(description="Run HuGE.")

    parser.add_argument('--input', nargs='?', default='../graph/CA-AstroPh.txt',
                        help='Input graph path')
    parser.add_argument('--comnb', nargs='?', default='../pre_data/CA-AstroPh_comneig.txt',
                        help='Preprocessing for common neighbors')
    parser.add_argument('--output', nargs='?', default='../emb/CA-AstroPh.emb',
                        help='Embeddings path')
    parser.add_argument('--dimensions', type=int, default=128,
                        help='Number of dimensions. Default is 128.')
    parser.add_argument('--window-size', type=int, default=10,
                        help='Context size for optimization. Default is 10.')
    parser.add_argument('--iter', default=1, type=int,
                        help='Number of epochs in SGD')
    parser.add_argument('--sample', type=float, default=0.5,
                        help='Inout hyperparameter. Default is 0.5')
    parser.add_argument('--workers', type=int, default=10,
                        help='Number of parallel workers. Default is 10.')
    parser.add_argument('--directed', dest='directed', action='store_true',
                        help='Graph is (un)directed. Default is undirected.')
    parser.add_argument('--undirected', dest='undirected', action='store_false')
    parser.set_defaults(undirected=False)
    parser.add_argument('--r', type=float, default=0.999,
                        help='R square. Default is 0.999')
    parser.add_argument('--min_L', type=int, default=10,
                        help='The minimum walk length. Default is 10')
    parser.add_argument('--h', type=float, default=0.001,
                        help='Variation H. Default is 0.001')

    return parser.parse_args()


def read_graph():
    '''
    Reads the input network in networkx.
    '''
    G = nx.read_edgelist(args.input, nodetype=int, create_using=nx.Graph())
    return G


def learn_embeddings(walks):
    '''
    Learn embeddings by optimizing the Skipgram objective using SGD.
    '''
    walks = [[str(node) for node in walk] for walk in walks]
    model = Word2Vec(walks, size=args.dimensions, window=args.window_size,
                     min_count=0, sg=1, workers=args.workers, iter=args.iter)
    model.wv.save_word2vec_format(args.output)
    return model.wv


def common_neighbor_loading():
    comm_neighbor = {}
    with open(args.comnb, 'r') as f:
        com_edges = f.readlines()
        for edge in com_edges:
            com_ind = edge.split()
            comm_neighbor[int(com_ind[0]), int(com_ind[1])] = int(com_ind[2])
    return comm_neighbor


def main(args):
    '''
    Pipeline for representational learning for all nodes in a graph.
    '''
    print("loading graph")
    load_graph_time = time.time()
    nx_G = read_graph()
    G = huge.Graph(nx_G, args.directed)
    loaded_graph_time = time.time()
    print("load graph completed! load time used:", (loaded_graph_time - load_graph_time), 's')

    print('Common Neighbor Loading:')
    time1 = time.time()
    comm_neighbor = common_neighbor_loading()
    time2 = time.time()
    print("common neighbor loading time:", (time2 - time1), 's')

    nodes = list(G.G.nodes())

    print("Random walk starting:")
    walks_time = time.time()
    walks, walk_length_list = G.simulate_walks_parallel(nodes, comm_neighbor, args.r,
                                                        args.min_L, args.h, args.workers)
    walks_time_end = time.time()
    print("Walk path completed! time used:", (walks_time_end - walks_time), 's')

    learn_time = time.time()
    wv = learn_embeddings(walks)
    learn_time_end = time.time()
    print('learning time used:', (learn_time_end - learn_time), 's')


if __name__ == "__main__":
    args = parse_args()
    main(args)
Nonparametric detection using dependent samples (Corresp.) A new general approach to the formulation of a non-parametric detector using dependent samples is introduced and applied to a space-diversity system employing dc signaling. A comparison based on a form of asymptotic relative efficiency is made between the new detector and a Mann-Whitney detector. Under certain conditions the new procedure demonstrates an improvement in transmission efficiency.
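Since the comparison baseline named in this abstract is a Mann-Whitney detector, a minimal sketch of that detector may help: given a reference block known to contain noise only and an observation block, declare a detection when the rank-sum test rejects at level alpha. This is illustrative only; it assumes independent samples (whereas the paper's subject is precisely the dependent-sample case), and the signal level, threshold, and sample sizes are arbitrary.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
noise_ref = rng.normal(0.0, 1.0, size=200)   # noise-only reference samples
observed = rng.normal(0.4, 1.0, size=200)    # dc signal buried in noise

# One-sided test: is the observed block stochastically larger than noise?
stat, p_value = mannwhitneyu(observed, noise_ref, alternative="greater")
signal_present = p_value < 0.05
print(signal_present)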
import numpy

arr1 = numpy.array([['a', 'b'], ['c', 'd']])
# flattenedArr1 = [list(cell for cell in row) for row in arr1]

print(arr1.flatten())
# arr1 holds 4 elements in total, so the reshape width must divide 4
# (reshape(-1, 6) would raise a ValueError here).
print(arr1.reshape(-1, 4))

# import itertools
# binaryCombinations = list(map(list, itertools.product([0, 1], repeat=4)))
# combinations = [
#     [
#         flattenedArr1[i] if digit else (0, '') for i, digit in enumerate(binaryCombination)
#     ] for binaryCombination in binaryCombinations
# ]
# print(combinations)
I actually thought Jurassic World did a great job with the concept of cloned dinosaurs being old news, but it didn’t go far enough on the ramifications of that level of genetic engineering technology existing since the early 1990s. This world, while it would be very similar to OTL at first glance, should be drastically different in the field of medicine and biotechnology, which has ramifications throughout the 1990s and 2000s that make 2017 look very different.

The first is how InGen’s tech does away with a lot of diseases. Alzheimer’s, cancer, infertility, birth defects, etc. would all be gone in the First World. Organ failure is a thing of the past: need a new liver? Clone a new one. This even has ramifications for voluntary or “cosmetic” alterations: an MTF transsexual can have a womb grown just for them, which would likely bring the entire transgenderism issue to the political front far earlier than OTL.

Then there’s the concept of applying this genetic engineering tech to improve humanity. This would be the major source of controversy in the world: should designer babies be legalized? Is this the way of the future, or are we playing God? Are we saving mankind, or are we replacing it with something better... or worse? I imagine that while the West is still wringing its hands on the issue, the East sees this as a way to gain an advantage. Japan develops a robust infrastructure of artificial wombs, which empowers the robotics industry to create artificial nannies to take care of this new generation. China creates an army of clone girls for future generations of boys to marry, so the population bottleneck is not an ongoing problem.

The presence of I. Rex shows that large-scale genetic mixing and matching is possible, and I find it difficult to believe that a theme park is the only organization in the world working on it. I can see military, intelligence, and black ops units around the world creating their own breeds of super soldiers; the first generations would now be in their early twenties, and given faster maturity rates, they’d probably have been in action for several years before. The legal status of these soldiers is in limbo: they are technically conscripts, perhaps even slaves, but what other family do they have? The Western governments, at least, already have a clever legal argument if the slavery issue ever does come up: the super soldiers are not genetically identical to humans, and so therefore are not persons and fall outside of the protection of anti-slavery laws. As for the plot point of using dinosaurs in combat? It’s probably been tried, but failed because human infantry are still far more reliable.

Dinosaurs aren’t the only extinct animals brought back. Many animals, from mammoths to thylacines to dodos, are back. Driving species to extinction isn’t as bad a PR move as it once was, since any perpetrator can just spend a few million to bring the species back from the dead. This has, ironically, resulted in even more environmental degradation, as the idea that mankind can fix these problems through technology has set in. Last decade’s scheme to fix global warming by reviving ancient species of plankton to act as carbon sinks kinda backfired, as large portions of the Gulf of Mexico are now covered in runaway blooms of the stuff.
Theatre as Sacrament

All theatre is sacramental. A theatrical event establishes itself as theatre by setting aside a measured space as inviolable for a measured time. This framing effect of theatre is sacramental. Within the frame of theatre, other sacraments can be represented or performed. Part I of this paper develops conceptual distinctions necessary to understanding the sacramental in theatre, using an ethics-based theory of sacrament. Part II sets out to use the theory, applying it to open up new questions for the interpretation of ancient Greek tragedy, and using the theory to explain certain plot elements in Sophocles' Philoctetes and other plays.

Any act of theatre has a sacramental effect. The art of theatre makes ceremonies possible, and by ceremonies we are able to make things sacred. In saying that theatre is sacramental, I am not saying that it is religious. Religious ceremonies employ the art of theatre and depend on that art, but theatre does not depend on religion. I understand sacrament as an ethical concept. A sacrament sets up an ethical hedge around something: it makes it wrong to touch, to tread on, or to alter the thing in question. By theatre I mean the art that makes action worth watching for a measured time in a measured space. This art must of necessity draw a line between watching and being watched. Drawing that line is a minor, though fundamental, sacrament. Other sacraments may take place within the frame of theatre. My theory of the sacramental in theatre stands on its usefulness for understanding and interpreting the elements of theatre: the experiences of both the watchers and the watched in actual productions, on the one hand, and, on the other, the texts that survive to represent productions of the past. If the theory is coherent and useful, then we should use it. Otherwise, not.
package main import ( "flag" "fmt" "net" "os" "strings" "time" "github.com/ton31337/nerf" "go.uber.org/zap" "go.uber.org/zap/zapcore" "google.golang.org/grpc" ) func startServer(lightHouse string) { if lightHouse == "" { fmt.Println("-lighthouse flag must be set") flag.Usage() os.Exit(1) } lightHouseIPS := strings.Split(lightHouse, ":") if len(lightHouseIPS) < 2 { fmt.Println("The format for lighthouse must be <NebulaIP>:<PublicIP>") flag.Usage() os.Exit(1) } if err := net.ParseIP(lightHouseIPS[0]); err == nil { fmt.Println("NebulaIP address is not IPv4") flag.Usage() os.Exit(1) } if err := net.ParseIP(lightHouseIPS[1]); err == nil { fmt.Println("PublicIP address is not IPv4") flag.Usage() os.Exit(1) } nerf.ServerCfg.Nebula.LightHouse.NebulaIP = lightHouseIPS[0] nerf.ServerCfg.Nebula.LightHouse.PublicIP = lightHouseIPS[1] nerf.ServerCfg.Logger.Debug("Nerf server started", zap.String("lightHouse", lightHouse)) go func() { for range time.Tick(10 * time.Second) { if (time.Now().Unix() - nerf.ServerCfg.Teams.UpdatedAt) > int64(time.Hour.Seconds()) { nerf.ServerCfg.Teams.Mutex.Lock() nerf.ServerCfg.Logger.Debug( "begin-of-sync Github Teams with local cache") nerf.ServerCfg.Teams.Sync() nerf.ServerCfg.Logger.Debug( "end-of-sync Github Teams with local cache") nerf.ServerCfg.Teams.Mutex.Unlock() } } }() // Start gRPC server only when Teams are synced initially. for { if nerf.ServerCfg.Teams != nil && !nerf.ServerCfg.Teams.Mutex.Locked() { lis, err := net.Listen("tcp", fmt.Sprintf(":%d", 9000)) if err != nil { nerf.ServerCfg.Logger.Fatal("failed to listen gRPC server", zap.Error(err)) } grpcServer := grpc.NewServer() nerf.RegisterServerServer(grpcServer, &nerf.Server{}) if err = grpcServer.Serve(lis); err != nil { nerf.ServerCfg.Logger.Fatal("can't serve gRPC", zap.Error(err)) } break } } } func main() { lightHouse := flag.String("lighthouse", "", "Set the lighthouse. E.g.: <NebulaIP>:<PublicIP>") gaidysUrl := flag.String( "gaidysUrl", os.Getenv("GAIDYS_URL"), "Set URL for Gaidys service (IPAM)", ) logLevel := flag.String( "log-level", "info", "Set the logging level - values are 'debug', 'info', 'warn', and 'error'", ) printUsage := flag.Bool("help", false, "Print command line usage") flag.Parse() if *printUsage { flag.Usage() os.Exit(0) } nerf.ServerCfg = nerf.NewServerConfig() logger, _ := zap.Config{ Encoding: "json", Level: zap.NewAtomicLevelAt(nerf.StringToLogLevel(*logLevel)), OutputPaths: []string{"stdout"}, EncoderConfig: zapcore.EncoderConfig{ TimeKey: "timestamp", EncodeTime: zapcore.ISO8601TimeEncoder, MessageKey: "message", }, }.Build() nerf.ServerCfg.Logger = logger nerf.ServerCfg.GaidysUrl = *gaidysUrl defer func() { _ = nerf.ServerCfg.Logger.Sync() }() startServer(*lightHouse) }
MINNEAPOLIS -- Body camera video released Thursday shows how two dogs approached a Minneapolis police officer before they were shot and seriously wounded in their fenced-in backyard earlier this month, CBS Minnesota reports.

The officer was responding to a false security alarm on July 8 when he shot the dogs, according to the police report. One suffered a bullet wound to the jaw, and the other was hit multiple times in its body. Both dogs survived but will require extensive treatment.

In the body camera footage, the first dog can be seen approaching the officer, identified as Michael Mays, slowly with its tail wagging. After the officer shoots the animal in the face, the other dog dashes toward the officer and is hit by gunfire. "I dispatched both of them," the officer reports immediately after the shooting.

Video then shows Mays climb the backyard fence, walk around the house and speak to the teenage resident who tripped the alarm.

Body camera footage showing a police officer shooting two dogs in Minneapolis. (Image: Minneapolis Police Department / CBS Minnesota)

He apologizes to the sobbing teenager for shooting the dogs, saying, "I don't like shooting dogs, I love dogs."

In a report filed the night of the shooting, Mays said that the dogs, which he described as pit bulls, charged at him. The police union defended Mays, saying the first pit bull growled at him before approaching. However, the body camera footage cannot verify this, as no sound was recorded until after shots were fired.

The body camera footage was released Thursday afternoon by Michael Padden, the attorney for the dogs' owner, Jennifer LeMay, who says the animals are service dogs for her children. At a news conference, Padden wanted to know why the audio on Mays' body camera wasn't turned on as the officer approached the house.

The day after the shooting happened, LeMay posted surveillance video taken by a backyard camera to Facebook, where it went viral, garnering hundreds of thousands of views. The video clearly shows Mays shooting both dogs and climbing over the fence.

A Facebook image of one of Jennifer LeMay's dogs. (Image: CBS Minnesota / Jennifer LeMay / Facebook)

Minneapolis Police Chief Janee Harteau described the surveillance video as "difficult to watch." She called for an Internal Affairs use of force review of the incident and said that officer training courses on dealing with dogs will be updated. So far, the department has yet to comment on Mays' actions.

The wounded dogs required thousands of dollars in veterinary treatment. A GoFundMe page to raise money for medical expenses has received nearly $40,000. Harteau also said that the police department will help pay for treatment expenses.

The dog shooting is the second Minneapolis police incident this month involving questions of police and body cameras. Over the weekend, Justine Damond, an Australian yoga teacher reporting a possible sexual assault in the alley behind her home, was fatally shot by a responding officer. While both officers on the scene were wearing body cameras, neither was turned on when the fatal shot was fired. The shooting remains under investigation.
[Abraham] grew strong in faith and gave glory to God. He was fully convinced that God was able to do what he promised. As I entered high school, I decided that I wanted to be on the school’s wrestling team. When practices started in late October, I realized that the sport would be much harder than I had originally thought. Then, when the official matches began, every time I stepped on the mat, I lost. Daily, I thought about quitting the team. I was tired of putting in so much work and effort only to lose. When the season ended, I had only five wins and a whopping twenty-seven losses. However, I decided to forget about that season and to work hard the next year. I trained and practiced. When my sophomore season of wrestling came, I was a much better wrestler. I climbed to the number four spot in the state rankings. Practicing our faith requires a similar commitment. Sometimes, I start to lose faith in God and in myself. I now realize that instead of losing faith, we can become more committed and grow stronger in our faith through prayer, reading the Bible, and worshiping with other Christians. God will give us the strength to keep moving forward.
The Business Model: An Integrative Framework for Strategy Execution We have many useful frameworks for formulating business strategy, i.e., devising a theory of how to compete. Frameworks for strategy execution are comparatively fragmented and idiosyncratic. This paper proposes a business model framework to link the firm's theory about how to compete to its execution. The framework captures previous ideas about business models in a simple logical structure that reflects current thinking in strategy. The business model framework provides a consistent logical picture of the firm that is a useful tool for the strategist, for teaching, and potentially for research on business models in strategy.
import { h, appendToElement } from "./typed-dom";
import { Op } from "myclinic-drawer";
import { print, listPrinterSettings } from "./service";

class Nav {
  dom: HTMLElement;
  onPageChange: (page: number) => void = _ => {};

  constructor() {
    this.dom = h.span({}, []);
  }

  update(currentPage: number, totalPages: number) {
    this.dom.innerHTML = "";
    let prevLink = h.a({}, ["<"]);
    let nextLink = h.a({}, [">"]);
    prevLink.addEventListener("click", event => {
      if (currentPage > 1) {
        this.onPageChange(currentPage - 1);
      }
    });
    nextLink.addEventListener("click", event => {
      if (currentPage < totalPages) {
        this.onPageChange(currentPage + 1);
      }
    });
    if (totalPages > 1) {
      appendToElement(this.dom, [
        prevLink, " ", `${currentPage} / ${totalPages}`, " ", nextLink
      ]);
    }
  }
}

export class PrinterWidget {
  dom: HTMLElement;
  onPageChange: (page: number) => void = _ => {};
  private pages: Op[][] = [];
  private settingKey: string | null = null;
  private settingName: string | null = null;
  private nav: Nav;
  private settingNameSpan: HTMLElement;
  private selectWorkarea: HTMLElement;

  constructor(settingKey?: string) {
    this.nav = new Nav();
    this.nav.onPageChange = newPage => {
      let pageIndex = newPage - 1;
      this.onPageChange(pageIndex);
    };
    if (settingKey !== undefined) {
      this.settingKey = settingKey;
      this.settingName = getPrinterSetting(settingKey);
    }
    let printButton = h.button({}, ["印刷"]);
    printButton.addEventListener("click", event => {
      if (this.settingName === null) {
        print(this.pages);
      } else {
        print(this.pages, this.settingName);
      }
    });
    this.settingNameSpan = h.span({}, [this.settingName || "(プリンター未選択)"]);
    let selectPrinter = h.a({}, ["プリンター選択"]);
    selectPrinter.addEventListener("click", async event => {
      if (this.selectWorkarea.innerHTML === "") {
        let settings = await listPrinterSettings();
        this.fillSelectWorkarea(settings);
      } else {
        this.selectWorkarea.innerHTML = "";
      }
    });
    this.selectWorkarea = h.div({}, []);
    this.dom = h.div({}, [
      printButton, " ",
      this.nav.dom, " ",
      "プリンター:", this.settingNameSpan, " ",
      selectPrinter, " ",
      h.a({ href: "/printer", target: "printer" }, ["プリンター管理"]),
      this.selectWorkarea
    ]);
  }

  setPages(pages: Op[][]): void {
    this.nav.update(1, pages.length);
    this.pages = pages;
  }

  updateNavPage(page: number): void {
    this.nav.update(page, this.pages.length);
  }

  private fillSelectWorkarea(settings: string[]): void {
    let dom = this.selectWorkarea;
    let current = this.settingName;
    let form = h.form({}, []);
    {
      let opt = h.input({ type: "radio", name: "printer-setting" }, []);
      opt.checked = !current;
      opt.addEventListener("change", event => {
        this.updateSetting(null);
        dom.innerHTML = "";
      });
      appendToElement(form, [opt, "(プリンター未選択)", " "]);
    }
    settings.forEach(setting => {
      let opt = h.input({ type: "radio", name: "printer-setting" }, []);
      opt.checked = setting === current;
      opt.addEventListener("change", event => {
        this.updateSetting(setting);
        dom.innerHTML = "";
      });
      appendToElement(form, [opt, setting, " "]);
    });
    let cancel = h.button({}, ["キャンセル"]);
    cancel.addEventListener("click", event => {
      dom.innerHTML = "";
    });
    form.appendChild(cancel);
    dom.appendChild(form);
  }

  private updateSetting(setting: string | null) {
    this.settingName = setting;
    if (this.settingKey !== null) {
      if (setting === null) {
        removePrinterSetting(this.settingKey);
      } else {
        setPrinterSetting(this.settingKey, setting);
      }
    }
    this.settingNameSpan.innerHTML = "";
    appendToElement(this.settingNameSpan, [setting || "(プリンター未選択)"]);
  }
}

export function getPrinterSetting(key: string): string | null {
  return window.localStorage.getItem(key);
}

export function setPrinterSetting(key: string, name: string): void {
  window.localStorage.setItem(key, name);
}

export function removePrinterSetting(key: string): void {
  window.localStorage.removeItem(key);
}
from abc import ABC
from typing import TypeVar

from web_framework.api.module import MethodContentType
from web_framework.api.type_parsing import TypeAdapter

T = TypeVar('T')


class TextTypeAdapter(TypeAdapter[T], ABC):
    def __init__(self, adapting_type):
        super().__init__(adapting_type, MethodContentType.TEXT)


class JsonTypeAdapter(TypeAdapter[T], ABC):
    def __init__(self, adapting_type):
        super().__init__(adapting_type, MethodContentType.JSON)


class HtmlTypeAdapter(TypeAdapter[T], ABC):
    def __init__(self, adapting_type):
        super().__init__(adapting_type, MethodContentType.HTML)
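For illustration, a concrete adapter would subclass one of these bases and pass its target type up the chain. The sketch below is hypothetical: DictJsonTypeAdapter is not part of the module above, and any parsing or serialization hooks beyond __init__ depend on what the framework's TypeAdapter base actually declares.

class DictJsonTypeAdapter(JsonTypeAdapter[dict]):
    # Hypothetical concrete adapter for dict payloads with the JSON
    # content type; additional abstract methods of TypeAdapter, if any,
    # would still need to be implemented here.
    def __init__(self):
        super().__init__(dict)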
Around 8:15 p.m., an Amtrak train struck Aaron Matthew Wolf east of Sweet Bay Lane, about a mile south of San Luis Obispo. Emergency responders pronounced Wolf dead at the scene. Investigators have yet to determine why Wolf was on the tracks. An autopsy is scheduled for Wednesday.

On Monday, Cal Poly officials notified students and staff that the victim of the crash was Wolf, a computer science junior. The university says counselling services are available to students 24 hours a day. Counselling services are also available for Cal Poly employees.

Wolf was one of three individuals who were struck and killed by trains on the Central Coast on three consecutive days. The other two deaths occurred in Santa Barbara County. A man who appeared to be in his 50s was struck in Summerland on Friday afternoon, and a 19-year-old woman was struck in Goleta on Sunday morning. Investigators said the Goleta incident appeared to be a suicide.

Additionally, a 54-year-old woman was struck and killed by a train in San Luis Obispo last week. The woman was hit on the night of Jan. 4 near The Graduate.

Train engineers are often left deeply traumatized by these terrible events. They can even suffer PTSD, and it is a major cause of early retirement. It's not just the accident itself; it is the constant stress that engineers experience as they anticipate a repeat of the same circumstances.
<reponame>sebastian-raubach/blog-server /* * This file is generated by jOOQ. */ package blog.raubach.database.codegen.tables.records; import blog.raubach.database.codegen.tables.Hikestats; import java.sql.Timestamp; import javax.annotation.Generated; import org.jooq.Field; import org.jooq.Record1; import org.jooq.Record9; import org.jooq.Row9; import org.jooq.impl.UpdatableRecordImpl; // @formatter:off /** * This class is generated by jOOQ. */ @Generated( value = { "http://www.jooq.org", "jOOQ version:3.11.9" }, comments = "This class is generated by jOOQ" ) @SuppressWarnings({ "all", "unchecked", "rawtypes" }) public class HikestatsRecord extends UpdatableRecordImpl<HikestatsRecord> implements Record9<Integer, Integer, Double, Double, String, String, String, Timestamp, Timestamp> { private static final long serialVersionUID = -1913783811; /** * Setter for <code>blog_db.hikestats.post_id</code>. */ public void setPostId(Integer value) { set(0, value); } /** * Getter for <code>blog_db.hikestats.post_id</code>. */ public Integer getPostId() { return (Integer) get(0); } /** * Setter for <code>blog_db.hikestats.duration</code>. */ public void setDuration(Integer value) { set(1, value); } /** * Getter for <code>blog_db.hikestats.duration</code>. */ public Integer getDuration() { return (Integer) get(1); } /** * Setter for <code>blog_db.hikestats.distance</code>. */ public void setDistance(Double value) { set(2, value); } /** * Getter for <code>blog_db.hikestats.distance</code>. */ public Double getDistance() { return (Double) get(2); } /** * Setter for <code>blog_db.hikestats.ascent</code>. */ public void setAscent(Double value) { set(3, value); } /** * Getter for <code>blog_db.hikestats.ascent</code>. */ public Double getAscent() { return (Double) get(3); } /** * Setter for <code>blog_db.hikestats.gpx_path</code>. */ public void setGpxPath(String value) { set(4, value); } /** * Getter for <code>blog_db.hikestats.gpx_path</code>. */ public String getGpxPath() { return (String) get(4); } /** * Setter for <code>blog_db.hikestats.elevation_profile_path</code>. */ public void setElevationProfilePath(String value) { set(5, value); } /** * Getter for <code>blog_db.hikestats.elevation_profile_path</code>. */ public String getElevationProfilePath() { return (String) get(5); } /** * Setter for <code>blog_db.hikestats.time_distance_profile_path</code>. */ public void setTimeDistanceProfilePath(String value) { set(6, value); } /** * Getter for <code>blog_db.hikestats.time_distance_profile_path</code>. */ public String getTimeDistanceProfilePath() { return (String) get(6); } /** * Setter for <code>blog_db.hikestats.created_on</code>. */ public void setCreatedOn(Timestamp value) { set(7, value); } /** * Getter for <code>blog_db.hikestats.created_on</code>. */ public Timestamp getCreatedOn() { return (Timestamp) get(7); } /** * Setter for <code>blog_db.hikestats.updated_on</code>. */ public void setUpdatedOn(Timestamp value) { set(8, value); } /** * Getter for <code>blog_db.hikestats.updated_on</code>. 
*/ public Timestamp getUpdatedOn() { return (Timestamp) get(8); } // ------------------------------------------------------------------------- // Primary key information // ------------------------------------------------------------------------- /** * {@inheritDoc} */ @Override public Record1<Integer> key() { return (Record1) super.key(); } // ------------------------------------------------------------------------- // Record9 type implementation // ------------------------------------------------------------------------- /** * {@inheritDoc} */ @Override public Row9<Integer, Integer, Double, Double, String, String, String, Timestamp, Timestamp> fieldsRow() { return (Row9) super.fieldsRow(); } /** * {@inheritDoc} */ @Override public Row9<Integer, Integer, Double, Double, String, String, String, Timestamp, Timestamp> valuesRow() { return (Row9) super.valuesRow(); } /** * {@inheritDoc} */ @Override public Field<Integer> field1() { return Hikestats.HIKESTATS.POST_ID; } /** * {@inheritDoc} */ @Override public Field<Integer> field2() { return Hikestats.HIKESTATS.DURATION; } /** * {@inheritDoc} */ @Override public Field<Double> field3() { return Hikestats.HIKESTATS.DISTANCE; } /** * {@inheritDoc} */ @Override public Field<Double> field4() { return Hikestats.HIKESTATS.ASCENT; } /** * {@inheritDoc} */ @Override public Field<String> field5() { return Hikestats.HIKESTATS.GPX_PATH; } /** * {@inheritDoc} */ @Override public Field<String> field6() { return Hikestats.HIKESTATS.ELEVATION_PROFILE_PATH; } /** * {@inheritDoc} */ @Override public Field<String> field7() { return Hikestats.HIKESTATS.TIME_DISTANCE_PROFILE_PATH; } /** * {@inheritDoc} */ @Override public Field<Timestamp> field8() { return Hikestats.HIKESTATS.CREATED_ON; } /** * {@inheritDoc} */ @Override public Field<Timestamp> field9() { return Hikestats.HIKESTATS.UPDATED_ON; } /** * {@inheritDoc} */ @Override public Integer component1() { return getPostId(); } /** * {@inheritDoc} */ @Override public Integer component2() { return getDuration(); } /** * {@inheritDoc} */ @Override public Double component3() { return getDistance(); } /** * {@inheritDoc} */ @Override public Double component4() { return getAscent(); } /** * {@inheritDoc} */ @Override public String component5() { return getGpxPath(); } /** * {@inheritDoc} */ @Override public String component6() { return getElevationProfilePath(); } /** * {@inheritDoc} */ @Override public String component7() { return getTimeDistanceProfilePath(); } /** * {@inheritDoc} */ @Override public Timestamp component8() { return getCreatedOn(); } /** * {@inheritDoc} */ @Override public Timestamp component9() { return getUpdatedOn(); } /** * {@inheritDoc} */ @Override public Integer value1() { return getPostId(); } /** * {@inheritDoc} */ @Override public Integer value2() { return getDuration(); } /** * {@inheritDoc} */ @Override public Double value3() { return getDistance(); } /** * {@inheritDoc} */ @Override public Double value4() { return getAscent(); } /** * {@inheritDoc} */ @Override public String value5() { return getGpxPath(); } /** * {@inheritDoc} */ @Override public String value6() { return getElevationProfilePath(); } /** * {@inheritDoc} */ @Override public String value7() { return getTimeDistanceProfilePath(); } /** * {@inheritDoc} */ @Override public Timestamp value8() { return getCreatedOn(); } /** * {@inheritDoc} */ @Override public Timestamp value9() { return getUpdatedOn(); } /** * {@inheritDoc} */ @Override public HikestatsRecord value1(Integer value) { setPostId(value); return this; } /** * 
{@inheritDoc} */ @Override public HikestatsRecord value2(Integer value) { setDuration(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value3(Double value) { setDistance(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value4(Double value) { setAscent(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value5(String value) { setGpxPath(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value6(String value) { setElevationProfilePath(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value7(String value) { setTimeDistanceProfilePath(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value8(Timestamp value) { setCreatedOn(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord value9(Timestamp value) { setUpdatedOn(value); return this; } /** * {@inheritDoc} */ @Override public HikestatsRecord values(Integer value1, Integer value2, Double value3, Double value4, String value5, String value6, String value7, Timestamp value8, Timestamp value9) { value1(value1); value2(value2); value3(value3); value4(value4); value5(value5); value6(value6); value7(value7); value8(value8); value9(value9); return this; } // ------------------------------------------------------------------------- // Constructors // ------------------------------------------------------------------------- /** * Create a detached HikestatsRecord */ public HikestatsRecord() { super(Hikestats.HIKESTATS); } /** * Create a detached, initialised HikestatsRecord */ public HikestatsRecord(Integer postId, Integer duration, Double distance, Double ascent, String gpxPath, String elevationProfilePath, String timeDistanceProfilePath, Timestamp createdOn, Timestamp updatedOn) { super(Hikestats.HIKESTATS); set(0, postId); set(1, duration); set(2, distance); set(3, ascent); set(4, gpxPath); set(5, elevationProfilePath); set(6, timeDistanceProfilePath); set(7, createdOn); set(8, updatedOn); } // @formatter:on }
Churn railway station

History

This was a small and very isolated single platform halt with access only via an unmetalled downland sheep road. It was built as a temporary stop to accommodate a competition held by the National Rifle Association in 1888. However, from 1889 military summer camps were established near to the station, which required the use of the halt as the only access to the site. Timetables provided that trains would not call at Churn unless prior notice had been given to the Stationmaster at Didcot.

Facilities

The station buildings consisted of no more than a simple wooden shelter and basic lavatories. In order to provide deliveries of goods for the camps, a small siding was built at the southern end of the station.

In fiction

In 1905 the station was the subject of a fictional crime mystery, "Sir Gilbert Murrell's Picture", part of Thrilling Stories of the Railways by Victor Whitechurch (1905).

Closure

The station closed in 1962 when the entire line was closed to passenger traffic. Freight operations ceased in 1966.
Emilia Clarke plays Daenerys Targaryen in "Game of Thrones"

HOLLYWOOD — "Game of Thrones" star Emilia Clarke suffered two nearly fatal brain aneurysms in the early years of filming the hit series, she said in an essay published Thursday.

The British actress -- who plays Daenerys Targaryen on the blockbuster show about to enter its final season -- wrote that the first aneurysm rupture struck while she was at the gym in February 2011, just after filming the first season.

"At some level, I knew what was happening: my brain was damaged," the 32-year-old wrote in The New Yorker magazine in her piece titled "A Battle For My Life."

"For a few moments, I tried to will away the pain and the nausea," she continued. "To keep my memory alive, I tried to recall, among other things, some lines from 'Game of Thrones'."

Clarke was rushed to the hospital and diagnosed with a subarachnoid hemorrhage -- a form of stroke triggered by bleeding into areas that surround the brain, which kills about one third of the patients it strikes.

She was 24 at the time of her first brain surgery, and said the recovery period, in which she could not even recall her own name -- a condition called aphasia -- gave her "a sense of doom."

"In my worst moments, I wanted to pull the plug. I asked the medical staff to let me die," Clarke said. "My job -- my entire dream of what my life would be -- centered on language, on communication. Without that, I was lost."

The condition passed and Clarke left the hospital one month after her admission -- but doctors had found she had a second aneurysm that could rupture at any moment.

"Even before we began filming Season 2, I was deeply unsure of myself. I was often so woozy, so weak, that I thought I was going to die," Clarke wrote, saying she took morphine between interviews while promoting the acclaimed show.

During a routine brain scan, doctors found the growth had doubled in size and decided to operate -- a seemingly simple procedure that resulted in major complications and another month in the hospital.

Today, Clarke says she has "healed beyond my most unreasonable hopes," and has helped develop a charity to offer treatment to patients recovering from stroke and brain injuries.

"There is something gratifying, and beyond lucky, about coming to the end of 'Thrones,'" Clarke wrote. "I'm so happy to be here to see the end of this story and the beginning of whatever comes next."
San Francisco Fed President John Williams on Wednesday repeated his view that the decision to lift interest rates is "data dependent," saying he hasn't seen a convincing sign that underlying inflation has bottomed out. "Until I have more confidence that inflation will be moving back to 2%, I'll continue to be in wait-and-see mode," Williams told a Los Angeles gathering of bank economists. "Every FOMC meeting is on the table."
<reponame>Armcannon/cli package fork import ( "net/http" "net/url" "regexp" "testing" "time" "github.com/cli/cli/context" "github.com/cli/cli/git" "github.com/cli/cli/internal/config" "github.com/cli/cli/internal/ghrepo" "github.com/cli/cli/internal/run" "github.com/cli/cli/pkg/cmdutil" "github.com/cli/cli/pkg/httpmock" "github.com/cli/cli/pkg/iostreams" "github.com/cli/cli/pkg/prompt" "github.com/cli/cli/test" "github.com/google/shlex" "github.com/stretchr/testify/assert" ) func runCommand(httpClient *http.Client, remotes []*context.Remote, isTTY bool, cli string) (*test.CmdOut, error) { io, stdin, stdout, stderr := iostreams.Test() io.SetStdoutTTY(isTTY) io.SetStdinTTY(isTTY) io.SetStderrTTY(isTTY) fac := &cmdutil.Factory{ IOStreams: io, HttpClient: func() (*http.Client, error) { return httpClient, nil }, Config: func() (config.Config, error) { return config.NewBlankConfig(), nil }, BaseRepo: func() (ghrepo.Interface, error) { return ghrepo.New("OWNER", "REPO"), nil }, Remotes: func() (context.Remotes, error) { if remotes == nil { return []*context.Remote{ { Remote: &git.Remote{ Name: "origin", FetchURL: &url.URL{}, }, Repo: ghrepo.New("OWNER", "REPO"), }, }, nil } return remotes, nil }, } cmd := NewCmdFork(fac, nil) argv, err := shlex.Split(cli) cmd.SetArgs(argv) cmd.SetIn(stdin) cmd.SetOut(stdout) cmd.SetErr(stderr) if err != nil { panic(err) } _, err = cmd.ExecuteC() if err != nil { return nil, err } return &test.CmdOut{ OutBuf: stdout, ErrBuf: stderr}, nil } func TestRepoFork_nontty(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.Verify(t) defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} _, restore := run.Stub() defer restore(t) output, err := runCommand(httpClient, nil, false, "") if err != nil { t.Fatalf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) assert.Equal(t, "", output.Stderr()) } func TestRepoFork_existing_remote_error(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} _, err := runCommand(httpClient, nil, false, "--remote") if err == nil { t.Fatal("expected error running command `repo fork`") } assert.Equal(t, "a remote called 'origin' already exists. 
You can rerun this command with --remote-name to specify a different remote name.", err.Error()) reg.Verify(t) } func TestRepoFork_no_existing_remote(t *testing.T) { remotes := []*context.Remote{ { Remote: &git.Remote{ Name: "upstream", FetchURL: &url.URL{}, }, Repo: ghrepo.New("OWNER", "REPO"), }, } defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.Verify(t) defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git remote add -f origin https://github\.com/someone/REPO\.git`, 0, "") output, err := runCommand(httpClient, remotes, false, "--remote") if err != nil { t.Fatalf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) assert.Equal(t, "", output.Stderr()) } func TestRepoFork_in_parent_nontty(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git remote add -f fork https://github\.com/someone/REPO\.git`, 0, "") output, err := runCommand(httpClient, nil, false, "--remote --remote-name=fork") if err != nil { t.Fatalf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) assert.Equal(t, "", output.Stderr()) reg.Verify(t) } func TestRepoFork_outside_parent_nontty(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} reg.Verify(t) defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git clone https://github.com/someone/REPO\.git`, 0, "") cs.Register(`git -C REPO remote add -f upstream https://github\.com/OWNER/REPO\.git`, 0, "") output, err := runCommand(httpClient, nil, false, "--clone OWNER/REPO") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) assert.Equal(t, output.Stderr(), "") } func TestRepoFork_already_forked(t *testing.T) { reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} _, restore := run.Stub() defer restore(t) output, err := runCommand(httpClient, nil, true, "--remote=false") if err != nil { t.Errorf("got unexpected error: %v", err) } r := regexp.MustCompile(`someone/REPO.*already exists`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match regexp /%s/\n> output\n%s\n", r, output.Stderr()) return } reg.Verify(t) } func TestRepoFork_reuseRemote(t *testing.T) { remotes := []*context.Remote{ { Remote: &git.Remote{Name: "origin", FetchURL: &url.URL{}}, Repo: ghrepo.New("someone", "REPO"), }, { Remote: &git.Remote{Name: "upstream", FetchURL: &url.URL{}}, Repo: ghrepo.New("OWNER", "REPO"), }, } reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} output, err := runCommand(httpClient, remotes, true, "--remote") if err != nil { t.Errorf("got unexpected error: %v", err) } r := regexp.MustCompile(`Using existing remote.*origin`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match: %q", output.Stderr()) return } reg.Verify(t) } func TestRepoFork_in_parent(t *testing.T) { reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} _, restore := run.Stub() defer restore(t) defer 
stubSince(2 * time.Second)() output, err := runCommand(httpClient, nil, true, "--remote=false") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) r := regexp.MustCompile(`Created fork.*someone/REPO`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match regexp /%s/\n> output\n%s\n", r, output) return } reg.Verify(t) } func TestRepoFork_outside(t *testing.T) { tests := []struct { name string args string }{ { name: "url arg", args: "--clone=false http://github.com/OWNER/REPO.git", }, { name: "full name arg", args: "--clone=false OWNER/REPO", }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} output, err := runCommand(httpClient, nil, true, tt.args) if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) r := regexp.MustCompile(`Created fork.*someone/REPO`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match regexp /%s/\n> output\n%s\n", r, output) return } reg.Verify(t) }) } } func TestRepoFork_in_parent_yes(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git remote add -f fork https://github\.com/someone/REPO\.git`, 0, "") output, err := runCommand(httpClient, nil, true, "--remote --remote-name=fork") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) //nolint:staticcheck // prefer exact matchers over ExpectLines test.ExpectLines(t, output.Stderr(), "Created fork.*someone/REPO", "Added remote.*fork") reg.Verify(t) } func TestRepoFork_outside_yes(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git clone https://github\.com/someone/REPO\.git`, 0, "") cs.Register(`git -C REPO remote add -f upstream https://github\.com/OWNER/REPO\.git`, 0, "") output, err := runCommand(httpClient, nil, true, "--clone OWNER/REPO") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) //nolint:staticcheck // prefer exact matchers over ExpectLines test.ExpectLines(t, output.Stderr(), "Created fork.*someone/REPO", "Cloned fork") reg.Verify(t) } func TestRepoFork_outside_survey_yes(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git clone https://github\.com/someone/REPO\.git`, 0, "") cs.Register(`git -C REPO remote add -f upstream https://github\.com/OWNER/REPO\.git`, 0, "") defer prompt.StubConfirm(true)() output, err := runCommand(httpClient, nil, true, "OWNER/REPO") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) //nolint:staticcheck // prefer exact matchers over ExpectLines test.ExpectLines(t, output.Stderr(), "Created fork.*someone/REPO", "Cloned fork") reg.Verify(t) } func TestRepoFork_outside_survey_no(t *testing.T) { defer stubSince(2 * time.Second)() reg := 
&httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} _, restore := run.Stub() defer restore(t) defer prompt.StubConfirm(false)() output, err := runCommand(httpClient, nil, true, "OWNER/REPO") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) r := regexp.MustCompile(`Created fork.*someone/REPO`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match regexp /%s/\n> output\n%s\n", r, output) return } reg.Verify(t) } func TestRepoFork_in_parent_survey_yes(t *testing.T) { reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} defer stubSince(2 * time.Second)() cs, restore := run.Stub() defer restore(t) cs.Register(`git remote add -f fork https://github\.com/someone/REPO\.git`, 0, "") defer prompt.StubConfirm(true)() output, err := runCommand(httpClient, nil, true, "--remote-name=fork") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) //nolint:staticcheck // prefer exact matchers over ExpectLines test.ExpectLines(t, output.Stderr(), "Created fork.*someone/REPO", "Added remote.*fork") reg.Verify(t) } func TestRepoFork_in_parent_survey_no(t *testing.T) { reg := &httpmock.Registry{} defer reg.Verify(t) defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} defer stubSince(2 * time.Second)() _, restore := run.Stub() defer restore(t) defer prompt.StubConfirm(false)() output, err := runCommand(httpClient, nil, true, "") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) r := regexp.MustCompile(`Created fork.*someone/REPO`) if !r.MatchString(output.Stderr()) { t.Errorf("output did not match regexp /%s/\n> output\n%s\n", r, output) return } } func Test_RepoFork_gitFlags(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, cmdTeardown := run.Stub() defer cmdTeardown(t) cs.Register(`git clone --depth 1 https://github.com/someone/REPO.git`, 0, "") cs.Register(`git -C REPO remote add -f upstream https://github.com/OWNER/REPO.git`, 0, "") output, err := runCommand(httpClient, nil, false, "--clone OWNER/REPO -- --depth 1") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) assert.Equal(t, output.Stderr(), "") reg.Verify(t) } func Test_RepoFork_flagError(t *testing.T) { _, err := runCommand(nil, nil, true, "--depth 1 OWNER/REPO") if err == nil || err.Error() != "unknown flag: --depth\nSeparate git clone flags with '--'." 
{ t.Errorf("unexpected error %v", err) } } func TestRepoFork_in_parent_match_protocol(t *testing.T) { defer stubSince(2 * time.Second)() reg := &httpmock.Registry{} defer reg.Verify(t) defer reg.StubWithFixturePath(200, "./forkResult.json")() httpClient := &http.Client{Transport: reg} cs, restore := run.Stub() defer restore(t) cs.Register(`git remote add -f fork git@github\.com:someone/REPO\.git`, 0, "") remotes := []*context.Remote{ { Remote: &git.Remote{Name: "origin", PushURL: &url.URL{ Scheme: "ssh", }}, Repo: ghrepo.New("OWNER", "REPO"), }, } output, err := runCommand(httpClient, remotes, true, "--remote --remote-name=fork") if err != nil { t.Errorf("error running command `repo fork`: %v", err) } assert.Equal(t, "", output.String()) //nolint:staticcheck // prefer exact matchers over ExpectLines test.ExpectLines(t, output.Stderr(), "Created fork.*someone/REPO", "Added remote.*fork") } func stubSince(d time.Duration) func() { originalSince := Since Since = func(t time.Time) time.Duration { return d } return func() { Since = originalSince } }
import java.io.*; import java.util.*; public class E237 implements Runnable { final static int INF = Integer.MAX_VALUE; class Edge { int from, to, capacity, cost, flow; Edge(int from, int to, int capacity, int cost) { this.from = from; this.to = to; this.capacity = capacity; this.cost = cost; } } void addEdge(int from, int to, int capacity, int cost, ArrayList<Integer>[] g, Edge[] edges, int last) { g[from].add(last); g[to].add(last + 1); edges[last] = new Edge(from, to, capacity, cost); edges[last + 1] = new Edge(to, from, 0, -cost); } void solve() throws Exception { String t = in.nextToken(); int n = in.nextInt(); String[] s = new String[n]; int[] a = new int[n]; for (int i = 0; i < n; i++) { s[i] = in.nextToken(); a[i] = in.nextInt(); } ArrayList<Integer>[] g = new ArrayList[n + 28]; for (int i = 0; i < n + 28; i++) g[i] = new ArrayList<Integer>(); int totalEdges = n + n * 26 + 26, nowEdges = -2; Edge[] edges = new Edge[totalEdges * 2]; for (int i = 1; i <= n; i++) addEdge(0, i, a[i - 1], i, g, edges, nowEdges += 2); int[] cnt = new int[26]; for (int i = 0; i < n; i++) { for (char c : s[i].toCharArray()) cnt[c - 'a']++; for (int j = 0; j < 26; j++) addEdge(i + 1, n + 1 + j, cnt[j], 0, g, edges, nowEdges += 2); Arrays.fill(cnt, 0); } for (char c : t.toCharArray()) cnt[c - 'a']++; for (int i = 0; i < 26; i++) addEdge(1 + n + i, 1 + n + 26, cnt[i], 0, g, edges, nowEdges += 2); long[] ans = minCostMaxFLow(0, 1 + n + 26, 1 + n + 26 + 1, g, edges); out.println(ans[0] == t.length() ? ans[1] : -1); } boolean djikstra(int s, int t, int n, ArrayList<Integer>[] g, Edge[] edges, final int[] d, int[] p, int[] phi, PriorityQueue<Integer> q, long[] answer) { Arrays.fill(d, INF); d[s] = 0; q.add(s); while (!q.isEmpty()) { int v = q.poll(); for (int id : g[v]) { Edge e = edges[id]; if (e.flow < e.capacity && d[e.to] > d[v] + e.cost + phi[v] - phi[e.to]) { d[e.to] = d[v] + e.cost + phi[v] - phi[e.to]; p[e.to] = id; q.add(e.to); } } } if (d[t] == INF) return false; int len = d[t] - phi[s] + phi[t], cur = t, flow = INF; while (cur != s) { Edge e = edges[p[cur]]; flow = Math.min(flow, e.capacity - e.flow); cur = e.from; } cur = t; while (cur != s) { Edge e = edges[p[cur]]; e.flow += flow; edges[p[cur] ^ 1].flow -= flow; cur = e.from; } answer[0] += flow; answer[1] += (long) len * flow; for (int i = 0; i < n; i++) if (d[i] < INF) phi[i] += d[i]; return true; } long[] minCostMaxFLow(int s, int t, int n, ArrayList<Integer>[] g, Edge[] edges) { long[] answer = new long[2]; // answer[0] = maxFlow, answer[1] = cost int[] p = new int[n], phi = new int[n]; final int[] d = new int[n]; PriorityQueue<Integer> q = new PriorityQueue<Integer>(new Comparator<Integer>() { public int compare(Integer a, Integer b) { return d[a] - d[b]; } }); Arrays.fill(phi, INF); phi[s] = 0; for (int i = 0; i < n - 1; i++) for (Edge e : edges) if (phi[e.from] != INF && phi[e.to] > phi[e.from] + e.cost) phi[e.to] = phi[e.from] + e.cost; while (djikstra(s, t, n, g, edges, d, p, phi, q, answer)); return answer; } public static void main(String[] args) { new E237().run(); } InputReader in; PrintWriter out; public void run() { try { File defaultInput = new File("input.txt"); if (defaultInput.exists()) { in = new InputReader("input.txt"); } else { in = new InputReader(); } out = new PrintWriter(System.out); solve(); out.close(); } catch (Exception e) { e.printStackTrace(); System.exit(261); } } class InputReader { BufferedReader reader; StringTokenizer tokenizer; InputReader() { reader = new BufferedReader(new InputStreamReader(System.in)); } 
InputReader(String fileName) throws FileNotFoundException { reader = new BufferedReader(new FileReader(new File(fileName))); } String readLine() throws IOException { return reader.readLine(); } String nextToken() throws IOException { while (tokenizer == null || !tokenizer.hasMoreTokens()) tokenizer = new StringTokenizer(readLine()); return tokenizer.nextToken(); } boolean hasMoreTokens() throws IOException { while (tokenizer == null || !tokenizer.hasMoreTokens()) { String s = readLine(); if (s == null) return false; tokenizer = new StringTokenizer(s); } return true; } int nextInt() throws NumberFormatException, IOException { return Integer.parseInt(nextToken()); } long nextLong() throws NumberFormatException, IOException { return Long.parseLong(nextToken()); } double nextDouble() throws NumberFormatException, IOException { return Double.parseDouble(nextToken()); } } }
// formatLine formats a given line in the wordlist to a numeric value (key) and a word
// value.
func formatLine(line string) (int, string) {
	stringSlice := strings.Fields(line)
	key, err := strconv.Atoi(stringSlice[0])
	if err != nil {
		log.Fatal(err)
	}
	return key, stringSlice[1]
}
/***
*vsnprnc.c - Version of _vsnprintf with the error return fix.
*
*       Copyright (c) Microsoft Corporation. All rights reserved.
*
*Purpose:
*       The _vsnprintf_c() flavor returns -1 in case there is no space
*       available for the null terminator & blanks out the buffer
*
*******************************************************************************/

#define _COUNT_ 1
#define _SWPRINTFS_ERROR_RETURN_FIX 1

#include <wchar.h>
#include "vsprintf.c"
“I think I just witnessed Mt. Gox die today. I didn’t get my bitcoin, but glad I came and tried.” – Reddit user ‘CoinSearcher’, after conducting a three-day protest at Mt. Gox’s headquarters in Tokyo.

Mt. Gox, the world’s original and once-largest bitcoin exchange, appears to be in a state of disarray after it suspended bitcoin withdrawals to work on what it said were technical issues. Meanwhile, the clamour of angry customer voices is growing.

The exchange’s moves have had a negative impact on the bitcoin markets. The price of 1 BTC plunged from $850 at the start of the week to $681, according to the CoinDesk Bitcoin Price Index, in the wake of the Gox announcement. The exchange has promised an update on Monday 10th February (Japan time).

The internal workings of Mt. Gox have long been the focus of discussion in the bitcoin community. Users have reported delays in obtaining a ‘verified’ account there after submitting the required identification documents.

@coindesk why do people still use mtgox after EVERYTHING. Stay the fuck away. — Laflamme Photo (@LaflammePhoto) February 7, 2014

Frustrated bitcoin owners have also written about unresolved customer service requests after suffering delays in withdrawing funds from the exchange, with some taking to Twitter to express their opinion on it.

70% polled cannot withdraw their money

A CoinDesk survey of readers who use Mt. Gox has found that nearly 70% of respondents have not received their funds after making withdrawal requests from the exchange. Some 914 respondents said they were still waiting to receive their funds. The median waiting time was between one and three months, with 22% reporting wait times of between one week and a month.

About a third of respondents said they did successfully withdraw funds from Mt. Gox – many of whom had short waiting times. About half reported receiving their funds within a week. But for everyone else, the waiting game continued.

The CoinDesk survey revealed that having a ‘verified’ or ‘trusted’ account at Mt. Gox did little to reduce withdrawal delays. The majority of CoinDesk readers polled, or more than 85%, said they had ‘verified’ or ‘trusted’ accounts at Mt. Gox. Some 68% of verified account holders, or 822 respondents, said they were still waiting for their withdrawal from the exchange. The median waiting time was between one and three months, and 78% of verified account holders polled said they had been waiting for up to three months.

The CoinDesk survey has attracted more than 2,800 responses since it went live on 4th February.

Reddit user flies from Australia to Gox for sit-in protest

It took a lone protestor to bring the simmering dissatisfaction with Mt. Gox to a boil. Flying for 16 hours from Australia to Japan for a three-day sit-in on a quest for answers as to the fate of his large bitcoin balance, the protestor, known on Reddit as ‘CoinSearcher’, eventually confronted CEO Mark Karpeles and business development manager Gonzague Gay-Bouchery. The protestor later posted a summary of his experiences on Reddit.

CoinSearcher appeared to alleviate some users’ fears that the top Mt. Gox executives had vanished. Gay-Bouchery’s explanation that most of Mt. Gox’s bitcoins were kept in secure, and not quickly accessible, physical cold-storage in multiple locations made sense to many.
“Because Gox is the best known of all the exchanges, we have been under the regulatory spotlight,” Gay-Bouchery told the protestor, adding: “This has created problems with government agencies, and also with our banking partners […] there are also some ongoing investigations, which we cannot talk about.”

Gay-Bouchery disputed data published by The Gox Report that the exchange had a backlog of 40,000 BTC – worth about $34m at the time – that had not been processed, saying that the figure was “not correct” (Mt. Gox subsequently altered its API to cut off real-time information to sites like The Gox Report). He reiterated the company’s claim that withdrawal problems were merely a technical issue, and that “all the coins are safe”.

After attending the weekly Tokyo bitcoin meetup on Thursday night, CoinSearcher said: “There was a general consensus amongst the participants that Mt. Gox was finished as an exchange. They acknowledged that Mt. Gox had played an important role in propelling bitcoin to what it is today, but its decline and ultimate closure was inevitable.”

A spread that was too good to be true

One of the clearest signs that all was not well at Mt. Gox was the exchange’s quoted US dollar price for bitcoin. Quoted prices on Mt. Gox began to diverge sharply from two other major exchanges, Bitstamp and BTC-e, last July. The initial spread shows Gox prices trading at several percentage points above the other exchanges throughout that month.

By the end of August, however, the divergence hit double digits. Gox prices were more than 19% above BTC-e’s prices on 22nd August, for example. Although the spread oscillated in the following months, it consistently exceeded the 10% mark. In the run-up to the freeze on Gox, on 28th January, the gap between Gox and Bitstamp’s rates stood at 20%, while the same measure between Gox and BTC-e stood at 26%.

The persistent price differences seemed to be a flagrant violation of the ‘law of one price’ – the economics concept that posits that the price of a freely traded good should be equal across all open markets. In theory, the massive price differences between the exchanges suggested that there was a persistent arbitrage opportunity to buy bitcoin cheaply on Bitstamp or BTC-e and sell it at a double-digit premium on Mt. Gox.

But as the CoinDesk survey shows, Mt. Gox customers have consistently failed to withdraw their funds from Gox over at least the last three months, when the spread was widest. This suggests that, in practice, most opportunists transferring currencies to Gox to take advantage of a higher sale price would have failed to get their funds out of the exchange.

A measure of desperation

The seemingly incredible arbitrage opportunity and Gox’s withdrawal freeze are linked. The roots of the Gox premium can be traced back to June, when the exchange announced it was putting US dollar withdrawals on a “temporary hiatus”. It later transpired that Gox and its chief executive, Mark Karpeles, had been ensnared in an operation by US federal agents as they moved against the exchange for failing to register as a ‘money service business’. The US Department of Homeland Security and the Secret Service seized three accounts linked to Gox containing more than $5m. As research from The Genesis Block shows, the executed seizure warrant was dated 19th June, the day before Gox announced it would halt dollar withdrawals.
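The spread figures quoted above are simple percentage premiums over the other exchanges' quotes. A minimal sketch of the arithmetic, using illustrative prices rather than actual market data:

def premium(gox_price, other_price):
    """Percentage premium of the Gox quote over another exchange's quote."""
    return (gox_price - other_price) / other_price * 100

# Illustrative quotes around 28th January (invented for this example):
gox, bitstamp, btce = 984.0, 820.0, 781.0

print(f"Gox vs Bitstamp: {premium(gox, bitstamp):.1f}%")  # ~20%
print(f"Gox vs BTC-e:    {premium(gox, btce):.1f}%")      # ~26%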
All the market observers CoinDesk spoke to agreed that the cause of the Gox premium was the exchange’s persistent withdrawal failures, dating back to June, when US dollar withdrawals were stopped. As the freeze took effect, Gox customers turned to bitcoin withdrawals as they attempted to get funds out. This worked for a time, but it also increased the volume of bids for bitcoin on the exchange.

“Effectively, the Mt. Gox price reflected the inability to withdraw funds in fiat. This creates only a bid for bitcoin,” said Greg Schvey, co-founder of The Genesis Block.

As a result of the increased volume of bids for bitcoin on Mt. Gox, the bitcoin price began to rise steadily, adding to a widening divergence from prices quoted on other major exchanges.

“We can interpret [the Gox premium] as a measure of fear on the part of customers that they’re not going to get their money back. Their desperation is measured by how much they’re willing to pay for bitcoin [on Gox],” said Garrick Hileman, an economic historian at the London School of Economics.

‘Coding himself out of a mess’

While the exchange has posted a number of notices on its website announcing withdrawal delays, its top executives have remained silent on the matter. The company has posted a notice of delays on its main trading page since the beginning of 2014, originally citing a backlog caused by Japanese New Year business holidays.

One prominent technical member of the bitcoin community thinks he knows what’s behind the current withdrawal freeze. Andreas Antonopoulos, who recently joined Blockchain.info as chief security officer, says he has studied exchange technologies over the past 15 years. His verdict on Gox’s withdrawal freeze, as an outsider, is scathing: “Mt. Gox has built an exchange based on a hodgepodge of technologies that are really not suitable for running an exchange. And it’s being run by people who don’t really have experience building and operating scalable systems.”

Antonopoulos outlined what he believes to be the technical reasons behind the Gox freeze. The root of the problem lies in its decision to use a version of the bitcoin client it customised itself, rather than the standard client. As a result, Gox handles the protocol with some discrepancies.

One of those discrepancies, as Antonopoulos understands it, is the way transactions are propagated through the network. A miner on Gox, for example, will prematurely be credited for a new block before the network has a chance to confirm the transaction. As a result, when the transaction hits the bitcoin network to be corroborated, it is rejected. Gox’s solution is to cancel the initial transaction and resubmit it until it is approved.

“This is like putting a Band-Aid on the problem. Gox should not be generating non-standard transactions in the first place. Band-Aids like this will further exacerbate scalability problems,” Antonopoulos said.

In the case of the mining example, the cancelled and resubmitted transactions cause delays in fulfilling withdrawal requests within Gox. This doesn’t necessarily cause huge problems unless the system is under pressure from an external factor, like a spike in withdrawal requests, for example.

“When transactions increase, then there are more delayed transactions, which can cause a panic. It just snowballs,” Antonopoulos said.

A lack of detailed comment or response from Mt. Gox to users or the media has only increased customer concerns about the fate of their money.
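Antonopoulos's "snowball" description is, in effect, a queueing argument: every rejected transaction is cancelled and resubmitted, so a rejection rate multiplies the effective load, and once that exceeds processing capacity the backlog grows without bound. A toy simulation of the dynamic he describes (the rates are invented for illustration; this is not Gox's actual code or data):

def simulate_backlog(arrival_rate, capacity, reject_rate, steps=10):
    """Toy queue: rejected transactions come back as resubmissions,
    adding their work back onto the queue each step."""
    backlog = 0.0
    for t in range(steps):
        processed = min(backlog + arrival_rate, capacity)
        rejected = processed * reject_rate   # cancelled and resubmitted
        backlog = backlog + arrival_rate - processed + rejected
        print(f"t={t}: backlog={backlog:.0f}")
    return backlog

# Under light load the backlog stays bounded; a spike in withdrawals
# plus the same rejection rate makes it grow every step.
simulate_backlog(arrival_rate=60, capacity=100, reject_rate=0.3)
simulate_backlog(arrival_rate=120, capacity=100, reject_rate=0.3)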
The company’s location in Japan – where outsiders’ access to information is often limited by a language barrier – has shielded the company from the kind of scrutiny a US-based operation would receive. Furthermore, Gox’s chief executive has made little attempt to address the issues publicly.

“I’ve heard that Mark [Karpeles] has rolled up his sleeves and is trying to code himself out of this mess,” Antonopoulos said. “It’s clear that he lacks the expertise to fix this other than applying another Band-Aid. The things they’ve done in the past won’t get them out of this.”

Looming insolvency?

Roger Ver declared last July that he had looked at Mt. Gox’s books and determined it had plenty of fiat currency in the bank, and that withdrawal delays were not being caused by a lack of fiat. He was still optimistic the exchange would fulfill its obligations.

“I don’t have any special insight into Mt. Gox at the moment, but if I had to guess, I think they have the bitcoins and the fiat,” he told CoinDesk. “I actually think, in the long run, this will be good for bitcoin because it will be clear to the world that there is an open invitation for true professionals to quickly dominate the bitcoin exchange industry.”

Bobby Lee, CEO of exchange BTC China, which at times eclipsed Mt. Gox’s trading volumes in 2013, said he also accepted its official explanations. While he didn’t see its immediate problems reaching China, he said negative stories about a company the size of Mt. Gox “could put a damper on the whole bitcoin ecosystem”.

“I was actually quite surprised to hear about the suspension of bitcoin withdrawals at Mt. Gox,” he said. “Their explanation is plausible, about the need to diagnose a technical situation, which thus requires the halting of bitcoin transfers. Running an exchange is a complex job, especially with a large audience, and when dealing with a real decentralized currency like bitcoin.”

“Their restrictions and delays on fiat currency withdrawals seem suspicious to me, as there is no adequate explanation for that.”

He went on to say: “Regarding the BTC withdrawal limitations, since they promised to give everyone an update on Monday, I would give them the benefit of the doubt at this point. It would also help customers understand better if Mt. Gox can make a clear statement about their overall solvency status.”

Other prominent bitcoiners were less gentle:

In what should be a surprise to no one, mtgox has suspended operations. We Need a licensed US #bitcoin exchange. http://t.co/ygAOViFiB0 — jeremy liew (@jeremysliew) February 7, 2014

Antonopoulos’ technical appraisal of Gox may be damning, but he stops short of indicting the exchange as fraudulent. He pulls no punches with his verdict on its business acumen, however: “I do not think Gox has solvency problems. It’s simply a business being run in an amateurish way, in a market that is far more demanding than can support amateurish operations.”

CoinDesk also contacted US exchange and payment processor Coinbase, but it declined to comment.

Innocent beginnings

Mt. Gox, owned by a company called Tibanne Ltd, was the largest bitcoin-fiat currency exchange from 2010 until last year. It started life in 2009 as a place for players of Magic: The Gathering to trade cards. Tibanne is run by Mark Karpeles, who acquired the exchange from founder Jed McCaleb in 2011.

In its four-year history, the pioneering exchange has suffered hacking attempts, DDoS attacks, and the same regulatory issues that have plagued other bitcoin businesses.
Along with technical issues, the glare of law enforcement’s spotlight since April 2013 has seen Mt. Gox’s US dollar market share plunge from over 70% in April to about 19% now – significantly behind Europe’s Bitstamp and BTC-e, with 30% and 24% respectively. Mt. Gox is also the subject of a current $75m lawsuit from former partner CoinLab, which it has countersued for $5.5m.

Legacy of resilience?

One of the recurring themes in Mt. Gox’s story is its ability to recover from seemingly insurmountable setbacks, be they bank account seizures or electronic theft. The media has made a habit of chronicling the ‘fall of Mt. Gox’ (Wired, Business Insider), with CoinDesk being no exception – only to be proven wrong when the exchange’s volumes bounce back.

Some market watchers remain reluctant to count Mt. Gox out, even with its current freeze on withdrawals. “Every time it’s had some seemingly crippling issue, it’s always managed to maintain market share,” said Schvey of The Genesis Block.

Mt. Gox’s historic position as the dominant exchange in the global cryptocurrency economy appears to have helped it build a valuable brand that has linked it inextricably with the growth of bitcoin itself. As new bitcoin users flood into the cryptocurrency economy – which has grown from a market capitalisation of $250m just 12 months ago to $8.6bn today – many of these new investors start their cryptocurrency education at the foot of Mt. Gox.

“New buyers come in and they don’t know the history. There is a lot of brand recognition, and it’s going to take time for that brand to be completely destroyed through incompetence,” said Antonopoulos.

The Mt. Gox freeze may have dampened the price of bitcoin, but Schvey, for one, believes the impact has already been priced in. “We saw major sell-offs on Gox, but the market impact looks like it’s largely been realised at this point. As soon as people get their money out, other exchanges will pick up [market share],” he said.

In Antonopoulos’ view, however, the story of Mt. Gox isn’t one of resilience in the face of adversity. Instead, the constant breakdowns in Tokyo tell a tale of gradual disintegration, with each breakdown or withdrawal freeze jolting the firm closer to the edge.

He said: “They will keep causing crashes in the bitcoin network until everyone abandons them, so abandon them sooner rather than later. Not because they’re frauds, but because they are amateurish – clownish – in their operations.”

This article was co-authored by Joon Ian Wong, Jon Southurst and Emily Spaven.

Editor’s note: The CoinDesk Bitcoin Price Index committee has recently been reviewing Mt. Gox’s inclusion in the BPI. Friday’s announcement from Gox about halting bitcoin withdrawals has added more fuel to the discussion. Any changes that are made will be announced on CoinDesk. Feel free to let us know your thoughts in the comments.
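On the editor's note: the methodological worry is that averaging in a quote nobody can realise inflates a composite index. A toy calculation (hypothetical prices and a simple unweighted mean, not the BPI's actual methodology) makes the effect concrete:

quotes = {"mtgox": 984.0, "bitstamp": 820.0, "btc-e": 781.0}  # illustrative

index_with_gox = sum(quotes.values()) / len(quotes)
ex_gox = {k: v for k, v in quotes.items() if k != "mtgox"}
index_without_gox = sum(ex_gox.values()) / len(ex_gox)

print(f"simple average with Gox:    ${index_with_gox:.2f}")    # ~$861.67
print(f"simple average without Gox: ${index_without_gox:.2f}") # ~$800.50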
Archibald Loudon and the Politics of Print and Indian-Hating in the Early Republic

Abstract: Indian-hating, a critical building block of white nationalism during the early American republic, was built from the grassroots by printers who were also local citizens with their own personal and political axes to grind. The Pennsylvanian Archibald Loudon was one of these printers. His two-volume collection of frontier captivity, war, and atrocity narratives, titled A Selection of Some of the Most Interesting Narratives, of Outrages, Committed by the Indians, in Their Wars with the White People, epitomizes how printers collected and disseminated local stories of Indigenous violence, filtered through the lenses of their own partisan politics, to generate hatred for Indians on the eve of the War of 1812. This essay tells the story of Loudon and his Selection. It analyzes how Loudon's experiences as a colonial frontier refugee, Revolutionary War soldier, stalwart Democratic-Republican, and friend of the writer and politician Hugh Henry Brackenridge made him into an Indian-hater. It also assesses his two-volume Selection as a remarkable collection of local stories that framed the violent as well as the noble acts of local Native peoples, and the harrowing tales of white martyrs and settlers who survived, so as to influence national conversations about race and belonging, politics and war in the early republic.
In the third part of his series on Islam in south-east Asia, Roger Hardy asks if the world's biggest Muslim country can shake off extremism and make the transition to democracy.

You can still see the crater outside the Australian embassy in Jakarta where a truck, packed with explosives, blew up last September - killing 11 people. Experts were quick to see the hand of Jemaah Islamiah - the group held responsible for the Bali bombings of 2002, and widely seen as the regional arm of al-Qaeda.

The attack came at a sensitive moment, when this huge nation of 220 million people was about to elect a new president - a key step on the road from dictatorship to democracy. In the event, voting went off peacefully. But the bombing showed that Indonesia's fledgling democracy still faces daunting challenges.

The teenaged schoolgirls in Yogyakarta, in central Java, gave me a warm and very noisy welcome. Identically clothed in neat blue dresses and white headscarves, they laughed and joked. One even sang a Mariah Carey song. It was a far cry from the prevalent Western image of a madrasa, or Muslim school.

The girls' school is part of the huge educational network run by Muhammadiyah, one of Indonesia's oldest and biggest Muslim grass-roots organisations. Claiming a staggering 35 million members, Muhammadiyah runs schools, universities, clinics and charities across this far-flung country.

But only a few miles from the girls' school, I visited a very different madrasa. This was the infamous school in Ngruki founded by Abu Bakar Ba'asyir - the elderly cleric currently on trial in Jakarta as the alleged spiritual leader of Jemaah Islamiah.

Asked whether he condemned the Bali bombings, Wahyuddin - the man who runs the Ngruki school - said merely that he "disagreed" with them. Americans attack Muslims, he said, so Muslims attack Americans. It was a case of action and reaction. No one was attacking the Japanese.

Back in the capital, I visited a bar which had been smashed up during Ramadan, the Muslim month of fasting. There I met Hilmy Bakr Almascaty, one of the leaders of the Islamic Defenders' Front - the group which carried out the attack. He made it clear that any bar or restaurant serving alcohol during the holy month was a legitimate target.

Islamic radicals like these pose a direct threat to Indonesia's centuries-old tradition of tolerance and moderation. I began to wonder if the "silent majority" wasn't just a little too silent.

I met one of the most articulate members of the new government, Defence Minister Juwono Sudarsono. He links the rise of radicalism to the perception that corruption and social injustice are rife in south-east Asia. But the government is reluctant to outlaw Jemaah Islamiah, for fear of upsetting Muslim sensibilities.

I went to the trial of Abu Bakar Ba'asyir in a makeshift courtroom in Jakarta. There, at a distance, I saw the man who in many ways symbolises the radical Islamist challenge. Sitting with a red and white keffieh draped around his shoulders, the elderly cleric smiled for the cameras.

Many Indonesian Muslims seem to regard him as a kindly old man who has no link whatever to Bali and the other bombings Indonesia has suffered. One young radical I met at the trial said bluntly that the Bali attack had been carried out by the CIA - and that the trial was a CIA conspiracy.

In numbers, Indonesia's moderate mainstream - bolstered by groups like Muhammadiyah - dwarfs the radical fringe. But I was reminded of the cryptic words of a former British prime minister.
"It's not enough to be nice." Roger Hardy's programme on Indonesia - the third in a four-part series, "Islam's Furthest Frontier" - is broadcast on the BBC World Service on 21 February. Does South East Asia hold the key to Islam and modernity?
<gh_stars>0 /* Copyright 2018 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package vclib import ( "context" "errors" "fmt" "path/filepath" "strings" "github.com/vmware/govmomi/find" "github.com/vmware/govmomi/object" "github.com/vmware/govmomi/property" "github.com/vmware/govmomi/vim25/mo" "github.com/vmware/govmomi/vim25/types" "github.com/vmware/govmomi/vslm" klog "k8s.io/klog/v2" ) // Datacenter extends the govmomi Datacenter object type Datacenter struct { *object.Datacenter } // GetDatacenter returns the DataCenter Object for the given datacenterPath // If datacenter is located in a folder, include full path to datacenter else just provide the datacenter name func GetDatacenter(ctx context.Context, connection *VSphereConnection, datacenterPath string) (*Datacenter, error) { finder := find.NewFinder(connection.Client, false) datacenter, err := finder.Datacenter(ctx, datacenterPath) if err != nil { klog.Errorf("Failed to find the datacenter: %s. err: %+v", datacenterPath, err) return nil, err } dc := Datacenter{datacenter} return &dc, nil } // GetAllDatacenter returns all the DataCenter Objects func GetAllDatacenter(ctx context.Context, connection *VSphereConnection) ([]*Datacenter, error) { var dc []*Datacenter finder := find.NewFinder(connection.Client, false) datacenters, err := finder.DatacenterList(ctx, "*") if err != nil { klog.Errorf("Failed to find the datacenter. err: %+v", err) return nil, err } for _, datacenter := range datacenters { dc = append(dc, &(Datacenter{datacenter})) } return dc, nil } // GetNumberOfDatacenters returns the number of DataCenters in this vCenter func GetNumberOfDatacenters(ctx context.Context, connection *VSphereConnection) (int, error) { finder := find.NewFinder(connection.Client, false) datacenters, err := finder.DatacenterList(ctx, "*") if err != nil { klog.Errorf("Failed to find the datacenter. err: %+v", err) return 0, err } return len(datacenters), nil } // GetVMByIP gets the VM object from the given IP address func (dc *Datacenter) GetVMByIP(ctx context.Context, ipAddy string) (*VirtualMachine, error) { s := object.NewSearchIndex(dc.Client()) ipAddy = strings.ToLower(strings.TrimSpace(ipAddy)) svm, err := s.FindByIp(ctx, dc.Datacenter, ipAddy, true) if err != nil { klog.Errorf("Failed to find VM by IP. VM IP: %s, err: %+v", ipAddy, err) return nil, err } if svm == nil { klog.Errorf("Unable to find VM by IP. VM IP: %s", ipAddy) return nil, ErrNoVMFound } virtualMachine := VirtualMachine{svm.(*object.VirtualMachine), dc} return &virtualMachine, nil } // GetVMByDNSName gets the VM object from the given dns name func (dc *Datacenter) GetVMByDNSName(ctx context.Context, dnsName string) (*VirtualMachine, error) { s := object.NewSearchIndex(dc.Client()) dnsName = strings.ToLower(strings.TrimSpace(dnsName)) svms, err := s.FindAllByDnsName(ctx, dc.Datacenter, dnsName, true) if err != nil { klog.Errorf("Failed to find VM by DNS Name. VM DNS Name: %s, err: %+v", dnsName, err) return nil, err } if len(svms) == 0 { klog.Errorf("Unable to find VM by DNS Name. 
VM DNS Name: %s", dnsName) return nil, ErrNoVMFound } if len(svms) > 1 { klog.Errorf("Multiple vms found VM by DNS Name. DNS Name: %s", dnsName) return nil, ErrMultipleVMsFound } virtualMachine := VirtualMachine{svms[0].(*object.VirtualMachine), dc} return &virtualMachine, nil } // GetVMByUUID gets the VM object from the given vmUUID func (dc *Datacenter) GetVMByUUID(ctx context.Context, vmUUID string) (*VirtualMachine, error) { s := object.NewSearchIndex(dc.Client()) vmUUID = strings.ToLower(strings.TrimSpace(vmUUID)) svm, err := s.FindByUuid(ctx, dc.Datacenter, vmUUID, true, nil) if err != nil { klog.Errorf("Failed to find VM by UUID. VM UUID: %s, err: %+v", vmUUID, err) return nil, err } if svm == nil { klog.Errorf("Unable to find VM by UUID. VM UUID: %s", vmUUID) return nil, ErrNoVMFound } virtualMachine := VirtualMachine{svm.(*object.VirtualMachine), dc} return &virtualMachine, nil } // GetVMByPath gets the VM object from the given vmPath // vmPath should be the full path to VM and not just the name func (dc *Datacenter) GetVMByPath(ctx context.Context, vmPath string) (*VirtualMachine, error) { finder := getFinder(dc) vm, err := finder.VirtualMachine(ctx, vmPath) if err != nil { klog.Errorf("Failed to find VM by Path. VM Path: %s, err: %+v", vmPath, err) return nil, err } virtualMachine := VirtualMachine{vm, dc} return &virtualMachine, nil } // GetAllDatastores gets the datastore URL to DatastoreInfo map for all the datastores in // the datacenter. func (dc *Datacenter) GetAllDatastores(ctx context.Context) (map[string]*DatastoreInfo, error) { finder := getFinder(dc) datastores, err := finder.DatastoreList(ctx, "*") if err != nil { klog.Errorf("Failed to get all the datastores. err: %+v", err) return nil, err } var dsList []types.ManagedObjectReference for _, ds := range datastores { dsList = append(dsList, ds.Reference()) } var dsMoList []mo.Datastore pc := property.DefaultCollector(dc.Client()) properties := []string{DatastoreInfoProperty} err = pc.Retrieve(ctx, dsList, properties, &dsMoList) if err != nil { klog.Errorf("Failed to get Datastore managed objects from datastore objects."+ " dsObjList: %+v, properties: %+v, err: %v", dsList, properties, err) return nil, err } dsURLInfoMap := make(map[string]*DatastoreInfo) for _, dsMo := range dsMoList { dsURLInfoMap[dsMo.Info.GetDatastoreInfo().Url] = &DatastoreInfo{ &Datastore{object.NewDatastore(dc.Client(), dsMo.Reference()), dc}, dsMo.Info.GetDatastoreInfo()} } klog.V(9).Infof("dsURLInfoMap : %+v", dsURLInfoMap) return dsURLInfoMap, nil } // GetDatastoreByPath gets the Datastore object from the given vmDiskPath func (dc *Datacenter) GetDatastoreByPath(ctx context.Context, vmDiskPath string) (*DatastoreInfo, error) { datastorePathObj := new(object.DatastorePath) isSuccess := datastorePathObj.FromString(vmDiskPath) if !isSuccess { klog.Errorf("Failed to parse vmDiskPath: %s", vmDiskPath) return nil, errors.New("Failed to parse vmDiskPath") } return dc.GetDatastoreByName(ctx, datastorePathObj.Datastore) } // GetDatastoreByName gets the Datastore object for the given datastore name func (dc *Datacenter) GetDatastoreByName(ctx context.Context, name string) (*DatastoreInfo, error) { finder := getFinder(dc) ds, err := finder.Datastore(ctx, name) if err != nil { klog.Errorf("Failed while searching for datastore: %s. 
err: %+v", name, err) return nil, err } var dsMo mo.Datastore pc := property.DefaultCollector(dc.Client()) properties := []string{DatastoreInfoProperty} err = pc.RetrieveOne(ctx, ds.Reference(), properties, &dsMo) if err != nil { klog.Errorf("Failed to get Datastore managed objects from datastore objects."+ " properties: %+v, err: %v", properties, err) return nil, err } return &DatastoreInfo{ &Datastore{ds, dc}, dsMo.Info.GetDatastoreInfo()}, nil } // GetResourcePool gets the resource pool for the given path func (dc *Datacenter) GetResourcePool(ctx context.Context, computePath string) (*object.ResourcePool, error) { finder := getFinder(dc) var computeResource *object.ComputeResource var err error if computePath == "" { computeResource, err = finder.DefaultComputeResource(ctx) } else { computeResource, err = finder.ComputeResource(ctx, computePath) } if err != nil { klog.Errorf("Failed to get the ResourcePool for computePath '%s'. err: %+v", computePath, err) return nil, err } return computeResource.ResourcePool(ctx) } // GetFolderByPath gets the Folder Object from the given folder path // folderPath should be the full path to folder func (dc *Datacenter) GetFolderByPath(ctx context.Context, folderPath string) (*Folder, error) { finder := getFinder(dc) vmFolder, err := finder.Folder(ctx, folderPath) if err != nil { klog.Errorf("Failed to get the folder reference for %s. err: %+v", folderPath, err) return nil, err } folder := Folder{vmFolder, dc} return &folder, nil } // GetVMMoList gets the VM Managed Objects with the given properties from the VM object func (dc *Datacenter) GetVMMoList(ctx context.Context, vmObjList []*VirtualMachine, properties []string) ([]mo.VirtualMachine, error) { var vmMoList []mo.VirtualMachine var vmRefs []types.ManagedObjectReference if len(vmObjList) < 1 { klog.Error("VirtualMachine Object list is empty") return nil, fmt.Errorf("VirtualMachine Object list is empty") } for _, vmObj := range vmObjList { vmRefs = append(vmRefs, vmObj.Reference()) } pc := property.DefaultCollector(dc.Client()) err := pc.Retrieve(ctx, vmRefs, properties, &vmMoList) if err != nil { klog.Errorf("Failed to get VM managed objects from VM objects. vmObjList: %+v, properties: %+v, err: %v", vmObjList, properties, err) return nil, err } return vmMoList, nil } // GetVirtualDiskPage83Data gets the virtual disk UUID by diskPath func (dc *Datacenter) GetVirtualDiskPage83Data(ctx context.Context, diskPath string) (string, error) { if len(diskPath) > 0 && filepath.Ext(diskPath) != ".vmdk" { diskPath += ".vmdk" } vdm := object.NewVirtualDiskManager(dc.Client()) // Returns uuid of vmdk virtual disk diskUUID, err := vdm.QueryVirtualDiskUuid(ctx, diskPath, dc.Datacenter) if err != nil { klog.Warningf("QueryVirtualDiskUuid failed for diskPath: %q. 
err: %+v", diskPath, err) return "", err } diskUUID = formatVirtualDiskUUID(diskUUID) return diskUUID, nil } // GetDatastoreMoList gets the Datastore Managed Objects with the given properties from the datastore objects func (dc *Datacenter) GetDatastoreMoList(ctx context.Context, dsObjList []*Datastore, properties []string) ([]mo.Datastore, error) { var dsMoList []mo.Datastore var dsRefs []types.ManagedObjectReference if len(dsObjList) < 1 { klog.Error("Datastore Object list is empty") return nil, fmt.Errorf("Datastore Object list is empty") } for _, dsObj := range dsObjList { dsRefs = append(dsRefs, dsObj.Reference()) } pc := property.DefaultCollector(dc.Client()) err := pc.Retrieve(ctx, dsRefs, properties, &dsMoList) if err != nil { klog.Errorf("Failed to get Datastore managed objects from datastore objects. dsObjList: %+v, properties: %+v, err: %v", dsObjList, properties, err) return nil, err } return dsMoList, nil } // CheckDisksAttached checks if the disk is attached to node. // This is done by comparing the volume path with the backing.FilePath on the VM Virtual disk devices. func (dc *Datacenter) CheckDisksAttached(ctx context.Context, nodeVolumes map[string][]string) (map[string]map[string]bool, error) { attached := make(map[string]map[string]bool) var vmList []*VirtualMachine for nodeName, volPaths := range nodeVolumes { for _, volPath := range volPaths { setNodeVolumeMap(attached, volPath, nodeName, false) } vm, err := dc.GetVMByPath(ctx, nodeName) if err != nil { if IsNotFound(err) { klog.Warningf("Node %q does not exist, vSphere CP will assume disks %v are not attached to it.", nodeName, volPaths) } continue } vmList = append(vmList, vm) } if len(vmList) == 0 { klog.V(2).Info("vSphere CP will assume no disks are attached to any node.") return attached, nil } vmMoList, err := dc.GetVMMoList(ctx, vmList, []string{"config.hardware.device", "name"}) if err != nil { // When there is an error fetching instance information // it is safer to return nil and let volume information not be touched. klog.Errorf("Failed to get VM Managed object for nodes: %+v. err: +%v", vmList, err) return nil, err } for _, vmMo := range vmMoList { if vmMo.Config == nil { klog.Errorf("Config is not available for VM: %q", vmMo.Name) continue } for nodeName, volPaths := range nodeVolumes { if nodeName == vmMo.Name { verifyVolumePathsForVM(vmMo, volPaths, attached) } } } return attached, nil } // VerifyVolumePathsForVM verifies if the volume paths (volPaths) are attached to VM. func verifyVolumePathsForVM(vmMo mo.VirtualMachine, volPaths []string, nodeVolumeMap map[string]map[string]bool) { // Verify if the volume paths are present on the VM backing virtual disk devices for _, volPath := range volPaths { vmDevices := object.VirtualDeviceList(vmMo.Config.Hardware.Device) for _, device := range vmDevices { if vmDevices.TypeName(device) == "VirtualDisk" { virtualDevice := device.GetVirtualDevice() if backing, ok := virtualDevice.Backing.(*types.VirtualDiskFlatVer2BackingInfo); ok { if backing.FileName == volPath { setNodeVolumeMap(nodeVolumeMap, volPath, vmMo.Name, true) } } } } } } func setNodeVolumeMap( nodeVolumeMap map[string]map[string]bool, volumePath string, nodeName string, check bool) { volumeMap := nodeVolumeMap[nodeName] if volumeMap == nil { volumeMap = make(map[string]bool) nodeVolumeMap[nodeName] = volumeMap } volumeMap[volumePath] = check } // GetAllDatastoreClusters returns all datastore clusters and optionally their // children. 
func (dc *Datacenter) GetAllDatastoreClusters(ctx context.Context, child bool) (map[string]*StoragePodInfo, error) { finder := getFinder(dc) storagePods, err := finder.DatastoreClusterList(ctx, "*") if err != nil { klog.Errorf("Failed to get all the datastore clusters. err: %+v", err) return nil, ErrNoDataStoreClustersFound } var spList []types.ManagedObjectReference for _, sp := range storagePods { spList = append(spList, sp.Reference()) } var spMoList []mo.StoragePod pc := property.DefaultCollector(dc.Client()) properties := []string{StoragePodDrsEntryProperty, StoragePodProperty} err = pc.Retrieve(ctx, spList, properties, &spMoList) if err != nil { klog.Errorf("Failed to get Datastore managed objects from datastore objects."+ " dsObjList: %+v, properties: %+v, err: %v", spList, properties, err) return nil, err } spURLInfoMap := make(map[string]*StoragePodInfo) for _, spMo := range spMoList { spURLInfoMap[spMo.Summary.Name] = &StoragePodInfo{ &StoragePod{ dc, object.NewStoragePod(dc.Client(), spMo.Reference()), make([]*Datastore, 0), }, spMo.Summary, &spMo.PodStorageDrsEntry.StorageDrsConfig, make([]*DatastoreInfo, 0), } if child { err := spURLInfoMap[spMo.Summary.Name].PopulateChildDatastoreInfos(ctx, false) if err != nil { klog.Warningf("PopulateChildDatastoreInfos Failed. Err: %v", err) } } } klog.V(9).Infof("spURLInfoMap : %+v", spURLInfoMap) return spURLInfoMap, nil } // GetDatastoreClusterByName gets the DatastoreCluster object for the given name func (dc *Datacenter) GetDatastoreClusterByName(ctx context.Context, name string) (*StoragePodInfo, error) { finder := getFinder(dc) ds, err := finder.DatastoreCluster(ctx, name) if err != nil { klog.Errorf("Failed while searching for datastore cluster: %s. err: %+v", name, err) return nil, err } var spMo mo.StoragePod pc := property.DefaultCollector(dc.Client()) properties := []string{StoragePodDrsEntryProperty, StoragePodProperty} err = pc.RetrieveOne(ctx, ds.Reference(), properties, &spMo) if err != nil { klog.Errorf("Failed to get Datastore managed objects from datastore objects."+ " properties: %+v, err: %v", properties, err) return nil, err } return &StoragePodInfo{ &StoragePod{ dc, object.NewStoragePod(dc.Client(), spMo.Reference()), make([]*Datastore, 0), }, spMo.Summary, &spMo.PodStorageDrsEntry.StorageDrsConfig, make([]*DatastoreInfo, 0), }, nil } // CreateFirstClassDisk creates a new first class disk. func (dc *Datacenter) CreateFirstClassDisk(ctx context.Context, datastoreName string, datastoreType ParentDatastoreType, diskName string, diskSize int64) error { m := vslm.NewObjectManager(dc.Client()) var pool *object.ResourcePool var ds types.ManagedObjectReference if datastoreType == TypeDatastoreCluster { storagePod, err := dc.GetDatastoreClusterByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreClusterByName failed. Err: %v", err) return err } ds = storagePod.Reference() pool, err = dc.GetResourcePool(ctx, "") if err != nil { klog.Errorf("GetResourcePool failed. Err: %v", err) return err } } else { datastore, err := dc.GetDatastoreByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreByName failed. 
Err: %v", err) return err } ds = datastore.Reference() } spec := types.VslmCreateSpec{ Name: diskName, CapacityInMB: diskSize, BackingSpec: &types.VslmCreateSpecDiskFileBackingSpec{ VslmCreateSpecBackingSpec: types.VslmCreateSpecBackingSpec{ Datastore: ds, }, ProvisioningType: string(types.BaseConfigInfoDiskFileBackingInfoProvisioningTypeThin), }, } if datastoreType == TypeDatastoreCluster { err := m.PlaceDisk(ctx, &spec, pool.Reference()) if err != nil { klog.Errorf("PlaceDisk(%s) failed. Err: %v", diskName, err) return err } } task, err := m.CreateDisk(ctx, spec) if err != nil { klog.Errorf("CreateDisk(%s) failed. Err: %v", diskName, err) return err } err = task.Wait(ctx) if err != nil { klog.Errorf("Wait(%s) failed. Err: %v", diskName, err) return err } return nil } // GetFirstClassDisk searches for an existing FCD. func (dc *Datacenter) GetFirstClassDisk(ctx context.Context, datastoreName string, datastoreType ParentDatastoreType, diskID string, findBy FindFCD) (*FirstClassDiskInfo, error) { var fcd *FirstClassDiskInfo if datastoreType == TypeDatastoreCluster { storagePod, err := dc.GetDatastoreClusterByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreClusterByName failed. Err: %v", err) return nil, err } fcd, err = storagePod.GetFirstClassDiskInfo(ctx, diskID, findBy) if err != nil { klog.Errorf("GetFirstClassDiskByName failed. Err: %v", err) return nil, err } } else { datastore, err := dc.GetDatastoreByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreByName failed. Err: %v", err) return nil, err } fcd, err = datastore.GetFirstClassDiskInfo(ctx, diskID, findBy) if err != nil { klog.Errorf("GetFirstClassDiskByName failed. Err: %v", err) return nil, err } } return fcd, nil } // GetAllFirstClassDisks returns all known FCDs. func (dc *Datacenter) GetAllFirstClassDisks(ctx context.Context) ([]*FirstClassDiskInfo, error) { storagePods, errDsClusters := dc.GetAllDatastoreClusters(ctx, true) if errDsClusters != ErrNoDataStoreClustersFound { klog.Warningf("GetAllDatastoreClusters failed. Err: %v", errDsClusters) return nil, errDsClusters } datastores, err := dc.GetAllDatastores(ctx) if err != nil { klog.Errorf("GetAllDatastores failed. Err: %v", err) return nil, err } alreadyVisited := make([]string, 0) firstClassDisks := make([]*FirstClassDiskInfo, 0) if errDsClusters != ErrNoDataStoreClustersFound { for _, storagePod := range storagePods { err := storagePod.PopulateChildDatastoreInfos(ctx, false) if err != nil { klog.Warningf("PopulateChildDatastores failed. Err: %v", err) continue } for _, datastore := range storagePod.DatastoreInfos { alreadyVisited = append(alreadyVisited, datastore.Info.Name) } disks, err := storagePod.ListFirstClassDisksInfo(ctx) if err != nil { klog.Warningf("ListFirstClassDisks failed for %s. Err: %v", storagePod.Name(), err) continue } firstClassDisks = append(firstClassDisks, disks...) } } for _, datastore := range datastores { if ExistsInList(datastore.Info.Name, alreadyVisited, false) { continue } alreadyVisited = append(alreadyVisited, datastore.Info.Name) disks, err := datastore.ListFirstClassDiskInfos(ctx) if err != nil { klog.Warningf("ListFirstClassDisks failed for %s. Err: %v", datastore.Info.Name, err) continue } firstClassDisks = append(firstClassDisks, disks...) } return firstClassDisks, nil } // DoesFirstClassDiskExist returns information about an FCD if it exists. 
func (dc *Datacenter) DoesFirstClassDiskExist(ctx context.Context, fcdID string) (*FirstClassDiskInfo, error) { datastores, err := dc.GetAllDatastores(ctx) if err != nil { klog.Errorf("GetAllDatastores failed. Err: %v", err) return nil, err } for _, datastore := range datastores { fcd, err := datastore.GetFirstClassDiskInfo(ctx, fcdID, FindFCDByID) if err == nil { klog.Infof("DoesFirstClassDiskExist(%s): FOUND", fcdID) return fcd, nil } } klog.Infof("DoesFirstClassDiskExist(%s): NOT FOUND", fcdID) return nil, ErrNoDiskIDFound } // DeleteFirstClassDisk deletes an FCD. func (dc *Datacenter) DeleteFirstClassDisk(ctx context.Context, datastoreName string, datastoreType ParentDatastoreType, diskID string) error { var ds types.ManagedObjectReference if datastoreType == TypeDatastoreCluster { storagePod, err := dc.GetDatastoreClusterByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreClusterByName failed. Err: %v", err) return err } datastore, err := storagePod.GetDatastoreThatOwnsFCD(ctx, diskID) if err != nil { klog.Errorf("GetDatastoreThatOwnsFCD failed. Err: %v", err) return err } ds = datastore.Reference() } else { datastore, err := dc.GetDatastoreByName(ctx, datastoreName) if err != nil { klog.Errorf("GetDatastoreByName failed. Err: %v", err) return err } ds = datastore.Reference() } m := vslm.NewObjectManager(dc.Client()) task, err := m.Delete(ctx, ds, diskID) if err != nil { klog.Errorf("Delete(%s) failed. Err: %v", diskID, err) return err } err = task.Wait(ctx) if err != nil { klog.Errorf("Wait(%s) failed. Err: %v", diskID, err) return err } return nil }
<filename>mojo/nacl/mojo_syscall_internal.h<gh_stars>1-10 // Copyright 2014 The Chromium Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. #ifndef MOJO_NACL_MOJO_SYSCALL_INTERNAL_H_ #define MOJO_NACL_MOJO_SYSCALL_INTERNAL_H_ #include "native_client/src/trusted/service_runtime/nacl_copy.h" #include "native_client/src/trusted/service_runtime/sel_ldr.h" namespace { class ScopedCopyLock { public: explicit ScopedCopyLock(struct NaClApp* nap) : nap_(nap) { NaClCopyTakeLock(nap_); } ~ScopedCopyLock() { NaClCopyDropLock(nap_); } private: struct NaClApp* nap_; }; static inline uintptr_t NaClUserToSysAddrArray( struct NaClApp* nap, uint32_t uaddr, size_t count, size_t size) { // TODO(ncbray): overflow checking size_t range = count * size; return NaClUserToSysAddrRange(nap, uaddr, range); } template <typename T> bool ConvertScalarInput( struct NaClApp* nap, uint32_t user_ptr, T* value) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrRange(nap, user_ptr, sizeof(T)); if (temp != kNaClBadAddress) { *value = *reinterpret_cast<T volatile*>(temp); return true; } } return false; } template <typename T> bool ConvertScalarOutput( struct NaClApp* nap, uint32_t user_ptr, T volatile** sys_ptr) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrRange(nap, user_ptr, sizeof(T)); if (temp != kNaClBadAddress) { *sys_ptr = reinterpret_cast<T volatile*>(temp); return true; } } *sys_ptr = 0; // Paranoia. return false; } template <typename T> bool ConvertScalarInOut( struct NaClApp* nap, uint32_t user_ptr, bool optional, T* value, T volatile** sys_ptr) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrRange(nap, user_ptr, sizeof(T)); if (temp != kNaClBadAddress) { T volatile* converted = reinterpret_cast<T volatile*>(temp); *sys_ptr = converted; *value = *converted; return true; } } else if (optional) { *sys_ptr = 0; *value = static_cast<T>(0); // Paranoia. return true; } *sys_ptr = 0; // Paranoia. *value = static_cast<T>(0); // Paranoia. return false; } template <typename T> bool ConvertArray( struct NaClApp* nap, uint32_t user_ptr, uint32_t length, size_t element_size, bool optional, T** sys_ptr) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrArray(nap, user_ptr, length, element_size); if (temp != kNaClBadAddress) { *sys_ptr = reinterpret_cast<T*>(temp); return true; } } else if (optional) { *sys_ptr = 0; return true; } return false; } template <typename T> bool ConvertBytes( struct NaClApp* nap, uint32_t user_ptr, uint32_t length, bool optional, T** sys_ptr) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrRange(nap, user_ptr, length); if (temp != kNaClBadAddress) { *sys_ptr = reinterpret_cast<T*>(temp); return true; } } else if (optional) { *sys_ptr = 0; return true; } return false; } // TODO(ncbray): size validation and complete copy. // TODO(ncbray): ensure non-null / missized structs are covered by a test case. template <typename T> bool ConvertStruct( struct NaClApp* nap, uint32_t user_ptr, bool optional, T** sys_ptr) { if (user_ptr) { uintptr_t temp = NaClUserToSysAddrRange(nap, user_ptr, sizeof(T)); if (temp != kNaClBadAddress) { *sys_ptr = reinterpret_cast<T*>(temp); return true; } } else if (optional) { *sys_ptr = 0; return true; } return false; } } // namespace #endif // MOJO_NACL_MOJO_SYSCALL_INTERNAL_H_
#pragma once

//#define WIN32_LEAN_AND_MEAN  // Exclude rarely-used stuff from Windows headers

// Windows Header Files
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <tchar.h>
#include <string>

#define BUFSIZE 4096
import logging

from flask import g

from app.data_model.questionnaire_store import QuestionnaireStore
from app.storage.storage_factory import get_storage

logger = logging.getLogger(__name__)


def get_questionnaire_store(user_id, user_ik):
    # Sets up a single QuestionnaireStore instance throughout app.
    store = g.get('_questionnaire_store')
    if store is None:
        storage = get_storage(user_id, user_ik)
        store = g._questionnaire_store = QuestionnaireStore(storage)
    return store


def get_metadata(user):
    if user.is_anonymous:
        logger.debug("Anonymous user requesting metadata get instance")
        return None
    questionnaire_store = get_questionnaire_store(user.user_id, user.user_ik)
    return questionnaire_store.metadata


def get_answer_store(user):
    questionnaire_store = get_questionnaire_store(user.user_id, user.user_ik)
    return questionnaire_store.answer_store


def get_answers(user):
    return get_answer_store(user).map()


def get_completed_blocks(user):
    questionnaire_store = get_questionnaire_store(user.user_id, user.user_ik)
    return questionnaire_store.completed_blocks
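Because the store is memoised on Flask's per-request `g` object, everything downstream of get_questionnaire_store shares one instance per request. A minimal sketch of that behaviour (the Flask app and the ids below are made up for illustration, and the module's own imports must resolve against the real storage backend):

from flask import Flask

app = Flask(__name__)

with app.test_request_context('/'):
    # First call builds the store via get_storage() and caches it on g...
    store_a = get_questionnaire_store('user-123', 'user-ik-456')  # hypothetical ids
    # ...second call skips get_storage() entirely and returns the cached object.
    store_b = get_questionnaire_store('user-123', 'user-ik-456')
    assert store_a is store_b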
package org.mindinformatics.gwt.domeo.client.ui.toolbar;

import org.mindinformatics.gwt.domeo.client.Domeo;
import org.mindinformatics.gwt.domeo.client.IDomeo;
import org.mindinformatics.gwt.domeo.client.Resources;
import org.mindinformatics.gwt.domeo.client.ui.plugins.PluginsViewerPanel;
import org.mindinformatics.gwt.domeo.client.ui.preferences.PreferencesViewerPanel;
import org.mindinformatics.gwt.domeo.client.ui.toolbar.addressbar.AddressBarPanel;
import org.mindinformatics.gwt.domeo.component.sharing.ui.SharingOptionsViewer;
import org.mindinformatics.gwt.domeo.component.textmining.ui.TextMiningServicePicker;
import org.mindinformatics.gwt.framework.component.IInitializableComponent;
import org.mindinformatics.gwt.framework.component.preferences.src.BooleanPreference;
import org.mindinformatics.gwt.framework.component.profiles.model.IProfile;
import org.mindinformatics.gwt.framework.component.ui.glass.EnhancedGlassPanel;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarHorizontalPanel;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarHorizontalTogglePanel;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarItemsGroup;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarPanel;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarPopup;
import org.mindinformatics.gwt.framework.component.ui.toolbar.ToolbarSimplePanel;
import org.mindinformatics.gwt.framework.component.users.ui.UserAccountViewerPanel;
import org.mindinformatics.gwt.framework.src.Application;
import org.mindinformatics.gwt.framework.src.ApplicationResources;
import org.mindinformatics.gwt.framework.src.ApplicationUtils;

import com.google.gwt.core.client.GWT;
import com.google.gwt.dom.client.Document;
import com.google.gwt.dom.client.IFrameElement;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.event.dom.client.KeyCodes;
import com.google.gwt.event.dom.client.KeyPressEvent;
import com.google.gwt.event.dom.client.KeyPressHandler;
import com.google.gwt.event.logical.shared.SelectionEvent;
import com.google.gwt.event.logical.shared.SelectionHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.SimplePanel;
import com.google.gwt.user.client.ui.SuggestOracle.Suggestion;

/**
 * @author <NAME> <<EMAIL>>
 */
public class DomeoToolbarPanel extends Composite implements IInitializableComponent {

    public static final ToolbarResources localResources = GWT.create(ToolbarResources.class);

    public static final String DOCUMENT_COMMANDS_GROUP = "Document commands";
    private static final String POPUP_WIDTH = "170";

    // By contract
    private IDomeo _domeo;

    private ToolbarPanel toolbar;
    private ToolbarItemsGroup commandsGroup;
    private AddressBarPanel addressBarPanel;
    private ToolbarHorizontalTogglePanel annotateButtonPanel;
    private ToolbarHorizontalTogglePanel annotateMultipleButtonPanel;
    private ToolbarHorizontalTogglePanel highlightButtonPanel;
    private ToolbarHorizontalTogglePanel analyzeButtonPanel;
    private ToolbarSimplePanel shareButton;

    public DomeoToolbarPanel(IDomeo application) {
        _domeo = application;
        _domeo.getLogger().debug(this.getClass().getName(), "Creating the Toolbar...");

        Resources _resources = Domeo.resources;
        final ApplicationResources _applicationResources = Application.applicationResources;
        localResources.toolbarCss().ensureInjected();

        toolbar = new ToolbarPanel(_domeo);

        ToolbarSimplePanel homepageButton = new ToolbarSimplePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    Window.Location.assign(ApplicationUtils.getUrlBase(Window.Location.getHref()));
                    toolbar.disableToolbarItems();
                }
            },
            _applicationResources.homeLittleIcon().getSafeUri().asString(), "Homepage");

        addressBarPanel = new AddressBarPanel(_domeo);
        addressBarPanel.initializeHandlers(
            new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    if (_domeo.getAnnotationPersistenceManager().isWorskspaceUnsaved()) {
                        Window.alert("The workspace contains unsaved annotation.\n\n"
                            + "By selecting 'Yes', the unsaved annotations will be lost.\n\n"
                            + "By selecting 'Cancel', you will have the chance to save the annotation.\n\n");
                        //event.setMessage("The workspace contains unsaved annotation.");
                    }
                    if (addressBarPanel.getAddress().length() > 0)
                        _domeo.attemptContentLoading(addressBarPanel.getAddress());
                }
            },
            new KeyPressHandler() {
                @Override
                public void onKeyPress(KeyPressEvent event) {
                    int charCode = event.getUnicodeCharCode();
                    if (charCode == 0) {
                        // it's probably Firefox
                        int keyCode = event.getNativeEvent().getKeyCode();
                        if (keyCode == KeyCodes.KEY_ENTER) {
                            if (addressBarPanel.getAddress().length() > 0)
                                _domeo.attemptContentLoading(addressBarPanel.getAddress());
                        }
                    } else if (charCode == KeyCodes.KEY_ENTER) {
                        if (addressBarPanel.getAddress().length() > 0)
                            _domeo.attemptContentLoading(addressBarPanel.getAddress());
                    }
                }
            },
            new SelectionHandler<Suggestion>() {
                @Override
                public void onSelection(SelectionEvent<Suggestion> event) {
                    _domeo.attemptContentLoading(event.getSelectedItem().getReplacementString());
                }
            });

        highlightButtonPanel = new ToolbarHorizontalTogglePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    if (isManualHighlightSelected())
                        _domeo.getLogger().command(this.getClass().getName(), "Enabling manual highlight");
                    else
                        _domeo.getLogger().command(this.getClass().getName(), "Disabling manual highlight");

                    if (isManualAnnotationSelected()) {
                        _domeo.getLogger().debug(this, "Disabling manual annotation");
                        deselectManualAnnotation();
                    } else if (isManualMultipleAnnotationSelected()) {
                        if (_domeo.getClipboardManager().getBufferedAnnotation().size() > 0) {
                            _domeo.getLogger().debug(this, "Performing manual multiple highlight");
                            _domeo.getContentPanel().getAnnotationFrameWrapper()
                                .performMultipleTargetsHighlight(_domeo.getClipboardManager().getBufferedAnnotation());
                        }
                        deselectManualMultipleAnnotation();
                    } else if (_domeo.getContentPanel().getAnnotationFrameWrapper().anchorNode != null)
                        _domeo.getContentPanel().getAnnotationFrameWrapper().annotate();
                }
            },
            _resources.highlightLittleIcon(), _resources.highlightLittleColorIcon(),
            "Highlight", "Highlight");

        annotateButtonPanel = new ToolbarHorizontalTogglePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    //_domeo.updateAnnotationMode();
                    if (isManualAnnotationSelected())
                        _domeo.getLogger().command(this.getClass().getName(), "Enabling manual annotation");
                    else
                        _domeo.getLogger().command(this, "Disabling manual annotation");

                    if (isManualHighlightSelected()) {
                        _domeo.getLogger().debug(this, "Disabling manual highlight");
                        deselectManualHighlight();
                    }
                    if (isManualMultipleAnnotationSelected()) {
                        if (_domeo.getClipboardManager().getBufferedAnnotation().size() > 0) {
                            _domeo.getLogger().debug(this, "Performing manual multiple annotation");
                            _domeo.getContentPanel().getAnnotationFrameWrapper().performMultipleTargetsAnnotation(
                                new ClickHandler() {
                                    @Override
                                    public void onClick(ClickEvent event) {
                                        deselectManualAnnotation();
                                        selectManualMultipleAnnotation();
                                    }
                                });
                        }
                        deselectManualMultipleAnnotation();
                    }
                    // If not multiple selections and text selected...
                    else if (_domeo.getContentPanel().getAnnotationFrameWrapper().anchorNode != null) {
                        System.out.println("DomeoToolbarPanel-DomeoToolbarPanel():"
                            + _domeo.getContentPanel().getAnnotationFrameWrapper().matchText);
                        _domeo.getContentPanel().getAnnotationFrameWrapper().annotate();
                    }
                }
            },
            _resources.domeoAnnotateIcon(), _resources.domeoAnnotateColorIcon(),
            "Annotate", "Annotate");

        if (((BooleanPreference) _domeo.getPreferences()
                .getPreferenceItem(Application.class.getName(), Domeo.PREF_ANN_MULTIPLE_TARGETS)) != null
                && ((BooleanPreference) _domeo.getPreferences()
                .getPreferenceItem(Application.class.getName(), Domeo.PREF_ANN_MULTIPLE_TARGETS)).getValue()) {
            annotateMultipleButtonPanel = new ToolbarHorizontalTogglePanel(
                _domeo, new ClickHandler() {
                    @Override
                    public void onClick(ClickEvent event) {
                        //_domeo.updateAnnotationMode();
                        if (isManualMultipleAnnotationSelected())
                            _domeo.getLogger().command(this.getClass().getName(), "Enabling multiple manual annotation");
                        else {
                            _domeo.getContentPanel().getAnnotationFrameWrapper().clearTemporaryAnnotations();
                            _domeo.getLogger().command(this, "Disabling multiple manual annotation");
                        }
                        if (isManualHighlightSelected()) {
                            _domeo.getLogger().debug(this, "Disabling multiple manual highlight");
                            deselectManualHighlight();
                        }
                        if (isManualAnnotationSelected()) {
                            _domeo.getLogger().debug(this, "Disabling manual annotation");
                            deselectManualAnnotation();
                        }
                    }
                },
                _resources.domeoClipIcon(), _resources.domeoClipColorIcon(),
                "Clip", "Clip");
        }

        // ToolbarHorizontalPanel analyzeButtonPanel = new ToolbarHorizontalPanel(
        //     _domeo, new ClickHandler() {
        //         @Override
        //         public void onClick(ClickEvent event) {
        //             Window.alert("Click on Analyze");
        //         }
        //     }, _applicationResources.runLittleIcon().getSafeUri().asString(), "Analyze", "Analyze");

        analyzeButtonPanel = new ToolbarHorizontalTogglePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    _domeo.getLogger().debug(this, "Beginning textmining...");
                    _domeo.getProgressPanelContainer().setProgressMessage("Textmining selection...");

                    // TODO Hideous!!!!!
                    IFrameElement iframe = IFrameElement.as(
                        _domeo.getContentPanel().getAnnotationFrameWrapper().getFrame().getElement());
                    final Document frameDocument = iframe.getContentDocument();
                    _domeo.getContentPanel().getAnnotationFrameWrapper()
                        .getSelectionText(_domeo.getContentPanel().getAnnotationFrameWrapper(), frameDocument);
                    if (_domeo.getContentPanel().getAnnotationFrameWrapper().matchText != null
                            && _domeo.getContentPanel().getAnnotationFrameWrapper().matchText.length() > 2) {
                        TextMiningServicePicker tmsp = new TextMiningServicePicker(_domeo);
                        new EnhancedGlassPanel(_domeo, tmsp, tmsp.getTitle(), 800, false, false, false);
                    } else {
                        _domeo.getLogger().debug(this, "No text to textmine...");
                        _domeo.getContentPanel().getAnnotationFrameWrapper().clearSelection();
                        _domeo.getToolbarPanel().deselectAnalyze();
                        _domeo.getProgressPanelContainer().setWarningMessage("No text has been selected for textmining!");
                    }
                }
            },
            _applicationResources.runLittleIcon(), _applicationResources.spinningIcon2(),
            "Analyze", "Analyze");

        shareButton = new ToolbarSimplePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    ToolbarPopup popup = new ToolbarPopup(_domeo, "Share",
                        Domeo.resources.shareIcon().getSafeUri().asString());
                    popup.setWidth(POPUP_WIDTH + "px");
                    popup.setPopupPosition(Window.getClientWidth() - (Integer.parseInt(POPUP_WIDTH) + 48), -6); //25
                    popup.setAnimationEnabled(false);
                    popup.addButtonPanel(_applicationResources.allLinkIcon().getSafeUri().asString(),
                        "Current Workspace", new ClickHandler() {
                            @Override
                            public void onClick(ClickEvent event) {
                                if (!_domeo.isLocalResources() && !_domeo.isHostedMode()
                                        && _domeo.getPersistenceManager().isResourceLoaded()) {
                                    SharingOptionsViewer lwp = new SharingOptionsViewer(_domeo);
                                    new EnhancedGlassPanel(_domeo, lwp, lwp.getTitle(), 440, false, false, false);
                                }
                            }
                        });
                    popup.show();
                }
            },
            _applicationResources.shareIcon().getSafeUri().asString(), "Sharing");

        ToolbarSimplePanel settingsButton = new ToolbarSimplePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    ToolbarPopup popup = new ToolbarPopup(_domeo, "Settings",
                        Domeo.resources.settingsLittleIcon().getSafeUri().asString());
                    popup.setWidth(POPUP_WIDTH + "px");
                    popup.setPopupPosition(Window.getClientWidth() - (Integer.parseInt(POPUP_WIDTH) + 27), -6); //25
                    popup.setAnimationEnabled(false);
                    popup.addButtonPanel(_applicationResources.userLittleIcon().getSafeUri().asString(),
                        "Account", new ClickHandler() {
                            @Override
                            public void onClick(ClickEvent event) {
                                UserAccountViewerPanel lwp = new UserAccountViewerPanel(_domeo);
                                new EnhancedGlassPanel(_domeo, lwp,
                                    _domeo.getUserManager().getUser().getScreenName(), false, false, false);
                            }
                        });
                    popup.addButtonPanel(_applicationResources.preferencesLittleIcon().getSafeUri().asString(),
                        "Preferences", new ClickHandler() {
                            @Override
                            public void onClick(ClickEvent event) {
                                PreferencesViewerPanel lwp = new PreferencesViewerPanel(_domeo);
                                new EnhancedGlassPanel(_domeo, lwp, lwp.getTitle(), false, false, false);
                            }
                        });
                    popup.addButtonPanel(_applicationResources.pluginsLittleIcon().getSafeUri().asString(),
                        "Add-ons and Profiles", new ClickHandler() {
                            @Override
                            public void onClick(ClickEvent event) {
                                PluginsViewerPanel lwp = new PluginsViewerPanel(_domeo);
                                new EnhancedGlassPanel(_domeo, lwp, lwp.getTitle(), 850, false, false, false);
                            }
                        });
                    popup.show();
                }
            },
            _applicationResources.settingsLittleIcon().getSafeUri().asString(), "Preferences");

        ToolbarSimplePanel helpButton = new ToolbarSimplePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    ToolbarPopup popup = new ToolbarPopup(_domeo, "Help",
                        Domeo.resources.helpLittleIcon().getSafeUri().asString());
                    popup.setWidth(POPUP_WIDTH + "px");
                    popup.setPopupPosition(Window.getClientWidth() - (Integer.parseInt(POPUP_WIDTH) + 12), -6); //25
                    popup.setAnimationEnabled(false);
                    popup.addButtonPanel("", "Report an issue", new ClickHandler() {
                        @Override
                        public void onClick(ClickEvent event) {
                            Window.alert("Report an issue");
                        }
                    });
                    popup.addButtonPanel("", "Domeo help", new ClickHandler() {
                        @Override
                        public void onClick(ClickEvent event) {
                            Window.alert("Display online resources");
                        }
                    });
                    popup.addButtonPanel("", "About Domeo", new ClickHandler() {
                        @Override
                        public void onClick(ClickEvent event) {
                            Window.alert("About " + Domeo.APP_NAME + " - " + Domeo.APP_VERSION_LABEL);
                        }
                    });
                    popup.show();
                }
            },
            _applicationResources.helpLittleIcon().getSafeUri().asString(), "Help");

        ToolbarSimplePanel saveButton = new ToolbarSimplePanel(
            _domeo, new ClickHandler() {
                @Override
                public void onClick(ClickEvent event) {
                    _domeo.getLogger().command(this, "Saving annotation...");
                    _domeo.getAnnotationPersistenceManager().saveAnnotation();
                    if (_domeo.isHostedMode())
                        _domeo.getAnnotationPersistenceManager().mockupSavingOfTheAnnotation();
                    //toolbar.disableToolbarItems();
                }
            },
            _resources.saveMediumIcon().getSafeUri().asString(), "Save");

        toolbar.addToLeftPanel(homepageButton, "22");
        if (!_domeo.getProfileManager().getUserCurrentProfile().isFeatureDisabled(IProfile.FEATURE_ADDRESSBAR)) {
            toolbar.addToLeftPanel(addressBarPanel);
        }
        //toolbar.addToLeftPanel(annotateButtonPanel);
        //toolbar.addToLeftPanel(analyzeButtonPanel);

        commandsGroup = new ToolbarItemsGroup(DOCUMENT_COMMANDS_GROUP);
        commandsGroup.addItem(highlightButtonPanel);
        if (((BooleanPreference) _domeo.getPreferences()
                .getPreferenceItem(Application.class.getName(), Domeo.PREF_ANN_MULTIPLE_TARGETS)) != null
                && ((BooleanPreference) _domeo.getPreferences()
                .getPreferenceItem(Application.class.getName(), Domeo.PREF_ANN_MULTIPLE_TARGETS)).getValue()) {
            commandsGroup.addItem(annotateMultipleButtonPanel);
        }
        commandsGroup.addItem(annotateButtonPanel);
        //commandsGroup.addItem(analyzeButtonPanel);
        //commandsGroup.addItem(analyzeButtonPanel2);
        if (!_domeo.getProfileManager().getUserCurrentProfile().isFeatureDisabled(IProfile.FEATURE_ANALYZE)) {
            commandsGroup.addItem(analyzeButtonPanel);
        }
        commandsGroup.addItem(saveButton);
        toolbar.registerGroup(commandsGroup);

        SimplePanel sp = new SimplePanel();
        sp.setWidth("100px");
        toolbar.addToRightPanel(sp);
        if (!_domeo.getProfileManager().getUserCurrentProfile().isFeatureDisabled(IProfile.FEATURE_SHARING)) {
            toolbar.addToRightPanel(shareButton);
        }
        if (!_domeo.getProfileManager().getUserCurrentProfile().isFeatureDisabled(IProfile.FEATURE_PREFERENCES)) {
            toolbar.addToRightPanel(settingsButton);
        }
        if (!_domeo.getProfileManager().getUserCurrentProfile().isFeatureDisabled(IProfile.FEATURE_HELP)) {
            toolbar.addToRightPanel(helpButton);
        }
        if (_domeo.getProfileManager().getUserCurrentProfile().isFeatureEnabled(IProfile.FEATURE_BRANDING)) {
            ToolbarHorizontalPanel domeoButton = new ToolbarHorizontalPanel(
                _domeo, new ClickHandler() {
                    @Override
                    public void onClick(ClickEvent event) {
                        ApplicationUtils.openUrl("http://annotationframework.org");
                    }
                },
                Domeo.resources.domeoLogoIcon().getSafeUri().asString(), "Domeo", "Domeo");
            toolbar.addToBrandingPanel(domeoButton, "60px");
        }

        initWidget(toolbar);
    }

    public AddressBarPanel getAddressBarPanel() {
        return addressBarPanel;
    }

    public void deselectManualAnnotation() {
        annotateButtonPanel.deselect();
    }

    public void selectManualMultipleAnnotation() {
        // The multiple-annotation button only exists when the preference is enabled
        if (annotateMultipleButtonPanel != null) annotateMultipleButtonPanel.select();
    }

    public void deselectManualMultipleAnnotation() {
        if (annotateMultipleButtonPanel != null) annotateMultipleButtonPanel.deselect();
    }

    public void deselectManualHighlight() {
        highlightButtonPanel.deselect();
    }

    public boolean isManualAnnotationSelected() {
        return annotateButtonPanel.isSelected();
    }

    public boolean isManualMultipleAnnotationSelected() {
        return annotateMultipleButtonPanel != null && annotateMultipleButtonPanel.isSelected();
    }

    public boolean isManualHighlightSelected() {
        return highlightButtonPanel.isSelected();
    }

    public void deselectAnalyze() {
        analyzeButtonPanel.deselect();
    }

    public void attachGroup(String groupName) {
        toolbar.attachGroup(groupName);
    }

    public void detachGroup(String groupName) {
        toolbar.detachGroup(groupName);
    }

    public void hideCommands() {
        toolbar.hideGroup(DomeoToolbarPanel.DOCUMENT_COMMANDS_GROUP);
    }

    public void disable() {
        toolbar.disableToolbarItems();
    }

    @Override
    public void init() {
        toolbar.init();
        toolbar.detachGroup(DomeoToolbarPanel.DOCUMENT_COMMANDS_GROUP);
    }
}
# -*- coding: utf-8 -*-
###########################################################################
# Copyright (c), The AiiDA team. All rights reserved.                     #
# This file is part of the AiiDA code.                                    #
#                                                                         #
# The code is hosted on GitHub at https://github.com/aiidateam/aiida-core #
# For further information on the license, see the LICENSE.txt file        #
# For further information please visit http://www.aiida.net               #
###########################################################################
"""Module to manage loading entrypoints."""
import enum
import functools
import traceback
from typing import Any, Optional, List, Sequence, Set, Tuple

# importlib.metadata was introduced into the standard library in python 3.8,
# but was then updated in python 3.10 to use an improved API.
# So for now we use the backport importlib_metadata package.
from importlib_metadata import EntryPoint, EntryPoints
from importlib_metadata import entry_points as _eps

from aiida.common.exceptions import MissingEntryPointError, MultipleEntryPointError, LoadingEntryPointError

__all__ = ('load_entry_point', 'load_entry_point_from_string', 'parse_entry_point')

ENTRY_POINT_GROUP_PREFIX = 'aiida.'
ENTRY_POINT_STRING_SEPARATOR = ':'


@functools.lru_cache(maxsize=1)
def eps():
    return _eps()


class EntryPointFormat(enum.Enum):
    """
    Enum to distinguish between the various possible entry point string formats. An entry point string
    is fully qualified by its group and name concatenated by the entry point string separator character.
    The group in AiiDA has the prefix `aiida.` and the separator character is the colon `:`.

    Under these definitions a potentially valid entry point string may have the following formats:

        * FULL:    prefixed group plus entry point name      aiida.transports:ssh
        * PARTIAL: unprefixed group plus entry point name    transports:ssh
        * MINIMAL: no group but only entry point name        ssh

    Note that the MINIMAL format can potentially lead to ambiguity if the name appears in multiple
    entry point groups.
    """
    INVALID = 0
    FULL = 1
    PARTIAL = 2
    MINIMAL = 3


ENTRY_POINT_GROUP_TO_MODULE_PATH_MAP = {
    'aiida.calculations': 'aiida.orm.nodes.process.calculation.calcjob',
    'aiida.cmdline.data': 'aiida.cmdline.data',
    'aiida.cmdline.data.structure.import': 'aiida.cmdline.data.structure.import',
    'aiida.cmdline.computer.configure': 'aiida.cmdline.computer.configure',
    'aiida.data': 'aiida.orm.nodes.data',
    'aiida.groups': 'aiida.orm.groups',
    'aiida.node': 'aiida.orm.nodes',
    'aiida.parsers': 'aiida.parsers.plugins',
    'aiida.schedulers': 'aiida.schedulers.plugins',
    'aiida.tools.calculations': 'aiida.tools.calculations',
    'aiida.tools.data.orbitals': 'aiida.tools.data.orbitals',
    'aiida.tools.dbexporters': 'aiida.tools.dbexporters',
    'aiida.tools.dbimporters': 'aiida.tools.dbimporters.plugins',
    'aiida.transports': 'aiida.transports.plugins',
    'aiida.workflows': 'aiida.workflows',
}


def parse_entry_point(group: str, spec: str) -> EntryPoint:
    """Return an entry point, given its group and spec (as formatted in the setup)"""
    name, value = spec.split('=', maxsplit=1)
    return EntryPoint(group=group, name=name.strip(), value=value.strip())


def validate_registered_entry_points() -> None:  # pylint: disable=invalid-name
    """Validate all registered entry points by loading them with the corresponding factory.

    :raises EntryPointError: if any of the registered entry points cannot be loaded. This can happen if:

        * The entry point cannot uniquely be resolved
        * The resource registered at the entry point cannot be imported
        * The resource's type is incompatible with the entry point group that it is defined in.
    """
    from . import factories

    factory_mapping = {
        'aiida.calculations': factories.CalculationFactory,
        'aiida.data': factories.DataFactory,
        'aiida.groups': factories.GroupFactory,
        'aiida.parsers': factories.ParserFactory,
        'aiida.schedulers': factories.SchedulerFactory,
        'aiida.transports': factories.TransportFactory,
        'aiida.tools.dbimporters': factories.DbImporterFactory,
        'aiida.tools.data.orbital': factories.OrbitalFactory,
        'aiida.workflows': factories.WorkflowFactory,
    }

    for entry_point_group, factory in factory_mapping.items():
        entry_points = get_entry_points(entry_point_group)
        for entry_point in entry_points:
            factory(entry_point.name)


def format_entry_point_string(group: str, name: str, fmt: EntryPointFormat = EntryPointFormat.FULL) -> str:
    """
    Format an entry point string for a given entry point group and name, based on the specified format

    :param group: the entry point group
    :param name: the name of the entry point
    :param fmt: the desired output format
    :raises TypeError: if fmt is not instance of EntryPointFormat
    :raises ValueError: if fmt value is invalid
    """
    if not isinstance(fmt, EntryPointFormat):
        raise TypeError('fmt should be an instance of EntryPointFormat')

    if fmt == EntryPointFormat.FULL:
        return f'{group}{ENTRY_POINT_STRING_SEPARATOR}{name}'
    if fmt == EntryPointFormat.PARTIAL:
        return f'{group[len(ENTRY_POINT_GROUP_PREFIX):]}{ENTRY_POINT_STRING_SEPARATOR}{name}'
    if fmt == EntryPointFormat.MINIMAL:
        return f'{name}'
    raise ValueError('invalid EntryPointFormat')


def parse_entry_point_string(entry_point_string: str) -> Tuple[str, str]:
    """
    Validate the entry point string and attempt to parse the entry point group and name

    :param entry_point_string: the entry point string
    :return: the entry point group and name if the string is valid
    :raises TypeError: if the entry_point_string is not a string type
    :raises ValueError: if the entry_point_string cannot be split into two parts on the entry point string separator
    """
    if not isinstance(entry_point_string, str):
        raise TypeError('the entry_point_string should be a string')

    try:
        group, name = entry_point_string.split(ENTRY_POINT_STRING_SEPARATOR)
    except ValueError:
        raise ValueError('invalid entry_point_string format')

    return group, name


def get_entry_point_string_format(entry_point_string: str) -> EntryPointFormat:
    """
    Determine the format of an entry point string. Note that it does not validate the actual entry point
    string and it may not correspond to any actual entry point. This will only assess the string format

    :param entry_point_string: the entry point string
    :returns: the entry point type
    """
    try:
        group, _ = entry_point_string.split(ENTRY_POINT_STRING_SEPARATOR)
    except ValueError:
        return EntryPointFormat.MINIMAL
    else:
        if group.startswith(ENTRY_POINT_GROUP_PREFIX):
            return EntryPointFormat.FULL
        return EntryPointFormat.PARTIAL


def get_entry_point_from_string(entry_point_string: str) -> EntryPoint:
    """
    Return an entry point for the given entry point string

    :param entry_point_string: the entry point string
    :return: the entry point if it exists else None
    :raises TypeError: if the entry_point_string is not a string type
    :raises ValueError: if the entry_point_string cannot be split into two parts on the entry point string separator
    :raises aiida.common.MissingEntryPointError: entry point was not registered
    :raises aiida.common.MultipleEntryPointError: entry point could not be uniquely resolved
    """
    group, name = parse_entry_point_string(entry_point_string)
    return get_entry_point(group, name)


def load_entry_point_from_string(entry_point_string: str) -> Any:
    """
    Load the class registered for a given entry point string that determines group and name

    :param entry_point_string: the entry point string
    :return: class registered at the given entry point
    :raises TypeError: if the entry_point_string is not a string type
    :raises ValueError: if the entry_point_string cannot be split into two parts on the entry point string separator
    :raises aiida.common.MissingEntryPointError: entry point was not registered
    :raises aiida.common.MultipleEntryPointError: entry point could not be uniquely resolved
    :raises aiida.common.LoadingEntryPointError: entry point could not be loaded
    """
    group, name = parse_entry_point_string(entry_point_string)
    return load_entry_point(group, name)


def load_entry_point(group: str, name: str) -> Any:
    """
    Load the class registered under the entry point for a given name and group

    :param group: the entry point group
    :param name: the name of the entry point
    :return: class registered at the given entry point
    :raises TypeError: if the entry_point_string is not a string type
    :raises ValueError: if the entry_point_string cannot be split into two parts on the entry point string separator
    :raises aiida.common.MissingEntryPointError: entry point was not registered
    :raises aiida.common.MultipleEntryPointError: entry point could not be uniquely resolved
    :raises aiida.common.LoadingEntryPointError: entry point could not be loaded
    """
    entry_point = get_entry_point(group, name)

    try:
        loaded_entry_point = entry_point.load()
    except ImportError:
        raise LoadingEntryPointError(f"Failed to load entry point '{name}':\n{traceback.format_exc()}")

    return loaded_entry_point


def get_entry_point_groups() -> Set[str]:
    """
    Return a set of all the recognized entry point groups

    :return: a set of valid entry point groups
    """
    return eps().groups


def get_entry_point_names(group: str, sort: bool = True) -> List[str]:
    """Return the entry points within a group."""
    all_eps = eps()
    group_names = list(all_eps.select(group=group).names)
    if sort:
        return sorted(group_names)
    return group_names


def get_entry_points(group: str) -> EntryPoints:
    """
    Return a list of all the entry points within a specific group

    :param group: the entry point group
    :return: a list of entry points
    """
    return eps().select(group=group)


def get_entry_point(group: str, name: str) -> EntryPoint:
    """
    Return an entry point with a given name within a specific group

    :param group: the entry point group
    :param name: the name of the entry point
    :return: the entry point if it exists else None
    :raises aiida.common.MissingEntryPointError: entry point was not registered
    """
    found = eps().select(group=group, name=name)
    if name not in found.names:
        raise MissingEntryPointError(f"Entry point '{name}' not found in group '{group}'")
    if len(found.names) > 1:
        raise MultipleEntryPointError(f"Multiple entry points '{name}' found in group '{group}': {found}")
    return found[name]


@functools.lru_cache(maxsize=100)
def get_entry_point_from_class(class_module: str, class_name: str) -> Tuple[Optional[str], Optional[EntryPoint]]:
    """
    Given the module and name of a class, attempt to obtain the corresponding entry point if it exists

    :param class_module: module of the class
    :param class_name: name of the class
    :return: a tuple of the corresponding group and entry point or None if not found
    """
    for group in get_entry_point_groups():
        for entry_point in get_entry_points(group):
            if entry_point.module != class_module:
                continue
            if entry_point.attr == class_name:
                return group, entry_point

    return None, None


def get_entry_point_string_from_class(class_module: str, class_name: str) -> Optional[str]:  # pylint: disable=invalid-name
    """
    Given the module and name of a class, attempt to obtain the corresponding entry point if it
    exists and return the entry point string, which is the entry point group and entry point
    name concatenated by the entry point string separator

        entry_point_string = '{group:}:{entry_point_name:}'

    This ensures that given the entry point string, one can load the corresponding class by splitting
    on the separator, which gives the group and entry point name; these can then be passed to the
    corresponding factory to uniquely determine and load the class

    :param class_module: module of the class
    :param class_name: name of the class
    :return: the corresponding entry point string or None
    """
    group, entry_point = get_entry_point_from_class(class_module, class_name)

    if group and entry_point:
        return ENTRY_POINT_STRING_SEPARATOR.join([group, entry_point.name])  # type: ignore[attr-defined]
    return None


def is_valid_entry_point_string(entry_point_string: str) -> bool:
    """
    Verify whether the given entry point string is a valid one. For the string to be valid means that it is composed
    of two strings, the entry point group and name, concatenated by the entry point string separator. If that is the
    case, the group name will be verified to see if it is known. If the group can be retrieved and it is known, the
    string is considered to be valid. It is invalid otherwise

    :param entry_point_string: the entry point string, generated by get_entry_point_string_from_class
    :return: True if the string is considered valid, False otherwise
    """
    try:
        group, _ = entry_point_string.split(ENTRY_POINT_STRING_SEPARATOR)
    except (AttributeError, ValueError):
        # Either `entry_point_string` is not a string or it does not contain the separator
        return False

    return group in ENTRY_POINT_GROUP_TO_MODULE_PATH_MAP


@functools.lru_cache(maxsize=100)
def is_registered_entry_point(class_module: str, class_name: str, groups: Optional[Sequence[str]] = None) -> bool:
    """Verify whether the class with the given module and class name is a registered entry point.

    .. note:: this function only checks whether the class has a registered entry point. It does explicitly not verify
        if the corresponding class is also importable. Use `load_entry_point` for this purpose instead.

    :param class_module: the module of the class
    :param class_name: the name of the class
    :param groups: optionally consider only these entry point groups to look for the class
    :return: True if the class is a registered entry point, False otherwise.
    """
    for group in get_entry_point_groups() if groups is None else groups:
        for entry_point in get_entry_points(group):
            if class_module == entry_point.module and class_name == entry_point.attr:
                return True
    return False
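A short usage sketch of the string-format helpers defined above. The `aiida.transports:ssh` string comes from the module's own docstring; the `SshTransport` spec in the `parse_entry_point` call is an illustrative value, not an authoritative registration.

# Usage sketch (assumes the module above is importable; run only as illustration).
if __name__ == '__main__':
    assert get_entry_point_string_format('aiida.transports:ssh') is EntryPointFormat.FULL
    assert get_entry_point_string_format('transports:ssh') is EntryPointFormat.PARTIAL
    assert get_entry_point_string_format('ssh') is EntryPointFormat.MINIMAL

    group, name = parse_entry_point_string('aiida.transports:ssh')
    assert (group, name) == ('aiida.transports', 'ssh')
    assert format_entry_point_string(group, name, EntryPointFormat.PARTIAL) == 'transports:ssh'

    # Build an EntryPoint object from a setup-style spec (illustrative value).
    ep = parse_entry_point('aiida.transports', 'ssh = aiida.transports.plugins.ssh:SshTransport')
    assert ep.name == 'ssh'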
package ultron

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestHTTPRouter(t *testing.T) {
	runner := newMasterRunner()
	handler := buildHTTPRouter(runner)
	ts := httptest.NewServer(handler)
	defer ts.Close()

	res, err := http.Get(ts.URL + "/metrics.json")
	assert.Nil(t, err)
	data, _ := io.ReadAll(res.Body)
	defer res.Body.Close()
	d := make([]interface{}, 0)
	err = json.Unmarshal(data, &d)
	assert.Nil(t, err)

	reader := bytes.NewBuffer([]byte("foobar"))
	res, err = http.Post(ts.URL+"/api/v1/plan", "", reader)
	assert.Nil(t, err)
	ret := new(restResponse)
	err = json.NewDecoder(res.Body).Decode(ret)
	assert.Nil(t, err)
	assert.True(t, ret.ErrorMessage != "")
}
#include "EventsHandler.hpp" namespace { glm::vec2 GetMousePosition(const SDL_MouseButtonEvent &event) { return { event.x, event.y }; } glm::vec2 GetMousePosition(const SDL_MouseMotionEvent &event) { return { event.x, event.y }; } } void sdl_handle::EventHandle(const SDL_Event &event,const IEventActor &actor) { switch (event.type) { case SDL_MOUSEBUTTONDOWN: actor.OnMouseDown(event.button); break; case SDL_MOUSEBUTTONUP: actor.OnMouseUp(event.button); break; case SDL_MOUSEMOTION: actor.OnMouseMotion(event.motion); break; case SDL_MOUSEWHEEL: acceptor.OnMouseWheel(event.wheel); break; case SDL_KEYDOWN: actor.OnKeyDown(event.key); break; case SDL_KEYUP: actor.OnKeyUp(event.key); break; } }
def execute(self):
    self.outf = open(self.output, 'w')
    try:
        self.do_execute()
    finally:
        self.outf.close()
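For contrast, the same open/close discipline can be written with a context manager. This is a sketch under the assumption that `do_execute` only uses `self.outf` while the file is open:

def execute(self):
    # Equivalent sketch: the with-statement closes the file even if
    # do_execute() raises, without the explicit try/finally.
    with open(self.output, 'w') as self.outf:
        self.do_execute()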
The invention relates to desulfurization of hydrocarbon feeds. Environmental concerns over hydrocarbon fuels such as gasoline and diesel, and the sulfur specifications that govern them, remain an important issue. These specifications are designed to reduce vehicular emissions, specifically the SOx generated in the combustion of such fuels. Regulations are expected soon to require gasoline and diesel sulfur levels of no more than about 30 ppm in the United States and 50 ppm in Western Europe, and these limits will only become more stringent over time. Legislation, especially in the U.S.A. and Japan, is expected to call for "sulfur-free" gasoline and diesel fuel by the end of this decade. Although technologies exist for deep sulfur removal from hydrocarbon feeds, they do not necessarily provide an economically attractive route to the required low sulfur specifications. Conventional hydrodesulfurization requires high temperatures, expensive equipment and potentially expensive additives, all of which makes such processes commercially unattractive. A need therefore remains for a process for deep desulfurization of hydrocarbon feeds that is economically attractive and efficient. It is accordingly the primary object of the present invention to provide such a process. It is a further object of the present invention to provide a process that can be carried out at less extreme temperatures and pressures. Other objects and advantages of the present invention will appear herein below.
// makeAdminPeers - helper function to construct a collection of adminPeer.
func makeAdminPeers(endpoints EndpointList) (adminPeerList adminPeers) {
	localAddr := GetLocalPeer(endpoints)
	if strings.HasPrefix(localAddr, "127.0.0.1:") {
		localAddr = net.JoinHostPort(sortIPs(localIP4.ToSlice())[0], globalMinioPort)
	}
	adminPeerList = append(adminPeerList, adminPeer{
		addr:      localAddr,
		cmdRunner: localAdminClient{},
		isLocal:   true,
	})

	for _, hostStr := range GetRemotePeers(endpoints) {
		host, err := xnet.ParseHost(hostStr)
		logger.FatalIf(err, "Unable to parse Admin RPC Host", context.Background())
		rpcClient, err := NewAdminRPCClient(host)
		logger.FatalIf(err, "Unable to initialize Admin RPC Client", context.Background())
		adminPeerList = append(adminPeerList, adminPeer{
			addr:      hostStr,
			cmdRunner: rpcClient,
		})
	}

	return adminPeerList
}
#pragma once

#ifndef OOSL_WRITER_BASE_H
#define OOSL_WRITER_BASE_H

#include <string>

namespace oosl{
	namespace common{
		class writer_base{
		public:
			virtual ~writer_base() = default;

			virtual writer_base &begin() = 0;

			virtual writer_base &end() = 0;

			virtual writer_base &write(const std::string &value){
				return write(value.c_str());
			}

			virtual writer_base &write(const char *value) = 0;

			virtual writer_base &write(const std::wstring &value){
				return write(value.c_str());
			}

			virtual writer_base &write(const wchar_t *value) = 0;
		};
	}
}

#endif /* !OOSL_WRITER_BASE_H */
By Janice Neumann (Reuters Health) - Teenaged boys who spend too many hours in front of the computer or television without participating in enough weight-bearing exercise could develop weaker bones as they age, a small Norwegian study suggests. Childhood and the teen years are critical periods for growing bones and establishing a bone density level that can affect osteoporosis risk much later in life. "We found a relationship between higher screen time and lower bone mineral density in boys," said Anne Winther, a physiotherapist at University Hospital of North Norway in Tromso and the study's first author. "We are not able to detect causality with this study design, but it is likely that screen time is an indicator of a lifestyle that has negative impact on bone mass acquisition." Among the 316 boys and 372 girls aged 15 to 19 years old, those who spent two to four hours, or more than six hours, in front of the screen every day tended to be slightly heavier than their peers who spent less time in front of screens. And boys overall spent more time in front of the computer and television than girls (five hours a day versus four). But the boys with heavy screen time also had lower bone mineral density (BMD) levels, while the girls' BMD was higher with heavier screen time. Winther, who is also a doctoral student at UiT The Arctic University of Norway, and colleagues note in BMJ Open that decreased lean mass and increased fat mass could be more harmful to boys than girls and might actually protect female bones. For the study, the youngsters reported how many hours per day they spent in front of the computer or watching television or DVDs on weekends, as well as how much time they were sedentary, walked, cycled and participated in recreational sports weekly. "The most important finding was that the detrimental relationship between this screen-based sedentary behavior and bone mass density in boys persisted two years later," Winther said. The American College of Sports Medicine recommends 10 to 20 minutes of gymnastics or running or jumping, or other weight-bearing exercise at least three days weekly for children and adolescents. "I think you can never say too often what the authors were saying," said Dr. Laura Bachrach, a pediatric endocrinologist at Stanford University Medical School in California. "We're really worried about this because there's sort of this critical time between being born and reaching the early 20s when you're setting up the scaffolding of life (in terms of the geometry and density of the bone)," she told Reuters Health by email. "You sort of max out in your early 20s and there is real concern that the lifestyle of young people nowadays versus 40 or 50 years ago is setting people up to be more at risk as adults for not having a very robust bone bank as they age," Bachrach said. The study focused on older teens, although sedentary time and exercise would have the most bone impact on nine to 15-year-olds, Bachrach pointed out. "The horse may have been a little bit out of the barn here in terms of what they're looking at," Bachrach said. "Girls tend to mature earlier . . . the girls were even more fixed in their position in the skeletal world by the time they started, whereas the boys were perhaps a little more malleable." SOURCE: http://bit.ly/1er92SH BMJ Open 2015.
/**
 * Flushes the stream from the CQP server to the client program,
 * emptying its buffer.
 *
 * Note that under windows, buffered output is not possible, so
 * this function does nothing.
 *
 * @return Boolean: true if everything OK, otherwise false.
 */
int cqi_flush(void)
{
#ifdef __MINGW__
  return 1;
#else
  if (snoop) {
    Rprintf("CQi FLUSH\n");
  }
  if (EOF == fflush(conn_out)) {
    perror("ERROR cqi_flush()");
    return 0;
  } else {
    return 1;
  }
#endif
}
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# IMPORT

from sre_constants import error as sre_error

from pywildmatch._lib import *
from pywildmatch.error import WildmatchError
from pywildmatch.param import Param, parameterize

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# MATCH

class Match:

    def __init__(self, flag=WM_Match, icase=False, verbose=False):
        if not (flag & 1) and icase:
            self.flag = flag + 1
        else:
            self.flag = flag
        self.verbose = verbose

    # GET Param OBJ FOR GIVEN PATTERN
    def get_param(self, pattern, flag):  # -> Param Type
        try:
            return parameterize(pattern, flag)
        except WildmatchError as e:
            if self.verbose:
                from .disp import disp
                param = Param(pattern=pattern, flag=flag)
                param.update(e)
                disp(param)
            raise

    # GET REGX FUNCTION FOR GIVEN Param OBJ
    def get_re(self, param):  # -> Pattern
        try:
            return getattr(
                sre_compile(
                    param.regx,
                    IGNORECASE if (param.flag & 1) else 0
                ),
                param.method
            )
        except sre_error as e:
            if self.verbose:
                from .disp import disp
                e.__class__.__name__ = 'sre_error'
                param.update(e)
                disp(param)
            raise

    # MATCH ONE PATTERN AGAINST A STRING
    def match_one(self, pattern, flag=None):  # -> FunctionType
        flag = self.flag if flag is None else flag
        param = self.get_param(pattern, flag)
        re = self.get_re(param)

        def matcher(text):
            param.text = text
            result = re(decode(text))
            if param.negate:
                if result:
                    param.update()
                else:
                    param.update(NEG_MATCH)
            else:
                param.update(result)
            return param

        return matcher

    # MATCH ANY FROM PATTERNS AGAINST A STRING
    def match_any(self, patterns):  # -> FunctionType
        matchers = [self.match_one(p) for p in patterns]
        hit = None

        def matcher(text):
            nonlocal hit
            param = miss = None
            if hit:
                param = hit(text)
                if param.has_match:
                    return param
                miss, hit = hit, None
            for i, m in enumerate(matchers):
                param = m(text)
                if param.has_match:
                    hit = matchers.pop(i)
                    break
            if miss:
                matchers.insert(0, miss)
            if hit:
                return param

        return matcher

    # MATCH EACH FROM PATTERNS AGAINST A STRING
    def match_many(self, patterns):  # -> FunctionType
        matchers = (self.match_one(p) for p in patterns)
        mmany = lambda text: (m(text) for m in matchers)

        def matcher(text):
            yield from (param for param in mmany(text) if param)

        return matcher
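A hypothetical usage sketch of the Match class above. The matcher-returns-Param behavior and the `has_match` attribute are read off the code itself, but the actual pattern semantics live in pywildmatch._lib, so treat the expected results as assumptions:

# Hypothetical usage (assumes pywildmatch is installed and Param objects
# are truthy on a match, as match_any's filtering implies).
from pywildmatch.match import Match

m = Match(icase=True)
matches_py = m.match_one('src/*.py')       # compile one pattern into a matcher
param = matches_py('src/match.py')         # returns the updated Param object
print(bool(param.has_match))

any_doc = m.match_any(['*.md', '*.rst'])   # first-hit matcher over several patterns
print(any_doc('README.md') is not None)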
package usecase

import (
	"context"
	"time"

	"github.com/meroedu/meroedu/internal/domain"
)

// CourseUseCase ...
type CourseUseCase struct {
	courseRepo     domain.CourseRepository
	userRepo       domain.UserRepository
	lessonRepo     domain.LessonRepository
	attachmentRepo domain.AttachmentRepository
	tagRepo        domain.TagRepository
	categoryRepo   domain.CategoryRepository
	contextTimeOut time.Duration
}

// NewCourseUseCase will create a new CourseUseCase.
func NewCourseUseCase(c domain.CourseRepository, timeout time.Duration) domain.CourseUseCase {
	return &CourseUseCase{
		courseRepo:     c,
		contextTimeOut: timeout,
	}
}

// GetAll ...
func (usecase *CourseUseCase) GetAll(c context.Context, start int, limit int) (res []domain.Course, err error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	// count, err := usecase.courseRepo.GetCourseCount(ctx)
	// log.Info(count)
	res, err = usecase.courseRepo.GetAll(ctx, start, limit)
	if err != nil {
		return nil, err
	}
	return res, nil
}

// GetByID ...
func (usecase *CourseUseCase) GetByID(c context.Context, id int64) (*domain.Course, error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	course, err := usecase.courseRepo.GetByID(ctx, id)
	if err != nil {
		return nil, err
	}
	return course, nil
}

// GetByTitle ...
func (usecase *CourseUseCase) GetByTitle(c context.Context, title string) (*domain.Course, error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	res, err := usecase.courseRepo.GetByTitle(ctx, title)
	if err != nil {
		return nil, err
	}
	return res, nil
}

// CreateCourse ..
func (usecase *CourseUseCase) CreateCourse(c context.Context, course *domain.Course) (err error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	existedCourse, err := usecase.GetByTitle(ctx, course.Title)
	if existedCourse != nil {
		return domain.ErrConflict
	}
	course.UpdatedAt = time.Now()
	course.CreatedAt = time.Now()
	err = usecase.courseRepo.CreateCourse(ctx, course)
	if err != nil {
		return
	}
	return
}

// UpdateCourse ..
func (usecase *CourseUseCase) UpdateCourse(c context.Context, course *domain.Course, id int64) (err error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	existedCourse, err := usecase.GetByID(ctx, id)
	if err != nil {
		return err
	}
	// Fixed: the original compared `&existedCourse` (the address of a local
	// variable, which is never nil) against nil; check the pointer itself.
	if existedCourse == nil {
		return domain.ErrNotFound
	}
	course.ID = id
	course.UpdatedAt = time.Now()
	err = usecase.courseRepo.UpdateCourse(ctx, course)
	if err != nil {
		return
	}
	return
}

// DeleteCourse ...
func (usecase *CourseUseCase) DeleteCourse(c context.Context, id int64) (err error) {
	ctx, cancel := context.WithTimeout(c, usecase.contextTimeOut)
	defer cancel()
	existedCourse, err := usecase.courseRepo.GetByID(ctx, id)
	if err != nil {
		return err
	}
	if existedCourse == nil {
		return domain.ErrNotFound
	}
	return usecase.courseRepo.DeleteCourse(ctx, id)
}
def _load_json_for_keys(self, file: str, *keys: str, directory: str = "") -> dict:
    file_path = path.join(path.dirname(__file__), directory, file)
    json_out, err, trace = load_json(file_path)
    if not err:
        filtered_json = dict((key, json_out[key]) for key in keys if key in json_out)
        self.logger.debug("Loaded json for filtered keys is: %s" % filtered_json)
        return filtered_json
    self.logger.error("Error while loading the json file: %s, Error: %s, Trace: %s" % (file, err, trace))
    return json_out
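The same key-filtering idea, written as a standalone sketch with only the standard library (the `load_json` helper above is project-specific, so this version substitutes plain `json` and `pathlib`; names here are hypothetical):

import json
from pathlib import Path

def load_json_for_keys(file_path, *keys):
    # Standalone sketch: load a JSON object and keep only the requested
    # top-level keys, dropping any that are absent.
    data = json.loads(Path(file_path).read_text())
    return {key: data[key] for key in keys if key in data}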
#!/usr/bin/env python

import json
import subprocess
import sys

# CONFIG VARIABLES FOR DEPLOYMENT
EXEC = './db-search'
SIGNIFICANCE = 0.9

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("error: pass query as a single json string")
        sys.exit(-1)
    qd = json.loads(sys.argv[1])
    if not 'Read' in qd:
        print("error: json must provide array named Read")
        sys.exit(-1)
    query = qd['Read']
    if len(query) < 2:
        print("error: query too short")
        sys.exit(-1)

    CALL = [EXEC] + [str(x) for x in query]
    res = subprocess.check_output(CALL).decode('ascii')
    # Records are separated by '%' lines; fields within a record are tab-separated.
    R = res.split('%\n')[:-1]
    L = [dict(line.split('\t') for line in rec.split('\n')[:-1]) for rec in R]
    for x in L:
        x['Read'] = [float(v) for v in x['Read'].split(' ')[:-1]]
        x['Confidence'] = float(x['Confidence'])
        x['Time'] = float(x['Time'])
    L = [x for x in L if (x['Confidence'] > SIGNIFICANCE)]
    print(json.dumps(L))
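A hypothetical invocation of the wrapper above, assuming a compiled ./db-search binary that emits tab-separated Read/Confidence/Time records terminated by '%' lines, as the parsing implies:

# Example (hypothetical data and file name):
#   $ python search.py '{"Read": [0.12, 0.80, 0.43]}'
# The script forwards the numbers to ./db-search, parses its record
# blocks, and prints only matches with Confidence > SIGNIFICANCE (0.9)
# as a JSON list.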
The unemployment rate among military veterans plunged to a record low 2.7 percent in October, according to data from the Bureau of Labor Statistics (BLS). According to the BLS data, the veteran unemployment rate dropped to 2.7 percent from 4.3 percent in 2016 and was down from nearly 10 percent in 2010. The BLS added that among post-9/11 veterans who served in the Iraq and Afghanistan wars, the unemployment rate hovered around 3.6 percent. In October 2016, the unemployment rate for this group of veterans was 4.7 percent, and in 2009 it was 10.2 percent. The agency noted that the improved numbers come after Hurricanes Harvey and Irma caused a slight uptick in the unemployment rate in September. The White House touted the falling unemployment rate as evidence that President Trump’s plans to revitalize the economy are working. “With nearly 1.5 million new jobs since the president took office, including over 260,000 last month, it’s clear his agenda is putting Americans back to work,” the White House said in a statement. Critics say that the falling unemployment rate points to a less rosy picture of the economy because many Americans have given up on looking for work. The data for the Labor Department’s monthly jobs report comes from a survey of American households breaking down unemployment by each demographic.
Isolation and characterization of β-glucan receptors on human mononuclear phagocytes.

β-glucan receptors, with ligand specificity for yeast and fungal carbohydrate polymers, have been studied as phagocytic receptors of human monocytes. To characterize their structure, binding studies were carried out with human U937 cells and a rabbit IgG anti-Id that recognizes epitopes on monocyte β-glucan receptors. Unstimulated U937 cells specifically bound large amounts of the anti-Id, but almost none of the control anti-isotype. At saturation, the number of anti-Id molecules bound per U937 cell was 2.6 × 10⁶, with an apparent Ka of 1.9 × 10⁷ M⁻¹. Immunoprecipitates from detergent lysates of surface-radioiodinated U937 cells contained only two membrane proteins with antigenic specificity for the anti-Id, one having a mol wt of 180 kD and the other 160 kD. Both proteins were disulfide-linked and presented, after reduction, as five polypeptides of 95, 88, 60, 27, and 20 kD. Detergent lysates of unlabeled U937 cells, purified by affinity chromatography on anti-Id-Sepharose, yielded the same two nonreduced proteins and five reduction products in slab gels stained with Coomassie blue. In Western blots probed with the anti-Id, the most immunoreactive nonreduced and reduced affinity-purified products were the 160 and 20 kD molecules, respectively. Immunoblots of two-dimensional gels showed the 180 and 160 kD proteins to express a common epitope through disulfide linkage to the 20 kD polypeptide. By immunoblot analysis, U937 cell glucan-binding proteins from detergent lysates contained two cell proteins antigenic for the anti-Id that were indistinguishable from affinity-purified molecules in size and subunit composition. Affinity-purified proteins from detergent-lysed human monocytes were characterized by immunoblot analysis and found to be identical to U937 cell β-glucan receptors. They consisted of two disulfide-linked proteins, with mol wt of 180 and 160 kD, and had in common a 20 kD polypeptide with the anti-Id epitope.

The smallest functional unit ligand for human monocyte β-glucan receptors has been isolated from purified yeast glucan and shown by mass spectrometry to be a heptaglucoside. The recent development of rabbit anti-idiotypic antibodies to an Id of a mAb with specificity for the yeast heptaglucoside has provided the first immunologic probe that recognizes epitopes on monocyte β-glucan receptors. The anti-Id specifically binds to human monocytes and selectively blocks their ingestion of zymosan and glucan particles. In the current studies, we determine the relationship between U937 cell proteins with the anti-Id epitope and β-glucan receptors by comparing the proteins eluted from anti-Id-Sepharose to those from yeast glucan particles. Both reagents identify the same two species of molecules in U937 cells, and these are also present in human monocytes.

Cell Culture and Isolation. U937 cells were cultured in 150 cm² tissue culture flasks (Costar Corp., Cambridge, MA) containing RPMI 1640 Medium (Gibco Laboratories, Grand Island, NY) and 10% heat-inactivated (56°C for 30 min) calf serum (Gibco Laboratories). The cell cultures were incubated at 37°C in a humidified atmosphere of 5% CO₂ and harvested during the logarithmic phase of growth by centrifugation. As specified in the text, the cells were washed 3-5 times in HBSS, which lacked calcium, magnesium, and phenol red, or in RPMI.
They were resuspended in buffer or medium, counted on a Coulter counter (Coulter Electronics, Hialeah, FL), and measured for viability by Trypan blue exclusion, which was ≥95%. Human monocytes were isolated from normal citrated and dextran-treated blood, purified by gradient centrifugation on Ficoll-Paque (Pharmacia Fine Chemicals, Piscataway, NJ), washed in HBSS, and resuspended in RPMI containing 1 mg/ml BSA (Miles Laboratories, Elkhart, IN). Monolayers of monocytes were prepared in 60-mm plastic tissue culture dishes (Becton Dickinson and Co., Oxnard, CA); 1.5 ml of 2.2 × 10⁶/ml mononuclear cells were used in each of two layerings. By visual enumeration at ×40 with an inverted phase contrast microscope and a calibrated reticle, 20-35% of the layered mononuclear cells adhered to the dishes. By morphology and nonspecific esterase staining, >95% of the adherent cells were monocytes.

Mouse hybridomas producing IgG2a mAb OEA10 and IgG1 OKM1, with specificities for yeast β-glucans and the α chain of CD11b, respectively, were raised in spinner cultures; the mAb were purified by affinity chromatography with rat mAb AHF5 anti-mouse L chain. The anti-Id, as described in detail, was raised in rabbits immunized with mAb OEA10 and rendered specific for Id by adsorption of mouse serotypic and isotypic determinants before affinity purification by passage and elution from Sepharose-mAb OEA10 with 0.1 M glycine-HCl, pH 2.5. Rabbit anti-mouse IgG2a was eluted from Sepharose-mAb UPC 10. By SDS-PAGE, the purified anti-Id and corresponding anti-isotype contained only IgG. Radiolabeled antibodies were passed through Sephadex G-25 (Pharmacia Fine Chemicals) columns in PBS and 0.02% NaN₃. The specific activity of the radiolabeled antibodies was 1-2 × 10⁶ cpm/µg.

Protein-Coupled Sepharose Beads and Purified Glucan Particles. BSA, OKM1, nonimmune rabbit IgG, anti-isotype, and anti-Id were each coupled in 0.1 M phosphate buffer, pH 7.0, at a concentration of 4 mg of protein/g of activated CH-Sepharose beads (Pharmacia Fine Chemicals), with coupling efficiencies of 75-85%. For anti-CR1, the proportion of protein to beads was reduced by half, and 82% of the protein was covalently bound to the beads. The preparation of purified glucan particles from S. cerevisiae (Fleischmann, E. Hanover, NJ) was the same as that used in previous studies. The yeast were treated sequentially with hot NaOH, hot acetic acid, ethanol, and acetone, and the final product was dried under vacuum. After resuspension, the particles were counted and analyzed for carbohydrate and protein. 1 mg of glucan contained 10⁸ particles, 99% carbohydrate, 0.6% protein, and no neutral sugar other than glucose.

Binding Studies with U937 Cells. Suspensions of 4 × 10⁵ U937 cells, which had been washed in RPMI, were incubated in 0.30 ml of cold RPMI containing 10 mM Hepes, 300 µg of BSA, 40 µg of nonimmune rabbit IgG, and increasing amounts of ¹²⁵I-rabbit IgG anti-Id for 90 min at 4°C, which was sufficient to reach equilibrium. Nonspecific binding was assessed by incubating samples in the presence of 40- to 400-fold molar excess of unlabeled anti-Id. Replicate samples of cells in 0.075 ml were layered on 0.25 ml of a 3:1 mixture of dibutyl/dinonyl phthalate (ICN Biomedicals Inc., Plainview, NY) in 0.5 ml polypropylene microfuge tubes and centrifuged at 8,000 g for 1 min. The tubes were cut, and the pellets and supernatants were measured for cell-bound and free ¹²⁵I-anti-Id, respectively.
The specific binding data were analyzed by the LIGAND computer program to determine the affinity and number of bound molecules per cell at saturation. Binding of ¹²⁵I-rabbit IgG anti-mouse IgG2a (UPC 10) was carried out in a similar manner in the absence and presence of 40-fold excess unlabeled anti-isotype.

Radioiodination and Immunoprecipitation of Surface U937 Cell Proteins. U937 cells, washed five times in HBSS, were surface-labeled for 1 h at 4°C by the incubation of 2.5 × 10⁷ cells in 1 ml of HBSS and 1 mCi of Na¹²⁵I (New England Nuclear) in glass vials coated with 150 µg of IODO-GEN (Pierce Chemical Co.). The labeled cells were centrifuged at 700 g for 4 min at 4°C, washed three times in cold HBSS containing 2 mg/ml BSA and twice in buffer alone, and then lysed for 1 h at 4°C in 8 ml of HBSS containing 1% NP-40, 5 mM DFP, 2 mM PMSF, 1 µM pepstatin, and 1 µM leupeptin (lysis buffer). The lysates were centrifuged at 10,000 g for 1 h at 4°C and the resulting supernatant fractions assessed for radiolabeled protein. Of the original radioiodide, 6.2 ± 1.7% (mean ± SD, n = 7) was incorporated into cells and 2.7 ± 1.0% was precipitable by TCA. For immunoprecipitation, the detergent-soluble materials were incubated for 18 h at 4°C with Sepharose-BSA, and the precleared lysates were sequentially incubated for 1 h at 4°C with 100 µl of the packed protein-coupled Sepharose beads indicated in each study. The beads were washed five times in lysis buffer, treated with 300 µl of 1% SDS for 5 min at 100°C to elute adsorbed proteins, and sedimented by centrifugation at 700 g for 5 min at 25°C. Eluted soluble materials were centrifuged at 14,000 g for 5 min at 10°C, lyophilized, dissolved in Laemmli sample buffer, and subjected to SDS-PAGE as described below. Radioautographs were prepared by exposing dried gels to X-ray film (XAR X-Omat; Eastman Kodak Co., Rochester, NY).

Unlabeled Cell Lysates. Batches of 6.4 ± 2.0 × 10⁸ U937 cells (mean ± SD, n = 21) were harvested and washed four times in HBSS. Pelleted cells were resuspended at a density of 5 × 10⁷ cells/ml of lysis buffer, incubated for 1 h at 4°C with frequent agitation, and stored at -70°C. Immediately before use, the lysates were centrifuged at 10,000 g for 1 h at 4°C to remove detergent-insoluble materials. For monocyte lysates, replicate dishes of buffer-washed adherent cells were each treated with 1.5 ml of lysis buffer and scraped with a disposable cell scraper (Costar Corp.). Examination of the dishes by inverted phase microscopy revealed nuclei but few intact cells. To maximize protein yield, sets of dishes with individual donor monocytes were treated with lysis buffer already containing solubilized cells. The final pooled product was incubated for 1 h at 4°C, stored at -70°C, and clarified by centrifugation before use.

Immunoadsorption and Glucan Binding. For immunoadsorption, detergent-soluble materials of batch-lysed U937 cells and monocytes were precleared as before with Sepharose-BSA, and the precleared products were sequentially incubated with Sepharose beads bearing nonimmune rabbit IgG and anti-Id. The beads were washed and eluted, as described for radiolabeled immunoprecipitates, and the final soluble products were stored at -70°C as lyophilized powders. For glucan-bound materials, replicate samples of detergent-soluble lysates of 6.5 × 10⁷ U937 cells were precleared with Sepharose-BSA and incubated for 4 h at 4°C with 6.5 × 10⁸ glucan particles.
To obtain adequate amounts of protein, washed particles from 3-4 samples were pooled before elution and subsequent lyophilization. For studies in which β-glucan receptors and proteins with the anti-Id epitope were directly compared, parallel samples of immunoadsorbed proteins were prepared in a similar manner.

Immunoaffinity Column Chromatography. For affinity purification of U937 cell proteins, 750-900 ml of detergent-soluble fractions from 3-4 × 10⁹ lysed cells, with 0.02% NaN₃, were chromatographed sequentially on columns of Sepharose 4B (6 × 2.5 cm), nonimmune rabbit IgG-Sepharose (4.5 × 2.5 cm), and anti-Id-Sepharose (4.0 × 2.5 cm) at a flow rate of 20 ml/h at 10°C. Proteins were continuously monitored by OD at 280 nm with an on-line UV detector (Isco, Lincoln, NE). The columns were washed in 500-750 ml of PBS with 0.02% azide at a rate of 35 ml/h. To remove azide and to establish baselines, the anti-Id-Sepharose was washed in 100-150 ml of PBS before elution of bound materials with 0.1 M glycine-HCl, pH 2.5. The manually collected proteins were dialyzed at 4°C against 1 mM PO₄, 7.5 mM NaCl, pH 7.0, lyophilized, dissolved in distilled water, and stored at -70°C. Purification of monocyte proteins was carried out in a similar fashion with 60-90 ml of detergent-soluble fractions from 4-7 × 10⁸ lysed monocytes.

SDS-PAGE. SDS-PAGE was performed as described in 1.5-mm discontinuous slab gels, a 3% gel stacked on a 5-15% polyacrylamide gradient resolving gel. For nonreducing/reducing two-dimensional SDS-PAGE, immunoaffinity-purified proteins were heated in sample buffer with 1% SDS, loaded into 5-mm wells of gels, and electrophoresed. Gel strips, 11 × 2 cm, containing the resolved proteins were excised, incubated at 25°C for 1 h in sample buffer with 1% SDS and 0.1 M DTT, and inserted into a 13-cm sample well of the second gel; prestained standards were loaded into a separate 7-mm well. The running buffer for the second gels contained 0.1 mM sodium thioglycollate.

Immunoblotting. Proteins resolved by SDS-PAGE were transferred onto nitrocellulose, analyzed by the immunoblotting method described with 25 µg/ml of anti-Id and 10⁶ cpm/ml of ¹²⁵I-goat anti-rabbit F(ab')₂, and detected by radioautography on X-ray film. The primary and secondary antibodies were diluted in 0.01 M Tris, 0.15 M NaCl, 0.02% NaN₃, pH 7.4, containing 2% BSA (Tris-BSA). In the absence of anti-Id, blots incubated in Tris-BSA with or without 25 µg/ml of nonimmune rabbit IgG contained no detectable proteins.

Radioautographic Method for Protein Determination. To conserve isolated cell protein, serial dilutions of affinity-purified proteins were spotted in 2.5 µl onto nitrocellulose, detected by direct probing with ¹²⁵I-anti-Id in radioautographs, and quantitated by densitometry with mAb OEA10, the immunogen for the anti-Id, as reference standard. To assess purity, diluted samples on replicate strips were treated in a similar fashion with labeled nonimmune rabbit IgG or goat anti-rabbit F(ab')₂, and the concentrations of detected protein were calibrated against unlabeled anti-Id. The purity of isolated cell proteins with the anti-Id epitope was >95%. Control blots probed with ¹²⁵I-nonimmune IgG showed no cell or reference protein. For three separate preparations, the yields of affinity-purified U937 cell protein were 1.8 ± 1.0 µg (mean ± SD) per 10⁸ lysed cells. Similar concentrations of cell protein were obtained by indirect probing with unlabeled anti-Id and detection with the labeled goat antibody.
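The saturation-binding analysis described above uses the LIGAND program to estimate sites per cell and affinity. The following is a minimal, hypothetical Python sketch of fitting the same one-site model, B = Bmax · L / (Kd + L), to synthetic data; it is not the LIGAND algorithm, and the parameter values are simply the figures reported in this paper's Results:

# Minimal sketch (not the LIGAND program): fit a one-site saturation
# binding model to hypothetical specific-binding data.
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, bmax, kd):
    # Specific binding (molecules/cell) at free ligand concentration L (mol/l).
    return bmax * L / (kd + L)

# Hypothetical titration: free 125I-anti-Id concentrations (M) and bound
# molecules/cell, synthesized from the reported Bmax = 2.6e6 sites/cell
# and Ka = 1.9e7 M^-1 (so Kd = 1/Ka), plus 5% noise.
L = np.array([1e-9, 5e-9, 1e-8, 5e-8, 1e-7, 5e-7])
rng = np.random.default_rng(0)
B = one_site(L, 2.6e6, 1 / 1.9e7) * (1 + 0.05 * rng.standard_normal(L.size))

(bmax, kd), _ = curve_fit(one_site, L, B, p0=(1e6, 1e-8))
print(f"Bmax ~ {bmax:.2e} sites/cell, Ka = 1/Kd ~ {1 / kd:.2e} M^-1")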
Results

Binding of the Anti-Id to U937 Cells. The anti-Id is specific for the Id of mAb OEA10 anti-yeast β-glucans and crossreactive with epitopes found on human monocyte β-glucan receptors. To determine whether comparable epitopes were expressed by U937 cells, preliminary binding studies were carried out with duplicate sample mixtures containing increasing doses of ¹²⁵I-anti-Id or ¹²⁵I-anti-isotype in the absence and presence of 40-fold excess of the corresponding unlabeled antibody. U937 cells exhibited substantial amounts of specific binding of the anti-Id, which approached plateau levels of 8% at an input of 1 µg of labeled antibody, and low levels of binding of the anti-mouse isotype, which remained constant irrespective of dose (data not shown). Binding of the anti-Id to U937 cells was further evaluated with duplicate sample mixtures containing 1 µg of ¹²⁵I-anti-Id and increasing amounts of unlabeled anti-Id or nonimmune IgG, which ranged from 0 to 400 µg. The average percentage of bound ¹²⁵I-anti-Id, initially 7.76%, was progressively decreased by the unlabeled antibody and unaffected by nonimmune IgG (Fig. 1). In the presence of 100 and 200 µg of unlabeled anti-Id, binding by U937 cells was reduced to averages of 0.64 and 0.53%, respectively (Fig. 1, inset). Although the number of apparent binding sites was high, there were no significant differences in the number of surface antigenic sites or their affinity for the anti-Id with increased washing of harvested cells, preincubation of washed cells in HBSS for 2-4 h, or use of ultracentrifuged preparations of anti-Id. These data indicated U937 cells to be a rich source of proteins with the anti-Id epitope and possible specificity for yeast β-glucans.

Immunoprecipitation of Surface-Labeled U937 Cell Proteins. Biochemical studies of proteins with the anti-Id epitope were carried out with radioiodinated intact U937 cells, which were subsequently lysed. Detergent-soluble proteins, sequentially immunoprecipitated by nonimmune rabbit IgG, OKM1, anti-CR1, and anti-Id, were resolved by SDS-PAGE and detected by radioautography. Two membrane proteins were specifically immunoprecipitated by the anti-Id: a prominent species of 180 kD and a slightly less intense molecule of 160 kD (Fig. 2 A). The detection of little or no protein antigenic for OKM1 or anti-CR1 was in agreement with other studies of these receptors on U937 cells. Immunoprecipitations performed without preadsorptions by OKM1 and/or anti-CR1 showed the anti-Id epitope to be restricted to the same two proteins and provided further evidence for the high specificity of the anti-Id. After reduction, the two membrane proteins with the anti-Id epitope showed several faintly detectable radioactive bands but little or no parent molecule. To demonstrate these more clearly, detergent-soluble proteins were prepared from four times as many surface-labeled cells and subjected, as a single batch, to sequential immunoprecipitation with nonimmune IgG- and anti-Id-Sepharose beads. Five prominent reduction products of the two immunospecific membrane proteins were detected: a 95 kD, an 88 kD, a 60 kD, a 27 kD, and a 20 kD (Fig. 2 B). None of these was detected in eluates from the nonimmune IgG-coupled beads.

Immunoaffinity Purification of Detergent-Soluble U937 Cell Proteins.
To determine whether additional cellular proteins contained the anti-Id epitope, soluble proteins from 3-4 × 10⁹ detergent-lysed cells were passaged through columns of nonimmune IgG-Sepharose followed by passage and elution from columns of anti-Id-Sepharose. The affinity-purified proteins were resolved in nonreduced and reduced samples by SDS-PAGE and detected by staining with Coomassie blue. Electrophoretic separation of an estimated 5 µg of purified protein yielded two major molecules of 180 and 160 kD and five prominent reduction products of 95, 88, 60, 27, and 20 kD (Fig. 3). By densitometry, the concentration of the 180 kD protein was approximately two-thirds that of the 160 kD, whereas the concentrations of the five reduction products were nearly equal. In addition to these proteins, nonreduced samples contained two apparent aggregates of high mol wt and a protein of 60 kD, which, as a group, accounted for about 15% of the total stained protein; reduced samples had a 5-7% content of a 160 kD stained protein. Analyses of three preparations of similarly purified proteins showed slightly different proportions of these minor constituents, but no additional molecules with the anti-Id epitope.

Immunoblot Analysis of Affinity-Purified U937 Cell Proteins. For resolution of the 180 and 160 kD proteins and comparison of their reduction products, nonreduced and reduced samples containing 0.4 µg of affinity-purified cell protein were subjected to SDS-PAGE, electrophoretically transferred onto nitrocellulose, and probed with the anti-Id. Under these conditions, the 180 and 160 kD proteins were clearly reactive with the anti-Id, but the only detectable protein after reduction was the 20 kD polypeptide (Fig. 4). [Fig. 4 legend, recovered from page residue: blots were detected by radioautography with ¹²⁵I-goat anti-rabbit F(ab')₂, with the film exposed for 6 d; in duplicate blots, none of the proteins was reactive with nonimmune rabbit IgG or the labeled goat antibody; mobility and size (kD) of prestained standards are indicated.] Neither these molecules nor the apparent aggregates showed reactivity with nonimmune IgG or ¹²⁵I-goat anti-rabbit F(ab')₂ (data not shown). Reactivity of the anti-Id with the 160 kD protein was always greater than that of the 180 kD. This was further confirmed by immunoblot analysis of two-dimensional gels. For this analysis, 4.8 µg of nonreduced affinity-purified protein were resolved in the first gel. These proteins were reduced in the second dimension, transferred onto nitrocellulose, and probed with the anti-Id in immunoblots. The anti-Id detected the 20 kD polypeptide, as found previously, and further demonstrated that this subunit component was a constituent of each of the nonreduced proteins, including all of the aggregated proteins (Fig. 5). Smaller amounts of reduced polypeptides of 95, 60, and 27 kD were also detectable, indicating that the anti-Id epitope was not limited to the 20 kD polypeptide.

Identification and Characterization of U937 Cell β-Glucan Receptors. To determine whether U937 cells had β-glucan receptors reactive with the anti-Id, samples of glucan particles were incubated with detergent-soluble proteins at a particle-to-cell ratio of 10:1, washed, and the eluted materials were analyzed in immunoblots. For comparison, soluble proteins from the same batches and numbers of lysed cells were immunoadsorbed with an equal ratio of packed anti-Id-Sepharose beads; nonreduced and reduced samples each containing half of the eluted proteins were analyzed concurrently with the glucan-derived materials.
Under these conditions, the proteins eluted from anti-Id-Sepharose beads were markedly overloaded in nonreduced samples; however, all were clearly resolved by reduction and demonstrated to contain abundant quantities of the 20 kD polypeptide (Fig. 6, lanes 1 and 3). Two distinct glucan-binding proteins were identified with the anti-Id, a minor protein of 180 kD and a major molecule of 160 kD; one prominent polypeptide of 20 kD was detected after reduction of equal amounts of protein (Fig. 6, lanes 2 and 4). Control eluates from the same numbers of pooled buffer-treated glucan particles (2 × 10⁹) contained no detectable protein in immunoblots probed directly or indirectly with either nonimmune IgG or anti-Id (data not shown). Despite the presence of large amounts of protein, none of the proteins bearing the anti-Id epitope from antibody- or glucan-derived samples was detected in duplicate blots probed with ¹²⁵I-goat anti-rabbit IgG with or without nonimmune IgG.

Identification and Characterization of Monocyte β-Glucan Receptors. To determine the molecular nature of human monocyte β-glucan receptors reactive with the anti-Id, detergent-soluble proteins in 3-7 × 10⁷ adherent cells from individual monocyte donors were purified by adsorption of cell proteins to nonimmune rabbit IgG-Sepharose beads before passage and elution from anti-Id-Sepharose. The eluted proteins from both types of beads were resolved by SDS-PAGE and analyzed by immunoblotting with the anti-Id. Two monocyte proteins with mol wt of 180 and 160 kD and apparent aggregates of these proteins bound the anti-Id (Fig. 7 A, lane 2). None of these species had specificity for nonimmune rabbit IgG (Fig. 7 A, lane 1). Monocytes prepared from four separate donors always demonstrated a dominant band of 160 kD and, in one case, this was the only detectable monocyte protein reactive with the anti-Id. For quantitative comparison, the experiments designed to demonstrate the presence and structural properties of U937 cell β-glucan receptors (Fig. 6) were repeated, but the amounts of immunodetectable 160 kD protein in the antibody- and glucan-derived samples were normalized to each other and to the corresponding monocyte product. Electrophoretic separation of 0.1 µg and 3.0 µg of cell protein in antibody- and glucan-derived samples, respectively, yielded amounts of immunodetectable 160 kD proteins, which were similar (Fig. 7 B) and comparable to the depicted monocyte product (Fig. 7 A, lane 2).

Figure 5. Subunit localization of the anti-Id epitope in purified U937 cell proteins. Two-dimensional SDS-PAGE was performed with 4.8 µg of affinity-purified U937 cell protein under nonreducing (NR) conditions in the first dimension and reducing (R) conditions in the second. Proteins were immunoblotted with the anti-Id and detected by radioautography, as described for Fig. 4, after 17 h of exposure. Immunoblot analysis of three two-dimensional gels showed the same polypeptides with the anti-Id epitope; control blots revealed no molecules with specificity for nonimmune rabbit IgG or the labeled goat antibody.

Figure 6. Identification and structure of U937 cell β-glucan receptors. Glucan-binding proteins from 2 × 10⁸ detergent-lysed U937 cells were evenly divided in nonreduced (lane 2) and reduced (lane 4) samples, resolved by SDS-PAGE, and analyzed in immunoblots probed with the anti-Id as described in Fig. 4; the film was exposed for 6 d. For comparison, soluble proteins from the same number of detergent-lysed cells were concurrently immunoadsorbed with an equal ratio of packed anti-Id-Sepharose beads and analyzed in immunoblots of nonreduced (lane 1) and reduced (lane 3) samples; the film was exposed for 17 h. The glucan- and antibody-derived materials were run in the same slab gel and the results are representative of five analyses.

In terms of cell number, these data suggested that monocytes contained 20-40 times fewer β-glucan receptors than U937 cells. To determine whether the structural properties of β-glucan receptors in monocytes and U937 cells were similar, detergent-soluble proteins from 4-7 × 10⁸ monocytes were immunopurified by column chromatography and compared, in reduced samples, to column-purified U937 cell proteins. Duplicate samples, each containing about 0.4 µg of monocyte and 0.8 µg of U937 cell protein, were subjected to SDS-PAGE and immunoblot analysis with nonimmune IgG or anti-Id. A reduction product of 20 kD was the only monocyte and the major U937 cell polypeptide detected by the anti-Id (Fig. 8). [Fig. 8 legend, recovered from page residue: detection as described for Fig. 6; the radioautograph was developed after 21 h of exposure; a monocyte polypeptide of 95 kD was the only additional species detectable in films exposed for 4 d; control blots revealed no molecules with specificity for nonimmune rabbit IgG or ¹²⁵I-goat anti-rabbit F(ab')₂; the data are representative of three analyses.] When radioautography was extended from 21 h to 4 d, an additional monocyte polypeptide of 95 kD was detected. Regardless of exposure time, blots probed with nonimmune rabbit IgG were always negative (data not shown).

Discussion

The present studies demonstrate the molecular nature of β-glucan receptors on human mononuclear phagocytes and are the first to characterize the structure of these biochemical entities. The anti-idiotypic antibody, previously shown to bind to and block function of human monocyte β-glucan receptors, provided a means to identify and isolate receptors which initiate phagocytosis of particulate yeast glucan. The availability of a human myelomonocytic cell line provided an alternative to obtaining the large numbers of peripheral blood monocytes required to carry out detailed molecular studies of β-glucan receptors. U937 cells were found to be a suitable cultured source of cells that expressed surface materials antigenic for the anti-Id but not for the corresponding anti-isotype present in the same rabbit antisera before purification of the anti-Id (text). Uptake of radiolabeled anti-Id was saturable at levels of 93-95% by unlabeled anti-Id but was unaffected by the same inputs of nonimmune IgG (Fig. 1). Calculations based on the amounts of IgG specifically bound revealed that 2.6-5.2 × 10⁶ constitutive surface molecules were present on each U937 cell; these had an apparent affinity of 1.9 × 10⁷ M⁻¹ for the anti-Id. Even when consideration was given to these values being derived for logarithmically growing leukemic cells, the data indicated an unexpectedly high number of receptors. Examination of surface-radioiodinated U937 cells demonstrated that the anti-Id epitope was found on two plasma membrane proteins of 180 and 160 kD (Fig. 2 A). Both of these proteins disappeared with reduction, and five dominant reduction products of 95, 88, 60, 27, and 20 kD (Fig. 2 B, lane 2) were present. Under reducing and nonreducing conditions, the only other radiolabeled proteins detected were two minor constituents.
Neither of these was dependent on the specificity of the anti-Id, as shown by their binding to nonimmune IgG (Fig. 2 B, lane 1). The larger protein of 72 kD was probably the high-affinity IgG FcR I and the smaller one of 40 kD was, in all likelihood, cytoskeletal actin nonspecifically bound to IgG. Analysis of total U937 cell protein failed to identify additional molecules reactive with the anti-Id (Fig. 3). The 180- and 160-kD proteins, which were both complexes of several disulfide-linked polypeptides, accounted for 85-90% of the protein purified by affinity column chromatography; the remainder was nearly equally divided among protein aggregates of at least two sizes and a 60-kD protein. The two major proteins were reduced to five polypeptides of 95, 88, 60, 27, and 20 kD and these accounted for 95% of the total sample. Immunoblots of column-purified (Fig. 4) and immunoadsorbed (Fig. 7 B) materials indicated the anti-Id epitope to be more prevalent on the 160-kD than on the 180-kD protein. Immunoblots bearing larger amounts of column-purified (Fig. 8, lane 2) and immunoadsorbed (Fig. 6, lane 3) materials indicated that each of the five reduction products expressed the anti-Id epitope, with the epitope density always being significantly higher for the 20-kD polypeptide. Elution of immunoadsorbed materials with hot SDS was more efficient in removing firmly bound molecules from solid-phase beads, as evidenced by the small amounts of H and L chains of IgG and a prominent 40-kD band which was likely a dimer of the 20-kD (Fig. 6, lane 3). The U937 cell was identified as a cell type having β-glucan receptors by first incubating detergent-soluble proteins with glucan particles and then detecting glucan-bound proteins by immunoblotting with the anti-Id. The glucan-bound proteins were virtually identical to the proteins adsorbed and immunochemically detected with the anti-Id. The glucan-derived samples contained a dominant 160-kD protein, a minor 180-kD species, and two apparent aggregates. All of these disappeared with reduction and a 20-kD subunit presented as the reduced molecule with the greatest immunoreactivity (Fig. 6, lanes 2 and 4). For immunoblots bearing nearly equal amounts of detectable antibody- and glucan-derived protein, the anti-Id was most reactive with the 160-kD proteins in both types of samples (Fig. 7 B) and, in each case, with the 20-kD reduction product (data not shown). That the glucan-derived proteins were β-glucan receptors of the U937 cells was further supported by immunoblotting eluates of buffer-treated glucan particles which, despite the efficient removal of bound materials, contained no proteins reactive with the anti-Id. U937 cells share many surface characteristics with normal human monocytes including structurally equivalent forms of several ligand-specific receptors: IgG FcR I (CD64) and II (CD32), which are both single-chain molecules; two species of heterodimeric fibronectin receptors, one of which has been shown with monocytes to be identical to the fibroblast receptor (very late antigen 5); and three noncovalent heterodimers of the leukocyte adhesion family, lymphocyte function-associated antigen 1 (CD11a), Mac-1/Mol (CD11b), and p150,95 (CD11c), which share a common β-subunit (CD18). Data obtained from the current studies of U937 cell β-glucan receptors were strikingly similar to monocyte proteins immunopurified with the anti-Id.
Detergent-soluble monocyte proteins which were immunoadsorbed and subsequently characterized with the anti-Id contained a major protein of 160 kD, a minor molecule of 180 kD, and two minor apparent aggregates (Fig. 7 A, lane 2). Each of these molecules was composed of several disulfide-
Presentation Sisters The Presentation Sisters, officially the Sisters of the Presentation of the Blessed Virgin Mary, are a religious institute of Roman Catholic women founded in Cork, Ireland, by Venerable Nano Nagle in 1775. The Sisters of the congregation use the postnominal initials P.B.V.M. The Presentation Sisters' mission is to help the poor and needy around the world. Historically, the Sisters focused their energies on creating and staffing schools that would educate young people, especially young ladies. Most of these schools are still in operation and can be found across the globe. The Presentation Sisters are located in 24 countries including Antigua, Australia, Bolivia, Canada, Chile, Colombia, Commonwealth of Dominica, Ecuador, Great Britain, Guatemala, India, Ireland, Israel, New Zealand, Nicaragua, Pakistan, Papua New Guinea, Peru, Philippines, Slovakia, Thailand, United States of America, Zambia and Zimbabwe. Beginnings Honora (Nano) Nagle was born in Ballygriffin, Cork, Ireland in 1718. Her wealthy Catholic family provided her the advantage of an education in France, at a time when the law precluded the less advantaged from education in Ireland. In 1775, Nagle entered with some companions on a novitiate for the religious life. With them, she received the habit on 29 June 1776, taking the name of Mother Mary of St. John of God. They made their first annual vows 24 June 1777. The foundress had begun the erection of a convent close to that which she had built for the Ursulines, and it was opened on Christmas Day, 1775. They adopted as their title the Society of Charitable Instruction of the Sacred Heart of Jesus, which was changed in 1791 to that of "Presentation Sisters". Their habit was similar to that of the Ursulines. As the schools of the Presentation Sisters developed, Nagle is quoted as having said of them: "I can assure you my schools are beginning to be of service to a great many parts of the world... I often think they will not bring me to heaven as I only take delight and pleasure in them." Institutional development The second superioress was Mother Mary Angela Collins. Soon after her succession a set of rules, adapted from that of St. Augustine, was drawn up by Bishop Moylan, and approved by Pope Pius VI in September 1791. This congregation of teaching Sisters itself was given formal approval by Pope Pius VII in 1800. Communities from Cork were founded at Killarney in 1793; Dublin in 1794; and at Waterford in 1798. A second convent at Cork was established in 1799, by Sister M. Patrick Fitzgerald; and a convent at Kilkenny in 1800, by Sister M. Joseph McLoughlan. The schools, regulated at the time by a United Kingdom Government board, had for their first object the Catholic and moral training of the young, which was not interfered with by the government. The secular system followed was the "National", superseded, in many cases, by the "Intermediate", both of which ensured a sound education in English; to these were added domestic economy, Latin, Irish, French, and German. The average attendance of children in each of the city convents of Dublin, Cork, and Limerick was over 1,200; that in the country convents between 300 and 400, making a total of 22,200 who received an excellent education without charge. For girls who needed to support themselves by earning a living, work-rooms were established at Cork, Youghal, and other places, where Limerick lace, Irish points and crochet were taught. 
In 1802, the Sisters' example inspired the formation of the Presentation Brothers. In 1833 a house was founded by Mother Josephine Sargeant from Clonmel at Manchester, England, from which sprang two more, one at Buxton St Anne's and one at Matlock St Joseph's. The schools were well attended; the number of children, including those of an orphanage, being about 1,400. India received its first foundation in 1841, when Mother Xavier Kearney and some Sisters from Rahan and Mullingar established themselves at Madras. Soon four more convents in the Madras presidency were founded from this, and in 1891 one at Rawal Pindi. These schools comprised orphanages, and day and boarding-schools, both for Europeans and local children. In the 20th century, foundations were established in Africa (Zimbabwe, 1949; Zambia, 1970) and New Zealand (1951). The first of a new wave of foundations from Ireland in the USA began in Texas (San Antonio, 1952), followed by foundations in the Philippines (1960), South America (Chile, 1982; Ecuador, 1983; Peru, 1993); Slovakia (1992); and Thailand (1999). Organization Communities of Presentation Sisters exist throughout the world. However, historical and legal factors caused these communities to develop and operate as autonomous groups. Each community is independent of the motherhouse, and subject only to its own superioress and the bishop of its respective diocese. A large proportion of these communities are today more closely united within the Union of Sisters of the Presentation of the Blessed Virgin Mary, created by papal decree on 19 July 1976. Today, more than 1,600 Sisters pursue work in education and relief of the poor on every continent. International Presentation Association (IPA) The International Presentation Association was established in 1988 as a network of the various congregations of PBVM women, including the Union of Presentation Sisters, the Conference of Presentation Sisters of North America, and the Australian Society. The goal of the IPA is to foster unity and to enable collaboration for the sake of mission. The IPA has NGO consultant status with the UN Economic and Social Council. Conference of Presentation Sisters of North America (CPS) The Conference of Presentation Sisters of North America began in August 1953 under the title of the "North American Conference", when several Presentation communities in North America began to collaborate and communicate on issues of ministry, spirituality and social justice. All of these communities claim their origins from Nano Nagle. In 2002, the North American Conference included eight communities, and changed its name to CPS. Together the eight communities established a collaborative ministry project in New Orleans called "Lantern Light". St. John's, Newfoundland The first Presentation Convent in the Americas was founded in Newfoundland in 1833 at the request of Bishop Michael Anthony Fleming, Vicar Apostolic of the island. The convent and a neighboring school were established in St. John's, Newfoundland, by Mother Mary Bernard Kirwan accompanied by Sisters Mary Xavier Molony, Mary Magdalen O’Shaughnessy, and Mary Xaverius Lynch. The motherhouse was established adjacent to the Basilica of St. John the Baptist. As of 2019, the congregation was serving twelve ministry locations in Newfoundland. San Francisco, California In November 1854, five Presentation Sisters arrived in San Francisco from Ireland at the invitation of Archbishop Joseph Sadoc Alemany. Mother M. 
Joseph Cronin was appointed as the community's first superior; but due to unforeseen circumstances, she returned to Ireland in 1855 with two other members of the small community, Sisters Clare Duggan and Augustine Keane. The remaining Sisters were Mother Mary Teresa Comerford, who assumed the role as new superior, Mother Xavier Daly, and their first postulant, Mary Cassian. The Sisters had great difficulties in their early founding years; but succeeded in interesting prominent Catholics of the city in their work. By 1900, the San Francisco Presentation foundation established two convents and schools within the city limits named Presentation High School, San Francisco, and one in Berkeley, California named Presentation High School, Berkeley. They also staffed schools in Gilroy and Sonoma, California. The Presentation Sisters opened San Francisco's School of the Epiphany in 1938, and Menlo Park's Nativity Catholic School in 1956. Presentation High School San Francisco was an all-girls school. The main building was built in 1930 at 2340 Turk Street. In 1991 the building became University of San Francisco's Education Building. In nearby San Jose, California, the Presentation Sisters opened Presentation High School in 1962. The school still operates as an all-girls Catholic high school. In Sacramento, California, the Sisters staffed a pair of K–8 schools for 30 years each: Presentation School during 1961–1991, and Saint Mary School during 1969–1999. Dubuque, Iowa The congregation was introduced into the Diocese of Dubuque by Mother Mary Vincent Hennessey in 1874. By 1913, the congregation had established ten branch-houses in neighboring Nebraska. Staten Island, New York The Presentation Convent of St. Michael's Church (New York City) was founded on 8 September 1874, by Mother Joseph Hickey of the Presentation Convent, Terenure, County Dublin, with two Sisters from that convent, two from Clondalkin, one from Tuam, and five postulants. Father Arthur J. Donnelly, the founding pastor of St. Michael's Church as its school building neared completion, went to Ireland in February 1874 to invite the Presentation Sisters to take charge of the girls' department. Upon the Sisters' agreeing, Paul Cardinal Cullen, Archbishop of Dublin, applied to the Holy See for the necessary authorization for the Sisters to leave Ireland and proceed to New York, which was accorded by Pope Pius IX. In 1884, the Sisters took charge of St. Michael's Home, Greenridge, Staten Island, where soon over two hundred destitute children were cared for. This became the home of the newly established Sisters of the Presentation of Staten Island, which became its own congregation on 1 May 1890. (Others from the early New York community developed into today's Presentation Sisters of New Windsor.) In 1921–1922, the Staten Island congregation began educating young local students at St. Ann's Church, St. Clare's Church, and Our Lady Help of Christians. By the 1950s, a dozen locations on Staten Island were served by more than 125 Sisters, larger than any other Presentation community in their first two centuries. In the 1960s, they were instrumental in establishing Countess Moore High School. Founded in 1962 as an all-girls school, in September 1969 it became co-educational and later changed its name to Moore Catholic High School. In 1945, the Staten Island motherhouse moved from St. Michael's Home in Greenridge to the former "Horrmann Castle" atop Grymes Hill, and finally in 1965 to a new building next to the old Greenridge property. 
Fargo, North Dakota The Fargo, North Dakota community was established in 1880 under Mother Mary John Hughes, and took charge of a free school, home, and academy. Fargo's Presentation Sisters merged into the Union (U.S. Province) in 2013. Aberdeen, South Dakota In 1886 some Sisters from Fargo went to Aberdeen, South Dakota, and, under the guidance of Mother M. Joseph Butler, took charge of schools at Bridgewater, Bristol, Chamberlain, Elkton, Jefferson, Mitchell, Milbank, and Woonsocket, as well as two hospitals. In 1922, what is now called Presentation College opened in Aberdeen. The college primarily educated nurses for the northern portion of South Dakota. New Windsor, New York In 1886 Mother Magdalen Keating, with a small group of Sisters, left New York at the invitation of the Rev. P. J. Garrigan (later Bishop of Sioux City, Iowa), to take charge of the schools of St. Bernard's Parish, Fitchburg, Massachusetts. The mission flourished and established other foundations in West Fitchburg and Clinton, Massachusetts; Central Falls, Rhode Island; and Berlin, New Hampshire. In 1997, the Sisters of the Presentation of Fitchburg, Massachusetts, and the Sisters of the Presentation of Newburgh, New York, united to form one congregation, now based in New Windsor, New York. Watervliet, New York The Presentation Sisters of Watervliet, New York established their community in 1881. They elected not to join the Conference of Presentation Sisters of North America, and Watervliet remains an independent congregation.
/* Internal function to reset the locale manager window... * so we can be sure the node it is operating on is still valid. */ void locale_manager_reset(char *val) { if(hwndLocale) { HWND combo = (HWND)dw_window_get_data(hwndLocale, "combo"); HWND entry = (HWND)dw_window_get_data(hwndLocale, "entry"); HWND def = (HWND)dw_window_get_data(hwndLocale, "default"); locale_manager_update(); if(def) dw_window_set_text(def, val ? val : ""); if(entry) dw_window_set_text(entry, ""); if(combo) dw_window_set_text(combo, ""); dw_window_set_data(hwndLocale, "selected", NULL); dw_window_set_data(hwndLocale, "node", NULL); } }
The Center for Food Safety heralded reports that Michael Dourson, President Trump's controversial nominee to lead the U.S. Environmental Protection Agency's (EPA) Office of Chemical Safety and Pollution Prevention, on Wednesday withdrew his nomination after senators raised concerns over his past work and conflicts of interest. "Dourson is a long-time pesticide industry shill, with a history of manipulating scientific research to benefit corporate special interests. He was a dangerous, irresponsible choice to oversee chemical safety at the EPA," said Andrew Kimbrell, executive director of the Center for Food Safety. "The Senate correctly raised important conflict of interest concerns based on the public's outcry. Make no mistake: This is your victory, food movement. The Trump Administration should now move forward to nominate someone who will put public and environmental health over the profits of chemical companies." If confirmed, Dourson would have been in a position to set safety levels for many of the same chemicals his company was hired to defend. Multinational chemical companies like Dow, Monsanto and DuPont have routinely hired Dourson to downplay the effects of highly toxic substances linked to birth defects, developmental problems and cancer. For years, Dourson has accepted payments for "criticizing studies that raised concerns about the safety of his clients' products," according to a review of financial records and his published work by The Associated Press. In fact, Dourson has spent much of his career helping companies fight restrictions on their toxic products. Dourson even went so far as to assert that children are less susceptible to toxic chemicals, despite the widespread, verified scientific consensus that children are more susceptible. At his committee hearing, Dourson's questionable track record and refusal to commit to recusing himself from working on chemicals he's been paid by industry to "study" in the past led Sen. Ed Markey (D-MA) to tell Dourson, "You're not just an outlier on this science, you're outrageous in how far from the mainstream of science you actually are. It's pretty clear you have never met a chemical you didn't like." "Dourson's withdrawal is another victory for the American people, sound science and the rule of law," said Kimbrell. "The food movement needs to continue to speak loudly against this Administration's efforts to promote corporate profits over the protection of farmers, food safety, public health and the environment." Dourson is the second controversial nominee to withdraw in as many months. Sam Clovis, a former Trump campaign aide, climate change "skeptic" and conservative radio talk show host, withdrew from consideration for Chief Scientist at USDA after months of opposition and public outcry about his lack of qualifications and biases. Clovis was also linked to the ongoing investigation into Russia's interference with the 2016 presidential election.
package seedu.address.storage;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static seedu.address.storage.earnings.JsonAdaptedEarnings.MISSING_FIELD_MESSAGE_FORMAT;
import static seedu.address.testutil.Assert.assertThrows;
import static seedu.address.testutil.TypicalEarnings.CS2107_EARNINGS;

import org.junit.jupiter.api.Test;

import seedu.address.commons.exceptions.IllegalValueException;
import seedu.address.model.classid.ClassId;
import seedu.address.model.earnings.Amount;
import seedu.address.model.earnings.Claim;
import seedu.address.model.earnings.Count;
import seedu.address.model.earnings.Date;
import seedu.address.model.earnings.Type;
import seedu.address.storage.earnings.JsonAdaptedEarnings;

public class JsonAdaptedEarningsTest {
    private static final String INVALID_DATE = "523/23-2033";
    private static final String INVALID_TYPE = "meeting+consultation";
    private static final String INVALID_CLASSID = " ";
    private static final String INVALID_AMOUNT = "23.241";
    private static final String INVALID_CLAIM = "waiting";
    private static final String INVALID_COUNT = "15";

    private static final String VALID_DATE = CS2107_EARNINGS.getDate().toString();
    private static final String VALID_TYPE = CS2107_EARNINGS.getType().toString();
    private static final String VALID_CLASSID = CS2107_EARNINGS.getClassId().toString();
    private static final String VALID_AMOUNT = CS2107_EARNINGS.getAmount().toString();
    private static final String VALID_CLAIM = CS2107_EARNINGS.getClaim().toString();
    private static final String VALID_COUNT = CS2107_EARNINGS.getCount().toString();

    @Test
    public void toModelType_validEarningsDetails_returnsEarnings() throws Exception {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(CS2107_EARNINGS);
        assertEquals(CS2107_EARNINGS, earnings.toModelType());
    }

    @Test
    public void toModelType_invalidDate_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(INVALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = Date.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullDate_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(null, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, Date.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_invalidType_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, INVALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = Type.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullType_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, null, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, Type.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_invalidClassId_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, INVALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = ClassId.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullClassId_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, null,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, ClassId.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_invalidAmount_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                INVALID_AMOUNT, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = Amount.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullAmount_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                null, VALID_TYPE, VALID_CLAIM, VALID_COUNT);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, Amount.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_invalidClaim_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, INVALID_CLAIM, VALID_COUNT);
        String expectedMessage = Claim.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullClaim_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, null, VALID_COUNT);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, Claim.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_invalidCount_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, INVALID_COUNT);
        String expectedMessage = Count.MESSAGE_CONSTRAINTS;
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }

    @Test
    public void toModelType_nullCount_throwsIllegalValueException() {
        JsonAdaptedEarnings earnings = new JsonAdaptedEarnings(VALID_DATE, VALID_CLASSID,
                VALID_AMOUNT, VALID_TYPE, VALID_CLAIM, null);
        String expectedMessage = String.format(MISSING_FIELD_MESSAGE_FORMAT, Count.class.getSimpleName());
        assertThrows(IllegalValueException.class, expectedMessage, earnings::toModelType);
    }
}
#include <stdint.h>

/* Compares two strings: returns 0 if they are equal, otherwise the
 * difference between the first pair of differing characters. */
int strcmp(char str1[], char str2[])
{
    uint32_t i;

    for (i = 0; str1[i] == str2[i]; i++) {
        if (str1[i] == '\0')
            return 0;
    }
    return str1[i] - str2[i];
}
Truly one of the brightest living diamonds, Keanu Reeves, has written a book that "explores the real and symbolic nature of the shadow as image and figure of speech." No one knows more about shadows than Keanu Reeves; I say this in complete earnest. The book, appropriately called Shadows, was published by Steidl, and is a collaborative effort with visual artist Alexandra Grant, whose renderings of shadows are complemented by Reeves' brooding, rhetorical questions and proclamations...from the perspective of said shadows. In a description of the book, Steidl made this mouthful of a statement: "What exactly is a shadow? Is it light tracing an object or the shape a body throws when it comes between a light source and a surface? Is it a metaphor for the intimate, darker side of a person's nature, the unconscious side of one's self, where daemons and secrets are kept hidden or repressed? Is it an allegorical place or state of being, somewhere between darkness and light, living and dying?" Well, damn. Here's an offering: You can order Shadows now. I'm going to be in a fetal position clutching a photo of My Own Private Idaho-era Keanu for the rest of the day.
Determining Short Fiber Content in Cotton

Based on large data sets from three consecutive cotton crop years, linear models for SFW (short fiber content by weight) and SFN (short fiber content by number) in terms of the HVI length parameters have been developed. The necessity of modifying the Suter-Webb array distributions is justified. The results are discussed in light of the concept of "similarity" related to fiber length distributions, as defined in Part I. Using normalized regression equations, UI is demonstrated to have a stronger influence on SFC than the range parameters (UHM, ML). The terms UI, UHM, ML are standard measurements of the HVI instruments and have been defined in the text. The models developed in this study have been compared with the one developed by Preysch in 1979, and the relative improvement over Preysch's method is discussed.
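The abstract does not reproduce its regression equations, but the idea behind "normalized regression equations" is easy to illustrate. The sketch below is not from the paper: the data are invented and the fitted values mean nothing; it only shows how z-scoring UI, UHM, and ML makes their relative influence on SFW directly comparable through standardized coefficients.

import numpy as np

# Hypothetical HVI measurements (UI in %, UHM and ML in inches) and SFW in %.
# Illustrative values only; the paper's data sets are not reproduced here.
X = np.array([
    [80.5, 1.08, 0.88],
    [81.2, 1.10, 0.90],
    [79.8, 1.05, 0.85],
    [82.0, 1.12, 0.92],
    [78.9, 1.02, 0.82],
    [83.1, 1.15, 0.95],
    [80.0, 1.06, 0.86],
    [81.7, 1.11, 0.91],
])
y = np.array([9.5, 8.8, 10.4, 8.0, 11.2, 7.1, 10.0, 8.3])

# z-score predictors and response so the fitted coefficients are comparable.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()

# Ordinary least squares on the standardized variables (intercept ~0 by construction).
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(yz)), Xz]), yz, rcond=None)
for name, coef in zip(("UI", "UHM", "ML"), beta[1:]):
    print(f"{name}: standardized coefficient = {coef:+.2f}")

On standardized variables, the coefficient with the largest absolute value identifies the most influential predictor, which is the sense in which the paper compares UI against the range parameters.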
import json
import os
from typing import Dict, List, Union

import requests

# `esc` (terminal escape-code constants), `img_dl` (the download helper) and
# `parser` (an argparse.ArgumentParser) are project-local names assumed to be
# defined elsewhere in this module; a possible parser setup is sketched below.


def download_images() -> None:
    def print_error_message(msg: str, origin: str) -> None:
        print(f'{esc.RED}{esc.BOLD}{msg}{esc.RESET}',
              f'[{esc.ITALIC}{origin}{esc.RESET}]')

    args = parser.parse_args()
    success: List[str] = []
    failure: List[str] = []
    url_count: int = 0
    try:
        urls_file: str = os.path.abspath(args.urls_file)
        download_dir: str = os.path.abspath(args.dest_dir)
        if os.path.exists(download_dir) and not os.path.isdir(download_dir):
            raise NotADirectoryError
        elif not os.path.exists(download_dir):
            os.makedirs(download_dir)
        with open(urls_file, 'r') as urls:
            for url in urls.readlines():
                url = url.strip()
                if args.verbose:
                    print(f'{esc.GREEN}Downloading{esc.RESET}',
                          f'{esc.ITALIC}{url}{esc.RESET}')
                if img_dl.download_image(url, download_dir):
                    success.append(url)
                    print(f'{esc.GREEN}{esc.BOLD}Download successful{esc.RESET}\n')
                else:
                    failure.append(url)
                    print(f'{esc.RED}{esc.BOLD}Download failed{esc.RESET}\n')
                url_count += 1
        # Write a JSON summary of the run next to the downloaded images.
        json_object: Dict[str, Union[int, List[str]]] = {
            'URLs processed': url_count,
            'Downloads succeeded': len(success),
            'Downloads failed': len(failure),
            'Failed downloads': failure,
            'Successful downloads': success
        }
        log_file: str = os.path.join(download_dir, 'result.json')
        with open(log_file, 'w+') as result:
            json.dump(json_object, result, indent=4)
    except NotADirectoryError:
        print_error_message('ERROR: Invalid destination directory', download_dir)
    except FileNotFoundError:
        print_error_message("ERROR: The URL file provided doesn't exist", urls_file)
    except PermissionError:
        print_error_message("ERROR: You can't write to this directory", download_dir)
    except requests.exceptions.MissingSchema:
        print_error_message('ERROR: Invalid URLs provided', urls_file)
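The function reads three attributes from `args`: `urls_file`, `dest_dir`, and `verbose`. A minimal `argparse` setup consistent with that usage might look like the following; the flag names and help strings are reconstructed from the code above, not taken from the original project:

import argparse

# Hypothetical reconstruction: only the attribute names are grounded in the function above.
parser = argparse.ArgumentParser(description='Download images listed in a text file.')
parser.add_argument('urls_file', help='path to a file with one image URL per line')
parser.add_argument('dest_dir', help='directory to save downloaded images into')
parser.add_argument('-v', '--verbose', action='store_true',
                    help='print each URL as it is downloaded')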
# Decodes HTML-encoded text by round-tripping it through the browser's
# DOMParser. `jseval` is assumed to be a Python-to-JavaScript bridge
# (e.g., in a Pyodide/Brython-style environment); when no bridge is
# available, the text is returned unchanged. The parser instance is
# cached in a module-level variable after the first call.
__domParser = None


def domConvertEncodedText(txt):
    global __domParser
    if jseval is None:
        return txt
    if __domParser is None:
        __domParser = jseval("new DOMParser")
    dom = __domParser.parseFromString("<!doctype html><body>" + str(txt),
                                      "text/html")
    return dom.body.textContent
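Outside a browser-bridged environment, plain CPython can decode most HTML character references with the standard library. This is an alternative technique, not what the function above does:

import html

# html.unescape resolves named and numeric character references.
print(html.unescape("3 &lt; 5 &amp;&amp; 5 &gt; 3"))  # prints: 3 < 5 && 5 > 3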
import numpy as np

# Method of a tic-tac-toe board class: `self.data` is assumed to be a flat
# numpy array of nine strings ('_' for empty, '1'/'2' for the players) and
# `self.symbols` a pair of display characters, e.g. ('X', 'O').
def printable_board(self, indent_char="\t", legend_hint=True, symbols=None):
    """Render the 3x3 board; optionally show a cell-index legend beside it."""
    symbols = symbols or self.symbols
    assert len(symbols) == 2, "`symbols` must have exactly 2 elements"
    # Map the stored player markers onto the display symbols.
    data_symbols = self.data.copy()
    for orig, new in zip(("1", "2"), symbols):
        data_symbols[data_symbols == orig] = new
    board_symbols = data_symbols.reshape((3, 3))
    if legend_hint:
        # Show the flat index of every still-empty cell next to the board.
        legend_board = np.where(
            self.data == "_", range(9), " ").reshape((3, 3))
        return "\n".join(
            [indent_char + "GAME | INDEX"]
            + [indent_char + "===== | ====="]
            + [
                indent_char + " ".join(b_row) + " | " + " ".join(l_row)
                for b_row, l_row in zip(board_symbols, legend_board)
            ]
        )
    else:
        return "\n".join([indent_char + " ".join(row) for row in board_symbols])
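A quick way to exercise the method without the surrounding class is a stand-in object passed as `self`; everything here except the function itself is illustrative:

import numpy as np
from types import SimpleNamespace

# Stand-in for the board object: a flat array of nine cells and two display symbols.
board = SimpleNamespace(
    data=np.array(["1", "_", "2", "_", "1", "_", "_", "_", "2"]),
    symbols=("X", "O"),
)
# Borrow the function directly, passing the stand-in as `self`.
print(printable_board(board, indent_char="", legend_hint=True))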
import tensorflow as tf import numpy as np from scipy.misc import imsave from skimage.transform import resize from copy import deepcopy import os import constants as c from loss_functions import combined_loss from utils import psnr_error, sharp_diff_error from tfutils import w, b # noinspection PyShadowingNames class GeneratorModel: def __init__(self, session, summary_writer, height_train, width_train, height_test, width_test, scale_layer_fms, scale_kernel_sizes): """ Initializes a GeneratorModel. @param session: The TensorFlow Session. @param summary_writer: The writer object to record TensorBoard summaries @param height_train: The height of the input images for training. @param width_train: The width of the input images for training. @param height_test: The height of the input images for testing. @param width_test: The width of the input images for testing. @param scale_layer_fms: The number of feature maps in each layer of each scale network. @param scale_kernel_sizes: The size of the kernel for each layer of each scale network. @type session: tf.Session @type summary_writer: tf.train.SummaryWriter @type height_train: int @type width_train: int @type height_test: int @type width_test: int @type scale_layer_fms: list<list<int>> @type scale_kernel_sizes: list<list<int>> """ self.sess = session self.summary_writer = summary_writer self.height_train = height_train self.width_train = width_train self.height_test = height_test self.width_test = width_test self.scale_layer_fms = scale_layer_fms self.scale_kernel_sizes = scale_kernel_sizes self.num_scale_nets = len(scale_layer_fms) self.define_graph() # noinspection PyAttributeOutsideInit def define_graph(self): """ Sets up the model graph in TensorFlow. """ with tf.name_scope('generator'): ## # Data ## with tf.name_scope('data'): self.input_frames_train = tf.placeholder( tf.float32, shape=[None, self.height_train, self.width_train, 3 * c.HIST_LEN]) self.gt_frames_train = tf.placeholder( tf.float32, shape=[None, self.height_train, self.width_train, 3]) self.input_frames_test = tf.placeholder( tf.float32, shape=[None, self.height_test, self.width_test, 3 * c.HIST_LEN]) self.gt_frames_test = tf.placeholder( tf.float32, shape=[None, self.height_test, self.width_test, 3]) # use variable batch_size for more flexibility self.batch_size_train = tf.shape(self.input_frames_train)[0] self.batch_size_test = tf.shape(self.input_frames_test)[0] ## # Scale network setup and calculation ## self.summaries_train = [] self.scale_preds_train = [] # the generated images at each scale self.scale_gts_train = [] # the ground truth images at each scale self.d_scale_preds = [] # the predictions from the discriminator model self.summaries_test = [] self.scale_preds_test = [] # the generated images at each scale self.scale_gts_test = [] # the ground truth images at each scale for scale_num in range(self.num_scale_nets): with tf.name_scope('scale_' + str(scale_num)): with tf.name_scope('setup'): ws = [] bs = [] # create weights for kernels for i in range(len(self.scale_kernel_sizes[scale_num])): ws.append(w([self.scale_kernel_sizes[scale_num][i], self.scale_kernel_sizes[scale_num][i], self.scale_layer_fms[scale_num][i], self.scale_layer_fms[scale_num][i + 1]])) bs.append(b([self.scale_layer_fms[scale_num][i + 1]])) with tf.name_scope('calculation'): def calculate(height, width, inputs, gts, last_gen_frames): # scale inputs and gts scale_factor = 1. 
/ 2 ** ((self.num_scale_nets - 1) - scale_num) scale_height = int(height * scale_factor) scale_width = int(width * scale_factor) inputs = tf.image.resize_images(inputs, [scale_height, scale_width]) scale_gts = tf.image.resize_images(gts, [scale_height, scale_width]) # for all scales but the first, add the frame generated by the last # scale to the input if scale_num > 0: last_gen_frames = tf.image.resize_images( last_gen_frames,[scale_height, scale_width]) inputs = tf.concat([inputs, last_gen_frames], 3) # generated frame predictions preds = inputs # perform convolutions with tf.name_scope('convolutions'): for i in range(len(self.scale_kernel_sizes[scale_num])): # Convolve layer preds = tf.nn.conv2d( preds, ws[i], [1, 1, 1, 1], padding=c.PADDING_G) # Activate with ReLU (or Tanh for last layer) if i == len(self.scale_kernel_sizes[scale_num]) - 1: preds = tf.nn.tanh(preds + bs[i]) else: preds = tf.nn.relu(preds + bs[i]) return preds, scale_gts ## # Perform train calculation ## # for all scales but the first, add the frame generated by the last # scale to the input if scale_num > 0: last_scale_pred_train = self.scale_preds_train[scale_num - 1] else: last_scale_pred_train = None # calculate train_preds, train_gts = calculate(self.height_train, self.width_train, self.input_frames_train, self.gt_frames_train, last_scale_pred_train) self.scale_preds_train.append(train_preds) self.scale_gts_train.append(train_gts) # We need to run the network first to get generated frames, run the # discriminator on those frames to get d_scale_preds, then run this # again for the loss optimization. if c.ADVERSARIAL: self.d_scale_preds.append(tf.placeholder(tf.float32, [None, 1])) ## # Perform test calculation ## # for all scales but the first, add the frame generated by the last # scale to the input if scale_num > 0: last_scale_pred_test = self.scale_preds_test[scale_num - 1] else: last_scale_pred_test = None # calculate test_preds, test_gts = calculate(self.height_test, self.width_test, self.input_frames_test, self.gt_frames_test, last_scale_pred_test) self.scale_preds_test.append(test_preds) self.scale_gts_test.append(test_gts) ## # Training ## with tf.name_scope('train'): # global loss is the combined loss from every scale network self.global_loss = combined_loss(self.scale_preds_train, self.scale_gts_train, self.d_scale_preds) self.global_step = tf.Variable(0, trainable=False) self.optimizer = tf.train.AdamOptimizer(learning_rate=c.LRATE_G, name='optimizer') self.train_op = self.optimizer.minimize(self.global_loss, global_step=self.global_step, name='train_op') # train loss summary loss_summary = tf.summary.scalar('train_loss_G', self.global_loss) self.summaries_train.append(loss_summary) ## # Error ## with tf.name_scope('error'): # error computation # get error at largest scale self.psnr_error_train = psnr_error(self.scale_preds_train[-1], self.gt_frames_train) self.sharpdiff_error_train = sharp_diff_error(self.scale_preds_train[-1], self.gt_frames_train) self.psnr_error_test = psnr_error(self.scale_preds_test[-1], self.gt_frames_test) self.sharpdiff_error_test = sharp_diff_error(self.scale_preds_test[-1], self.gt_frames_test) # train error summaries summary_psnr_train = tf.summary.scalar('train_PSNR', self.psnr_error_train) summary_sharpdiff_train = tf.summary.scalar('train_SharpDiff', self.sharpdiff_error_train) self.summaries_train += [summary_psnr_train, summary_sharpdiff_train] # test error summary_psnr_test = tf.summary.scalar('test_PSNR', self.psnr_error_test) summary_sharpdiff_test = 
tf.summary.scalar('test_SharpDiff', self.sharpdiff_error_test) self.summaries_test += [summary_psnr_test, summary_sharpdiff_test] # add summaries to visualize in TensorBoard self.summaries_train = tf.summary.merge(self.summaries_train) self.summaries_test = tf.summary.merge(self.summaries_test) def train_step(self, batch, discriminator=None): """ Runs a training step using the global loss on each of the scale networks. @param batch: An array of shape [c.BATCH_SIZE x self.height x self.width x (3 * (c.HIST_LEN + 1))]. The input and output frames, concatenated along the channel axis (index 3). @param discriminator: The discriminator model. Default = None, if not adversarial. @return: The global step. """ ## # Split into inputs and outputs ## input_frames = batch[:, :, :, :-3] gt_frames = batch[:, :, :, -3:] ## # Train ## feed_dict = {self.input_frames_train: input_frames, self.gt_frames_train: gt_frames} if c.ADVERSARIAL: # Run the generator first to get generated frames scale_preds = self.sess.run(self.scale_preds_train, feed_dict=feed_dict) # Run the discriminator nets on those frames to get predictions d_feed_dict = {} for scale_num, gen_frames in enumerate(scale_preds): d_feed_dict[discriminator.scale_nets[scale_num].input_frames] = gen_frames d_scale_preds = self.sess.run(discriminator.scale_preds, feed_dict=d_feed_dict) # Add discriminator predictions to the for i, preds in enumerate(d_scale_preds): feed_dict[self.d_scale_preds[i]] = preds _, global_loss, global_psnr_error, global_sharpdiff_error, global_step, summaries = \ self.sess.run([self.train_op, self.global_loss, self.psnr_error_train, self.sharpdiff_error_train, self.global_step, self.summaries_train], feed_dict=feed_dict) ## # User output ## if global_step % c.STATS_FREQ == 0: print ('GeneratorModel : Step ', global_step) print (' Global Loss : ', global_loss) print (' PSNR Error : ', global_psnr_error) print (' Sharpdiff Error: ', global_sharpdiff_error) if global_step % c.SUMMARY_FREQ == 0: self.summary_writer.add_summary(summaries, global_step) print ('GeneratorModel: saved summaries') if global_step % c.IMG_SAVE_FREQ == 0: print ('-' * 30) print ('Saving images...') # if not adversarial, we didn't get the preds for each scale net before for the # discriminator prediction, so do it now if not c.ADVERSARIAL: scale_preds = self.sess.run(self.scale_preds_train, feed_dict=feed_dict) # re-generate scale gt_frames to avoid having to run through TensorFlow. scale_gts = [] for scale_num in range(self.num_scale_nets): scale_factor = 1. 
/ 2 ** ((self.num_scale_nets - 1) - scale_num) scale_height = int(self.height_train * scale_factor) scale_width = int(self.width_train * scale_factor) # resize gt_output_frames for scale and append to scale_gts_train scaled_gt_frames = np.empty([c.BATCH_SIZE, scale_height, scale_width, 3]) for i, img in enumerate(gt_frames): # for skimage.transform.resize, images need to be in range [0, 1], so normalize # to [0, 1] before resize and back to [-1, 1] after sknorm_img = (img / 2) + 0.5 resized_frame = resize(sknorm_img, [scale_height, scale_width, 3]) scaled_gt_frames[i] = (resized_frame - 0.5) * 2 scale_gts.append(scaled_gt_frames) # for every clip in the batch, save the inputs, scale preds and scale gts for pred_num in range(len(input_frames)): pred_dir = c.get_dir(os.path.join(c.IMG_SAVE_DIR, 'Step_' + str(global_step), str(pred_num))) # save input images for frame_num in range(c.HIST_LEN): img = input_frames[pred_num, :, :, (frame_num * 3):((frame_num + 1) * 3)] imsave(os.path.join(pred_dir, 'input_' + str(frame_num) + '.png'), img) # save preds and gts at each scale # noinspection PyUnboundLocalVariable for scale_num, scale_pred in enumerate(scale_preds): gen_img = scale_pred[pred_num] path = os.path.join(pred_dir, 'scale' + str(scale_num)) gt_img = scale_gts[scale_num][pred_num] imsave(path + '_gen.png', gen_img) imsave(path + '_gt.png', gt_img) print ('Saved images!') print ('-' * 30) return global_step def test_batch(self, batch, global_step, num_rec_out=1, save_imgs=True): """ Runs a training step using the global loss on each of the scale networks. @param batch: An array of shape [batch_size x self.height x self.width x (3 * (c.HIST_LEN+ num_rec_out))]. A batch of the input and output frames, concatenated along the channel axis (index 3). @param global_step: The global step. @param num_rec_out: The number of outputs to predict. Outputs > 1 are computed recursively, using previously-generated frames as input. Default = 1. @param save_imgs: Whether or not to save the input/output images to file. Default = True. @return: A tuple of (psnr error, sharpdiff error) for the batch. """ if num_rec_out < 1: raise ValueError('num_rec_out must be >= 1') print ('-' * 30) print ('Testing:') ## # Split into inputs and outputs ## input_frames = batch[:, :, :, :3 * c.HIST_LEN] gt_frames = batch[:, :, :, 3 * c.HIST_LEN:] ## # Generate num_rec_out recursive predictions ## working_input_frames = deepcopy(input_frames) # input frames that will shift w/ recursion rec_preds = [] rec_summaries = [] for rec_num in range(num_rec_out): working_gt_frames = gt_frames[:, :, :, 3 * rec_num:3 * (rec_num + 1)] feed_dict = {self.input_frames_test: working_input_frames, self.gt_frames_test: working_gt_frames} preds, psnr, sharpdiff, summaries = self.sess.run([self.scale_preds_test[-1], self.psnr_error_test, self.sharpdiff_error_test, self.summaries_test], feed_dict=feed_dict) # remove first input and add new pred as last input working_input_frames = np.concatenate( [working_input_frames[:, :, :, 3:], preds], axis=3) # add predictions and summaries rec_preds.append(preds) rec_summaries.append(summaries) print ('Recursion ', rec_num) print ('PSNR Error : ', psnr) print ('Sharpdiff Error: ', sharpdiff) # write summaries # TODO: Think of a good way to write rec output summaries - rn, just using first output. 
self.summary_writer.add_summary(rec_summaries[0], global_step) ## # Save images ## if save_imgs: for pred_num in range(len(input_frames)): pred_dir = c.get_dir(os.path.join( c.IMG_SAVE_DIR, 'Tests/Step_' + str(global_step), str(pred_num))) # save input images for frame_num in range(c.HIST_LEN): img = input_frames[pred_num, :, :, (frame_num * 3):((frame_num + 1) * 3)] imsave(os.path.join(pred_dir, 'input_' + str(frame_num) + '.png'), img) # save recursive outputs for rec_num in range(num_rec_out): gen_img = rec_preds[rec_num][pred_num] gt_img = gt_frames[pred_num, :, :, 3 * rec_num:3 * (rec_num + 1)] imsave(os.path.join(pred_dir, 'gen_' + str(rec_num) + '.png'), gen_img) imsave(os.path.join(pred_dir, 'gt_' + str(rec_num) + '.png'), gt_img) print ('-' * 30)
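For orientation, constructing this model might look roughly like the following TensorFlow 1.x sketch. The scale-network configuration and image sizes are invented for illustration, and the constants module `c` must supply every setting the class reads (HIST_LEN, PADDING_G, LRATE_G, ADVERSARIAL, the save frequencies, and so on); this sketch assumes c.HIST_LEN == 4, so first-scale inputs carry 3 * 4 channels and later scales add 3 more for the previous scale's frame.

import tensorflow as tf

sess = tf.Session()
writer = tf.summary.FileWriter('save/summaries')

# Illustrative per-scale feature-map counts (input channels first, 3 RGB out)
# and matching per-layer kernel sizes (one fewer entry than the fms list).
scale_fms = [[12, 128, 256, 128, 3],
             [15, 128, 256, 128, 3],
             [15, 128, 256, 512, 256, 128, 3],
             [15, 128, 256, 512, 256, 128, 3]]
scale_ks = [[3, 3, 3, 3],
            [5, 3, 3, 5],
            [5, 3, 3, 3, 3, 5],
            [7, 5, 5, 5, 5, 7]]

model = GeneratorModel(sess, writer, 32, 32, 210, 160, scale_fms, scale_ks)
sess.run(tf.global_variables_initializer())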
The Critical Period Hypothesis for L2 Acquisition: An Unfalsifiable Embarrassment?

This article focuses on the uncertainty surrounding the issue of the Critical Period Hypothesis. It puts forward the case that, with regard to naturalistic situations, the hypothesis has the status of both not proven and unfalsified. The article analyzes a number of reasons for this situation, including the effects of multi-competence, which remove any possibility that competence in more than one language can ever be identical to monolingual competence. With regard to the formal instructional setting, it points to many decades of research showing that, as critical period advocates acknowledge, in a normal schooling situation, adolescent beginners in the long run do as well as younger beginners. The article laments the profusion of definitions of what the critical period for language actually is and the generally piecemeal nature of research into this important area. In particular, it calls for a fuller integration of recent neurolinguistic perspectives into discussion of the age factor in second language acquisition research.

Introduction

In SLA research, the age at which L2 acquisition begins has all but lost its status as a simple quasi-biological attribute and is now widely recognized to be a 'macrovariable' (cf. Birdsong 2018); in other words, a complex combination of sociocultural and psychological variables. Dimensions other than physical maturation are increasingly often taken into account in discussions of age and language acquisition. This stems from the recognition that a host of factors is responsible for individual variability in L2 attainment, including the state of entrenchment of the L1, psychological variables such as self-regulation, motivation and identification, conative factors, as well as the degree of immersion in the L2 context, among others (see Birdsong 2018). Despite this, a narrow maturational perspective still persists, in the form of various versions of the Critical Period Hypothesis (CPH) (Lenneberg 1967). However, the precise timing of the offset of the posited critical period has long been a matter of debate, as has the proposed range of its effects (cf. Singleton 2005). Some CPH advocates in the SLA area maintain the traditional perspective of the hypothesis ever more strictly. With regard to their criterion for the falsification of the CPH, they demand "scrutinized nativelikeness" for all linguistic features in the performance of later learners of additional languages at all times (Abrahamsson and Hyltenstam 2009; Long 2013). The one dimension in which some prominent CPH advocates (e.g., DeKeyser 2003; Johnson and Newport 1989) concede that the critical age has no role is that of formal education, for reasons having to do with the essential experiential difference between the normal language classroom and the naturalistic learning environment. In this position paper, we explore a number of areas, concluding that in the naturalistic sphere, the critical period notion remains unproven but also unfalsified, which is very disappointing given the amount of time that has passed since the CPH first emerged. We point out the main reasons that have contributed to this lack of progress and the unfortunate consequences of the lack of resolution of this controversy.

The Notion of Critical Period

The term critical period, as used by biologists, refers to a phase in the development of an organism during which a particular capacity or behavior must be acquired if it is to be acquired at all.
More precisely, a critical period is a "bounded maturational span during which experiential factors interact with biological mechanisms to determine neurocognitive and behavioral outcomes" (Birdsong 2017). Certain influences or stimuli from the environment are judged necessary for the particular development to take place. Critical periods are assumed to be enabled by the fact that the brain is especially plastic during early development, allowing for neural wiring to form optimal circuits for the development of a specific capacity or behavior. The reason why critical periods end is purportedly to facilitate future development. Once neural circuits are fully formed, they become fixed, which serves the purpose of allowing other, more complicated functions to build on the basis of the more basic ones, once the basic ones are consolidated. An example often cited is that of early imprinting in certain species. Thus, for instance, immediately after hatching, ducklings follow and become attached to the first moving object they perceive, usually their mother. This following behavior occurs only within a certain period of time, after which the hatchlings develop a fear of strange objects and retreat from them instead of following them. Between onset of the following behavior and its cessation is what is seen as the critical period for imprinting (Clark and Clark 1977, p. 520). Another example is the acquisition of birdsong: for instance, if a young chaffinch does not hear an adult bird singing within a certain period, the young bird in question will apparently never sing a full chaffinch song (Thorpe 1954). Imprinting in birds exemplifies a sharply delimited critical period of relatively short duration. Critical periods for the development of complex behaviors in humans are understood to be longer and much less clearly delineated. If language acquisition in human beings is constrained by the limits of a critical period, the implication is, however, that unless language acquisition gets under way before the period ends, it will not fully happen. There is also widely assumed to be an analogical implication that additional languages acquired beyond the end of the critical period will not be completely or "perfectly" acquired. This analogy is actually problematic, the idea of a critical period for language in general being different from that of a critical period for specific language competencies. Early evidence for the existence of a critical period for first language acquisition was based on cases of "feral children" or of children who were deprived of socialization during childhood and were later unable to acquire language successfully. Obviously, these conditions of extreme deprivation could have serious psychological consequences, and so the problems with speech development cannot be attributed solely to a missed critical period. Less problematic is the more recently available evidence from infants who are born deaf, but later have their hearing restored surgically through cochlear implantation. The earlier the surgery is performed, apparently, the more likely the child is to develop normal speech, preferably before the age of 6 months (Kral and Eggermont 2007). In contrast, a mature person, deprived of hearing speech for three years, will retain their language faculty, deprivation of a specific sensory experience at a later age seeming not to be damaging to the already formed system.
On the other hand, the neural circuitry in the adult is certainly not immutable: plasticity persists and can be redeveloped (see, for an overview and bibliography). If there is a critical period, what are its limits and what is the extent of its effects? With regard to limits, as Bates et al. noted some years ago, "the end of the critical period for language in humans has proven... difficult to find, with estimates ranging from 1 year of age to adolescence" (p. 85). Differences concerning the offset of language-readiness go back to the origins of the Critical Period Hypothesis. For Penfield (Penfield and Roberts 1959), the widely acknowledged forerunner of the CPH, the critical age was after age nine, when the brain was supposed to lose its plasticity, whereas for Lenneberg, the "father" of the CPH, it was puberty, when the process of assigning language functions to the language-dominant brain-hemisphere was supposed to be complete. Both Penfield and Lenneberg had a strong impact in promoting the idea that language learning capacity is programmed to undergo a sudden and serious decline at a particular point; however, in common with many researchers taking this line, they disagreed as to where precisely this point is located. There have been claims that the critical period for everyone ends even earlier than age six (see, e.g., Hyltenstam and Abrahamsson 2003; Ruben 1997). Meisel suggests that, at least for some aspects of language, the window of opportunity for nativelike ultimate attainment begins to close as early as 3-4 years of age. Recent developments in critical period research have brought us no closer to an agreed offset point. The two recent, large-scale studies whose findings have been interpreted by their authors as supportive of the CPH have determined the age when the critical period closes as 9 years (Dollmann et al.) and 17 years (Hartshorne et al.), a difference of no less than 8 years. Especially problematic is the widespread acceptance of puberty (following Lenneberg) as the critical period offset point. The timing of puberty is generally assumed in the CPH literature to happen around twelve to fourteen years of age. This assumption turns out to be a gross simplification. Puberty turns out, in fact, to be associated with quite a wide age-range (8-14 years), occurs usually later in boys than in girls, and has an increasingly early onset in girls in many cultures (see, e.g., Roberts 2013). Some girls, in fact, experience puberty as early as age six, and there are also some cases of individuals not reaching puberty until their very late teens (see, e.g., Abdel Aal 2016). Commonsensically, then, the proposition that the acquirability of a second language, being linked to the age of puberty, is severely curtailed at age six in some individuals while remaining unproblematic in others until age 17 or 18 appears rather questionable. However, although implausible, the proposition is not invalidated by even very large individual differences. There remains the theoretical possibility that the ability to fully acquire a second language may be related to puberty. The wide age range for puberty, however, raises a serious issue concerning the design of studies which explore this question.
Most researchers collect data on the age of arrival in the L2 country, or the age at which language instruction began, but do not record the age at which individuals began puberty, probably because of the widespread assumption that puberty occurs at a more or less uniform age. Since this assumption is unreliable, not much useful information can be found in the literature on this issue.

CPH or CPHs?

Nor do differences among researchers concern only the CPH offset point. Regarding the affected language learning capacities, as pointed out by Singleton, CPH advocates have written of deficits in general language learning ability and in linguistic features of every degree of supposed innateness. As far as the underlying sources of critical period effects are concerned, Singleton recounts six accounts of a neurobiological nature, as well as four relating to cognitive development, and a further four having to do with affect and motivation. His response to this enormous range of perspectives is that the CPH cannot plausibly be regarded as a scientific hypothesis either in the strict Popperian sense of something which can be falsified (see, e.g., Popper 1959) or in the looser sense of something that can be clearly confirmed or supported (see, e.g., Ayer 1959). Birdsong and Vanhove (2016, p. 164) make a similar point, saying that the CPH is actually "a conglomerate of partly overlapping, partly contradictory hypotheses" and thus resistant to proof or disproof. Singleton also critiques the "multiple critical periods" idea, revived by, for example, Granena and Long, who posit three sensitive periods, closing, according to their analysis, first for phonology, then for lexis, and finally for syntax. Supporters of this idea might have been pleased with the results of the two recent studies mentioned above, by Hartshorne et al. and Dollmann et al.: the former inferred critical period closure at the age of 17 on the basis of a test of syntax (in L2 English), whereas the latter suggested closure at the age of 9 on the basis of measurements of the degree of foreign accent (in L2 German). However, many other critical periods for language have been proposed, with different sequences and different ages. For example, Meisel (2008, 2010) suggests that there are various periods for different aspects of grammar (for example, inflectional morphology), and that some aspects of language are affected already at 3-4 years of age. Additionally, the multiple critical periods hypothesis has always been undermined by mixed evidence and by counterevidence. For example, the notion that in order to attain a nativelike accent one has to begin one's L2 experience in early childhood was devastatingly contradicted by Bongaerts' series of studies (e.g., Bongaerts 1999, 2003) and by Moyer's work (e.g., Moyer 1999, 2004, 2013, 2014). Even studies which offer support to the CPH, such as that of Dollmann et al., find that there are always exceptions to the general trend; that is, there are always cases of high levels of ultimate attainment in late L2 learners. Moreover, if multiple critical periods were indeed to occur in the postulated sequence, it would be impossible to encounter L2 users who have a nativelike accent but an imperfect command of syntax, which is obviously not the case. One could argue that this last point rules out the possibility that multiple critical periods occur in a specific sequence, but not that they occur in general.
However, if multiple critical periods exist in an unspecified number, for an unspecified set of aspects of language, take place at varying ages, and in a different sequence for each individual, this lack of specificity renders the hypothesis almost meaningless, at least until a theoretical model or explanation is offered which would make some predictions as to why the sequence should be variable. At the moment, no such theoretical possibility has been entertained. The notion of multiple, separate critical periods for language, specifically for grammar and lexis, is also undermined by a number of recent trends in linguistics which blur the distinction between the two. The notion that lexis and syntax are clearly separable (cf. Singleton 2020a, 2021; Singleton and Leśniewska 2021) was dealt a death blow by the work of Sinclair (e.g., Sinclair 1991) and Hoey (e.g., Hoey 2007), then buried deep by emergent grammar (see, e.g., Lantolf and Thorne 2006) and by the usage-based perspective on language knowledge (e.g., Ellis 2017). Naturalistic evidence generally supports the notion that, in the long term, the earlier L2 learning begins, the higher the degree of L2 proficiency attained. This is the pattern found in classic immigrant studies. Thus, for example, Asher and García demonstrated that an early age of arrival in America was a better predictor of English pronunciation than length of residence; Seliger et al. found that most of those who had migrated to Israel or the United States before age 9 considered themselves native speakers of Hebrew or English, whereas most of those who had migrated at or after age 16 felt they still had a foreign accent; Patkowski showed a negative relationship between English syntactic rating and age of arrival in the United States; Hyltenstam discovered a higher number of lexical and grammatical errors in the Swedish of immigrants settling in Sweden after age 7; and Piske et al. found the vowel production of early bilinguals to be more nativelike than that of late bilinguals. A general finding appears to be that those who arrive early in a country where the language in general use differs from their home language are more likely than older arrivals to pass, eventually, for native speakers of the new language. This "earlier the better" tendency in naturalistic SLA is, however, only a tendency. Not all immigrants arriving in their host country in childhood attain a high degree of mastery of the ambient language, and those who arrive later do not necessarily fail to acquire the degree of proficiency attained by younger arrivals. Regarding the latter point, one can cite the case of the 20 late L2 acquirers of French in Kinsella and Singleton's study: in a test of their identification of regional French accents and a lexicogrammatical test, 3 of these 20 participants scored within native-speaker ranges across the board. Such findings do not, however, undermine the CPH for its most stalwart advocates (e.g., Abrahamsson and Hyltenstam 2009; Long 2013), for whom the criterion for falsification is "scrutinized nativelikeness" in the L2 at all times with regard to every single linguistic feature in the later learner (Abrahamsson and Hyltenstam 2009).

Problems with the "Scrutinized Nativelikeness" Yardstick

Nativelikeness (see, e.g., Long 1990, 1993) has, in fact, proved extremely difficult to establish and demonstrate in general (cf. Dewaele et al. in press).
Some years ago, Davies, addressing the problem of defining what a native speaker actually is, expressed the view that "the distinction native speaker-non-native speaker... is at bottom one of confidence and identity" (Davies 2003, p. 213). The concept of "scrutinized nativelikeness" is problematic for two important reasons. Firstly, the conception of nativeness as a benchmark implies that there is a specific, clearly defined level of language proficiency that characterizes native speakers, whereas, in reality, native speakers of a language display quite a wide spectrum of divergence from idealized norms. It is now recognized that even monolingual native speakers exhibit features in their representations of linguistic structure that would normally be deemed erroneous (see Dąbrowska 2012). Hulstijn emphasizes the need to recognize this range of levels of usage among native speakers, because the widespread assumption concerning the homogeneity of the native-speaking population is problematic for SLA research. Most studies that assess the level of L2 learners in relation to native-speaker norms with the use of tests assume that native speakers will obtain maximum results. This impression may be reinforced by the fact that native-speaker control groups tend to be highly educated (Andringa 2014), owing usually to the convenience factor of researchers drawing the native-speaker sample from their colleagues or students. When comparing non-native speakers' performance on grammar tasks to that of native speakers, Dąbrowska et al. observed that the amount of variation in native speakers substantially exceeded expectations based on previous research. This was due to their use of a larger and more heterogeneous group of native-speaker controls; a less restrictive approach to the selection of native-speaker controls increases the performance overlap between near-native learners and native speakers. Secondly, it is doubtful whether any speaker of more than one language should be assessed using norms based on the performance of monolingual speakers. From Cook's multi-competence perspective, none of such a person's languages can be expected faithfully to coincide with the native language of monolinguals (see Cook 2002). The interaction between the relevant language competencies inevitably has effects on language production (see, e.g., Jarvis and Pavlenko 2008). Birdsong (2008, p. 22) expresses a similar view, arguing that "minor quantitative departures from monolingual values are artefacts of the nature of bilingualism, wherein each language affects the other and neither is identical to that of a monolingual". It goes without saying, or should, that this mutual influence includes the domain of "language intuition" (cf. Abrahamsson 2012). Recent work in translanguaging (e.g., Wei 2018; Singleton 2020b) supports Cook's and Birdsong's insights. The above discussion has important implications for CPH research. Birdsong (2014, p. 47) comments that, because of the mutual influence of a multilingual's knowledge of his/her languages, and the fact that the L2 will inevitably be affected by such influence, "non-nativelikeness will eventually be found"; if, then, pure and exceptionless nativelikeness is demanded in order to disconfirm the CPH, he argues, "the CPH is invulnerable to falsification".
An implication which is important for future research is that L2 learners should be compared to, or judged according to, norms set by bilingual (or multilingual) speakers of the same language who acquired the language in question from birth.

Aptitude

Studies claiming to demonstrate the existence of a critical period for SLA tend to find exceptions in the form of late L2 learners who nevertheless perform at a nativelike level. This is of course problematic for proponents of the CPH. One way in which CPH advocates try to deal with the problem is by positing that some individuals have particular innate characteristics which allow them to overcome the disadvantages of missing the critical period. The main candidate trait in this respect is language aptitude, the degree of which has been widely portrayed as inborn (see, e.g., Carroll 1981). High aptitude is often designated a "gift for languages" (Rosenthal 1996, p. 59), which, according to some, may act to some degree as a prophylactic against the effects of the critical period (see, e.g., Abrahamsson and Hyltenstam 2008; Granena and Long 2013). Such a claim is made, for example, by DeKeyser in relation to those of his immigrant late learners who performed at a native level, all of whom, he reported, showed high aptitude. Birdsong's reanalysis of the data in question, however, suggests that education was a more robust predictor of the proficiency results than aptitude. In any case, it is not clear that language aptitude is simply an innate trait (cf. Singleton 2017); at least to an extent, the awareness that derives from experience and training seems to impact on it (cf. Robinson 2002). Kormos (2013, pp. 145-46), citing a range of studies (Eisenstein 1980; Sáfár and Kormos 2008; Nijakowska 2010), sums up the way in which thinking on this matter is moving: "Although language-learning aptitude might seem to be a relatively stable individual characteristic when compared with other factors, such as motivational orientation and action control mechanisms, there seems to be some converging evidence that certain components of aptitude... might improve in the course of language learning." This, of course, hugely complicates the posited interaction between aptitude and the so-called critical age. It very much suggests that what has been propounded on this issue is, to say the least, wildly premature.

Age or Opportunity?

While it is widely accepted that, in the naturalistic environment, older beginners are not generally as successful as younger ones, doubts arise as to whether age itself is in fact the variable at play. There are other variables that have nothing to do with biological maturation but are confoundable with age. Most obviously, length of residence in the target country often correlates with ultimate attainment alongside age. A wide range of other factors have been proposed in this respect, including psychological, social, and educational ones. A major suspect in this context appears to be the amount and quality of input experienced (see, e.g., Flege 2019). Arrival in the host country at a later age means, for example, less time spent in school. As Flege points out, it is often assumed that increased length of residence automatically means more input, but it does not; immigrants may spend years interacting with a predominantly L1 environment, or with the accented (or otherwise non-native) L2 speech of other members of the immigrant community.
As Flege and Bohn put it, "immigrants' length of residence (LOR) in a predominantly L2-speaking environment is problematic because it does not vary linearly with the phonetic input that L2 learners receive and because it provides no insight into the quality of L2 input that has been received" (p. 32). Socioeconomic status (SES) is likely to be a factor too, by analogy with the role that SES is known to play in both L1 and L2 acquisition. Immigrants with a lower SES are less likely to have good educational opportunities and are more likely to have stronger ties to L1-speaking migrant communities. Such considerations have led some to postulate that opportunity rather than age is the most important predictor of attainment: the opportunity for large amounts of high-quality input, interaction, and education in the target language. Thus, Marinova-Todd et al. argue that late L2 acquisition ends in full success for those "adults who invest sufficient time and attention in SLA and who benefit from high motivation and from supportive, informative L2 environments".

Looking for Discontinuity

As has been mentioned several times, research on the age factor in SLA in the naturalistic sphere generally shows a negative correlation between age of acquisition and ultimate attainment. In other words, the older one is at the beginning of acquisition, the lower the level of proficiency that will be the long-term outcome of learning. This is a general trend, not really questioned despite the abundant exceptions. However, to demonstrate the existence of a critical period, there has to be incontrovertible evidence of a discontinuity in the relationship between the effects of different ages of acquisition on ultimate attainment, preferably visible in studies with large numbers of participants. It is less clear, however, what counts as such a discontinuity and, more generally, what shape is to be expected of the relationship between age of acquisition (AoA) and ultimate attainment (UA) (see Vanhove 2013). Do we expect UA to be the same for all AoAs in the entire critical period window, then to drop sharply and flatten again, or is there supposed to be a gradual decline throughout? Different conceptualizations of critical period effects are possible. This raises serious methodological issues for large-scale studies. The work of Vanhove and of Birdsong and Vanhove shows how differences (sometimes seemingly minor) in how data are statistically analyzed can make a major difference in what inferences are drawn from the data when it comes to the analysis of the AoA-UA function. Very intriguing results have recently come from a study by Hartshorne et al., involving a record-breaking number of participants (over 600 thousand), which makes it a prime example of the so-called "big data" approach to psycholinguistic study. The test constructed by the researchers went viral on social media, thanks to the fact that participants could test themselves on whether they were nativelike and see whether the test could accurately identify which variety of English they used. It should be noted here that the test was designed with the intention of measuring the subjects' knowledge of syntax only. One of the main challenges in critical-period-related research is that it is difficult to infer information about the change in learning rate (when it occurs, and how sudden it is) from data about AoA and ultimate attainment.
(Two contradictory theoretical models, a gradual decline in learning ability over the lifespan and a sudden slowing down of learning at the end of a critical period, may produce the same result in terms of ultimate attainment.) To overcome that challenge, the authors of the study used a novel modelling technique to mathematically reconstruct the learning rate on the basis of the available answers. The results indicate that the learning rate for syntax declines by an astonishing 50 percent at around the age of 17.4. In response to some concerns about the novel method used to analyze the data (see Frank 2018), a follow-up study was conducted by Chen and Hartshorne that presents an improved analysis of the impressive dataset, which had meanwhile grown to include data from more than one million respondents. While the original study did not provide any estimate of uncertainty, Chen and Hartshorne added bootstrapping to obtain confidence intervals. They also used Item Response Theory instead of simply the proportion of correct answers on the test to gauge syntactic knowledge. Finally, and most importantly, they used a different mathematical model from the original study, in order to investigate the possibility that the previous results may have been an artefact of the model used. The results of the new study confirmed the previous findings, except that the age at which the syntax learning ability declines was found to be slightly later. Hartshorne et al. and Chen and Hartshorne thus provide support, in a way, to the CPH, in the sense that they provide evidence of a discontinuity in the learning rate, but their findings also constitute a major challenge for existing CPH-based predictions in that they place the discontinuity much later than most scholars have predicted. The impressive size of the dataset and the ensuing statistical power give a lot of weight to the results. It needs to be remembered, however, that the two studies only concern syntactic knowledge, and only as much of that knowledge as was measured by the specific test used in the study, as the authors themselves acknowledge. The test features a selection of items which are meant to represent syntax, and in fact some grammar topics appear to be overrepresented (cleft sentences, passive voice), while some items would usually be labelled as lexical (such as the item "I .... the story", in which the correct answer "told" needs to be selected from among distractors such as "said"). It is, therefore, unclear to what extent the test covers English syntax comprehensively. Moreover, syntax is assumed to consist of a finite number of rules, a view which has been challenged by developments in phraseology and construction grammar (as we discuss elsewhere in this article). Syntax is also assumed to be a unified whole, for which one critical period would apply, which is again one theoretical possibility, but not the only one. Such reservations by no means take away from the importance of the study, but they point to the need to conduct further studies along these lines.
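The breakpoint-plus-bootstrap logic described above can be illustrated with a toy simulation. The sketch below is ours, not the actual pipeline of Hartshorne et al. or Chen and Hartshorne: it fits a simple piecewise-linear ("broken-stick") model to synthetic AoA-attainment data and bootstraps a confidence interval for the breakpoint. The function names, the data-generating parameters, and the breakpoint value of 17 are illustrative assumptions only, and the Item Response Theory scoring used in the real analysis is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def broken_stick(aoa, plateau, breakpoint, slope):
    # Ultimate attainment is flat up to the breakpoint, then declines
    # linearly -- one of several candidate shapes for the AoA-UA function.
    return plateau - slope * np.maximum(aoa - breakpoint, 0.0)

# Synthetic data with a "true" discontinuity at age 17 (illustrative only).
aoa = rng.uniform(1, 40, size=2000)
ua = broken_stick(aoa, 0.95, 17.0, 0.01) + rng.normal(0, 0.05, size=aoa.size)

def fit_breakpoint(aoa, ua):
    popt, _ = curve_fit(broken_stick, aoa, ua, p0=[0.9, 12.0, 0.01])
    return popt[1]  # estimated breakpoint

# Non-parametric bootstrap: refit on resampled data to get a 95% CI,
# analogous to the uncertainty estimate added in the follow-up study.
boot = []
for _ in range(1000):
    idx = rng.integers(0, aoa.size, size=aoa.size)
    boot.append(fit_breakpoint(aoa[idx], ua[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"breakpoint: {fit_breakpoint(aoa, ua):.1f}, 95% CI [{low:.1f}, {high:.1f}]")

A model comparison of this kind, run over a gradual-decline curve and a discontinuous one, is exactly what is needed to distinguish the two theoretical possibilities mentioned in the parenthetical above.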
Neurolinguistics: New Developments

Despite the many versions of the critical period discussed earlier, Lenneberg is aptly named the "father" of the CPH, to the extent that his 1967-vintage version of the hypothesis, postulating that the critical period affects all aspects of language and that it ends at puberty, is probably still the one that holds most sway. The element that has most suffered the ravages of ageing is the rationale underlying Lenneberg's hypothesis. Lenneberg explained the alleged problems of later second language learners in terms of a developmental process in the brain which, according to him, was completed by the "critical age" of puberty. The process referred to is the "lateralization" of language functions to the brain hemisphere dominant for language (usually the left). Lenneberg's account of the nature and timescale of such lateralization is no longer taken seriously by neuroscientists. Indeed, as early as 1973, Krashen devastatingly critiqued Lenneberg's account of lateralization, claiming on the basis of brain damage studies that it was complete before age 5 (Krashen 1973, p. 65). Current research suggests a complex and multi-factored relationship between lateralization and age. It is worth stressing that the original CPH made claims about the biology of the human brain; that is, the hypothesized changes were supposed to be neurological in nature. Subsequent research, however, sought to provide evidence for the hypothesis principally on the basis of studies of human behavior, e.g., second language attainment in learners with different AoA. The reason for this was that meaningful detailed study of the postulated neurological reality behind age-related language acquisition phenomena was not possible. Much has changed in this respect since the 1960s, when Lenneberg's hypothesis was originally published, particularly in the last two decades. In fact, the developments in neurology have been much more radical in the last few decades than those in mainstream second language acquisition research, which relies on essentially the same approaches as before (the main technological development being larger-scale studies facilitated by the use of the internet). Neurolinguistics, however, has seen major developments in the form of electroencephalography/event-related potential (EEG/ERP) research as well as (functional) magnetic resonance imaging ((f)MRI). When large groups of neurons fire at the same time in the brain, the resulting small voltages can be recorded using ERPs. Different types of linguistic stimuli result in distinct "brain signatures", i.e., different ERP patterns. The patterns most often mentioned in the literature on linguistic processing are the N400 and P600 patterns, which result from waves that differ in terms of amplitude, direction, and timing. P600, for instance, accompanies many types of morphosyntactic processing in native speakers, while N400 effects are typical of semantic processing. This is why they allow researchers to gauge a participant's sensitivity to a specific linguistic stimulus. If the ERP signatures generated in response to the same stimulus are different for someone's L1 and L2s, this implies that different processing mechanisms are used in each case. (See the wider literature for accessible overviews of ERPs in language processing research.)
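The core signal-processing step behind every ERP result discussed here is time-locked averaging: single-trial EEG is far noisier than the stimulus-related voltage, and averaging many epochs is what makes components such as the N400 visible. The sketch below is a generic illustration with invented amplitudes and trial counts, not the protocol of any study cited in this article.

import numpy as np

rng = np.random.default_rng(1)
fs = 500                              # assumed sampling rate in Hz
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch window around stimulus onset (s)

# An idealized N400-like component: negative deflection peaking near 400 ms.
component = -4e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))

# Each single trial is the component buried in much larger background EEG.
n_trials = 200
epochs = component + rng.normal(0.0, 20e-6, size=(n_trials, t.size))

# Averaging time-locked trials attenuates the noise by roughly sqrt(n_trials),
# which is why the ERP "brain signature" only emerges after averaging.
erp = epochs.mean(axis=0)
print(f"negative peak recovered at ~{t[np.argmin(erp)] * 1000:.0f} ms")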
Even though the body of ERP research on language processing has produced results which are, to some extent, mixed (see, for example, the work of Schmid and associates, which shows the plasticity of the adult brain to be limited, especially in the case of morphosyntax), the majority of studies support the view that the adult brain substantively retains plasticity. DeLuca et al. provide a careful summary of research to date and conclude that the overall picture is one of much greater plasticity of the adult brain than would be implied by the classic CPH. Most importantly, the results show that, according to the majority of studies, L2 learners at lower levels of proficiency produce different ERP responses to the same stimuli from those of L1 speakers, while L2 learners at higher levels tend to parallel L1 speakers of the same language in terms of ERP patterns. DeLuca et al. conclude that, even with the minor discrepancies between the results of some studies, the overall picture which emerges is that of L2 learners gradually shifting to language processing that is qualitatively similar to that of native users. Interestingly, as noted by Steinhauer and Kasparian, early ERP studies, from the turn of the millennium, actually seemed to confirm the CPH, showing that the ERP patterns of brain activation of learners with late AoA differed from those of native speakers, in contrast to the patterns of learners with early AoA, which mirrored those of native speakers. Such studies were later found to have confounded AoA with proficiency. Newer, better-controlled studies showed that the type of brain activity changes with proficiency and, most importantly, that late adult learners of an L2 who reached very high levels of attainment displayed patterns of brain activity comparable to those of native speakers, even for morphosyntax. For example, such findings were reported by Rossi et al., in a study which investigated late English-Spanish bilinguals of varying degrees of (self-reported) proficiency. The study used ERP to examine their reactions to gender and number violations in the use of clitic pronouns in Spanish sentences. It suggests that it is possible for late bilingual speakers of Spanish at high levels of proficiency to process grammatical features in a nativelike way, with nativelikeness evidenced by the appropriate ERP signature. Similar conclusions were reached in Rossi et al. In a nod in the direction of the concept of Universal Grammar (UG), which predicts that a syntactic feature absent from one's L1 is impossible or more difficult to acquire in the L2, both of the aforementioned studies incorporated grammatical structures of two kinds: those with and those without corresponding L1 structures. Both investigations showed that, for both types of structures, processing at high proficiency levels is similar to that of native speakers (as evidenced by the type of the ERP signatures). Recent work with implications for the CPH comes from ERP studies on attrition. We will report in detail on one such highly interesting study, by Kasparian and Steinhauer. They provided three groups with the same stimulus sentences; in some of the sentences, one word was replaced with an incorrect word, either very similar to the word which would be correct, or less similar. The three groups were Italian monolinguals living in Italy; Italians living in Canada who reported using mostly English in their everyday lives and having occasional difficulty remembering words in Italian (which indicates L1 attrition); and L1 English speakers with advanced L2 Italian. The ERP patterns observed were similar for the attriters and the L2 learners. What is more, the ERP patterns were found to be related to proficiency in Italian and thus, for the attriters, to the extent of attrition. Very high levels of L2 proficiency were accompanied by ERP patterns similar to those of monolingual native speakers.
Lower levels of L2 proficiency and more severe cases of L1 attrition were characterized by similar ERP patterns, different from those of monolinguals, indicating problems with lexical retrieval. These findings were limited to lexical aspects of language, but in another study (Kasparian and Steinhauer 2017), the authors obtained similar results for morphosyntax. These studies point to the ongoing plasticity of the brains of adults, whether for language acquisition or loss. (F)MRI is another technology which can provide a glimpse of the underlying language processing mechanisms. It is a non-invasive neuroimaging technique that relies on the use of radio-frequency pulses and magnetic fields. With respect to the brain, specific structures can be captured by static MRI, whereas neural processes can be captured by functional MRI (fMRI). In contrast to ERP, MRI provides information on the location of brain activity; (f)MRI studies also provide some evidence about the neuroplasticity of the adult brain in terms of language learning. "Plasticity" here refers to regional changes in the volume and/or density of white and grey matter, as well as changes in connectivity between different areas of the brain, i.e., patterns of activation. According to the review provided by DeLuca et al., most such studies are longitudinal, with participants having brain scans before and after a language training program. L2 acquisition has been found to have observable effects on the brain, resulting in increases in grey matter volume and in white matter volume in brain regions related to language. Fewer (f)MRI studies have looked at more naturalistic language acquisition. For example, Pliatsikas et al. found patterns of white matter increase in late bilinguals that are similar to those in child bilinguals. DeLuca et al. sum up their review of MRI research in the following words: "MRI affords us the opportunity to literally see first-hand if the 1967 predictions hold. Evidence (...) clearly suggests they do not" (p. 185). The authors conclude their in-depth overview of a vast number of neurolinguistic studies examining brain plasticity as follows: "Taken together, the evidence indicates that the neural substrates and processes underlying language acquisition and production in the L2 are maximally comparable to those in L1 across the lifespan. Any maturational constraints that might apply are not specific to language learning, especially to the extent that they create critical periods, but are generic constraints brought about by healthy ageing and apply to other aspects of cognition (e.g., memory). While showing this does nothing to negate the very observable differences in path and outcome between typical L1 and adult L2 acquisition, it does suggest that other variables conspire to account for these differences; that is, there is no true fundamental difference in how language is acquired and processed, irrespective of age" (p. 188).

Concluding Remarks

The paradox concerning the CPH is that, although there are vast amounts of literature on the matter, the findings remain "interpretable". This is because (i) the notion of a critical period is used with very different, often underdefined meanings, (ii) the relevant research is extremely varied and variable, and (iii) the different categories of participants in the relevant studies are less than precisely profiled and sometimes confused.
One reason for these deficiencies may be that the research in question is not "high-stakes", backed neither by copious funds nor by international teams of researchers. Perhaps this situation may change in coming years. Another contributing factor may be the very limited cooperation between linguists and psychologists. Psychologists either do not take linguistic theories into account or, where linguistic theories have made their way into psycholinguistic research, they tend to derive from the UG model, whereas linguistics certainly has more to offer than this! Linguists, for their part, could benefit from a careful consideration of the neurolinguistic research outlined above. What would be welcome, from the linguistics side, is a carefully designed, large-scale study that would test various aspects of language knowledge and use, and which could be combined with neurolinguistic measures. Another shortcoming and source of problems is the widespread (mostly tacit) assumption that measuring any small subset of language skills or aspects of language competence is representative of the bigger picture. As a result, many studies focus on various arbitrarily selected aspects of language knowledge and use. For example, the age-factor-related literature cited throughout this paper very often focuses on accent, the presence of a nativelike accent being popularly taken as the criterion for "nativelikeness". Accent is thus often covertly assumed to represent language proficiency in general. It seems tempting to some (and not just lay-people) to imagine that nativelike pronunciation is the ultimate marker of performing like a native speaker, but we have to recognize that, while the various aspects of language proficiency do tend to develop broadly in parallel, pronunciation is arguably distinct from other language skills. Similarly, studies that examine morphosyntax often include apparently arbitrarily selected items. Age-related SLA research has given too little attention to comprehensively and systematically testing language knowledge. While it is astonishing that, after decades of research, we are nowhere near being able to close the topic of the CPH, some might doubt the importance of this debate, especially given that the overall picture is so muddied. After all, while there are individuals who achieve remarkable success in the acquisition of a foreign language despite starting late, most do not. A myriad of factors, including individual factors, affect this process, and getting to the bottom of the issue may be seen as either too difficult or simply not worth the major research effort it would entail. However, the CPH debate is anything but irrelevant. In fact, it is one of those issues in second language acquisition research that has important real-world implications, because research findings may inform policies applied in the formal instructional context. The widespread popularity of the notion of maturational constraints limiting a person's ability to learn a language beyond puberty has been behind the widespread move to lower the starting age of institutional L2 learning. This trend was instigated some seventy years ago, under the influence of advocates of early L2 instruction in the school curriculum, such as Penfield, and has accelerated dramatically in recent times all round the world (see, e.g., Murphy 2014), despite not being supported by empirical research.
In fact, it flies in the face of such research, which, for nearly half a century, has been showing that in a normal schooling situation, pupils who are taught an L2 at primary level do not, in the long run, maintain the advantage of their early start (see, e.g., Pfenninger and Singleton 2017; Singleton and Pfenninger 2016). Starting-age-related differences have been shown to level out over the course of the secondary school period. This actually implies that the late starters do better than the early starters, since they acquire as much second language knowledge as the earlier starters within a considerably shorter period of time and thus progress faster than younger starters. From the 1970s (e.g., Burstall 1975; Carroll 1975), studies have consistently failed to verify the hope that early instruction would deliver higher proficiency levels than later instruction. Moreover, the later beginners, who have less learning time, have been found to be equal or superior across a range of measures (see Muñoz and Singleton 2011). In Canada and the US, it was also found that older immersion learners were as successful as younger learners over shorter time periods (e.g., Swain and Lapkin 1989). A very recent study involving a sample of almost 20 thousand students also failed to show an advantage in outcomes for earlier beginners; the authors attribute their results to the fact that at the secondary level in German schools, everyone is taught English at the same level. Importantly, a meticulous synthesis of empirical studies examining the results of early L2 instruction (Huang 2016) has shown that there is no solid evidence supporting the "younger is better" approach to L2 teaching. To sum up: given the amount of time and attention that has been devoted to this topic in the last five decades, the overall results are very disappointing. Our own discussion in this article has tended in the direction of affirming that the CPH, despite its long history, has the status of "not proven". For an issue which attracts such popular interest and prejudice, and which has been a research topic for half a century, still to be surrounded by such ambivalence is embarrassing indeed, and unfortunate, especially given the hunger for clarity in relation to planning institutional second language teaching.

Conflicts of Interest: The authors declare no conflict of interest.

Note 1. Another term used in connection with this concept is "sensitive period". Sometimes a distinction is made between a critical period and a sensitive period (the latter denoting a milder version of the former); for example, Knudsen defines critical periods as "a subset of sensitive periods for which the instructive influence of experience is essential for typical circuit performance and the effects of experience on performance are irreversible", while a sensitive period occurs when "the effect of experience on the brain is particularly strong during a limited period in development". However, the two terms are often used interchangeably. Moreover, in linguistics, there is a well-established tradition of referring to the sensitive/critical period for a second language as a "critical period", even if it is closer to a sensitive period. We therefore follow Birdsong in assuming these two terms to be interchangeable and do not make a specific distinction between the two.
// Package constants contains constants that are used internally across the demoinfocs library.
package constants

// Various constants that are used internally.
//
// An entity handle packs an entity index into its low MaxEdictBits bits
// and a serial number into the bits above them; the masks below extract
// and bound those fields.
const (
	MaxEdictBits                 = 11
	EntityHandleIndexMask        = (1 << MaxEdictBits) - 1
	EntityHandleSerialNumberBits = 10
	EntityHandleBits             = MaxEdictBits + EntityHandleSerialNumberBits
	InvalidEntityHandle          = (1 << EntityHandleBits) - 1
)
Cognitive Characteristics of the Parents of Schizophrenic Patients

This article reviews the empirical literature on cognition and communication in the parents of schizophrenic patients to address the questions of whether these parents as a group show evidence of any distinguishing cognitive characteristics and, if so, what those characteristics might be. The review covers studies of thought and communication disorder and psychometric studies of cognitive functioning, including only those that used reliable measures and employed control groups in their designs. Taken together, the findings provide substantial evidence that nonschizophrenic parents of schizophrenic patients as a group demonstrate subtle cognitive difficulties in the area of concept formation and maintenance. There are also indications of other cognitive anomalies that will require further study. We discuss the importance of clarifying the etiological relevance of these findings and of pursuing further research in this area.
Nano-plasmonics and electronics co-integration in CMOS enabling a pill-sized multiplexed fluorescence microarray system.

The ultra-miniaturization of massively multiplexed fluorescence-based bio-molecular sensing systems for proteins and nucleic acids into a chip-scale form, small enough to fit inside a pill (∼0.1 cm³), can revolutionize sensing modalities in vitro and in vivo. Prior miniaturization techniques have been limited to focusing on traditional optical components (multiple filter sets, lenses, photo-detectors, etc.) arranged in new packaging systems. Here, we report a method that eliminates all external optics and miniaturizes an entire multiplexed fluorescence system into a 2 × 1 mm² chip through co-integration, for the first time, of massively scalable nano-plasmonic multi-functional optical elements and electronic processing circuitry realized in an industry-standard complementary metal-oxide-semiconductor (CMOS) foundry process with absolutely 'no change' in fabrication or processing. The implemented nano-waveguide-based filters operating in the visible and near-IR, realized with the embedded sub-wavelength multi-layer copper-based electronic interconnects inside the chip, show for the first time a sub-wavelength surface plasmon polariton mode inside CMOS. This is the principle behind the angle-insensitive nature of the filtering that operates in the presence of uncollimated light and scattering environments, enabling the first optics-free 96-sensor CMOS fluorescence sensing system. The chip demonstrates surface sensitivity of zeptomoles of quantum-dot-based labels, and volume sensitivities of ∼100 fM for nucleic acids and ∼5 pM for proteins that are comparable to, if not better than, commercial fluorescence readers. The ability to integrate multi-functional nano-optical structures in a commercial CMOS process, along with all the complex electronics, can have a transformative impact and enable a new class of miniaturized and scalable chip-sized optical sensors.

Fig. 1. Ultra-miniaturized CMOS fluorescence microarray system. a, Overview of the system including the CMOS IC with the integrated 96-sensor array, the UV LED as the excitation light source, a removable glass slip as the bio-interface, and silicon fixtures enabling automatic alignment with a disposable bio-interface for multiplexed detection. b, Fluorescence reader system. c, Perspective and cross-sectional view of the sensing pixels. The strong UV light excites the immobilized fluorophores to emit a weak signal in the NIR. The integrated nanoplasmonic filter rejects the UV light and allows the local fluorescence to be detected and processed by the photon-detection circuitry and integrated electronics underneath. The nanoplasmonic filter is realized using the 4th to 7th copper interconnect layers in the 65-nm CMOS process and spreads across all the sensor sites. The 1st-3rd interconnect layers are used for circuit routing and optical blocking. The multiplexed fluorescence signals are read out in a time-multiplexed fashion and are further processed by CDS circuits to eliminate random offsets and suppress low-frequency noise.

Achieving such levels of sensitivity conventionally requires precise collection and low-noise detection of the fluorescence signals after filtering through an array of bulky optical components (excitation, dichroic, and emission multi-layer filter sets, lenses, and objectives arranged in collimated optics, often with motorized stages for scanning and reading), which are extremely challenging to miniaturize.
This complexity limits the extent to which many lab-on-chip devices can be multiplexed without significantly sacrificing sensitivity for fluorescence detection. Prior efforts to miniaturize such fluorescence sensing systems have primarily relied on more compact ways to package these traditional components to enable applications in microscopy, sequencing, and fluorescence endoscopy. This approach is fundamentally limited in the extent of possible miniaturization without significantly affecting sensitivity and multiplexing ability. In this pursuit, CMOS provides a potential platform, and over the last decade, CMOS and hybrid silicon systems have played a crucial role in high-precision sensing arrays for electrical detection of DNA, DNA sequencing, nuclear magnetic resonance detection, electrochemical sensing, detection of redox-active metabolites in biofilms, multiplexed electrophysiological recording of large networks of electrogenic cells, and magnetic-based sensing. For fluorescence assays, while CMOS can enable high-density multiplexed photo-detection and readout with sensitivity comparable to CCDs, it does not have the capability to manipulate optical fields to emulate the functions of the external optical components in a traditional fluorescence reader. This typically requires a similar approach as before, with external filtering and collimating optics and post-fabrication, or fluorescence lifetime detection with complex laser synchronization at picosecond levels of accuracy and significantly sacrificed sensitivity (∼nM). Here, we report for the first time the realization of massively parallelized nanoplasmonic optical structures co-integrated with electronic circuitry in a commercial CMOS integrated circuit (IC) with no custom fabrication or post-processing, which allows us to eliminate all external optics, filtering, collimation, and bulky lasers and enables full integration of a 96-sensor fluorescence platform and reader system into a total volume of less than 0.1 cc. The co-integration of the scalable nanoplasmonics and electronics in the same substrate is demonstrated to allow optimal detection and filtering across the optical and electronic partitions, enabling us to reach surface sensitivities of the order of zeptomoles (∼1 dot/μm²) of labels on the chip surface. This corresponds to a surface density where less than 1 out of 1-100 million excitation photons gets converted into a fluorescence photon. Exploiting metallic nanostructures to engineer optical fields has enabled significant progress in the field of metal-optics and nanoplasmonics, enabling sub-diffraction waveguiding, nanofocusing, plasmon modulation, flat lenses with meta-surfaces, and plasmon-resonance-based enhancements for Raman spectroscopy and biosensing. In spite of this progress, the application of such plasmonic structures has been limited due to their use only as discrete optical components, and due to their fabrication in custom lithographic processes using noble metals, making them incompatible with CMOS fabrication. To enable the multiplexed assay and massively parallelized nanoplasmonic elements with integrated electronics, we adopted an absolutely 'no change' approach to the CMOS fabrication and demonstrate this system in an industry-standard 65-nm process, typically used for microprocessors and wireless ICs.
While our prior proof-of-concept work has shown the ability to integrate one sensor in silicon to detect fluorescent tags (not labeled bio-molecules) on the chip surface with an external laser with moderate sensitivity, the presented work demonstrates scalable, complex nanoplasmonic-electronic systems for massively multiplexed assays in CMOS, allowing miniaturization of the entire system including an uncollimated optical LED source. The multiplexing capability is a key element for all medical diagnostics, critical for screening and for allowing multiple readouts from a single assay, including the blank control, to significantly improve the statistics and reduce the false-positive and false-negative rates. In addition, we show the system's ability to operate with a side-positioned excitation LED (which is key for extreme miniaturization and low-power operation), improve the sensitivity of each pixel by 70-fold, reaching that of commercial benchtop readers, and demonstrate the detection of both proteins and nucleic acids with the assay chemistry on-chip in a multiplexed fashion. More importantly, this work demonstrates that such co-design and co-integration of electronics/nanoplasmonics in a single substrate allows tighter control of the optical path and can overcome partial limitations of any one component through cross-layer optimization. As an example, the multiple distributed control sites allow us to sense the average residual background and filter it through a combination of nano-optical filtering upfront (45-60 dB) and subsequent electronic filtering, achieving an end-to-end excitation-to-fluorescence sensing capability of P_ex/P_fl ≈ 77 dB for a signal-to-noise ratio (SNR) ≈ 1, enabling ∼100 fM-pM levels of assay sensitivity comparable to, if not better than, commercial optical DNA microarrays and ELISA systems. With the integration capability of CMOS, the technology can potentially be scaled into tens of thousands of sensor sites, if not hundreds of thousands.
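The decibel figures quoted above can be tied together with a quick back-of-the-envelope conversion. The numbers below come from the text, but the arithmetic linking 77 dB to the photon-conversion statement is our own illustrative reading.

def db_to_ratio(db: float) -> float:
    # Convert a power ratio expressed in dB back to a linear power ratio.
    return 10 ** (db / 10)

# Figures quoted in the text:
for db in (45, 60):   # nano-optical filtering alone
    print(f"{db} dB optical rejection -> excitation suppressed {db_to_ratio(db):.1e}x")

# An end-to-end P_ex/P_fl of 77 dB corresponds to detecting one fluorescence
# photon against ~5e7 excitation photons at SNR ~ 1, consistent with the
# stated "less than 1 out of 1-100 million" photon conversion rate.
print(f"77 dB end-to-end -> ratio of {db_to_ratio(77):.1e}")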
System architecture overview

The fluorescence microarray system consists of the CMOS chip with the integrated filters, detection, read-out and signal processing circuitry, a UV LED source, and carefully placed tiny silicon-wafer-based fixtures that allow automatic alignment with a disposable functionalized glass slip (bio-interface) for multiplexed detection (Fig. 1(a)). A vertically positioned low-cost UV LED serves as the excitation light source. The light is incident from near-grazing angles to the surface of the glass slip and is rejected by the angle-insensitive nanoplasmonic fluorescence emission filter integrated inside the sensor chip. The NIR emission from the spatially multiplexed fluorescent tags, on the other hand, passes through the filter efficiently, gets detected by the photo-detector arrays, and is electronically processed by the IC. The chip measures 2 × 1 mm² (∼1.4 mg), and the entire sensing part of the system (including the excitation light source, the sensor IC, the fixtures, and the bio-interface) occupies a volume of around 0.1 cc, as shown in Fig. 1(b). Due to the elimination of all external optics, filtering of the excitation light is critical to robust operation of a fluorescence sensor array. As shown in the perspective and cross-sectional view in Fig. 1(c), the filter and the interconnects are co-designed with the embedded copper layers in the 65-nm CMOS process. Each of the 96 pixels comprises 80/80 differential photodiodes laid out in a symmetric fashion, together with a capacitive transimpedance amplifier (CTIA). The signals are read out in a multiplexed fashion and are further processed by correlated double sampling circuits to suppress random offsets and low-frequency noise. The co-design and integration of optics, electronics, and bio-chemistry in one platform allows strict control of the entire process from optical transduction to sensing data extraction.

Integrated angle-insensitive nano-plasmonic filter in CMOS

In a classical fluorescence set-up, both the fluorescence signal and the laser excitation are collimated to allow the usage of a high-performance multi-layer fluorescence emission filter, which typically works only within a very small range of angles (≈ ±5°). In this miniaturized sensor platform without optical collimation, the radiation from the fluorescent dipoles on the surface interacts with the integrated filters in a complex fashion over a wide range of incident angles. This is shown in the simulated radiation propagation (originating from the dipole location) for randomly polarized fluorescent emitters at the air/SiO₂ interface (Fig. 2(a)). In addition, the filter needs to handle the near-grazing excitation light as well as the light scattered from the assay and other necessary structures of the CMOS chip (e.g., the bonding pads). Therefore, the angle-insensitive characteristic of the filter, with rejection ratios ≥ 40 dB, becomes a critical and differentiating factor in enabling chip-scale fluorescence sensing. This precludes any resonant filter structures, such as interference-based or resonant plasmonic coupling or classical grating structures implemented in CMOS. In this work, we exploit the fundamental loss of plasmonic waveguiding in copper to enable angle-insensitive filter characteristics. In the 65-nm industry-standard CMOS process, the lowest metal layers in close proximity to the transistors have the smallest feature sizes (≈100 nm width, ≈130 nm spacing). The nanoplasmonic filter is designed as an array of vertical nano-slab waveguides realized with the 4th-7th copper interconnect layers and the via layers in between, in total measuring 1.41 μm in vertical length in the direction of the optical mode propagation (Fig. 2(b)). Electronic signals are extracted from the photon-detection circuitry underneath the filter and transferred to the edge of the chip using the 1st-3rd interconnect layers. The entire optical path, the filters, and the electronic routing are co-designed to ensure optimal performance and to minimize scattering light leakage from the sides of the chip. When excited with the randomly oriented fluorescence tags from the surface of the chip and the excitation signal, the sub-wavelength nano-plasmonic filter channelizes a collection of modes including coupled surface plasmon polariton (SPP) modes and coupled cavity modes.

Fig. 2. a. Simulated radiation propagation (originating from the dipole location) for randomly polarized fluorescence emitters on the chip surface at the air/SiO₂ interface. b. Structure and SEM image of the integrated nano-plasmonic filter, implemented in the 65-nm CMOS process with minimum metal linewidth of 100 nm and spacing of 130 nm. c. The simulated electric field intensity in the filter for LED excitation at 405 nm and fluorescence emission demonstrates the angle-insensitivity of the nanoplasmonic filters with nearly 50 dB of optical filtering.

Fig. 3. Circuit architecture. a, Circuit architecture of the 96-sensor array chip with the integrated pixels, control and readout circuitry, nanoplasmonic filters, and optical shield, all co-designed in a single IC. b,c,d, Layout and schematic of a single sensing pixel. Photo-detection is enabled in each pixel with 80 sensing diodes at the center, measured differentially with respect to 80 symmetrically placed and shielded reference diodes to suppress common-mode dark currents. The signals are processed with a differential CTIA. Each photodiode is implemented using an nwell-psub structure and each pixel measures 100 μm in each dimension.

Complete removal of all external collimating optics for true miniaturization requires the angle-insensitive nature of the filter operation. The work leverages the sub-wavelength spacing of the array to ensure that the cavity modes are largely suppressed and the plasmon modes are dominant in the waveguide system. As the coupled SPP modes propagate, they exploit the inter-band transition of copper near the excitation wavelength at 405 nm to absorb the light, while the SPP modes at the fluorescence wavelength propagate with high efficiency to get collected and processed by the integrated electronic circuitry. The core principle of background suppression in this multi-modal nanoplasmonic filter is the differential loss of the coupled SPP modes across the two wavelength regimes, enabling nearly 35 dB of rejection per 1 μm of vertical wave travel. Therefore, simply the lowest three metal layers, measuring 1.41 μm in length, enable 50 dB of optical filtering, as shown in the simulated optical fields inside the filter in Fig. 2(c). Collectively, when combined with electronic filtering at the backend, the system achieves an end-to-end filtering capability reaching up to 77 dB. The LED excitation can be at any angle of convenience; the choice of near-grazing incidence from a side-positioned LED in this work is to preserve the ultra-compact packaging of the system. The complete theory, modal analysis, band structure, full-wave simulation, and experimental results of the nano-plasmonic filter, as well as other examples of nano-optical systems in CMOS, can be found in the cited literature.

Integrated circuits, read-out and signal processing

The architecture of the custom CMOS chip consists of an array of 96 sensing pixels with multiplexing and CDS readout circuitry, and a sheet of nanoplasmonic fluorescence emission filter to reject the excitation background (Fig. 3(a-d)). Each sensor size is kept at 100 × 100 μm in accordance with commercial DNA/protein arrayers for multiplexed assay on the surface. In principle, however, the sensor sizes can be reduced by at least a factor of 100 in area, enabling tens of thousands of sensing sites for massively multiplexed assays and imaging. The entire chip is surrounded by a vertical optical shield, also made from the copper interconnects similarly to the nanoplasmonic filter, that suppresses leakage of scattered light into the chip sensing sites from the side edges. In order to minimize the effect of dark current and increase sensitivity, photo-detection is enabled in each pixel with 80 sensing diodes at the center, measured differentially with respect to 80 symmetrically placed and shielded reference diodes (Fig. 3(b)). The differential signal processing also exploits the correlation in diode dark currents. Fluctuations in temperature, LED power, and supply voltage appear across both the sensor and reference sites and are therefore suppressed as common mode, allowing high-sensitivity detection.
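The benefit of the differential-diode arrangement combined with correlated double sampling can be seen in a small numerical model. The sketch below treats dark current, drift, and per-pixel offsets as in the description above; all magnitudes are invented for illustration, and the "reset" sample is simplified to an LED-off measurement rather than the actual in-pixel reset phase.

import numpy as np

rng = np.random.default_rng(2)
n = 1000   # number of readout frames (arbitrary)

dark = 5.0                                        # shared dark-current level
drift = 0.5 * np.sin(np.linspace(0, 3, n))        # slow thermal/LED drift
off_s, off_r = rng.normal(0, 0.3, size=2)         # static per-pixel offsets
signal = 0.02                                     # tiny fluorescence signal

def read(offset, fluo):
    # One diode's output: common-mode terms + offset + optional signal + noise.
    return dark + drift + offset + fluo + rng.normal(0, 0.01, n)

# Step 1: differential measurement (sense minus shielded reference) removes
# the correlated dark current and drift as common mode.
diff_on  = read(off_s, signal) - read(off_r, 0.0)   # LED on
diff_off = read(off_s, 0.0)    - read(off_r, 0.0)   # LED off ("reset" sample)

# Step 2: correlated double sampling subtracts the reset sample, cancelling
# the residual static offset (off_s - off_r) and low-frequency noise.
cds = diff_on - diff_off
print(f"recovered signal ~ {cds.mean():.3f} (true value {signal})")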
Multiplexed assays and sensor cross-talk

The sensor array design needs to take into account the cross-talk between pixels for multiplexed assays. The mutual coupling of the fluorescence signal from localized spots into neighboring pixels depends on the assay interface with the sensor. The CMOS chip surface is typically coated with silicon nitride layers, and functionalization can be done on a 1 µm thick glass layer grown on the surface. Fig. 4(a-c) shows the set-up and the electromagnetic simulation of fluorescence dipoles spread over a 50 µm spot on the 1 µm thick glass surface on the chip. The labels mostly radiate into the chip and, as can be seen, the lateral spread is kept to a minimum and is almost fully confined to the local pixel underneath, minimizing cross-talk. The primary reasons for choosing a pixel size of 100 µm are to enhance this light-collection efficiency and minimize cross-talk, and to keep in line with commercial arrayers, which can confine spot sizes to 50-75 µm. In a commercial setting this is an appropriate methodology, since the low-cost chip allows one-time disposable use. In the laboratory setting, to reuse the same chip for multiple bio-measurements, we design a disposable functionalized glass slip (also shown in Fig. 1), which is supported by silicon positioners to allow easy alignment with the on-chip sensor array. The glass slip is held approximately 100 µm from the chip surface, as shown in Fig. 1 and Fig. 4. Evidently, this causes spreading of the fluorescence emission from the localized spots into the sensor array, resulting in cross-talk among neighboring pixels. While this limits the number of spots we can place on the surface with this particular glass-slip set-up, this packaging is only used for chip reuse. Functionalization on a thin glass layer grown on the chip surface has been shown in [reference] and can be performed on the presented chip as well.

Fig. 4. Sensor cross-talk in the multiplexed assay, depending on the bio-interface. (a) The assay can be performed directly on a functionalized 1 µm thick glass layer grown on the chip surface. (b, c) Electromagnetic simulation of the fluorescence emission from a 50 µm diameter spot on the surface and its coupling in the lateral dimensions. Since the pixels are 100 µm in dimension, the emission remains localized to the pixel underneath the spot, minimizing cross-talk. (d) To reuse the chip for multiple measurements in a laboratory setting, we create a disposable bio-interface that is aligned with the sensor array by silicon positioners (Fig. 1). We perform the experiments on the glass slip positioned about 100 µm from the chip surface. (e, f) Electromagnetic simulation of the fluorescence emission from a 50 µm diameter spot on the glass slip. The elevation causes the emission to spread, creating cross-talk among the neighboring pixels. This limits the number of spots we can place on the surface and is only done to reuse the chip for multiple experiments in a laboratory setting. The functionalization can instead be done on the chip surface, as shown in part (a), similar to [reference].

Integrated nanoplasmonic filter performance

The optical filters are characterized with the integrated photo-detectors. Fig. 5(a) shows the measured normal-incidence transmission spectrum (normalized) for the y-polarization (perpendicular to the slabs). To comply with the design-rule checks of the CMOS fabrication process that ensure high yield, the filter is designed to pass light of only one polarization, along y. The sub-wavelength spacing blocks all wavelengths of the other polarization, so the system requires no external polarizer for the LED light source. The only effect of this is that the yield of photon detection is reduced by half. We choose quantum dots as the fluorescent tag for their photo-stability, stronger emission, and larger Stokes shift. Fig. 5(a) also shows the fluorescence excitation and emission spectra of the chosen Qdot 800 fluorescent tag, which match the filter performance. Fig. 5(b) shows the measured responsivity ratio (in dB) between the fluorescence at 800 nm and the excitation at 405 nm, demonstrating an average filtering ratio of 45 dB for the center 4 × 10 pixels. The remaining pixels on the sides suffer from additional optical leakage at 405 nm, due to the µm-sized gaps needed to comply with the design-rule checks of the CMOS process as the electronic signals are routed to the edge of the chip through the shield. As seen in Fig. 1, the chip is unpackaged and exposed to all forms of scattering; standard optical packaging can suppress this additional leakage. Fig. 5(c) and Fig. 5(d) show the measured filter transmittance (normalized) for the excitation and emission wavelengths across angles of incidence and polarizations, demonstrating rejection ratios varying between 45-60 dB.
In essence, the subwavelength, non-resonant nature of the nanoplasmonic structure ensures the rejection of near-grazing or scattered excitation light from all angles, which is critically important for sensitive bio-molecular assays. This allows us to eliminate collimation, the objective lens, and other external optical filtering elements, and to replace the typical excitation laser with an ultra-compact, low-cost LED for an overall ultra-miniaturized system.

Microarray alignment for multiplexed detection and disposable interface

In practical applications, while the mm-sized biosensor chip platform with the LED and silicon fixtures serves as the sensing platform, the functionalized glass cover slip can serve as the removable and disposable cartridge for multiplexed sensing. Since the interface is spotted with the probes and placed on the CMOS chip, it is very important to ensure that the capture spots are accurately aligned with the sensors on chip. This is ensured by four silicon fixtures that clamp onto the glass slip when it is simply placed on the chip, thereby accurately aligning the spots with the sensors. As shown in Fig. 6(a), the four tiny silicon-wafer-based fixtures are placed to fit the size of the cover slips, with an estimated gap of 5-10 µm, and each glass cover slip is precisely diced to near-identical dimensions. In order to perform multiplexed detection of DNA on the same cover slip, multiple capture DNAs have to be printed precisely on different sensor spots, requiring alignment of the DNA arrayer/printer with the sensor pixels. The alignment is performed in the following way. First, the distance between the top left corner of the cover slip (x_0, y_0) and a reference point of the sensor array (x_1, y_1) is measured using a microscope and a 2D translational stage. Since the position of each pixel (x_i, y_j) with respect to the reference point (x_1, y_1) is known (the pixels have a fixed pitch p = 100 µm), the distance between each pixel and the top left corner of the cover slip is therefore

d_x,i = (x_1 - x_0) + (i - 1)·p,    d_y,j = (y_1 - y_0) + (j - 1)·p,

where i = 1, 2, ..., 8 and j = 1, 2, ..., 12 represent the column and row number of the 96 pixels.
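This bookkeeping is easy to capture in a small helper (a hypothetical sketch, not the vendor's arrayer software; the measured corner-to-reference offsets below are placeholder values):

```python
PITCH_UM = 100.0  # fixed pixel pitch p

def spot_offsets(ref_dx_um, ref_dy_um, n_cols=8, n_rows=12):
    """Travel distances (d_x, d_y) from the slip's top-left corner to each pixel.

    ref_dx_um, ref_dy_um: measured offset from the corner (x0, y0) to the
    sensor-array reference point (x1, y1).
    """
    return {
        (i, j): (ref_dx_um + (i - 1) * PITCH_UM,
                 ref_dy_um + (j - 1) * PITCH_UM)
        for i in range(1, n_cols + 1)   # column index
        for j in range(1, n_rows + 1)   # row index
    }

# Example with a made-up measured offset of (350 um, 420 um):
offsets = spot_offsets(350.0, 420.0)
print(offsets[(1, 1)], offsets[(8, 12)])  # (350.0, 420.0) (1050.0, 1520.0)
```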
When the multiple DNA capture strands are printed, the pin of the arrayer is first aligned to the top left corner of the cover slip (x_0, y_0) and subsequently travels the small distance (d_x, d_y) to print on the point of the cover slip that is aligned with the corresponding pixel. Such a calibration method, where the travel distance (d_x, d_y) is in the mm range, minimizes the alignment error and therefore maximizes the repeatability of the multiplexed detection scheme of the sensor. The overall spotting accuracy, which includes the gap between the glass slip and the fixtures as well as the accuracy of the spotter, is estimated to be less than 10 µm. This is much smaller than the single sensor size of 100 µm, so signal cross-talk is minimized.

Fig. 6. (a) Alignment of the bio-interface (glass slip) with the sensor array for multiplexed assays. (b) Calibration process and signal acquisition for the assay.

Sensor array calibration, read-out and sensor noise

In the 96-multiplexed sensor array, each sensing site processes the local emission of the fluorescence signal on the glass slip above and provides a differential signal at the output when addressed electronically. The side-positioned LED minimizes the direct incidence of light on the chip surface, which reduces leakage. This allows a longer integration time, which enhances sensitivity, as we show later in the paper. We measured the light intensity on the glass slip; it averages approximately 0.2 mW/mm², which translates to nearly 70% of the power being incident on the assay. While this is fairly efficient, the coupling efficiency can certainly be improved with tighter control of the light path. The method of sensor calibration and assay read-out is shown in Fig. 6(b). Sensor-to-sensor variation is expected due to differences in the responsivity of the photodiodes, in the filter rejection ratio, and in the impinging excitation light from the side-positioned LED. All of these are measured collectively in a one-time calibration with 10 different cover slips carrying negative control on all sites; this averages out the variations in the scattering behavior of the functionalized slips. The measured average response of the center 40 pixels under this condition is shown in Fig. 6(b). The average spatial variation across the pixels remains within 1 dB, and the slip-to-slip variation is less than 0.3 dB. This is a one-time calibration process common to many commercial electronic chipsets, where sensors are calibrated automatically with a source during the testing phase and the results are stored in memory. In fact, this can be automated with a built-in self-testing mechanism in which the system periodically switches on the LED and recalibrates the sensor. The disposable bio-interface then allows the same sensor and reader interface to be reused for other assays. With on-chip column and row decoders, there are many possible ways to extract the signals from the pixel array. Fig. 6(b) shows one such scheme, which resets and activates all the sensor sites at the same time and reads the signals in a time-multiplexed fashion within one single integration period. As an example, to read the center 40 pixels, each sensor is read twice within one integration period (e.g. 200 ms). This is shown in Fig. 6(b), where each color represents the measured waveform of one pixel. In this way, we obtain two short signal segments separated by around 100 ms for each pixel (different pixels are plotted in different colors in the figure). Regression analysis is subsequently performed on the two short time series to obtain the pixel integration slope, which is proportional to the light power. As an example, the two short time series at around 0 and 100 ms (marked in red, denoting the signal of pixel 1) are regressed with the red line; likewise, the two short black time series at 100 ms and 200 ms (denoting the signal of pixel 40) are regressed with the black line (Fig. 6(b)). Such a readout scheme keeps the total readout time short, allowing the use of averaging for noise reduction with 100 acquisitions in less than one minute.
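The slope extraction is a plain least-squares fit through the two short segments. The sketch below reproduces it on synthetic data (the slope and noise values are placeholders chosen to be consistent with the numbers quoted in the text, not measured waveforms):

```python
import numpy as np

rng = np.random.default_rng(1)

true_slope = 5.0                    # V/s, i.e. ~1 V of swing over 200 ms
t1 = np.linspace(0.000, 0.005, 20)  # first short read of this pixel (s)
t2 = np.linspace(0.100, 0.105, 20)  # second read, ~100 ms later
t = np.concatenate([t1, t2])
v = true_slope * t + rng.normal(0.0, 0.5e-3, t.size)  # ~0.5 mV rms noise

# The fitted slope is proportional to the incident light power.
slope, intercept = np.polyfit(t, v, 1)
print(f"estimated slope = {slope:.3f} V/s (true value {true_slope} V/s)")
```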
The slope of the output signal is approximately given by V_sig(t) = i_ph·t/C_fb, where i_ph is the photo-current and C_fb ≈ 15.6 fF is the feedback capacitor (Fig. 3(c)) across which the signal is integrated. In a typical operation, the total light power is ∼2.4 pW. With a photodiode quantum efficiency of Q.E. ∼ 0.1, i_ph ≈ 156 fA, and the signal can be integrated for 100 ms for a maximum swing voltage of V_SW = 1 V. Expectedly, the SNR of the sensor is mainly determined by the filter performance, the fluorescence intensity, and the noise of the sensor, which includes the circuit noise and, mainly, the photon shot noise under fluorescence excitation. The average noise voltage in the dark (V_n,dk) is around 0.51 mV. Under fluorescence excitation, the total sensor noise increases with integration time, reaching a maximum value of V_n,ph ≈ 3.5 mV (where the shot noise alone is sqrt(e·V_SW/C_fb) ≈ 3.2 mV). It should be noted that, for given excitation and fluorescence light powers, V_sig ∝ t and V_n,ph ∝ √t, and therefore SNR ∝ √t. The maximum integration time reaching the full voltage swing should therefore be used to maximize the SNR. With the proposed time-multiplexed reading scheme, multiple signal acquisitions can be performed within a reasonable time period (< 1 min) to reduce the total noise from 3.5 mV to 0.7 mV through averaging.
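These numbers tie together in a short sanity check (a sketch using only the values quoted above):

```python
e = 1.602e-19    # electron charge (C)
C_fb = 15.6e-15  # feedback capacitance (F)
i_ph = 156e-15   # photo-current (A)
T = 0.1          # integration time (s)

V_sig = i_ph * T / C_fb
V_shot = (e * V_sig / C_fb) ** 0.5       # shot-noise voltage at full swing
print(f"V_sig  ~ {V_sig:.2f} V")         # ~1 V, the full swing
print(f"V_shot ~ {V_shot * 1e3:.1f} mV") # ~3.2 mV, as quoted

# Averaging N acquisitions ideally reduces white noise by sqrt(N); the
# measured reduction (3.5 mV -> 0.7 mV, ~7 dB for N = 100) falls short of
# the ideal 10 dB because of the 1/f noise of the circuits.
N = 100
print(f"ideal averaged noise ~ {3.5 / N ** 0.5:.2f} mV")  # 0.35 mV
```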
Coverslip functionalization

The tiny glass slips that serve as the bio-interface are placed and temporarily fixed inside glass wells with a volume of around 300 µL, in which the DNA hybridization and protein sandwich assays are performed. To minimize non-specific binding of biomolecules and of the Qdot fluorescent tags, the surfaces of the glass slips are passivated and functionalized with reactive chemical groups to attach the capture molecules. All steps are performed at room temperature unless otherwise specified. The glass wells and the glass slips are first washed in 1 M KOH solution for 2 hours and then rinsed thoroughly with Millipore water. For DNA detection, the glass slips are subsequently incubated for 15 minutes with a mixture of BSA and biotin-BSA in Tris-NaCl buffer, at concentrations of 5 mg/ml and 0.1 mg/ml, respectively. The wells are then rinsed 5 times with Tris-NaCl buffer. After that, the glass slips are incubated for 20 minutes with 10 mg/ml streptavidin (ProspecBio) in 0.2 mM PBS buffer and rinsed again with Tris-NaCl buffer. For protein detection, after the same KOH cleaning, the glass slips are rinsed with ethanol and incubated in 2% (3-glycidyloxypropyl)trimethoxysilane (Sigma-Aldrich) in 95% ethanol for 30 minutes. Afterwards, the glass slips are rinsed with ethanol, dried with N₂, and baked at 110 °C for 15 minutes to improve the attachment of the silane groups to the glass surface.

Biosensing experiments

For the DNA hybridization assay, immediately after the streptavidin incubation and washing, the glass slips are incubated for 15 minutes with the biotinylated capture DNA strand (IDT) at 500 µM concentration in PBS and then rinsed with PBS buffer. The capture DNA (biotin-5'TTTTTTTTTTTTTTTTTTGCCCTACGCGTGTAC3') has 33 bases. After that, the glass slips are incubated for 15 minutes with 100 µL of target DNA at various test concentrations (3'CGGGATGCGCACATGTTTTTTTTTTTTTTTTTT5'-biotin, and/or other non-complementary sequences for negative control), diluted from stock solution in PBS with 0.05% Tween20 (Sigma-Aldrich), a surfactant used to suppress non-specific binding on the surface. After rinsing the glass wells with PBS buffer, the glass slips are incubated for 15 minutes with 1 nM Qdot 800 streptavidin conjugates, diluted from stock solution in PBS with 0.05% Tween20. The glass wells are then thoroughly washed 8 times in PBS to remove the unbound fluorescent quantum dots, and the cover slips are removed from the wells to be measured on top of the sensor chip. For the DNA microarray experiments, instead of incubating the entire glass slip with the same capture DNA, we use a commercial microarrayer (XactII from LabNEXT) to print different capture DNA strands on different spots of the glass slip, and various mixtures of DNA targets are then tested; all other steps are identical to the single DNA hybridization assay. For protein detection, we use human IL-6 protein (Biolegend) as a demonstration example. The capture antibody and the detection antibody of IL-6 specifically bind to different parts of IL-6 (both are from Biolegend). After the silane functionalization of the glass slips, the human IL-6 capture antibody (≈ 0.4 mg/ml) is incubated with the prepared glass surface overnight at 4 °C. The glass slips are then rinsed with washing buffer (0.05% Tween20 in PBS) and incubated for 2 hours with 100 µL of target solution (IL-6, and/or IFN-γ for negative control) at various test concentrations, obtained from stock samples dissolved in PBS with 1% BSA. The cover slips are rinsed 8 times with the washing buffer, and subsequently the human IL-6 detection antibody (≈ 60 µg/ml, biotinylated) is applied and incubated for 1 hour. The glass slips are then rinsed with washing buffer and immersed in 1 nM Qdot 800 streptavidin conjugates for 15 min. The glass wells are then thoroughly washed 8 times in washing buffer to remove the unbound fluorescent quantum dots, and the cover slips are removed from the wells to be measured on top of the sensor chip.

Limit of detection of the quantum-dot based fluorophore

The ability to detect extremely low levels of fluorophores on the surface of the chip is crucial for the chip-scale fluorescence reader to be deployed in practical settings where the number of target analytes is limited. To quantify the performance of fluorescence biosensors, the minimum detectable surface density of fluorophores is one of the most direct metrics. In order to estimate this, a tiny 0.5 µL droplet of quantum-dot solution in water at a known volume concentration is dropped onto the glass slip, left to dry, and subsequently placed on the surface of the sensor chip. The average surface density can be estimated from the liquid volume, the volume concentration, and the surface area, which is roughly a circle with a radius of around 0.5 mm. Different cover slips with varying Qdot surface concentrations are measured upon LED excitation. At the lowest detection level, a volume concentration of 2.5 pM is used, resulting in an estimated surface density of ≈ 1 dot/µm², as shown in Fig. 7(a), with an SNR of ∼4.
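This back-of-the-envelope estimate is easy to reproduce (a sketch using only the droplet volume, concentration, spot radius, and pixel area quoted in the text):

```python
import math

N_A = 6.022e23     # Avogadro's number (1/mol)
conc = 2.5e-12     # Qdot volume concentration (mol/L)
volume_L = 0.5e-6  # 0.5 uL droplet
radius_um = 500.0  # dried spot radius, ~0.5 mm

n_dots = conc * volume_L * N_A
area_um2 = math.pi * radius_um ** 2
density = n_dots / area_um2
print(f"~{density:.2f} dots/um^2")  # ~0.96, i.e. ~1 dot/um^2

# At this density, one pixel's 55 um x 55 um active area holds about
moles = density * 55 * 55 / N_A
print(f"~{moles * 1e21:.0f} zeptomoles per pixel")  # ~5 zmol, as quoted below
```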
In order to further verify the surface density, the same procedure is repeated on cover slips that are subsequently viewed under a fluorescence microscope. Fig. 7(a) shows a magnified portion of the fluorescence image. To quantify the density, a 2D peak-search algorithm is employed on a large 180 µm × 137 µm area of the fluorescence image, divided into 9 × 9 sub-sections. The total number of quantum dots in each sub-section is counted and divided by the sub-section area to obtain the distribution of the surface density. As shown in Fig. 7(a), the mean value of the surface density is ∼0.6 dot/µm². During the measurement, we record the data of the middle two sensors, which see a moderately uniform surface density as captured in the fluorescence image. The image shown in the figure spreads over two pixels of the sensor, so the signal recorded from the sensor corresponds to the fluorescence image in Fig. 7(a); this allows us to accurately calculate the surface density. The slightly lower number compared with the estimate of 1 dot/µm² from the total number of molecules can be attributed to the increased density at the boundary of the droplet as it dried. Note that the demonstrated ability to reach surface densities of the order of 1 dot/µm² is equivalent to a total of 5 zeptomoles (1 zmol = 10⁻²¹ mol) of molecules on the sensor surface (the active photo-sensing area is around 55 µm × 55 µm for one pixel). This exceeds the surface sensitivity levels of modern fluorescence scanners and readers, which are typically in the attomole range.
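The microscope-based verification above can be mimicked with a small counting script. The sketch below runs on synthetic dot positions, not the actual image-processing pipeline, and assumes the peak search has already reduced the image to centroids:

```python
import numpy as np

rng = np.random.default_rng(2)
W, H = 180.0, 137.0  # analyzed image area (um)
true_density = 0.6   # dots/um^2, the measured mean

n_dots = rng.poisson(true_density * W * H)
xy = rng.uniform([0.0, 0.0], [W, H], size=(n_dots, 2))  # centroids from peak search

# Split the field into 9 x 9 sub-sections and histogram the counts.
counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[9, 9],
                              range=[[0, W], [0, H]])
densities = counts / ((W / 9) * (H / 9))
print(f"mean density = {densities.mean():.2f} dots/um^2 "
      f"(sub-section spread {densities.std():.2f})")
```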
Fig. 1 shows the system integration of the sensor array with the bio-assay interface. In order to be compatible with standard assay procedures, capture probes (nucleic acids/proteins) can be immobilized on a 100 µm thick cover slip (1.3 mm × 5 mm) using a commercial micro-arrayer. This not only allows re-usability of the chip but also avoids the deposition of glass required for immobilizing capture probes directly on the chip surface. The assay experiments are carried out in a glass well containing the cover slip; after completion of the procedure with the target/probe solution and fluorescence labeling, the test slip is placed on the chip surface for detection and analysis (Fig. 1). The signals from the sensor array are then electronically processed on chip for analysis. The platform can be functionalized with specific bio-probes using a conventional micro-arrayer to prepare the disposable cartridges, i.e., the glass slips.

DNA and protein detection

The chip is tested for both DNA and protein detection, with the glass slips placed in separate glass wells (≈ 300 µL in volume) in which standard assay protocols are subsequently performed. With the target DNA concentration varied between 100 fM and 100 pM and allowed to hybridize with the complementary capture DNA, the composite is detected using the streptavidin-conjugated Qdot 800 fluorescent tag. The sensor output and the standard error for positive and negative control slips are shown in Fig. 7(b). The standard error is the standard deviation of the signals measured from multiple sample slides prepared with the same assay procedure. Although the sensor noise is low, the functionalization itself varies somewhat across slides, so the error bar is primarily due to variations in the assay chemistry. The chip demonstrates a linear response with a limit of detection (LOD) of 100 fM at SNR ≈ 2. For proteins, we measure the detection of human IL-6, a multi-functional cytokine that plays critically important roles in the regulation of a wide range of biological activities in various cell types and in the auto-immune processes of many diseases. A sandwich assay for human IL-6 detection is carried out with the human IL-6 capture antibody immobilized on the surface. The bio-interface is incubated with the IL-6 target, a secondary biotinylated detection antibody is introduced for the sandwich assay, and finally the streptavidin-conjugated Qdot 800 is introduced to serve as the fluorescent label. A linear response is demonstrated for target concentrations varying between 5-125 pM, with an LOD of 5 pM (Fig. 7(c)). Fig. 7(d) and Fig. 7(e) show the effect of non-specific binding on both the DNA and the protein assays. Specifically, varying concentrations of non-complementary DNA and of a non-specific protein (IFN-γ) are introduced into the respective assays with fixed concentrations of the target DNA (∼10 pM) and the target protein (∼100 pM). As shown in the figure, the effect on the fluorescence signal is small and can be further suppressed with an optimized blocking agent. Fig. 8(a)-Fig. 8(c) show the multiplexed detection capability of the sensor array, with different DNA capture strands showing positive responses at the conjugated site and almost no signal at the non-specific sites. Multiplexed detection schemes need to take into account the cross-talk across pixels. In this experiment, the spotted sizes of the capture DNAs are around 200-300 µm, which is larger than the sensor size. Further, with the glass slip 100 µm away from the sensor, the fluorescence emission spreads to the neighboring pixels, as elaborated in Section 2.4 and Fig. 4. This is also seen in the measured signals extracted across multiple pixels in the neighborhood of the corresponding spot (Fig. 8(a)-Fig. 8(c)). This is not a fundamental limitation of the sensor but of the current arrayer configuration and the bio-interface, as discussed in Section 2.4.
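As a footnote to the dose-response results above, the quoted LODs follow the usual recipe of intersecting the fitted linear response with a noise floor at the chosen SNR. A generic sketch (synthetic calibration points and a made-up blank noise level, not the measured data):

```python
import numpy as np

rng = np.random.default_rng(3)
conc = np.array([0.1, 1.0, 10.0, 100.0])  # target concentration (pM)
resp = 0.8 * conc * (1 + rng.normal(0.0, 0.05, conc.size))  # linear response (a.u.)

slope = np.polyfit(conc, resp, 1)[0]  # fitted dose-response slope
noise_floor = 0.04                    # blank standard deviation (a.u.)

lod = 2 * noise_floor / slope         # concentration at which SNR = 2
print(f"LOD ~ {lod * 1e3:.0f} fM")
```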
System analysis and limit of detection

The translation of the surface sensitivity into a volume sensitivity for assays is a strong function of the assay chemistry and of the diffusion of the target nucleic acid or protein in the bulk solution (which is influenced by the flow and by the design of the reaction chamber). However, the core surface sensitivity of the sensor, i.e. its ability to detect the minimum number of fluorescent labels on the surface (after the assay and washing steps), can be derived analytically from the optical and electronic performance of the components, including the filter rejection ratio, the sensor responsivity, the output noise, and the integration time. Once the surface sensitivity is determined, the volume sensitivity for assays can be optimized through careful design of the flow process and the assay protocol. Here, we analyze the surface sensitivity of the sensor based on the measured optical and electronic characteristics and compare it with the directly measured performance. The fundamental limit of the surface sensitivity is set not only by the noise of the detector (contributed primarily by the shot noise, the circuit readout noise, and the quantization noise after digitization), but also by the standard error of the chemical binding process when the assay is repeated.

Consider P_f as the fluorescence light power, R_f as the photodiode responsivity (A/W) at the fluorescence wavelength, T as the integration time, and e as the electron charge. The signal after the integration time, expressed as a number of electron charges, is then given by

S = P_f·R_f·T / e.

Evidently, a higher integration time increases the signal power, but the integration time is ultimately limited by the finite rejection ratio of the implemented filter, which causes the detector to saturate at the highest allowable voltage swing V_SW. The maximum integration time is given by

T_max = η·V_SW·C_fb / (P_l·R_f),

where P_l is the excitation light power and η is the filter rejection ratio. As can be seen, a stronger filtering ratio allows a higher integration time, boosting the signal. It is therefore critical to ensure a robust filtering performance, as demonstrated in this paper, with a ratio between 45-60 dB across angles of incidence (Fig. 5). On the other hand, the total standard error at the output is composed of the sensing circuit noise, the readout noise, the photon shot noise, the LED power fluctuation, and the standard error of the assay, i.e. the biological noise. This can be collectively expressed as

V_N,total² = V_Ncir² + V_Nadc² + V_n,ph² + (V_SW·σ_ex)² + (V_sig·σ_bio)².

Here, V_Ncir is the photo-sensing circuit noise, C_fb is the feedback capacitance of the CTIA, V_Nadc is the readout and quantization noise, σ_ex represents the normalized standard deviation of the fluctuation of the LED excitation power, and σ_bio represents the biological noise. Since the photon shot noise is dominant, with P_n,ph² = (P_l/η)·R_f·T / e in the same electron-count units, the SNR increases monotonically with the integration time T until T reaches the maximum value above. Using this maximum integration time, the minimum detectable fluorescence signal as a fraction of the incident excitation light, for a signal-to-noise ratio (SNR) of one, can be derived to be

(P_f/P_l)_min = V_Total / (η·V_SW).

As described in Section 3.3, the measured r.m.s. noise voltage is 3.5 mV. This can be expressed as V_Total = sqrt((V_SW·σ_ex)² + V_Ncir² + V_Nadc² + e·V_SW/C_fb) and includes the LED fluctuations, the circuit and readout noise, and the photon shot noise. However, with multiple signal acquisitions (∼100) for each sensing pixel, the total noise can be further reduced by simple averaging, to V_Total ≈ 0.7 mV, nearly 7 dB below the single-acquisition level. The reduction is less than the 10 dB expected for a white-noise process due to the contribution of the 1/f noise of the circuits. In summary, we measure the following parameters in our design:

• The maximum allowable voltage swing V_SW is measured to be around 1 V.
• The rejection ratio η varies between 45-60 dB (Fig. 5); we assume here the worst-case value of 45 dB.
• The standard error of the assay variation, i.e. the biological noise, is estimated to be σ_bio = 20%.
• The measured V_Total is ≈ 0.7 mV after averaging.

Utilizing these values, the maximum ratio between the fluorescence and excitation signals can be estimated to be P_fl/P_ex ≈ -77 dB for the implemented system.

Fig. 8. (a-c) Measured multiplexed detection capability of the sensor array with different DNA capture strands, showing positive responses at the conjugated site and almost no signal at the non-specific sites. As the glass slip is 100 µm from the sensor surface, fluorescence emission can be seen spreading across multiple pixels (each square is a sensor site). This is not a fundamental limitation of the sensor, but arises as a result of the spacing between the chip and the bio-interface.

Fig. 9. (a) Variation of the achievable sensitivity, expressed as P_ex/P_fl at an SNR of 1, with signal power and integration time. Maximizing the integration time to reach the maximum achievable voltage swing allows this ratio to reach nearly 77 dB. This is achieved with an initial pre-filtering of 45-60 dB performed optically by the nanoplasmonic filters; the remaining background suppression is achieved electronically.
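A quick numerical check of the -77 dB estimate, using the relation (P_f/P_l)_min = V_Total/(η·V_SW) reconstructed above with the quoted worst-case parameters (a sketch, not the full Fig. 9 analysis):

```python
import math

V_total = 0.7e-3  # averaged output noise (V)
V_sw = 1.0        # maximum voltage swing (V)
eta_db = 45.0     # worst-case optical rejection ratio (dB)

eta = 10 ** (eta_db / 10)
ratio = V_total / (eta * V_sw)
print(f"P_f/P_l at SNR = 1 ~ {10 * math.log10(ratio):.1f} dB")  # ~ -76.5 dB
```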
The Qdot 800 used in this work has an absorption coefficient of 1.06 × 10⁷ cm⁻¹M⁻¹, which translates to a molecular absorption cross-section of 1.76 × 10⁻¹⁴ cm². Assuming an estimated quantum efficiency of 0.5, P_fl/P_ex ≈ -77 dB translates to a detection limit of 0.23 dot/µm² at an SNR of 1. For an SNR of 3-4, the theoretical limit corresponds to 0.7-1 dot/µm², which is very close to the measured value shown in Fig. 7(a). The electronic-photonic co-design approach allows us to reach levels of P_fl/P_ex ≈ -77 dB through a combination of optical and electronic filtering and processing across multiple stages. An initial pre-filtering of 45-60 dB is achieved optically with the nanoplasmonic filters. The remaining suppression of the background is enabled electronically with a control site, and by noise minimization through multiple acquisitions and averaging (100 acquisitions in less than 1 minute). The sensor design itself uses interlaid differential sensing (Fig. 3) to minimize the effect of dark currents. The achievable sensitivity varies with both signal power and integration time, as captured in Fig. 9. This co-design and integration methodology allows us to collectively reach sensitivity levels approaching those of commercial ELISA platforms.

Conclusion and discussion

In conclusion, we present for the first time an external-optics-free nano-optical fluorescence 96-sensor array in CMOS with sensitivities comparable to those of commercial ELISA readers. The salient features of the system can be summarized as follows:

• We demonstrate for the first time the integration of complex nanoplasmonic structures and active devices in a 'no change' approach to an industry-standard CMOS process. This is the key toward such extreme miniaturization, and it enables a new class of angle-insensitive nanoplasmonic filtering (45-60 dB) across a massively parallelized chip-scale platform. This filtering characteristic is distinct from classical interference-based or plasmonics-based resonant structures, allowing us to eliminate all external optics and miniaturize the entire system to this extent.

• The work demonstrates the strength of electronic/nano-optic co-integration, co-design, and cross-layer optimization, where 45-50 dB of filtering is achieved optically and 25-30 dB of subsequent electronic filtering allows us to reach fluorescence detection limits nearly 77 dB below the excitation signal.

• The system demonstrates measured detection limits down to zeptomoles of fluorophores on the surface (∼1 dot/µm² of surface density) and down to ∼100 fM for DNA and ∼5 pM for protein, comparable to commercial fluorescence-based readers and ELISA-based detection systems.

• Being realized in a fully integrated CMOS process with no post-processing, this can enable future point-of-care systems at extremely low cost by incorporating the sensing interface, the sensing platform, and the reader all in one chip. In addition, the entire nano-optical system, including the source, occupies less than 0.1 cc in volume, potentially enabling future in-vivo biomolecular sensing modalities.
The ability of a CMOS system to detect fluorescent quantum dots down to the zeptomole level (∼1 dot/µm² of surface density) is a promising sign that more complex multi-modal sensing systems can be envisioned for future sensing applications. While the system sensitivities of ∼100 fM (DNA) and ∼5 pM (protein) are lower than those of other works utilizing plasmonic enhancement, gold nanoparticle labels, fiber-optic readouts, organometallic labels, and GMR sensors, they are comparable to commercial fluorescence-based readers and ELISA-based detection and reader systems, and therefore suitable for a wide range of molecular diagnostic applications in clinical and research settings. Currently, the sensitivity is partly limited by the poor quantum efficiency of the diodes (Q.E. ∼ 0.1) in the implemented process and can be further increased by migrating to a CMOS imager process with a similar feature size. We are also investigating the light-delivery mechanism for a more uniform illumination. While the variation is not large enough to cause Qdot photo-bleaching and can be addressed with an automated calibration process, it is still important to enhance the uniformity as much as possible. Uniformity of the background ensures a similar dynamic range across all the sensor pixels, and this requires modification of the light-delivery mechanism. While we currently allow the LED to shine directly on the chip, pixel-to-pixel variation in the background can be reduced by coupling the light through an optical fiber and employing a diffuser above the chip. Co-design of the electronic and optical packaging is critical to the sensor performance, and we are currently investigating methods to achieve a more uniform illumination. When compared with commercially available biochips such as Affymetrix arrays, it can be noted that the presented sensor array does not need an additional reader: it encompasses the sensor platform, the sensors, and the reader, all integrated into the chip (with the source closely packaged as well). Quantum-dot-based assays have matured over the years, and we employ these labels to allow the scattering- and angle-insensitive filtering. This is the key to miniaturization, where the light path is not collimated, unlike in bulky fluorescence readers. While a 96-sensor array is demonstrated in this paper, given the integration capability of both the electronics and future nano-optics, the technology can potentially be scaled to tens of thousands, if not hundreds of thousands, of sensing sites in a scalable and cost-effective manner. Of course, the entire system is more than the sensor. Scaling of CMOS-based bio-sensing platforms, including packaging and functionalization, is expectedly a multi-step process that is likely to be handled by different facilities. While this can involve varying degrees of cost, it is in line with the manufacturing of commercial electronic systems, where chip fabrication, packaging, assembly, and testing are carried out in different facilities. Typically, the costs are split evenly across these different steps, and a similar partition of the manufacturing cost can be envisioned for CMOS-based biosensing systems. Both IC fabrication and bio-functionalization are standardized processes and can be carried out separately, and a final packaging step can bring these parts together into a complete system. This does not negate the significant cost reduction of the entire system, since it eliminates the complexity and cost of the reader.
More importantly, through miniaturization and the possible integration of a wireless interface in the sensor, this can enable a new eco-system of connected biosensors deployable at the point of care. With the ability to interface with digital microfluidics in the future, this can enable new, complex, ultra-miniaturized sample-to-answer biomedical devices with great potential to be deployed at the point of care for both in-vitro and in-vivo applications.
n = int(input()) k = int(input()) m = len(list(str(n))) n0 = list(str(n)) for i in range(m): n0[i] = int(n0[i]) ans = 0 comb = 1 if k < m: for j in range(k): comb *= (m-1-j) comb //= (j+1) ans += comb*(9**k) #print(ans,k,m) if k == 1: ans += n0[0] elif k == 2: if m >= 2: ans += (n0[0]-1)*9*(m-1) c = 1 while n0[c] == 0 and c < m-1: c += 1 if n0[c] > 0: ans += n0[c] ans += 9*(m-c-1) else: ans = 0 else: if m >= 3: ans += (n0[0]-1)*9*9*(m-1)*(m-2)//2 c = 1 while n0[c] == 0 and c <= m-2: c += 1 if c < m-1: ans += (n0[c]-1)*9*(m-c-1) ans += 9*9*(m-c-1)*(m-c-2)//2 c1 = c+1 while n0[c1] == 0 and c1 < m-1: c1 += 1 if n0[c1] > 0: ans += n0[c1] ans += 9*(m-c1-1) else: ans = 0 print(ans)
Surgical management of umbilical masses with associated umbilical cord remnant infections in calves. Intra-abdominal umbilical cord remnant infections were diagnosed in 21 calves during a 5-year period. The urachal remnant alone was involved in 15 calves, umbilical artery remnant alone in 1 calf, and the umbilical vein remnant alone in 4 calves. Both urachus and umbilical vein were involved in 1 calf. All cases were managed surgically by ventral celiotomy. Infected urachal remnants not extending to the bladder, infected umbilical artery remnant, and infected umbilical vein remnants not extending to the liver were dissected free of surrounding adhered structures, ligated proximal to the infected segment, transected, and removed. Infected urachal remnants extending to the bladder were similarly isolated and removed after resection of the attached bladder apex. Infected umbilical vein remnants extending to the liver were marsupialized. Of 19 calves available for follow-up from 1 to 32 months after surgery, 15 recovered without any postoperative complications, 3 had short-term complications, and 1 calf developed an incisional hernia.
<filename>Automation/src/TestCases/LogOutSuacedo.java package TestCases; import Pages.Dashboard; import Pages.HomePage; import Pages.LoginPage; import Pages.LogOut; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.support.ui.ExpectedConditions; import org.openqa.selenium.support.ui.WebDriverWait; import java.util.concurrent.TimeUnit; public class LogOutSuacedo { public static void main(String[] args) throws InterruptedException { System.setProperty("webdriver.chrome.driver", "c:\\chromedriver.exe"); WebDriver driver = new ChromeDriver(); driver.get("https://www.saucedemo.com/"); //Creating object of Login page LoginPage login = new LoginPage(driver); //Creating object of Dashboard Dashboard dashboard = new Dashboard(driver); //Creatoig object of LogOut LogOut logout = new LogOut(driver); //Click on Login button login.clickLogin(); //Enter username & password login.enterUsername("standard_user"); login.enterPassword("<PASSWORD>"); //Click on login button login.clickLogin(); Thread.sleep(3000); //Click on Menu button dashboard.ClickMenuButton(); Thread.sleep(7000); //Click on Logout button logout.clickLogout(); Thread.sleep(4000); //Validate if HomePage header Container ID is present logout.checkIfElementExists(); Thread.sleep(4000); //Validate object inside login page System.out.println("Success Execution for 'LogOutSuacedo'"); //Close browser instance driver.quit(); } }
The Class of 2010 is heading into the real world but where should they live? Urban guru Richard Florida and his team find the best cities for the young and ambitious. Let’s not go overboard. That 20 percent plus unemployment rate includes high school dropouts and people who didn’t finish college. The unemployment rate for college graduates is actually less than 5 percent. And the unemployment rate in the professional and technical fields where you’re most likely to work—science and engineering, business and management, education and health care—is just under 4 percent. Make no mistake about it, times are tough—but it’s blue-collar workers and blue-collar communities that have borne the full brunt of the crisis. Most recent college grads will find jobs, even if they have to look a little longer than previous classes did. And that’s not such a bad thing. With all those high-paying corporate entry-level jobs for the taking during the boom years, too many young people went for the bucks and landed in careers that were unsatisfying and unfulfilling. Now more than ever, it’s really important to put serious thought into where you want to live. The place you choose to live is key to your economic future. Jobs no longer last forever. In fact, the average twentysomething switches jobs every year. Places can provide the vibrant, thick labor market that can get you that next job, and the one after that and be your hedge against layoffs during this economic downturn. Early career moves are the most important of all, according to Don Peck in the National Journal. He cites a prominent study that finds that “about two-thirds of all lifetime income growth occurs in the first 10 years of a career, when people can switch jobs easily, bidding up their earnings.” Sure you can move from place to place—and it’s true twentysomethings are three- to four-times more likely to move than fiftysomethings—but it’s a lot easier to manage a forward-looking career if you choose the right place with abundant opportunity to start out in. So what do twentysomethings want in a community? To get at this, my team and I analyzed the results of a Gallup survey of some 28,000 Americans in their 20s. Some key things stand out. Jobs are clearly important—but just as clearly, they’re not all-important. When asked what would keep them in their current location, twentysomethings ranked the availability of jobs second. Twentysomethings understand well they face not only fewer job options but dwindling corporate commitment—it’s not only harder to find a job, it’s also easier to lose it. So it makes good sense to pick a city where the labor market is thick with job opportunities as a hedge against economic insecurity. What twentysomethings value the most is the ability to meet people and make friends. This also makes very good sense actually. Personal networks are about much more than having fun, they’re among the best ways to find a job and move forward in a career. Twentysomethings rank the availability of outstanding colleges and universities highly. Many want to go back to school to pursue a graduate degree or professional degree, and having these options available where you live is a big plus. Of course, young people value amenities, too—from parks and open space to nightlife and culture. It’s less about all-night partying though, twentysomethings prefer places where they can easily go for a run or bike ride, work out or walk their dog, grab a coffee, take in a concert, see interesting new art, or take in a good meal with friends. 
• College educated workforce—the share of the workforce with a bachelor’s degree or higher. • Rental housing—having an abundant, available stock of rental housing is key. We measured this as the share of all housing made up of rental units. • Youth-oriented amenities—like bars, restaurants, cafes, sports facilities and entertainment venues. • Creative capital: We use this to capture the creative energy of a place. It’s measured as the share of employed artists, musicians, actors, dancers, writers, designers, and entertainers in the workforce. Affordability: The overall rankings do not take housing costs into account. Generally speaking, new college grads are renters and can easily share apartments to reduce costs. It’s also difficult to get a handle on the full living costs borne by young people—some communities have accessible mass transit; in others, new grads must buy a car (and pay for insurance, maintenance, gas, and parking). So, we decided to break out an additional index to account for affordability. This index includes a variable for rent levels—median contract rent. It weights affordability at 25 percent of the overall index value, and lets the other nine indicators account for the remaining 75 percent. We mark cities that rank in the top 25 on this combined affordability index with an asterisk(*). The data is the most current available, for 2008, 2009, or 2010 depending on the variables. All nine variables are equally weighted. The technical analysis was conducted by a Martin Prosperity Institute team of Charlotta Mellander, Kevin Stolarick, Patrick Adler, and Ian Swain. College towns dominate the top spots. Ithaca is first followed by Madison, Wisconsin; Ann Arbor, Michigan; Durham, North Carolina; Austin, Texas; and Boulder Colorado. That may seem a bit surprising to the legions of new grads who are off to the big city. Boulder and Austin are two of the country’s leading centers for innovation and high-tech business with great sports and music scenes to boot. And college towns—from Iowa City, Iowa to Charlottesville, Virginia, from Lawrence, Kansas to Lincoln, Nebraska, from Columbia, Missouri to State College, Pennsylvania—provide terrific “stay-over” locations for new grads who want to maintain their networks, try out their skills or develop new ones. They have high percentages of young, highly educated singles; they provide an affordable alternative to bigger cities while still delivering a high quality of life; and they’ve proven to be among the most resilient communities during the economic downturn. The list also has its share of big cities. D.C. is the top big city on our list in seventh place; and it’s followed closely by New York City and Boston. San Francisco, San Diego, L.A., Seattle, and San Jose (the hub of Silicon Valley, still hands-down the best place for techies) all make the top 25. But do remember: There’s no absolute best place for new grads—or anyone else for that matter. Different strokes for different folks: For every twentysomething that wants to head to the big city there are those who prefer some place closer to home or a smaller, more affordable community. It’s best to think of this list as a general guide to help you orient your choices. When we were building our index we found that small shifts in the datasets we used and how they were weighted would reorder the cities near the top, but the picks in the top 25 remained surprisingly consistent. 
Ithaca, for example, always made the top 25, but adding the last two variables to the index raised its rank from 14th to first. So college grads should think of this list as a way to orient their own personal list, rather than a winner-take-all competition. That’s the key thing, really—to pick the place that’s best for you—that fits your own career outlook, your current situation, and your life plans. My team at the Martin Prosperity Institute has developed a tool called Place Finder that asks for some of your preferences and generates a custom list of places that might be right for you. That choice is more important now than ever. While the place you choose to start your career and your life is always important, it’s taken on additional importance during the current economic downturn. This is no run-of-the-mill economic cycle recession but a full-blown economic transformation, the kind that comes around only once every generation or two. Great Resets like these give rise to the life-altering “gales of creative destruction” that the great economist Joseph Schumpeter wrote of—to new technologies, new industries, and whole new ways of living. If some cities may fall further and further behind, others—the most innovative, adaptive, open-minded places—may be on the brink of unprecedented prosperity. And you might just be a part of it. Choose wisely. Richard Florida is Director of the University of Toronto’s Martin Prosperity Institute and author of The Great Reset, published this month by Harper Collins. Kevin Stolarick developed the data; Charlotta Mellander conducted the statistical analysis. Patrick Adler and Ian Swain assisted with the analysis.
Relationship among catheter insertions, vascular access infections, and anemia management in hemodialysis patients. BACKGROUND Arteriovenous fistulas are the recommended permanent vascular access (VA) for chronic hemodialysis. However, in the United States most patients begin chronic hemodialysis with a catheter. Recent data suggest that VA type contributes to recombinant human erythropoietin (rHuEPO) resistance. We examined catheter insertions, VA infections, and anemia management in Medicare, rHuEPO-treated, chronic hemodialysis patients. METHODS We compared hemoglobin values and rHuEPO and intravenous iron dosing with concurrent catheter insertions and VA infections in 186,348 period-prevalent patients in 2000. We studied anemia management after catheter insertions and VA infections in 67,410 incident patients from 1997 to 1999. Multiple linear regression models examined follow-up hemoglobin and rHuEPO dose per week (rHuEPO/wk) by numbers of catheter insertions and hospitalizations for VA infection. RESULTS In the prevalent cohort, increasing temporary and permanent catheter insertions and VA infections were associated with slightly lower hemoglobin, higher rHuEPO doses, and higher intravenous iron doses. In the incident cohort, compared to patients with no VA infections or no catheter insertions (temporary or permanent), respectively, patients with 2+ VA infections or 2+ catheter insertions had 0.12 g/dL and 0.06 g/dL lower mean hemoglobin (P = 0.0028 and P < 0.0001), and 25.7% and 12.2% higher mean rHuEPO/wk (P < 0.0001). CONCLUSION Higher rHuEPO doses may be required to maintain similar or slightly lower mean hemoglobin values among chronic hemodialysis patients with higher numbers of catheter insertions and VA infections, compared to patients without any.
/** * \file Rmy85000Node.cpp * * Created by <NAME> on 8/11/16. * Copyright (c) 2016 Agilatech. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: * The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. * */ #include "Rmy85000Node.h" namespace rmy85000 { using v8::FunctionCallbackInfo; using v8::FunctionTemplate; using v8::Function; using v8::Persistent; using v8::Isolate; using v8::Context; using v8::Local; using v8::Handle; using v8::Object; using v8::String; using v8::Value; using v8::Number; using v8::Boolean; Persistent<Function> Rmy85000Node::constructor; Rmy85000Drv* Rmy85000Node::driver = 0; void Rmy85000Node::Init(Local<Object> exports) { Isolate* isolate = exports->GetIsolate(); // prep the constructor template Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New); // associates the New function with the class named Rmy85000 tpl->SetClassName(String::NewFromUtf8(isolate, "Rmy85000")); // InstanceTemplate is the ObjectTemplate assocated with the function New tpl->InstanceTemplate()->SetInternalFieldCount(1); NODE_SET_PROTOTYPE_METHOD(tpl, "deviceName", getDeviceName); NODE_SET_PROTOTYPE_METHOD(tpl, "deviceType", getDeviceType); NODE_SET_PROTOTYPE_METHOD(tpl, "deviceVersion", getDeviceVersion); NODE_SET_PROTOTYPE_METHOD(tpl, "deviceNumValues", getDeviceNumValues); NODE_SET_PROTOTYPE_METHOD(tpl, "typeAtIndex", getTypeAtIndex); NODE_SET_PROTOTYPE_METHOD(tpl, "nameAtIndex", getNameAtIndex); NODE_SET_PROTOTYPE_METHOD(tpl, "deviceActive", isDeviceActive); NODE_SET_PROTOTYPE_METHOD(tpl, "valueAtIndexSync", getValueAtIndexSync); NODE_SET_PROTOTYPE_METHOD(tpl, "valueAtIndex", getValueAtIndex); // store a reference to this constructor constructor.Reset(isolate, tpl->GetFunction()); exports->Set(String::NewFromUtf8(isolate, "Rmy85000"), tpl->GetFunction()); } void Rmy85000Node::getDeviceName(const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); std::string name = driver->getDeviceName(); Local<String> rmy85000 = String::NewFromUtf8(isolate, name.c_str()); args.GetReturnValue().Set(rmy85000); } void Rmy85000Node::getDeviceType(const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); std::string type = driver->getDeviceType(); Local<String> deviceType = String::NewFromUtf8(isolate, type.c_str()); args.GetReturnValue().Set(deviceType); } void Rmy85000Node::getDeviceVersion(const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); std::string ver = driver->getVersion(); Local<String> deviceVer = String::NewFromUtf8(isolate, ver.c_str()); args.GetReturnValue().Set(deviceVer); } void 
Rmy85000Node::getDeviceNumValues (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); int value = driver->getNumValues(); Local<Number> deviceNumVals = Number::New(isolate, value); args.GetReturnValue().Set(deviceNumVals); } void Rmy85000Node::getTypeAtIndex (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); Local<Context> context = isolate->GetCurrentContext(); std::string type = driver->getTypeAtIndex(args[0]->NumberValue(context).FromMaybe(0)); Local<String> valType = String::NewFromUtf8(isolate, type.c_str()); args.GetReturnValue().Set(valType); } void Rmy85000Node::getNameAtIndex (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); Local<Context> context = isolate->GetCurrentContext(); std::string name = driver->getNameAtIndex(args[0]->NumberValue(context).FromMaybe(0)); Local<String> valName = String::NewFromUtf8(isolate, name.c_str()); args.GetReturnValue().Set(valName); } void Rmy85000Node::isDeviceActive (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); bool active = driver->isActive(); Local<Boolean> deviceActive = Boolean::New(isolate, active); args.GetReturnValue().Set(deviceActive); } void Rmy85000Node::getValueAtIndexSync (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); Local<Context> context = isolate->GetCurrentContext(); std::string value = driver->getValueAtIndex(args[0]->NumberValue(context).FromMaybe(0)); Local<String> retValue = String::NewFromUtf8(isolate, value.c_str()); args.GetReturnValue().Set(retValue); } void Rmy85000Node::getValueAtIndex (const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); Local<Context> context = isolate->GetCurrentContext(); Work * work = new Work(); work->request.data = work; // get the desired value index from the first param in the JS call work->valueIndex = args[0]->NumberValue(context).FromMaybe(0); // store the callback from JS in the work package so we can invoke it later Local<Function> callback = Local<Function>::Cast(args[1]); work->callback.Reset(isolate, callback); // kick of the worker thread uv_queue_work(uv_default_loop(),&work->request,WorkAsync,WorkAsyncComplete); args.GetReturnValue().Set(Undefined(isolate)); } void Rmy85000Node::New(const FunctionCallbackInfo<Value>& args) { Isolate* isolate = args.GetIsolate(); Local<Context> context = isolate->GetCurrentContext(); String::Utf8Value param0(isolate, args[0]); std::string devfile = std::string(*param0); float calibration = args[1]->IsUndefined() ? 
1.75 : args[1]->NumberValue(context).FromMaybe(0); // if invoked as costructor: 'new Rmy85000(...)' if (args.IsConstructCall()) { Rmy85000Node* obj = new Rmy85000Node(devfile, calibration); obj->Wrap(args.This()); args.GetReturnValue().Set(args.This()); } // else invoked as plain function 'Rmy85000(...)' -- turn into construct call else { const int argc = 2; Local<Value> argv[argc] = { args[0], args[1] }; Local<Function> cons = Local<Function>::New(isolate, constructor); Local<Context> context = isolate->GetCurrentContext(); Local<Object> instance = cons->NewInstance(context, argc, argv).ToLocalChecked(); args.GetReturnValue().Set(instance); } if (!driver) { driver = new Rmy85000Drv(devfile, calibration); } } // called by libuv worker in separate thread void Rmy85000Node::WorkAsync(uv_work_t *req) { Work *work = static_cast<Work *>(req->data); work->value = driver->getValueAtIndex(work->valueIndex); } // called by libuv in event loop when async function completes void Rmy85000Node::WorkAsyncComplete(uv_work_t *req, int status) { Isolate * isolate = Isolate::GetCurrent(); v8::HandleScope handleScope(isolate); Work *work = static_cast<Work *>(req->data); // the work has been done, and now we store the value as a v8 string Local<String> retValue = String::NewFromUtf8(isolate, work->value.c_str()); // set up return arguments: 0 = error, 1 = returned value Handle<Value> argv[] = { Null(isolate) , retValue }; // execute the callback Local<Function>::New(isolate, work->callback)->Call(isolate->GetCurrentContext()->Global(), 2, argv); // Free up the persistent function callback work->callback.Reset(); delete work; } void init(Local<Object> exports) { Rmy85000Node::Init(exports); } NODE_MODULE(rmy85000, init) } // namespace rmy85000
import { Component } from '@angular/core'; import { NavController } from 'ionic-angular'; /** * Generated class for the QuizPage page. * * See https://ionicframework.com/docs/components/#navigation for more info on * Ionic pages and navigation. */ @Component({ selector: 'page-quiz', templateUrl: 'quiz.html', }) export class QuizPage { constructor(public navCtrl: NavController) { } // TODO: translate to Angular code. See Orcamento page ionViewDidLoad() { (function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(d.getElementById(id))return;js=d.createElement(s);js.id=id;js.src='https://embed.playbuzz.com/sdk.js';fjs.parentNode.insertBefore(js,fjs);}(document,'script','playbuzz-sdk')); } openSobre() { this.navCtrl.push('SobrePage'); } }
/**
 * Renders the specified item of the inventory slot at the specified location.
 * Args: slot, x, y, partialTick
 */
private void renderInventorySlot(int slot, int x, int y, float partialTick)
{
    ItemStack itemStack = this.mc.thePlayer.inventory.mainInventory[slot];

    if (itemStack != null)
    {
        // Remaining "pop" animation time for a freshly picked-up item.
        float animationTicks = (float)itemStack.animationsToGo - partialTick;

        if (animationTicks > 0.0F)
        {
            GL11.glPushMatrix();
            float scale = 1.0F + animationTicks / 5.0F;
            // Scale around the center of the slot to produce the pop effect.
            GL11.glTranslatef((float)(x + 8), (float)(y + 12), 0.0F);
            GL11.glScalef(1.0F / scale, (scale + 1.0F) / 2.0F, 1.0F);
            GL11.glTranslatef((float)(-(x + 8)), (float)(-(y + 12)), 0.0F);
        }

        itemRenderer.renderItemAndEffectIntoGUI(this.mc.fontRenderer, this.mc.getTextureManager(), itemStack, x, y);

        if (animationTicks > 0.0F)
        {
            GL11.glPopMatrix();
        }

        itemRenderer.renderItemOverlayIntoGUI(this.mc.fontRenderer, this.mc.getTextureManager(), itemStack, x, y);
    }
}
Edgeworthia chrysantha, commonly called paper bush, is a compact deciduous shrub that blooms from late winter through early spring. Paper bush is native to China but thrives in United States Department of Agriculture plant hardiness zones 7 through 10. It has dark green leaves and produces clusters of small, fragrant yellow flowers that fade to shades of white. The white flowers then give way to small purple berries. Paper bush is prized in American landscapes for its interesting form and flowers. In its native Asia, paper bush is widely grown because its bark is used -- as the name implies -- to make paper.

Divide a mature paper bush in mid to late winter to produce a new plant; dig up the root ball and separate it at the roots into multiple plants. You can also purchase paper bushes from nurseries to transplant into the garden. Select a planting location that receives full sun or partial shade and has fertile, well-drained soil. Amend the soil with organic material as needed to produce fertile clay to loamy soil with good drainage and a pH that is neutral to acidic. You can add finished compost, straw, grass clippings, manure, shredded bark mulch, sand, sphagnum peat and other organic matter to improve soil structure.

Dig a hole as deep as the root ball and twice as wide. Plant the paper bush to the same depth as in the original container or planting, whether you purchased the plant from a nursery or propagated your own transplant by division. Cover the soil with a 1-inch layer of finished compost, then spread 1 to 3 inches of mulch around the plant without piling the mulch against the base of the paper bush.

Water the paper bush slowly and deeply to ensure deep root development. Older, woody shrubs require less frequent watering than young, tender, herbaceous paper bush shrubs. Soaker hoses are recommended to provide the slow, deep watering necessary for healthy root development. Plants require less frequent watering in winter than in summer, but should be watered during dry winter periods. Fertilize plants annually in fall with a complete fertilizer, such as 10-10-10, or with regular applications of finished compost to restore nutrients to the soil.

Prune the branches as needed to control the plant's size and to remove dead or broken branches. Paper bush isn't as fussy as many other shrubs and doesn't require regular seasonal pruning. It is a relatively low-maintenance plant that needs little outside attention once planted. You can also propagate paper bush from seed, but it takes much longer to establish a healthy plant.

Allonsy, Amelia. "How to Grow Edgeworthia Chrysantha." Home Guides | SF Gate, http://homeguides.sfgate.com/grow-edgeworthia-chrysantha-27483.html. Accessed 18 April 2019.
By CORRESPONDENT, NYAHURURU, Kenya, Oct 27 – A man died after he was allegedly sodomised and his private parts chopped off at Mwangaza village in the Subukia area of Nakuru County. James Wanderi, 32, succumbed to his injuries while undergoing treatment at the Nyahururu District Hospital early on Wednesday.

According to Subukia civic leader Kiriethe Ndigirigi, who rushed the victim to the hospital, Mr Wanderi was walking home on Tuesday night when he was accosted by a gang of seven men at the Mwangaza trading center. They roughed him up and stole his mobile phone and money before committing the heinous act. “They took him to a nearby bush where they sexually assaulted him before chopping off his private parts and fled in the darkness,” he said.

The civic leader noted that villagers found the man writhing in pain in the morning and took him to hospital. “It was after they noticed blood trickling from his private parts that they realised that they had been chopped off,” he noted.

A nurse at the hospital, who declined to be named, said that Mr Wanderi was pronounced dead moments after he had been attended to, and attributed the death to excessive bleeding. “He had bled so much. He had just narrated to us of the painful ordeal before he died. It was a pathetic story,” she said.

Nakuru Deputy Police boss Mathias Guyo also confirmed the incident, saying police in the area had launched investigations. He urged the public to assist in the probe. The incident occurred barely two weeks after another man was allegedly sodomised by a gang of five at Kwa Ndatho farm in Subukia. The councilor urged the government to deploy more security personnel in the town, regretting that the violent crime rate was now on the rise.
package jorge.veamurguia.entidad;

//
// Generated by StarUML(tm) Java Add-In
//
// @ Project   : Untitled
// @ File Name : Articulo.java
// @ Date      : 01/03/2010
// @ Author    : jorge.veamurguia
// @ email     : <EMAIL>
//
public class Articulo {

    public Integer ID;
    public String Articulo;       // article name; shadows the class name, kept as generated
    public String Codigo;         // product code
    public Categoria ID_Categoria; // owning category
    public Float precio;          // unit price
}
A new system for offline signature identification and verification Biometric features are of great importance in today's authentication systems, and the signature is one of the most important and conventional biometrics. In this paper, we propose a system with two independent phases for offline signature identification and verification. The identification phase is based on the Triangular Spatial Relationship (TSR), a rotation-invariant feature extraction method; a symbolic representation of the signature is employed to make the use of TSR possible. In the verification phase, a hybrid method is proposed that combines the Discrete Wavelet Transform (DWT), Gabor filters, and image fusion. Experimental results on standard benchmarks confirm the precision of the proposed method and its robustness against translation, scaling, and rotation.
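To make the Gabor-filter stage of the verification phase concrete, here is a minimal sketch of generating a Gabor kernel bank. The abstract does not give the authors' parameters, so the wavelength, orientations, and envelope settings below are illustrative assumptions, not the paper's values.

// Sketch: 2D Gabor kernel g(x,y) = exp(-(x'^2 + g^2 y'^2)/(2 s^2)) * cos(2 pi x'/lambda + psi),
// with (x', y') the coordinates rotated by theta. Parameters here are illustrative.
function gaborKernel(
  size: number,     // kernel is size x size; use an odd size
  lambda: number,   // wavelength of the sinusoidal carrier
  theta: number,    // orientation in radians
  sigma: number,    // std-dev of the Gaussian envelope
  gamma: number,    // spatial aspect ratio
  psi: number,      // phase offset
): number[][] {
  const half = Math.floor(size / 2);
  const kernel: number[][] = [];
  for (let y = -half; y <= half; y++) {
    const row: number[] = [];
    for (let x = -half; x <= half; x++) {
      // Rotate coordinates into the filter's orientation.
      const xr = x * Math.cos(theta) + y * Math.sin(theta);
      const yr = -x * Math.sin(theta) + y * Math.cos(theta);
      const envelope = Math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma));
      const carrier = Math.cos((2 * Math.PI * xr) / lambda + psi);
      row.push(envelope * carrier);
    }
    kernel.push(row);
  }
  return kernel;
}

// A typical bank covers several orientations; filter responses are then
// summarized (e.g. mean and variance per orientation) into a feature vector.
const bank = [0, 1, 2, 3].map(k => gaborKernel(21, 8, (k * Math.PI) / 4, 4, 0.5, 0));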
Today the government released data showing how much different hospitals charge for the same procedure. I've been struggling since last night to figure out what to say about this, since in one way there's no news here. The fact that there are huge disparities has been well known for quite a while. This new data simply lays it out in more mind-numbing detail than usual. For now, then, I'm just going to offer up a couple of good graphical presentations that I've seen.

The first is your basic map, courtesy of the New York Times. I zoomed in on Los Angeles here:

Take a look at the hospitals in the middle. There's a disparity of 2-4x in pricing between hospitals that are only a couple of miles apart. Why? Some of it is probably due to the nature of the cases they take and the amount of unpaid work they do. But 2-4x? What accounts for this?

Part of the answer comes from the chart below, courtesy of the Washington Post:

This doesn't explain everything, but it explains a fair amount. The private sector, we're told, is always more efficient than the public sector. Competition, you understand. But that doesn't seem to be the case in the healthcare industry. I will allow you to draw your own conclusions.
/**
 * Convert a preamble length from number of symbols to milliseconds.
 * @param preambleLen Preamble length in number of symbols
 * @return The preamble duration in ms
 */
uint32_t preamble_symbols_to_timems(uint16_t preambleLen)
{
    double pTime;

    /* Preamble duration = (programmed symbols + 4.25 fixed symbols) * symbol
     * time; get_symbol_time() is assumed to return seconds, hence the 1e3
     * scaling below. */
    pTime = (preambleLen + 4.25) * get_symbol_time();

    return floor(pTime * 1e3); /* floor() requires <math.h> */
}
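The 4.25 constant is consistent with the LoRa preamble structure (programmed symbols plus 4.25 fixed sync/chirp symbols). Below is a sketch of the same computation with the symbol time derived from radio settings; since get_symbol_time() is not shown, the Tsym = 2^SF / BW derivation is an assumption based on standard LoRa timing, not taken from this codebase.

// Sketch: LoRa preamble duration with symbol time derived from spreading
// factor and bandwidth (assumed formula: Tsym = 2^SF / BW seconds).
function symbolTimeSec(spreadingFactor: number, bandwidthHz: number): number {
  return Math.pow(2, spreadingFactor) / bandwidthHz;
}

function preambleSymbolsToTimeMs(
  preambleLen: number,
  spreadingFactor: number,
  bandwidthHz: number,
): number {
  // Preamble = programmed symbols + 4.25 fixed sync/chirp symbols.
  const tSec = (preambleLen + 4.25) * symbolTimeSec(spreadingFactor, bandwidthHz);
  return Math.floor(tSec * 1e3);
}

// Example: SF7 at 125 kHz gives Tsym = 1.024 ms, so an 8-symbol preamble
// lasts (8 + 4.25) * 1.024 ms = 12.544 ms, floored to 12 ms.
console.log(preambleSymbolsToTimeMs(8, 7, 125000));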