Bacopa monnieri Supplements Offset Paraquat-Induced Behavioral Phenotype and Brain Oxidative Pathways in Mice. BACKGROUND Parkinson's disease (PD) is characterized by alterations in cerebellum and basal ganglia functioning, with corresponding motor deficits and neuropsychiatric symptoms. Oxidative dysfunction has been implicated in the progression of PD, and environmental neurotoxin exposure could influence such behavioral and psychiatric pathology. Assessing dietary supplementation strategies with naturally occurring phytochemicals to reduce behavioral anomalies associated with neurotoxin exposure would have major clinical importance. The present investigation assessed the influence of Bacopa monnieri (BM) on behaviors considered to reflect an anxiety-like state and motor function, as well as selected biochemical changes in brain regions of mice chronically exposed to the ecologically relevant herbicide paraquat (PQ). MATERIALS & METHODS Male Swiss mice (4 weeks old) were given daily oral supplements of a standardized BM extract (200 mg/kg body weight/day for 3 weeks) and PQ (10 mg/kg, i.p., three times a week for 3 weeks). RESULTS We found that BM supplementation significantly reversed the PQ-induced reduction in exploratory behavior, gait abnormalities (stride length and mismatched paw placement) and motor impairment (rotarod performance). In a separate study, BM administration prevented the reduction in dopamine levels and reversed PQ-induced changes in cholinergic activity in the striatum, a brain region central to motor pathology. Further, the PQ-induced decrease in mitochondrial succinate dehydrogenase (SDH) activity and energy charge (MTT reduction) was restored by BM supplementation. CONCLUSION These findings suggest that BM supplementation mitigates paraquat-induced behavioral deficits and brain oxidative stress in mice. However, further investigation is needed to identify the specific molecular mechanisms by which BM influences behavioral pathology.
ISAR maneuvering target imaging based on compressive time-frequency distribution Based on compressive sensing theory, a new method for high-resolution inverse synthetic aperture radar (ISAR) imaging of maneuvering targets is proposed. In this method, only a few measurements in the ambiguity-function plane are sampled, and the time-frequency distribution is reconstructed by solving the resulting inverse problem with the basis pursuit method. With a Gaussian window as the sampling function, the trade-off between cross-term suppression and resolution is handled properly, and the proposed method obtains a clear, high-resolution ISAR image of a maneuvering target. The effectiveness of the proposed method is demonstrated on test data.
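The abstract gives no implementation details, so the following is only a minimal sketch of the basis pursuit step on a toy 1-D problem (random partial Fourier measurements of a sparse vector), not the paper's ambiguity-function pipeline. The sizes, sparsity level, and use of the cvxpy solver are illustrative assumptions.

# Minimal basis pursuit sketch (toy 1-D analogue of the reconstruction step).
# All sizes and the measurement model are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5             # signal length, measurements, nonzeros

# Sparse vector to recover (stand-in for a sparse time-frequency distribution).
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Partial Fourier sensing: keep m random rows of the real DFT matrix.
F = np.fft.fft(np.eye(n)).real / np.sqrt(n)
rows = rng.choice(n, m, replace=False)
A = F[rows, :]
b = A @ x_true                    # the few sampled measurements

# Basis pursuit: minimize the l1 norm subject to exact data consistency.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))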
St Saviour's Church, Oxton History St Saviour's was built between 1889 and 1892 to replace a church of 1846 that had become too small for the needs of its congregation. The architects were C. W. Harvey with Pennington and Bridgen. The foundation stone was laid on 26 March 1889. The first service was held in the church in 1891, although the tower was not fully built at that time. The building of the tower was completed in the following year, and the church was dedicated on 26 May 1892. In 1941 the roof and east end of the church were damaged by a bomb; these were rebuilt by Leonard Barnish, the east wall being reconstructed in a simplified form. Exterior The church is constructed in red sandstone with a Welsh slate roof. Its architectural style is Decorated, and the church has a cruciform plan; the plan consists of a nave with a clerestory, north and south aisles under lean-to roofs, a south porch, north and south transepts, a tower at the crossing, and a chancel. At the west end is a large five-light window containing Decorated tracery. Along the sides of the aisles are eight lancet windows, and the clerestory has four three-light windows with Decorated tracery, between which are pilaster buttresses. In the transepts are two-light Decorated windows, with a rose window above them. The tower rises for two stages above the body of the church, and has angle buttresses that rise to octagons and end in pinnacles. In the south-east corner of the tower is an octagonal stair turret. The bell openings are in pairs, louvred, and contain plate tracery. Between them are pilasters that terminate in pinnacles. The parapet of the tower is embattled. At the east end of the church is a circular window, which replaced the original bomb-damaged window. Interior Inside the church are four-bay arcades. Many of the fittings are in rich Arts and Crafts style. The reredos was designed by G. F. Bodley. It is in gilded oak and takes the form of a triptych. In the centre is a depiction of Christ in Glory above a depiction of the Nativity. These are flanked by figures of four Church Fathers. On the wings are figures of Saint Werburgh and Saint Cecilia, each of which is flanked by two angels. The chancel screen and choir and clergy stalls are by Edward Rae. They are carved and inlaid; the clergy stalls include canopies and misericords. The reredos in the south chapel is also by Rae; this contains four angels carved by Harry Hems. By the north chapel is a plaque by Della Robbia. The octagonal font is in alabaster. It contains panels carved with depictions of Christ and a lamb, and of three of the evangelists. Set into the west wall of the church is a war memorial of 1920 by Giles Gilbert Scott. It is in white marble with a black marble background and a red sandstone surround, and depicts the Crucifixion and angels. Below this are inscribed the names of the fallen. The stained glass in the east window, dedicated in 1974, is by L. C. Evetts. Before the war damage there was a set of windows in the chancel by C. E. Kempe, but only one of these has survived. There is more glass by Kempe in the vestry that has been moved from a house nearby. The west window and a window in the north transept of 1903 were designed by Edward Burne-Jones and made by Morris & Co. The original three-manual pipe organ was made by Robert Hope-Jones. It was reconstructed in 1908 by Norman and Beard. Work was carried out on this organ in 1935, 1947 and 1962 by Rushworth and Dreaper.
In 1985 Rushworth and Dreaper replaced this organ, re-using parts and pipes from the previous organ, and from a Conacher organ taken from a redundant church in Southport, to make a four-manual organ. There is a ring of ten bells, eight of which were cast in 1895 by John Taylor & Co, the ring being augmented to ten in 1976.
//
//  ACDJoinGroupCell.h
//  ChatDemo-UI3.0
//
//  Created by liang on 2021/10/26.
//  Copyright © 2021 easemob. All rights reserved.
//

#import "ACDCustomCell.h"

NS_ASSUME_NONNULL_BEGIN

@interface ACDJoinGroupCell : ACDCustomCell

@property (nonatomic, strong, readonly) UIButton *joinButton;
@property (nonatomic, copy) void (^joinGroupBlock)(void);

@end

NS_ASSUME_NONNULL_END
Eugen Weber Career He was born in Bucharest, Romania, the son of Sonia and Emmanuel Weber, a well-to-do industrialist. When he was ten, his parents hired a private tutor, but the tutor did not stay long. By the age of ten Weber was already reading The Three Musketeers by Alexandre Dumas, adventure novels by Karl May, poetry by Victor Hugo and Homer. He was also reading George Sand, Jules Verne and "every cheap paperback I could afford". At age 12, he was sent to boarding school in Herne Bay, in south-eastern England, and later to Ashville College, Harrogate. During World War II, he served with the British Army in Belgium, Germany and India between 1943 and 1947, rising to the rank of captain. Afterward, Weber studied history at the Sorbonne and the Institut d'Etudes Politiques de Paris (Sciences Po) in Paris. In 1950, Weber married Jacqueline Brument-Roth. He graduated with a BA in 1950 and an MA from the University of Cambridge in 1954. He then taught at Emmanuel College, Cambridge (1953-1954) and the University of Alberta (1954-1955) before settling in the United States, where he taught first at the University of Iowa (1955-1956) and then, until his retirement in 1993, at the University of California, Los Angeles (UCLA). At Cambridge University, Eugen Weber studied with the historian David Thomson. He studied for his PhD, but his dissertation was refused because the outside examiner, Alfred Cobban of the University of London, gave it a negative review, saying it lacked sufficient archival sources. Eugen Weber wrote a column titled "LA Confidential" for the Los Angeles Times. He also wrote for several French popular newspapers and, in 1989, presented an American public television series, The Western Tradition, which consisted of fifty-two lectures of 30 minutes each. He died in Brentwood, Los Angeles, California, aged 82. Methodology Weber took a pragmatic approach to history. He once observed: Nothing is more concrete than history, nothing less interested in theories or in abstract ideas. The great historians have fewer ideas about history than amateurs do; they merely have a way of ordering their facts to tell their story. It isn’t theories they look for, but information, documents, and ideas about how to find and handle them. Impact Weber is associated with several important academic arguments. His book Peasants into Frenchmen: The Modernization of Rural France, 1870-1914 is a classic presentation of modernization theory. Although other historians such as Henri Mendras had put forward similar theories about the modernization of the French countryside, Weber's book was amongst the first to focus on changes in the period between 1870 and 1914. Weber emphasizes that well into the 19th century few French citizens regularly spoke French, speaking instead regional languages or dialects such as Breton, Gascon, Basque, Catalan, Flemish, Alsatian, and Corsican. Even in French-speaking areas, provincial loyalties often transcended the putative bond of the nation. Between 1870 and 1914, Weber argued, a number of new forces penetrated the previously isolated countryside. These included the judicial and school systems, the army, the church, railways, roads, and a market economy. The result was the wholesale transformation of the population from "peasants," basically ignorant of the wider nation, to Frenchmen. His book Apocalypses: Prophecies, Cults, and Millennial Beliefs through the Ages chronicles "apocalyptic visions and prophecies from Zarathustra to yesterday ... .
beginning with the ancients of the West and the Orient and, especially ... the Jews and earliest Christians," finding that "an absolute belief in the end of time, when good would do final battle with evil, was omnipresent," inspiring "Crusades, scientific discoveries, works of art, voyages such as those of Columbus, rebellions" and reforms including American abolitionism. Weber proclaimed in The Western Tradition lectures of 1989: "... here we are at the end of the 20th century with a lot of people lonely in a Godless world—and now they are denied not only God but the solid substance of judgment and perception". "The world has always been disgracefully managed but now you no longer know to whom to complain." After traversing the whole spectrum of western thought, tradition, civilization, and progress in The Western Tradition, Weber pointed to some of the profound ancient lessons of the Bible and lamented the fact that many people today do not read it themselves. As an agnostic, Weber viewed the Bible primarily as an important piece of historical literature, calling it "the epitome of wisdom, violence, high aspiration, and the hurtful achievements of mankind". He concluded his final lecture in the Western Tradition series by praising Western man as Promethean, and then with Wordsworth's poetic phrase, "we feel that we are greater than we know." A 2010 biography by Stanford Franklin, "Eugen Weber The Greatest Historian of our Times: Lessons of Greatness to the Future", presents Weber's life and works in grandiose terms as those of the greatest modern historian.
Quantifying cause-related mortality by weighting multiple causes of death Abstract Objective To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. Introduction Good understanding of mortality data is essential for developing and evaluating health policies. The causes of any death are usually reported on parts I and II of a death certificate, in accordance with the international form presented in the International classification of diseases and related health problems, tenth revision (ICD-10), 1 and data are usually collected in a standardized and consistent way. 2 In part I, the physician describes the causal sequence of events that led directly to the death. In part II, the physician can report any other significant morbid condition but only if that condition may have contributed to the death. Generally, cause-of-death statistics are derived from the so-called underlying cause of death in a process hereafter referred to as the classic method. 3 The World Health Organization (WHO) defines the underlying cause of death as "the disease or injury which initiated the train of morbid events leading directly to death or the circumstances of the accident or violence which produced the fatal injury". 1 However, deaths are often caused by more than one disease. Moreover, in a world characterized by an ageing population and decreasing mortality and fertility, death due to infectious disease is progressively being replaced by death due to chronic and degenerative diseases. As a result, the classic method discards potentially useful information about the contribution of other conditions to a death. 
Today, analysis of mortality data increasingly uses a multiple-cause-of-death approach, 3,4 defined as any statistical treatment that simultaneously considers more than one of the causes of death reported on a death certificate. In particular, such approaches have been used to recalculate mortality attributable to specific conditions. In practice, when cause-specific mortality is re-evaluated to take into account multiple causes of death, the number of mentions of a specific cause is usually considered; here the statistical unit is the cause of death rather than the death itself, which raises serious questions about interpretation. For example, studies examining the influence of several diseases on mortality may count a single death two or more times if two or more causes of death are mentioned on the certificate. The resulting apparent increase in mortality could yield an artificial increase in statistical power and possibly result in misleading inferences. An additional problem is that each cause of death mentioned on a certificate is given an equal weight, even though each individual contribution may not have been equally important; the relative importance of each cause of death is not considered. In this study, we investigated an experimental approach that assigns a weight to each cause of death listed on a death certificate by analysing French death certificate data using three multiple-cause-of-death weighting methods. This approach conceptualizes death as the outcome of a mixture of conditions, as we described elsewhere. 13 Consequently, each death contributes only a fraction, rather than a unit, when calculating standardized mortality rates for each cause of death; the fraction depends on the weight assigned. The approach accepts that multiple factors may contribute to a death but also reflects the relative contribution of each cause of death. 13 Use of a multiple-cause-of-death weighting approach could help us identify conditions whose contribution to mortality is underestimated by the classic method.
Methods We examined data on all deaths reported in France during 2010. We had access to information on all the causes of death declared on death certificates, including the underlying cause of death, as coded using the ICD-10 by CépiDc-Inserm, the epidemiology centre on medical causes of death of the French National Institute for Health and Medical Research. We used the 2012 version of the European shortlist for causes of death to analyse mortality by cause-of-death category, 14 though the list was modified slightly for the analysis. In addition, we removed codes for causes of death that were not relevant to our study, such as those that did not refer to diseases but rather to: (i) risk factors; (ii) family history; (iii) socioeconomic and psychosocial circumstances; and (iv) injury or poisoning or other external causes of death (i.e. ICD-10 cause-of-death codes beginning with S, T, U or Z, which relate to chapters XIX, XXI and XXII). Of note, none of these causes was designated an underlying cause of death. First, we classified the data using cause-of-death categories and determined whether each cause was reported as an underlying or a contributory cause. We also examined the number of causes reported on each death certificate, whether in both parts of the certificate or only in part II. Then we calculated age- and sex-standardized mortality rates for each cause-of-death category using: (i) the classic method, which considered only the underlying cause of death; and (ii) three multiple-cause-of-death weighting methods that assigned a weight to each cause of death, as described below. For the analysis, we used the Eurostat Europe and European Free Trade Association standard population for 2013. 15 All analyses were performed using SAS v. 9.3 (SAS Institute Inc., Cary, United States of America). Multiple-cause weighting The first multiple-cause-of-death weighting method, MCW 1, attributes an equal weight to each cause of death reported on a death certificate. Thus, if cause c is mentioned on certificate i, on which a total of $n_i$ causes are reported, the weight attributed to cause c, $w_{c,i}$, is given by: $w_{c,i} = 1/n_i$. Here, the underlying cause is not given a greater weight than other causes. The second weighting method, MCW 2, attributes a weight $w_{UC}$ to the disease selected as the underlying cause of death, with $w_{UC}$ having a fixed value between 0 and 1. The total remaining weight (i.e. $1 - w_{UC}$) is distributed among all other causes of death mentioned on the certificate (i.e. $n_i - 1$). Hence, the weight attributed to cause c on certificate i, $w_{c,i}$, is given by: $w_{c,i} = w_{UC}$ if c is the underlying cause, and by: $w_{c,i} = (1 - w_{UC})/(n_i - 1)$ if c is not the underlying cause. With the classic method, $w_{UC} = 1$: the death is wholly attributed to the underlying cause regardless of other causes mentioned on the certificate. In contrast, the first two weighting methods enable all diseases mentioned on the death certificate to be included in the analysis. Although the attributed value of $w_{UC}$ is subjective, so is choosing $w_{UC}$ to be 1. Therefore, the effect of different choices of $w_{UC}$ should be examined in a sensitivity analysis.
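As an illustration (not part of the original study), the MCW 1 and MCW 2 schemes can be sketched in a few lines of Python. The input layout, a list of distinct ICD-10 codes plus the underlying cause, is an assumption for demonstration only, not the CépiDc file format.

# Minimal sketch of the MCW 1 and MCW 2 weighting schemes for one death
# certificate. The input format (list of distinct causes plus the underlying
# cause) is an illustrative assumption, not the actual CépiDc data layout.

def mcw1_weights(causes):
    """MCW 1: every cause mentioned gets an equal weight 1/n_i."""
    n = len(causes)
    return {c: 1.0 / n for c in causes}

def mcw2_weights(causes, underlying, w_uc=0.5):
    """MCW 2: the underlying cause gets w_UC; the remaining 1 - w_UC is
    split equally among the other causes. If only one cause is reported,
    it is necessarily the underlying cause and gets weight 1."""
    n = len(causes)
    if n == 1:
        return {underlying: 1.0}
    weights = {c: (1.0 - w_uc) / (n - 1) for c in causes if c != underlying}
    weights[underlying] = w_uc
    return weights

# Example: a certificate mentioning three causes, underlying cause "I21".
causes = ["I21", "E11", "N18"]
print(mcw1_weights(causes))          # each weight = 1/3
print(mcw2_weights(causes, "I21"))   # 0.5 for I21, 0.25 for the others
# In both schemes the weights on a certificate sum to 1, so each death
# still counts exactly once in the mortality rates.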
In our analysis, we set $w_{UC}$ equal to 0.5 to give a good illustration of the impact of the weighting method on standardized mortality rates. Choosing an intermediate weight between 0.5 and 1 would lead to mortality rates between those based on the classic method and those based on a weighting method with $w_{UC}$ set to 0.5. The third weighting method, MCW 3, is similar to the second except that all causes of death mentioned in part I of the death certificate other than the underlying cause are given a weight of zero. Hence, the weight attributed to cause c on certificate i, $w_{c,i}$, is given by: $w_{c,i} = w_{UC}$ if c is the underlying cause, by: $w_{c,i} = 0$ if c is mentioned in part I and is not the underlying cause, and by: $w_{c,i} = (1 - w_{UC})/n_{II,i}$ if c is mentioned in part II and is not the underlying cause, where $w_{UC}$ is the weight attributed to the underlying cause of death and $n_{II,i}$ is the number of causes reported on part II of the death certificate (apart from the underlying cause if it is reported on part II, as could occur with some ICD-10 coding rules). The aim of this approach was to take into account the underlying cause of death and only other causes of death that were regarded as being on a different causal pathway from the main morbid process initiated by the underlying cause. Studying separate disease processes in this way is more meaningful from a causal perspective. For both the MCW 2 and MCW 3 methods, when only one cause is reported, that cause is necessarily the underlying cause and its weight $w_{c,i}$ is 1. In addition, with all three weighting methods, the sum of the weights for all the different causes of death on each death certificate is 1. Moreover, the sum of the weights across individuals equals the total number of deaths. Consequently, each death has an equal influence on mortality estimates. Table 1 illustrates how the classic method and the three weighting methods are applied (additional examples are available from the corresponding author on request). After we assigned weights to each cause of death on each death certificate using a weighting method, we calculated age- and sex-standardized mortality rates for each cause. First, the sum of the weights attributed to cause c mentioned on death certificates across all individuals i was computed for specific age (a) and sex (s) groups: $W_{c,a,s} = \sum_{i \in (a,s)} w_{c,i}$, where $w_{c,i}$ is the weight attributed to cause c on the certificate of individual i. Then, the standardized mortality rate for cause c was obtained as: $R_c = \sum_{a,s} \frac{W_{c,a,s}}{pop_{a,s}} \cdot \frac{pop_{a,s}^{std}}{\sum_{a,s} pop_{a,s}^{std}}$, where $R_c$ is the standardized mortality rate, and $pop_{a,s}^{std}$ and $pop_{a,s}$ are the numbers of individuals of age a and sex s (by 5-year age group and sex) in the standard population and in the French population, 16 respectively. Finally, for each cause of death, we calculated the change in the standardized mortality rate derived using each weighting method relative to the corresponding rate obtained using the classic method, both overall and by age group and sex. Results In total, 552 571 deaths were reported in France in 2010. On average, 3.4 causes of death were mentioned on each death certificate (standard deviation: 1.92; median: 3; interquartile range: 2 to 4). The variation in the mean number of causes of death by age was low: it varied between 3.2 and 3.6 per individual over the age range 55 to 93 years, within which 80% of deaths occurred (Fig. 1). However, the mean was lower in individuals aged 15 to 35 years, varying between 2.6 and 3.1 causes per certificate.
Some categories of the underlying cause of death appeared more frequently than others on certificates that mentioned a high number of causes: a high mean number of causes was associated with conditions in the categories of musculoskeletal diseases, skin diseases, endocrine and nutritional diseases and blood diseases (Table 2). Moreover, when one of these conditions was mentioned as the underlying cause of death, the ratio of the number of mentions of the condition to the number of mentions as the underlying cause was also high. However, the category symptoms, signs and ill-defined causes was associated with the highest ratio and with the lowest mean number of causes reported. Here, we report mainly our findings with the MCW 3 method, which are the easiest to interpret and the most interesting. We found that the increase in the standardized mortality rate derived using this method relative to the classic method exceeded 20% in five cause-of-death categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases (Table 3). The overall increase in the standardized mortality rate we observed for mental disorders was due in large part to increases in specific subcategories: for other mental and behavioural disorders the increase was 112% and for alcohol abuse (including alcoholic psychosis) it was 43% (Table 4; available at: http://www.who.int/bulletin/volumes/94/121/16-172189). The increases for drug dependence and toxicomania (28%) and dementia (12%) were smaller. Notable increases were also observed in other disease subcategories: rheumatoid arthritis and osteoarthrosis increased by 44%, other diseases of the circulatory system by 19% and viral hepatitis by 19%. There was either no change or a small decrease in the standardized mortality rate in categories such as diseases of the circulatory system, diseases of the respiratory system and perinatal diseases. However, as expected, our analysis found a decrease in the contribution of conditions that are almost systematically specified as the underlying cause of death: for example, external causes of morbidity and mortality, neoplasms, congenital malformations and digestive system diseases. These decreases were most marked with the MCW 1 method (Table 3), particularly when the number of other causes of death mentioned was high, because this method does not attribute a greater weight to the underlying cause relative to other causes. In addition, the MCW 3 method also enabled us to highlight the increase in the mortality burden associated with certain conditions in specific age groups. For example, the increase in the standardized mortality rate derived using the MCW 3 method relative to the classic method was as high as 48% for endocrine and nutritional diseases in people aged 60 to 69 years. The increase was very small in those aged 0 to 34 years, large in those aged 35 to 74 years and smaller again in those 75 years of age or older (Fig. 2). For mental disorders, the increase in mortality burden was much greater for people aged 0 to 34 years and 35 to 74 years than for those aged 75 years or older (Fig. 3). The increase in mortality burden for rheumatoid arthritis and osteoarthrosis was greatest in people 75 years of age or older (Fig. 4).
Analysing mortality data by sex using the MCW 3 method did not reveal any other increases in the mortality burden associated with particular conditions in addition to those already identified in the overall analysis. Similar increases were observed for men and for women with the MCW 3 method relative to the classic method, except for mental disorders, where the increase was 40% in men and 27% in women, and for genitourinary diseases, where it was 29% and 15%, respectively. Discussion Our analysis of all death certificates in France for 2010, in which we used three multiple-cause-of-death weighting methods to derive standardized mortality rates, aimed to provide a better estimate of the actual causes of death than the classic method. In particular, we confirmed the findings of previous studies that some conditions that are rarely designated as the underlying cause of death actually make a substantial contribution to mortality: namely, diabetes, 3,17,18 skin disease, blood disease 9,19 and renal disease. 1,3,7 However, as previously observed, 3 the increase in the standardized mortality rate we found for each condition varied widely with the disease category. In contrast, other conditions that we revealed to have contributed more to mortality than previously recognized were little mentioned in the literature, such as mental disorders 12 and diseases of the musculoskeletal system, especially rheumatoid arthritis and osteoarthrosis. 20 Moreover, application of the MCW 3 method showed that the contribution of certain conditions to mortality varied even in young people: in particular, mental disorders contributed more in young people than indicated by the classic method. The contribution of conditions in other disease categories, such as diseases of the circulatory system, was found to be unaffected, or only slightly affected, by application of the MCW 3 method, which again confirmed literature findings. 3 In contrast to published results, 3 we found that the contribution to mortality of some conditions, for example influenza, was less than indicated by the classic method. In particular, the contribution of conditions in the category external causes of death was much less. Although this finding may be surprising at first, it reflects the possibility that, even when the underlying cause of death was categorized as an external cause of death, the physician thought some other condition contributed to the death and chose to mention it on the death certificate. One limitation shared by all studies on multiple causes of death is that data quality and comparability are not perfect, and numerous studies have tried to identify the flaws. In addition, the numerous coding rules and the multiplicity and complexity of possible disease combinations listed on a death certificate could lead to misinterpretations. Nevertheless, mortality databases are essential for monitoring public health and all attempts to improve their use should be welcomed, especially those taking into account multiple causes of death. The weighting approach described in our study could help clarify the impact of various conditions on mortality in countries that collect multiple-cause-of-death data. For other countries, the existence of weighting methods could encourage a more systematic approach to the collection of data on multiple causes of death.
Another limitation is that the MCW 3 method takes into account only the contributing causes of death mentioned in part II of the death certificate (in addition to the underlying cause) that are regarded as being on different causal pathways from the main morbid process. However, this assumption is correct only if the death certificate is properly completed, which may not be certain. Moreover, some information is lost by not attributing weights to all causes of death listed in part I. The MCW 3 method may be less appropriate when the research question concerns a complication of a disease rather than the disease itself. Furthermore, when researchers are investigating a specific topic, the set of disease codes considered when implementing a weighting method can be adapted: for example, a study on the external causes of death could include ICD-10 cause-of-death codes that refer to types of injury or poisoning (i.e. codes beginning with S and T), which were excluded in the present study. Although we studied standardized mortality rates, the weighting method could also be applied in other ways. For instance, some policy-makers may be more interested in the crude number of deaths. To date, we have not estimated the statistical variance of the indicators obtained using a weighting method. This may be a problem if a study is comparing mortality distributions between, for instance, several locations. One solution would be to use a nonparametric bootstrap approach. However, as our analysis considered a large number of deaths, sampling variability should not affect the interpretation of the results. The main limitation of our study is that the process of weighting multiple causes of death provides only a synthetic view of the causal process by which diseases act together to bring about death. 13 Consequently, the values given to the weights are subjective, and weighting methods could be used to carry out a sensitivity analysis that takes into account different possibilities. In the future, the assignment of weights to items listed on a death certificate could be done by international consensus. Research is needed to determine the value of the weights that should be attributed to the different causes of death contributing to a death, although this process may also be based on a subjective view of how causal responsibility is distributed among different causes of death. 26 Further, this process would require large longitudinal databases that record pathological conditions and health events over time. Finally, it would be useful to have international rules that assign a specific role to each cause of death mentioned on a death certificate. In particular, the weight given to ill-defined causes of death and cardiac arrest should probably be smaller than that given to other causes. These international rules could also help to systematically distinguish causes of death on separate causal pathways. Moreover, death certification by physicians should be standardized both within and between countries to improve the comparability of the statistics obtained. In conclusion, although it is valuable to know the underlying cause of death, the contribution of other possible causes of death listed on a death certificate should not be neglected. The multiple-cause-of-death weighting methods we used in this study to assess the contribution of different conditions to mortality are promising.
Previously, we applied a similar weighting approach to study the burden of mortality, and the etiological processes, associated with individual diseases using survival regression models. 13
BOSTON (AP) — The Massachusetts Department of Transportation is recommending a new way for drivers, and others, to open car doors to protect bicyclists. The department announced Tuesday it has added the door-opening technique known as the “Dutch Reach” to its driver’s manual. The technique requires motorists to use their right hand to open a car door. The idea is to force drivers to turn their bodies, a motion that will help them see oncoming bicycles. Getting “doored” — crashing into a door that is thrown open just as a bicyclist is nearing the car — can result in injury or death for bicyclists, especially in urban areas. Massachusetts transportation officials have posted a one-minute video online explaining the maneuver, the preferred method for opening car doors in the Netherlands, hence the name.
package provider

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/zedge/kubecd/pkg/model"
)

func TestGetClusterProvider(t *testing.T) {
	type testCase struct {
		name                 string
		cluster              *model.Cluster
		expectedProviderType interface{}
	}
	for _, tc := range []testCase{
		{"gke", &model.Cluster{Provider: model.Provider{GKE: &model.GkeProvider{}}}, &GkeClusterProvider{}},
		{"aks", &model.Cluster{Provider: model.Provider{AKS: &model.AksProvider{}}}, &AksClusterProvider{}},
		{"docker", &model.Cluster{Provider: model.Provider{DockerForDesktop: &model.DockerForDesktopProvider{}}}, &DockerForDesktopClusterProvider{}},
		{"minikube", &model.Cluster{Provider: model.Provider{Minikube: &model.MinikubeProvider{}}}, &MinikubeClusterProvider{}},
	} {
		t.Run(tc.name, func(t *testing.T) {
			// With the second argument false, the provider type should match the cluster spec.
			cp, err := GetClusterProvider(tc.cluster, false)
			assert.NoError(t, err)
			assert.IsType(t, tc.expectedProviderType, cp)
			// With the second argument true, every cluster resolves to the GitLab provider.
			cp, err = GetClusterProvider(tc.cluster, true)
			assert.NoError(t, err)
			assert.IsType(t, &GitlabClusterProvider{}, cp)
		})
	}
}

func TestGetContextInitCommands(t *testing.T) {
	env := &model.Environment{Name: "test", KubeNamespace: "default"}
	minikube := &MinikubeClusterProvider{}
	cmds := GetContextInitCommands(minikube, env)
	assert.Equal(t, [][]string{{"kubectl", "config", "set-context", "env:test",
		"--cluster", "minikube", "--user", "minikube", "--namespace", "default"}}, cmds)
}
Slate is working with the anonymous operator of the @GunDeaths Twitter feed to track gun deaths since the December 14 shootings in Newtown, Connecticut. The Rochester school board will meet at 6 p.m. on Wednesday, January 2, to elect a board president. West Webster coverage: too much? The federal Bureau of Alcohol, Tobacco, Firearms, and Explosives has managed to trace the guns that William Spengler Jr. used in his assault on Webster firefighters. Several years ago, students staged a mini revolt over the food they were being served in the Rochester school district. To prove their point, they brought a hamburger from a district cafeteria to a school board meeting, but couldn't find a board member willing to take a bite. For Albion residents and government officials, the Orleans Sanitary Landfill has been an on-and-off source of controversy. And right now, with a proposal to build a new landfill on the site, the controversy is definitely on. Recreation: Get some fresh air this weekend by participating in one of the Genesee Valley Hiking Club’s hikes. On Saturday, December 29, at 11 a.m., meet at the Durand golf course lot for a moderate 6-7 mile hike. If conditions permit, snowshoeing is allowed, but skiing is prohibited. The event is free. For more information, call 323-1911 or visit gvhchikes.org. On Sunday, December 30, at 1 p.m., meet at Cobb’s Hill Norris Drive lot for a moderate 3-mile hike. For more info, call 254-4047 or visit gvhchikes.org. Recreation: It’s winter, and we’ve finally got the precipitation to show for it. Take advantage of the recent sky dump by participating in the Moonlight Snowshoe at Helmer Nature Center (154 Pinegrove Ave., Irondequoit) tonight, 7-9 p.m. The cost to participate is $5-$7, and this event is adults-only. For more information, call 336-3035 or visit westirondequoit.org/helmer.htm. Music: Even if you were far away from home or friends or family this holiday season, you still have the chance to be almost home, or at least close to it, tonight. Check out Close To Home (with We Are Defiance opening up) at Water Street Music Hall (204 N. Water St., waterstreetmusic.com) this afternoon at 4:30 p.m. Tickets cost $10. Music: It’s always great to see local musicians coming together for a good cause, and tonight you have a chance to give back at X-Fest, a benefit concert for the victims of the Newtown tragedy at California Brew Haus (402 Ridge Road West). Over ten acts will perform; music starts at 8 p.m. and admission is $5, with all proceeds going to the cause. THEATER | "The Man In Black"/"My Gal Patsy" THEATER | "An Evening Of Andrew Lloyd Webber" MOVIE REVIEW: "The Guilt Trip"
T-Connector Modification for Reducing Recurrent Distal Shunt Failure: Report of 2 Cases. BACKGROUND AND IMPORTANCE Cerebrospinal fluid shunt placement is used to treat the various causes of hydrocephalus by redirecting the cerebrospinal fluid elsewhere in the body, most commonly from the ventricle to the peritoneum. Distal catheter displacement from the peritoneal cavity can occur as a complication, necessitating reoperation. CLINICAL PRESENTATION We report 2 such cases in obese patients involving retropulsion of the distal tubing. To address this complication, we added a T-connector to the distal catheter construct. CONCLUSION This report supports the use of a T-connector catheter construct, both prophylactically and in revisions, to reduce the risk of distal peritoneal catheter retropulsion in cases of elevated intra-abdominal pressure.
"And what I said, frankly, is what I said. And some people like what I said, if you want to know the truth." Donald Trump said in a radio interview on Wednesday that he doesn't regret calling Sen. John McCain, who was captured and held prisoner during the Vietnam War, "not a war hero." Last July, Trump said of McCain: "He's not a war hero. He's a war hero because he was captured. I like people who weren't captured." Appearing on the Imus in the Morning, Trump was asked if he would apologize to veterans, as McCain has recently requested. "Well I've actually done that, Don," Trump replied. "You know frankly, I like John McCain, and John McCain is a hero. Also, heroes are people that are, you know, whether they get caught or don't get caught — they're all heroes as far as I'm concerned. And that's the way it should be." "So do you regret saying that?" asked Imus. "I don't, you know — I like not to regret anything," Trump said. "You do things and you say things. And what I said, frankly, is what I said. And some people like what I said, if you want to know the truth. There are many people that like what I said. You know after I said that, my poll numbers went up seven points." "You understand that, I mean, some people liked what I said," added Trump. "I like John McCain, in my eyes John McCain is a hero. John McCain's a good guy." Imus said someone like Trump, who got multiple Vietnam War draft deferments, shouldn't be criticizing someone like McCain. "I understand that. Well, I was going to college, I had student deferments. I also got a great lottery number," Trump said.
<reponame>xuuuuuuchen/PASTA import tensorflow as tf def Combining_Affine_Para(tensors): imgs = tensors[0] array = tensors[1] n_batch = tf.shape(imgs)[0] tx = tf.squeeze(tf.slice(array, [0,0], [n_batch, 1]), 1) ty = tf.squeeze(tf.slice(array, [0,1], [n_batch, 1]), 1) sin0 = tf.squeeze(tf.slice(array, [0,2], [n_batch, 1]),1) cos0 = tf.sqrt(1.0-tf.square(sin0)) sx = tf.squeeze(tf.slice(array, [0,3], [n_batch, 1]), 1) sy = tf.squeeze(tf.slice(array, [0,4], [n_batch, 1]), 1) cx = tf.squeeze(tf.slice(array, [0,5], [n_batch, 1]), 1) cy = tf.squeeze(tf.slice(array, [0,6], [n_batch, 1]), 1) """ CORE """ x1 = cos0 * cx - sin0 * cx * sx x2 = sin0 * cx + cos0 * cx * sx x3 = cos0 * cy * sy - sin0 * cy x4 = sin0 * cy * sy + cos0 * cy x5 = tx x6 = ty x1 = tf.expand_dims(x1, 1) x2 = tf.expand_dims(x2, 1) x3 = tf.expand_dims(x3, 1) x4 = tf.expand_dims(x4, 1) x5 = tf.expand_dims(x5, 1) x6 = tf.expand_dims(x6, 1) # print(" >>>>>>>>>> x1: "+str(x1.shape)) # print(" >>>>>>>>>> x2: "+str(x2.shape)) # print(" >>>>>>>>>> x3: "+str(x3.shape)) # print(" >>>>>>>>>> x4: "+str(x4.shape)) # print(" >>>>>>>>>> x5: "+str(x5.shape)) # print(" >>>>>>>>>> x6: "+str(x6.shape)) array = tf.concat([x1,x2,x5,x3,x4,x6], 1) # print(" >>>>>>>>>> matrix: "+str(array.shape)) return array def Combining_Affine_Para3D(tensors): imgs = tensors[0] array = tensors[1] n_batch = tf.shape(imgs)[0] tx = tf.squeeze(tf.slice(array, [0,0], [n_batch, 1]), 1) ty = tf.squeeze(tf.slice(array, [0,1], [n_batch, 1]), 1) tz = tf.squeeze(tf.slice(array, [0,2], [n_batch, 1]), 1) sin0x = tf.squeeze(tf.slice(array, [0,3], [n_batch, 1]),1) sin0y = tf.squeeze(tf.slice(array, [0,4], [n_batch, 1]),1) sin0z = tf.squeeze(tf.slice(array, [0,5], [n_batch, 1]),1) cos0x = tf.sqrt(1.0-tf.square(sin0x)) cos0y = tf.sqrt(1.0-tf.square(sin0y)) cos0z = tf.sqrt(1.0-tf.square(sin0z)) shxy = tf.squeeze(tf.slice(array, [0,6], [n_batch, 1]), 1) shyx = tf.squeeze(tf.slice(array, [0,7], [n_batch, 1]), 1) shxz = tf.squeeze(tf.slice(array, [0,8], [n_batch, 1]), 1) shzx = tf.squeeze(tf.slice(array, [0,9], [n_batch, 1]), 1) shzy = tf.squeeze(tf.slice(array, [0,10], [n_batch, 1]), 1) shyz = tf.squeeze(tf.slice(array, [0,11], [n_batch, 1]), 1) scx = tf.squeeze(tf.slice(array, [0,12], [n_batch, 1]), 1) scy = tf.squeeze(tf.slice(array, [0,13], [n_batch, 1]), 1) scz = tf.squeeze(tf.slice(array, [0,14], [n_batch, 1]), 1) """ CORE """ x1 = -shxz*scx*sin0y+scx*cos0y*cos0z+shxy*scx*cos0y*sin0z x2 = shxz*scx*sin0x*cos0y+shxy*scx*cos0x*cos0z +scx*sin0x*sin0y*cos0z-scx*cos0x*sin0z+shxy*scx*sin0x*sin0y*sin0z x3 = shxz*scx*cos0x*cos0y-shxy*scx*sin0x*cos0z+scx*cos0x*sin0y*cos0z+scx*sin0x*sin0z+shxy*scx*cos0x*sin0y*sin0z x4 = scy*cos0y*sin0z+scy*cos0y*cos0z*shyx-scy*sin0y*shyz x5 = scy*cos0x*cos0z+scy*sin0x*sin0y*sin0z+scy*sin0x*sin0y*cos0z*shyx-scy*cos0x*sin0z*shyx+scy*sin0x*cos0y*shyz x6 = -scy*sin0x*cos0z+scy*cos0x*sin0y*sin0z+scy*cos0x*sin0y*cos0z*shyx+scy*sin0x*sin0z*shyx+scy*cos0x*cos0y*shyz x7 = -scz*sin0y+shzx*scz*cos0y*cos0z+shzy*scz*cos0y*sin0z x8 = scz*sin0x*cos0y+shzy*scz*cos0x*cos0z+shzx*scz*sin0x*sin0y*cos0z-shzx*scz*cos0x*sin0z+shzy*scz*sin0x*sin0y*sin0z x9 = scz*cos0x*cos0y-shzy*scz*sin0x*cos0z+shzx*scz*cos0x*sin0y*cos0z+shzx*scz*sin0x*sin0z+shzy*scz*cos0x*sin0y*sin0z x10 = tx x11 = ty x12 = tz x1 = tf.expand_dims(x1, 1) x2 = tf.expand_dims(x2, 1) x3 = tf.expand_dims(x3, 1) x4 = tf.expand_dims(x4, 1) x5 = tf.expand_dims(x5, 1) x6 = tf.expand_dims(x6, 1) x7 = tf.expand_dims(x7, 1) x8 = tf.expand_dims(x8, 1) x9 = tf.expand_dims(x9, 1) x10 = tf.expand_dims(x10, 1) x11 = 
tf.expand_dims(x11, 1) x12 = tf.expand_dims(x12, 1) array = tf.concat([ x1,x2,x3,x10, x4,x5,x6,x11, x7,x8,x9,x12,], 1) # print(" >>>>>>>>>> matrix: "+str(array.shape)) return array def affine_flow(tensors): # print(" >>>>>>>>>> affine_flow_layer") imgs = tensors[0] array = tensors[1] # print(" >>>>>>>>>> imgs: "+str(imgs.shape)) # print(" >>>>>>>>>> array: "+str(array.shape)) n_batch = tf.shape(imgs)[0] xlen = tf.shape(imgs)[2] ylen = tf.shape(imgs)[3] grids = batch_mgrid(n_batch, xlen, ylen) coords = tf.reshape(grids, [n_batch, 2, -1]) theta = tf.reshape(array, [-1, 2, 3]) matrix = tf.slice(theta, [0, 0, 0], [-1, -1, 2]) t = tf.slice(theta, [0, 0, 2], [-1, -1, -1]) T_g = tf.matmul(matrix, coords) + t T_g = tf.reshape(T_g, [n_batch, 2, xlen, ylen]) output = batch_warp2d(imgs, T_g) return output def affine_flow_3D(tensors): imgs = tensors[0] array = tensors[1] print(imgs.shape) n_batch = tf.shape(imgs)[0] xlen = tf.shape(imgs)[2] ylen = tf.shape(imgs)[3] zlen = tf.shape(imgs)[4] grids = batch_mgrid(n_batch, xlen, ylen, zlen) grids = tf.reshape(grids, [n_batch, 3, -1]) theta = tf.reshape(array, [-1, 3, 4]) matrix = tf.slice(theta, [0, 0, 0], [-1, -1, 3]) t = tf.slice(theta, [0, 0, 3], [-1, -1, -1]) T_g = tf.matmul(matrix, grids) + t T_g = tf.reshape(T_g, [n_batch, 3, xlen, ylen, zlen]) output = batch_warp3d(imgs, T_g) return output def affine_flow_output_shape(input_shapes): shape1 = list(input_shapes[0]) return (shape1[0],1,shape1[2],shape1[3]) def affine_flow_3D_output_shape(input_shapes): shape1 = list(input_shapes[0]) return (shape1[0], shape1[2],shape1[3],shape1[4]) def batch_mgrid(n_batch, *args, **kwargs): """ create batch of orthogonal grids similar to np.mgrid Parameters ---------- n_batch : int number of grids to create args : int number of points on each axis low : float minimum coordinate value high : float maximum coordinate value Returns ------- grids : tf.Tensor [n_batch, len(args), args[0], ...] batch of orthogonal grids """ grid = mgrid(*args, **kwargs) grid = tf.expand_dims(grid, 0) grids = tf.tile(grid, [n_batch] + [1 for _ in range(len(args) + 1)]) return grids def mgrid(*args, **kwargs): """ create orthogonal grid similar to np.mgrid Parameters ---------- args : int number of points on each axis low : float minimum coordinate value high : float maximum coordinate value Returns ------- grid : tf.Tensor [len(args), args[0], ...] 
orthogonal grid """ low = kwargs.pop("low", -1) high = kwargs.pop("high", 1) low = tf.to_float(low) high = tf.to_float(high) coords = (tf.linspace(low, high, arg) for arg in args) grid = tf.stack(tf.meshgrid(*coords, indexing='ij')) return grid def batch_warp2d(imgs, mappings): n_batch = tf.shape(imgs)[0] coords = tf.reshape(mappings, [n_batch, 2, -1]) x_coords = tf.slice(coords, [0, 0, 0], [-1, 1, -1]) y_coords = tf.slice(coords, [0, 1, 0], [-1, 1, -1]) x_coords_flat = tf.reshape(x_coords, [-1]) y_coords_flat = tf.reshape(y_coords, [-1]) # print(" >>>>>>>>>> imgs: "+str(imgs.shape)) # imgs = tf.transpose(imgs,[0,2,3,1]) # print(" >>>>>>>>>> imgs: "+str(imgs.shape)) output = _interpolate2d(imgs, x_coords_flat, y_coords_flat) return output def batch_warp3d(imgs, mappings): n_batch = tf.shape(imgs)[0] coords = tf.reshape(mappings, [n_batch, 3, -1]) x_coords = tf.slice(coords, [0, 0, 0], [-1, 1, -1]) y_coords = tf.slice(coords, [0, 1, 0], [-1, 1, -1]) z_coords = tf.slice(coords, [0, 2, 0], [-1, 1, -1]) x_coords_flat = tf.reshape(x_coords, [-1]) y_coords_flat = tf.reshape(y_coords, [-1]) z_coords_flat = tf.reshape(z_coords, [-1]) output = _interpolate3d(imgs, x_coords_flat, y_coords_flat, z_coords_flat) return output def batch_displacement_warp3d(tensors): print(" batch_displacement_warp3d") imgs = tensors[0] vector_fields = tensors[1] print(" >>>>>>>>>> imgs: "+str(imgs.shape)) print(" >>>>>>>>>> vector_fields: "+str(vector_fields.shape)) n_batch = tf.shape(imgs)[0] xlen = tf.shape(imgs)[2] ylen = tf.shape(imgs)[3] grids = batch_mgrid(n_batch, xlen, ylen) print(" >>>>>>>>>> grids: "+str(grids.shape)) # T_g = grids + vector_fields # print(" >>>>>>>>>> T_g: "+str(T_g.shape)) output = batch_warp3d(imgs, vector_fields) return output def warp_3d_layer_output_shape(input_shapes): shape1 = list(input_shapes[0]) # print(" >>>>>>>>>> shape1: "+str(tuple(shape1).shape)) return tuple(shape1) def _interpolate2d(imgs, x, y): # print("interpolate2d") n_batch = tf.shape(imgs)[0] n_channel = tf.shape(imgs)[1] xlen = tf.shape(imgs)[2] ylen = tf.shape(imgs)[3] x = tf.cast(x, tf.float32) y = tf.cast(y, tf.float32) xlen_f = tf.cast(xlen, tf.float32) ylen_f = tf.cast(ylen, tf.float32) zero = tf.zeros([], dtype='int32') max_x = tf.cast(xlen - 1, 'int32') max_y = tf.cast(ylen - 1, 'int32') # scale indices from [-1, 1] to [0, xlen/ylen] x = (x + 1.) * (xlen_f - 1.) * 0.5 y = (y + 1.) * (ylen_f - 1.) * 0.5 # do sampling x0 = tf.cast(tf.floor(x), 'int32') x1 = x0 + 1 y0 = tf.cast(tf.floor(y), 'int32') y1 = y0 + 1 x0 = tf.clip_by_value(x0, zero, max_x) x1 = tf.clip_by_value(x1, zero, max_x) y0 = tf.clip_by_value(y0, zero, max_y) y1 = tf.clip_by_value(y1, zero, max_y) def _repeat(base_indices, n_repeats): base_indices = tf.matmul( tf.reshape(base_indices, [-1, 1]), tf.ones([1, n_repeats], dtype='int32')) return tf.reshape(base_indices, [-1]) base = _repeat(tf.range(n_batch) * xlen * ylen, ylen * xlen) base_x0 = base + x0 * ylen base_x1 = base + x1 * ylen index00 = base_x0 + y0 index01 = base_x0 + y1 index10 = base_x1 + y0 index11 = base_x1 + y1 # use indices to lookup pixels in the flat image and restore # n_channel dim imgs_flat = tf.reshape(imgs, [-1, n_channel]) imgs_flat = tf.to_float(imgs_flat) I00 = tf.gather(imgs_flat, index00) I01 = tf.gather(imgs_flat, index01) I10 = tf.gather(imgs_flat, index10) I11 = tf.gather(imgs_flat, index11) # and finally calculate interpolated values dx = x - tf.to_float(x0) dy = y - tf.to_float(y0) w00 = tf.expand_dims((1. - dx) * (1. - dy), 1) w01 = tf.expand_dims((1. 
- dx) * dy, 1)
    w10 = tf.expand_dims(dx * (1. - dy), 1)
    w11 = tf.expand_dims(dx * dy, 1)
    output = tf.add_n([w00 * I00, w01 * I01, w10 * I10, w11 * I11])

    # reshape
    output = tf.reshape(output, [n_batch, n_channel, xlen, ylen])
    return output


def _interpolate3d(imgs, x, y, z):
    n_batch = tf.shape(imgs)[0]
    n_channel = tf.shape(imgs)[1]
    xlen = tf.shape(imgs)[2]
    ylen = tf.shape(imgs)[3]
    zlen = tf.shape(imgs)[4]

    x = tf.cast(x, tf.float32)
    y = tf.cast(y, tf.float32)
    z = tf.cast(z, tf.float32)
    xlen_f = tf.cast(xlen, tf.float32)
    ylen_f = tf.cast(ylen, tf.float32)
    zlen_f = tf.cast(zlen, tf.float32)
    zero = tf.zeros([], dtype='int32')
    max_x = tf.cast(xlen - 1, 'int32')
    max_y = tf.cast(ylen - 1, 'int32')
    max_z = tf.cast(zlen - 1, 'int32')

    # scale indices from [-1, 1] to [0, xlen/ylen/zlen - 1]
    x = (x + 1.) * (xlen_f - 1.) * 0.5
    y = (y + 1.) * (ylen_f - 1.) * 0.5
    z = (z + 1.) * (zlen_f - 1.) * 0.5

    # do sampling
    x0 = tf.cast(tf.floor(x), 'int32')
    x1 = x0 + 1
    y0 = tf.cast(tf.floor(y), 'int32')
    y1 = y0 + 1
    z0 = tf.cast(tf.floor(z), 'int32')
    z1 = z0 + 1

    x0 = tf.clip_by_value(x0, zero, max_x)
    x1 = tf.clip_by_value(x1, zero, max_x)
    y0 = tf.clip_by_value(y0, zero, max_y)
    y1 = tf.clip_by_value(y1, zero, max_y)
    z0 = tf.clip_by_value(z0, zero, max_z)
    z1 = tf.clip_by_value(z1, zero, max_z)

    def _repeat(base_indices, n_repeats):
        base_indices = tf.matmul(
            tf.reshape(base_indices, [-1, 1]),
            tf.ones([1, n_repeats], dtype='int32'))
        return tf.reshape(base_indices, [-1])

    base = _repeat(tf.range(n_batch) * xlen * ylen * zlen,
                   xlen * ylen * zlen)
    base_x0 = base + x0 * ylen * zlen
    base_x1 = base + x1 * ylen * zlen
    base00 = base_x0 + y0 * zlen
    base01 = base_x0 + y1 * zlen
    base10 = base_x1 + y0 * zlen
    base11 = base_x1 + y1 * zlen
    index000 = base00 + z0
    index001 = base00 + z1
    index010 = base01 + z0
    index011 = base01 + z1
    index100 = base10 + z0
    index101 = base10 + z1
    index110 = base11 + z0
    index111 = base11 + z1

    # use indices to look up voxels in the flat image and restore
    # the n_channel dim
    imgs_flat = tf.reshape(imgs, [-1, n_channel])
    imgs_flat = tf.cast(imgs_flat, tf.float32)
    I000 = tf.gather(imgs_flat, index000)
    I001 = tf.gather(imgs_flat, index001)
    I010 = tf.gather(imgs_flat, index010)
    I011 = tf.gather(imgs_flat, index011)
    I100 = tf.gather(imgs_flat, index100)
    I101 = tf.gather(imgs_flat, index101)
    I110 = tf.gather(imgs_flat, index110)
    I111 = tf.gather(imgs_flat, index111)

    # and finally calculate interpolated values
    dx = x - tf.cast(x0, tf.float32)
    dy = y - tf.cast(y0, tf.float32)
    dz = z - tf.cast(z0, tf.float32)
    w000 = tf.expand_dims((1. - dx) * (1. - dy) * (1. - dz), 1)
    w001 = tf.expand_dims((1. - dx) * (1. - dy) * dz, 1)
    w010 = tf.expand_dims((1. - dx) * dy * (1. - dz), 1)
    w011 = tf.expand_dims((1. - dx) * dy * dz, 1)
    w100 = tf.expand_dims(dx * (1. - dy) * (1. - dz), 1)
    w101 = tf.expand_dims(dx * (1. - dy) * dz, 1)
    w110 = tf.expand_dims(dx * dy * (1. - dz), 1)
    w111 = tf.expand_dims(dx * dy * dz, 1)
    output = tf.add_n([w000 * I000, w001 * I001, w010 * I010, w011 * I011,
                       w100 * I100, w101 * I101, w110 * I110, w111 * I111])

    # reshape
    output = tf.reshape(output, [n_batch, n_channel, xlen, ylen, zlen])
    return output


def Split_sx(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    sx = tf.slice(array, [0, 0], [n_batch, 1])
    return sx


def Split_sy(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    sy = tf.slice(array, [0, 1], [n_batch, 1])
    return sy


def Split_cx(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    cx = tf.slice(array, [0, 2], [n_batch, 1])
    return cx


def Split_cy(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    cy = tf.slice(array, [0, 3], [n_batch, 1])
    return cy


def Split_theta(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    theta = tf.slice(array, [0, 4], [n_batch, 1])
    return theta


def Split_tx(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    tx = tf.slice(array, [0, 5], [n_batch, 1])
    return tx


def Split_ty(tensors):
    imgs = tensors[0]
    array = tensors[1]
    n_batch = tf.shape(imgs)[0]
    ty = tf.slice(array, [0, 6], [n_batch, 1])
    return ty


"""
x1 = cos0 * cx - sin0 * cx * sx
x2 = sin0 * cy + cos0 * cy * sx
x3 = cos0 * cx * sy - sin0 * cx
x4 = sin0 * cy * sy + cos0 * cy
x5 = tx
x6 = ty
"""


def combine_x1(tensors):
    theta = tensors[0]
    cx = tensors[1]
    sx = tensors[2]
    x1 = tf.cos(theta) * cx - tf.sin(theta) * cx * sx
    return x1


def combine_x2(tensors):
    theta = tensors[0]
    cy = tensors[1]
    sx = tensors[2]
    x2 = tf.sin(theta) * cy + tf.cos(theta) * cy * sx
    return x2


def combine_x3(tensors):
    theta = tensors[0]
    cx = tensors[1]
    sy = tensors[2]
    x3 = tf.cos(theta) * cx * sy - tf.sin(theta) * cx
    return x3


def combine_x4(tensors):
    theta = tensors[0]
    cy = tensors[1]
    sy = tensors[2]
    x4 = tf.sin(theta) * cy * sy + tf.cos(theta) * cy
    return x4


def Mapping_Squeezed_Affine_Para(tensors):
    imgs = tensors[0]
    Squeezed_Affine_Para = tensors[1]
    n_batch = tf.shape(imgs)[0]

    # split the 7 affine parameters out of the [n_batch, 7] tensor
    sx = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 0], [n_batch, 1]), 1)
    sy = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 1], [n_batch, 1]), 1)
    cx = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 2], [n_batch, 1]), 1)
    cy = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 3], [n_batch, 1]), 1)
    theta = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 4], [n_batch, 1]), 1)
    tx = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 5], [n_batch, 1]), 1)
    ty = tf.squeeze(tf.slice(Squeezed_Affine_Para, [0, 6], [n_batch, 1]), 1)

    sx = tf.expand_dims(sx, 1)
    sy = tf.expand_dims(sy, 1)
    cx = tf.expand_dims(cx, 1)
    cy = tf.expand_dims(cy, 1)
    theta = tf.expand_dims(theta, 1)
    tx = tf.expand_dims(tx, 1)
    ty = tf.expand_dims(ty, 1)
    Affine_Para_7 = tf.concat([sx, sy, cx, cy, theta, tx, ty], 1)
    # rescale parameters from [0, 1] to [-1, 1]
    Affine_Para_7 = 2 * Affine_Para_7 - 1
    return Affine_Para_7


def non_linear_warp_2d(tensors):
    imgs = tensors[0]
    vector_fields = tensors[1]
    n_batch = tf.shape(imgs)[0]
    xlen = tf.shape(imgs)[2]
    ylen = tf.shape(imgs)[3]
    grids = non_linear_mgrid(n_batch, xlen, ylen)
    # displace the identity sampling grid by the predicted vector field
    T_g = grids + vector_fields
    output = non_linear_warp2d(imgs, T_g)
    return output


def non_linear_mgrid(n_batch, *args, **kwargs):
    grid = mgrid(*args, **kwargs)  # `mgrid` is defined earlier in this module
    grid = tf.expand_dims(grid, 0)
    grids = tf.tile(grid, [n_batch] + [1 for _ in range(len(args) + 1)])
    return grids


def non_linear_warp2d(imgs, mappings):
    n_batch = tf.shape(imgs)[0]
    coords = tf.reshape(mappings, [n_batch, 2, -1])
    x_coords = tf.slice(coords, [0, 0, 0], [-1, 1, -1])
    y_coords = tf.slice(coords, [0, 1, 0], [-1, 1, -1])
    x_coords_flat = tf.reshape(x_coords, [-1])
    y_coords_flat = tf.reshape(y_coords, [-1])
    output = _interpolate2d(imgs, x_coords_flat, y_coords_flat)
    return output


def _repeat(base_indices, n_repeats):
    base_indices = tf.matmul(
        tf.reshape(base_indices, [-1, 1]),
        tf.ones([1, n_repeats], dtype='int32'))
    return tf.reshape(base_indices, [-1])


def _interpolate2d(imgs, x, y):
    n_batch = tf.shape(imgs)[0]
    xlen = tf.shape(imgs)[2]
    ylen = tf.shape(imgs)[3]
    n_channel = tf.shape(imgs)[1]

    x = tf.cast(x, tf.float32)  # tf.to_float is deprecated
    y = tf.cast(y, tf.float32)
    xlen_f = tf.cast(xlen, tf.float32)
    ylen_f = tf.cast(ylen, tf.float32)
    zero = tf.zeros([], dtype='int32')
    max_x = tf.cast(xlen - 1, 'int32')
    max_y = tf.cast(ylen - 1, 'int32')

    # scale indices from [-1, 1] to [0, xlen/ylen - 1]
    x = (x + 1.) * (xlen_f - 1.) * 0.5
    y = (y + 1.) * (ylen_f - 1.) * 0.5

    # do sampling
    x0 = tf.cast(tf.floor(x), 'int32')
    x1 = x0 + 1
    y0 = tf.cast(tf.floor(y), 'int32')
    y1 = y0 + 1

    x0 = tf.clip_by_value(x0, zero, max_x)
    x1 = tf.clip_by_value(x1, zero, max_x)
    y0 = tf.clip_by_value(y0, zero, max_y)
    y1 = tf.clip_by_value(y1, zero, max_y)
    base = _repeat(tf.range(n_batch) * xlen * ylen, ylen * xlen)
    base_x0 = base + x0 * ylen
    base_x1 = base + x1 * ylen
    index00 = base_x0 + y0
    index01 = base_x0 + y1
    index10 = base_x1 + y0
    index11 = base_x1 + y1

    # use indices to look up pixels in the flat image and restore
    # the n_channel dim
    imgs_flat = tf.reshape(imgs, [-1, n_channel])
    imgs_flat = tf.cast(imgs_flat, tf.float32)
    I00 = tf.gather(imgs_flat, index00)
    I01 = tf.gather(imgs_flat, index01)
    I10 = tf.gather(imgs_flat, index10)
    I11 = tf.gather(imgs_flat, index11)

    # and finally calculate interpolated values
    dx = x - tf.cast(x0, tf.float32)
    dy = y - tf.cast(y0, tf.float32)
    w00 = tf.expand_dims((1. - dx) * (1. - dy), 1)
    w01 = tf.expand_dims((1. - dx) * dy, 1)
    w10 = tf.expand_dims(dx * (1. - dy), 1)
    w11 = tf.expand_dims(dx * dy, 1)
    output = tf.add_n([w00 * I00, w01 * I01, w10 * I10, w11 * I11])

    # reshape
    output = tf.reshape(output, [n_batch, n_channel, xlen, ylen])
    return output
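For orientation, here is a minimal smoke test of non_linear_warp_2d under the conventions the module above assumes: TensorFlow 1.x graph mode, channels-first [batch, channel, x, y] tensors, and the mgrid helper defined earlier in the module. The placeholder shapes and session boilerplate are illustrative assumptions, not part of the original code; with an all-zero displacement field the warp should reproduce the input up to interpolation error.

import numpy as np
import tensorflow as tf  # assumes TF 1.x, matching the graph-mode code above

# one 1-channel 8x8 image and an all-zero displacement field (illustrative)
imgs_np = np.random.rand(1, 1, 8, 8).astype(np.float32)
fields_np = np.zeros((1, 2, 8, 8), dtype=np.float32)

imgs = tf.placeholder(tf.float32, [None, 1, 8, 8])
fields = tf.placeholder(tf.float32, [None, 2, 8, 8])
warped = non_linear_warp_2d([imgs, fields])

with tf.Session() as sess:
    out = sess.run(warped, {imgs: imgs_np, fields: fields_np})
    # a zero field samples each pixel at its own grid location
    print(np.allclose(out, imgs_np, atol=1e-5))  # expected: True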
The Vancouver Canucks spoke confidently about rejoining the playoff fight in the West. Turns out the momentum from that win had a limited shelf life.

EDMONTON — Following their inspired 3-2 overtime win over Toronto on Wednesday night, the Vancouver Canucks spoke confidently about rejoining the playoff fight in the West.

With Jacob Markstrom providing elite goaltending on a nightly basis, Travis Green opted to start Thatcher Demko against the Oilers, and Green didn't waste a lot of time explaining his reasoning.

Demko, in fact, was the least of the Canucks' problems against the Oilers; but when Alex Chiasson opened the scoring five minutes in after Derrick Pouliot lost a puck battle to Sam Gagner, the visitors were in chase mode for the duration of the contest.

Zack Kassian gave the Oilers a 2-0 lead later in the first, converting a dazzling pass by Connor McDavid; and McDavid was back at it in the second, presenting Ryan Nugent-Hopkins with an empty net.

Down 3-0, the Canucks mounted a determined, if ultimately doomed, comeback that began when Oilers goalie Mikko Koskinen gifted a goal to Jay Beagle. Koskinen misplayed Brock Boeser's long-distance wrister, leaving Beagle with a tap-in.

Things got serious in the third when Alex Edler beat Koskinen with a wrist shot from the blue line after the Canucks produced a series of near-misses on the power play. A couple of minutes later, Demko stopped Leon Draisaitl on a breakaway before an Edler point shot was tipped off the crossbar. The Canucks also had a late power play negated by a too-many-men-on-the-ice penalty but couldn't generate much over the final three minutes.

Troy Stecher had voiced that playoff optimism after the win over the Leafs. It was a lovely thought but, following the loss to the Oilers, the Canucks are nine points back of Minnesota, which holds down the eighth and final playoff spot in the West. The Canucks sit 26th overall in the NHL, but the larger development is that the Rangers, Detroit and Chicago all picked up points on Thursday night. The faithful will always have the draft lottery.

Green was asked about the defensive struggles of Pouliot, whose mistake led to the first Oilers goal.

In 2004, on the recommendation of scout Thomas Gradin, the Canucks reached to take a little-known defenceman, then playing with his hometown Ostersund team in what amounted to Sweden's third tier. Thursday night, Alex Edler played his 800th NHL game. Edler is now seventh on the Canucks' all-time games played list, 22 behind Alex Burrows in sixth and 84 behind fifth-place Markus Naslund. Against the Oilers, Edler scored for the second straight night, logged 28:46 of ice time, recorded five shots on goal and drew the unenviable assignment of trying to shut down McDavid. The Oilers' superstar was a dominant figure in the contest, drawing two assists and finishing with three shots on goal.

"Our line needs to be better against him," said Beagle, who was on the ice for the last two Oilers goals. "I take that on myself. I love that matchup and I love going up against top players, but he created a lot out of nothing.
Analysis of Risk Assessment in a Municipal Wastewater Treatment Plant Located in Upper Silesia

Nowadays, risk management applies to every technical facility, branch of the economy, and industry. Due to the characteristics of the analyzed wastewater treatment plant and the specificity of the processes used, each area must be approached individually. Municipal sewage treatment plants are technical facilities; they function as enterprises and are elements of larger systems: water distribution and sewage disposal. Due to their strategic importance for the environment and human beings, it is essential that they are covered by risk management systems. The basic stage of risk management is risk assessment; on its basis, strategic decisions are made and new solutions are introduced. Constant monitoring of the operation of a treatment plant allows for assessment of whether the actions taken are correct and whether they cause deterioration of sewage quality. In this work, we present a method of risk assessment based on historical data for an existing facility, together with the results obtained.
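The abstract does not spell out its scoring scheme, but risk assessments built on historical operating data commonly reduce to a frequency-severity matrix. The sketch below is purely illustrative: the thresholds, categories, and mitigation cut-off are hypothetical values, not figures from the study.

# Illustrative frequency-severity risk matrix; all thresholds are hypothetical.
def risk_score(events_per_year: float, severity: int) -> int:
    """severity: 1 (negligible) ... 5 (catastrophic)."""
    if events_per_year < 0.1:
        likelihood = 1
    elif events_per_year < 1.0:
        likelihood = 2
    elif events_per_year < 10.0:
        likelihood = 3
    else:
        likelihood = 4
    return likelihood * severity

# e.g. a failure seen ~2.5 times a year with severity 4 scores 12,
# which schemes of this kind typically flag for mitigation
print(risk_score(2.5, 4))  # -> 12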
package me.aa07.parautil.spigot.discord;

public class Id2CkeyResponseModel {
    public boolean success;
    public String data;
}
import static gov.nasa.jpf.symbc.ChangeAnnotation.change;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import gov.nasa.jpf.symbc.Debug;

public class SymbcDriver {

    public static final int P_CODE_LENGTH = 128;

    public static void main(String[] args) {
        byte[] secret1_pw;
        byte[] secret2_pw;
        byte[] public_guess;
        if (args.length == 1) {
            String fileName = args[0].replace("#", ",");
            byte[] bytes;
            secret1_pw = new byte[P_CODE_LENGTH];
            secret2_pw = new byte[P_CODE_LENGTH];
            try (FileInputStream fis = new FileInputStream(fileName)) {
                for (int i = 0; i < P_CODE_LENGTH; i++) {
                    bytes = new byte[1];
                    if ((fis.read(bytes)) == -1) {
                        throw new RuntimeException("Not enough input data...");
                    }
                    secret1_pw[i] = Debug.addSymbolicByte(bytes[0], "sym_0_" + i);
                }
                for (int i = 0; i < P_CODE_LENGTH; i++) {
                    bytes = new byte[1];
                    if ((fis.read(bytes)) == -1) {
                        throw new RuntimeException("Not enough input data...");
                    }
                    secret2_pw[i] = Debug.addSymbolicByte(bytes[0], "sym_1_" + i);
                }
            } catch (IOException e) {
                System.err.println("Error reading input");
                e.printStackTrace();
                return;
            }
            System.out.println("secret1=" + Arrays.toString(secret1_pw));
            System.out.println("secret2=" + Arrays.toString(secret2_pw));
        } else {
            secret1_pw = new byte[P_CODE_LENGTH];
            secret2_pw = new byte[P_CODE_LENGTH];
            for (int i = 0; i < secret1_pw.length; i++) {
                secret1_pw[i] = Debug.makeSymbolicByte("sym_0_" + i);
                secret2_pw[i] = Debug.makeSymbolicByte("sym_1_" + i);
            }
        }

        /* Read static image. */
        String filePath = Debug.getDataDir() + "/public.jpg";
        List<Byte> values_public = new ArrayList<>();
        try (FileInputStream fis = new FileInputStream(filePath)) {
            byte[] bytes = new byte[1];
            while ((fis.read(bytes)) != -1) {
                values_public.add(bytes[0]);
            }
        } catch (IOException e) {
            System.err.println("Error reading input");
            e.printStackTrace();
            return;
        }
        public_guess = new byte[values_public.size()];
        for (int i = 0; i < public_guess.length; i++) {
            public_guess[i] = values_public.get(i);
        }

        /* dummy call to symbolic decision */
        if (secret1_pw[0] > 0) {
            int b = 0;
        }

        byte[] secret = new byte[secret1_pw.length];
        for (int i = 0; i < secret.length; i++) {
            secret[i] = (byte) change(secret1_pw[i], secret2_pw[i]);
        }
        ImageMatcherWorker.test(public_guess, secret);
        System.out.println("Done.");
    }
}
Personal determinants of mental reliability of an athlete

Background and Study Aim. Stability of performance at competitions, with a preset effectiveness and in the presence of sporting competition, is the result of the reliable functioning of an athlete's psyche. The hypothesis of the study was that mental reliability is associated with certain individual psychological properties, whose similarity and difference are determined by the level of an athlete's success. The purpose of the study is to identify a set of personality determinants that affect the mental reliability of an athlete.

Material and Methods. The study involved 58 fencers aged 17-18 years (M = 17.47, SD = 0.53). Mental reliability was measured using an integral assessment of the success of sports activities developed by E.V. Melnik and E.V. Silich. According to the final success rate, the total sample was divided into two groups: «successful» and «unsuccessful». The leading individual psychological properties of personality were studied with R. Cattell's 16-PF multifactor personality questionnaire. Data were processed with the SPSS package.

Results. Significant differences were established between the groups of «successful» and «unsuccessful» athletes in the majority of individual psychological properties. The importance of focused analysis and the development of individual mental properties as internal prerequisites for the mental reliability of fencers was confirmed. The relationship of personal factors with success in sports activities is presented. A high level of correlation was revealed between the integral indicator of the success of competitive activity and intelligence (factor B), emotional stability (factor C), emotional hardness (I), confidence (O), and independence and autonomy (Q2). A significant correlation was found between the average level of success in sports and caution (F). This confirms the possibility of applying methods for researching the individual psychological properties of a person when studying the causes of defeats and the prerequisites for an athlete's erroneous actions.

Conclusions. The success of fencers' sports activities does not depend on any single individual psychological property; it is the result of a combination of most of them. A greater number of reliable relationships were revealed between the final indicator of the fencers' success in sports activities and personal factors from the emotional properties group, as compared to the communicative and intellectual properties groups.
Lawmakers cite a range of restrictions to explain why their taxpayer-funded cars cost so much more to lease than average.

The economy is still limping along, but some members of Congress are nevertheless riding in style: At least 10 House members are spending more than $1,000 a month in taxpayer money to lease cars.

Rep. Emanuel Cleaver appears to be the biggest spender. In the last quarter of 2009, the Missouri Democrat doled out $2,900 a month to lease a WiFi-equipped, handicap-accessible mobile office that runs on used cooking oil. But at least nine other members are paying more than $1,000 a month for more basic rides.

Some lawmakers blame their high lease costs on a policy, enacted in a 2007 energy bill, requiring that the vehicles they choose be fuel efficient. Others say their two-year terms in office prevent them from taking advantage of lower-cost, longer-term leases.

A spokesman for House Intelligence Committee Chairman Silvestre Reyes (D-Texas), who is paying $1,628 to lease a GMC Yukon, cited those reasons — and others.

"The leasing costs for the district vehicle are higher than in previous years due to the shortened payment period of 21 months, higher leasing fees that were the result of the financial crisis confronting American automakers at the time and new House environmental rules that required vehicles to conform to stricter emissions standards," Reyes spokesman Vincent Perez said.

A spokeswoman for Rep. Carolyn Cheeks Kilpatrick (D-Mich.), who spends $1,230 per month on a 2009 Chevrolet Tahoe, said Kilpatrick does it for her district.

Pedro Pierluisi, the Democrat who represents Puerto Rico, spends $1,400 each month on his hybrid GMC Yukon, but a spokeswoman said that figure includes insurance, repair and maintenance costs.

Rep. Harry Teague (D-N.M.) — one of the richest members of Congress, with a net worth of more than $36 million — spends $1,279 in taxpayer money on his vehicle, a 2009 Chevy Malibu that helps him traverse his expansive southern New Mexico district. His cost includes additional mileage to facilitate travel in the sixth-largest congressional district in the country, his office said.
Ian McMillan presents Radio 3's 'Cabaret of the Word' with guests Lynne Truss - on writing a novel in the voice of a cat, Irna Qureshi on speaking Bradford Asian English, Nicholas Royle on books he has almost given away, and the singer Nancy Elizabeth.

Lynne Truss is a writer, journalist, sometime radio presenter, and has been described as a professional pedant. Best known for her book Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation, her latest offering, 'Cat Out of Hell' (Hammer), poses the question: are cats potentially evil? The narrator of the book discovers that Roger the cat is not just any cat; he is in fact Roger the evil talking cat.

Irna Qureshi is a writer, researcher, and anthropologist of British Asian culture. She blogs for the Guardian, writing about arts, heritage and social issues - all from the ethnic minority perspective. Using oral history, Irna has curated several national touring exhibitions and publications which explore Britain's South Asian communities.

Folk singer-songwriter Nancy Elizabeth has released her third album, Dancing, on The Leaf Label. Since her first release back in 2006, Nancy has gone on a journey that has taken her from a derelict church in Mexico, to caves in Italy, to run-down pubs in Paris and finally to a small flat in Manchester where 'Dancing' was recorded.

With seven novels, two novellas and a short story collection to his name, Nicholas Royle is also a senior lecturer in creative writing at Manchester Metropolitan University and sits on the panel of the Manchester Fiction Prize. He takes Ian McMillan on a virtual tour of his bookshelf, explaining which books he thinks about giving away, and why he can't quite bring himself to take them to the charity shop. His new novel is 'First Novel' (Vintage).

A season of Poetry and Performance from Hull. Explore the BBC Arts website and discover the best of British art and culture. Listen to programmes, poetry readings and commentary from Radio 3's Dylan Thomas Day.
3D Printing in Colour: Technical Evaluation and Creative Applications

In this paper we report on an ongoing interdisciplinary investigation into the capabilities of colour 3D printing technologies, and present examples of the application of these technologies within art and design practice. The paper demonstrates that a quantitative investigation into colour reproduction in 3D printing can inform real-life design practice. Our research focuses on the powder-binder colour 3D printing system (Z Corporation, Burlington, MA). We present a technical evaluation which compares the colour output of the ZCorp 510 and 650 powder-binder 3D printers through the production and measurement of colour test blocks. The investigation also compares the effect of two infiltrants (paraffin wax and cyanoacrylate) commonly used to enhance 3D printed output. The paper closes with a practical case study showing an application of colour 3D printing within art and design practice.
/** This method is called from within the constructor to
 * initialize the form.
 * WARNING: Do NOT modify this code. The content of this method is
 * always regenerated by the Form Editor.
 */
@SuppressWarnings("unchecked")
private void initComponents() {
    almacenar = new javax.swing.JButton();
    jCalendar1 = new com.toedter.calendar.JCalendar();
    jLabel1 = new javax.swing.JLabel();
    cerrar = new javax.swing.JButton();
    jScrollPane1 = new javax.swing.JScrollPane();
    jTable1 = new javax.swing.JTable();

    setLayout(new org.netbeans.lib.awtextra.AbsoluteLayout());

    almacenar.setText("Borrar");
    add(almacenar, new org.netbeans.lib.awtextra.AbsoluteConstraints(170, 270, -1, -1));

    jCalendar1.setPreferredSize(new java.awt.Dimension(300, 231));
    jCalendar1.setWeekOfYearVisible(false);
    add(jCalendar1, new org.netbeans.lib.awtextra.AbsoluteConstraints(20, 0, 340, 260));

    jLabel1.setBackground(new java.awt.Color(255, 255, 255));
    jLabel1.setText("Fecha");
    jLabel1.setOpaque(true);
    add(jLabel1, new org.netbeans.lib.awtextra.AbsoluteConstraints(20, 270, 130, 30));

    cerrar.setText("Cerrar");
    add(cerrar, new org.netbeans.lib.awtextra.AbsoluteConstraints(520, 280, -1, -1));

    jTable1.setModel(new javax.swing.table.DefaultTableModel(
        new Object [][] {
            {null},
            {null},
            {null},
            {null}
        },
        new String [] {
            "Fecha"
        }
    ) {
        boolean[] canEdit = new boolean [] {
            false
        };

        public boolean isCellEditable(int rowIndex, int columnIndex) {
            return canEdit [columnIndex];
        }
    });
    jScrollPane1.setViewportView(jTable1);
    add(jScrollPane1, new org.netbeans.lib.awtextra.AbsoluteConstraints(410, 10, 190, 260));
}
Composition changes in gingival crevicular fluid during orthodontic tooth movement: comparisons between tension and compression sides.

The aim of this study was to evaluate whether the application of tension or compression forces exerted on the periodontium during the early phase of orthodontic tooth movement is reflected by differences in the composition of the gingival crevicular fluid (GCF), at the level of interleukin-1beta (IL-1beta), substance P (SP), and prostaglandin E (PGE). Eighteen children (mean age 10.8 yr) starting orthodontic treatment were included in the study. Molar elastic separators were inserted mesially to two first upper or lower molars. One of the antagonist molars served as the control. GCF was collected from the mesial and distal sites of each molar, before (-7 d, 0 d) and after (1 min, 1 h, 1 d, and 7 d) the placement of separators. The levels of IL-1beta, SP, and PGE were determined by enzyme-linked immunosorbent assay. At the orthodontically moved teeth, the GCF levels of IL-1beta, SP, and PGE were significantly higher than at the control teeth on both tension and compression sides, and on almost all occasions after insertion of separators. The increase, relative to baseline values, was generally higher on tension sides. For the control teeth, the three mediators remained at baseline levels throughout the experiment. The results suggest that IL-1beta, SP, and PGE levels in the GCF reflect the biologic activity in the periodontium during orthodontic tooth movement.
package com.taikang.tkdoctor.base;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import android.app.Application;

import com.baidu.location.BDLocation;
import com.baidu.location.BDLocationListener;
import com.baidu.location.LocationClient;
import com.lidroid.xutils.DbUtils;
import com.taikang.tkdoctor.bean.HdBsAuthUser;
import com.taikang.tkdoctor.bean.LocationBean;
import com.taikang.tkdoctor.bean.UserBaseInfoBean;
import com.taikang.tkdoctor.bean.UserPersonalInfoDto;
import com.taikang.tkdoctor.db.DBManager;
import com.taikang.tkdoctor.db.MyDbUpgradeListener;
import com.taikang.tkdoctor.db.SQLiteDBManager;
import com.taikang.tkdoctor.requestcallback.BaiduLocationCallBack;
import com.taikang.tkdoctor.util.Config;

public class MainApplication extends Application {

    private static MainApplication mThis;
    private HdBsAuthUser user;
    private UserBaseInfoBean baseInfo;
    private DBManager dbManager;
    private UserPersonalInfoDto userBaseInfo;

    public List<Map<String, String>> listparamsMap = new ArrayList<Map<String, String>>();
    public Map<String, String> params_manbing = new HashMap<String, String>();

    public LocationClient mLocationClient;
    public MyLocationListener mLocationListener;
    public LocationBean mLocationBean;
    public BaiduLocationCallBack mLocatioCallBack;

    public String chooseCity;
    public String chooseCityHan;

    public String getChooseCityHan() {
        return chooseCityHan;
    }

    public void setChooseCityHan(String chooseCityHan) {
        this.chooseCityHan = chooseCityHan;
    }

    public String getChooseCity() {
        return chooseCity;
    }

    public void setChooseCity(String chooseCity) {
        this.chooseCity = chooseCity;
    }

    public static MainApplication getInstance() {
        return mThis;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mThis = this;
        mLocationClient = new LocationClient(this);
        mLocationListener = new MyLocationListener();
        mLocationClient.registerLocationListener(mLocationListener);
        DbUtils db = DbUtils.create(mThis, Config.DB_NAME, Config.DB_VERSION,
                new MyDbUpgradeListener()).configDebug(Config.DB_DEBUG);
        if (DBManager.getInstance() == null) {
            dbManager = new SQLiteDBManager(db);
        }
    }

    public DBManager getDbManager() {
        return dbManager;
    }

    public HdBsAuthUser getUser() {
        return user;
    }

    public void setUser(HdBsAuthUser user) {
        this.user = user;
    }

    public UserPersonalInfoDto getUserBaseInfo() {
        return userBaseInfo;
    }

    public void setUserBaseInfo(UserPersonalInfoDto userBaseInfo) {
        this.userBaseInfo = userBaseInfo;
    }

    public LocationClient getBaiduLocationClient() {
        return mLocationClient;
    }

    public LocationBean getLocationBean() {
        return mLocationBean;
    }

    public void setBaiduLocationCallBack(BaiduLocationCallBack callBack) {
        this.mLocatioCallBack = callBack;
    }

    public class MyLocationListener implements BDLocationListener {

        @Override
        public void onReceiveLocation(BDLocation location) {
            // Receive the location fix and cache it as a LocationBean
            String city = location.getCity();
            String cityCode = location.getCityCode();
            double lat = location.getLatitude();
            double lng = location.getLongitude();
            mLocationBean = new LocationBean();
            mLocationBean.setCity(city);
            mLocationBean.setCityCode(cityCode);
            mLocationBean.setLat(String.valueOf(lat));
            mLocationBean.setLng(String.valueOf(lng));
            mLocationClient.stop();
            mLocatioCallBack.onLocatedCityCallBack(mLocationBean);
        }
    }

    public UserBaseInfoBean getBaseInfo() {
        return baseInfo;
    }

    public void setBaseInfo(UserBaseInfoBean baseInfo) {
        this.baseInfo = baseInfo;
    }
}
// lib/store/cleanup_test.go

// Copyright (c) 2016-2019 Uber Technologies, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package store

import (
	"io/ioutil"
	"os"
	"testing"
	"time"

	"github.com/uber/kraken/core"
	"github.com/uber/kraken/lib/store/base"
	"github.com/uber/kraken/lib/store/metadata"
	"github.com/uber/kraken/utils/testutil"

	"github.com/andres-erbsen/clock"
	"github.com/stretchr/testify/require"
	"github.com/uber-go/tally"
)

func fileOpFixture(clk clock.Clock) (base.FileState, base.FileOp, func()) {
	var cleanup testutil.Cleanup
	defer cleanup.Recover()

	dir, err := ioutil.TempDir("/tmp", "cleanup_test")
	if err != nil {
		panic(err)
	}
	cleanup.Add(func() { os.RemoveAll(dir) })

	state := base.NewFileState(dir)
	store := base.NewLocalFileStore(clk)
	return state, store.NewFileOp().AcceptState(state), cleanup.Run
}

func TestCleanupManagerAddJob(t *testing.T) {
	require := require.New(t)

	clk := clock.New()

	m, err := newCleanupManager(clk, tally.NoopScope)
	require.NoError(err)
	defer m.stop()

	state, op, cleanup := fileOpFixture(clk)
	defer cleanup()

	config := CleanupConfig{
		Interval: time.Second,
		TTI:      time.Second,
	}
	m.addJob("test_cleanup", config, op)

	name := "test_file"
	require.NoError(op.CreateFile(name, state, 0))

	time.Sleep(2 * time.Second)

	_, err = op.GetFileStat(name)
	require.True(os.IsNotExist(err))
}

func TestCleanupManagerDeleteIdleFiles(t *testing.T) {
	require := require.New(t)

	clk := clock.NewMock()
	clk.Set(time.Now())

	tti := 6 * time.Hour
	ttl := 24 * time.Hour

	m, err := newCleanupManager(clk, tally.NoopScope)
	require.NoError(err)
	defer m.stop()

	state, op, cleanup := fileOpFixture(clk)
	defer cleanup()

	var names []string
	for i := 0; i < 100; i++ {
		names = append(names, core.DigestFixture().Hex())
	}

	idle := names[:50]
	for _, name := range idle {
		require.NoError(op.CreateFile(name, state, 0))
	}

	clk.Add(tti + 1)

	active := names[50:]
	for _, name := range active {
		require.NoError(op.CreateFile(name, state, 0))
	}

	_, err = m.scan(op, tti, ttl)
	require.NoError(err)

	for _, name := range idle {
		_, err := op.GetFileStat(name)
		require.True(os.IsNotExist(err))
	}
	for _, name := range active {
		_, err := op.GetFileStat(name)
		require.NoError(err)
	}
}

func TestCleanupManagerDeleteExpiredFiles(t *testing.T) {
	require := require.New(t)

	clk := clock.NewMock()
	clk.Set(time.Now())

	tti := 6 * time.Hour
	ttl := 24 * time.Hour

	m, err := newCleanupManager(clk, tally.NoopScope)
	require.NoError(err)
	defer m.stop()

	state, op, cleanup := fileOpFixture(clk)
	defer cleanup()

	var names []string
	for i := 0; i < 10; i++ {
		names = append(names, core.DigestFixture().Hex())
	}

	for _, name := range names {
		require.NoError(op.CreateFile(name, state, 0))
	}

	_, err = m.scan(op, tti, ttl)
	require.NoError(err)

	for _, name := range names {
		_, err := op.GetFileStat(name)
		require.NoError(err)
	}

	clk.Add(ttl + 1)

	_, err = m.scan(op, tti, ttl)
	require.NoError(err)

	for _, name := range names {
		_, err := op.GetFileStat(name)
		require.True(os.IsNotExist(err))
	}
}

func TestCleanupManagerSkipsPersistedFiles(t *testing.T) {
	require := require.New(t)

	clk := clock.NewMock()
	clk.Set(time.Now())

	tti := 48 * time.Hour
	ttl := 24 * time.Hour

	m, err := newCleanupManager(clk, tally.NoopScope)
	require.NoError(err)
	defer m.stop()

	state, op, cleanup := fileOpFixture(clk)
	defer cleanup()

	var names []string
	for i := 0; i < 100; i++ {
		names = append(names, core.DigestFixture().Hex())
	}

	idle := names[:50]
	for _, name := range idle {
		require.NoError(op.CreateFile(name, state, 0))
	}

	persisted := names[50:]
	for _, name := range persisted {
		require.NoError(op.CreateFile(name, state, 0))
		_, err := op.SetFileMetadata(name, metadata.NewPersist(true))
		require.NoError(err)
	}

	clk.Add(tti + 1)

	_, err = m.scan(op, tti, ttl)
	require.NoError(err)

	for _, name := range idle {
		_, err := op.GetFileStat(name)
		require.True(os.IsNotExist(err))
	}
	for _, name := range persisted {
		_, err := op.GetFileStat(name)
		require.NoError(err)
	}
}

func TestCleanupManageDiskUsage(t *testing.T) {
	require := require.New(t)

	clk := clock.New()

	m, err := newCleanupManager(clk, tally.NoopScope)
	require.NoError(err)
	defer m.stop()

	state, op, cleanup := fileOpFixture(clk)
	defer cleanup()

	for i := 0; i < 100; i++ {
		require.NoError(op.CreateFile(core.DigestFixture().Hex(), state, 5))
	}

	usage, err := m.scan(op, time.Hour, time.Hour)
	require.NoError(err)
	require.Equal(int64(500), usage)
}
The search for risk factors for the development of germ cell tumors (GCT) in children living in Mexico City (MC). An observational case-control study was conducted in children under 15 years of age, resident in MC and insured by the Mexican Institute of Social Security. The study population was selected between January 1st, 1990 and December 31st, 1994. Parents of the children were interviewed with a 230-item precoded questionnaire, validated previously in a pilot study. For the analysis, simple frequencies, odds ratios (OR), and 95% confidence intervals (95% CI) were obtained. There were 21 cases and 105 controls. The most significant risk factors were winter conception (OR = 7.6, 95% CI 1.5-39.3; P = 0.007); low parental education level (OR = 2.9, 95% CI 1.1-7.5; P = 0.026); and parental combined dust and electricity exposure before pregnancy (OR = 26, 95% CI 2.28-1291.86; P = 0.0007), during pregnancy (OR = 8.58, 95% CI 0.89-106.55; P = 0.041), and after pregnancy (OR = 9.66, 95% CI 0.99-120.22; P = 0.027). There was a protective effect of repetitive infections during infancy. In conclusion, winter conception is in accordance with the infectious etiology theory of GCT development. The low parental education level and the combined exposure to dust and electricity are very important. The protective effect of repetitive infections and other factors makes further epidemiologic studies in this field necessary.
from csv import DictWriter
from io import BytesIO, TextIOWrapper

# Note: the bare name `csv` below is assumed to refer to a project-local
# teams CSV module (providing TeamMembershipImportManager), imported
# elsewhere, not to the standard library.
def csv_import(course, csv_dict_rows):
    import_manager = csv.TeamMembershipImportManager(course)
    import_manager.teamset_ids = {ts.teamset_id for ts in course.teamsets}
    with BytesIO() as mock_csv_file:
        with TextIOWrapper(mock_csv_file, write_through=True) as text_wrapper:
            header_fields = csv._get_team_membership_csv_headers(course)
            csv_writer = DictWriter(text_wrapper, fieldnames=header_fields)
            csv_writer.writeheader()
            csv_writer.writerows(csv_dict_rows)
            mock_csv_file.seek(0)
            import_manager.set_team_membership_from_csv(mock_csv_file)
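A hypothetical invocation, for orientation; the header fields a real course expects depend on its teamsets, so the column names below are illustrative only.

# Illustrative rows; 'teamset-1' stands in for a real teamset id on the course.
rows = [
    {'user': 'student_a', 'mode': 'audit', 'teamset-1': 'team-alpha'},
    {'user': 'student_b', 'mode': 'audit', 'teamset-1': 'team-beta'},
]
csv_import(course, rows)  # `course` is an existing course object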
import argparse
import ast

# `_FILTER_CONTEXTS` is assumed to be a tuple of valid context strings
# defined elsewhere in the module.
def validate_context(context):
    if context not in _FILTER_CONTEXTS:
        try:
            context = ast.literal_eval(context)
            assert isinstance(context, tuple)
            assert context[0] in ('in', 'out', 'both')
            assert isinstance(context[1], int)
            assert context[1] >= 0
        except Exception:
            message = (
                'filter_context needs to be either a string from {} or '
                """from "('in', 2)", "('out', 2)", "('both', 2)" where """
                '2 may be replaced by any integer '
                '>= 0.'.format(', '.join('"{}"'.format(s)
                                         for s in _FILTER_CONTEXTS)))
            raise argparse.ArgumentTypeError(message)
    return context
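A minimal sketch of how validate_context plugs into argparse. The tuple assigned to _FILTER_CONTEXTS below is an illustrative stand-in for the real definition, which lives elsewhere in the project.

_FILTER_CONTEXTS = ('in', 'out', 'both')  # stand-in for the real definition

parser = argparse.ArgumentParser()
parser.add_argument('--filter-context', type=validate_context, default='both')

# a plain string from _FILTER_CONTEXTS passes through unchanged;
# a tuple literal is parsed and validated
args = parser.parse_args(['--filter-context', "('in', 2)"])
print(args.filter_context)  # -> ('in', 2)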
import { Component, EventEmitter, Output } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  @Output() add: EventEmitter<any> = new EventEmitter();

  addToCart() {
    this.add.emit('addToCart');
  }
}
// source/constructs/lib/serverless-image-stack.ts

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { PriceClass } from '@aws-cdk/aws-cloudfront';
import { Aspects, Aws, CfnMapping, CfnOutput, CfnParameter, Construct, Stack, StackProps, Tags } from '@aws-cdk/core';
import { SuppressLambdaFunctionCfnRulesAspect } from '../utils/aspects';
import { BackEnd } from './back-end/back-end-construct';
import { CommonResources } from './common-resources/common-resources-construct';
import { FrontEndConstruct as FrontEnd } from './front-end/front-end-construct';
import { SolutionConstructProps, YesNo } from './types';

export interface ServerlessImageHandlerStackProps extends StackProps {
  readonly description: string;
  readonly solutionId: string;
  readonly solutionName: string;
  readonly solutionVersion: string;
  readonly solutionDisplayName: string;
  readonly solutionAssetHostingBucketNamePrefix: string;
}

export class ServerlessImageHandlerStack extends Stack {
  constructor(scope: Construct, id: string, props: ServerlessImageHandlerStackProps) {
    super(scope, id, props);

    const corsEnabledParameter = new CfnParameter(this, 'CorsEnabledParameter', {
      type: 'String',
      description: `Would you like to enable Cross-Origin Resource Sharing (CORS) for the image handler API? Select 'Yes' if so.`,
      allowedValues: ['Yes', 'No'],
      default: 'No'
    });

    const corsOriginParameter = new CfnParameter(this, 'CorsOriginParameter', {
      type: 'String',
      description: `If you selected 'Yes' above, please specify an origin value here. A wildcard (*) value will support any origin. We recommend specifying an origin (i.e. https://example.domain) to restrict cross-site access to your API.`,
      default: '*'
    });

    const sourceBucketsParameter = new CfnParameter(this, 'SourceBucketsParameter', {
      type: 'String',
      description: '(Required) List the buckets (comma-separated) within your account that contain original image files. If you plan to use Thumbor or Custom image requests with this solution, the source bucket for those requests will be the first bucket listed in this field.',
      allowedPattern: '.+',
      default: 'defaultBucket, bucketNo2, bucketNo3, ...'
    });

    const deployDemoUIParameter = new CfnParameter(this, 'DeployDemoUIParameter', {
      type: 'String',
      description: 'Would you like to deploy a demo UI to explore the features and capabilities of this solution? This will create an additional Amazon S3 bucket and Amazon CloudFront distribution in your account.',
      allowedValues: ['Yes', 'No'],
      default: 'Yes'
    });

    const logRetentionPeriodParameter = new CfnParameter(this, 'LogRetentionPeriodParameter', {
      type: 'Number',
      description: 'This solution automatically logs events to Amazon CloudWatch. Select the amount of time for CloudWatch logs from this solution to be retained (in days).',
      allowedValues: ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653', '9999'],
      default: '1'
    });

    const autoWebPParameter = new CfnParameter(this, 'AutoWebPParameter', {
      type: 'String',
      description: `Would you like to enable automatic WebP based on accept headers? Select 'Yes' if so.`,
      allowedValues: ['Yes', 'No'],
      default: 'No'
    });

    const enableSignatureParameter = new CfnParameter(this, 'EnableSignatureParameter', {
      type: 'String',
      description: `Would you like to enable the signature? If so, select 'Yes' and provide SecretsManagerSecret and SecretsManagerKey values.`,
      allowedValues: ['Yes', 'No'],
      default: 'No'
    });

    const secretsManagerSecretParameter = new CfnParameter(this, 'SecretsManagerSecretParameter', {
      type: 'String',
      description: 'The name of AWS Secrets Manager secret. You need to create your secret under this name.',
      default: ''
    });

    const secretsManagerKeyParameter = new CfnParameter(this, 'SecretsManagerKeyParameter', {
      type: 'String',
      description: 'The name of AWS Secrets Manager secret key. You need to create secret key with this key name. The secret value would be used to check signature.',
      default: ''
    });

    const enableDefaultFallbackImageParameter = new CfnParameter(this, 'EnableDefaultFallbackImageParameter', {
      type: 'String',
      description: `Would you like to enable the default fallback image? If so, select 'Yes' and provide FallbackImageS3Bucket and FallbackImageS3Key values.`,
      allowedValues: ['Yes', 'No'],
      default: 'No'
    });

    const fallbackImageS3BucketParameter = new CfnParameter(this, 'FallbackImageS3BucketParameter', {
      type: 'String',
      description: 'The name of the Amazon S3 bucket which contains the default fallback image. e.g. my-fallback-image-bucket',
      default: ''
    });

    const fallbackImageS3KeyParameter = new CfnParameter(this, 'FallbackImageS3KeyParameter', {
      type: 'String',
      description: 'The name of the default fallback image object key including prefix. e.g. prefix/image.jpg',
      default: ''
    });

    const cloudFrontPriceClassParameter = new CfnParameter(this, 'CloudFrontPriceClassParameter', {
      type: 'String',
      description: 'The AWS CloudFront price class to use. For more information see: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html',
      allowedValues: [PriceClass.PRICE_CLASS_ALL, PriceClass.PRICE_CLASS_200, PriceClass.PRICE_CLASS_100],
      default: PriceClass.PRICE_CLASS_ALL
    });

    const solutionMapping = new CfnMapping(this, 'Solution', {
      mapping: {
        Config: {
          AnonymousUsage: 'Yes',
          SolutionId: props.solutionId,
          Version: props.solutionVersion,
          S3BucketPrefix: props.solutionAssetHostingBucketNamePrefix,
          S3KeyPrefix: `${props.solutionName}/${props.solutionVersion}`
        }
      },
      lazy: true
    });

    const anonymousUsage = `${solutionMapping.findInMap('Config', 'AnonymousUsage')}`;
    const sourceCodeBucketName = `${solutionMapping.findInMap('Config', 'S3BucketPrefix')}-${Aws.REGION}`;
    const sourceCodeKeyPrefix = solutionMapping.findInMap('Config', 'S3KeyPrefix');

    const solutionConstructProps: SolutionConstructProps = {
      corsEnabled: corsEnabledParameter.valueAsString,
      corsOrigin: corsOriginParameter.valueAsString,
      sourceBuckets: sourceBucketsParameter.valueAsString,
      deployUI: deployDemoUIParameter.valueAsString as YesNo,
      logRetentionPeriod: logRetentionPeriodParameter.valueAsNumber,
      autoWebP: autoWebPParameter.valueAsString,
      enableSignature: enableSignatureParameter.valueAsString as YesNo,
      secretsManager: secretsManagerSecretParameter.valueAsString,
      secretsManagerKey: secretsManagerKeyParameter.valueAsString,
      enableDefaultFallbackImage: enableDefaultFallbackImageParameter.valueAsString as YesNo,
      fallbackImageS3Bucket: fallbackImageS3BucketParameter.valueAsString,
      fallbackImageS3KeyBucket: fallbackImageS3KeyParameter.valueAsString
    };

    const commonResources = new CommonResources(this, 'CommonResources', {
      solutionId: props.solutionId,
      solutionVersion: props.solutionVersion,
      solutionDisplayName: props.solutionDisplayName,
      sourceCodeBucketName: sourceCodeBucketName,
      sourceCodeKeyPrefix: sourceCodeKeyPrefix,
      ...solutionConstructProps
    });

    const frontEnd = new FrontEnd(this, 'FrontEnd', {
      logsBucket: commonResources.logsBucket,
      conditions: commonResources.conditions
    });

    const backEnd = new BackEnd(this, 'BackEnd', {
      sourceCodeBucketName: sourceCodeBucketName,
      sourceCodeKeyPrefix: sourceCodeKeyPrefix,
      solutionVersion: props.solutionVersion,
      solutionDisplayName: props.solutionDisplayName,
      secretsManagerPolicy: commonResources.secretsManagerPolicy,
      logsBucket: commonResources.logsBucket,
      uuid: commonResources.customResources.uuid,
      cloudFrontPriceClass: cloudFrontPriceClassParameter.valueAsString,
      ...solutionConstructProps
    });

    commonResources.customResources.setupAnonymousMetric({
      anonymousData: anonymousUsage,
      ...solutionConstructProps
    });

    commonResources.customResources.setupValidateSourceAndFallbackImageBuckets({
      sourceBuckets: sourceBucketsParameter.valueAsString,
      fallbackImageS3Bucket: fallbackImageS3BucketParameter.valueAsString,
      fallbackImageS3Key: fallbackImageS3KeyParameter.valueAsString
    });

    commonResources.customResources.setupValidateSecretsManager({
      secretsManager: secretsManagerSecretParameter.valueAsString,
      secretsManagerKey: secretsManagerKeyParameter.valueAsString
    });

    commonResources.customResources.setupCopyWebsiteCustomResource({
      hostingBucket: frontEnd.websiteHostingBucket
    });

    commonResources.customResources.setupPutWebsiteConfigCustomResource({
      hostingBucket: frontEnd.websiteHostingBucket,
      apiEndpoint: backEnd.domainName
    });

    this.templateOptions.metadata = {
      'AWS::CloudFormation::Interface': {
        ParameterGroups: [
          {
            Label: { default: 'CORS Options' },
            Parameters: [corsEnabledParameter.logicalId, corsOriginParameter.logicalId]
          },
          {
            Label: { default: 'Image Sources' },
            Parameters: [sourceBucketsParameter.logicalId]
          },
          {
            Label: { default: 'Demo UI' },
            Parameters: [deployDemoUIParameter.logicalId]
          },
          {
            Label: { default: 'Event Logging' },
            Parameters: [logRetentionPeriodParameter.logicalId]
          },
          {
            Label: {
              default:
                'Image URL Signature (Note: Enabling signature is not compatible with previous image URLs, which could result in broken image links. Please refer to the implementation guide for details: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/considerations.html)'
            },
            Parameters: [enableSignatureParameter.logicalId, secretsManagerSecretParameter.logicalId, secretsManagerKeyParameter.logicalId]
          },
          {
            Label: {
              default:
                'Default Fallback Image (Note: Enabling default fallback image returns the default fallback image instead of JSON object when error happens. Please refer to the implementation guide for details: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/considerations.html)'
            },
            Parameters: [enableDefaultFallbackImageParameter.logicalId, fallbackImageS3BucketParameter.logicalId, fallbackImageS3KeyParameter.logicalId]
          },
          {
            Label: { default: 'Auto WebP' },
            Parameters: [autoWebPParameter.logicalId]
          }
        ],
        ParameterLabels: {
          [corsEnabledParameter.logicalId]: { default: 'CORS Enabled' },
          [corsOriginParameter.logicalId]: { default: 'CORS Origin' },
          [sourceBucketsParameter.logicalId]: { default: 'Source Buckets' },
          [deployDemoUIParameter.logicalId]: { default: 'Deploy Demo UI' },
          [logRetentionPeriodParameter.logicalId]: { default: 'Log Retention Period' },
          [autoWebPParameter.logicalId]: { default: 'AutoWebP' },
          [enableSignatureParameter.logicalId]: { default: 'Enable Signature' },
          [secretsManagerSecretParameter.logicalId]: { default: 'SecretsManager Secret' },
          [secretsManagerKeyParameter.logicalId]: { default: 'SecretsManager Key' },
          [enableDefaultFallbackImageParameter.logicalId]: { default: 'Enable Default Fallback Image' },
          [fallbackImageS3BucketParameter.logicalId]: { default: 'Fallback Image S3 Bucket' },
          [fallbackImageS3KeyParameter.logicalId]: { default: 'Fallback Image S3 Key' },
          [cloudFrontPriceClassParameter.logicalId]: { default: 'CloudFront PriceClass' }
        }
      }
    };

    /* eslint-disable no-new */
    new CfnOutput(this, 'ApiEndpoint', {
      value: `https://${backEnd.domainName}`,
      description: 'Link to API endpoint for sending image requests to.'
    });
    new CfnOutput(this, 'DemoUrl', {
      value: `https://${frontEnd.domainName}/index.html`,
      description: 'Link to the demo user interface for the solution.',
      condition: commonResources.conditions.deployUICondition
    });
    new CfnOutput(this, 'SourceBuckets', {
      value: sourceBucketsParameter.valueAsString,
      description: 'Amazon S3 bucket location containing original image files.'
    });
    new CfnOutput(this, 'CorsEnabled', {
      value: corsEnabledParameter.valueAsString,
      description: 'Indicates whether Cross-Origin Resource Sharing (CORS) has been enabled for the image handler API.'
    });
    new CfnOutput(this, 'CorsOrigin', {
      value: corsOriginParameter.valueAsString,
      description: 'Origin value returned in the Access-Control-Allow-Origin header of image handler API responses.',
      condition: commonResources.conditions.enableCorsCondition
    });
    new CfnOutput(this, 'LogRetentionPeriod', {
      value: logRetentionPeriodParameter.valueAsString,
      description: 'Number of days for event logs from Lambda to be retained in CloudWatch.'
    });

    Aspects.of(this).add(new SuppressLambdaFunctionCfnRulesAspect());
    Tags.of(this).add('SolutionId', props.solutionId);
  }
}
/**
 * (C) VyanTech.com Ltd 2022
 */
package com.example.problems.arrays.sort.movezeros;

import java.util.Arrays;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class TwoPointerMoveZerosToEndTest {

    private final MoveZerosToEnd algo = new TwoPointerMoveZerosToEnd();

    @Test
    public void test() {
        int[] input = new int[] { 0, 1, 2, 3, 0, 4, 0, 5, 0, 6, 0, 0 };
        int[] output = new int[] { 6, 1, 2, 3, 5, 4, 0, 0, 0, 0, 0, 0 };
        algo.move(input);
        assertEquals(Arrays.toString(output), Arrays.toString(input));
    }
}
Among the wounded admitted to departments of anesthesiology, resuscitation and intensive therapy, gunshot injuries of the abdomen accounted for 57.8 to 77.6% of cases. Successful treatment of such patients depends not only on the timeliness and quality of surgical interventions but also on the correct choice of intensive therapy before and during the operation and in the postoperative period. A temporizing strategy, in which the list of methods used is expanded only once symptoms of an unfavorable postoperative course appear, cannot be considered sufficiently effective. Complex intensive therapy that acts pre-emptively on the different links of wound disease pathogenesis in most cases allows not only the elimination of organ and systemic impairments resulting from the wound but is also more effective in supporting defensive and compensatory mechanisms. Differentiating treatment programs depending not only on the severity of the patient's state but also on the character of injuries to organs of the abdominal and retroperitoneal areas is of leading significance.
package com.webank.wecross.account.service.db;

import org.springframework.data.jpa.repository.JpaRepository;

public interface UniversalAccountACLTableJPA extends JpaRepository<UniversalAccountACLTableBean, Integer> {
    UniversalAccountACLTableBean findByUsername(String username);
}
package response

import (
	"fmt"
	"net/http"
)

type Error struct {
	StatusCode int
	Message    string
	Messages   []string
	Result     interface{}
}

func (e *Error) Error() string {
	return fmt.Sprintf("HTTP %d: %s", e.StatusCode, e.Message)
}

func (e *Error) WithMessage(msg string) *Error {
	e.Message = msg
	return e
}

func (e *Error) AddMessages(msgs ...string) *Error {
	e.Messages = append(e.Messages, msgs...)
	return e
}

func (e *Error) WithResult(result interface{}) *Error {
	e.Result = result
	return e
}

func makeError(status int) *Error {
	return &Error{
		StatusCode: status,
		Messages:   make([]string, 0),
		Result:     []string{},
	}
}

// -----------------------------------------------

func ErrUnexpected() *Error {
	return makeError(http.StatusInternalServerError).
		WithMessage("An unexpected error has occurred")
}

func ErrBadRequest() *Error {
	return makeError(http.StatusBadRequest).
		WithMessage("Bad request")
}

func ErrNotFound() *Error {
	return makeError(http.StatusNotFound).
		WithMessage("Requested resources not found")
}

func ErrConflict() *Error {
	return makeError(http.StatusConflict).
		WithMessage("Conflict")
}

func ErrInvalidJson() *Error {
	return ErrBadRequest().AddMessages("Invalid JSON body")
}

func ErrorMethodNotAllowed() *Error {
	return makeError(http.StatusMethodNotAllowed).AddMessages("Method not allowed")
}
import bcrypt from "bcrypt";
import type { IResolvers } from "graphql-tools";

import { BankAccount, User } from "../../database/entities";
import type { MutationResolvers, QueryResolvers } from "../graphql/generated";
import type { Context } from "../../utils/context";

export const authQueries: Pick<QueryResolvers<Context>, "me"> = {
  me: async (_, _args, { user }) => {
    if (!user) {
      throw new Error("You are not authenticated!");
    }
    return (await User.findOne(user.id)) ?? null;
  }
};

export const authMutations: Pick<
  MutationResolvers<Context>,
  "login" | "register" | "addBankAccount"
> = {
  login: async (_, { email, password }) => {
    const user = await User.findOne({
      email,
    });
    if (!user) {
      throw new Error("No user with that email");
    }
    const valid = await bcrypt.compare(password, user.password);
    if (!valid) {
      throw new Error("Incorrect password");
    }
    const token = user.generateJwtToken();
    return {
      token,
      user,
    };
  },
  register: async (_, { firstName, lastName, email, phone, countryCode, password }) => {
    const existing = await User.findOne({
      email,
    });
    if (existing) {
      throw new Error("Email already in use");
    }
    const user = User.create({
      firstName,
      lastName,
      email,
      phone,
      countryCode,
      password,
    });
    await user.save();
    return user;
  },
  addBankAccount: async (_, { bankId, accountName, accountNumber }, { user }) => {
    const bankAccount = BankAccount.create({
      bankId,
      accountName,
      accountNumber,
      user,
    });
    await bankAccount.save();
    return bankAccount;
  }
};

export const auth: IResolvers = {
  Query: {
    ...authQueries,
  },
  Mutation: {
    ...authMutations,
  },
};
Although the lifetime ban and disqualification of the results of Lance Armstrong is now secure, USADA CEO Travis Tygart's work is not nearly done. The arbitration cases for Johan Bruyneel and Jose "Pepe" Martí are still pending, so there may be more details to emerge from the seedy tale of cycling's doping culture. After unearthing the disturbing truths, Tygart sees independent organisations such as his as the only way forward for the sport.

At the same time as the International Cycling Union was turning its back on whistle-blowers such as Jörg Jaksche, Tyler Hamilton and Floyd Landis, USADA was taking notes, taking them seriously and investigating the allegations. Why the UCI failed to do so sooner comes down to what Tygart calls the inherent conflict of interest, or "fox guarding the henhouse", that is key to cycling's problems.

In fact, if one precedent is established by the Armstrong case, Tygart hopes it is that clean athletes have greater faith in the anti-doping establishment, and trust that "they're not going to turn a blind eye, regardless of how powerful or influential those who broke the rules may be," Tygart told Cyclingnews.

Compare that with the actions of the UCI, of which Tygart would only say, "they speak louder than words".

"Back in August, they were arguing and telling everybody we were on a witch hunt. They had no idea what the evidence was, but they sued Floyd ... they've called the whistle-blowers scum bags. Those certainly aren't the actions you would take if you truly wanted to move your sport in the right direction on this topic."

Splitting anti-doping from UCI not necessary

There have been those who have called for cycling to create its own independent anti-doping agency in order to remove the conflict of interest from the UCI, but Tygart said that this step was not necessary. But the UCI does need to remove itself from total control and allow better coordination with the independent anti-doping agencies.

This very topic caused conflict between USADA and the UCI ahead of the 2011 Tour of California. USADA wanted to be able to perform targeted testing and receive the results, but while the UCI was ready to allow USADA to simply perform the controls, it wanted absolute results management authority. A similar conflict happened between the UCI and the French Anti-Doping Agency (AFLD) before the Tour de France in 2010.

"When you have the UCI sending its own collectors in, and the reports only go to the UCI, that's an inherent conflict. Why would you want to do that as a sport, other than to control each and every aspect of it to your best interest?

"Unlike any other International Federation we work with, they've never articulated why they do it that way. The only conclusion, particularly when they can't articulate any other logical reason to do it, is that they want to control the outcomes. That's where the 'fox guarding the henhouse' just reeks and the perception of that is killing them right now."

That very kind of conflict of interest is what led to the creation of WADA and USADA, but it is a philosophy the UCI has been unwilling to fully embrace.
"You have to give up a little control, but at the end of the day it's the best thing for clean athletes - it may generate some ugly headlines from time to time - some of the top athletes who decide to cheat will be held accountable - but at the end of the day you've taken yourself, as a promoter of sport, out of that inherent conflict of having to bring discipline against one of your own." Tygart suggests allowing the national anti-doping agencies to perform the testing and decide who gets tested and for what substance - the latter point made more important by the fact that the first Amgen Tour of California did not include doping control tests for EPO, Amgen's key money-making drug, a drug we now know was being widely abused leading up to that first edition in 2006. "It was terrible for the sponsors, and terrible for clean athletes," said of the failure to perform EPO tests. "If we or any NADA were testing that event, that would never have happened. Did it happen because of incompetence of a foreign entity sending testers into the US? Or did it happen because they intentionally didn't want to have people test positive for EPO? You have to ask that question. I don't know that the proof of that is there, but you have to ask the question: Why, as a sport organiser, would you put yourself in the position to have those questions asked... unless you want to control it for your own self-interested outcome." Independent Commission The UCI's Management Committee decided at its emergency meeting last month that it was necessary to form an independent commission to examine the "various allegations made about UCI relating to the Armstrong affair", but Tygart hopes the scope will be broader than just looking into a few key issues such as Armstrong's 2001 Tour de Suisse doping control which was suspicious for EPO. "It has to have a broad term of reference and look into any and every aspect... to do to some extent what the Mitchell Report did for baseball - not only look into the past and expose the past, but to learn the lesson so you can unshackle yourself from that past. And have tangible recommendations put in place so you can ensure the sport moves in the right direction." Tygart reiterated that he thinks a "truth and reconciliation" and a willingness to objectively examine cycling's past is the only way for the sport to move on, and does not agree with the Sky team's "zero tolerance" anti-doping policy. "If you doped, that's basically a lie. So you're going to have no problem continuing to lie. If you're asked if you doped in the past and you know there will be consequences if you finally tell the truth, are you going to tell the truth? No, you're just going to continue to lie. "That further sustains cheaters within the sport who are living a lie, who have to continue to live the lie, so there's a lot less likelihood they're ever going to change their ways, seek forgiveness and be redeemed. Which is why we think it's really important to have a meaningful truth and reconciliation." If anything comes out of all of the ugliness unearthed in the investigation into cycling's doping issues, Tygart hopes that his agency, their equivalents around the world, and WADA itself, have proven that they can provide a reasonable avenue for clean athletes to report on doping activities. "Hopefully this proves they will treat you with empathy and compassion. 
They will hold you accountable where you've broken the rules, but they'll be compassionate about it and look at the bigger picture rather than stifle one individual athlete," he said. "Hopefully what that does is send a very powerful deterrent and preventative message to those athletes who may have thought they could get away with it or grow so big they become too big to fail."
The English Conquest: Gildas and Britain in the Fifth Century by N.J. Higham (review) Nick Higham has been, over the last ten years, one of the most prolific historians of early medieval Britain. In addition to numerous articles (many of them dealing with Gildas), Higham has produced two regional histories (The Northern Counties to AD 1000 and The Origins of Cheshire), a synthesis of the archaeology and history of fifth-century Britain (Rome, Britain, and the Anglo-Saxons), and a scholarly 'coffee table' book on northern Britain (The Kingdom of Northumbria: AD 350-1100). His latest book, however, is sure to cause the most sensation, for not only is it the most thorough examination of the crucial text for the 'Age of Arthur' (or 'Sub-Roman Britain,' to use the preferred scholarly term), it is also a study which puts forth radical - and likely to be controversial - views on Gildas's geography, chronology, and purpose in writing his enigmatic De Excidio Britanniae, 'On the Ruin of Britain.' Higham organizes his arguments logically and delivers them forcefully with an occasional rhetorical flourish reminiscent of E. A. Thompson, perhaps the last true historian to tackle such an ambitious study of the sources for a period all but abandoned to archaeological speculation. The book begins with a lengthy (chs. 1-3) examination of Gildas's prose style and purpose in writing. Here Higham arrives at the same conclusion which several literary scholars have reached in the last twenty years: Gildas adopts an obvious Biblical style to construct a providential history of Britain and a jeremiad against contemporary rulers and clerics. As a Latin writer Gildas is neither clumsy nor provincial, but rather erudite and, as Higham adeptly shows, continuing in a tradition of Christian historical writers of Late Antiquity that included Eusebius, Orosius, Sidonius Apollinaris, and Salvian of Marseilles. Higham then tackles two more controversial areas: Gildas's geographic location (ch. 5) and his dates (ch. 6). On the first matter, Higham again follows recent trends in preferring a southern Gildas. Wiltshire or Dorset are good candidates because they were Romanized areas with towns and near both the Saxon settlers in the East and the few geographic features (Verulamium, the rivers Thames and Severn) named by Gildas. In dating Gildas and the events he describes, however, Higham makes some radical assertions: the Siege of Mount Badon took place c.430 (instead of the traditional c.500), the Britons ultimately lost the 'War of the Saxon Federates' (by 441), and Gildas was writing in the fifth century (not c.540) under English domination. While others have argued on different grounds for a fifth-century Gildas, no modern scholar has placed both Badon and the English Conquest this early. Self-convinced, Higham then continues with an equally radical description of Gildas's Britain (ch. 6) and concludes with a brief Postscript (ch. 7) summarizing his peculiar take on Gildas's narrative. The few maps provided are helpful in following Higham's arguments, but the lack of a bibliography is lamentable.
def call_action():
    # Run the user-configured action on the event loop, passing trigger context
    # (platform, sun event, configured offset) taken from the enclosing scope.
    hass.async_run_job(
        action,
        {"trigger": {"platform": "sun", "event": event, "offset": offset}},
    )
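For context, a minimal hedged sketch of how a callback like this might be wired to a sun event using Home Assistant's event helpers; hass, action, and offset are assumed to come from the enclosing trigger setup, and the exact registration point in the real trigger module may differ:

from homeassistant.helpers.event import async_track_sunrise

# Hypothetical wiring: invoke call_action at sunrise plus the configured offset.
# The helper returns a callable that detaches the listener when invoked.
remove_listener = async_track_sunrise(hass, call_action, offset)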
Automated Prediction of Sudden Cardiac Death Risk Using Kolmogorov Complexity and Recurrence Quantification Analysis Features Extracted from HRV Signals Sudden Cardiac Death (SCD) is the unexpected sudden death of a person following Ventricular Fibrillation (VF) or Ventricular Tachycardia (VT), which is usually diagnosed using the Electrocardiogram (ECG). Predicting developing SCD is important for expeditious treatment and thus reducing the mortality rate. In our previous paper, we developed the Sudden Cardiac Death Index (SCDI) to predict SCD four minutes prior to its onset using nonlinear features extracted from Discrete Wavelet Transform (DWT) coefficients of ECG signals. In the present paper, we propose an automated prediction of SCD using Recurrence Quantification Analysis (RQA) and Kolmogorov complexity parameters extracted from Heart Rate Variability (HRV) signals. The extracted features, ranked using the t-test, are subjected to k-Nearest Neighbor (k-NN), Decision Tree (DT), Support Vector Machine (SVM) and Probabilistic Neural Network (PNN) classifiers for automated classification of normal and SCD classes for durations of 1 min, 2 min, 3 min and 4 min before SCD onset. Our results show that we are able to predict SCD four minutes before its onset with an average accuracy of 86.8%, sensitivity of 80%, and specificity of 94.4% using the k-NN classifier, and an average accuracy of 86.8%, sensitivity of 85%, and specificity of 88.8% using the PNN classifier. The performance of the proposed system can be improved further by adding more features and more robust classifiers.
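As a rough illustration of the classification stage described above - a sketch, not the authors' pipeline - the snippet below ranks HRV-derived features with a t-test and feeds the top-ranked subset to a k-NN classifier; the feature matrix X, labels y, and all parameter choices are placeholder assumptions:

import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder HRV feature matrix (e.g., RQA and Kolmogorov complexity measures)
# and labels: 1 for pre-SCD segments, 0 for normal sinus rhythm.
rng = np.random.default_rng(0)
X, y = rng.random((100, 20)), rng.integers(0, 2, 100)

# Rank features by absolute t-statistic between the two classes (t-test ranking).
t_stats, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
top = np.argsort(-np.abs(t_stats))[:10]  # keep the ten most discriminative features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))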
def promotion_inverse(self):
    # Inverse promotion: map through the tableau model, then back into the
    # classical crystal; the index is taken from the Cartan type.
    return lambda x: self.classical_crystal(
        x.to_tableau().promotion_inverse(self._cartan_type[1])
    )
/************************************************************************************/ /* Copyright (c) 2008-2011 The Department of Arts and Culture, */ /* The Government of the Republic of South Africa. */ /* */ /* Contributors: Meraka Institute, CSIR, South Africa. */ /* */ /* Permission is hereby granted, free of charge, to any person obtaining a copy */ /* of this software and associated documentation files (the "Software"), to deal */ /* in the Software without restriction, including without limitation the rights */ /* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell */ /* copies of the Software, and to permit persons to whom the Software is */ /* furnished to do so, subject to the following conditions: */ /* The above copyright notice and this permission notice shall be included in */ /* all copies or substantial portions of the Software. */ /* */ /* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR */ /* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, */ /* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE */ /* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER */ /* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, */ /* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN */ /* THE SOFTWARE. */ /* */ /************************************************************************************/ /* */ /* AUTHOR : <NAME> */ /* DATE : 12 October 2008 */ /* */ /************************************************************************************/ /* */ /* List implementation of SList container. */ /* */ /* */ /************************************************************************************/ #ifndef _SPCT_LIST_LIST_H__ #define _SPCT_LIST_LIST_H__ /** * @file list_list.h * Doubly linked list, list data container implementation. */ /** * @ingroup SList * @defgroup SListList Doubly Linked List * Doubly linked list, list data container implementation. * @{ */ /************************************************************************************/ /* */ /* Modules used */ /* */ /************************************************************************************/ #include "include/common.h" #include "base/containers/list/list.h" #include "containers/list/list.h" /************************************************************************************/ /* */ /* Begin external c declaration */ /* */ /************************************************************************************/ S_BEGIN_C_DECLS /************************************************************************************/ /* */ /* Macros */ /* */ /************************************************************************************/ /** * @hideinitializer * Return the given parent/child class object of an #SListList type as * an SListList object. * * @param SELF The given object. * * @return Given object as #SListList* type. * * @note This casting is not safety checked. */ #define S_LISTLIST(SELF) ((SListList *)(SELF)) /************************************************************************************/ /* */ /* SListList definition */ /* */ /************************************************************************************/ /** * The SListList structure. * Inherits and implements #SList as a doubly linked list. * @extends SList */ typedef struct { /** * @protected Inherit from #SList. */ SList obj; /** * @protected Doubly linked list container for values. 
*/ s_list *list; } SListList; /************************************************************************************/ /* */ /* SListListClass definition */ /* */ /************************************************************************************/ /** * Typedef for value-list list container class struct. Same as #SListClass as * we are not adding any new methods. */ typedef SListClass SListListClass; /************************************************************************************/ /* */ /* Function prototypes */ /* */ /************************************************************************************/ /** * Add the #SListList class to the object system. * @private @memberof SListList * @param error Error code. */ S_LOCAL void _s_list_list_class_add(s_erc *error); /************************************************************************************/ /* */ /* End external c declaration */ /* */ /************************************************************************************/ S_END_C_DECLS /** * @} * end documentation */ #endif /* _SPCT_LIST_LIST_H__ */
import { render } from '@testing-library/react-native' import * as React from 'react' import { Provider } from 'react-redux' import EscrowedPaymentLineItem from 'src/escrow/EscrowedPaymentLineItem' import { escrowPaymentDouble } from 'src/escrow/__mocks__' import { createMockStore } from 'test/utils' import { mockE164Number, mockE164NumberHashWithPepper, mockE164NumberPepper } from 'test/values' const mockName = '<NAME>' describe(EscrowedPaymentLineItem, () => { it('renders correctly', () => { const store = createMockStore({}) const tree = render( <Provider store={store}> <EscrowedPaymentLineItem payment={escrowPaymentDouble({})} /> </Provider> ) expect(tree).toMatchSnapshot() }) it('fetches the correct phone number from the identifier mapping', () => { const store = createMockStore({ identity: { e164NumberToSalt: { [mockE164Number]: mockE164NumberPepper, }, }, recipients: { phoneRecipientCache: {}, }, }) const tree = render( <Provider store={store}> <EscrowedPaymentLineItem payment={escrowPaymentDouble({ recipientIdentifier: mockE164NumberHashWithPepper, })} /> </Provider> ) expect(tree.toJSON()).toEqual(mockE164Number) }) it('fetches the correct name from the recipient cache', () => { const store = createMockStore({ identity: { e164NumberToSalt: { [mockE164Number]: mockE164NumberPepper, }, }, recipients: { phoneRecipientCache: { [mockE164Number]: { name: mockName, contactId: '123', }, }, }, }) const tree = render( <Provider store={store}> <EscrowedPaymentLineItem payment={escrowPaymentDouble({ recipientIdentifier: mockE164NumberHashWithPepper, })} /> </Provider> ) expect(tree.toJSON()).toEqual(mockName) }) })
Are Alloplastic Implants Safe in Rhinoplasty? BACKGROUND Over 1 million rhinoplasties are performed worldwide each year. While autologous cartilage remains the gold standard for implantable material, controversy exists regarding the role of alloplastic implants and alleged risks of infection and extrusion. Some experts avoid alloplastic implants altogether due to feared complications in unforgiving anatomy. In other areas of the world, however, they are used widely and successfully. The aim of this study is to summarize the evidence about the safety of alloplastic materials in rhinoplasty.
package sk.filo.plantdiary.enums; public enum ExceptionCode { INVALID_CREDENTIALS, DISABLED_USER, SESSION_EXPIRED, PLANT_NOT_FOUND, EVENT_NOT_FOUND, EVENT_TYPE_NOT_FOUND, USER_NOT_FOUND, LOCATION_NOT_FOUND, PHOTO_PROCESSING_FAILED, PHOTO_NOT_FOUND, PLANT_TYPE_NOT_FOUND, USERNAME_IN_USE, EMAIL_IN_USE, SCHEDULE_NOT_FOUND, DUPLICATE_SCHEDULE_OF_SAME_TYPE, EVENT_TYPE_NOT_SCHEDULABLE; }
MicroRNA-9501 inhibits breast cancer proliferation and metastasis through regulating the Wnt/β-catenin pathway. OBJECTIVE This research was designed to explore the expression characteristics of microRNA-9501 in breast cancer (BCa), and to further explore whether it can influence the development of BCa through regulation of the Wnt/β-catenin pathway. PATIENTS AND METHODS QPCR was carried out to examine the microRNA-9501 level in tumor tissue samples and paracancerous ones collected from 42 BCa patients, and the interplay between microRNA-9501 expression and the clinical indicators, as well as the prognosis of BCa patients, was analyzed. In addition, we detected microRNA-9501 expression in BCa cell lines by qPCR. Subsequently, a microRNA-9501 overexpression model was constructed in the BCa cell lines MCF-7 and MDA-MB-231. Then, CCK-8, EdU, cell wound healing, as well as transwell assays, were carried out to evaluate the impact of microRNA-9501 on the biological functions of BCa cells. Finally, the dual-luciferase reporter test and a tumor formation experiment in nude mice were conducted to further clarify the potential molecular mechanism. RESULTS QPCR results indicated that the microRNA-9501 level in tumor tissue specimens of BCa patients was remarkably lower than that in adjacent ones, and the difference was statistically significant. Compared with patients with high expression of microRNA-9501, patients with lowly-expressed microRNA-9501 had higher tumor stage, higher incidence of lymph node or distant metastasis, and lower overall survival rate. In addition, compared with the control group, cells in the microRNA-9501 overexpression group showed a significant decrease in proliferation rate, invasiveness, and migration ability. Meanwhile, the luciferase reporter assay revealed that overexpression of β-catenin remarkably attenuated the luciferase activity of the vector containing wild-type microRNA-9501 sequences, further demonstrating that microRNA-9501 can be targeted by β-catenin. Meanwhile, qPCR revealed a negative association between β-catenin and microRNA-9501 in BCa tissues. Finally, tumor-bearing experiments in nude mice also demonstrated that microRNA-9501 may suppress the malignant growth of breast tumors. CONCLUSIONS MicroRNA-9501 expression was found remarkably decreased in BCa tissues and cell lines, which was closely relevant to the pathological stage, metastasis incidence, and prognosis of BCa patients. In addition, microRNA-9501 may suppress the malignant progression of BCa via modulating the Wnt/β-catenin pathway.
We are recruiting for a fast-growing healthcare company based in Worcester. This is an excellent opportunity for a business-minded person seeking a new and exciting leadership role within the domiciliary care sector. We are looking for an experienced Registered Manager who is happy to join a fast-growing team covering Worcestershire and the surrounding areas. Accountable to the business owners, you will be responsible for the safe and secure delivery of care to our customers, actively participate in the growth and development of the company, and manage budgets to ensure the profitability of the business. As the Registered Manager you will be fundamental to the operational day-to-day running of the branch, including allocation of care staff, quality control, process and systems management, people management, complaints, and business development. The successful candidate must meet CQC Registered Manager requirements and also be dedicated to achieving the highest quality care standards. This position will offer the right candidate the opportunity to deliver continued growth of a client-focused care business of the highest quality.
- Managing the setup and delivery of a domiciliary care service.
- Managing the day-to-day running of the business and acting as the person-in-charge, reporting to the Directors.
- Identifying opportunities for growth and development and working with the Director to achieve targets and deliver within budget.
- Being responsible for the safe delivery of the service in line with legislative requirements and company policy and procedures.
- Managing the effective recruitment, induction and training of the community care workers.
- Promoting, driving and growing brand new care packages.
- Managing staff rotas and on-call responsibilities.
- Generous holiday allowance as well as bank holidays.
- Must hold a current driving licence and have own vehicle.
- Experience of managing an effective team.
- Ideally at least 3-5 years' recent relevant experience in a management position for the Registered Manager's role.
- The ability to develop and promote positive working relationships with individual service users, their family and professional colleagues.
- A positive attitude to work and to change.
- The ability to deal effectively with crises/emergencies.
- Provide leadership, management and support to the branch team.
- Understanding of CQC assessment criteria.
- Experience of care services, risk assessment and person-centred care and support.
- Understanding of person-centred care and of the rights and needs of the service user.
- Knowledge and experience in dealing with staffing and HR issues.
You will likely already be a successful Registered Manager with a strong track record and know the stakeholders in your community well. You will be a busy team builder and leader, so you will need business, sales and staff management experience and the ability to grow with our business and move quickly with change. We want someone who will go the extra mile and put quality at the forefront of what they do while managing a team and their ongoing development. Experience of running a domiciliary care service alongside strong leadership and management qualities is essential. If this sounds like a role you'd be interested in, then do apply. About Company Greenhill Support is a reliable and fast-growing domiciliary care company based in Northamptonshire, providing excellent care solutions to clients.
Synchronous changes were found in the intracellular ATP content and in the ratio of "dark" and "light" cells in the monolayer culture of hepatocytes. An attempt was made to alter these ratios through the action of hormones. The rhythmical processes in the culture are interpreted as a universal cell property.
<filename>src/cogs/mofupoints.py from discord.ext import commands import random from .utils.dbms import db from .utils.prettyList import prettyList def giveMofuPoints(user, points): db.set_data( """INSERT INTO users (id, mofupoints) VALUES(%s, %s) ON CONFLICT(id) DO UPDATE SET mofupoints = users.mofupoints + %s""", (user.id, points, points), ) def incrementEmbedCounter(user): db.set_data( """INSERT INTO users (id, numberOfEmbedRequests) VALUES(%s, 1) ON CONFLICT(id) DO UPDATE SET numberOfEmbedRequests = users.numberOfEmbedRequests + 1""", (user.id,), ) class MofuPoints(commands.Cog): def __init__(self, bot): self.bot = bot def getUsersLeaderboard(self, ctx, category): if category == "mofupoints": rows = db.get_data( """SELECT id, mofupoints FROM users ORDER BY mofupoints DESC""" ) elif category == "numberOfEmbedRequests": rows = db.get_data( """SELECT id, numberOfEmbedRequests FROM users ORDER BY numberOfEmbedRequests DESC""" ) else: raise ValueError( "Unknown category. Available args: mofupoints, numberOfEmbedRequests" ) users = [] if ctx.guild is None: for k, v in rows: user = self.bot.get_user(k) name = user.name if user is not None else k users.append((v, name)) else: for k, v in rows: user = self.bot.get_user(k) if user in ctx.guild.members: name = user.name if user is not None else k users.append((v, name)) return users @commands.command(aliases=["top"]) async def leaderboard(self, ctx): """Show the leaderboard for the top fluffer""" users = self.getUsersLeaderboard(ctx, "mofupoints") title = "***MOFUPOINTS LEADERBOARD***" await prettyList(ctx, title, users, "points") @commands.command(aliases=["requesttop"]) async def nolife(self, ctx): """leaderboard shows the people who requested the most pictures""" users = self.getUsersLeaderboard(ctx, "numberOfEmbedRequests") title = "***NO LIFE LEADERBOARD (people who requested the most images)***" await prettyList(ctx, title, users, "requests") @commands.cooldown(1, 3600 * 24, commands.BucketType.user) @commands.command() async def daily(self, ctx): """Get your daily portion of mofupoints""" amount = random.randint(10, 50) giveMofuPoints(ctx.author, amount) msg = f"You received {amount} points! They've been added to your balance!" await ctx.send(msg) def setup(bot: commands.Bot): bot.add_cog(MofuPoints(bot))
<gh_stars>10-100 /** * Copyright (c) 2021 <NAME> * * This software is released under the MIT License. * https://opensource.org/licenses/MIT */ import TimeLimitItem from "@entity/dto/time-limit-item" import limitService from "@service/limit-service" import { t2Chrome } from "@util/i18n/chrome/t" import { ChromeMessage } from "@util/message" const maskStyle: Partial<CSSStyleDeclaration> = { width: "100%", height: "100%", position: "absolute", zIndex: '99999', backgroundColor: '#444', opacity: '0.9', display: 'block', top: '0px', left: '0px', textAlign: 'center', paddingTop: '120px' } const linkStyle: Partial<CSSStyleDeclaration> = { color: '#EEE', fontFamily: '-apple-system,BlinkMacSystemFont,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Segoe UI","PingFang SC","Hiragino Sans GB","Microsoft YaHei","Helvetica Neue",Helvetica,Arial,sans-serif', fontSize: '16px !important' } function messageCode(url: string): ChromeMessage<string> { return { code: 'openLimitPage', data: encodeURIComponent(url) } } let mask: HTMLDivElement function link2Setup(url: string): HTMLParagraphElement { const link = document.createElement('a') Object.assign(link.style, linkStyle) link.setAttribute('href', 'javascript:void(0)') const text = t2Chrome(msg => msg.message.timeLimitMsg) .replace('{appName}', t2Chrome(msg => msg.app.name)) link.innerText = text link.onclick = () => chrome.runtime.sendMessage(messageCode(url)) const p = document.createElement('p') p.append(link) return p } function moreMinutes(url: string, limitedRules: TimeLimitItem[]): HTMLParagraphElement { const p = document.createElement('p') p.style.marginTop = '100px' const canDelayRules = limitedRules.filter(r => r.allowDelay) if (canDelayRules && canDelayRules.length) { // Only delay-allowed rules exist, can delay // @since 0.4.0 const link = document.createElement('a') Object.assign(link.style, linkStyle) link.setAttribute('href', 'javascript:void(0)') const text = t2Chrome(msg => msg.message.more5Minutes) link.innerText = text link.onclick = async () => { await limitService.moreMinutes(url, canDelayRules) mask.remove() document.body.style.overflow = '' } p.append(link) } return p } function generateMask(url: string, limitedRules: TimeLimitItem[]): HTMLDivElement { const modalMask = document.createElement('div') modalMask.id = "_timer_mask" modalMask.append(link2Setup(url), moreMinutes(url, limitedRules)) Object.assign(modalMask.style, maskStyle) return modalMask } export default async function processLimit(url: string) { const limitedRules: TimeLimitItem[] = await limitService.getLimited(url) if (!limitedRules.length) return mask = generateMask(url, limitedRules) window.onload = () => { document.body.append(mask) document.body.style.overflow = 'hidden' } }
Across-Species Pose Estimation in Poultry Based on Images Using Deep Learning Animal pose-estimation networks enable automated estimation of key body points in images or videos. This enables animal breeders to collect pose information repeatedly on a large number of animals. However, the success of pose-estimation networks depends in part on the availability of data to learn the representation of key body points. Especially with animals, data collection is not always easy, and data annotation is laborious and time-consuming. The available data is therefore often limited, but data from other species might be useful, either by itself or in combination with the target species. In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Broilers and turkeys were video recorded during a walkway test representative of the situation in practice. Two single-species and one multi-species model were trained by using DeepLabCut and tested on two single-species test sets. Overall, the within-species models outperformed the multi-species model, and the models applied across species, as shown by a lower raw pixel error, normalized pixel error, and higher percentage of keypoints remaining (PKR). The multi-species model had slightly higher errors with a lower PKR than the within-species models but had less than half the number of annotated frames available from each species. Compared to the single-species broiler model, the multi-species model achieved lower errors for the head, left foot, and right knee keypoints, although with a lower PKR. Across species, keypoint predictions resulted in high errors and low to moderate PKRs and are unlikely to be of direct use for pose and gait assessments. A multi-species model may reduce annotation needs without a large impact on performance for pose assessment, however, with the recommendation to only be used if the species are comparable. If a single-species model exists it could be used as a pre-trained model for training a new model, and possibly require a limited amount of new data. Future studies should investigate the accuracy needed for pose and gait assessments and estimate genetic parameters for the new phenotypes before pose-estimation networks can be applied in practice.
Keywords: broilers, computer vision, deep learning, gait, multi-species, pose-estimation, turkeys, within-species INTRODUCTION In poultry production, locomotion is an important health and welfare trait. Impaired locomotion is a major welfare concern (Scientific Committee on Animal Health and Animal Welfare, 2000), and a cause of economic losses in both turkeys and broilers (Sullivan, 1994). Impaired locomotion has been linked to high growth rate, high body weight, infection, and housing conditions (e.g., light and feeding regime) in broilers. Birds with impaired locomotion have trouble accessing feed and water, performing motivated behaviors like dust bathing (Vestergaard and Sanotra, 1999), and likely also with peck avoidance. Studies have reported that approximately 15-28% of the broilers and approximately 8-13% of the turkeys examined had impaired locomotion. Gait-scoring systems have been developed for both turkeys and broilers. Generally, a human expert judges the gait of an animal from behind, or from the side, based on several locomotion factors, which often relate to the fluidity of movement and leg conformation. Gait scores were found to be heritable in turkeys. The gait scores are valuable to breeding programs, yet the gait-scoring process is laborious, and gait scores are prone to subjectivity. Sensor technologies could provide relatively effortless, non-invasive, and objective gait assessments, while also allowing for the assessment of a larger number of animals with higher frequency. Pose-estimation networks that use deep learning can be trained to predict the spatial location of key body points in an image or video frame, and hence make physical markers placed on key body points obsolete. Pose-estimation networks enable repeated pose assessment on a large number of animals, which is needed to achieve accurate breeding values. Pose-estimation methods that use deep learning can learn the representation of key body points from annotated training data.
In brief, these pose-estimation methods based on deep learning consist of two parts: a feature extractor that extracts visual features from a video image (frame), and a predictor that uses the output of the feature extractor to predict the body part and its location in the frame. In part, the success of a supervised deep learning model depends on the availability of annotated data to learn these representations. In the human domain, markerless pose estimation has been an active field of research for many years (e.g., Toshev and Szegedy, 2014) and large datasets have been collected over the years. Animal pose estimation has been investigated in more recent studies, but large datasets remain scarce. One dataset is publicly available; however, it is smaller than the human pose-estimation datasets and does not include broilers or turkeys. The creation of large datasets is difficult; large-scale animal data collection is not always easy, and data annotation is laborious and time-consuming. Therefore, efforts should be made to investigate methods that could permit deep-learning-based pose-estimation networks to work with limited data, and with that reduce annotation needs. One method to work with limited data could be the use of data from different sources, like different species. Only a few studies have investigated the use of pose data from one or multiple species on another species. In Sanakoyeu et al., a chimpanzee pose-estimation network was trained on chimpanzee pseudo-labels originating from a network trained on data of humans and other species (bear, dog, elephant, cat, horse, cow, bird, sheep, zebra, giraffe, and mouse). Pseudo-labels are labels that are predicted by a model and not the result of manual annotation. In Mathis et al., a part of the research focused on the generalization of a pose-estimation network across species (horse, dog, sheep, cat, and cow). The pose-estimation network was trained on one or all other animal species whilst withholding either sheep or cow as test data. In both Mathis et al. and Sanakoyeu et al., despite differences in approach, pre-training with multiple species or training with multiple species resulted in better performance on the unseen species than pre-training or training with one species. However, it is unclear whether the improved performance stems from larger data availability or from the multi-species data, since no notion of dataset size was given. Furthermore, the investigated species were visually distinct, which might have affected the performance of the networks. The objective of this study is to investigate the across-species performance of an animal pose-estimation network trained on broilers and tested on turkeys, and vice versa. Furthermore, since the interest is in working with limited data, the performance of an animal pose-estimation network trained on a multi-species training dataset (turkeys and broilers) will also be investigated. A multi-species dataset could potentially reduce annotation needs in both species without a negative effect on performance. Data Collection The data used in this research were collected in two different trials, one for turkeys and one for broilers. The data was not specifically collected for this study, but is representative of the situation in practice. In both cases, the data collection will be presented separately, though with a similar structure for easier comparison. Turkeys Data were collected on 83 male breeder turkeys at 20 weeks of age.
This is approximately the slaughter age for commercial turkeys. The animals were subjected to a standard walkway test applied in the turkey breeding program of Hybrid Turkeys (Hendrix Genetics, Boxmeer, The Netherlands). The birds were stimulated to walk along a corridor (width: ∼1.5 m, length: ∼6 m) within the barn. Video recordings (RGB) were made from behind with an Intel RealSense Depth Camera D415 (Intel Corporation, Santa Clara, United States; resolution: 1,280 × 720, frame rate: 30). The camera was set up on a small tripod on a bucket to get a clear view of the legs of the birds. The camera was parallel to the ground and in the center of the walkway. A person trailed behind the birds to stimulate walking and, if needed, waved their hand or tapped the back of the bird. During the trial, the birds were equipped with three IMUs, one around the neck, the other two just above the hock. The IMU data was not used in this study, but the IMUs were visible in the videos. Other birds were visible through wire-mesh fencing. The videos were cropped to a size of 600 × 720 to reduce the visibility of other turkeys through the wire-mesh fencing. The birds were housed under typical commercial circumstances. Broilers Data were collected on 47 conventional broilers at 37 days of age. The broilers were in the finishing stage, nearing the slaughter age of 41 days (Van Horne, 2020). The birds were stimulated to walk along a corridor (width: ∼0.4 m, length: ∼3 m) within the pen. Video recordings (RGB) were made from behind with the same Intel RealSense Depth Camera D415 as used in the turkey experiment. The camera was set up in a fixed position on a metal rig attached to the front panel of the runway to get a clear view of the legs of the birds from behind. The camera was parallel to the ground and in the center of the walkway. The birds were stimulated to walk with a black screen made of wire netting on a stick. Other birds were not visible due to non-transparent side panels. The videos were not cropped since other broilers were not visible. The birds were housed in an experimental facility with a low stocking density (25 birds on 6 m²) but with a standard light and feeding regime. Frame Extraction and Annotation The toolbox of DeepLabCut 2.0 (version 2.1.8.2) was used to extract and annotate the frames from the collected RGB videos (Table 1). For the turkeys, 20 frames per video/turkey were manually extracted to ensure no other animals were visible within the walkway and to exclude frames with human-animal interaction. For two turkeys, 50 frames were extracted. These two turkeys were part of our initial trial with DeepLabCut and hence had more annotated frames available. For the broilers, 40 frames per video/broiler were extracted, randomly sampled from a uniform distribution across time. The number of frames per broiler was roughly double the number of frames per turkey because the number of available broiler videos was roughly half the number of available turkey videos. In principle, eight keypoints were annotated in each frame: head, neck, left knee, left hock, left foot, right knee, right hock, right foot (Figure 1). However, in some frames not all keypoints were visible (e.g., rump obscuring the head because the bird put its head down); these frames were retained, but the occluded keypoint was not annotated. The annotations are visually estimated locations founded on morphological knowledge, but can deviate from ground truth, particularly for keypoints obscured by plumage.
The head was annotated at the top, the neck at the base, the knees at the estimated location of the knee, the hocks at the transition of the feathers into scales, and the feet approximately at the height of the first toe in the middle. The annotated data consisted of the x and y coordinates of the visible keypoints within the frames. Extracted frames with no animal in view or no visible keypoints (i.e., animal too close to the camera) were not annotated and subsequently removed. This only occurred in broiler frames, due to the random frame extraction for the broilers vs. the manual frame extraction for the turkeys. Altogether, a total of 350 broiler frames were removed. There was no threshold on the minimal number of keypoints within a frame. In total, 3,277 frames were annotated by one annotator, consisting of 1,747 turkey frames and 1,530 broiler frames. The number of frames differed per animal (Table 1). Datasets for Training and Testing Five datasets were created from the annotated frames to train and test pose-estimation networks: two turkey datasets, two broiler datasets, and one multi-species training (turkey and broiler) dataset (Table 2). The single-species datasets were created by splitting the total number of frames into a training and test set (80 and 20%, respectively). Animals in the test set did not occur in the training set. Most animals in the test set were randomly selected; some were selected to get a proper 80/20 split, since the number of frames differed per animal. The remainder of the frames made up the training data. The multi-species dataset was a combination of turkey and broiler training frames. Most animals in the multi-species dataset were randomly selected from the animals in the turkey and broiler training sets; some were selected to get the correct total number of frames. The five datasets thus consisted of three training datasets (turkey, broiler, multi-species) and two test datasets (turkey and broiler). An overview of the datasets is provided in Table 2; for the multi-species dataset, two numbers are reported, the first relating to turkeys and the second to broilers. Pose-Estimation DeepLabCut is an open-source deep-learning-based pose-estimation tool. In DeepLabCut, the feature detector from DeeperCut is followed by deconvolutional layers to produce a score-map and a location refinement field for each keypoint. The score-map encodes the location probabilities of the keypoints (Figure 2). [FIGURE 2 | Example of a broiler score-map. The score-map encodes the location probabilities of the keypoints.] The location refinement field predicts an offset to counteract the effect of the down-sampled score-map. The feature detector is a variant of deep residual neural networks (ResNet-50) pre-trained on ImageNet, a large-scale dataset for object recognition. The pre-trained network was fine-tuned for our task. This fine-tuning improves performance, reduces computational time, and reduces data requirements. During fine-tuning, the weights of the pre-trained network are iteratively adjusted on the training data of our task to ensure that the network returns high probabilities for the annotated keypoint locations. DeepLabCut returns the location $(\hat{x}_i, \hat{y}_i)$ with the highest likelihood $(\hat{p}_i)$ for each predicted keypoint in each frame (Figure 2). Analyses DeepLabCut (core, version 2.1.8.1) was used to train three networks, one for each training dataset.
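For orientation, a minimal sketch of the typical DeepLabCut workflow follows; the project name, annotator name, and video paths are hypothetical placeholders, and the calls shown are the standard DeepLabCut Python API rather than the authors' exact scripts:

import deeplabcut

# Hypothetical project; in this study one network was trained per training
# dataset (turkey, broiler, multi-species).
config = deeplabcut.create_new_project(
    "poultry-pose", "annotator", ["videos/turkey_01.mp4"], copy_videos=True
)
deeplabcut.extract_frames(config, mode="automatic", algo="uniform")  # sample frames
deeplabcut.label_frames(config)                 # manual keypoint annotation GUI
deeplabcut.create_training_dataset(config)      # build the train/test split
deeplabcut.train_network(config, maxiters=1030000)  # ~1.03 million iterations
deeplabcut.evaluate_network(config)             # pixel errors on train/test frames
deeplabcut.analyze_videos(config, ["videos/turkey_02.mp4"])  # predict keypoints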
All three networks were tested on both test datasets (turkey and broiler), thus within-species and across-species (Table 2). The model and test set will be indicated with the following convention: the first letter denotes the model, and the second letter the test set, i.e., MT stands for multi-species model on turkey test set and BB stands for broiler model on broiler test set. All three networks were trained with default parameters for 1.03 million iterations (default). The number of epochs - the number of times the entire dataset is presented to the network - differed between networks due to different training set sizes (turkey: 737 epochs; broiler: 841 epochs; multi-species: 858 epochs). In Table 2, a testing scheme is presented. The testing scheme shows within-species (TT and BB), across-species (TB and BT) and multi-species model (MT and MB) testing. The within-species test established the performance of the networks on the species on which the model was trained. The across-species test was used to assess a network's performance across species, i.e., on the species on which the model was not trained. The multi-species model was tested on both test sets to assess the performance of a network trained with a combination of species and fewer annotations per species. Evaluation Metrics The performance of the models was evaluated with the raw pixel error, the normalized pixel error, and the percentage of keypoints remaining (PKR). The raw pixel error and normalized pixel error were calculated for all keypoints or for keypoints with a likelihood higher than or equal to 0.6 (default in DeepLabCut). The raw pixel error was expressed as the Euclidean distance between the x and y coordinates of the model predictions and the human annotator:

$d_{ij} = \sqrt{(\hat{x}_{ij} - x_{ij})^2 + (\hat{y}_{ij} - y_{ij})^2}$ (1)

where $d_{ij}$ is the Euclidean distance between the predicted location of keypoint i, $(\hat{x}_{ij}, \hat{y}_{ij})$, and its annotated location, $(x_{ij}, y_{ij})$, in frame j. The average Euclidean distance was calculated per keypoint over all frames:

$\bar{d}_i = \frac{1}{N_i} \sum_{j=1}^{N} d_{ij}$ (2)

where $\bar{d}_i$ is the average Euclidean distance of keypoint i, N is the total number of frames, and $N_i$ is the number of frames in which keypoint i was annotated, thus visible. The overall average Euclidean distance was calculated over all keypoints over all frames:

$\bar{d} = \frac{1}{|M|} \sum_{(i,j) \in M} d_{ij}$ (3)

where $\bar{d}$ is the overall average Euclidean distance and M is the set of all valid annotations of all keypoints i in all frames j. Since the animal is moving away from the camera, the size of the animal in relation to the frame changes, i.e., the animal becomes smaller. The normalized pixel error corrects the raw pixel error for the size of the animal in the frame, i.e., a pixel error of five pixels when the animal is near the camera is better than a pixel error of five pixels when the animal is further from the camera. The raw pixel errors were normalized by the square root of the bounding box area of the legs, as head and neck keypoints were not always visible. The bounding box was constructed from the annotated keypoints to ensure that the normalization of the raw pixel error was independent of the predictions. The square root of the bounding box area penalized the pixel errors less for large bounding boxes than for small bounding boxes. The normalized pixel error was calculated as follows:

$\mathrm{Norm}d_{ij} = \frac{d_{ij}}{\sqrt{\mathrm{Area}(L_j)}}$ (4)

where $d_{ij}$ is the raw pixel error as in Equation (1) and $L_j$ is the set of annotated leg keypoint coordinates, $(x_i, y_i)$, in frame j, from which the bounding box area is computed. Leg keypoints consist of the knees, the hocks, and the feet.
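To make the metrics concrete, here is a small illustrative sketch - not the authors' code - computing the raw error of Equation (1), the leg-bounding-box-normalized error of Equation (4), and the PKR for a single frame; the coordinate arrays and likelihoods are simulated placeholders, and the 0.6 cut-off follows DeepLabCut's default:

import numpy as np

rng = np.random.default_rng(0)

# Annotated (x, y) per keypoint for one frame: head, neck, then six leg
# keypoints (left/right knee, hock, foot); values are placeholders.
ann = np.array([[310., 120.], [305., 180.], [290., 300.], [292., 360.],
                [295., 420.], [330., 300.], [328., 362.], [331., 424.]])
pred = ann + rng.normal(0, 3, ann.shape)      # simulated model predictions
lik = rng.uniform(0.4, 1.0, len(ann))         # simulated prediction likelihoods
legs = slice(2, 8)                            # indices of the six leg keypoints

raw = np.linalg.norm(pred - ann, axis=1)      # Equation (1), per keypoint

# Equation (4): divide by the square root of the annotated leg bounding-box area.
w = ann[legs, 0].max() - ann[legs, 0].min()
h = ann[legs, 1].max() - ann[legs, 1].min()
norm = raw / np.sqrt(w * h)

keep = lik >= 0.6                             # DeepLabCut's default likelihood cut-off
pkr = 100 * keep.sum() / keep.size            # percentage of keypoints remaining
print(raw.mean(), norm[keep].mean(), pkr)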
The normalized pixel error was reported either as the average normalized error per keypoint, as in Equation (2), or as the overall average normalized error, as in Equation (3), with $d_{ij}$ substituted by $\mathrm{Norm}d_{ij}$. The PKR is the percentage of keypoints with a likelihood higher than or equal to 0.6 out of the total number of keypoints with a Euclidean distance. Only annotated keypoints have a Euclidean distance (see also Equation 1). The PKR is a proxy for the confidence of the model. The PKR should always be considered in unison with the pixel error: a model with a high PKR and a low pixel error is confidently right. RESULTS The models were used to investigate the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data. The models were tested according to the testing scheme in Table 3. The performances of all models over all keypoints are shown in Tables 4, 5. Comparison Between Within-Species, Across-Species, and Multi-Species On all evaluation metrics calculated over all keypoints, the within-species models (TT, BB) outperformed the multi-species model (MT, MB) and the models applied across species (TB, BT) (Tables 4, 5). The within-species models had lower raw pixel errors, lower normalized pixel errors, and higher PKRs than the multi-species model and the models applied across species. Compared to the within-species models, the multi-species model had slightly higher normalized errors (+0.01). However, the errors across species were considerably higher (+0.57; +0.49) than they were for the within-species models. Performance varied per keypoint, not only within models but also between models (Table 6). In general, the head, neck and knee keypoints were predicted with the highest errors. Across species, the models always performed worse than the within-species counterpart and the multi-species model. On the broiler test set, the multi-species model outperformed the broiler model for the head and right knee keypoints, although this did coincide with a lower PKR. The turkey model had either a similar or better performance than the multi-species model on the turkey test set, but the multi-species model did generally have a lower PKR. Within-Species On the training dataset, both within-species models (TT, BB) showed comparable raw pixel errors and normalized pixel errors (Table 4). The turkey model (TT) had a lower raw and normalized pixel error and higher PKR than the broiler model (BB). The turkey model had the lowest keypoint errors for the left hock and left foot and the highest error for the right knee keypoint (Table 6). The right knee keypoint error was 0.03 higher than the left knee keypoint error. The leg keypoint errors of the broiler model were rather consistent within each leg, except for the right knee keypoint. Multi-Species Multi-species model performance differed between species (MT, MB; Table 4). The multi-species model performed better on the turkey test set (MT) than it did on the broiler test set (MB). The multi-species model on the turkey test set had the highest error for the neck keypoint and the lowest error for the left hock keypoint (Table 6). On the broiler test set, the multi-species model had the highest errors for the hocks and right knee keypoints, and the lowest error for the head keypoint. Across-Species Across species, the turkey and broiler models had high errors (TB, BT; Table 5).
The turkey model on the broiler test set had the highest error for the head keypoint, whereas the left foot keypoint had the lowest error (Table 6). The broiler model on the turkey test set also had the highest error for the head keypoint and the lowest error for the left foot keypoint. DISCUSSION In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. The results showed that within species the models had the best performance, followed by the multi-species model, and across species the models had the worst performance, as illustrated by the raw pixel errors, normalized pixel errors, and PKRs. However, the multi-species model outperformed the broiler model on the broiler test set for the head, left foot, and right knee keypoints, though with a lower PKR. Data Availability and Model Performance The turkey model outperformed the broiler model on the within-species test set (Table 4), even though both models had approximately comparable raw pixel errors and normalized pixel errors on the training dataset. For the turkeys, the training set was slightly larger (n = 1,397) than the broiler training set (n = 1,224), which might explain the difference in performance. However, the turkey test set was likely less challenging, as the difference between the unfiltered and filtered error was smaller for the turkey model than it was for the broiler model. The difference in difficulty can partly be explained by the difference in frame extraction. The broiler dataset consisted of frames that were randomly sampled from a uniform distribution across time, whereas the turkey dataset consisted of consecutive frames. The temporal correlation between the frames may explain why the turkey test set was less challenging. Overall, the multi-species model had higher errors and a lower PKR than the single-species models. Yet, compared to the within-species models, the multi-species model had less than half the number of annotated frames of the tested species. Interestingly, the multi-species model performed better than or similar to the single-species models for certain keypoints, but with less confidence, hence a lower PKR. The multi-species model performance suggests that data from the other species was useful to improve performance for certain keypoints, but did lower the PKR. The lower PKR is more apparent on the broiler test set but also noticeable on the turkey test set. The lower PKR may be caused by an interplay between the inclusion of other species' training data and a lower variability within the species-specific training data. The pose-estimation networks applied across species had no data available on the target species and could still estimate keypoints. Those keypoint estimates appear to be relatively informed, as indicated by the normalized errors. This suggests that, in the case of comparable species, with an existing model and limited availability of data on the new species, the existing model could be fine-tuned on limited data of the target species. The performance of the pose-estimation models confirmed that the success of a supervised deep learning model depends on the availability of data, as was noted by Sun et al. Across species, the head and neck showed high normalized pixel errors for both turkeys and broilers. Across-species pose estimation is influenced by differences in the appearance of the animals and differences in environment.
There are inherent differences in appearance between turkeys and broilers, especially concerning the head and neck. A turkey head is featherless and has a light-blue tint, whereas a broiler head is feathered and white. In our case, it appears that DeepLabCut was dependent on the color of the keypoints. The broiler model tended to predict the turkey head in the white overhead lights, on workers' white boots, and on turkeys at the end of the walkway. These locations were relatively far away from the bird, as indicated by the pixel error. A model that uses spatial information of other keypoints within a frame could notice that these predicted keypoints are too far off and search for the second-best location closer to the other keypoints. This suggests that single-animal DeepLabCut could benefit from the use of spatial information of other keypoints within a frame, as was also noted by Labuguen et al. Data Collection In this study, the data was collected in two different trials, one for turkeys and one for broilers, but neither specific to this study. Recording both species in the same setting under the same conditions may have been better for assessing model performance between the two species, but this can only be done in an experimental setting, which often poorly translates to practical implementation. The datasets used here were representative of the situation in practice for poultry breeding programs. In the end, the models will have to work in less regulated environments, i.e., barns and pens, to be of use. In the turkey trial, multiple sensors collected data to assess the gait of the animals. The trial did not only involve a video camera: the animals were also equipped with IMUs, and there was a force plate hidden underneath the bedding. The IMUs were attached to both legs and the neck, and hence they were visible in the turkey video frames. The presence of the IMUs was likely picked up by the pose-estimation network, as the hocks often had the lowest normalized pixel error and highest PKR of all keypoints within a turkey leg. Likewise, when the broiler model was tested on the turkey test set, it tended to predict the hocks at the transition of the Velcro strap of the IMU to the feathers, instead of the transition from scales into feathers. The presence of external sensors seems to have influenced the performance of the pose-estimation networks on the turkey test set. The turkey trial was conducted during a standard walkway test applied in the turkey breeding program of Hybrid Turkeys (Hendrix Genetics, Boxmeer, The Netherlands), and therefore representative of a practical gait-scoring situation. The turkeys were stimulated to walk by a worker, causing occlusions in the frames. However, occlusions could also occur because of another bird in the queue while the bird of interest was still walking. In the turkey dataset, only frames without occlusion by a worker or other bird were included. These occlusions limit the amount of usable data available for gait and pose estimation. The occlusions did not hinder the human expert, who can move around freely, while the camera is in a fixed position. In an ideal situation, each animal is walked one-by-one for the full extent of the walkway, as was done with the broilers. This will not only make the videos more usable but also allow for a better sampling of the frames to train a network. Annotation During the annotation process, not all keypoints could be annotated with equal accuracy.
For both turkeys and broilers, the knees were annotated at the estimated location, as the knees of the birds cannot be observed directly. The uncertainty in labeling, and thereby the variability in labeling, declined when the animal was further away from the camera, since the likely knee area simply declined, but annotator uncertainty was still present. The larger likely knee area when the animal was near the camera, coupled with the annotator uncertainty, is likely to increase the raw pixel errors. The annotator uncertainty probably increased the variability of the knee keypoint annotations, which would have a negative effect on the PKR, as the network would have more trouble learning the knee keypoint. The annotator uncertainty becomes evident when we look at the normalized pixel error and the PKR of the turkey and broiler models applied within species. The knees had the highest normalized pixel error and/or lowest PKR of the keypoints within each leg. Ideally, the normalized pixel error of the knees would have reflected the decline of the likely knee area by being equal to the normalized pixel error of the other keypoints within the leg. However, the normalized pixel error of the knee keypoints was only equal to the normalized pixel error of the other keypoints within the left leg of the broilers; in all other cases, it was higher, showing that labeling uncertainty was still present. Prospects This study provides insight into the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data. Accurate pose-estimation networks enable automated estimation of key body points in images or video frames, which is a prerequisite for using cameras for objective assessment of poses and gaits; hence, within-species trained models would perform best, if sufficient annotated data is available on the species. Within-species models will provide more accurate keypoints from which more accurate spatiotemporal (e.g., step time and speed) and kinematic (e.g., joint angles) gait and pose parameters can be estimated. In case of limited data availability, a multi-species model could potentially be considered for pose assessment without a large impact on performance if the species used are comparable. The across-species keypoint estimates may not be precise enough for accurate gait and pose assessments, but still appear to be relatively informed, as indicated by the normalized errors. A pose-estimation network may not be directly applicable across species, but the network could serve as a pre-trained network that can be fine-tuned on the target species if there is limited available data. An alternative could be the use of Generative Adversarial Networks (GANs). However, recent GANs appear to work better for changing coat color than for changing a dog into a cat. Furthermore, if the species change is successful, the accuracy of the converted keypoint labels could be negatively impacted. CONCLUSION In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Across species, keypoint predictions resulted in high errors and low to moderate PKRs and are unlikely to be of direct use for pose and gait assessments. The multi-species model had slightly higher errors with a lower PKR than the within-species models but had less than half the number of annotated frames available from each species.
CONCLUSION
In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Across species, keypoint predictions resulted in high errors and low to moderate PKRs, and are unlikely to be of direct use for pose and gait assessments. The multi-species model had slightly higher errors and a lower PKR than the within-species models, but had less than half the number of annotated frames available from each species. The within-species models had the overall best performance. The within-species models will provide more accurate keypoints from which more accurate spatiotemporal and kinematic gait and pose parameters can be estimated. A multi-species model could potentially reduce annotation needs without a large impact on performance in pose assessment, with the recommendation that it only be used if the species are comparable. Future studies should investigate the actual accuracy needed for pose and gait assessments and estimate genetic parameters for the new phenotypes before pose-estimation networks can be applied in practice.

DATA AVAILABILITY STATEMENT
The turkey dataset analyzed for this study is not publicly available as it is the intellectual property of Hendrix Genetics. Requests to access the dataset should be directed to Bram Visser, bram.visser@hendrixgenetics.com. The broiler dataset analyzed for this study is available upon reasonable request from Wageningen Livestock Research. Requests to access the dataset should be directed to Aniek C. Bouwman, aniek.bouwman@wur.nl.

ETHICS STATEMENT
The Animal Welfare Body of Wageningen Research decided ethical review was not necessary because the turkeys were not isolated, were semi-familiar with the corridor, and the IMUs were low in weight (1% of body weight) and attached for no longer than one hour. The Animal Welfare Body of Wageningen University noted that the broiler study did not constitute an animal experiment under Dutch law, as the experimental procedures described in the protocol of the broiler study would cause less pain or distress than the insertion of a needle under good veterinary practice.

AUTHOR CONTRIBUTIONS
JD, AB, GK, and RV contributed to the conceptualization of the study. JD, AB, and JE were involved with broiler data collection. JD and TS performed annotation and analysis. JD wrote the first draft of the manuscript. JD, AB, EE, JE, GK, and RV reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

FUNDING
This study was financially supported by the Dutch Ministry of Economic Affairs (TKI Agri and Food project 16022) and the Breed4Food partners Cobb Europe, CRV, Hendrix Genetics and Topigs Norsvin.
Meetings are an important part of day-to-day work in corporate organizations. A lot of valuable time is spent in meetings or discussions, so it is extremely important that the meeting time is utilized carefully. Meetings are all about communication; not only the speaking and listening skills, but all eight elements of the communication model play a vital role in conducting productive meetings. So first let us understand these eight basic components of a communication model: Source/Sender, Message, Communication Channel, Receiver, Feedback, Environment, Context, Interruptions/Interference. Now, having understood the above, let us look at some of the key points which will make sure that you get the most out of meetings by being focused and attentive:

Don't be preoccupied
There is a possibility that you may be preoccupied with certain thoughts when you move from one meeting to another, especially when you have back-to-back meetings. Hence, try to avoid back-to-back meetings; take a break and gather your thoughts before you move from one meeting to another. Before you join the meeting, spend some time understanding the meeting agenda in detail and also have a look at the profiles of the meeting participants. This will ensure that you are well informed in advance and hence productive during the course of the meeting.

Be prepared to listen
If you are attending the meeting over the phone, make sure that you are not distracted by other discussions around you. Try to find a room where you can get audio privacy. You may be a good speaker yourself, but in order to take a discussion to a logical conclusion you need to be a good listener too. Be prepared to listen to what others have to say. DO NOT interrupt when someone is speaking. Make your point, but only when the other person has finished speaking. Don't jump the gun and make assumptions based on half-baked information.

Be organized
Always carry a notepad and a pen to note down the key highlights and action items discussed in the meeting. Asking around for a notepad or a pen is not a very professional habit. If you have the habit of noting down the points on your tablet or laptop, then ensure that you don't get distracted by emails popping up in your inbox or other important tasks diverting your attention. If you are in a room where a large group of people is attending the meeting over the phone, make sure you are within audible distance of the phone. You should also move near the phone when you speak so that you are audible to the people on the other side. Speak at a speed that is good enough for people to understand the meaning of what you are saying. Don't be in a hurry and don't rush through your statements.

Identify and avoid the distractions
DO NOT WhatsApp while you are in a meeting. You get singled out if you keep browsing on your phone when others are engrossed in a discussion. DO NOT chat over IM (instant messaging) on your computer while you are attending a meeting. The best way to keep your focus in a meeting is to log out of IM as soon as you join. DO NOT daydream; it may so happen that one of the points being discussed in the meeting diverts your thought process and you start thinking at a tangent. Avoid such daydreaming and stay honest to the agenda and context of the meeting.

How to be awake?
Using human induced pluripotent stem cells (hiPSCs), researchers at the Skaggs School of Pharmacy and Pharmaceutical Sciences at the University of California, San Diego have discovered that neurons from patients with schizophrenia secrete higher amounts of three neurotransmitters broadly implicated in a range of psychiatric disorders. The findings, reported online Sept. 11 in Stem Cell Reports, represent an important step toward understanding the chemical basis for schizophrenia, a chronic, severe and disabling brain disorder that affects an estimated one in 100 persons at some point in their lives. Currently, schizophrenia has no known definitive cause or cure and leaves no tell-tale physical marks in brain tissue.

“The study provides new insights into neurotransmitter mechanisms in schizophrenia that can lead to new drug targets and therapeutics,” said senior author Vivian Hook, PhD, a professor with the Skaggs School of Pharmacy and UC San Diego School of Medicine.

In the study, UC San Diego researchers with colleagues at The Salk Institute for Biological Studies and the Icahn School of Medicine at Mount Sinai, N.Y., created functioning neurons derived from hiPSCs, themselves reprogrammed from skin cells of schizophrenia patients. The approach allowed scientists to observe and stimulate human neurons in ways impossible in animal models or human subjects. Researchers activated these neurons so that they would secrete neurotransmitters – chemicals that excite or inhibit the transmission of electrical signals through the brain. The process was replicated on stem cell lines from healthy adults.

A comparison of neurotransmitters produced by the cultured “brain in a dish” neurons showed that the neurons derived from schizophrenia patients secreted significantly greater amounts of the catecholamine neurotransmitters dopamine, norepinephrine and epinephrine. Catecholamine neurotransmitters are synthesized from the amino acid tyrosine, and the regulation of these neurotransmitters is known to be altered in a variety of psychiatric diseases. Several psychotropic drugs selectively target the activity of these neurotransmitters in the brain.

In addition to documenting aberrant neurotransmitter secretion from neurons derived from patients with schizophrenia, researchers also observed that more neurons were dedicated to the production of tyrosine hydroxylase, the first enzyme in the biosynthetic pathway for the synthesis of dopamine, from which both norepinephrine and epinephrine are made. This discovery is significant because it offers a reason for why schizophrenia patients have altered catecholamine neurotransmitter levels: they are preprogrammed to have more of the neurons that make these neurotransmitters.

“All behavior has a neurochemical basis in the brain,” Hook said. “This study shows that it is possible to look at precise chemical changes in neurons of people with schizophrenia.”

The applications for future treatments include being able to evaluate the severity of an individual’s disease, identify different sub-types of the disease and pre-screen patients for drugs that would be most likely to help them. It also offers a way to test the efficacy of new drugs.

“It is very powerful to be able to see differences in neurons derived from individual patients and a big accomplishment in the field to develop a method that allows this,” Hook said.

Co-authors include Kristen Brennand, Yongsung Kim and Fred H.
Gage, The Salk Institute for Biological Studies; Thomas Toneff, Lydiane Funkelstein, Kelly C. Lee and Michael Ziegler, UC San Diego. Funding for this study was provided, in part, by the UC San Diego Academic Senate, Brain and Behavior Research Foundation, National Institutes of Health (grants R01MH077305, U01GM092655, R01MH101454, UL1TR000100 and P01HL58120), The JPB Foundation, The Leona M. and Harry B. Helmsley Charitable Trust and The New York Stem Cell Foundation. # # # Media contacts: Scott LaFee or Christina Johnson, 619-543-6163, slafee@ucsd.edu
package com.emergentideas.webhandle.bootstrap; import static org.junit.Assert.*; import java.util.List; import org.junit.Test; import com.emergentideas.utils.StringUtils; public class FlatFileConfigurationParserTest { @Test public void testParse() throws Exception { FlatFileConfigurationParser parser = new FlatFileConfigurationParser(); List<ConfigurationAtom> atoms = parser.parse(StringUtils.getStreamFromClassPathLocation("com/emergentideas/webhandle/bootstrap/config1.conf")); assertEquals(3, atoms.size()); ConfigurationAtom atom; atom = atoms.get(0); assertEquals("lets-get-this-party-started", atom.getType()); assertEquals("com.emergentideas.webhandle.StandAloneServer", atom.getValue()); atom = atoms.get(1); assertEquals("directory-template-source", atom.getType()); assertEquals("templates", atom.getValue()); atom = atoms.get(2); assertEquals("", atom.getType()); assertEquals("com.emergentideas.webhandle.bootstrap.Wire", atom.getValue()); } }
The Relevance of AI Research to CAI This article provides a tutorial introduction to Artificial Intelligence (AI) research for those involved in Computer Assisted Instruction (CAI). The general theme espoused is that much of the current work in AI, particularly in the areas of natural language understanding systems, rule induction, programming languages, and Socratic systems, has important applications to CAI. It is hoped that this tutorial will stimulate or catalyze more intensive interaction between AI and CAI.
#pragma once

#include <maze/maze_api.h>

#include <boost/optional.hpp>

#include <functional> // std::reference_wrapper
#include <string>
#include <vector>

#include "maze/room.h"

namespace maze {

class MAZE_API Maze final {
 public:
  explicit Maze(int rows = 10, int columns = 10);

  boost::optional<Room*> find_room(const Position& position) const;
  // const added for consistency with the Position overload above
  boost::optional<Room*> find_room(int x, int y) const;
  std::vector<Room>& rooms();
  const int& rows() const;
  const int& columns() const;
  bool all_rooms_visited() const;

 private:
  int _columns;
  int _rows;
  std::vector<Room> _rooms;
};

} // namespace maze
# Copyright 2013 Blue State Digital
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
from setuptools import setup

version = '2'

description = 'Command line client for making API calls.'

# Use the README for the long description when it is shipped;
# fall back to the short description otherwise.
README = os.path.join(os.path.dirname(__file__), 'README')
long_description = open(README).read() if os.path.exists(README) else description

setup(
    name='bsdapi',
    version=version,
    description=description,
    long_description=long_description,
    author='Blue State Digital',
    author_email='<EMAIL>',
    packages=['bsdapi'],
    package_dir={'bsdapi': 'bsdapi'},
    entry_points={
        'console_scripts': [
            'bsdapi = bsdapi.Main:Cli'
        ]
    },
    license="Apache",
    keywords="API, Client, HTTP",
    url="http://tools.bluestatedigital.com/",
    classifiers=[
        "Programming Language :: Python",
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "Natural Language :: English",
    ],
    install_requires=["requests", 'pytest', 'pytest-html', 'pytest-mock',
                      'requests_mock', 'mock==2.0.0', 'freezegun']
)
A Study on Case-Based Learning in Engineering Graduate Education The research capacity of engineering graduate students is a significant guarantee of, and support for, national science and technology development. Because of changes in graduate recruitment policies and national requirements, a new teaching method is urgently needed. The teaching method proposed in this paper uses case-based learning as a starting point; it not only helps students think independently but also trains all-round abilities such as communication, collaboration, and presentation.
import { getUsername } from 'utilities/utils';

export const USE_PROFILE = false;
export const USE_ACTION_COUNTER = true;
export const USER_NAME = getUsername();

// Whether upgraders should remain permanently stationed after reaching RCL8
export const PERMANENT_UPGRADER = false;
// This file is a part of SimpleXX/SimpleKernel
// (https://github.com/SimpleXX/SimpleKernel).
//
// sched.c for SimpleXX/SimpleKernel.

#ifdef __cplusplus
extern "C" {
#endif

#include "assert.h"
#include "cpu.hpp"
#include "sync.hpp"
#include "linkedlist.h"
#include "intr/include/intr.h"
#include "sched/sched.h"
#include "debug.h"

void clock_handler(pt_regs_t *regs __UNUSED__) {
    schedule();
    return;
}

void sched_init() {
    bool intr_flag = false;
    local_intr_store(intr_flag);
    {
        // Register the clock interrupt handler
        register_interrupt_handler(IRQ0, &clock_handler);
        enable_irq(IRQ0);
        // curr_task = get_current_task();
        // printk_debug("curr_task: 0x%08X\n", curr_task);
        printk_info("sched init\n");
    }
    local_intr_restore(intr_flag);
    return;
}

void sched_switch(task_context_t *curr __UNUSED__, task_context_t *next __UNUSED__) {
    return;
}

// Comparison callback used by list_find_data
static int vs_med(void *v1, void *v2) {
    return v1 == v2;
}

void schedule() {
    bool intr_flag = false;
    local_intr_store(intr_flag);
    {
        // First find the currently running task in the list; the result must not be NULL
        ListEntry *tmp = list_find_data(runnable_list, vs_med, curr_task);
        assert((tmp != NULL), "Error at sched.c, tmp == NULL!\n");
        task_pcb_t *next = (((task_pcb_t *)(list_data(list_next(tmp)))) == NULL)
                               ? ((task_pcb_t *)(list_nth_data(runnable_list, 0)))
                               : ((task_pcb_t *)(list_data(list_next(tmp))));
        if ((curr_task->pid != next->pid)) {
            task_pcb_t *prev = curr_task;
            curr_task = next;
            // printk_debug("prev: 0x%08X\t", prev);
            // printk_debug("prev->context: 0x%08X\t", prev->context);
            // printk_debug("prev->pid: 0x%08X\t", prev->pid);
            // printk_debug("prev->name: %s\t", prev->name);
            // printk_debug("prev->eip 0x%08X\t", prev->context->eip);
            // printk_debug("prev->esp 0x%08X\t", prev->context->esp);
            // printk_debug("prev->ebp 0x%08X\t", prev->context->ebp);
            // printk_debug("prev->ebx 0x%08X\t", prev->context->ebx);
            // printk_debug("prev->ecx 0x%08X\t", prev->context->ecx);
            // printk_debug("prev->edx 0x%08X\t", prev->context->edx);
            // printk_debug("prev->esi 0x%08X\t", prev->context->esi);
            // printk_debug("prev->edi 0x%08X\n", prev->context->edi);
            // printk_debug("next: 0x%08X\t", next);
            // printk_debug("next->context: 0x%08X\t", next->context);
            // printk_debug("next->pid: 0x%08X\t", next->pid);
            // printk_debug("next->name: %s\t", next->name);
            // printk_debug("next->eip 0x%08X\t", next->context->eip);
            // printk_debug("next->esp 0x%08X\t", next->context->esp);
            // printk_debug("next->ebp 0x%08X\t", next->context->ebp);
            // printk_debug("next->ebx 0x%08X\t", next->context->ebx);
            // printk_debug("next->ecx 0x%08X\t", next->context->ecx);
            // printk_debug("next->edx 0x%08X\t", next->context->edx);
            // printk_debug("next->esi 0x%08X\t", next->context->esi);
            // printk_debug("next->edi 0x%08X\n", next->context->edi);
            // print_stack(1);
            // printk_debug("switch_to-----\n");
            switch_to(prev, curr_task, prev);
            // printk_debug("switch_to END.\n");
            // asm ("hlt");
        }
    }
    local_intr_restore(intr_flag);
    return;
}

#ifdef __cplusplus
}
#endif
import vcr
import unittest

from pokemontcgsdk import Set


class TestSet(unittest.TestCase):

    def test_find_returns_set(self):
        with vcr.use_cassette('fixtures/xy11.yaml'):
            # named card_set to avoid shadowing the built-in set()
            card_set = Set.find('xy11')

            self.assertEqual('xy11', card_set.id)
            self.assertEqual('Steam Siege', card_set.name)
            self.assertEqual('XY', card_set.series)
            self.assertEqual(114, card_set.printedTotal)
            self.assertEqual(116, card_set.total)
            self.assertEqual('STS', card_set.ptcgoCode)
            self.assertEqual("2016/08/03", card_set.releaseDate)

    def test_where_filters_on_name(self):
        with vcr.use_cassette('fixtures/filtered_sets.yaml'):
            sets = Set.where(q='name:steam')

            self.assertEqual(1, len(sets))
            self.assertEqual('xy11', sets[0].id)

    def test_all_returns_all_sets(self):
        with vcr.use_cassette('fixtures/all_sets.yaml'):
            sets = Set.all()

            self.assertGreater(len(sets), 70)
DOWNREGULATION OF THE IMMUNE RESPONSE TO HISTOCOMPATIBILITY ANTIGENS AND PREVENTION OF SENSITIZATION BY SKIN ALLOGRAFTS BY ORALLY ADMINISTERED ALLOANTIGEN The effects of oral administration of major histocompatibility antigens on the alloimmune response have not been investigated. Lymphocytes from inbred LEW (RT1u) rats that were pre-fed allogeneic WF (RT1l) splenocytes exhibited significant antigen-specific reduction of the mixed lymphocyte response in vitro and the delayed-type hypersensitivity response in vivo, when compared with unfed controls. In an accelerated allograft rejection model, LEW rats were presensitized with BN (RT1n) skin allografts 7 days before challenging them with (LEW×BN)F1 or BN vascularized cardiac allografts. While sensitized control animals hyperacutely rejected their cardiac allografts within 2 days, animals prefed with BN splenocytes maintained cardiac allograft survival to 7 days, a time similar to that observed in unsensitized control recipients. This phenomenon was antigen-specific, as third-party WF grafts were rejected within 2 days. Immunohistologic examination of cardiac allografts harvested on day 2 from the fed animals showed markedly reduced deposition of IgG, IgM, C3, and fibrin. In addition, there were significantly fewer cellular infiltrates of total white blood cells, neutrophils, macrophages, T cells, IL-2 receptor-positive T cells, and mononuclear cells with positive staining for the activation cytokines IL-2 and IFN-γ. On day 6 posttransplant, the grafts from fed animals showed immunohistologic changes typical of the acute cellular rejection usually seen in unsensitized rejecting controls. Feeding allogeneic splenocytes prevents sensitization by skin grafts and transforms accelerated rejection of vascularized cardiac allografts to an acute form typical of unsensitized recipients. Oral administration of alloantigen provides a novel approach to downregulate the specific systemic alloimmune response against histocompatibility antigens.
Differential expression of T-bet, a T-box transcription factor required for Th1 T-cell development, in peripheral T-cell lymphomas. We studied T-bet expression in 91 cases of peripheral T-cell lymphoma (PTCL) by immunostaining and found expression in 42 cases (46%), including all 5 lymphoepithelioid lymphoma cases and 12 (86%) of 14 angioimmunoblastic lymphoma cases, but only 9 (25%) of 36 anaplastic large cell lymphoma cases. Expression of T-bet in PTCL correlates with expression of other markers of Th1 T-cell differentiation, including CXCR3 (P <.0001), CD69 (P =.0013), LEF-1 (P =.0007), and OX40/CD134 (P =.005), and absence of expression of markers of Th2 T-cell differentiation, including CD30 (P =.0001) and CXCR4 (P =.0144). Of 22 cases of PTCL immunoreactive for all Th1-associated markers previously studied and nonreactive for Th2-associated markers, 20 (91%) were immunoreactive for T-bet. Of 22 PTCL cases immunoreactive for Th2-associated markers studied and nonreactive for all Th1-associated markers studied, 4 (18%) were immunoreactive for T-bet. The remaining 47 PTCL cases (52%) exhibited incomplete or mixed staining for Th1- and Th2-associated markers, with 18 (38%) of 47 immunoreactive for T-bet. T-bet is a new marker that may contribute to the diagnosis and subtyping of PTCLs. T-bet expression in these neoplasms provides further support for a model of PTCL in which tumor subsets express markers of, and may be derived from, Th1- or Th2-committed T cells.
#!/usr/bin/env python3 # -*- coding: utf-8 -*- # FLEDGE_BEGIN # See: http://fledge-iot.readthedocs.io/ # FLEDGE_END """Automation script starter""" import logging import json import http.client import argparse from fledge.common import logger __author__ = "<NAME>" __copyright__ = "Copyright (c) 2022 Dianomic Systems Inc." __license__ = "Apache 2.0" __version__ = "${VERSION}" if __name__ == '__main__': _logger = logger.setup("Automation Script", level=logging.INFO) parser = argparse.ArgumentParser() parser.add_argument("--name", required=True) parser.add_argument("--address", required=True) parser.add_argument("--port", required=True, type=int) namespace, args = parser.parse_known_args() script_name = getattr(namespace, 'name') core_management_host = getattr(namespace, 'address') core_management_port = getattr(namespace, 'port') # Get services list get_svc_conn = http.client.HTTPConnection("{}:{}".format(core_management_host, core_management_port)) get_svc_conn.request("GET", '/fledge/service') r = get_svc_conn.getresponse() res = r.read().decode() svc_jdoc = json.loads(res) write_payload = {} for svc in svc_jdoc['services']: if svc['type'] == "Core": # find the content of script category for write operation get_script_cat_conn = http.client.HTTPConnection("{}:{}".format(svc['address'], svc['service_port'])) get_script_cat_conn.request("GET", '/fledge/category/{}-automation-script'.format(script_name)) r = get_script_cat_conn.getresponse() res = r.read().decode() script_cat_jdoc = json.loads(res) write_payloads = json.loads(script_cat_jdoc['write']['value']) for wp in write_payloads: write_payload.update(wp['values']) break for svc in svc_jdoc['services']: if svc['type'] == "Dispatcher": # Call dispatcher write API with payload post_dispatch_conn = http.client.HTTPConnection("{}:{}".format(svc['address'], svc['service_port'])) data = {"destination": "script", "name": script_name, "write": write_payload} post_dispatch_conn.request('POST', '/dispatch/write', json.dumps(data)) r = post_dispatch_conn.getresponse() res = r.read().decode() write_dispatch_jdoc = json.loads(res) _logger.info("For script category with name: {}, dispatcher write API response: {}".format( script_name, write_dispatch_jdoc)) break
/******************************************************************************* * Copyright (c) 2017, 2020 IBM Corp. and others * * This program and the accompanying materials are made available under * the terms of the Eclipse Public License 2.0 which accompanies this * distribution and is available at https://www.eclipse.org/legal/epl-2.0/ * or the Apache License, Version 2.0 which accompanies this distribution and * is available at https://www.apache.org/licenses/LICENSE-2.0. * * This Source Code may also be made available under the following * Secondary Licenses when the conditions for such availability set * forth in the Eclipse Public License, v. 2.0 are satisfied: GNU * General Public License, version 2 with the GNU Classpath * Exception [1] and GNU General Public License, version 2 with the * OpenJDK Assembly Exception [2]. * * [1] https://www.gnu.org/software/classpath/license.html * [2] http://openjdk.java.net/legal/assembly-exception.html * * SPDX-License-Identifier: EPL-2.0 OR Apache-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 OR LicenseRef-GPL-2.0 WITH Assembly-exception *******************************************************************************/ package java.lang.invoke; class MethodHandleNatives { static LinkageError mapLookupExceptionToError(ReflectiveOperationException roe) { String exMsg = roe.getMessage(); LinkageError linkageErr; if (roe instanceof IllegalAccessException) { linkageErr = new IllegalAccessError(exMsg); } else if (roe instanceof NoSuchFieldException) { linkageErr = new NoSuchFieldError(exMsg); } else if (roe instanceof NoSuchMethodException) { linkageErr = new NoSuchMethodError(exMsg); } else { linkageErr = new IncompatibleClassChangeError(exMsg); } Throwable th = roe.getCause(); linkageErr.initCause(th == null ? roe : th); return linkageErr; } }
Sonographic Detection of Unilateral Retinoblastoma Retinoblastoma is a cancer that affects the eye, and if untreated, it can spread to other parts of the body. Retinoblastoma is the most common pediatric eye cancer and accounts for 3% of all childhood cancers. It can be hereditary or sporadic (nonhereditary). This case study presents a unilateral retinoblastoma of the right eye in a pediatric patient. A diagnosis of retinoblastoma was made by correlating sonography, magnetic resonance imaging, and ophthalmology. Treatment for retinoblastoma depends on the severity of the cancer but can include radiation, chemotherapy, focal laser therapy, and/or surgery. This particular case of retinoblastoma was treated with chemotherapy.
# Copyright 2016 Huawei Technologies Co.,LTD.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import ast
import hashlib
import inspect

import six

from oslo_versionedobjects import fields as object_fields

from mogan.common import utils


Field = object_fields.Field
ObjectField = object_fields.ObjectField
ListOfObjectsField = object_fields.ListOfObjectsField
ListOfDictOfNullableStringsField \
    = object_fields.ListOfDictOfNullableStringsField


class IntegerField(object_fields.IntegerField):
    pass


class UUIDField(object_fields.UUIDField):
    pass


class StringField(object_fields.StringField):
    pass


class StringAcceptsCallable(object_fields.String):

    @staticmethod
    def coerce(obj, attr, value):
        if callable(value):
            value = value()
        return super(StringAcceptsCallable, StringAcceptsCallable).coerce(
            obj, attr, value)


class StringFieldThatAcceptsCallable(object_fields.StringField):
    """Custom StringField object that allows for functions as default

    In some cases we need to allow for dynamic defaults based on
    configuration options, this StringField object allows for a function
    to be passed as a default, and will only process it at the point the
    field is coerced
    """

    AUTO_TYPE = StringAcceptsCallable()

    def __repr__(self):
        default = self._default
        if (self._default != object_fields.UnspecifiedDefault and
                callable(self._default)):
            default = "%s-%s" % (
                self._default.__name__,
                hashlib.md5(inspect.getsource(
                    self._default).encode()).hexdigest())
        return '%s(default=%s,nullable=%s)' % (self._type.__class__.__name__,
                                               default, self._nullable)


class DateTimeField(object_fields.DateTimeField):
    pass


class BooleanField(object_fields.BooleanField):
    pass


class ListOfStringsField(object_fields.ListOfStringsField):
    pass


class FlexibleDict(object_fields.FieldType):
    @staticmethod
    def coerce(obj, attr, value):
        if isinstance(value, six.string_types):
            value = ast.literal_eval(value)
        return dict(value)


class FlexibleDictField(object_fields.AutoTypedField):
    AUTO_TYPE = FlexibleDict()

    # TODO(lucasagomes): In our code we've always translated None to {},
    # this method makes this field work like this. But it probably won't
    # be accepted as-is in the oslo_versionedobjects library
    def _null(self, obj, attr):
        if self.nullable:
            return {}
        super(FlexibleDictField, self)._null(obj, attr)


class MACAddress(object_fields.FieldType):
    @staticmethod
    def coerce(obj, attr, value):
        return utils.validate_and_normalize_mac(value)


class MACAddressField(object_fields.AutoTypedField):
    AUTO_TYPE = MACAddress()


class BaseMoganEnum(object_fields.Enum):
    def __init__(self, **kwargs):
        super(BaseMoganEnum, self).__init__(valid_values=self.__class__.ALL)


class NotificationPriority(BaseMoganEnum):
    AUDIT = 'audit'
    CRITICAL = 'critical'
    DEBUG = 'debug'
    INFO = 'info'
    ERROR = 'error'
    SAMPLE = 'sample'
    WARN = 'warn'

    ALL = (AUDIT, CRITICAL, DEBUG, INFO, ERROR, SAMPLE, WARN)


class NotificationPhase(BaseMoganEnum):
    START = 'start'
    END = 'end'
    ERROR = 'error'

    ALL = (START, END, ERROR)


class NotificationAction(BaseMoganEnum):
    UPDATE = 'update'
    EXCEPTION = 'exception'
    DELETE = 'delete'
    POWER_ON = 'power_on'
    POWER_OFF = 'power_off'
    SOFT_POWER_OFF = 'soft_off'
    REBOOT = 'reboot'
    SOFT_REBOOT = 'soft_reboot'
    SHUTDOWN = 'shutdown'
    CREATE = 'create'
    REBUILD = 'rebuild'

    ALL = (UPDATE, EXCEPTION, DELETE, CREATE, POWER_OFF, REBUILD)


class NotificationPhaseField(object_fields.BaseEnumField):
    AUTO_TYPE = NotificationPhase()


class NotificationActionField(object_fields.BaseEnumField):
    AUTO_TYPE = NotificationAction()


class NotificationPriorityField(object_fields.BaseEnumField):
    AUTO_TYPE = NotificationPriority()
Tim Harvey is still commentating on the BTCC races, but not any of the supports, due to having to drive in some of them. Oh, and where's Katherine this week? She hasn't complained about Teach's absence from Bahrain yet. I was under the impression she'd kidnapped him last week so he couldn't make it out to Bahrain.

Arius, would you get to hear if any of their emails are getting through?? I've never read so much crap before in my whole life. Please Arius, could you get to hear if any of the emails are getting through?? Because I really will despair if they even take any notice of such awful ramblings.

Perhaps she is just so depressed he wasn't on the show she has locked herself away with newspaper/magazine cutouts? Don't suppose he has things to drive at Thruxton too?

One brief negative point I forgot to mention about the F1 coverage from Bahrain is the F***ing football again. I thought ITV had stopped that? We could have had another few interviews in that time. I mean, they're in the paddock, I'm sure they could have grabbed someone if there were no LG or TK interviews available? The footie put a complete downer on what was a decent round-up of the day's racing, and it spoils the flow of the post-race show. In fact it completely ruins the post-race show, and I see it as very sloppy and unprofessional. Football is not linked in any way to F1 or motorsport, and these ads can be run after the F1 show has ended. I don't even care if F1 finishes 2 minutes shorter to go to a footie ad. Just don't put it on until F1 is finished.

dont worry i've got a feeling footie on itv will be finished fairly soon - no more 'the premiership' (hooray) after this season, they've already dropped 'the goal rush' and 'on the ball', maybe sky and the beeb can carve up the champions league between them, just think - ron atkinson-free coverage. BTW allen must go.

Football has a very big link to motorsport, in that ITV wants the same type of viewers to watch both (that's ABC1 men 16-35). If you have watched any of our football output recently you'll have noticed that they often promo motorsport, particularly WRC. You'll also notice later in the year promos for the Tour de France. We know from reading this thread that a lot of F1 fans also enjoy watching the Tour, so it is logical to cross-promote. Last night's football game was the most important of the season so far, and was exclusively live on ITV1. Next season as well as live Champions League football (which ITV has covered since its inception) there is a very real possibility of live Premiership games on ITV.

Oh dear! Still hate footy. Mind you, I see ITV have lost the chance to cover the MotoGPs up until 2009 thanks to the BBC. What you lose with one hand you gain in the other? So after 3 races covered by Bernie Vision/FOM, what are the chances of them covering any more GPs this year??

Mind you I see ITV have lost the chance to cover the MotoGPs up until 2009 thanks to the BBC. What you lose with one hand you gain in the other?

I read that with a bit of dread. Do MotoGPs clash with F1 race weekends? I was hoping that the BBC would at least make some form of bid for the rights to F1 when they come up (if only to irritate ITV for a moment or two!) but if there is a clash between MGP and F1, then the BBC wouldn't be interested in writing a figure, bunging it into an envelope, and posting it to Bernie. Unless of course BBC Sport are still in a massive grump over what happened in 1995 - in which case, no bid will ever be tabled by them for the foreseeable future.
MotoGP doesn't clash with F1. ITV is obviously disappointed not to win the rights, after coming very close two years ago when they came up. Given the awful way the BBC has treated MotoGP since it took over from Five, it's hard to see why they were given such a long contract. Still, it would have clashed with WRC, which we believe has the potential to be much bigger than MotoGP.

I can't wait. British Eurosport's extensive cycling coverage this year is really whetting my appetite. But I am really looking forward to ITV/ITV2's coverage, as VTV's programming is always the best sporting highlights package of any sport on any channel.

Slightly off topic I know, but I really cannot VTV's Tour programmes.

It's the missing word which holds the key to this sentence. Perhaps "I really cannot stand VTV's Tour programmes" Or "I really cannot fault VTV's Tour programmes" Or "I really cannot praise highly enough VTV's Tour programmes"

Sadly we lost the live VTV coverage every day last time, (and at least one of the ITV1 'live' programmes was delayed by 10-15 minutes) but at least the highlights were on at a regular time, just like the old C4 days. (With much the same team.) I'll often criticise ITV but they've treated the TDF fairly well. I understand why there isn't daily live coverage, it's just that VTV are so much better than Eurosport.

Can I just put on record that I appreciate and really enjoy watching both wheeled and ball sports. I think I might be in a minority here!

Thanks for pointing this out Ariusuk - it is the sentence above. Admittedly Eurosport's live and highlights programmes are sometimes messy in places, but at least you see more action than in VTV's programmes, which seem more interested in competitions and other things to glorify themselves. Also, I find Liggett and Sherwin "wooden"; Duffield and Kelly are brilliant. Duffers' humour and sheer passion for the sport is vastly superior to Liggett's, and Kelly provides great insights on the racing when DD is getting excited about last night's wine!

Quickly - cus it is massively off topic.. DD's commentary can be amusing, but the 'fluff and padding' aspects can get tiresome. One year (2000 I think) about an hour into stage 1, he started to ramble on about where he was going to get the opportunity to wash his underpants during the tour.. ...At least James Allen hasn't descended to this level - yet.
Re-Examining the Existing Legal and Policy Regulations of Genetically Modified Organisms (GMOs) in Relation to Agricultural Products from a Consumer's Perspective The biotechnology industry has established itself over the past twenty years, and commercialising GMOs became part of that phenomenon. However, concerns were raised through public protests and international statements, to little effect. The genetically modified organism (GMO) is a highly controversial area: there are scientific uncertainties as to the harm these GMOs can cause. The position of the consumers is against GMOs, and their argument is simple: they are not willing to risk their health over uncertainty. The problem does not end here. The GM companies have persistently refused to label GM food for fear of discrimination. The consumers' response is that they have a right to choose whether they want GM food or not; the companies have no right to keep consumers in the dark for the sake of trade. This report will address some, if not all, of these issues. The following will be discussed accordingly: 1) the report will discuss the current issues of coexistence and traceability and the laws tackling these issues; 2) it will substantially debate the issue of the lack of labelling and its respective laws; 3) it will analyse the role of the precautionary principle and discuss whether it is effective enough; 4) it will state the role of public participation and the Aarhus Convention; 5) it will also state the liability regime and critically analyse its effectiveness; 6) and lastly, mentioning some relevant trade laws, the report will move to the conclusion on whether the laws sufficiently protect consumers' interests.
Greenhouse Gas Emissions Generated by Tofu Production: A Case Study ABSTRACT The objective of this study was to evaluate the greenhouse gas emissions (GHGEs) generated by the production of tofu. A partial life cycle assessment (LCA) was performed using SimaPro 8 software with a functional unit of 1 kg of packaged tofu and a farm to factory gate boundary. Original production data for the period of 1 year were obtained from a tofu manufacturer based in the United States and used with soybean production data from SimaPro 8 Ecoinvent 3.1 and U.S. Life Cycle Inventory databases to calculate the associated GHGEs as carbon dioxide equivalents (CO2e). The LCA calculations included resource inputs required to produce and package tofu: soybeans, water, electricity, natural gas, transportation, and packaging materials. The LCA boundary was from the cradle (i.e., soybean farm) to the factory exit gate (i.e., postpackaging). Uncertainty analyses were performed using Monte Carlo simulations. Total CO2e from packaged tofu were 982 g/kg, 9820 g/kg of protein, 1150 g/1000 calories, and 336 g/retail packet of 396 g. For 1 kg of packaged tofu, 16% of CO2e resulted from soybean production, 52% from tofu manufacturing, 23% from packaging, and 9% from transportation. Tofu, a protein-rich plant food, generates relatively low GHGEs.
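The different functional units in the abstract are linear rescalings of the per-kilogram figure. The sketch below reproduces the protein and calorie conversions; the ~10% protein content and ~854 kcal/kg energy density are assumptions chosen so the arithmetic matches the reported numbers, not values taken from the paper.

co2e_per_kg = 982.0       # g CO2e per kg packaged tofu (reported)
protein_frac = 0.10       # assumed kg protein per kg tofu
kcal_per_kg = 854.0       # assumed energy density of tofu

per_kg_protein = co2e_per_kg / protein_frac          # -> 9820 g CO2e
per_1000_kcal = co2e_per_kg / (kcal_per_kg / 1000)   # -> ~1150 g CO2e
print(per_kg_protein, round(per_1000_kcal))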
/**
 * The aim of this class is to issue security tokens and add a security cookie to the browser
 * that should be exchanged for access and refresh bearer tokens
 * when the user goes through the api-gateway to a secure endpoint.
 *
 * @author Andrii Murashkin / Javatar LLC
 * @author Borys Zora / Javatar LLC
 * @version 2019-05-28
 */
@ConditionalOnProperty(value = "javatar.security.gateway.login-enabled", havingValue = "true", matchIfMissing = false)
@RestController
@RequestMapping(value = "/login", consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public class LoginResource {

    private static final Logger logger = LoggerFactory.getLogger(LoginResource.class);

    private GatewaySecurityService gatewaySecurityService;
    private GatewayConverter converter;

    @Autowired
    public LoginResource(GatewaySecurityService gatewaySecurityService,
                         GatewayConverter converter) {
        this.gatewaySecurityService = gatewaySecurityService;
        this.converter = converter;
        logger.info("LoginResource created");
    }

    @PostMapping
    public ResponseEntity login(@RequestBody AuthRequestTO loginRequest,
                                HttpServletRequest request,
                                HttpServletResponse response) {
        logger.info("received login request: {}", loginRequest);
        AuthRequestBO authRequestBO = converter.toAuthRequestBO(loginRequest);
        String rootToken = gatewaySecurityService.login(authRequestBO, request, response);
        logger.info("rootToken: {} was issued for loginRequest: {}", rootToken, loginRequest);
        Map<String, String> body = new HashMap<>();
        body.put("login", "success");
        return ResponseEntity.created(null)
                .body(body); // TODO add more info about session expiration
    }
}
Remote microscope for Polymer Crystallization WebLab The Remote Microscope for the Polymer Crystallization WebLab was developed at MIT as part of the iLab Project. The purpose of the iLab Project is to build web-accessible remote laboratories that allow real-time experiments from anywhere at any time. The Remote Microscope WebLab allows users to operate, and view in real time, an actual microscope. The WebLab benefits students in several ways: it gives them access to high-end equipment, it offers the flexibility to run experiments over a wide range of hours and when it suits their own schedules, and it gives them the opportunity to repeat an experiment as many times as they want. The Remote Microscope System consists of a motorized light microscope, a digital camera and an XY stage, all controlled via a web-enabled client interface that can be run in any browser or platform. The client interface is able to display real-time images, video and status messages, and is able to control and change hardware settings. To do this, the client must be connected to a server process running on a Windows PC. This server process communicates with the hardware through several implemented software controllers. Currently only one client can connect to the server at any given time, since no more than one client should be controlling the microscope simultaneously. Thesis Supervisor: Gregory C. Rutledge Title: Associate Professor of Chemical Engineering
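To make the single-client rule concrete, here is a minimal sketch - not the MIT implementation - of a server loop that grants control of the instrument to one network client at a time; the host, port and command strings are placeholders.

import socket

HOST, PORT = "0.0.0.0", 9000  # placeholder address for the control server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:
        client, addr = srv.accept()       # one controlling client at a time
        with client:
            client.sendall(b"CONTROL GRANTED\n")
            while True:
                cmd = client.recv(1024)   # e.g. b"MOVE_STAGE 10 -5"
                if not cmd:
                    break                 # client left; control is freed
                # forward cmd to the microscope/stage controllers here
                client.sendall(b"OK\n")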
# -*- coding: utf-8 -*-

#  Copyright (c) 2019-2021 <NAME>.
#  All rights reserved.
#  Licensed under BSD-3-Clause-Clear. See LICENSE file for details.

from django.conf import settings
from django.urls import reverse
from django.utils.formats import localize
from django.views.generic import TemplateView
from django.contrib.auth.mixins import UserPassesTestMixin
from django.templatetags.static import static
from Competitie.models import (Competitie, DeelCompetitie, DeelcompetitieRonde,
                               LAAG_REGIO, LAAG_RK, INSCHRIJF_METHODE_1)
from Functie.rol import Rollen, rol_get_huidige_functie
from Plein.menu import menu_dynamics
from Taken.taken import eval_open_taken
from Wedstrijden.models import CompetitieWedstrijd, BAAN_TYPE_EXTERN
from types import SimpleNamespace
import datetime


TEMPLATE_OVERZICHT = 'vereniging/overzicht.dtl'


class OverzichtView(UserPassesTestMixin, TemplateView):

    """ This view is for the administrators of the association """

    # class variables shared by all instances
    template_name = TEMPLATE_OVERZICHT
    raise_exception = True  # raise PermissionDenied when test_func returns False

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.rol_nu, self.functie_nu = None, None

    def test_func(self):
        """ called by the UserPassesTestMixin to verify the user has permissions to use this view """
        self.rol_nu, self.functie_nu = rol_get_huidige_functie(self.request)
        return self.functie_nu and self.rol_nu in (Rollen.ROL_SEC, Rollen.ROL_HWL, Rollen.ROL_WL)

    def get_context_data(self, **kwargs):
        """ called by the template system to get the context data for the template """
        context = super().get_context_data(**kwargs)

        context['nhb_ver'] = ver = self.functie_nu.nhb_ver
        context['clusters'] = ver.clusters.all()

        if self.functie_nu.nhb_ver.wedstrijdlocatie_set.exclude(baan_type=BAAN_TYPE_EXTERN).filter(zichtbaar=True).count() > 0:
            context['accommodatie_details_url'] = reverse('Vereniging:vereniging-accommodatie-details',
                                                          kwargs={'vereniging_pk': ver.pk})

        context['url_externe_locaties'] = reverse('Vereniging:externe-locaties',
                                                  kwargs={'vereniging_pk': ver.pk})

        if self.rol_nu == Rollen.ROL_SEC or ver.regio.is_administratief:
            # SEC
            comps = list()
            deelcomps = list()
            deelcomps_rk = list()
        else:
            # HWL or WL
            context['toon_competities'] = True

            # if rol_nu == Rollen.ROL_HWL:
            #     context['toon_wedstrijdkalender'] = True

            comps = (Competitie
                     .objects
                     .filter(is_afgesloten=False)
                     .order_by('afstand', 'begin_jaar'))

            deelcomps = (DeelCompetitie
                         .objects
                         .filter(laag=LAAG_REGIO,
                                 competitie__is_afgesloten=False,
                                 nhb_regio=ver.regio)
                         .select_related('competitie'))

            deelcomps_rk = (DeelCompetitie
                            .objects
                            .filter(laag=LAAG_RK,
                                    competitie__is_afgesloten=False,
                                    nhb_rayon=ver.regio.rayon)
                            .select_related('competitie'))

        for deelcomp_rk in deelcomps_rk:
            if deelcomp_rk.heeft_deelnemerslijst:
                comp = deelcomp_rk.competitie
                comp.bepaal_fase()
                if comp.fase == 'K':
                    # RK preparation phase
                    deelcomp_rk.text_str = "Schutters van de vereniging aan-/afmelden voor het RK van de %s" % comp.beschrijving
                    deelcomp_rk.url_lijst_rk = reverse('Vereniging:lijst-rk',
                                                       kwargs={'rk_deelcomp_pk': deelcomp_rk.pk})
        # for

        pks = (DeelcompetitieRonde
               .objects
               .filter(deelcompetitie__is_afgesloten=False,
                       plan__wedstrijden__vereniging=ver)
               .values_list('plan__wedstrijden', flat=True))
        if CompetitieWedstrijd.objects.filter(pk__in=pks).count() > 0:
            context['heeft_wedstrijden'] = True

        # determine the order in which the cards are shown
        # 1 - sign up
        # 2 - register / adjust regional teams
        # 3 - RK teams
        # 4 - registered
        # 5 - who shoots where (for registration method 1)
        context['kaartjes'] = kaartjes = list()
        prev_jaar = 0
        prev_afstand = 0
        for comp in comps:
            begin_jaar = comp.begin_jaar
            comp.bepaal_fase()
            if prev_jaar != begin_jaar or prev_afstand != comp.afstand:
                if len(kaartjes) and hasattr(kaartjes[-1], 'heading'):
                    # there were no cards for that competition - report it
                    kaartje = SimpleNamespace()
                    kaartje.geen_kaartjes = True
                    kaartjes.append(kaartje)

                # create a new heading
                kaartje = SimpleNamespace()
                kaartje.heading = comp.beschrijving
                kaartjes.append(kaartje)
                prev_jaar = begin_jaar
                prev_afstand = comp.afstand

            # 1 - sign members up for the competition (not for the WL)
            if comp.fase < 'F' and self.rol_nu != Rollen.ROL_WL:
                kaartje = SimpleNamespace()
                kaartje.titel = "Aanmelden"
                kaartje.tekst = 'Leden aanmelden voor de %s.' % comp.beschrijving
                kaartje.url = reverse('Vereniging:leden-aanmelden', kwargs={'comp_pk': comp.pk})
                if comp.afstand == '18':
                    kaartje.img = static('plein/badge_nhb_indoor.png')
                else:
                    kaartje.img = static('plein/badge_nhb_25m1p.png')
                if comp.fase < 'B':
                    kaartje.beschikbaar_vanaf = localize(comp.begin_aanmeldingen)
                kaartjes.append(kaartje)

            for deelcomp in deelcomps:
                if deelcomp.competitie == comp:
                    if deelcomp.regio_organiseert_teamcompetitie and comp.fase == 'E' and 1 <= deelcomp.huidige_team_ronde <= 7:
                        # enter substitutes for the team
                        kaartje = SimpleNamespace(
                                    titel="Team Invallers",
                                    tekst="Invallers opgeven voor ronde %s van de regiocompetitie voor de %s." % (deelcomp.huidige_team_ronde, comp.beschrijving),
                                    url=reverse('Vereniging:teams-regio-invallers',
                                                kwargs={'deelcomp_pk': deelcomp.pk}),
                                    icon='how_to_reg')
                        kaartjes.append(kaartje)
                    else:
                        # 2 - create teams
                        if deelcomp.regio_organiseert_teamcompetitie and comp.fase <= 'E':
                            kaartje = SimpleNamespace()
                            kaartje.titel = "Teams Regio"
                            kaartje.tekst = 'Verenigingsteams voor de regiocompetitie samenstellen voor de %s.' % comp.beschrijving
                            kaartje.url = reverse('Vereniging:teams-regio', kwargs={'deelcomp_pk': deelcomp.pk})
                            kaartje.icon = 'gamepad'
                            if comp.fase < 'B':
                                kaartje.beschikbaar_vanaf = localize(comp.begin_aanmeldingen)
                            kaartjes.append(kaartje)
            # for
            del deelcomp

            # 3 - RK teams
            for deelcomp_rk in deelcomps_rk:
                if deelcomp_rk.competitie == comp:
                    if 'E' <= comp.fase <= 'K' and self.rol_nu != Rollen.ROL_WL:
                        kaartje = SimpleNamespace()
                        kaartje.titel = "Teams RK"
                        kaartje.tekst = "Verenigingsteams voor de rayonkampioenschappen samenstellen voor de %s." % comp.beschrijving
                        kaartje.url = reverse('Vereniging:teams-rk', kwargs={'rk_deelcomp_pk': deelcomp_rk.pk})
                        kaartje.icon = 'api'
                        # not available until a few weeks after the first regional match
                        vanaf = comp.eerste_wedstrijd + datetime.timedelta(days=settings.COMPETITIES_OPEN_RK_TEAMS_DAYS_AFTER)
                        if datetime.date.today() < vanaf:
                            kaartje.beschikbaar_vanaf = localize(vanaf)
                        kaartjes.append(kaartje)
            # for
            del deelcomp_rk

            for deelcomp in deelcomps:
                if deelcomp.competitie == comp:
                    # 4 - registered
                    if 'B' <= comp.fase <= 'F':    # no longer shown from the RK phase onwards
                        kaartje = SimpleNamespace()
                        kaartje.titel = "Ingeschreven"
                        kaartje.tekst = "Overzicht ingeschreven leden voor de %s." % comp.beschrijving
                        kaartje.url = reverse('Vereniging:leden-ingeschreven', kwargs={'deelcomp_pk': deelcomp.pk})
                        if comp.afstand == '18':
                            kaartje.img = static('plein/badge_nhb_indoor.png')
                        else:
                            kaartje.img = static('plein/badge_nhb_25m1p.png')
                        kaartjes.append(kaartje)

                    # 5 - who shoots where
                    if deelcomp.inschrijf_methode == INSCHRIJF_METHODE_1 and 'B' <= comp.fase <= 'F':
                        kaartje = SimpleNamespace()
                        kaartje.titel = "Wie schiet waar?"
                        kaartje.tekst = 'Overzicht gekozen schietmomenten voor de %s.' % comp.beschrijving
                        kaartje.url = reverse('Vereniging:schietmomenten', kwargs={'deelcomp_pk': deelcomp.pk})
                        kaartje.icon = 'gamepad'
                        if comp.fase < 'B':
                            kaartje.beschikbaar_vanaf = localize(comp.begin_aanmeldingen)
                        kaartjes.append(kaartje)
            # for
        # for

        eval_open_taken(self.request)

        menu_dynamics(self.request, context, actief='vereniging')
        return context

# end of file
Article by Loz Kaye on The Lanchester about the CIA Torture Report. If the history of this century has been about anything so far, then it is the bargain of national security. A constant state of war carried out on a need-to-know basis. Our governments of various political hues, the NSA, CIA, GCHQ, have constantly asked for, even demanded, our trust. We're keeping you safer, trust us. We're acting within the law, trust us. We need the powers we ask for (and many more you don't know about), trust us. The shocking report to the Senate Intelligence Committee on CIA torture activities has revealed one tiny corner of the truth, one tiny corner of the misery the US - and by collusion its allies - has unleashed on the world. News outlets have shied away from describing the atrocities contained in it for what they actually are. I can't. It's rape, kidnap, mental cruelty, thuggery, torture and murder. Once and for all, this report shows how flawed that bargain of national security has become. The trust we have been asked to have in the war on terror and the rush to mass surveillance has been dangerously misplaced. The report is full of instances where the public and their elected representatives have been lied to. The CIA claimed that these "enhanced techniques" led to useful information, preventing terrorist attacks. The committee found that in no case examined was this true. Not one. CIA Deputy Director of Operations James Pavitt told the Senate Intelligence Committee in 2001 that they would be informed of each individual who entered CIA custody. Didn't happen. Pavitt denied torture, and in 2002 denied the existence of a detention facility. Lies. The CIA lied about the number of people detained. They lied about the videotaping of interrogations. They lied about using starvation. They lied about using sleep deprivation to a medically damaging extent. The idea that we should take the security services' word at face value after this is not just laughable, it's obscene. In lots of places, coverage of the report has been rather warped by the CIA's point of view. The failure was presented as mainly being that the torture was ineffective. In other words, that if it had been effective, then it might have been worth persevering with the anal rehydration and simulated drownings. To my mind that is obviously monstrous. What this has done, though, has been to dispel the Jack Bauer, 24 fantasy that for our spooks the ends justify the means and can be made to do so within a very strict timeframe with space for adverts. The constant claim has been that lives have been saved, and therefore complaining about collateral damage was naive or dangerous. We now know that those claims have been made falsely in the past and there is no need to take them as true without question in the future. Equally, the notion that this was done by a few "bad apples" has also been stripped away. Far from being the work of a few rogue agents, this torture programme was devised by contractor psychologists James Mitchell and Bruce Jessen. They formed a company worth $180m, and received $81m in payouts over seven years. This shows that the abuse was planned, systemic and well-funded. In all the detail of the report, as journalist Trevor Timm pointed out, there is one case that seems to sum up the whole miserable saga. Gul Rahman was tortured at the CIA black site known as the Salt Pit; he was chained to the floor and froze to death.
Footnote 32 explains curtly, "Gul Rahman, another case of mistaken identity." A human life, someone who lived, loved and was loved, ended up as a footnote by mistake. The favourite go-to phrase of the mass surveillance lobby is that if you have nothing to hide, then you have nothing to fear. Clearly, Gul Rahman had everything to fear, freezing to death as a footnote in history. In the globalised war on terror, we can all fear becoming another fatal footnote. Of course, some of us more than others. Currently, Muslims and people of Middle Eastern origin. But until the government and mainstream parties truly face up to what they have done, until we have a proper inquiry in the UK, and until the release of the Chilcot Report, then the powers that be deserve our fear, not our trust.
import { Field, ObjectType } from '@nestjs/graphql';
import { Rent } from '../entity/rent.entity';

@ObjectType()
export class RentReturn {
  @Field((type) => Boolean)
  success: boolean;

  // `String || Error` always evaluates to `String` at runtime,
  // so the field type is declared as String directly.
  @Field((type) => String, { nullable: true })
  error?: string | null;

  @Field((type) => Rent, { nullable: true })
  data?: Rent | null;
}

@ObjectType()
export class RentsReturn {
  @Field((type) => Boolean)
  success: boolean;

  @Field((type) => String, { nullable: true })
  error?: string | null;

  @Field((type) => [Rent], { nullable: true })
  data?: Rent[] | null;
}
Big Narstie created some drama in the Bake Off tent when he had to pull out halfway through the episode. The rapper and TV personality had completed day one of the challenges alongside Olympian Katarina Johnson-Thompson, MP Jess Phillips and comedian Johnny Vegas. When the celebrity bakers arrived for the second day of filming they were told that Big Narstie wouldn't be taking part in the next stage. Sandi was then revealed as the replacement for Big Narstie as she completed the second day of baking challenges. It's not been revealed exactly what was wrong with the rapper, but it was enough to make him unable to show off his baking skills for the second day. If you want more of the rapper, the second series of The Big Narstie Show is currently airing on Fridays on Channel 4. Has anyone been unwell on Bake Off before? It's not the first time that somebody has missed out on a Bake Off challenge. The 2018 series saw fan favourite Terry miss episode four completely due to illness, and the rest of the bakers carried on without him. His fellow contestants agreed that he should be allowed to return to the show the following week, which meant that nobody was eliminated in the week that he was away. Instead there was a double elimination in episode five, which saw Terry and Karen leave the tent. In series 5, which aired in 2014, contestant Diana left the show after episode 4 due to illness. She had suffered a head injury the day before the recording of the fifth episode and had to stay in A&E after losing her sense of taste and smell. The judges decided that nobody would be eliminated that week.
/** * Test parsing the file from PDFBOX-4339, which brought a * NullPointerException before the bug was fixed. */ @Test void testPDFBox4339() { try { Loader.loadPDF(new File(TARGETPDFDIR, "PDFBOX-4339.pdf")).close(); } catch (Exception exception) { fail("Unexpected Exception"); } }
Core-binding factor β (CBFβ), but not CBFβ-smooth muscle myosin heavy chain, rescues definitive hematopoiesis in CBFβ-deficient embryonic stem cells Core-binding factor β (CBFβ) is the non-DNA-binding subunit of the heterodimeric CBFs. Genes encoding CBFβ (CBFB), and one of the DNA-binding CBFα subunits, Runx1 (also known as CBFα2, AML1, and PEBP2αB), are required for normal hematopoiesis and are also frequent targets of chromosomal translocations in acute leukemias in humans. Homozygous disruption of either the Runx1 or Cbfb gene in mice results in embryonic lethality at midgestation due to hemorrhaging in the central nervous system, and severely impairs fetal liver hematopoiesis. Results of this study show that Cbfb-deficient mouse embryonic stem (ES) cells can differentiate into primitive erythroid colonies in vitro, but are impaired in their ability to produce definitive erythroid and myeloid colonies, mimicking the in vivo defect. Definitive hematopoiesis is restored by ectopic expression of full-length Cbfb transgenes, as well as by a transgene encoding only the heterodimerization domain of CBFβ. In contrast, the CBFβ-smooth muscle myosin heavy chain (SMMHC) fusion protein generated by the inv(16) associated with acute myeloid leukemias (M4Eo) cannot rescue definitive hematopoiesis by Cbfb-deficient ES cells. Sequences responsible for the inability of CBFβ-SMMHC to rescue definitive hematopoiesis reside in the SMMHC portion of the fusion protein. Results also show that the CBFβ-SMMHC fusion protein transdominantly inhibits definitive hematopoiesis, but not to the same extent as homozygous loss of Runx1 or Cbfb. CBFβ-SMMHC preferentially inhibits the differentiation of myeloid lineage cells, while increasing the number of blast-like cells in culture. (Blood. 2001;97:2248-2256) Genetic experiments in mice demonstrated that the CBFβ-SMMHC protein transdominantly inhibits wild-type Runx1:CBFβ function in vivo, in that mice heterozygous for a knocked-in
# ---------------- User Configuration Settings for speed-cam.py --------------------------------- # Ver 8.4 speed-cam.py webcam720 Stream Variable Configuration Settings ####################################### # speed-cam.py plugin settings ####################################### # Calibration Settings # =================== cal_obj_px = 310 # Length of a calibration object in pixels cal_obj_mm = 4330.0 # Length of the calibration object in millimetres # Motion Event Settings # --------------------- MIN_AREA = 100 # Default= 100 Exclude all contours less than or equal to this sq-px Area x_diff_max = 200 # Default= 200 Exclude if max px away >= last motion event x pos x_diff_min = 1 # Default= 1 Exclude if min px away <= last event x pos track_timeout = 0.0 # Default= 0.0 Optional seconds to wait after track End (Avoid dual tracking) event_timeout = 0.4 # Default= 0.4 seconds to wait for next motion event before starting new track log_data_to_CSV = True # Default= True True= Save log data as CSV comma separated values # Camera Settings # --------------- WEBCAM = True # Default= False False=PiCamera True=USB WebCamera # Web Camera Settings # ------------------- WEBCAM_SRC = 0 # Default= 0 USB opencv connection number WEBCAM_WIDTH = 1280 # Default= 1280 USB Webcam Image width WEBCAM_HEIGHT = 720 # Default= 720 USB Webcam Image height # Camera Image Settings # --------------------- image_font_size = 20 # Default= 20 Font text height in px for text on images image_bigger = 1 # Default= 1 Resize saved speed image by value # ---------------------------------------------- End of User Variables -----------------------------------------------------
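For context, here is a minimal sketch of how calibration constants like cal_obj_px and cal_obj_mm are typically combined to turn a pixel displacement into a real-world speed. The function below is hypothetical (it is not part of speed-cam.py itself) and assumes the elapsed time between motion events is known; it only illustrates the arithmetic the two calibration settings imply.

# Hypothetical illustration of the calibration arithmetic implied above;
# speed-cam.py's actual implementation may differ.

def px_to_speed_kph(px_moved, seconds_elapsed,
                    cal_obj_px=310, cal_obj_mm=4330.0):
    """Convert a pixel displacement over a time interval into km/h.

    cal_obj_px and cal_obj_mm together give the real-world size of one
    pixel at the distance of the measured object.
    """
    if seconds_elapsed <= 0:
        raise ValueError("elapsed time must be positive")
    mm_per_px = cal_obj_mm / cal_obj_px          # real-world size of one pixel
    distance_m = px_moved * mm_per_px / 1000.0   # px -> mm -> metres
    speed_mps = distance_m / seconds_elapsed     # metres per second
    return speed_mps * 3.6                       # m/s -> km/h

# Example: an object moving 155 px in 0.25 s at this calibration
# covers about 2.17 m, i.e. roughly 31 km/h.
print(px_to_speed_kph(155, 0.25))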
// common/objloader.h
#ifndef OBJLOADER_H
#define OBJLOADER_H

#include <glm/glm.hpp>
#include <GL/glew.h>
#include <string>
#include <vector>
#include "aabb.h"

bool loadQuadOBJ(const char * path,
    std::vector<glm::vec3> &out_vertices,
    std::vector<glm::vec3> &out_normals);

struct SimpleVertex
{
    glm::vec3 position, normal;
};

struct Mesh
{
    std::vector<SimpleVertex> vertices;
};

class MeshBin
{
private:
    const GLuint m_max_object_num = 256;
    size_t m_object_num{ 0 };
    // Parenthesized vector construction is required here: brace-init such
    // as {m_max_object_num, 0} would invoke the initializer_list
    // constructor and create a two-element vector instead of
    // m_max_object_num zero-filled elements.
    std::vector<GLuint> m_vao_id = std::vector<GLuint>(m_max_object_num, 0);
    std::vector<GLuint> m_vbo_id = std::vector<GLuint>(m_max_object_num, 0);
    std::vector<size_t> m_vb_size = std::vector<size_t>(m_max_object_num, 0);
    std::vector<size_t> m_vertex_num = std::vector<size_t>(m_max_object_num, 0);
    std::vector<Mesh> m_meshes; ///< binned meshes
    AABB m_aabb;

public:
    MeshBin() = delete;
    MeshBin(const std::string &filename);
    ~MeshBin()
    {
        for (size_t i = 0; i < m_object_num; i++)
        {
            glDeleteBuffers(1, &m_vbo_id[i]);
            glDeleteVertexArrays(1, &m_vao_id[i]);
        }
    }
    glm::vec3 Center() const { return m_aabb.Center(); }
    float LogestDim() const { return m_aabb.LongestEdge(); } // longest edge of the bounding box
    size_t size() const { return m_object_num; }
    size_t vertex_num(int index) const { return m_vertex_num[index]; }
    GLuint vao(int index) const { return m_vao_id[index]; }

private:
    void create_vaos();
};
#endif
/* Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.activiti.designer.export.bpmn20.export; /** * @author <NAME> */ public interface ActivitiNamespaceConstants { public static final String BPMN2_NAMESPACE = "http://www.omg.org/spec/BPMN/20100524/MODEL"; public static final String XSI_NAMESPACE = "http://www.w3.org/2001/XMLSchema-instance"; public static final String SCHEMA_NAMESPACE = "http://www.w3.org/2001/XMLSchema"; public static final String XPATH_NAMESPACE = "http://www.w3.org/1999/XPath"; public static final String PROCESS_NAMESPACE = "http://www.activiti.org/test"; public static final String ACTIVITI_EXTENSIONS_NAMESPACE = "http://activiti.org/bpmn"; public static final String ACTIVITI_EXTENSIONS_PREFIX = "activiti"; public static final String BPMNDI_NAMESPACE = "http://www.omg.org/spec/BPMN/20100524/DI"; public static final String BPMNDI_PREFIX = "bpmndi"; public static final String OMGDC_NAMESPACE = "http://www.omg.org/spec/DD/20100524/DC"; public static final String OMGDC_PREFIX = "omgdc"; public static final String OMGDI_NAMESPACE = "http://www.omg.org/spec/DD/20100524/DI"; public static final String OMGDI_PREFIX = "omgdi"; public static final String CLASS_TYPE = "classType"; public static final String EXPRESSION_TYPE = "expressionType"; public static final String DELEGATE_EXPRESSION_TYPE = "delegateExpressionType"; public static final String ALFRESCO_TYPE = "alfrescoScriptType"; public static final String EXECUTION_LISTENER = "executionListener"; public static final String TASK_LISTENER = "taskListener"; }
DEAD-box RNA helicases in Arabidopsis thaliana: establishing a link between quantitative expression, gene structure and evolution of a family of genes. The model genome of Arabidopsis thaliana contains a DEAD-box RNA helicase (RH) family of 58 members, i.e. almost twice as many as in the animal or yeast genomes. Transcript profiles for 20 AtRHs across nine different organs were obtained using real-time quantitative polymerase chain reaction (PCR). Two AtRHs exhibited plant-specific profiles associated with photosynthetic and sink organs. The other 18 AtRHs had the same transcript profile, and the transcription levels of these 'housekeeping' AtRHs were under strict quantitative control over a large range of values. Transcript levels may be very different between the most recently duplicated genes. The master regulatory element in the definition of the transcript level is the simultaneous presence of a TATA-box and an intron in the 5' untranslated region (UTR). There is a positive and highly significant correlation between the size of the 5' UTR intron and the transcription level, as long as a characteristic TATA-box is present. Our work on the housekeeping AtRHs suggests a scenario for the evolution of duplicated genes, leading to both highly and poorly transcribed genes in the same terminal branch of the phylogenetic tree. The general evolutionary drive of the AtRH family, after duplication of a highly transcribed ancestral AtRH, was towards an alteration of the transcriptional activity of the divergent duplicates through successive events of suppression of the TATA-box and/or the 5' UTR intron.
Endtime Ministries
Endtime Ministries is an American Pentecostal Christian organization and teacher of biblical prophecy founded and headed by minister Irvin Baxter Jr. The organization is based in Plano, Texas. It focuses on explaining world events from its view of the Bible, with an emphasis on prophecy and the exposition of eschatological theories. These tend to follow a general Pentecostal and Fundamentalist/Evangelical exegesis, with emphasis upon various modern nations as being allegedly prophesied in the Bible, and events heralding the impending advent of the Antichrist. Some of these predictions include a new world war that will kill up to two billion people, and the identification of Britain, the reunified Holy Roman Empire, Russia and Germany with the "four beasts" of the Book of Daniel. Endtime Ministries produces a biblical prophecy magazine, Endtime Magazine, together with an internationally syndicated radio talk show, Politics and Religion, which is heard on stations such as KLNG, KPSZ, KKPZ, and KYFI, among others. It has also created a series of Bible study and prophecy books. In 2006, Endtime Ministries hosted a rally in its home city of Garland, Texas to protest the REAL ID Act of 2005, which Baxter linked to the Mark of the Beast prophesied in Revelation 13:15-18.
Assessing Interannual Urbanization of China's Six Megacities Since 2000
As a large and populous developing country, China has been in a stage of rapid urbanization since 2000. By 2018, China accounted for nearly 1/5 of global megacities. Understanding their urbanization processes is of great significance. Given the deficiencies of existing research, this study explored the interannual urbanization process of China's six megacities during 2000-2018 from four aspects, namely, the basic characteristics of urban land expansion, expansion types, the cotemporary evolution of urban land-population-economy, and urbanization effects on the local environment. Results indicated that urban lands in China's six megacities increased by 153.27%, with distinct differences across megacities; all six megacities experienced expansion processes from high speed to low speed, but they varied greatly in detail; the speeds of urban land expansion in China's megacities outpaced population growth but lagged behind GDP increase; and urbanization triggered an environmental crisis, represented by the decline in vegetation coverage and the increase in land surface temperature in newly expanded urban lands. This study enriched the content of urbanization, supplemented the existing materials on megacities, and provided a scientific reference for designing rational urban planning.

Introduction
Global urbanization has progressed considerably, with the proportion of urban population increasing from 33% in 1950 to 55% in 2018 and urban lands expanding at twice the rate of population growth. Urbanization has both advantages and disadvantages. On the one hand, it provides increased urban housing, convenient transportation, sophisticated education, and excellent medical treatment and social services. On the other hand, irrational urbanization may have negative effects on socioeconomic and eco-environmental development, such as rising housing prices, traffic congestion, and emissions of domestic garbage and vehicle exhaust. All these issues pose both opportunities and challenges to the further development of urban areas in the 21st century. Research on urbanization processes can provide basic reference material for designing rational urban planning and building the inclusive, safe, resilient, and sustainable human settlements called for by the 2030 Development Agenda. Urbanization, characterized by demographic migration from rural to urban areas and the conversion of territorial resources from other types to urban lands, has remarkably increased cities in both number and scale, thereby facilitating the formation of megacities. According to the United Nations, only two megacities existed in the world in 1950; this number increased to 29 in 2014 and will exceed 40 by 2030. Megacities are defined as cities with over 10,000,000 inhabitants.

Materials and Methods
The main workflow of our study includes five steps. Firstly, all datasets were downloaded and preprocessed. Secondly, urban lands of China's six megacities were delineated (Section 3.2). Thirdly, urban expansion types were characterized (Section 3.3). Fourthly, the cotemporary evolution of urban land-population-GDP was analyzed (Section 3.4). Fifthly, the effect of urban expansion on the environment was monitored (Section 3.5).
Data Acquisition and Preprocessing
In this study, 135 scenes of multisource remotely sensed images with 30-80 m spatial resolutions, less than 10% cloud cover, and vigorous vegetation growth were applied to delineate the urban lands of China's six megacities during 2000-2018 (Appendix A Table A1). With a spatial resolution of nearly 1000 m, Terra MOD13A3 and MOD11A2 8-day composite products were selected to obtain the VC and LST in summer from 2000 to 2018 (Appendix A Table A2). Urban populations and GDPs of China's six megacities during 2000-2016 were collected from the China City Statistical Yearbook (Table 2). During data preprocessing, all multisource remotely sensed images and MODIS products were resampled to 30 and 1000 m pixel sizes, respectively, using the Albers equal-area conic coordinate system.

Urban Land Extracting
The visual interpretation method was used to extract urban lands in China's six megacities, following a four-step procedure (Figure 2). Step 1: Apart from the data preprocessing, a clear definition of urban lands is necessary. Our research focused on the central built-up areas, which were defined as urban lands where buildings have been developed contiguously with available municipal utilities and public facilities. Step 2: The band composition was executed based on standard false-color synthesis, and the images were enhanced using linear contrast stretching and histogram equalization. This step made differences (i.e., color and hue) between various land use/cover types more obvious. Besides, the accuracy of geometric correction, in terms of the relative position error of the same feature point, does not exceed 2 pixels. Step 3: In accordance with the various interpretation symbols shown in Figure 2, urban lands were separated from other land use/cover types.
When interpreting the remotely sensed images of the next year, the original urban lands were applied as the base layers, and the urban lands newly developed during that year were delineated. Step 4: Quality control was executed through field validation and repeated interpretation. Field verification was mainly employed in 2000, 2005, 2008, 2010, and 2015 by taking photos and recording the situation of local land use/cover in tables. The repeated interpretation was employed annually by referring to the interpretation symbols, the Google Earth platform, and topographic maps. If the urban lands showed low accuracies (less than 90%), they were re-interpreted. This procedure was performed on the Modular GIS Environment (MGE) platform, which was developed by the Intergraph Company of America and has strong image processing functions. The visual interpretation was accomplished by professional interpreters with rich experience. More detailed information on the procedure has been elaborated by Zhang et al. Noticeably, given the difficulty in obtaining high-quality remotely sensed images of China's six megacities in some years, interpolated urban land areas for these years were applied as a supplement by executing the method of Liu et al.

Measurement for Characterizing Urban Land Expansion Types
As the result of the comprehensive development of factors such as economy, society, culture, and national policies, urban land expansion in different cities generally exhibits distinct differences in morphology. Area and perimeter dynamics are two of the basic manifestations of urban land expansion. In this study, the combination of the growth rate of urban land area (GRA) and the growth rate of urban land perimeter (GRP) was applied to characterize the urban expansion types of China's megacities, where A_t1 and A_t2 are the urban land areas at t1 and t2, respectively, and P_t1 and P_t2 are the urban land perimeters at t1 and t2, respectively (a reconstruction of the growth-rate computation and the type classification is sketched below). When characterizing urban land expansion types, the following steps were used according to the method proposed by Shi et al.: calculating the GRA and GRP, standardizing the GRA and GRP by employing Z-score normalization in the SPSS 2.0 software, and dividing the normalized GRA and GRP into four levels, namely Levels 1 (minimum, −1), 2 (−1, 0), 3 (0, 1), and 4 (1, maximum). Figure 3 shows that each block is labeled with a two-digit code, in which the first and second digits indicate the GRA and GRP levels, respectively. High values of GRA and GRP indicate high expansion speeds and less compact morphologies. All newly expanded urban lands were then categorized into four expansion types: Types A, B, C, and D represent loose expansion at high speed, compact expansion at high speed, loose expansion at low speed, and compact expansion at low speed, respectively.
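The formula numbers for GRA and GRP were lost in extraction. From the variable definitions above, the growth rates and the Shi et al.-style classification can plausibly be reconstructed as the Python sketch below. The percentage form of the growth rates and the use of the population standard deviation in the z-score are assumptions, and the function names are ours, not the paper's.

# Hypothetical reconstruction of the GRA/GRP computation and the
# Type A-D classification described above.
from statistics import mean, pstdev

def growth_rate(v_t1, v_t2):
    """Relative growth between two dates, as a percentage."""
    return (v_t2 - v_t1) / v_t1 * 100.0

def z_scores(values):
    """Z-score normalization over a set of cities/years."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def level(z):
    """Map a normalized growth rate onto the four levels used in the paper."""
    if z <= -1:
        return 1
    if z <= 0:
        return 2
    if z <= 1:
        return 3
    return 4

def expansion_type(gra_level, grp_level):
    """Types A-D: high/low speed (GRA) crossed with loose/compact form (GRP)."""
    fast = gra_level >= 3    # area grows quickly -> high-speed expansion
    loose = grp_level >= 3   # perimeter grows quickly -> less compact shape
    if fast and loose:
        return "A"  # loose expansion at high speed
    if fast:
        return "B"  # compact expansion at high speed
    if loose:
        return "C"  # loose expansion at low speed
    return "D"      # compact expansion at low speed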
Growth Rates of Urban Population (GRPOP) and Gross Domestic Product (GRGDP)
Urbanization is a complex process involving multiple developments in physical, demographic, and socioeconomic dimensions. Therefore, apart from the scale and morphology of urban land, urban population and GDP were also regarded as representative indicators of the urbanization process. Research on the cotemporary evolution of these indicators can help further the understanding of the urbanization process of China's megacities. Given the accessibility of statistical data and the definition of urban lands, year-end household-registered populations and GDPs in city districts from 2000 to 2016 were applied as the urban populations and GDPs of China's six megacities. The growth rate of urban population (GRPOP) and the growth rate of urban GDP (GRGDP) were calculated as relative changes between two dates, where O_t1 and O_t2 are the urban populations at t1 and t2, respectively, and D_t1 and D_t2 are the urban GDPs at t1 and t2, respectively (reconstructed formulas are given below).

Vegetation Coverage (VC) and Land Surface Temperature (LST)
As a crucial factor in achieving urban sustainability, the dynamics of the urban environment have attracted considerable attention from researchers in the remote sensing community during the past several years. Previous research has shown that VC and LST are two important indicators of environmental conditions. To explore the urbanization effects on the local environment of China's megacities, the changes of VC and LST in both pre-grown urban lands in 2000 ("R1" hereinafter) and newly expanded urban lands during 2000-2018 ("R2" hereinafter) were calculated, where Δ_i indicates the change in VC or LST of the ith pixel from 2000 to 2018, and I_i_2000 and I_i_2018 are the VC or LST of the ith pixel in 2000 and 2018, respectively. The VC in 2000 and 2018 is calculated from NDVI, where V_i indicates the VC of the ith pixel, N_i is the NDVI value of the ith pixel in the MOD13A3 products, and N_s and N_l are the minimum and maximum NDVI values of the MOD13A3 products, respectively. The LST in 2000 and 2018 is derived from the MOD11A2 products, where T_i indicates the LST of the ith pixel and D_i is the DN value of the ith pixel.

Basic Characteristics of Urban Land Expansion
The magnitude of urban lands varied greatly among China's six megacities (Figure 4a). In 2000, Beijing had the largest urban land area of 830.78 km², followed by Shanghai (598.78 km²) and Guangzhou (491.75 km²). Urban lands in Shenzhen (461.82 km²) and Tianjin (266.23 km²) ranked fourth and fifth. Chongqing had the smallest urban land area of 161.43 km², less than 1/5 of that in Beijing. In 2003, the urban land area of Shenzhen surpassed that of Guangzhou and ranked third. Spatially, the newly expanded urban lands of Beijing were mainly distributed in the north, east, and south directions (Figure 1). Tianjin witnessed urban land expansion in all directions. For Shanghai, urban land expansion mainly emerged in the west, south, and east directions. Urban lands in Chongqing and Guangzhou mainly expanded along the south-north directions. For Shenzhen, the urban lands expanded minimally in the south direction; however, the north direction underwent dramatic expansion. During 2000-2018, the distributions and contribution rates of the four urban land expansion types were uneven across the six megacities (Figure 6).
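The formula bodies referenced in the two methods subsections above were lost in extraction; from the variable definitions they can plausibly be reconstructed as follows. Note that the numeric scaling in the LST expression (the 0.02 factor and the Kelvin-to-Celsius offset) is an assumption based on the standard MOD11A2 product specification, not something stated in the recovered text.

\mathrm{GRPOP} = \frac{O_{t2} - O_{t1}}{O_{t1}} \times 100\%, \qquad
\mathrm{GRGDP} = \frac{D_{t2} - D_{t1}}{D_{t1}} \times 100\%

\Delta_i = I_{i\_2018} - I_{i\_2000}, \qquad
V_i = \frac{N_i - N_s}{N_l - N_s}, \qquad
T_i = 0.02 \, D_i - 273.15 \;(^{\circ}\mathrm{C})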
Type A was the dominant expansion type in Beijing with a proportion of 43.63%, followed by Types B (30.94%), D (17.12%), and C (8.31%). The four expansion types were distributed evenly in the north, east, and south directions. For Tianjin, Types A (33.18%) and B (31.27%) were the main expansion types, supported by Types C (10.46%) and D (25.09%). Although Tianjin witnessed urban land expansion in all directions, the distributions of the four expansion types were distinct; the southeast direction of Tianjin mainly expanded through Type B. Nearly 2/3 of the newly developed urban lands in Shanghai were from Types A and B, while the other 1/3 were from Types C and D. Type A dominated the west direction, whereas Type D mainly took place in the east direction. Types A, C, and D contributed 32.03%, 22.97%, and 45.00% to the urban land expansion of Chongqing, respectively. Similar to Chongqing, Guangzhou also expanded to the south through Types C and D, and to the north through Type A. In total, 75.04%, 18.97%, and 6.00% of the newly developed urban lands adopted Types A, D, and C, respectively. For Shenzhen, the north direction underwent dramatic expansion, mainly through Type A. Overall, Type A was the main urban land expansion type of China's megacities, Type C contributed relatively little to urban land expansion, and Type B made no contribution to urban land expansion in Guangzhou and Chongqing. The urban population grew faster than urban land areas in Shenzhen and Chongqing (Figure 8), with average GRPOPs of 7.02% and 7.30% and GRPOP dispersions of 0.42 and 0.06, respectively. The average GRPOPs in the four other megacities ranged from 1.57% (Shanghai) to 2.84% (Tianjin) and clearly lagged behind their corresponding GRAs (3.35-7.77%).
Beijing, Chongqing, Guangzhou, Shanghai, Shenzhen, and Tianjin showed high average GRGDPs of 16.70%, 20.96%, 14.84%, 12.91%, 16.92%, and 17.45%, respectively, and their GRGDP dispersions ranged from 0.08 (Guangzhou) to 0.19 (Beijing). Moreover, all six megacities presented higher GRGDPs than GRAs. Overall, the speed of urban land expansion in China's megacities outpaced the corresponding population growth but lagged behind the GDP increase from 2000 to 2016.

Urbanization Effects on Local Environment
From 2000 to 2018, VC dynamics in R1 and R2 varied greatly among China's six megacities (Figure 9). The VCs of the six megacities in R1 showed increasing trends, and their average value grew by 3.32% during the past 18 years. VC increases in Beijing (8.09%) and Tianjin (4.27%) were larger than the average. The VC increase in Shanghai was 3.02% and ranked third. However, VC increases in Shenzhen, Guangzhou, and Chongqing did not reach 3.00%. By contrast, the VCs of the six megacities in R2 mainly exhibited descending trends, with their average value falling by 5.07%. The most obvious decline in VC emerged in Chongqing (15.40%), followed by Shanghai (6.82%) and Guangzhou (5.40%). The VC of Tianjin fell by 3.55%, which was lower than the average. Beijing showed the minimal VC decrease of 0.90%, whereas the VC in Shenzhen did not decrease at all. Moreover, the LSTs of the six megacities presented increasing trends, gentle in R1 and dramatic in R2. The average LST increased by 1.65 °C in R1 and 2.12 °C in R2. For R1, the highest LST increase emerged in Chongqing (3.92 °C), while the lowest occurred in Tianjin (0.30 °C). LST increases in Guangzhou and Shanghai ranked second and third, growing by 2.17 °C and 2.02 °C, respectively. LST increases in Beijing and Shenzhen were lower than 1.00 °C, far below the average. For R2, the LST increases in Chongqing, Shanghai, and Guangzhou surpassed the average, at 4.38 °C, 2.70 °C, and 2.22 °C, respectively. Meanwhile, the LST increases in Beijing (1.26 °C), Shenzhen (1.35 °C), and Tianjin (0.84 °C) were lower than the average.
Discussions
The 2030 Development Agenda devotes a specific goal to cities, which aims to "make cities and human settlements inclusive, safe, resilient and sustainable". Understanding urbanization processes can help to achieve this goal. As important city forms carrying dense populations and social activities, megacities in China were selected as the study areas in this work. Urban lands in China's six megacities were delineated from multi-source remotely sensed images using the visual interpretation method. To ensure an accuracy of the monitoring results of more than 90%, this procedure was executed based on strict criteria and accomplished by professional interpreters with rich experience. Section 4.1 elaborated the differences in urban lands among China's six megacities in terms of magnitudes and expansion directions. Apart from the various historical, socioeconomic, and political backgrounds of the six megacities illustrated in Section 2, natural terrain (Table 3) might be another vital factor influencing urban land expansion. The interpretation results provided the data base for subsequent research. However, the spatial resolutions of the multi-source remotely sensed images used in this work are 30-80 m; therefore, the urban land products were only applied as the reference extents when analyzing urbanization effects on the environment. In the future, more remotely sensed images with various spatial resolutions should be used to obtain urban land products meeting the needs of multiple scales.

Table 3. Basic information of China's six megacities.
Beijing: covers a wide topographic gradient from the mountainous areas in the north and west to the plain areas in the central, south, and east.
Tianjin: relatively flat.
Shanghai: relatively flat.
Chongqing: the famous "mountainous city", with many S-N-trending mountains and a complex elevation ranging from 75 m to 2800 m.
Guangzhou: higher in the northeast and lower in the southwest.
Shenzhen: higher in the southeast and lower in the northwest.

Dynamics of the physical features (i.e., scale and morphology) of urban lands are vital indicators for understanding urbanization processes. Previous studies mostly characterized urban land sprawl types via edge-expansion, infilling, and outlying patterns. Shi et al. provided a simple approach to identify expansion types by synergistically considering areas and perimeters, and divided China's 340 cities into four types based on the urban land newly developed during 1987-2010. In this work, the interannual newly grown urban land of each megacity was applied as a basic unit to characterize expansion types, and the monitored epoch was updated to 2018. Therefore, more detailed information could be acquired. Figure 5 shows that Type A was the main expansion type in the six megacities before 2004, while Type D dominated urban land expansion after 2012.
Although urban land expansion in the six megacities exhibited a similar tendency from high GRA/GRP to low GRA/GRP, they underwent diverse sprawl processes. For instance, expansion types presented distinct differences during 2004-2012; they changed frequently in the coastal megacities (i.e., Tianjin and Shanghai) but gently in the four other megacities. Accordingly, the four types contributed distinctly to urban land expansion in the six megacities. By employing this method, the expansion types of more cities in China or other countries can be characterized in the future. Urbanization is a complex process involving many aspects. Apart from uncontrollable urban sprawl, remarkable economic growth, continuous population explosion, and negative effects on the environment are also characteristics considered in the urbanization process. Results of this work revealed two problems existing in China's six megacities since the onset of the 21st century. From a statistical perspective, urban lands, populations, and GDPs in China's six megacities grew at different speeds; specifically, the speed of urban land expansion outpaced population growth but lagged behind GDP increase. This finding also supports the standpoints of Fei et al. From an environmental perspective, the rise in LST has become a problem that cannot be ignored. Besides, VC in R2 presented an obvious decline because a proportion of vegetation had been encroached upon by the newly expanded urban lands. However, VC in R1 showed a slight increase; according to Qian et al., this phenomenon can be ascribed to the great efforts in increasing urban greenspace by the local governments. In this study, the research on VC and LST was just a preliminary attempt; however, it provides reference material and new ideas for further studies. More exploration of urbanization effects on the environment should be executed in the future. Specially, urban sustainability is defined as an adaptive capacity that can balance social wellbeing, economic development, and environmental protection. Li et al. stated that the sustainable development of megacities faces four major challenges, involving land subsidence, environment, traffic, and energy. According to the results of this work, there is still a distance to go to achieve sustainability for China's six megacities. Natural conditions are stable factors that cannot be changed frequently during a short period, but social factors can be rationally regulated and controlled. In the future, helpful measures and special urban planning should be applied to the six megacities to mitigate the negative effects of urbanization on the local environment. Overall, obtaining sustainability in China's cities is an achievable goal that requires the joint efforts of the government and ordinary people. In addition, this work characterized China's six megacities from limited aspects, providing some materials as references when designing rational urban planning. However, these aspects cannot thoroughly reflect the urbanization of China's six megacities. More key indicators (i.e., population density, air temperature, food production, cropland losses, etc.) of cities and megacities should be investigated in the future.

Conclusions
and Chongqing (370.33 km²), respectively. The contributions of urban land expansion types varied greatly in China's six megacities.
Two expansion types, loose expansion at high speed and compact expansion at low speed, dominated the urban land expansion in the early and later years, respectively. Urban land, population, and GDP showed an uneven evolution: GDP increased the fastest (1140.57 billion RMB), followed by urban land expansion (4036.33 km²), whereas population growth (31.80 million persons) was the slowest. Urbanization resulted in distinct environmental effects. Vegetation coverage in newly expanded urban lands decreased significantly, whereas that in pre-grown urban lands increased slightly. Land surface temperatures in newly expanded urban lands exhibited a higher increase than those in pre-grown urban lands. This study enriched the content of urbanization, supplemented the existing materials on megacities, and provided a scientific reference for designing rational urban planning.
import { useTheme } from "@material-ui/core";
import { ColDef, DataGrid, ValueFormatterParams } from "@material-ui/data-grid";
import { Delete, PlayCircleOutline } from "@material-ui/icons";
import React from "react";
import { useDispatch, useSelector } from "react-redux";
import { AppChip } from "../../atom/AppChip";
import AppIconButton from "../../atom/AppIconButton";
import { removeBidCase, simulateBidCase } from "../../../store/bidCases";

export type BidCaseTableProps = {
  height: number;
};

export function BidCaseTable({ height }: BidCaseTableProps) {
  const theme = useTheme();
  const { bidCases } = useSelector((s) => s.bidCases);
  const dispatch = useDispatch();

  // Status chip colored per state, plus a "simulate" button for cases
  // that are still waiting to be processed.
  const renderStatus = (params: ValueFormatterParams) => {
    const status = params.value as string;
    const color = () => {
      switch (status) {
        case "waiting":
          return theme.palette.info.main;
        case "active":
          return theme.palette.warning.main;
        case "completed":
          return theme.palette.success.main;
        case "failed":
          return theme.palette.error.main;
        default:
          // Fallback so an unknown status still renders with a valid color.
          return theme.palette.text.primary;
      }
    };
    return (
      <div style={{ display: "flex", alignItems: "center" }}>
        <AppChip
          size="small"
          label={status === "active" ? "in processing" : status}
          variant="outlined"
          color={color()}
        />
        {status === "waiting" && (
          <AppIconButton
            size="small"
            color={theme.palette.primary.main}
            onClick={() =>
              dispatch(simulateBidCase(params.getValue("id") as number))
            }
          >
            <PlayCircleOutline />
          </AppIconButton>
        )}
      </div>
    );
  };

  // Delete button rendered in the last column of every row.
  const renderDelete = (params: ValueFormatterParams) => {
    return (
      <AppIconButton
        size="small"
        color={theme.palette.error.dark}
        onClick={() => dispatch(removeBidCase(params.getValue("id") as number))}
      >
        <Delete />
      </AppIconButton>
    );
  };

  const columns: ColDef[] = [
    { field: "id", headerName: "ID", flex: 1 },
    { field: "buyerCount", headerName: "Buyer Count", flex: 1 },
    { field: "sellerCount", headerName: "Seller Count", flex: 1 },
    { field: "minBuyPrice", headerName: "Min Buy Price", flex: 1 },
    { field: "maxBuyPrice", headerName: "Max Buy Price", flex: 1 },
    { field: "minSellPrice", headerName: "Min Sell Price", flex: 1 },
    { field: "maxSellPrice", headerName: "Max Sell Price", flex: 1 },
    { field: "minBuyVolume", headerName: "Min Buy Volume", flex: 1 },
    { field: "maxBuyVolume", headerName: "Max Buy Volume", flex: 1 },
    { field: "minSellVolume", headerName: "Min Sell Volume", flex: 1 },
    { field: "maxSellVolume", headerName: "Max Sell Volume", flex: 1 },
    { field: "agreedPrice", headerName: "Agreed Price", flex: 1 },
    {
      field: "status",
      headerName: "Status",
      width: 150,
      renderCell: renderStatus,
    },
    { field: "Delete", headerName: "", flex: 1, renderCell: renderDelete },
  ].map((column) => {
    return { ...column, disableClickEventBubbling: true };
  });

  return (
    <div style={{ height: `${height}px` }}>
      <DataGrid rows={bidCases} columns={columns} pageSize={25} rowHeight={40} />
    </div>
  );
}
Physiologically relevant changes in serotonin resolved by fast microdialysis. Online microdialysis is a sampling and detection method that enables continuous interrogation of extracellular molecules in freely moving subjects under behaviorally relevant conditions. A majority of recent publications using brain microdialysis in rodents report sample collection times of 20-30 min. These long sampling times are due, in part, to limitations in the detection sensitivity of high performance liquid chromatography (HPLC). By optimizing separation and detection conditions, we decreased the retention time of serotonin to 2.5 min and the detection threshold to 0.8 fmol. Sampling times were consequently reduced from 20 to 3 min per sample for online detection of serotonin (and dopamine) in brain dialysates using a commercial HPLC system. We developed a strategy to collect and to analyze dialysate samples continuously from two animals in tandem using the same instrument. Improvements in temporal resolution enabled elucidation of rapid changes in extracellular serotonin levels associated with mild stress and circadian rhythms. These dynamics would be difficult or impossible to differentiate using conventional microdialysis sampling rates.
Overexpression of human testis antigens in Escherichia coli host cells is influenced by site of expression and the induction temperature A panel of twenty human testis cDNA clones was expressed in an Escherichia coli expression system, and six clones were found to express identifiable fusion polypeptides. Expression was influenced not only by the site of localization of the polypeptide in the host cells, but also by the temperature used for induction. This emphasizes the need for both cytoplasmic and periplasmic expression of new antigens of unknown properties, as well as the use of induction temperatures of 30°C or lower. A majority of the expressed polypeptides were mainly in an insoluble form. Reducing the induction temperature to 30°C further improved the yield of the soluble fraction.
/******************************************************************************* * Copyright (c) 2005, 2008 BEA Systems, Inc. * All rights reserved. This program and the accompanying materials * are made available under the terms of the Eclipse Public License v1.0 * which accompanies this distribution, and is available at * http://www.eclipse.org/legal/epl-v10.html * * Contributors: * <EMAIL> - initial API and implementation * *******************************************************************************/ package org.eclipse.jdt.apt.tests.annotations.filegen; import java.io.IOException; import java.io.PrintWriter; import org.eclipse.jdt.apt.tests.annotations.BaseProcessor; import org.eclipse.jdt.apt.tests.annotations.ProcessorTestStatus; import com.sun.mirror.apt.AnnotationProcessorEnvironment; import com.sun.mirror.apt.Filer; public class FileGenLocationAnnotationProcessor extends BaseProcessor { public FileGenLocationAnnotationProcessor(AnnotationProcessorEnvironment env) { super(env); } public void process() { ProcessorTestStatus.setProcessorRan(); try { Filer f = _env.getFiler(); //$NON-NLS-1$ PrintWriter pwa = f.createSourceFile("test.A"); pwa.print(CODE_GEN_IN_PKG); pwa.close(); //$NON-NLS-1$ PrintWriter pwb = f.createSourceFile("B"); pwb.print(CODE_GEN_AT_PROJ_ROOT); pwb.close(); } catch (IOException ioe) { ioe.printStackTrace(); } } protected String CODE_GEN_IN_PKG = "package test;" + "\n" + "public class A" + "\n" + "{" + "\n" + "}"; protected String CODE_GEN_AT_PROJ_ROOT = "public class B" + "\n" + "{" + "\n" + " test.A a;" + "\n" + "}"; }
Cleveland Cavaliers guard Matthew Dellavedova was hospitalized for severe cramping after the team's 96-91 Game 3 victory over the Golden State Warriors in the NBA Finals on Wednesday, the team announced. The team said Dellavedova experienced cramping and needed an IV after playing 39 minutes and scoring 20 points. Dellavedova continued to receive treatment after being taken by ambulance to the Cleveland Clinic. ESPN's Marc Stein reported on Wednesday that Dellavedova is scheduled to be discharged "shortly" from the hospital and is expected to meet with the media later in the day. "Well, I mean, I know one thing I'm going to count on Delly is how hard he's going to play," Cavaliers forward LeBron James said, according to Cleveland.com. "He's going to give everything he's got. His body, he's going to throw his body all over the place." James had 40 points, 12 rebounds and eight assists as Cleveland took a 2-1 lead in the best-of-seven series. Game 4 is Thursday night in Cleveland. Cavaliers guard Iman Shumpert suffered a left shoulder injury in the first quarter after he ran into a screen set by Warriors forward Draymond Green. Shumpert will undergo tests to see if there is any damage to the shoulder. Shumpert missed 20 games earlier this season while playing for the New York Knicks due to problems with the same shoulder. "We just can't afford any more injuries," James said. "We just can't, especially from a guard perspective. I just thought about [Shumpert's] shoulder. As soon as it happened, I knew exactly which shoulder it was, and I was just hoping for the best." - Scooby Axson
Trapeziometacarpal arthroplasty. A clinical review. Silicone implants for the damaged trapeziometacarpal joint have been used for over 15 years. Relief of pain has been significant, with increasing grip and pinch strengths reported for up to one year following arthroplasty. Three problems remain with the use of these implants: instability, deformation and implant failure, and, more recently reported, silicone synovitis. Silicone synovitis is such a major concern that surgeons are now using allograft or autograft tendon as a spacer when resection arthroplasty of the trapezium is required.
PURPOSE To evaluate the outcome of cataract surgery in the Clinic of Ophthalmology, Kaunas University of Medicine (COKUM) and to compare it with the outcome data of the European Cataract Outcome Study Group (ECOSG). METHODS The study started on the 1st of October, 2000 and ended on the 30th of April, 2001, following the protocol of the European Cataract Outcome Study Group. Every patient at each participating unit having surgery during the first study month was evaluated. The study was closed 6 months after surgery. RESULTS The study enrolled 3944 patients, 361 of whom were from COKUM. The mean induced astigmatism was 0.86+/-0.21 D in COKUM and 0.63+/-0.23 D in ECOSG. The visual acuity of the operated eye was 0.3 or lower in 28.1 percent of COKUM patients, 0.4-0.7 in 31.7 percent, and 0.8 or higher in 40.2 percent; in the whole study group the percentages were 11.8, 27.6 and 60.6, respectively. 44.3 percent of COKUM patients underwent phacoemulsification, while in the ECOSG this procedure was the most common (91.8 percent). The rate of complications during surgery was 5.5 percent of all cases in COKUM, while in the European countries it was 3.7 percent. CONCLUSION Cataract surgery data collected from 39 units in 18 European countries allowed participants to compare their performance with that of colleagues in an anonymous manner. This study is also an indicator of the development of cataract surgery in COKUM and in Lithuania.
I started off 2011 by attending the Sundance Film Festival for the first time. I was only there for a few days but got to see the films Becoming Chaz and Gun Hill Road. I left the film festival feeling infinitely inspired as an actress, artist, and trans woman. I was certain that the game would be forever changed for trans folks in the media because of these two films. Chaz Bono, the subject of Becoming Chaz, of course, went on to have a groundbreaking year for transgender visibility. Harmony Santana, the transgender actress who plays Michael/Vanessa in Gun Hill Road, has won critical raves and public adulation for her moving performance as a teen struggling for paternal acceptance as she begins her gender transition. She was recently nominated for an Independent Spirit Award for her role. She is the first trans woman to be so honored. 2011 has been a notable year for other transgender actors in film and on television, as well, including yours truly, though my films have yet to be released. What's notable for me, I think, is that I got the opportunity to play dynamic, complicated characters in six different independent films in one year. I have never done six films in one year. This suggests to me that more films are being written and produced with transgender characters and that there is a willingness to hire trans actors to play these roles. In the case of two of the films, I got to play characters that weren't written as transgender. This seems like very good news for transgender representation in the media. The day I returned from Sundance, I had a call back for the film Musical Chairs. Musical Chairs is a film about wheelchair ballroom dancing, which is very popular in Europe and Asia, though it has yet to find the same popularity in America. I play Chantelle, a flirtatious busybody who is an old-fashioned romantic at heart. Chantelle is transgender and a paraplegic. I got to learn how to do a tango and waltz in a wheelchair and in the process developed a new respect and admiration for people with disabilities. The film is directed by Susan Seidelman, who directed Desperately Seeking Susan, She-Devil, and the pilot of Sex and the City, among other gems. Musical Chairs comes to theaters nationwide in March 2012. In the independent film Carl(a), I play Cinnamon, the self-destructive best friend of the title character. Both Carla and Cinnamon are transgender. Carla is played by the young trans actress Joslyn Defreece, who makes her feature film debut with Carl(a). It was a pleasure getting to know Joslyn and working with her. She has a beautiful, raw, emotional instrument and was utterly committed to bringing truth to every moment of her tour de force of a performance. Carla is saving for reassignment surgery by some unconventional means. She meets a guy and falls in love, but he doesn't want her to have the surgery. I was so excited about this film because this is a very real story I have seen time and again over the years with people in my life but have never seen it told in a film so truthfully. The film also co-stars Mark Margolis and Gregg Bello. Carl(a) is currently being shopped to film festivals internationally. In an article about transgender performers in the December 8-14, 2011 issue of Backstage, Simi Horwitz writes, "Casting director Sig De Miguel ... looks forward to the time when a character's transgender status is incidental to the script and an actor's trans identity is irrelevant to casting. 'You may be born male, but you're a woman now,' De Miguel says." 
De Miguel represents a growing number of industry professionals who are open to casting trans actors in roles that aren't necessarily written as trans. He cast three of the films in which I acted in 2011. He also cast Harmony Santana in Gun Hill Road. In 36 Saints, one of those films, I play the effusive party promoter Genesius. Nowhere in the script does it say that she is transgender, nor is it inauthentic to the story that she is. The highest profile fictional transgender film character of 2011 was the controversial role of Kimmy in the blockbuster The Hangover 2. The role was played by transsexual adult film actress Yasmin Lee. I had the pleasure of briefly meeting Yasmin a few years ago. She seemed very sweet. I hope this high-profile role has opened other doors for Lee. On television the transgender actress Jamie Clayton made her acting debut on the HBO original series Hung. Jamie was, of course, my first choice of co-stars for my short-lived VH1 makeover show TRANSform Me. I often joke with Jamie about having given her her first job in TV when she's now a huge star. It's been a pleasure to watch Jamie's growth as an actress and artist firsthand while studying with her at The Studio here in New York City. Jamie's second episode of Hung, the seventh of season 3, is surprisingly educational about trans identity, as well as being very moving. Ray, trying to bond with Kyla, Jamie's character, initiates a conversation about his son, whom he suspects is gay. Kyla asserts, "I'm not gay. I'm a woman." Ray asks, "When did that start for you, being a woman?" She replies, "When did you start being a man?" It's a beautiful, smart episode that shows Ray's character overcoming his transphobia.
Poor patency of the nasal passage has a negative impact on respiration. In 52 patients with increased nasal resistance, blood gases and acid-base balance were measured according to the method of Astrup before and after surgical treatment of the upper respiratory organs. We found a significant increase in paO2 (on average by 0.656 kPa; p less than 0.05) and in the buffer bases. The other parameters did not change to a statistically significant extent. The lower paO2 value in increased nasal resistance is most likely not due to hypoventilation. Altered nasopulmonary and nasothoracic reflexes exert a negative influence on pulmonary mechanics and the laryngotracheobronchial tonus, with subsequent alteration of the intrapulmonary distribution of the inspired air and of the ventilation-perfusion conditions of the lungs.
The invention relates to a reducer coupling useful for connecting conduit sections of different sizes, i.e., diameters, and a positioner tool for moving the coupling into a desired position for making the connection. The positioner tool also provides a means for turning the reducer coupling to thread it into a conduit section. There are many industrial operations in which a reducer coupling is used as a connection for joining two conduit sections of different sizes. The usual conduit connections are those where two sections of pipe are connected, or a pipe is connected to a valve connection, or a pipe is connected to a pump, or the like. For example, in oil field operations such as well stimulation and completion, it is common practice to use fittings known as swage nipples to connect the production casing with smaller piping. In these operations fluids are delivered from the smaller piping, usually under high pressure, through the nipple and into the production casing. To set up for such an operation, the usual procedure is to first lift the fitting into a "hook-up" position above the production casing. The large end of the nipple is then threaded into the production casing and turned down tight with a strap wrench, which fastens around the nipple just below the small end. Following this, the small end is connected into the delivery pipe with a conventional union fitting. Swage nipples now in use have several drawbacks, which make them less than satisfactory for the operations described above and for other commercial applications. One problem is that swage nipples are structurally weak, particularly at the point where the wrench clamps around the fitting. The weakness is caused by severe deformation of the construction material during fabrication of the fitting. As the strap wrench is tightened on the nipple, it tends to slip and score the metal surface, causing further weakness at this point on the fitting. This weakness is particularly undesirable because of the high-pressure conditions the fitting must endure during normal use. The structural weakness of swage nipples, plus the fact that the small end of each fitting defines a "neck" portion, creates another problem. Sometimes, particularly after an operation is completed, the fittings are dropped or otherwise mishandled. Frequently, such treatment causes the neck of the fitting to break off, so that it must be repaired or replaced for the next operation. This is both costly and inconvenient. The reducer coupling of this invention avoids the problems described above, and it is much cheaper to make than the swage nipples now available. For example, the present reducer coupling is a one-piece structure which can be easily lifted into the "hook-up" position described above and coupled to the production casing using a tool designed for that purpose. In addition, the present coupling is a much more durable structure than the prior swage nipples, and the smaller end of the fitting has a hub connection which is protected from breaking off or sustaining other damage during handling of the fitting.
/**
 * Called the first time the scene is rendered. We need to create the canvas
 * here because the GraphicsConfiguration must relate to the monitor on which
 * the runtime window is located, and not the default monitor, otherwise an
 * exception may be thrown.
 */
private void initFirstRender() {
    Window window = SwingUtilities.getWindowAncestor(panel);
    GraphicsDevice device = window.getGraphicsConfiguration().getDevice();
    GraphicsConfigTemplate3D template = new GraphicsConfigTemplate3D();
    canvas = new JCanvas3D(template, device);
    canvas.setResizeMode(JCanvas3D.RESIZE_IMMEDIATELY);
    canvas.setPreferredSize(new Dimension(100, 100));
    canvas.setSize(canvas.getPreferredSize());
    // Empty listener: ensures the lightweight canvas receives mouse events.
    canvas.addMouseListener(new MouseAdapter() {
    });
    panel.add(canvas, BorderLayout.CENTER);
    universe = new SimpleUniverse(canvas.getOffscreenCanvas3D());
    universe.getViewingPlatform().setNominalViewingTransform();
    if (backgroundColor != null) {
        float r = (float) backgroundColor.getRed() / 255f;
        float g = (float) backgroundColor.getGreen() / 255f;
        float b = (float) backgroundColor.getBlue() / 255f;
        Background background = new Background(r, g, b);
        BoundingSphere sphere = new BoundingSphere(new Point3d(0, 0, 0), 1000000);
        background.setApplicationBounds(sphere);
        sceneRoot.addChild(background);
    }
    View view = canvas.getOffscreenCanvas3D().getView();
    view.setBackClipDistance(3000.0d);
    view.setFrontClipDistance(0.01d);
    initAdditional();
    sceneRoot.compile();
    universe.getLocale().addBranchGraph(sceneRoot);
    setPause(true);
}
/**
 * Return the string representation of the qualified name using the
 * '{ns}foo' notation. Performs string concatenation, so beware of
 * performance issues.
 *
 * @return the string representation of the qualified name
 */
public String toNamespacedString() {
    return (_namespaceURI != null) ? ("{" + _namespaceURI + "}" + _localName)
                                   : _localName;
}
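For illustration, a standalone sketch of the '{ns}foo' notation the method produces; QNameDemo and its local variables are hypothetical and only mirror the method's two branches:

public class QNameDemo {
    public static void main(String[] args) {
        String namespaceURI = "http://example.com/ns";  // hypothetical input
        String localName = "foo";                       // hypothetical input
        // Same branch logic as toNamespacedString().
        String qualified = (namespaceURI != null)
                ? ("{" + namespaceURI + "}" + localName)
                : localName;
        System.out.println(qualified);  // prints {http://example.com/ns}foo
    }
}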
Recurrent Pleomorphic Adenoma of the Parotid Gland: Role of Neutron Radiation Therapy

Abstract: Recurrent pleomorphic adenoma (RPA) of the parotid gland represents a challenging task for maxillofacial surgeons. The role of radiotherapy in the treatment of RPA of the parotid gland has been examined in previous studies, and its use has been considered questionable. The aim of our article was to analyze and illustrate a case of RPA, initially treated with enucleations at another institution, showing a multinodular pattern with positivity for S-100 protein and cytokeratin, managed with conservative parotidectomy and neutron radiotherapy.
/*
 * <NAME>, <NAME>, <NAME>
 * Runs the GUI
 */

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <string>

#include "creature.h"

QT_BEGIN_NAMESPACE
namespace Ui { class MainWindow; }
QT_END_NAMESPACE

class QLabel;
class QProgressBar;

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    MainWindow(QWidget *parent = nullptr);
    ~MainWindow();

    Creature* realMatt;
    Creature* evilMatt;
    std::string *que;

    void run();
    void run(Creature player, Creature enemy);

private slots:
    void on_pushButton_clicked();
    void on_pushButton_2_clicked();
    void on_pushButton_3_clicked();
    void on_pushButton_4_clicked();
    void on_progressBar_valueChanged(int value);
    //void on_label_2_linkActivated(const QString &link);

private:
    Ui::MainWindow *ui;
    // Sets up a fighter's stats and binds its name label and health bar.
    void initializeFighter(Creature *fighter, QLabel *personalDisplay, QProgressBar *healthbar);
};

#endif // MAINWINDOW_H
A low temperature polysilicon thin film transistor (LTPS-TFT) possesses better electrical performance than an amorphous silicon (a-Si) thin film transistor. The size of an LTPS-TFT may be smaller than that of an a-Si TFT, so the light penetration rate can be increased, which in turn reduces the load on the backlight module of a liquid crystal display panel and extends the service life of the panel. Furthermore, a low temperature polysilicon (LTPS) film can be processed to form a high-speed CMOS (Complementary Metal Oxide Semiconductor) driver circuit system directly on the substrate, which requires fewer pins for an external printed board and fewer connection points for wirings, lowering the probability of defects in the liquid crystal display panel and increasing its endurance.

In a low temperature polysilicon thin film transistor, a polysilicon thin film is used for the active layer. In the prior art, to form the polysilicon active layer, an amorphous silicon thin film is first deposited as a precursor film, and the precursor film is then crystallized into a polysilicon film by, e.g., excimer laser annealing. However, in this method, the pulsed laser generated by the excimer laser has a short pulse width and the melting time thus obtained is only tens of nanoseconds; the crystallization rate is therefore fast, resulting in grains of small size, which tends to generate many grain boundaries in the channel, reducing carrier mobility and increasing leakage current. In addition, since an amorphous silicon thin film is used as the precursor film and amorphous silicon still has a high melting point, while the energy for laser crystallization is limited to a certain range, at low energy the silicon that can be completely molten concentrates in the superficial layer. In the underlying layer, the temperature is lower than the melting point for crystallizing silicon, so it exhibits a semi-molten state. Crystallization then grows upwards from the molten seed crystal, and the resulting polysilicon appears columnar, which further limits improvements in carrier mobility. If, however, the energy density of the incident laser is raised, it is likely to produce non-uniform crystalline grains with obvious bumps, which adversely influences subsequent film deposition.
A study on the role and mechanism of the TLR4/NF-κB pathway in cognitive impairment induced by cerebral small vessel disease.

OBJECTIVE: To investigate the role and potential mechanism of the Toll-like receptor 4 (TLR4)/nuclear factor-kappa B (NF-κB) signaling pathway in cognitive impairment induced by cerebral small vessel disease (CSVD), so as to provide a reference for the clinical treatment of CSVD-induced cognitive impairment.

METHODS: Mice with TLR4 gene knockout (n=20) and those with wild-type TLR4 (n=40), aged 8-10 weeks, were divided into a blank control group (Control group, n=20), a wild-type+CSVD group (WT+CSVD group, n=20) and a TLR4 knockout+CSVD group (TLR4 KO+CSVD group, n=20). Allogeneic thrombi (particle diameter: 50-70 μm) were injected into the mouse external carotid artery to create a model of learning and memory dysfunction. A step-down test and a Y-maze test were used to examine the learning and memory abilities of the mice. Reverse transcription-polymerase chain reaction (RT-PCR) and immunoblotting were adopted to measure the levels of apoptosis-related genes in the brain tissues. The terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick end labeling (TUNEL) method was applied to detect apoptosis of neuronal cells in the brain tissues. Meanwhile, the levels of oxidative stress markers, including superoxide dismutase (SOD), gp91 and malondialdehyde (MDA), were measured. Finally, the expression level of the TLR4/NF-κB pathway was detected.

RESULTS: The latency in the step-down test in the WT+CSVD group was remarkably longer than that in the Control group, and the number of errors was evidently larger (p<0.05). At the same time, in the WT+CSVD group, the expression levels of the pro-apoptotic genes Bax and C-caspase-3 were markedly up-regulated, while the expression level of the anti-apoptotic gene Bcl-2 declined notably (p<0.05). TUNEL results showed that the number of apoptotic cells in the brain tissues in the WT+CSVD group was about 12 times that in the Control group (p<0.05). Meanwhile, the SOD expression level was lowered and the MDA expression level was elevated in the brain tissues of the WT+CSVD group. In addition, the TLR4/NF-κB pathway was prominently activated in the WT+CSVD group (p<0.05). After TLR4 knockout, the cognitive functions of the mice improved markedly, and apoptosis of neuronal cells and oxidative stress in the brain tissues were significantly suppressed; the activation of the TLR4/NF-κB signaling pathway was also inhibited.

CONCLUSION: The TLR4/NF-κB pathway is involved in the occurrence and development of CSVD-induced cognitive impairment through regulating oxidative stress and cell apoptosis.
def _create_sensor(xknx: XKNX, config: ConfigType) -> XknxSensor:
    """Build an xknx Sensor from a validated Home Assistant config entry."""
    return XknxSensor(
        xknx,
        name=config[CONF_NAME],
        # Group address the sensor's state is read from.
        group_address_state=config[SensorSchema.CONF_STATE_ADDRESS],
        # Whether (and how often) to actively read the state from the bus.
        sync_state=config[SensorSchema.CONF_SYNC_STATE],
        # Fire state updates even when the incoming value is unchanged.
        always_callback=config[SensorSchema.CONF_ALWAYS_CALLBACK],
        # KNX value type used to decode the telegram payload.
        value_type=config[CONF_TYPE],
    )
1. Field of the Invention

The present invention relates to a beam scanner and a surface measurement apparatus, and more particularly, to a beam scanner and a surface measurement apparatus which can minimize errors caused by the movement of a spinning mirror for beam scanning.

2. Description of the Related Art

In general, semiconductor integrated circuits are fabricated by forming circuits on a wafer using a photolithography process. In this case, a plurality of the same integrated circuits are disposed on a wafer and divided into individual integrated circuit chips. If foreign bodies exist on the wafer, defective circuit patterns may be formed in the wafer portion where the foreign bodies exist, which may render the corresponding integrated circuit unusable. As a result, fewer integrated circuits are obtainable from a single wafer, and the yield is reduced. In addition to semiconductor integrated circuits, examples of advanced materials that are adversely affected by foreign bodies or defects on the micrometer scale include glass for display devices and materials for circuit boards. Accordingly, there is a need for equipment for measuring and inspecting such foreign bodies or defects.

In general, to measure foreign bodies or defects on a wafer, a method is used in which a laser is focused on the surface of the wafer, light scattered from the laser collection point is received, and foreign bodies are detected based on a signal corresponding to the received light.

FIG. 1 is a schematic perspective view depicting a related art surface measurement apparatus. Referring to FIG. 1, a related art surface measurement apparatus 10 includes a light source emitting laser beams (L), an object 11 of measurement, such as a wafer, and first and second beam detectors 12 and 13. The first beam detector 12 detects beams Ls scattered from the wafer 11. That is, light scattered from a light collection point on the wafer 11 is collected, through a lens, in the first beam detector 12, which serves as a photoelectric converter. The first beam detector 12, having collected the scattered light, outputs a pulse signal corresponding to the intensity of the beams scattered by foreign bodies. Thus, the sizes of the foreign bodies may be determined from the magnitude of the output signal. The second beam detector 13 detects a beam Lr reflected by the wafer 11. By detecting signals from both scattered and reflected beams, the surface measurement apparatus 10 can determine the presence of foreign bodies on the wafer 11, measure their sizes, and also measure the angle of the reflected beam to obtain a three-dimensional shape.