EXCLUSIVE…Former Christian Peacemaker Teams Hostages Harmeet Singh Sooden and James Loney Remember Murdered Colleague Tom Fox and Explain Why They Forgive Their Captors | Democracy Now!
one of the three members of the Christian Peacemaker Teams held hostage in Iraq. They were kidnapped on November 26, 2005 and held for 118 days before being freed by British and American forces on March 23, 2006.
one of the three members of the Christian Peacemaker Teams.
AMY GOODMAN: We move now to London, where three aid workers from the Christian Peacemakers Team, who had been held hostage in Iraq, held a news conference today. Norman Kember of Britain, Canadians Harmeet Singh Sooden and James Loney announced they “unconditionally” forgive their Iraqi captors and wish them no retribution. They have yet to decide whether to give evidence at the men’s trial, which is set for next year in Iraq.
In November of 2005, Loney, Sooden and Kember, part of the Christian Peacemakers Team delegation to Iraq, were kidnapped along with U.S. peace activist Tom Fox. Fox was also a full-time member of the Christian Peacemakers. He was working in Baghdad at the time. He was murdered on March 9, 2006. The remaining three were set free on March 23rd.
Throughout their ordeal, videotape of the prisoners was released periodically. On December 7, 2005, a video was released of Tom Fox and Norman Kember making this plea.
TOM FOX: I’d like to offer my pleas to the people of America, not to the government of America—a plea for my release from captivity and also a plea for a release from captivity of all the people of Iraq. We are all suffering from the same fate, and that is the occupation of the American troops and the British troops, which have brought me to this condition and has brought the Iraqi people to the condition they are in. So I would ask the people of America to do what they can to free us all from this captivity.
NORMAN KEMBER: I’m a Christian Peacemaker. I’m a friend of Iraq. I have been opposed to this war, Mr. Blair’s war, since the very beginning. I ask of him now, and the British government, to do all that they can to work for my release and the release of the Iraqi people from oppression.
AMY GOODMAN: Norman Kember and Tom Fox. A few days after their release, James Loney made this statement in London to the media.
JAMES LONEY: I sometimes entertain myself by imagining this day. Sometimes I despair of ever seeing it. Always I ache for it. And so, here we are. For 118 days I disappeared into a black hole, and somehow, by God’s grace, I was spit out again. My head is swirling, and there are times when I can hardly believe it’s true. We had to wear flak jackets during our helicopter transport from the international zone to the Baghdad airport, and I had to keep knocking on the body armor I was wearing to reassure myself that this was all really happening. It was a terrifying, profound, powerful, transformative and excruciatingly boring experience. Since my release, my rescue from captivity, I have been in a constant state of wonder, bewilderment and surprise, as I slowly discover the magnitude of the effort to secure our lives and freedom.
AMY GOODMAN: James Loney and Harmeet Singh Sooden have just held their news conference and come into the studio in London for this national broadcast exclusive. We welcome you both to Democracy Now!
I wanted to start with Harmeet Singh Sooden. Can you talk about why the three of you, the surviving Christian Peacemaker Team members of the hostage ordeal in Iraq, came together today in London?
HARMEET SINGH SOODEN: Well, we were recently approached by the authorities in our respective countries. We were told that they had captured four of our captors and that a trial would ensue, and they want us to testify. That’s the background.
AMY GOODMAN: And what is your response?
HARMEET SINGH SOODEN: Well, our current position is that we don’t have enough information to make a decision, and we’re hoping that that information will be provided for us and/or we can obtain that information from, you know, other sources.
AMY GOODMAN: James Loney, you and Harmeet, as well as Norman Kember, are calling for forgiveness for your captors. Talk further about your ordeal—we watched you right after you were released just now—and what it is you want to accomplish right now.
JAMES LONEY: Well, it was a grim 118 days. We suffered from the deprival of our freedom, and we were given very little food. We were in a constant state of fear and anxiety about what was going to happen to us. But, yes, it was an awful experience, but we really desire that we want good to come out of this. And we are very, very concerned that the death penalty is on the table for these men. They could face execution, and that would be the worst possible outcome for us.
We—you know, Bishop Tutu has this phrase, you know: There’s no future without forgiveness. And for us, forgiveness opens up possibilities. It opens up the future, that something different can happen than what happened in the past. And what happened in the past was—it was awful. And Tom was killed in that. And we want something different. We don’t want more people to be killed. We want the possibility of restoration and a justice that is about healing the relationships that have been broken.
AMY GOODMAN: James, can you talk about why you were in Iraq and how you came to know Norman Kember and Harmeet and Tom Fox, who was killed by his kidnappers?
JAMES LONEY: So, Tom Fox was a member of the CPT team working in Iraq, and I work for CPT and was leading a delegation that included Harmeet and Norman. We met up in Amman. And our purpose in going there was to learn about the realities of the occupation of Iraqi life from regular Iraqis, because this is a point of view that the mainstream media has not really been representing. And this is a war that’s been based on lies from the very beginning. So we wanted to go there and to learn for ourselves, see for ourselves, and to bring those stories back and to be part of the ongoing work of the Iraq team, which at that time was documenting torture in the Interior Ministry.
AMY GOODMAN: And how were you taken? How were you kidnapped?
JAMES LONEY: We were—it was right after a meeting. There was—a car basically was waiting for us. And when we pulled out, it stopped in front of us, and four men stormed the vehicle, and they had guns, and they pulled our translator and driver out of the vehicle. And that was it. We were kidnapped.
AMY GOODMAN: Harmeet Singh Sooden, could you take it from there? Where were you taken then?
HARMEET SINGH SOODEN: We were driven for about 20 minutes to half an hour. We were taken to the, what we call the first house. We were sort of driven around the same spot twice, at least. I wasn’t really paying attention to our surroundings, and more the people who had abducted us. We were brought in, and the man who became our handler or manager was there. We were basically—they took our stuff. We were handcuffed, blindfolded, and basically moved into a room. And that’s how it remained for about a week, until Tom Fox and Norman were separated from us. We actually feared the worst for a while, but a week later, we were reunited with them in the second house. And that’s when the long haul began basically.
AMY GOODMAN: In that second house, is that where you were held for more than a hundred days?
HARMEET SINGH SOODEN: That’s correct. That’s approximately correct, yeah.
AMY GOODMAN: When was your last moment with Tom Fox?
HARMEET SINGH SOODEN: It was February the 12th. We thought we were all going back to the first house to be released, and they took Tom first. I believe I was the last of the three to speak to him. I just said, “See you soon,” and gave him a hug, and he left.
AMY GOODMAN: What can you tell us about Tom and about your experience in captivity with him, Harmeet?
HARMEET SINGH SOODEN: I was—I think I was the person handcuffed to him the longest, every night and on most days. He was committed to nonviolence. If there was a possibility of escape, he said he would not use any form of violence to escape. We would hear car bombs or explosions in the distance, and he would pray for the victims, and even the perpetrators. That should give you a good idea of what kind of person he was.
AMY GOODMAN: Now, you’re calling, Harmeet, for forgiveness. You’re calling for the greatest leniency for the men who will be tried. Do you know who has been arrested at this point?
HARMEET SINGH SOODEN: Well, we’ve been told four people have been arrested. We’ve been given their names. Unfortunately, that means very little at this point. We need more information before we can make any sort of decision about testifying or not testifying. That’s the current position we’re in.
AMY GOODMAN: What is the information you’re looking for?
HARMEET SINGH SOODEN: Well, there are a few specific things. One would be the trial date. The other would be the location, the process itself. Just things of that nature.
AMY GOODMAN: Can you tell us about your captors? Who were they? What particular group, or were they with a particular group?
HARMEET SINGH SOODEN: Well, we don’t have much information on that. But in terms of who they were, what we saw, we had given them nicknames. There was “Uncle.” He claimed to be a former POW guard in the Iran-Iraq War. He was quite confident and sort of treated us like his crops. He said he was also a farmer.
There was another younger man in his mid-twenties. We called him “Junior.” He was very volatile, appeared to be traumatized. He said that his family, some siblings and his fiancée, were killed in Fallujah when his house was bombed.
And there was another person we called “Nephew.” He was quite unsure of himself, as if he hadn’t done this before. And he said his house was destroyed in Fallujah, as well, and he was a father of five. He hadn’t told his family what he was doing, and he didn’t know who his, I guess, his chief was or who the leaders of his group were.
HARMEET SINGH SOODEN: Then there was a manager, of course.
AMY GOODMAN: And the manager, what?
AMY GOODMAN: And did they deal with Tom Fox, the only American in your group—you and James are Canadian, Norman Kember is British—differently than the three of you, before he was killed?
HARMEET SINGH SOODEN: Junior definitely treated Tom with more disdain. I do recall one incident, which is sort of blanked out in my—well, I remember the surrounding situation, of course. Norman and Jim have filled me in. Medicine Man basically pointed his gun at him and said, “You’re the devil. And if you try to escape, we will kill you.” And I, at the time, thought that this was just a tactic to control us and keep us in line, basically. I’m not sure why they thought Tom was the mastermind of escape, because, in fact, he was most opposed to escape through violent means, anyway.
AMY GOODMAN: Why do you think they killed Tom Fox?
HARMEET SINGH SOODEN: I do not know why someone had to be killed, but obviously an American national would be the most likely in this situation to be killed if someone had to be killed.
AMY GOODMAN: And when did you learn that Tom was killed?
HARMEET SINGH SOODEN: Tom’s body, I believe, was found on March the 9th—on March the 7th. Since the separation, since he was separated from us, we would keep asking how he was, and we were told he was OK. But two days before his body was found, we were told that they were going to announce that he has been killed to the world, but that would just be a ruse to try and—there were some secret negotiations apparently going on for a prisoner exchange in the U.S. That was the story that was put forward.
Two days later, we were—at that point we were allowed an hour or two of TV a night. And it was on a news channel, an Arabic news channel, and we saw Tom, a clip of Tom with a camera zooming in on him and then a shot of a street. And it’s at that point we suspected that perhaps he had actually been killed. And this was March the 11th. We only knew for sure when we were released.
AMY GOODMAN: James Loney, you’re calling for forgiveness for your captors. Do you believe that they should be tried? Do you believe that they should be punished in any way, if found guilty?
JAMES LONEY: I, myself, have no desire for punishment. That means nothing to me. That won’t restore or repair anything. Iraqi society has an interest in this matter and has an interest in law and order and security for citizens there. The concern is, is that there is the death penalty. There is a real lack of transparency. Time magazine reported, you know, unofficially 90 executions, the names of the people and the details of their crimes are not available. There is a lack of transparency around this.
And I am concerned that—about that aspect of it, but also that the death penalty is part of this spiral of violence, you know, of actions and reactions to violence that lead ultimately to self-destruction. That is the logic of violence. And we went to Iraq to speak about peace and to speak about a different way of being in the world, and we—I believe that forgiveness gives us this opportunity to imagine a different kind of future.
Now, I know we are—there are—you know, we’re working with reality. You know, we don’t have the kinds of institutions and systems in place that might facilitate a restorative justice kind of outcome. But nevertheless, it is important that at least there be some basic level of clemency, I feel, in this situation. I think that there is a legitimate—punishment, in and of itself, for me has no legitimacy, but if there is a need to protect society by withholding someone’s liberty for a period of time, in the interest of protecting society and in the interest of perhaps facilitating some kind of rehabilitation or restoration, that would have some legitimacy or some purpose. But punishment, in and of itself, for me, does not.
AMY GOODMAN: We’re talking to James Loney and Harmeet Singh Sooden, two of the three members of the Christian Peacemaker Teams who spoke out today in a news conference in London, calling for forgiveness for their captors. Men have been arrested in Iraq. The trial, it is not clear when it will be held. They are making their decision about whether to testify at the trial. Harmeet, one of the things that’s not known, as well, about the Christian Peacemakers Team is that really it was your group that was the first to document abuse at Abu Ghraib. Were you personally involved with that?
HARMEET SINGH SOODEN: Actually, I wasn’t. I was only a short-term delegate, which means I just volunteered, so I’m not a member of CPT. So I was only supposed to be volunteering for two weeks.
AMY GOODMAN: And in terms of your captors, were they part of a group—Shia, Sunni, a political group, insurgent, from out—well, you explained that they were Iraqi?
HARMEET SINGH SOODEN: Well, they—it’s difficult to be sure, but they appeared to be Sunni. They appeared to belong to some sort of group, at least for that duration. But we’re bordering on speculation.
AMY GOODMAN: How have you recovered, after being held for 118 days, but now having gone on now, this period, many months after you were freed?
HARMEET SINGH SOODEN: Well, I mean, I’m a university student, so I went back to university a few months after being released. Obviously, this experience is—it’s still at the forefront of my mind, and I still in some ways feel captive, emotionally at least.
HARMEET SINGH SOODEN: I feel—at times I feel I have an obligation to continue this sort of work, although that’s not really—it’s a very personal question. I would rather not answer it. Sorry.
AMY GOODMAN: And now, you will leave London, and you and James will return to Canada?
HARMEET SINGH SOODEN: I actually live in New Zealand, so I’m a resident in New Zealand, so I’ll be returning there. Jim will go back to Toronto.
AMY GOODMAN: And when will your decision be made about whether you will testify?
HARMEET SINGH SOODEN: That all depends on what new information comes to light. And we’re hoping the authorities will provide that as soon as possible.
AMY GOODMAN: Well, I want to thank you very much for being with us, Harmeet Singh Sooden, James Loney, two of the three Christian Peacemakers who survived their kidnapping ordeal, speaking to us from London, where they’ve just held a news conference. |
While Behar basically ridiculed hundreds of millions of people who commune with their Creator through prayer, she apparently failed to heap public ridicule on Hillary Clinton, who reportedly talks to dead people like "Eleanor Roosevelt" and "Mahatma Gandhi." Hillary had new age psychic Jean Houston "virtually" move "into the White House" when Bill Clinton was president. With the help of Houston, Hillary routinely communed with, and received counsel from, "Eleanor Roosevelt," who has been dead for over 50 years!
While the Clinton White House tried to pass this off as "imaginary" friends, in rituals that are part and parcel of Satanism and witchcraft, Hillary's medium, Jean Houston, makes it pretty clear that such spirit entities are quite real. In fact, Houston and her co-author warn of the dangers of contacting these spirits. When leading a séance and summoning a spirit, she instructs those with her to take "precautions," stating, "We are gathered here in this circle ... the entity we have called ... can appear to us all ... We will ... see it, and hear it, and we even could touch it, were it not necessary to take certain precautions..." (Robert Masters and Jean Houston, Mind Games (Dell Publishing Co., 1972), pp. 199-201).
Houston states that in a "normal, conscious" state, humans are protected from these spirits, as "contact with these other life forms has been made impossible by some kind of shielding against it..." However, Houston states, "By altering consciousness we sometimes drop the shield, and the contacts become possible." (Ibid, p. 70-71).
Tragically, Houston is encouraging her followers, like Hillary and others, to drop the shield God has erected as a barrier to keep the demonic world from having wholesale access to the human psyche. In the past, Houston led more than 300 people into altered states of consciousness through LSD inebriation, making their minds an "open sesame" to the demonic world. Houston claimed that such experiments "were most effective in conveying psychic truth to the participant," and "mystical experiences occur among the drug subjects" (New Age Encyclopedia, p. 221).
However, Albert Hofmann, the Swiss chemist who first developed LSD, not only became possessed himself after ingesting LSD (by what he called an "LSD demon"), but claimed in his book, LSD: My Problem Child, that the sensation of spirit possession is the one common denominator found in LSD experimentation.
In a book Houston co-wrote with her husband, Robert Masters, entitled The Varieties of Psychedelic Experience, they acknowledge what appear to be demonic experiences that were encountered by some subjects while experimenting with LSD and DMT (Dimethyltryptamine). One of the so-called "hell experiences" is described by a man who lamented that he "swirled [in] a purple sea, irresistible, angry, and teeming with clammy, serpentine shapes that I thought tried to fasten and feed ..." Another, who was injected with the powerful hallucinogen DMT, said "I was told that I would see God," but instead, experienced "The most terrifying three minutes" of her life. She described her experience as "demonic" and "diabolical," further stating, "I opened my eyes and jumped from my chair screaming."
Since LSD and DMT are illegal Schedule I drugs, Houston now seeks permissible ways to introduce her patients, like Hillary, to spirit guides. Instead of relying on lysergic acid diethylamide (LSD), Houston and her husband "developed the ASCID (Altered States of Consciousness Induction Device)," which is better known as "the Witches Cradle" (Encyclopedia of Occultism and Parapsychology, p. 485). This device was designed to help bring about altered states of consciousness in their subjects so they could experience the spirit world. It is unclear whether or not Houston used "the Witches Cradle," or ASCID, to help Hillary get in touch with her spirit guides.
Could you imagine "President Hillary" taking orders in the White House from a demonic spirit guide claiming to be "Eleanor Roosevelt"? What might that mean for the world, with Hillary and "Eleanor's" fingers on the "nuclear football" and the power to authorize a nuclear war?
However, Joy Behar suggests that Vice President Pence is mentally ill for praying to his Creator, but where was she when Bill was stating that Hillary, the Secretary of State, was still getting messages from a spirit guide that poses as Eleanor Roosevelt? Perhaps Joy did call Hillary "mentally ill" or a demoniac on "The View", but if she did, I missed it.
It is a dangerous thing to alter your consciousness through drugs and new age techniques, and this is why God commands us, "Be alert and of sober mind. Your enemy the devil prowls around like a roaring lion looking for someone to devour" (1 Peter 5:8).
"And when they say to you, 'Seek those who are mediums and wizards, who whisper and mutter,' should not a people seek their God? Should they seek the dead on behalf of the living? To the law and to the testimony! If they do not speak according to this word, it is because there is no light in them" (Isaiah 8:19-20).
Joe Schimmel is senior pastor of Blessed Hope Chapel in Simi Valley, CA, and president of the apologetics ministry Good Fight Ministries, dealing with pop culture, Hollywood, and music from a Christian perspective. He is best known for They Sold Their Souls for Rock n Roll, which exposes the satanic influences behind much of yesterday's and today's popular music and how it is negatively influencing our youth. Other popular DVD releases are A Shack of Lies, The Submerging Church, Hollywood's War on God, and The Kinsey Syndrome. |
Could We Be on the Verge of a New Temperance Movement? |
GIS-Based Accuracy Assessment of Global Geopotential Models: A Case Study of Egypt

Geoid modelling is a fundamental procedure in geomatics and geosciences applications for estimating orthometric heights from the ellipsoidal heights measured with Global Navigation Satellite Systems (GNSS) observations. When no local geoid model is available for an area, a Global Geopotential Model (GGM) is utilized for height conversion. However, the availability of so many GGMs, more than 160 models, makes selecting the most acceptable one a significant task. This paper develops a straightforward scheme to acquire, manipulate, and investigate the accuracy of GGMs within a Geographic Information Systems (GIS) environment. Four GGMs, namely XGM2016, GECO, EIGEN-6C4, and EGM2008, have been evaluated over Egypt. The results show that the standard deviations of the investigated GGMs' discrepancies over Egypt range from ±10.90 mGal to ±13.10 mGal for gravity anomalies, and from ±0.23 m to ±0.30 m for geoid heights. In order to pick the optimum GGM, a dimensionless reliability index is computed for each GGM. Based on the investigated GGMs, the available datasets, and the criteria applied in analyzing the results, we conclude that EGM2008 is still the most suitable GGM for representing the gravitational field over Egypt, with an average reliability index of 5.10. The proposed GIS-based process is practically beneficial for height conversion in several geodetic, environmental, surveying, and mapping applications in Egypt.

Introduction

Civil and surveying engineers deal with three fundamental surfaces of the Earth and, consequently, several types of heights. These surfaces are the terrain or physical surface, the geoid or true irregular equipotential surface of the Earth, and the ellipsoid or the regular mathematical surface closest to the geoid. The ellipsoidal height is measured from an ellipsoid, while the orthometric height is based on the geoid. The vertical separation between these two surfaces is the geoid undulation or geoidal height. Hence, geoid modelling is needed to transform the ellipsoidal heights estimated from GNSS measurements, which are related to the ellipsoid surface, into orthometric heights related to the Mean Sea Level (MSL), which are obtained using levelling. This transformation is necessary because orthometric heights are required for topographic maps and many other applications in civil engineering. Comparing GGMs over a spatial region is crucial for choosing the most suitable one to be used in geoid modelling. In addition, with more than 160 GGMs currently available, assessing their performance in representing the gravitational field over a selected area is essential, as has been done, for example, for Japan and Argentina. In Egypt, geoid modelling and the evaluation of different GGMs have been extensively investigated over the last couple of decades. A number of studies have utilized Geographic Information Systems (GIS) in geodetic applications in general, and in geoid modelling in particular. GIS-based raster analysis has been used to improve GGM-derived geoidal heights through linear sharpening of the original raster representation of the models. One earlier study investigated the possibility of developing a local geoid model by evaluating several GGMs within a GIS environment.
However, that study used GIS only to compare results obtained from an online service for point-by-point geoid calculations, which is not the case in the present study. Over a small spatial region where only a small number of known GNSS/levelling points are available, GIS can be used first to build a local geoid model, and then to interpolate geoidal heights at other GNSS stations in order to estimate their corresponding orthometric heights. The International Centre for Global Earth Models (ICGEM) has recently started an online service to compute many geoid-related quantities and deliver them in a grid format. Based on that valuable service, the assessment of GGMs against national geodetic data can be carried out solely within GIS. The main objective of this paper is to develop a simple scheme to acquire and manipulate GGMs, investigate their accuracy over the Egypt region, determine the most appropriate model, and enhance its precision, entirely within a GIS environment. This work can be considered an additional GIS-based geodetic application for height conversion in Egypt.

Global Geopotential Models

GGMs have been developed since the 1960s to express the gravitational potential of the Earth (V) as a series of spherical harmonics:

V = (GM/r) [1 − Σ_{n=2}^{Nmax} J_n (a/r)^n P_n(sin φ) + Σ_{n=2}^{Nmax} Σ_{m=1}^{n} (a/r)^n (C_nm cos mλ + S_nm sin mλ) P_nm(sin φ)]   (1)

where GM is the geocentric gravitational constant, r is the radial distance, a is the equatorial radius of the Earth, φ and λ are the latitude and longitude respectively, J_n are the zonal harmonics, C_nm and S_nm are the tesseral harmonic coefficients, P_nm are the associated Legendre functions, n and m are the degree and order of the geopotential model, and N_max is the maximum degree of the model.

The geoidal undulations or geoidal heights (N), representing the separation between the geoid and ellipsoid surfaces, can be evaluated by two methods, depending on the available geodetic measurements. First, if gravity anomalies are utilized, N can be computed through the well-known Stokes' formula as

N_grav = (R / (4πγ)) ∬_σ Δg S(ψ) dσ   (2)

where R is the mean radius of the Earth, γ is the normal gravity, Δg is the gravity anomaly, S(ψ) is the Stokes' function, and dσ is an infinitesimal surface element on the unit sphere. In practice, the geoidal undulation N is divided into three components and computed as

N = N_GGM + N_res + N_H   (3)

where N_res is the geoidal undulation component related to residual gravity anomalies, N_GGM is the component related to the GGM, and N_H is the component related to the topography effect. Consequently, geoid modelling requires a GGM to represent the global variations or long wavelengths of the Earth's gravitational field, along with a Digital Elevation Model (DEM) to depict the topography of the local area and determine its effect on the developed geoid model. The second manner of computing geoidal heights is the so-called GNSS/levelling or geometric approach, where N_geometric is computed from the GNSS-based ellipsoidal height relative to the ellipsoid surface (h) and the orthometric height related to MSL (H) as

N_geometric = h − H   (4)

In this paper, four GGMs have been selected for evaluation: the Experimental Gravity Field Model (XGM2016), the GOCE and EGM2008 combined model (GECO), the European Improved Gravity model of the Earth (EIGEN-6C4), and the Earth Gravitational Model (EGM2008). Although the first is a medium-resolution model, it has received great attention since it will be the basis for the upcoming Earth Gravitational Model 2020 (EGM2020), while the other three GGMs have been selected because they are high-resolution models with degree and order equal to 2190.
A brief description of each model follows:

-XGM2016: The experimental gravity field model is a GGM up to degree and order 719, supported by an improved 15'x15' terrestrial global grid of gravity anomalies, along with satellite-based gravity data from the Gravity Recovery and Climate Experiment (GRACE) and the Gravity field and steady-state Ocean Circulation Explorer (GOCE). XGM2016 also utilized a new, promising processing methodology.

-GECO: A global gravity model that utilizes GOCE satellite-based gravity data to improve the accuracy of the EGM2008 GGM at low and medium frequencies. GECO was developed in 2015, up to degree and order 2190, and was the most recently published high-resolution GGM until 2017.

-EIGEN-6C4: A model released in 2014 that utilizes satellite tracking data (from the LAGEOS, GRACE, and GOCE missions) along with a global surface gravity anomaly grid and altimetry data. The model extends to degree 2190 and was developed jointly by the German GFZ research centre and the French CNES research centre.

-EGM2008: An integrated GGM developed by the US National Geospatial-Intelligence Agency (NGA) up to degree 2190. It was developed in 2008 based on GRACE satellite tracking data, terrestrial gravity data, and altimetry data. It was a milestone in GGM development, since its preceding model did not exceed a maximum degree of 360.

The evaluation of equation 2 over a spatial region requires a specific, mostly academic, software package, e.g., GRAVSOFT or GRAFIM, before GIS is applied for modelling and mapping the obtained results. In contrast, the handling and assessment of GGMs carried out in this study is performed entirely within a GIS environment. The performance of each GGM in representing the gravity field over Egypt is first investigated by comparing the GGM-based free-air gravity anomalies against the free-air gravity anomalies of known terrestrial stations (Δg). At these stations, relative gravity has been measured and tied to the national absolute gravity network to obtain absolute gravity values. The free-air gravity anomalies are then computed as

Δg_FA = g + 0.3086 H − γ,   γ = γ_a (1 + p sin²φ) / √(1 − e² sin²φ)   (5)

where g is the known absolute gravity, H is the orthometric height obtained from precise levelling, γ is the normal gravity at latitude φ, γ_a and γ_b are the equatorial and polar normal gravity values, e² is the squared eccentricity of the used ellipsoid, and p = (b·γ_b / a·γ_a) − 1, where a and b are the WGS84 semi-major and semi-minor ellipsoidal axes respectively. The second GIS-based comparison step judges the GGM-based geoidal heights against the known GNSS/levelling undulations at the utilized stations in Egypt. The differences from both steps are then statistically investigated to define the accuracy and reliability of the tested GGMs.

Processing and Analysis

The available terrestrial geodetic datasets (shown in Fig. 1) include the first-order Egyptian national gravity networks of 1997 and 1977, containing 247 measured gravity points (red dots), and 976 GNSS/levelling stations (green dots) distributed all over Egypt, which were observed by the Survey Research Institute (SRI) in various projects over the last five years. It is worth mentioning that the average accuracy of the Egyptian National Gravity Standardization Network of 1997 (ENGSN97) is ±0.02 mGal, while the corresponding value for the National Gravity Standard Base Network of 1977 (NGSBN77) is ±0.08 mGal. Although other second-order gravity measurements are available in Egypt, particularly in the western desert, their accuracy in both gravity and coordinates is questionable.
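Before describing the control data further, the free-air comparison step above can be illustrated with a short Python sketch. This is not part of the original study, which performed the work in ArcGIS; the GRS80-style constants and the 0.3086 mGal/m free-air gradient are standard published values, while the station coordinates, gravity values, heights, and variable names are hypothetical placeholders.

import numpy as np

# GRS80-style reference values (assumed for this sketch; WGS84 values differ only marginally)
A = 6378137.0            # semi-major axis [m]
B = 6356752.3141         # semi-minor axis [m]
GAMMA_A = 978032.67715   # equatorial normal gravity [mGal]
GAMMA_B = 983218.63685   # polar normal gravity [mGal]
E2 = 1.0 - (B / A) ** 2                   # squared eccentricity
P = (B * GAMMA_B) / (A * GAMMA_A) - 1.0   # Somigliana constant

def normal_gravity(lat_deg):
    """Normal gravity on the ellipsoid [mGal] from a Somigliana-type closed formula."""
    s2 = np.sin(np.radians(lat_deg)) ** 2
    return GAMMA_A * (1.0 + P * s2) / np.sqrt(1.0 - E2 * s2)

def free_air_anomaly(g_obs_mgal, ortho_height_m, lat_deg):
    """Free-air anomaly: observed gravity plus the free-air reduction minus normal gravity."""
    return g_obs_mgal + 0.3086 * ortho_height_m - normal_gravity(lat_deg)

# Hypothetical control stations: latitude [deg], absolute gravity [mGal], orthometric height [m]
lats = np.array([30.05, 25.70])
g_obs = np.array([979310.2, 978905.8])
heights = np.array([74.3, 88.1])

dg_terrestrial = free_air_anomaly(g_obs, heights, lats)
# dg_ggm would be sampled at the same stations from the ICGEM grid and differenced against dg_terrestrial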
The utilized GPS/levelling points have been surveyed to first-order levelling and first-order GPS geodetic network standards, and their average accuracy can be estimated as ±(3-4) cm. Thus, the most precise available national datasets have been utilized herein to investigate the reliability of the four tested GGMs, as discussed in detail in the next section.

Figure 1. Available Terrestrial Data

As stated previously, the ICGEM organization has started an online service to compute many geoid-related properties (for a given GGM) and deliver them in a grid format. For any GGM, a specified grid can be computed online and downloaded for a variety of geoidal characteristics such as geoid heights, height anomalies, gravity anomalies, and gravity disturbances. Each grid has a gdf format, which can easily be manipulated for use in GIS. Therefore, two 5'x5' grids, one for gravity anomalies and one for geoid heights, have been downloaded for each of the tested GGMs. The interpolated gravity anomaly and geoid undulation values at each control point have been compared against their known values, and the differences are statistically investigated. The ArcGIS 10 software has been utilized herein for interpolation and mapping as an example of a GIS package.

In the next step, the two control point datasets have been used to interpolate the GGM-based gravity anomalies and geoidal heights from each GGM's corresponding raster. The Inverse Distance Weighted (IDW) interpolation method has been adopted, since the geoid variations over the Egypt region depend, to a moderate extent, inversely on distance. Then, the known gravity anomaly and geoid undulation for each station have been compared against the corresponding values obtained from each GGM. Table 1 presents the statistical properties of those differences for the four utilized GGMs, while Fig. 4 depicts their histograms. It can be seen from Table 1 and the histograms in Fig. 4 that the standard deviations of the investigated GGMs' discrepancies over Egypt range from ±10.90 to ±13.10 mGal for gravity anomalies, and from ±0.23 to ±0.30 m for geoid heights. It is interesting to notice, from Fig. 4, that the results of the XGM2016 model, even though its degree equals only 719, are close to those of the other high-resolution GGMs. Moreover, Table 1 indicates that the statistical measures vary from one model to another; for example, EIGEN-6C4 has the best standard deviation, GECO has the smallest range of differences, and EGM2008 has the smallest mean for gravity anomalies. The same remark also holds for the GGM-based geoid undulation differences. Therefore, the performance of GGMs should not be judged on a single statistical measure. Hence, the concept of a reliability index, introduced in earlier work, is applied herein. A reliability index (RI) is computed for each GGM as a weighted mean of three individual indices that express the relative value of the three statistical measures: the mean, the range, and the standard deviation. For each statistical measure, the values of all GGMs are sorted and assigned descending ranks on a scale of 10, so that a unique dimensionless RI value is obtained for the mean, range, and standard deviation. The overall RI is then computed as the weighted mean of the individual RI values, using weights of 4 for the standard deviation and 3 each for the mean and range.
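A minimal Python sketch of this reliability-index ranking may help fix ideas. The statistic values below are hypothetical placeholders, the weights (4 for the standard deviation, 3 each for the mean and range) follow the text, and the exact rank-scaling used in the originally cited index may differ from the reading assumed here.

import numpy as np

# Hypothetical |mean|, range, and standard deviation of the anomaly differences per GGM [mGal]
stats = {
    "XGM2016":   {"mean": 1.8, "range": 95.0, "std": 13.1},
    "GECO":      {"mean": 1.5, "range": 82.0, "std": 12.4},
    "EIGEN-6C4": {"mean": 1.6, "range": 90.0, "std": 10.9},
    "EGM2008":   {"mean": 1.2, "range": 88.0, "std": 11.2},
}
WEIGHTS = {"mean": 3, "range": 3, "std": 4}

def rank_on_scale_of_10(values):
    """Smaller statistic -> better -> higher rank on a 10-point scale (one plausible reading)."""
    order = np.argsort(values)              # ascending: best model first
    ranks = np.empty(len(values))
    step = 10.0 / len(values)
    for place, idx in enumerate(order):
        ranks[idx] = 10.0 - place * step
    return ranks

models = list(stats)
ri = np.zeros(len(models))
for measure, weight in WEIGHTS.items():
    vals = np.array([stats[m][measure] for m in models])
    ri += weight * rank_on_scale_of_10(vals)
ri /= sum(WEIGHTS.values())

for model, index in sorted(zip(models, ri), key=lambda x: -x[1]):
    print(f"{model:10s} RI = {index:4.2f}")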
Therefore, the RI is unitless in nature and describes the overall accuracy performance of each GGM. That procedure has been repeated twice to obtain a final RI for both the gravity anomalies and the geoidal heights of each investigated GGM. The results are tabulated in Table 2, which reveals that EGM2008 is the optimum GGM for representing the gravitational field over Egypt, with an average reliability index of 5.10; EIGEN-6C4 comes in second place, GECO comes third, and XGM2016 is the worst of the four investigated GGMs. Once more, it can be seen that the performance of the medium-resolution XGM2016 is approximately similar to that of the GECO and EIGEN-6C4 high-resolution models. According to the obtained results, two important remarks should be highlighted regarding the investigated GGMs and the available data. First, EGM2008 is, until now, the most suitable GGM to be applied in geoid modelling in Egypt. Second, XGM2016 is a promising model for representing the long wavelengths of gravity, which indicates that the upcoming EGM2020 should be more precise in representing the Earth's gravitational field. Finally, the undulation errors of EGM2008 have been spatially modelled, using the IDW interpolation method, as a raster (Fig. 5). Therefore, equation 4 is modified into

H = h − (N_EGM2008 + E_EGM2008)   (6)

where N_EGM2008 is the original geoid undulation of the EGM2008 model, and E_EGM2008 represents its error estimate as interpolated from the correction surface at any observed GNSS station. Over the current study area, it will be easy for GNSS users, after observing the geodetic height (h) of any point, to calculate its orthometric height (H) by estimating the undulation value at this point from EGM2008 and the corresponding correction value from the EGM2008 correction surface, which should be made openly available. Consequently, within a level of precision applicable to small-scale mapping, such a practice significantly reduces the economic cost of the field data collection stage by dispensing with levelling works. Such a process is valuable for all GNSS-based surveying, mapping, data collection, and GIS activities in Egypt. The same procedure could be applied for height transformation in any other country or region.

Conclusions

GIS has been widely applied in numerous geodetic, surveying, and mapping activities. The creation of local geoid models over small areas and geoid interpolation have also been investigated using GIS. This paper presents a straightforward scheme to acquire and manipulate GGMs, determine the most appropriate model, and enhance its precision, entirely within a GIS environment. Such a task was typically accomplished using specific, mostly academic, geodetic software. GNSS and GIS users can benefit considerably from the proposed simple method in various geodetic activities. It is a matter of fact that an optimum GGM is required to convert GNSS-based ellipsoidal heights into MSL-based orthometric heights, or elevations, which are used in different civil engineering applications. The developed plan is mainly based on the ICGEM service, through downloading two grid files for each GGM, one for the gravity anomalies and the other for the geoidal undulations. Using GIS, the precision and reliability of each GGM can be assessed against national geodetic databases, to determine the most suitable GGM to be used in any country or spatial region.
The proposed method can be considered a powerful GIS-based geodetic application for height conversion over the Egypt region. The current study has utilized four GGMs in Egypt along with precise local geodetic datasets. Based on the introduced reliability measure and the available datasets, it has been concluded that EGM2008 is still the best possible GGM, out of the investigated models, for representing the gravitational field over Egypt, with an average reliability index of 5.10 on a scale of 10. A GIS-based correction surface has also been developed to increase its accuracy in GNSS height conversion for several geodetic, environmental, surveying, and mapping applications. The proposed approach could similarly be applied in other countries as well. |
The U.S. Secret Service has been warning financial institutions about an increase in ATM skimming attacks, in which devices are physically installed on ATM card readers to pilfer customer account data, according to Brian Krebs of KrebsOnSecurity, who obtained a non-public alert the service sent to banks this week. The Secret Service told Axios its Electronic Crimes Task Force partners obtained intelligence about ATM skimming and that fraud alerts were sent to financial institutions about it.
What to watch: The Secret Service told Axios that the ATM skimming activity has been detected "throughout the east coast from Maryland to Massachusetts," but their current data does not yet point to a trend in location for the hacks.
Krebs advises: "If you visit an ATM that looks strange, tampered with, or out of place, try to find another machine. Use only ATMs in public, well-lit areas, and avoid those in secluded spots. Most importantly, cover the PIN pad with your hand when entering your PIN."
"Finally, try to stick to cash machines that are physically installed inside of banks," Krebs wrote last year.
Editor's note: This has been updated with the latest details from the Secret Service. |
RAPPER THE Game has made a diss track threatening to put a gun to fellow musician Kreayshawn after she allegedly said the N-word.
Despite Gucci Gucci singer Kreayshawn, 21, claiming she “never” uses the racial slur in any of her songs, she later admitted that she and her sister, fellow group member V-Nasty, do use the N-word.
Game told Shade 45 radio station: "You can't be playing with that word, some people will take it serious, especially coming from someone that's [not black]. There's a lot of tragic history behind it."
Last month 21-year-old V-Nasty caused controversy after she released a video to blast her “haters” and members of the black community who have condemned her use of the offensive term. |
Hierarchical Classification of Moving Vehicles Based on Empirical Mode Decomposition of Micro-Doppler Signatures A novel method is proposed for classifying moving wheeled and tracked vehicles using micro-Doppler features from returned radar signals within a short dwell time. In this method, an adaptive analysis technique called Empirical Mode Decomposition (EMD) is utilized to decompose the motion components of moving vehicles, and a hierarchical classification structure using the decomposition results of the returned signals is proposed to discriminate between the two kinds of vehicles. The first stage of the structure preliminarily identifies tracked-vehicle data by checking for the existence of its unique feature, and a further classification via the proposed EMD-based features is implemented in the second stage using a Support Vector Machine (SVM) classifier. Experimental results based on simulated and measured data are presented, including a performance analysis for the low signal-to-noise ratio (SNR) case, a generalization evaluation for different target circumstances, and a comparison with related methods. |
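The abstract above outlines the pipeline only at a high level. As a rough illustration, not the authors' implementation, the following Python sketch decomposes a micro-Doppler time series with the PyEMD package and feeds simple IMF energy-ratio features to a scikit-learn SVM; the feature definition, the number of IMFs retained, and the omission of the stage-1 rule check are all assumptions made here for illustration.

import numpy as np
from PyEMD import EMD                 # assumed: the PyEMD package for Empirical Mode Decomposition
from sklearn.svm import SVC

def emd_features(signal, n_imfs=4):
    """Decompose a micro-Doppler time series into IMFs and build energy-ratio features.
    Illustrative features only; the paper's actual EMD-based features may differ."""
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    if len(imfs) >= n_imfs:
        imfs = imfs[:n_imfs]
    else:
        imfs = np.pad(imfs, ((0, n_imfs - len(imfs)), (0, 0)))
    energies = np.sum(imfs ** 2, axis=1)
    return energies / (np.sum(energies) + 1e-12)

def classify(train_signals, labels, test_signals):
    """Stage 2 of a hierarchical scheme: an RBF SVM on EMD features (stage 1 is omitted here)."""
    X = np.array([emd_features(s) for s in train_signals])
    clf = SVC(kernel="rbf").fit(X, labels)
    return clf.predict(np.array([emd_features(s) for s in test_signals]))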
Evaluation of intravesical alum irrigation for massive bladder hemorrhage. The efficacy of intravesical alum irrigation was analyzed after application to 9 patients with continuous and severe bladder hemorrhage. Causes of bleeding were radiation cystitis in 4 patients, vesical invasion by cervical cancer in 3, bladder cancer in 1 and cyclophosphamide-induced cystitis in 1. Though alum treatment was initially effective for control of massive bladder hemorrhage in all patients, it eventually failed to suppress a subsequent hemorrhage in 2 patients (78% success rate). No significant side effects directly related to this therapy were observed. In conclusion, alum irrigation is effective for controlling massive bladder hemorrhage for a rather short time. Therefore, additional treatment modalities should also be considered for primary diseases. |
Proteomic Analysis of Ubiquitin-Like Posttranslational Modifications Induced by the Adenovirus E4-ORF3 Protein ABSTRACT Viruses interact with and regulate many host metabolic pathways in order to advance the viral life cycle and counteract intrinsic and extrinsic antiviral responses. The human adenovirus (Ad) early protein E4-ORF3 forms a unique scaffold throughout the nuclei of infected cells and inhibits multiple antiviral defenses, including a DNA damage response (DDR) and an interferon response. We previously reported that the Ad5 E4-ORF3 protein induces sumoylation of Mre11 and Nbs1, which are essential for the DDR, and their relocalization into E4-ORF3-induced nuclear inclusions is required for this modification to occur. In this study, we sought to analyze a global change in ubiquitin-like (Ubl) modifications, with particular focus on SUMO3, by the Ad5 E4-ORF3 protein and to identify new substrates with these modifications. By a comparative proteome-wide approach utilizing immunoprecipitation/mass spectrometry, we found that Ubl modifications of 166 statistically significant lysine sites in 51 proteins are affected by E4-ORF3, and the proteome of modifications spans a diverse range of cellular functions. Ubl modifications of 92% of these identified sites were increased by E4-ORF3. We further analyzed SUMO3 conjugation of several identified proteins. Our findings demonstrated a role for the Ad5 E4-ORF3 protein as a regulator of Ubl modifications and revealed new SUMO3 substrates induced by E4-ORF3. IMPORTANCE The adenovirus E4-ORF3 protein induces dynamic structural changes in the nuclei of infected cells and counteracts host antiviral responses. One of the mechanisms that accounts for this process is the relocalization and sequestration of cellular proteins into an E4-ORF3 nuclear scaffold, but little is known about how this small viral protein affects diverse cellular responses. In this study, we analyzed for the first time the global pattern of ubiquitin-like (Ubl) modifications, with particular focus on SUMO3, altered by E4-ORF3 expression. The results suggest a role for the Ad5 E4-ORF3 protein as a regulator of Ubl modifications and reveal new SUMO3 substrates targeted by E4-ORF3. Our findings propose Ubl modifications as a new mechanism by which E4-ORF3 may modulate cellular protein functions in addition to subnuclear relocalization. |
/**
* @license
* Copyright Google Inc. All Rights Reserved.
*
* Use of this source code is governed by an MIT-style license that can be
* found in the LICENSE file at https://angular.io/license
*/
import {CommonModule} from '@angular/common';
import {Component, NgModule, ɵdetectChanges} from '@angular/core';
import {buildTree, emptyTree} from '../util';
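// Resets the benchmark component to an empty tree and synchronously runs Ivy change detection.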
export function destroyDom(component: TreeComponent) {
component.data = emptyTree;
ɵdetectChanges(component);
}
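// Rebuilds the full benchmark tree and synchronously runs Ivy change detection.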
export function createDom(component: TreeComponent) {
component.data = buildTree();
ɵdetectChanges(component);
}
const numberOfChecksEl = document.getElementById('numberOfChecks')!;
let detectChangesRuns = 0;
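// Runs change detection ten times in a tight loop and reports the cumulative count on the page.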
export function detectChanges(component: TreeComponent) {
for (let i = 0; i < 10; i++) {
ɵdetectChanges(component);
}
detectChangesRuns += 10;
numberOfChecksEl.textContent = `${detectChangesRuns}`;
}
@Component({
selector: 'tree',
inputs: ['data'],
template: `
<span [style.backgroundColor]="bgColor"> {{data.value}} </span>
<tree *ngIf='data.right != null' [data]='data.right'></tree>
<tree *ngIf='data.left != null' [data]='data.left'></tree>
`,
})
export class TreeComponent {
data: any = emptyTree;
get bgColor() {
return this.data.depth % 2 ? '' : 'grey';
}
}
@NgModule({declarations: [TreeComponent], imports: [CommonModule]})
export class TreeModule {
}
|
//Deobfuscated with https://github.com/SimplyProgrammer/Minecraft-Deobfuscator3000 using mappings "C:\Users\Admin\Desktop\Minecraft-Deobfuscator3000-1.2.2\1.12 stable mappings"!
//Decompiled by Procyon!
package me.shatteredhej.railhack.railhackmod.modules;
import me.shatteredhej.railhack.railhackmod.module.*;
import me.shatteredhej.railhack.railhackmod.guiscreen.settings.*;
import me.shatteredhej.railhack.railhackmod.category.*;
import me.shatteredhej.railhack.*;
import net.minecraft.client.gui.*;
public class ClickGui extends Module
{
Setting label_frame;
Setting name_frame_r;
Setting name_frame_g;
Setting name_frame_b;
Setting background_frame_r;
Setting background_frame_g;
Setting background_frame_b;
Setting background_frame_a;
Setting border_frame_r;
Setting border_frame_g;
Setting border_frame_b;
Setting label_widget;
Setting name_widget_r;
Setting name_widget_g;
Setting name_widget_b;
Setting background_widget_r;
Setting background_widget_g;
Setting background_widget_b;
Setting background_widget_a;
Setting border_widget_r;
Setting border_widget_g;
Setting border_widget_b;
private static ClickGui INSTANCE;
public ClickGui() {
super(Category.Gui);
this.label_frame = this.register("Frame", "ClickGUIInfoFrame", "Frames");
this.name_frame_r = this.register("Name R", "ClickGUINameFrameR", 255, 0, 255);
this.name_frame_g = this.register("Name G", "ClickGUINameFrameG", 255, 0, 255);
this.name_frame_b = this.register("Name B", "ClickGUINameFrameB", 255, 0, 255);
this.background_frame_r = this.register("Background R", "ClickGUIBackgroundFrameR", 230, 0, 255);
this.background_frame_g = this.register("Background G", "ClickGUIBackgroundFrameG", 100, 0, 255);
this.background_frame_b = this.register("Background B", "ClickGUIBackgroundFrameB", 50, 0, 255);
this.background_frame_a = this.register("Background A", "ClickGUIBackgroundFrameA", 210, 0, 255);
this.border_frame_r = this.register("Border R", "ClickGUIBorderFrameR", 255, 0, 255);
this.border_frame_g = this.register("Border G", "ClickGUIBorderFrameG", 255, 0, 255);
this.border_frame_b = this.register("Border B", "ClickGUIBorderFrameB", 255, 0, 255);
this.label_widget = this.register("Widget", "ClickGUIInfoWidget", "Widgets");
this.name_widget_r = this.register("Name R", "ClickGUINameWidgetR", 255, 0, 255);
this.name_widget_g = this.register("Name G", "ClickGUINameWidgetG", 255, 0, 255);
this.name_widget_b = this.register("Name B", "ClickGUINameWidgetB", 255, 0, 255);
this.background_widget_r = this.register("Background R", "ClickGUIBackgroundWidgetR", 255, 0, 255);
this.background_widget_g = this.register("Background G", "ClickGUIBackgroundWidgetG", 255, 0, 255);
this.background_widget_b = this.register("Background B", "ClickGUIBackgroundWidgetB", 255, 0, 255);
this.background_widget_a = this.register("Background A", "ClickGUIBackgroundWidgetA", 100, 0, 255);
this.border_widget_r = this.register("Border R", "ClickGUIBorderWidgetR", 255, 0, 255);
this.border_widget_g = this.register("Border G", "ClickGUIBorderWidgetG", 255, 0, 255);
this.border_widget_b = this.register("Border B", "ClickGUIBorderWidgetB", 255, 0, 255);
this.name = "ClickGui";
this.tag = "GUI";
this.set_bind(54);
}
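// Pushes the configured RGB(A) theme values into the shared RailHack click GUI renderer on every update tick.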
public void onUpdate() {
RailHack.clickGui.theme_frame_name_r = this.name_frame_r.getValue(1);
RailHack.clickGui.theme_frame_name_g = this.name_frame_g.getValue(1);
RailHack.clickGui.theme_frame_name_b = this.name_frame_b.getValue(1);
RailHack.clickGui.theme_frame_background_r = this.background_frame_r.getValue(1);
RailHack.clickGui.theme_frame_background_g = this.background_frame_g.getValue(1);
RailHack.clickGui.theme_frame_background_b = this.background_frame_b.getValue(1);
RailHack.clickGui.theme_frame_background_a = this.background_frame_a.getValue(1);
RailHack.clickGui.theme_frame_border_r = this.border_frame_r.getValue(1);
RailHack.clickGui.theme_frame_border_g = this.border_frame_g.getValue(1);
RailHack.clickGui.theme_frame_border_b = this.border_frame_b.getValue(1);
RailHack.clickGui.theme_widget_name_r = this.name_widget_r.getValue(1);
RailHack.clickGui.theme_widget_name_g = this.name_widget_g.getValue(1);
RailHack.clickGui.theme_widget_name_b = this.name_widget_b.getValue(1);
RailHack.clickGui.theme_widget_background_r = this.background_widget_r.getValue(1);
RailHack.clickGui.theme_widget_background_g = this.background_widget_g.getValue(1);
RailHack.clickGui.theme_widget_background_b = this.background_widget_b.getValue(1);
RailHack.clickGui.theme_widget_background_a = this.background_widget_a.getValue(1);
RailHack.clickGui.theme_widget_border_r = this.border_widget_r.getValue(1);
RailHack.clickGui.theme_widget_border_g = this.border_widget_g.getValue(1);
RailHack.clickGui.theme_widget_border_b = this.border_widget_b.getValue(1);
}
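// Opens the click GUI screen when the module is enabled; whenDisabled below closes it again.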
public void onEnable() {
if (ClickGui.mc.world != null && ClickGui.mc.player != null) {
ClickGui.mc.displayGuiScreen((GuiScreen)RailHack.clickGui);
}
}
public void whenDisabled() {
if (ClickGui.mc.world != null && ClickGui.mc.player != null) {
ClickGui.mc.displayGuiScreen((GuiScreen)null);
}
}
public static ClickGui getINSTANCE() {
if (ClickGui.INSTANCE == null) {
ClickGui.INSTANCE = new ClickGui();
}
return ClickGui.INSTANCE;
}
private void setInstance() {
ClickGui.INSTANCE = this;
}
static {
ClickGui.INSTANCE = new ClickGui();
}
}
|
Nick Garvan of the Thames Valley Police, London, is skeptical about the purported number of security cameras in the UK. He says that the often quoted number of more than four million was based on a single study in 2002 which took its numbers from a single London street and scaled them up. He has not given an alternative figure.
He and the ACPO (Association of Chief Police Officers) do, however, want to commandeer the existing cameras operated by local councils (around 30,000 of them). They also want to tie these together into a cohesive network, coupled with a national facial recognition database. Garvan, somewhat ironically given his plans, said:
Any perception on the part of the public that there is some kind of Orwellian infrastructure sitting behind society where these cameras are terribly well integrated and joined up as part of the surveillance state is entirely wrong.
I guess that might soon change.
UK CCTV numbers 'may be over-stated' [The Register]
Photo: Takomabibelot/Flickr |
package token
import (
"context"
"math/rand"
"os"
"reflect"
"runtime"
"strings"
"testing"
"time"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
"github.com/chef/automate/components/authn-service/tokens/mock"
"github.com/chef/automate/components/authn-service/tokens/pg"
"github.com/chef/automate/components/authn-service/tokens/pg/testconstants"
tokens "github.com/chef/automate/components/authn-service/tokens/types"
tutil "github.com/chef/automate/components/authn-service/tokens/util"
uuid "github.com/chef/automate/lib/uuid4"
)
var logger *zap.Logger
func init() {
cfg := zap.NewProductionConfig()
cfg.Level.SetLevel(zap.ErrorLevel)
logger, _ = cfg.Build()
rand.Seed(time.Now().Unix())
}
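// adapterTestFunc is the signature shared by every storage-adapter test case below.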
type adapterTestFunc func(context.Context, *testing.T, tokens.Storage)
// TestToken tests the mock and pg adapters via their implemented adapter
// interface
func TestToken(t *testing.T) {
pgURLGiven := false
// Note: this matches CI
pgCfg := pg.Config{
PGURL: "postgresql://postgres@127.0.0.1:5432/authn_test?sslmode=disable",
}
if v, found := os.LookupEnv("PG_URL"); found {
pgCfg.PGURL = v
pgURLGiven = true
}
adapters := map[string]tokens.TokenConfig{
"mock": &mock.Config{},
"pg": &pgCfg,
}
// Note: because the pg adapter doesn't let us set the stage so easily,
// these overlap a bit: most _create_ 1+ tokens first
// (any failures in these "setup creates" trigger a test failure,
// i.e., they're t.Fatal'ing out).
tests := []adapterTestFunc{
testGetTokens,
testGetToken,
testGetTokenIDWithValue,
testGetTokenIDWithValueNotFound,
testCreateToken,
testCreateTokenWithValue,
testCreateLegacyTokenWithValue,
testDeleteToken,
testDeleteTokenNotFound,
testUpdateTokenActiveOnly,
testUpdateTokenUpdatesUpdatedField,
testUpdateTokenNotFound,
} // Note: if a "not found" case is last, we'll leave a tidy test database
for adpName, adpCfg := range adapters {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
t.Run(adpName, func(t *testing.T) {
for _, test := range tests {
// TODO 2017/09/02 sr: this is pretty inefficient, we'll run the pg
// migrations for each and every test case. Since the overall
// performance still isn't that bad, I'll leave it at that for now.
adp, err := adpCfg.Open(nil, logger)
if err != nil {
// The logic to determine if we want to ignore this PG connection
// failure is as follows:
// - if the developer has passed PG_URL, we assume they want to run
// the pg tests (for testing migrations, etc)
// - if this is running on CI, never skip
// Why bother skipping? -- We don't want our test suite to require
// a running postgres instance, as that would be annoying.
if pgURLGiven || os.Getenv("CI") == "true" {
t.Fatalf("opening connector: %s", err)
} else {
t.Logf("opening database: %s", err)
t.Logf(testconstants.SkipPGMessageFmt, pgCfg.PGURL)
t.SkipNow()
}
}
require.Nil(t, err, "opening connector: %s", err)
if r, ok := adp.(tokens.Resetter); ok {
err := r.Reset(ctx)
require.Nil(t, err, "reset adapter: %s", err)
}
// use the function name to identify the test case
name := strings.Split(runtime.FuncForPC(reflect.ValueOf(test).Pointer()).Name(), ".")[2]
t.Run(name, func(t *testing.T) {
test(ctx, t, adp)
})
}
})
}
}
func testGetTokens(ctx context.Context, t *testing.T, ta tokens.Storage) {
_, err := ta.CreateToken(ctx, "id0", "<PASSWORD>", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
_, err = ta.CreateToken(ctx, "id1", "<PASSWORD>", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
toks, err := ta.GetTokens(ctx)
if err != nil {
t.Fatal(err)
}
if len(toks) != 2 {
t.Errorf("expected two tokens, got %d", len(toks))
}
}
func testGetToken(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok0, err := ta.CreateToken(ctx, "id0", "<PASSWORD>", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
tok, err := ta.GetToken(ctx, tok0.ID)
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok, "expected token 'node1', got token=%v", tok)
if tok.Value != tok0.Value {
t.Errorf("expected token 'node1' to have a token %v, got %v", tok0.Value, tok.Value)
}
}
func testGetTokenIDWithValueNotFound(ctx context.Context, t *testing.T, ta tokens.Storage) {
_, err := ta.GetTokenIDWithValue(ctx, "token1")
if err != nil {
if _, ok := errors.Cause(err).(*tokens.NotFoundError); !ok {
t.Errorf("expected token.NotFoundError, got %s", err)
}
}
}
func testGetTokenIDWithValue(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok, err := ta.CreateToken(ctx, "id0", "<PASSWORD>", true, []string{"project-1"})
require.Nilf(t, err, "expected no error, got err=%v", err)
require.NotNilf(t, tok, "expected token 'node3', got token=%v", tok)
if tok.Value == "" {
t.Error("expected returned token to have a value, got ''")
}
tokID, err := ta.GetTokenIDWithValue(ctx, tok.Value)
require.Nilf(t, err, "expected no error, got err=%v", err)
assert.Equalf(t, tokID, tok.ID, "expected token ID to match %q", tok.ID)
}
func testCreateToken(ctx context.Context, t *testing.T, ta tokens.Storage) {
before := time.Now().Add(-(time.Second * 20)).UTC()
tok, err := ta.CreateToken(ctx, "id0", "<PASSWORD>", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok, "expected token 'node3', got token=%v", tok)
if tok.Value == "" {
t.Error("expected returned token to have value, got ''")
}
tok2, err := ta.GetToken(ctx, tok.ID)
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok2, "expected token 'node3', got token=%v", tok2)
if tok2.Value != tok.Value {
t.Errorf("expected token 'node3' to have a value %v, got %v", tok.Value, tok2.Value)
}
if tok2.Description != tok.Description {
t.Errorf("expected token 'node3' to have a description %v, got %v", tok.Description, tok2.Description)
}
if !tok2.Created.After(before) {
t.Errorf("expected token 'node3' creation time to be after %s, got %s", before, tok2.Created)
}
assert.ElementsMatch(t, tok.Projects, tok2.Projects)
}
func testCreateTokenWithValue(ctx context.Context, t *testing.T, ta tokens.Storage) {
before := time.Now().Add(-time.Second * 20).UTC()
tok, err := ta.CreateTokenWithValue(ctx,
"id0", generateRandomTokenString(tutil.MinimumTokenLength()), "node3", true, []string{"project-1"})
require.NoError(t, err)
require.NotNilf(t, tok, "expected token 'node3', got token=%v", tok)
require.NotZero(t, tok.Value, "expected returned token to have a value")
tok2, err := ta.GetToken(ctx, tok.ID)
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok2, "expected token 'node3', got token=%v", tok2)
assert.Equal(t, tok.Value, tok2.Value)
assert.Equal(t, tok.Description, tok2.Description)
if !tok2.Created.After(before) {
t.Errorf("expected token 'node3' creation time to be after %s, got %s", before, tok2.Created)
}
}
func testCreateLegacyTokenWithValue(ctx context.Context, t *testing.T, ta tokens.Storage) {
before := time.Now().Add(-time.Second * 20).UTC()
tok, err := ta.CreateLegacyTokenWithValue(ctx, generateRandomTokenString(tutil.MinimumLegacyTokenLength-1))
if err == nil {
t.Errorf("expected token validation error")
}
tok, err = ta.CreateLegacyTokenWithValue(ctx, generateRandomTokenString(tutil.MinimumLegacyTokenLength))
require.NoError(t, err)
require.NotNil(t, tok, "expected token got token=%v", tok)
require.NotZero(t, tok.Value, "expected returned token to have a value")
tok2, err := ta.GetToken(ctx, tok.ID)
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok2, "expected token got token=%v", tok2)
assert.Equal(t, tok.Value, tok2.Value)
assert.Equal(t, tokens.LegacyTokenDescription, tok2.Description)
assert.ElementsMatch(t, tok.Projects, tok2.Projects)
if !tok2.Created.After(before) {
t.Errorf("expected token creation time to be after %s, got %s", before, tok2.Created)
}
}
func generateRandomTokenString(length int) string {
var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ=")
b := make([]rune, length)
for i := range b {
b[i] = letters[rand.Intn(len(letters))]
}
return string(b)
}
func testDeleteToken(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok0, err := ta.CreateToken(ctx, "id0", "node1", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
err = ta.DeleteToken(ctx, tok0.ID)
require.Nil(t, err, "expected deleted token 'node1', got err=%v", err)
tok1, err := ta.GetToken(ctx, tok0.ID)
require.NotNil(t, err, "expected an error when getting a deleted token")
require.Nil(t, tok1, "expected token not to be found, got token=%v", tok1)
if _, ok := errors.Cause(err).(*tokens.NotFoundError); !ok {
t.Errorf("expected not found token error, got err=%v", err)
}
}
func testDeleteTokenNotFound(ctx context.Context, t *testing.T, ta tokens.Storage) {
err := ta.DeleteToken(ctx, uuid.Must(uuid.NewV4()).String())
if err != nil {
if _, ok := errors.Cause(err).(*tokens.NotFoundError); !ok {
t.Errorf("expected not found token 'node1', got err=%v", err)
}
}
}
func testUpdateTokenActiveOnly(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok0, err := ta.CreateToken(ctx, "id0", "node1", true, []string{"project-1"})
if err != nil {
t.Fatalf("expected no error, got err=%v", err)
}
tok, err := ta.UpdateToken(ctx, tok0.ID, "", false, []string{"project-1"})
require.NoError(t, err)
require.NotNil(t, tok)
assert.Equal(t, tok0.Description, tok.Description)
assert.Equal(t, false, tok.Active)
assert.ElementsMatch(t, tok0.Projects, tok.Projects)
}
func testUpdateTokenUpdatesAll(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok0, err := ta.CreateToken(ctx, "id0", "node1", true, []string{"project-1"})
if err != nil {
t.Fatalf("expected no error, got err=%v", err)
}
newDesc := "newDesc"
newProj := []string{"project-2"}
tok, err := ta.UpdateToken(ctx, tok0.ID, newDesc, false, newProj)
require.NoError(t, err)
require.NotNil(t, tok)
assert.Equal(t, newDesc, tok.Description)
assert.Equal(t, false, tok.Active)
assert.ElementsMatch(t, newProj, tok.Projects)
}
func testUpdateTokenNotFound(ctx context.Context, t *testing.T, ta tokens.Storage) {
_, err := ta.UpdateToken(ctx, uuid.Must(uuid.NewV4()).String(), "desc", true, []string{"project-1"})
if err != nil {
if _, ok := errors.Cause(err).(*tokens.NotFoundError); !ok {
t.Errorf("expected not found token 'node1', got err=%v", err)
}
}
}
func testUpdateTokenUpdatesUpdatedField(ctx context.Context, t *testing.T, ta tokens.Storage) {
tok0, err := ta.CreateToken(ctx, "id0", "node1", true, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
tok, err := ta.UpdateToken(ctx, tok0.ID, "", false, []string{"project-1"})
require.Nil(t, err, "expected no error, got err=%v", err)
require.NotNil(t, tok, "expected token 'node1', got token=%v", tok)
if tok.Created != tok0.Created {
t.Errorf("expected token 'node1' to have created=%v, got %v", tok0.Created, tok.Created)
}
}
|
import {LogModel} from './Log.model';
import {UserModel} from './User.model';
export {
LogModel,
UserModel
} |
/**
 * Created by Juan Carlos on 27/05/2015.
 */
// NOTE: imports below are inferred from the calls in this class (Android SDK + Genson);
// HttpManager, Flight and Reservation are project-local classes not shown here.
import android.graphics.drawable.Drawable;
import android.util.Log;

import com.owlike.genson.Genson;

import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class APIService {

    private final static String API_URL = "https://airportfsi.herokuapp.com/api/v1/";
    private final static String FLIGHT_PATH = "flights";
    private final static String RESERVATION_PATH = "reservations";
    private final static String QR_PATH = "https://chart.googleapis.com/chart?cht=qr&chs=170x170&chl=";
    private final static String USER = "?user_email=juankgalvis-9a@hotmail.com";
    private final static String TOKEN = "&user_token=2YtxKrpAFzQKSJUGyKyB";

    public static List<Flight> getFlights(String originAirport, String destinationAirport, String departureTime) {
        List<Flight> flights = new ArrayList<>();
        String url = API_URL + FLIGHT_PATH + USER + TOKEN
                + "&departure_date=" + departureTime
                + "&arrival_date=" + departureTime
                + "&origin_airport_id=" + originAirport
                + "&destination_airport_id=" + destinationAirport;
        Log.e("url", url);
        BufferedReader br = HttpManager.getInstance().get(url);
        if (br != null) {
            StringBuilder json = new StringBuilder();
            String line;
            try {
                while ((line = br.readLine()) != null) {
                    json.append(line);
                }
                Log.e("json", json.toString());
                Genson genson = new Genson();
                List<HashMap> hashMaps = genson.deserialize(json.toString(), List.class);
                for (HashMap hashMap : hashMaps) {
                    flights.add(genson.deserialize(genson.serialize(hashMap), Flight.class));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return flights;
    }

    public static List<Reservation> getReservations() {
        List<Reservation> reservations = new ArrayList<>();
        String url = API_URL + RESERVATION_PATH + USER + TOKEN;
        Log.e("url", url);
        BufferedReader br = HttpManager.getInstance().get(url);
        if (br != null) {
            StringBuilder json = new StringBuilder();
            String line;
            try {
                while ((line = br.readLine()) != null) {
                    json.append(line);
                }
                Log.e("json", json.toString());
                Genson genson = new Genson();
                List<HashMap> hashMaps = genson.deserialize(json.toString(), List.class);
                for (HashMap hashMap : hashMaps) {
                    reservations.add(genson.deserialize(genson.serialize(hashMap), Reservation.class));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return reservations;
    }

    public static Drawable generateQR(String data) {
        return HttpManager.getInstance().downloadImage(QR_PATH + data);
    }
} |
Reduction filters for minimizing data transfers in distributed query optimization It has long been recognized that query optimization in distributed database systems is an important research issue. The challenge is to determine a sequence of operations which will process the query while minimizing the chosen cost function. Finding the optimal optimization for a general query is an NP-hard problem so, in general, heuristics are employed to find a cost-effective and efficient processing method. We present a novel approach to the problem, which uses reduction filters, with the objective of minimizing the total volume of data transferred in the network. We assume a distributed relational database management system and select-project-join queries. This means that we have a number of relations, each located at a different site in the network, which must be joined and the result made available at some distinct query site. Our technique is to reduce the relations, before shipment to the query site, using reduction filters and thereby significantly reduce the total communication cost. |
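To make the mechanism concrete, a reduction filter can be thought of as a compact summary of one relation's join keys that is shipped instead of the relation itself. The sketch below is illustrative only (a simple hash-bit filter in the spirit of a semijoin/Bloom filter, not the specific method proposed here), showing how a remote site can discard non-joining tuples before the expensive transfer to the query site.
# Illustrative sketch of a hash-based reduction filter (assumed design, not the
# paper's exact algorithm): site 1 ships a small bit filter over its join keys,
# and site 2 uses it to prune tuples before the expensive data transfer.
NUM_BITS = 1024

def build_filter(rows, key):
    bits = 0
    for row in rows:
        bits |= 1 << (hash(row[key]) % NUM_BITS)
    return bits

def reduce_relation(rows, key, bits):
    # Keep only tuples whose join key might match the other site's keys.
    return [r for r in rows if bits & (1 << (hash(r[key]) % NUM_BITS))]

orders = [{"cust_id": 1, "total": 10}, {"cust_id": 7, "total": 99}]         # at site 1
customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Bob"}]  # at site 2
f = build_filter(orders, "cust_id")                  # a few bytes shipped to site 2
shipped = reduce_relation(customers, "cust_id", f)   # only likely-joining tuples travel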
Effects of fixation and freezing on some morphometric characteristics of Nile tilapia Fish preservation methods including use of formalin and freezing are widely used to preserve fish specimen in the laboratory to maintain their freshness for future laboratory analysis. This present study aimed to investigate the effects of fixation and freezing on the morphometric characteristics of Nile tilapia, Oreochromis niloticus. Forty samples of a single cohort of O. niloticus were obtained from the Tono reservoir in Navrongo, Ghana. Total length (TL) and body weight (W) of each fish were measured. Twenty samples of O. niloticus were subjected to freezing at -4oC whilst the remaining twenty were fixed in 4% formaldehyde solution. The study lasted for thirteen days during which the length and weight were determined repeatedly in a sequence during the storage period. Although there was no significant difference (p > 0.05) in the change of length and weight measured during the study, all samples showed some degree of shrinkage within the storage period. For samples preserved by freezing, there was a 5.62 % and 19.61 % reduction in length and weight respectively, while those preserved in formalin reduced by 5.24% and 10.72% in length and weight respectively. For condition factor (k), there was no change at the end of the experiment for samples preserved by freezing but a marginal increase of 0.08% was realized for those preserved in formalin. Though shrinkage occurred in both samples preserved in formalin and freezing, the greatest shrinkage was recorded by those preserved by freezing. INTRODUCTION Nile tilapia, Oreochromis niloticus is a fish of African origin belonging to the family Cichlidae. It occurs in a wide variety of freshwater habitats like rivers, lakes, canals, and irrigation channels. It is a species of high economic value and it has been widely introduced outside its natural range. O. niloticus and its hybrids are the most important cultured fish species, as well as becoming an increasingly important food fish in many parts of the world. It is the major species farmed in Ghana, and according to FAO, it constitutes over 80% of aquaculture production in the country and it occurs in several rivers, as well as manmade lakes. their freshness for future laboratory analysis. However, body proportions of fish preserved in this manner may show variable degrees of change after a standard period. Most authors have reported a decrease in length (Al-Hassan and Abdullah, 1992) and some authors report changes in weight and condition factor. Studies of fish preservation have been carried out with various fish species such as herring (Schnack and Rosenthal, 1978), marine fish food organisms (), young walleye (Glenn and Mathias, Stizostedion vitreum, Perca fluviatilis L., and pike Esox lucius L., (), and sea trout egg (Al-Hassan and Shawafi, 1997). In the reports of all these studies, there were varying changes in the morphometrics of fish after the preservation periods. Various studies have shown that the use of formalin and freezing have effects on the length, weight and condition factor of fish after a while (Glenn and Mathias, 1987;Al-Hassan and Shawafi, 1997;Al-Hassan and Abdullah, 1992). It is important to find out how these preservation methods will affect the morphometric measurements of an important fish such as Nile tilapia. Such information will help researchers use the appropriate method to preserve their tilapia specimen to meet the needs of their research. 
It is for this reason that the present study investigated the effects formalin or freezing on the morphometric characteristics of Nile tilapia, O. niloticus harvested from the Tono reservoir in Navrongo, Ghana. Collection of fish samples The fish samples were obtained from Tono reservoir in Navrongo in the Upper East Region of Ghana. A total of forty specimens of a single cohort of Nile tilapia, O. niloticus were obtained from the Tono reservoir by artisanal fishers who use gillnets of mesh size ranging 2 -8 cm. The fish samples were packaged in sterile polyethene bags and placed in an ice chest containing ice and immediately transported to the GetFund Laboratory of the University for Development Studies, Navrongo Campus, Ghana, where the experiment was conducted. The identity of the samples was confirmed using the field identification guide by Dankwa et al.. Measurement of length and weight Immediately the samples got to the laboratory, measurement of length and weight of each fish sample was taken. The total length (TL) of each sample was measured on a measuring board. The measurement was taken from the tip of the snout to the tip of the caudal fin and recorded to the nearest 0.01 cm as the initial length. The body weight was also measured using an electronic balance (XY-2C series electronic balance) after using a tissue to mop off water from the surface of the specimens and recorded to the nearest 0.01g as the initial weight. Experimental set-up Twenty specimens of O. niloticus were subjected to freezing at -4C temperature while the remaining twenty were fixed in 4% formaldehyde treatment after the initial measurement of length and weight. The experiment took place for thirteen days within which the measurement of length and weight were repeated every other day during the storage period (i.e. 3 rd, 5 th, 7 th, 9 th, 11 th and 13 th days). Before measurements were taken on a set day, specimens in the formalin were removed and allowed to dry for some time before their measurements were taken. The frozen samples were also allowed to completely thaw before measurements were taken. The fish samples were placed back into their respective treatments right after the measurements were taken. Estimation of condition factor (k) The condition factor (k) was calculated from the relationship: k = 100W/L 3 (Gomiero and Braga, 2005). Where 'W' and 'L' are the mean body weight and mean total length of the fish, respectively. Statistical analysis Data collected from the experiment were presented as the mean ± standard error of the mean (SEM). The data failed normality test and were subjected to Mann Whitney U test to detect the difference in length, weight and condition factor (k) before and after treatment. RESULTS AND DISCUSSION Although there were no significant differences (p > 0.05) between all the values of length and weight measured during the study, the samples showed some degree of shrinkage within the storage period. Table 1 below shows the effects of freezing and formalin treatments on the length and Figure 1 shows the percentage (%) change in length during the preservation period of O. niloticus. A mean reduction in length (from 17.14 -16.72 cm for freezing and from 17.55 -17.34 for formalin representing 2.42% and 1.19% change for freezing and formalin preservations respectively) was seen to occur after the first two days of storage. 
On the final day (13th day), the mean length reduced to 15.90 cm for freezing and 16.63 cm for formalin representing a percentage reduction of 7.22% and 5.24% for freezing and formalin respectively. From this result, it was realized that the greatest change in length was recorded in specimens preserved by freezing. Table 2 below shows the effects of freezing and formalin on the weight and Figure 2 shows the percentage (%) change in Weight during the preservation period. A mean reduction in weight (from 102.13 -96.53g for freezing and from 103.19 -101.35g for formalin representing 5.47% and 1.74% change for freezing and formalin respectively) was seen to occur after the first two days of storage. On the final day (13th day), the mean weight reduced to 82.1g for freezing and 91.5g for formalin representing a percentage reduction of 19.61% and 10.72% for freezing and formalin respectively. Likewise, the case of length, it was realized that the greatest change in weight was recorded in specimens preserved by freezing. Table 3 below shows the effects of freezing and formalin on the condition factor (k) whilst Figure 3 shows percentage (%) change in condition factor (k) during the Preservation Period of O. niloticus. An increment in condition factor (from 2.03 -2.07 for freezing, and from 1.91 -1.98 for formalin were observed representing -0.04 and -0.07changes in condition factor for freezing and formalin respectively) was seen to occur after the first two days. Changes in the condition factor reduced on the 5th, 7th, 9th, and 11th days, while an increment was observed on the 13th day for both freezing and formalin. Likewise, the case of length and weight, it was also realized that the greatest change in condition factor was recorded in specimens preserved by freezing. The study revealed a reduction in length and weight of the O. niloticus samples after preserving them in formalin and freezing. Similar findings have been observed by other researchers using different fish species. For instance, Ajah and Nunoo, observed shrinkage of Sardinella aurita after subjecting them to four preservation conditions -freezing, formalin, smoking and salting. S. aurita decreased in length, weight and condition factor, except for an increase in condition factor (k) with formalin in this study. They, therefore, proposed that some adjustments in length, weight and condition factor were necessary to equate preserved fish samples to fresh ones. Puigcerver studied European minnow, Phoxinus phoxinus and noted a significant decrease in both length and weight measurements due to fixation and preservation. Al-Hassan and Shawafi kept marine Rastrelliger kanagurta in different concentrations of formalin and observed that some fishes increased in size, whilst other fishes reduced. This study however, did not show any increment in the morphometric measurement of the O. niloticus samples. It rather revealed a faster shrinkage rate especially, in samples preserved by freezing (though not significantly different (p > 0.05) from those preserved in formalin), which did not conform to the result of Jawad who observed a greater shrinkage in fish samples preserved in 5% formalin with tap water than in fishes stored in 70% alcohol with tap water. 
However, the result of this study is in accordance with those of Billy for Sarotherodon mossambicus and Al-Hassan and Abdullah for Barbus luteus, in which they both reported slow changes in the fish body proportions due to preservation, though the rate of shrinkage in their study was comparatively higher. According to Theilacker, most literature reports have recorded shrinkage for larvae of various species at all length, when placed in Formaldehyde solution. This discrepancy between studies may be due to different handling of the species before storage which can cause a greater degree of shrinkage than the fixative itself. The shrinkage in this study might have occurred due to the loss of internal water from the fish. A little change in length and weight or the morphometric characteristics affected the true condition and biology of the fish and that is likely to affect the understanding of existing relationships among various taxonomic categories. CONCLUSION AND RECOMMENDATION This study realized a reduction in the morphometric characteristics of O. niloticus samples preserved in both formalin and freezing, but the greatest shrinkage was recorded from samples preserved by freezing. However, these changes were not statistically significant. The shrinkage that occurred was attributable to loss of internal water from the fish. As much as possible, samples of fish collected from the field should be worked on immediately on the field or in the laboratory, but if it becomes necessary to preserve, formalin should be used. Further study should be conducted using alcohol as a preservative to see whether its effects will be lower than formalin as reported in some studies. |
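The arithmetic behind the percentages reported above follows directly from the mean lengths and weights, with the condition factor given by k = 100W/L^3. The short script below is an illustrative check only, using the mean values quoted for the frozen samples; it is not part of the original analysis.
# Illustrative check of the reported shrinkage and condition factor (k = 100*W/L^3).
def percent_change(initial, final):
    return 100.0 * (initial - final) / initial

def condition_factor(weight_g, length_cm):
    return 100.0 * weight_g / length_cm ** 3

length_day1, length_day13 = 17.14, 15.90   # cm, frozen samples
weight_day1, weight_day13 = 102.13, 82.10  # g, frozen samples

print(percent_change(length_day1, length_day13))    # roughly 7.2 % reduction in length
print(percent_change(weight_day1, weight_day13))    # roughly 19.6 % reduction in weight
print(condition_factor(weight_day1, length_day1))   # roughly 2.03
print(condition_factor(weight_day13, length_day13))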
Effect of attentive fixation in macaque thalamus and cortex. Attentional modulation of neuronal responsiveness is common in many areas of visual cortex. We examined whether attentional modulation in the visual thalamus was quantitatively similar to that in cortex. Identical procedures and apparatus were used to compare attentional modulation of single neurons in seven different areas of the visual system: the lateral geniculate, three visual subdivisions of the pulvinar , and three areas of extrastriate cortex representing early, intermediate, and late stages of cortical processing (V2, V4/PM, area 7a). A simple fixation task controlled transitions among three attentive states. The animal waited for a fixation point to appear (ready state), fixated the point until it dimmed (fixation state), and then waited idly to begin the next trial (idle state). Attentional modulation was estimated by flashing an identical, irrelevant stimulus in a neuron's receptive field during each of the three states; the three responses defined a "response vector" whose deviation from the line of equal response in all three states (the main diagonal) indicated the character and magnitude of attentional modulation. Attentional modulation was present in all visual areas except the lateral geniculate, indicating that modulation was of central origin. Prevalence of modulation was modest (26%) in pulvinar, and increased from 21% in V2 to 43% in 7a. Modulation had a push-pull character (as many cells facilitated as suppressed) with respect to the fixation state in all areas except Pdm where all cells were suppressed during fixation. The absolute magnitude of attentional modulation, measured by the angle between response vector and main diagonal expressed as a percent of the maximum possible angle, differed among brain areas. Magnitude of modulation was modest in the pulvinar (19-26%), and increased from 22% in V2 to 41% in 7a. However, average trial-to-trial variability of response, measured by the coefficient of variation, also increased across brain areas so that its difference among areas accounted for more than 90% of the difference in modulation magnitude among areas. We also measured attentional modulation by the ratio of cell discharge due to attention divided by discharge variability. The resulting signal-to-noise ratio of attention was small and constant, 1.3 +/- 10%, across all areas of pulvinar and cortex. We conclude that the pulvinar, but not the lateral geniculate, is as strongly affected by attentional state as any area of visual cortex we studied and that attentional modulation amplitude is closely tied to intrinsic variability of response. |
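The geometric measure described above is easy to compute from three firing rates. The sketch below is a hedged reading of that description: the normalization to a percent of the maximum possible angle is assumed here to use the angle between the main diagonal and a coordinate axis (about 54.7 degrees), which the abstract does not state explicitly.
import numpy as np

def attentional_modulation(ready, fixation, idle):
    # Angle between the 3-state response vector and the equal-response diagonal.
    v = np.array([ready, fixation, idle], dtype=float)
    diag = np.ones(3) / np.sqrt(3.0)
    cos_a = np.clip(v.dot(diag) / np.linalg.norm(v), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))
    max_angle = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))  # diagonal-to-axis angle, ~54.7 deg (assumed maximum)
    return 100.0 * angle / max_angle

# A cell suppressed during fixation relative to the ready and idle states:
print(attentional_modulation(ready=20.0, fixation=8.0, idle=18.0))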
// event/builder_test.go
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !disable_events
package event_test
import (
"context"
"fmt"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"golang.org/x/exp/event"
"golang.org/x/exp/event/keys"
)
func TestClone(t *testing.T) {
var labels []event.Label
for i := 0; i < 5; i++ { // one greater than len(Builder.labels)
labels = append(labels, keys.Int(fmt.Sprintf("l%d", i)).Of(i))
}
ctx := event.WithExporter(context.Background(), event.NewExporter(event.NopHandler{}, nil))
b1 := event.To(ctx)
b1.With(labels[0]).With(labels[1])
check(t, b1, labels[:2])
b2 := b1.Clone()
check(t, b1, labels[:2])
check(t, b2, labels[:2])
b2.With(labels[2])
check(t, b1, labels[:2])
check(t, b2, labels[:3])
// Force a new backing array for b.Event.Labels.
for i := 3; i < len(labels); i++ {
b2.With(labels[i])
}
check(t, b1, labels[:2])
check(t, b2, labels)
b2.Log("") // put b2 back in the pool.
b2 = event.To(ctx)
check(t, b1, labels[:2])
check(t, b2, []event.Label{})
b2.With(labels[3]).With(labels[4])
check(t, b1, labels[:2])
check(t, b2, labels[3:5])
}
func check(t *testing.T, b event.Builder, want []event.Label) {
t.Helper()
if got := b.Event().Labels; !cmp.Equal(got, want, cmp.Comparer(valueEqual)) {
t.Fatalf("got %v, want %v", got, want)
}
}
func valueEqual(l1, l2 event.Value) bool {
return fmt.Sprint(l1) == fmt.Sprint(l2)
}
func TestTraceBuilder(t *testing.T) {
// Verify that the context returned from the handler is also returned from Start,
// and is the context passed to End.
ctx := event.WithExporter(context.Background(), event.NewExporter(&testTraceHandler{t}, nil))
ctx, end := event.To(ctx).Start("s")
val := ctx.Value("x")
if val != 1 {
t.Fatal("context not returned from Start")
}
end()
}
type testTraceHandler struct {
t *testing.T
}
func (*testTraceHandler) Log(ctx context.Context, _ *event.Event) {}
func (*testTraceHandler) Annotate(ctx context.Context, _ *event.Event) {}
func (*testTraceHandler) Metric(ctx context.Context, _ *event.Event) {}
func (*testTraceHandler) Start(ctx context.Context, _ *event.Event) context.Context {
return context.WithValue(ctx, "x", 1)
}
func (t *testTraceHandler) End(ctx context.Context, _ *event.Event) {
val := ctx.Value("x")
if val != 1 {
t.t.Fatal("Start context not passed to End")
}
}
func TestFailToClone(t *testing.T) {
ctx := event.WithExporter(context.Background(), event.NewExporter(event.NopHandler{}, nil))
catch := func(f func()) {
defer func() {
r := recover()
if r == nil {
t.Error("expected panic, did not get one")
return
}
got, ok := r.(string)
if !ok || !strings.Contains(got, "Clone") {
t.Errorf("got panic(%v), want string with 'Clone'", r)
}
}()
f()
}
catch(func() {
b1 := event.To(ctx)
b1.Log("msg1")
// Reuse of Builder without Clone; b1.data has been cleared.
b1.Log("msg2")
})
catch(func() {
b1 := event.To(ctx)
b1.Log("msg1")
_ = event.To(ctx) // re-allocate the builder
// b1.data is populated, but with the wrong information.
b1.Log("msg2")
})
}
|
MOWEAQUA — By all accounts, the bus trip home was excruciating. |
from sampling import Sampler
import algos
import numpy as np
from simulation_utils import create_env, get_feedback, run_algo
import sys
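# Preference-based reward learning loop: repeatedly propose pairs of trajectories,
# collect comparison feedback against a randomly drawn "true" weight vector w_true,
# and re-sample the posterior over reward weights after each query (or batch of queries).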
def batch(task, method, N, M, b):
    if N % b != 0:
        print('N must be divisible by b')
        exit(0)
    B = 20*b
    simulation_object = create_env(task)
    d = simulation_object.num_of_features
    w_true = 2*np.random.rand(d)-1
    w_true = w_true / np.linalg.norm(w_true)
    print('If in automated mode: true w = {}'.format(w_true/np.linalg.norm(w_true)))
    lower_input_bound = [x[0] for x in simulation_object.feed_bounds]
    upper_input_bound = [x[1] for x in simulation_object.feed_bounds]
    w_sampler = Sampler(d)
    psi_set = []
    s_set = []
    i = 0
    while i < N:
        w_sampler.A = psi_set
        w_sampler.y = np.array(s_set).reshape(-1,1)
        w_samples = w_sampler.sample(M)
        mean_w_samples = np.mean(w_samples, axis=0)
        print('Samples so far: ' + str(i))
        print('w estimate = {}'.format(mean_w_samples/np.linalg.norm(mean_w_samples)))
        print('Alignment = {}'.format(mean_w_samples.dot(w_true)/np.linalg.norm(mean_w_samples)))
        inputA_set, inputB_set = run_algo(method, simulation_object, w_samples, b, B)
        for j in range(b):
            input_A = inputA_set[j]
            input_B = inputB_set[j]
            psi, s = get_feedback(simulation_object, input_B, input_A, w_true)
            psi_set.append(psi)
            s_set.append(s)
        i += b
    w_sampler.A = psi_set
    w_sampler.y = np.array(s_set).reshape(-1,1)
    w_samples = w_sampler.sample(M)
    mean_w_samples = np.mean(w_samples, axis=0)
    print('Samples so far: ' + str(N))
    print('w estimate = {}'.format(mean_w_samples/np.linalg.norm(mean_w_samples)))
    print('Alignment = {}'.format(mean_w_samples.dot(w_true)/np.linalg.norm(mean_w_samples)))
def nonbatch(task, method, N, M):
    simulation_object = create_env(task)
    d = simulation_object.num_of_features
    w_true = 2*np.random.rand(d)-1
    w_true = w_true / np.linalg.norm(w_true)
    print('If in automated mode: true w = {}'.format(w_true/np.linalg.norm(w_true)))
    lower_input_bound = [x[0] for x in simulation_object.feed_bounds]
    upper_input_bound = [x[1] for x in simulation_object.feed_bounds]
    w_sampler = Sampler(d)
    psi_set = []
    s_set = []
    for i in range(N):
        w_sampler.A = psi_set
        w_sampler.y = np.array(s_set).reshape(-1,1)
        w_samples = w_sampler.sample(M)
        mean_w_samples = np.mean(w_samples, axis=0)
        print('Samples so far: ' + str(i))
        print('w estimate = {}'.format(mean_w_samples/np.linalg.norm(mean_w_samples)))
        print('Alignment = {}'.format(mean_w_samples.dot(w_true)/np.linalg.norm(mean_w_samples)))
        input_A, input_B = run_algo(method, simulation_object, w_samples)
        psi, s = get_feedback(simulation_object, input_A, input_B, w_true)
        psi_set.append(psi)
        s_set.append(s)
    w_sampler.A = psi_set
    w_sampler.y = np.array(s_set).reshape(-1,1)
    w_samples = w_sampler.sample(M)
    mean_w_samples = np.mean(w_samples, axis=0)
    print('Samples so far: ' + str(N))
    print('w estimate = {}'.format(mean_w_samples/np.linalg.norm(mean_w_samples)))
    print('Alignment = {}'.format(mean_w_samples.dot(w_true)/np.linalg.norm(mean_w_samples)))
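# Hypothetical command-line entry point (a sketch only, not part of the original
# module; the file imports sys but defines no __main__ block here, so the argument
# order below is an assumption).
if __name__ == '__main__':
    task = sys.argv[1]             # simulation environment name passed to create_env
    method = sys.argv[2]           # query-selection algorithm passed to run_algo
    N = int(sys.argv[3])           # total number of queries
    M = int(sys.argv[4])           # number of posterior samples per iteration
    if len(sys.argv) > 5:
        batch(task, method, N, M, int(sys.argv[5]))
    else:
        nonbatch(task, method, N, M)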
|
Individual differences methods for randomized experiments. Experiments allow researchers to randomly vary the key manipulation, the instruments of measurement, and the sequences of the measurements and manipulations across participants. To date, however, the advantages of randomized experiments to manipulate both the aspects of interest and the aspects that threaten internal validity have been primarily used to make inferences about the average causal effect of the experimental manipulation. This article introduces a general framework for analyzing experimental data to make inferences about individual differences in causal effects. Approaches to analyzing the data produced by a number of classical designs and 2 more novel designs are discussed. Simulations highlight the strengths and weaknesses of the data produced by each design with respect to internal validity. Results indicate that, although the data produced by standard designs can be used to produce accurate estimates of average causal effects of experimental manipulations, more elaborate designs are often necessary for accurate inferences with respect to individual differences in causal effects. The methods described here can be diversely applied by researchers interested in determining the extent to which individuals respond differentially to an experimental manipulation or treatment and how differential responsiveness relates to individual participant characteristics. |
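As a toy illustration of the distinction drawn above (not the authors' framework), the sketch below simulates why a purely between-subjects experiment identifies only the average causal effect, while a within-person, repeated-measures design also yields noisy person-level effect estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 200
individual_effects = rng.normal(2.0, 1.5, size=n)   # each person has their own true effect
baseline = rng.normal(50.0, 5.0, size=n)

# Between-subjects design: one observation per person, so only the average effect is identified.
treated = rng.random(n) < 0.5
y = baseline + treated * individual_effects + rng.normal(0.0, 2.0, size=n)
avg_effect = y[treated].mean() - y[~treated].mean()

# Within-person design: both conditions observed per person, so individual effects can be estimated.
y_control = baseline + rng.normal(0.0, 2.0, size=n)
y_treated = baseline + individual_effects + rng.normal(0.0, 2.0, size=n)
per_person_effects = y_treated - y_control

print(avg_effect)               # close to the true mean effect (2.0)
print(per_person_effects.std()) # reflects real heterogeneity plus measurement noise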
package cmps252.HW4_2.UnitTesting;
import static org.junit.jupiter.api.Assertions.*;
import java.io.FileNotFoundException;
import java.util.List;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import cmps252.HW4_2.Customer;
import cmps252.HW4_2.FileParser;
@Tag("30")
class Record_1155 {
private static List<Customer> customers;
@BeforeAll
public static void init() throws FileNotFoundException {
customers = FileParser.getCustomers(Configuration.CSV_File);
}
@Test
@DisplayName("Record 1155: FirstName is Octavia")
void FirstNameOfRecord1155() {
assertEquals("Octavia", customers.get(1154).getFirstName());
}
@Test
@DisplayName("Record 1155: LastName is Shove")
void LastNameOfRecord1155() {
assertEquals("Shove", customers.get(1154).getLastName());
}
@Test
@DisplayName("Record 1155: Company is Jacobs Engineering Group Inc")
void CompanyOfRecord1155() {
assertEquals("Jacobs Engineering Group Inc", customers.get(1154).getCompany());
}
@Test
@DisplayName("Record 1155: Address is 405 Murray Hill Pky")
void AddressOfRecord1155() {
assertEquals("405 Murray Hill Pky", customers.get(1154).getAddress());
}
@Test
@DisplayName("Record 1155: City is East Rutherford")
void CityOfRecord1155() {
assertEquals("East Rutherford", customers.get(1154).getCity());
}
@Test
@DisplayName("Record 1155: County is Bergen")
void CountyOfRecord1155() {
assertEquals("Bergen", customers.get(1154).getCounty());
}
@Test
@DisplayName("Record 1155: State is NJ")
void StateOfRecord1155() {
assertEquals("NJ", customers.get(1154).getState());
}
@Test
@DisplayName("Record 1155: ZIP is 7073")
void ZIPOfRecord1155() {
assertEquals("7073", customers.get(1154).getZIP());
}
@Test
@DisplayName("Record 1155: Phone is 201-756-1976")
void PhoneOfRecord1155() {
assertEquals("201-756-1976", customers.get(1154).getPhone());
}
@Test
@DisplayName("Record 1155: Fax is 201-756-6761")
void FaxOfRecord1155() {
assertEquals("201-756-6761", customers.get(1154).getFax());
}
@Test
@DisplayName("Record 1155: Email is <EMAIL>")
void EmailOfRecord1155() {
assertEquals("<EMAIL>", customers.get(1154).getEmail());
}
@Test
@DisplayName("Record 1155: Web is http://www.octaviashove.com")
void WebOfRecord1155() {
assertEquals("http://www.octaviashove.com", customers.get(1154).getWeb());
}
}
|
Part of the Truthout Series Solutions
Here’s how government could be run “The Bain Way”: Strip the assets of 50 companies (states) by selling off parks and universities, raiding pension funds and privatizing prisons to pay off debts and favored lobbyists. Why didn’t Lincoln think of this?
Republican presidential candidate Mitt Romney has touted his business acumen as one of his biggest assets to solving America’s economic problems. He even mused that perhaps you should not be able to run for president until you had a couple of years of running a business under your belt.
Romney has had one major business experience – founding and running a private equity firm called Bain Capital, something very different than his father, who ran a car company. In the past several weeks there have been two detailed investigative stories on how he ran his private equity firm, and there has been much discussion in the media on how he ran his business differently than others: It has been dubbed, “The Bain Way.”
I know that he believes that he can structure the American tax and regulation codes in a way to help small and large American businesses bloom, and then just get out of the way and watch the economy grow. But that isn’t the main thing that he will have to deal with immediately if he gets into office in January.
He will be facing the problem of making a “grand bargain” debt deal with the Congress on what to do about the federal government deficit and its long-term debt. Right now, as Bill Clinton pointed out, the “arithmetic” of Romney’s debt plans is not adding up using the old Washington ways, so he may have to fall back on his business acumen.
So how would he use that hard-earned knowledge in private equity to get us out of this mess? I have some suggestions using The Bain Way and the assets of the United States – and I do mean states. But first we have to learn, through these two investigative reports, how The Bain Way made Mr. Romney and his investors very rich.
Matt Taibbi of Rolling Stone Magazine, and Jesse Eisinger of ProPublica, lay out part of The Bain Way of doing business. First from Taibbi:
Here’s how Romney would go about “liberating” a company: A private equity firm like Bain typically seeks out floundering businesses with good cash flows. It then puts down a relatively small amount of its own money and runs to a big bank like Goldman Sachs or Citigroup for the rest of the financing. (Most leveraged buyouts are financed with 60 to 90 percent borrowed cash.) The takeover firm then uses that borrowed money to buy a controlling stake in the target company, either with or without its consent. When an LBO [leveraged buyout] is done without the consent of the target, it’s called a hostile takeover; such thrilling acts of corporate piracy were made legend in the 80s, most notably the 1988 attack by notorious corporate raiders Kohlberg Kravis Roberts against RJR Nabisco, a deal memorialized in the book “Barbarians at the Gate.” Romney and Bain avoided the hostile approach, preferring to secure the cooperation of their takeover targets by buying off a company’s management with lucrative bonuses. Once management is on board, the rest is just math. So if the target company is worth $500 million, Bain might put down $20 million of its own cash, then borrow $350 million from an investment bank to take over a controlling stake. But here’s the catch. When Bain borrows all of that money from the bank, it’s the target company that ends up on the hook for all of the debt… Once all that debt is added, one of two things can happen. The company can fire workers and slash benefits to pay off all its new obligations to Goldman Sachs and Bain, leaving it ripe to be resold by Bain at a huge profit. Or it can go bankrupt – this happens after about 7 percent of all private equity buyouts – leaving behind one or more shuttered factory towns. Either way, Bain wins. By power-sucking cash value from even the most rapidly-dying firms, private equity raiders like Bain almost always get their cash out before a target goes belly up.
Now from Eisinger and ProPublica:
…Romney’s firm wasn’t always looking for startups or troubled companies that it could turn around. Private equity companies conduct a variety of transactions other than buying startups with growth potential, or troubled firms ripe for a turnaround. Some seek out family-run operations under the theory that those typically have a lot of fat to cut. Some like “roll-ups,” buying up a bunch of small operations in one industry and combining them into a powerhouse with economies of scale. Firms buy divisions of large corporations that are trying to streamline their operations. Some acquisitions fit more than one of these descriptions. The constant is debt, and plenty of it. Private equity firms use such borrowed money to maximize their gains. The Romney campaign says Bain did various types of deals. And it celebrates that Bain helped launch or rebuild some American corporate stalwarts, like Staples, Bright Horizons and Sports Authority. Yet in addition, under Romney’s tenure, Bain often sought out solid businesses that didn’t need to be turned around. The reason: Such companies could operate under the burden of the enormous debt that Bain would layer on them. …The Wall Street Journal found that many of the businesses Bain bought went bust, even when Bain reaped big financial wins. The paper analyzed 77 businesses Bain invested in while Mr. Romney led the firm, from its 1984 start until early 1999, finding that 22 percent either filed for bankruptcy reorganization or closed their doors by the end of the eighth year after Bain first invested. An additional 8 percent ran into so much trouble that all of the money Bain invested was lost. But overall, the hits more than made up for the losses, and Bain recorded 50 percent to 80 percent annual gains in the period, the paper found.
Now how does Romney face the huge debt problem once he is president? Can’t raise taxes because he took Grover Norquist’s no-tax pledge. He has to cut taxes for the wealthy to help the millionaires and billionaires who gave so much money to the Super PACS.
He was able to get through the election by never saying what tax loopholes he was going to close, but he also knows that he will not be able to get rid of the middle class’ favorite tax break, the home mortgage deduction – and the churches would go nuts if he eliminated the charitable deductions. The Senate would surely filibuster those suggestions because of the pressure.
He can’t cut the Pentagon because he promised the neocons to up its budget trillions of dollars beyond what the Pentagon even wants. And it would be very hard to really cut the social programs to the bone because it isn’t enough to get you going to a balanced budget and those pesky Democrats would probably filibuster that too.
If he only had some companies to leverage and suck out their assets…
But wait, he has to think government now, but apply his business knowledge. It is the United States, so he already has 50 companies to look to – they are now called states.
After looking at the fiscal health of these company states, he also found that they don’t all pull their fiscal weight evenly. Some states get much more in federal aid than they pay in taxes, and other states get much less federal aid than they pay in taxes. All of them have state assets to use for the national debt, but let’s leverage the tax slackers first.
Turns out that the lion’s share of states who get more from the feds than they pay in taxes are ironically Republican (red) or Republican-leaning states.
These states are underwater to the federal government – the fed’s money to them is larger than what they give us back. Hmmm…it is mainly the Democratic (blue) states that are pulling more than their share of the load and are paying in more to the feds to pay on the debt.
This is a sticky political situation, but he has four years before he has to face the red states again in an election and he has plenty of billionaire friends who can deluge the airways to spin his actions…he decides to first go after the slacker states who aren’t producing and pulling their weight.
So what would be The Bain Way of doing this? The federal government is already highly leveraged in debt, in part because of the slacker states who take so much from the federal government. So he needs to get to their assets and strip as much as he can out of the states to pay our debts, get some profit for the executive branch and some favors for his favorite lobbyists.
It doesn’t matter what it would do to the state that is supposed to, by law, balance its budget. Let them cut their budgets to pay up, because what else are they going to do?
Let’s take one of the worst offenders among the underwater states, Mississippi. It is not a rich state, with a median income of only around $38,000 a year compared to $52,000 nationwide. But it has eight state universities, with its pride and joy, the University of Mississippi – Ole Miss – at the top of the list.
These universities could be liquidated for their acreage; Ole Miss alone has 2500 acres. Or he could sell them to the growing number of for-profit universities. Wonder if University of Phoenix could pick up some of the smaller ones – or even go after Ole Miss?
Mississippians love their Ole Miss football program, so maybe he could sell it as an NFL team.
Mississippi also has 14 state prisons and six private prisons, so he could unload the state prisons on several of the largest private prison companies. I’m sure that would go over well in the state, because don’t all Republicans love privatization? Think of all the money he could save laying off state prison guards. Mississippi also has 22 state parks that he could sell off – they aren’t anything like California state parks, but you got to work with what you have.
And of course, he needs to go to the honey hole of most state money, the state employee pension fund. Mississippi’s fund is rated in pretty good fiscal health and pays out $1.8 billion annually.
Once he scoops out the assets of the fund to pay our debt, and for Mississippi to pay our executive branch consulting fees on how to liquidate their state, he can dump 90,000 retirees who are receiving pensions now and make sure that the 162,000 state employees who are paying into the fund don’t get their hands on pension fund assets that are needed for the federal debt.
There will probably be a lot of screaming, picketing and yelling by Mississippians when the layoffs and privatization hit, but Romney can get past executives – called governors in state language – to help him pull it off, as well as paid consultants. Former governor Haley Barbour, who is already a lobbyist, sounds like a good first hire.
Besides, what is Mississippi going to do? Secede? They tried that and the feds now have a bigger army.
Because Romney has such a big debt to tackle, he is going to have to march down the list of slacker states – West Virginia, New Mexico, Alaska, Alabama, South Carolina, Montana and so forth. But remember, part of The Bain Way is to also leverage company states that are in good shape with larger assets, that “could operate under the burden of the enormous debt.“
One of the states that fits that bill is blue state California. They are having some trouble with their current budget but wow, what assets they have! If California would break off as its own country, it would be the eighth largest economy in the world and could join the G8.
It has one of the largest university systems in the country, with a very high rating that could bring big money if it was privatized or sold off for the land. State parks make up a large portion of the state, because this blue state embraced the environmental movement early on, and also saved very valuable old growth redwood trees that could be cut for good money. And many of the state parks would sell high on the real estate market, especially the ones right on the picturesque California coast – wonder what Pfeiffer State Park in Big Sur would bring?
California also has 33 prisons that could be privatized, and so much money could be saved by laying off their highly paid, union-organized prison guards. The notorious San Quentin prison has 275 acres with beautiful views of San Francisco Bay and could be leveled for luxury homes. The land alone has been estimated to be worth up to $664 million, so with the high price of real estate in California, that could be real money.
And California has two very large state pension funds, one for the public employees and one for the teachers, that are underfunded since the recession; but the sheer size of these two funds will still be very lucrative in paying off the debt.
Of course, Romney has to hope that the wily Democratic governor, Jerry Brown, doesn’t get smart and present the federal government with a bill for all the taxes it paid into the federal government that it didn’t get back in benefits. Californians are pushy that way against authority.
I have obviously taken this to the point of absurdity. Governments don’t exist to make profits, but are supposed to serve the people. The executive branch doesn’t get consulting fees, 50 percent to 80 percent annual return on its money or giant multimillion dollar profits.
Even knowing how to do business The Bain Way is not going to enhance the rest of American business, especially manufacturing and service businesses, which need to build and invest for the long-term future. Much of the American public has been stunned or sickened at the thought of callously laying off American workers to ship jobs overseas, and watching long-term companies and their towns fail. Most Americans cannot even fathom how Romney turned his IRA account into $100 million.
After reading about the extremes of The Bain Way, even for the most aggressive capitalists, it is clear that the federal government needs to be run like a government, not a high stakes profit and loss game.
Our American government assets that are tied up in our states are for everyone and are to be conserved and cherished not only for ourselves, but for the next generation. We do need to get our fiscal house in order, but after reading about The Bain Way and Mr. Romney’s love of his business acumen, I disagree, more than ever, with his musings about the necessity of a business background to be president.
Herbert Hoover and George W. Bush had the most business acumen and training of all our presidents. Abraham Lincoln and Harry Truman were failed businessmen. I know out of those four who I would pick to be president. After studying The Bain Way, isn’t it obvious? |
def print_trades_map(trades_map):
    print_trades(
        sorted(
            trades_map.values(),
            key=lambda t: t.id
        )
) |
When it comes to playing a superhero on television there really is no better job. For Katie Cassidy, it was literally a dream come true to get to portray the iconic Black Canary on Arrow for four seasons.
While Laurel Lance met a tragic end last season on Arrow, her death was met with fan outrage and sadness at her character’s surprising end. But Laurel’s presence will forever live on in the hearts of the audience, the show, and in the actress herself.
“It was like a dream come true,” Cassidy told us of playing Black Canary. “It was such an honor to be asked to do that. The fact that the fans responded so well was such a big payoff. This part of my life is certainly something I’ll never forget. Black Canary will live on within me forever, and I’m grateful for every moment of it.”
But the beauty of the Arrowverse is that no one is really dead – unless sadly you’re Tommy Merlyn. With the universe expanding as it has with The Flash, Legends of Tomorrow, and Supergirl, there are different elements at play that make it possible for characters that we’ve lost to return. Arrow has flashbacks, The Flash introduced doppelgangers, and Legends of Tomorrow has the ability to travel to different time periods. Basically the possibilities are endless. So too are the possibilities of Cassidy’s return.
“Black Canary will live on within me forever, and I’m grateful for every moment of it.”
Cassidy was invited to The Flash last season to portray Laurel Lance’s Earth-2 doppelganger Black Siren in the show’s penultimate episode. The hour allowed Cassidy to explore her character in an entirely new way as she went from Black Canary the hero to Black Siren the villain. But who doesn’t love to play the villain?
“Playing the villain was so fun,” she said. “Obviously it’s a different show. The Flash is lighter. It was flattering to be asked to portray Laurel on Earth-2, Black Siren.”
With Black Siren being introduced on The Flash, it allowed Laurel’s doppleganger to have a supersonic scream – a real Canary Cry – given that on that Earth she is a metahuman. We got to see an entirely different side to the character as she partnered with the villainous Zoom.
But given that the last we saw of Black Siren on The Flash she was locked away in the pipeline, there’s definitely a very real possibility we’ll see her character in the future.
“Hopefully we’ll get to see more of her,” Cassidy said.
In the Arrowverse, anything is possible.
Of course the possibility of Cassidy’s return prompts our minds to wander in every which way. But one thing that we’re most excited for is the possibility of a Black Siren vs. White Canary showdown, which was brought up during Cassidy’s panel at MegaCon back in May.
Given the history between Laurel and Sara (Caity Lotz) and the deep love the sisters share, just the idea of Sara not only coming face-to-face with Laurel’s doppelganger but possibly having to fight her really gives the potential storyline so much emotion.
“I mean emotionally it would be more difficult for [Sara] to see her sister like that,” Cassidy said. “Because Laurel of Earth-1 doesn’t really have a bad bone in her body. I think she means well and has a good heart. So to see this side of her, it’s not really Laurel. It’s the antithesis.”
But it’s definitely a dynamic that Cassidy would love to explore.
“I think it would be awesome and amazing and such a cool dynamic,” she said. “Just create more story for Legends. It would be great.”
“Hopefully we’ll get to see more of [Black Siren].”
While Laurel is famous for her role as Black Canary, that’s not the only thing that defined Laurel Lance or how she will be remembered. Laurel has always had a strong allegiance to her family, especially her father, Quentin, and sister, Sara. While things have not always been perfect throughout their history, the one thing that has never changed is how much love there is between them.
“I have sisters in real life, and they are my closest, best friends I’ve ever had,” Cassidy said. “So I understand that bond and that relationship you can have with a sibling.”
Cassidy praised Lotz and Paul Blackthorne for incredible experiences that prompted fun, love, as well as learning as they brought the complicated yet loving Lance family dynamic to life.
“As far as working with both Caity Lotz and Paul Blackthorne was and is an amazing, amazing opportunity and such an amazing learning experience,” Cassidy said. “Especially with Paul you know he’s done this for so long and his craft is similar to mine and I feel like a sponge when I’m around him because I want to soak up everything he tells me. And creating backstory and trying to fill our characters. And having it portrayed and come through on a television screen and for our fans to be able to see and feel that is a pretty amazing thing and again another reason why I’m so thankful.”
“I feel like everything they wrote was justified and made sense to me. I got to explore different levels of emotion. I had a place to go. I continuously had an interesting and evolving journey.”
Cassidy has been a significant part of Arrow since the pilot, and Laurel Lance has grown tremendously over these past four seasons. We saw Laurel rise as a hero within the justice system in season one. We saw Laurel struggle with addiction in season two and rise up from that hardship. We saw Laurel deal with losing her sister, Sara, again and dedicate herself to honoring her sister’s memory as a different Canary. And we saw Laurel come full circle as Black Canary in season four. Simply said, Laurel has had quite an emotional and significant journey.
“I think as an actor, the writers they wrote for me from season one to the end of season four, I had an amazing arc,” Cassidy said. “I feel like everything they wrote was justified and made sense to me. I got to explore different levels of emotion. I had a place to go. I continuously had an interesting and evolving journey. As an actor on a show it’s all you can ask for. Again I am just so thankful.”
Arrow returns for season 5 Oct. 5 on The CW. |
The map, below, of the proposed new U.S. Congressional districts in Cook and collar counties is very disturbing. It was drawn by Democrats in order to make trouble for Republican opponents, for the most part (see Rick Pearson's analysis here). And though I realize that Republicans would have drawn an equally cynical map had they held all the map-making cards this year (and that they will draw an equally cynical map when, down the line, they hold all the map-making cards), I'm still struck by the angry, admittedly naive thought that this is not the way it should be.
Ideally, districts would be as close to square as common sense and cartography allows -- drawn systematically by colorblind, party-blind computers. I'm not even a fan of gerrymandering to assure that geographically disconnected ethnic and racial groups get proportional representation; in fact, I'm not even sure that all this creative boundary manipulation actually makes these groups politically stronger.
If you exclude the northeast part of Illinois, the rest of the map looks entirely reasonable to me. In fact, if the Chicago area had the same type of "blocks," I'd think it was okay. As it is, it is just another incumbent protection plan, like ethics reform and campaign finance laws. I've said it here before and I'll say it again: I was initially opposed to term limits, but now I think we desperately need them.
Do you have a current map of Northeast Illinois, before these changes were made?
Iowa doesn't have square maps either and it uses a computer method to pick Districts. Iowa does keep Counties together, but that doesn't mean much for Cook County or the other Collar Counties.
Fairly simple computer modeling works in Iowa because it has a tiny minority population and generally far less diversity across the board. If you were to use the Iowa process in Illinois, Illinois Democrats would win a minority of the Congressional districts while winning statewide by fairly good margins.
Put quite simply, the population in dense places tends to be very Democratic. This is a general rule, though not universal, across the country. Republicans tend to hold majorities in broader swaths of land, but lower margins. Illinois is one of the clearest examples of this and it shows the problem.
If you use the Iowa remap process you get Republican majorities because they have smaller margins in larger geographic areas while large proportions of Democrats are in small areas. So in a case like Illinois, the Iowa remap process is no more fair if your goal is to have a representative Congressional Delegation.
The obvious way around it would be to do away with districts, which are not required by the US Constitution but which every state constitution enshrines. In larger states a proportional representation system would work quite well in avoiding the bias of either party and potentially even give 3rd parties a chance.
Thanks, I've heard Bruce Dold and Mark Kirk complaining; I believe it has a lot to do with the 10th district, where I live. Isn't it a damn shame that more lower-income people and minorities are included? It breaks my heart that the wealthy North Shore won't be controlling this district anymore, NOT!
" Isn't it a damn shame that more lower incomes and minorities are included? It breaks my heart that the wealthy North Shore won't be controling this district anymore, NOT! "
Yeah, the wealthy don't even deserve representation. They stole that money. We don't need them in this state.
The blame has to go to Congress which wrote the flawed Voting Rights Act, the courts which ruled it constitutional & various minority groups which have mistakenly believed that having only members of their own groups will best represent them in legislative bodies.
What's happened is that they elect one of their own, who then decides that he's more important than the group he came from.
They would have so much more influence with the person that both promises & delivers on those promises. If they fail, it's much easier to vote those people out.
It is easier for extreme candidates to win in homogeneous districts. It is easier for moderate candidates to win in heterogeneous districts.
That in part accounts for the polarization we now see in Washington.
@Jimmy G - Please tell us more about the under-representation of the wealthy in Washington.
ZORN REPLY -- The ArchPundit's point is that neater, more square districts would disadvantage Democrats, because dense urban areas tend to be more heavily Democratic than non-dense rural areas tend to be Republican. So my notion of simply drawing boxes would tend to create a small number of overwhelmingly Democratic districts and a large number of slightly Republican districts, resulting in something like a 3-1 split favoring the Republicans in our state House.
While this is obviously offensive at some level to our concept of democracy, the question becomes what if anything should be the remedy? Must we take it upon ourselves to tweak for human nature?
You can easily reverse the numbers in the above example to create a scenario in which the rural concentration of Republicans is far greater than the urban concentration of Democrats, skewing the result the other way.
So maybe the problem is with the somewhat outdated idea of geographical interests being paramount.
Most people, and I am certainly among them, could not give you even a rough idea of the boundaries of their ward or their state and congressional districts.
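To make the packing point above concrete, here is a rough Python sketch with invented numbers -- purely illustrative, not real Illinois data -- showing how a party can pile up huge margins in a few dense districts, win the statewide vote comfortably, and still carry only a minority of compact, equal-population districts.

# Illustrative only: a toy state of 10 equal-population districts.
# Democratic vote share is assumed to be very high in 3 dense urban
# districts and just under 50% everywhere else -- the numbers are invented.
dem_share = [0.85, 0.82, 0.80] + [0.45] * 7

statewide = sum(dem_share) / len(dem_share)
dem_seats = sum(1 for share in dem_share if share > 0.5)

print("Statewide Democratic vote share: {:.1%}".format(statewide))               # 56.2%
print("Districts won by Democrats: {} of {}".format(dem_seats, len(dem_share)))  # 3 of 10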
Gerrymandering today is used for politicians to pick their voters not the other way around. Then, they get to Washington and sell themselves to raise money for their campaigns. They need to do this because their districts are so large, they need to raise money every day in office in order to have enough to run again.
Smaller districts mean less money is necessary to run a campaign or defeat an incumbent. Small districts mean that voters know their representative better.
Smaller districts require much less in constituent service; smaller staffs will be necessary. Representatives can truly be part time under this kind of system. They don't need to get paid hundreds of thousands of dollars. New Hampshire pays its Reps $100 a year and never has a problem getting people to run. They don't stay forever either. Term limits will not be necessary either; part time representatives who don't get paid much won't overstay their welcome.
So what if Congress has 10,000 members; it will be far less easy for a special interest group to buy members to pass their preferred legislation. Lobbyists will still be necessary but they will do what they should do - lobby with information, not hand out money for campaigns. The Internet will make information dissemination much easier. Congressmen could even vote or attend hearings remotely. They spend most of their time in Washington raising money or visiting their mistresses anyway. Work will get done - we elect these people to vote on legislation - plain and simple. We didn't elect them to sell their votes to the highest bidder.
Rich people won't have an advantage with small districts - ordinary people can run and win. We can truly bring government back to the people. You are correct - this has to be a grassroots effort; politicians and their enablers won't give up power easily.
It is about time we did something about government for sale.
If it comes down to handing the Democrats unilateral power to draw the electoral map, or having to have Extremist Brady as governor to be a counterbalance, I am *very* happy it worked out the way it did.
That map will protect a woman's right to choose, keep a concealed carry law at bay, and keep extremists like Brady out of the governor's mansion for years to come.
You make a coherent argument. It is well written. There are some proposals that sound goofy/extreme but I am sure they were made for the purpose of conducting an intellectual exercise.
As you know, the benchmark document regarding the size of Congress is the FEDERALIST PAPERS.
Chicago has 50 aldermen. That is large. They are still controlled by special interests – primarily the unions. They do not operate on the citizen/statesmen model. Campaigns are still expensive.
A House of 10,000 members will have to conduct business remotely. Flying to and being housed in Washington, D.C. will be too expensive, especially if salaries are kept too low.
Why not take your example to an extreme and have direct democracy by people voting for legislation on their computers the same way they pay taxes on their computers.
I suggest that you review The Economist issue devoted to the folly of direct democracy as practiced in California. It was a boon -- not a bust -- for PR firms marketing proposed referenda.
My final suggestion is that you have to learn some things from actual practice.
Let me give a hypothetical. Assume for the sake of argument that the voter split in Illinois is 50.1% Democrat and 49.9% Republican. By using computer programs, districts can be drawn to reflect this split. And theoretically 100% of the elected officials would be Democrats.
Of course, in actual practice voter loyalty is not that clear-cut or stable. Thus things might be slightly more mushy and a few Republicans might get elected. However, you see my point.
Also remember -- what goes around comes around. Thus be careful for what you wish.
Read the Zorn Reply to the 5/28/11 10:30 AM post above.
The Zorn analysis is valid.
Chicago has now taken control of Will county. Our votes will no longer count for at least the next 10 years when it comes to the US rep vote.
First: Have districts drawn by some sophisticated computer process of randomization. Randomization is a major concept in applied mathematics for which I only have a passing acquaintance.
Second: let us have legislatures of some optimal size. The benchmark analysis is in the Federalist Papers. I note that this is still a topic explored by academics when drafting new constitutions.
Jill, I admire a woman who sticks by her principles: political gerrymandering of the worst, most despicable kind is perfectly acceptable so long as it results in consequences that favor my party. Well said.
There is absolutely no justification for drawing districts that attempt to ensure the election of members of specific races, ethnicities, or religions.
"State Sen. Kwame Raoul, the Chicago Democrat who heads the Senate Redistricting Committee, called the proposed congressional boundaries "fair and balanced."....Asked by a Republican if the map was political, Raoul said, "I think everything we do in this building is political."
Actually, boundaries should be drawn so as to define actual communities of interest.
What Americans call "proportional representation", the rest of the world calls "representation by population". The term "proportional representation" is used to denote a type of voting system that makes every vote count, and boundary allocation largely irrelevant.
"The blame has to go to Congress which wrote the flawed Voting Rights Act, the courts which ruled it constitutional & various minority groups which have mistakenly believed that having only members of their own groups will best represent them in legislative bodies."
Uh, Garry, you are a Democrat, right? You voted for that.
1. I wasn't old enough to vote when the Voting Rights Act was passed.
2. If the Republicans hadn't opposed it so vehemently, it might have been written better so the courts wouldn't have interpreted it so weirdly. Many federal judges that were appointed by Republicans have upheld these monstrosities of districts.
Or are you accusing those judges of being RINOs?
I didn't suggest direct democracy. That certainly has its limits. What I did suggest is much smaller districts for representative democracy - a small 'r' republican system.
You made my point on Chicago. It has 3 million people and 50 wards. Each ward has roughly 60,000 people. That is much too large a ward. If those wards were 3,000 or 5,000 people, ordinary citizens could serve. The job wouldn't be full time with ward heelers and corrupt individuals taking the positions.
They would be accountable as campaigns wouldn't require large organizations or major fundraising. People would know their aldermen and one alderman wouldn't have disproportionate power (read that Eddie Burke).
If there were 500 aldermen (serving without pay or huge staffs - this would save millions for Chicago) that were individually accountable for doing the right thing, they would band together with other like minded aldermen and do the right thing. They wouldn't allocate all the money to downtown because of funding they get from big developers. They wouldn't be bought and sold by employee unions angling for big pensions or benefits in return for campaign contributions.
They would be moved by policy - not politics. They wouldn't be bought because big campaign spending would attract unwanted negative attention.
What we have now - at the local level like Chicago (and LA and NY, etc), in state legislatures (again Illinois, NY, CA, etc) and in the Federal system (Congress) - is a situation where campaign funding supersedes good policy. We are in a race to spend - as can be seen from $1 billion Presidential campaigns, not to mention Congressional campaigns that run $5 million or more. Campaigns are about 30-second attack ads, infuriating mailers, and large armies of campaign workers. They are not about policy or about knowing your representative. They are about scare tactics and misinformation.
The media age has partially been responsible for this. The Internet age may well rescue us. It is up to us to see this solution and implement it.
Rescuecalifornia.org. This is where it starts.
"Many federal judges that were appointed by Republicans have upheld these monstrosities of districts.
Or are you accusing those judges of being RINOs?"
Actually, yes. Republican presidents have had a tough time getting conservative judges confirmed by the Senate. Remember Robert Bork? Remember how Clarence Thomas was so viciously attacked? Most of the Republican judges who make it through are indeed RINOs, like John Paul Stevens and David Souter.
However, one has to vote Republican for there to be even a CHANCE to get conservative judges appointed. That is one of the reasons I vote Republican. No Democratic president since John F Kennedy has appointed any conservative judges.
Of course, one would not realize that from reading the press. In the language of the press -- not to be mistaken with English -- conservative judges are called "extremist judges" and liberal judges are called "mainstream judges." No liberal judge, no matter how far left, is ever called extreme and no conservative judge is ever called mainstream.
In this case, the cure is worse than the disease.
Back in November, it was painfully obvious that Brady is an extremist. It was also very obvious that if Quinn was re-elected, Madigan would have unilateral control over the redistricting process.
There is no way I could vote for Brady. None. There is way too much on the line.
Besides, this map can only bolster support for Obama in 2012, right when he'll need it the most, which is what I'd like to see happen for Illinois.
--Archpundit put it better than I can, but this is the internet, and letters are free, so what the heck: what's so great about squareness? As Archpundit put it, squares are not a neutral shape: because Democrats tend to live in areas more densely packed with Democrats than Republicans do for Republicans, square districts will tend to ensure that Democrats are underrepresented.
Here is what the measure of fairness should be: if Democrats get 60% of the two-party vote in the entire state of Illinois, they should get 60% of the seats from Illinois (proportional representation would be a fantastic way to achieve this, and would avoid the whole gerrymandering problem forever). Who cares what the districts look like? No one's day has ever been ruined by the fact that the congressional district in which he lives is lumpy.
There is also no basis for drawing districts to reflect "communities of interest". None. The problems inherent in such a system are too obvious to waste time discussing.
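For readers wondering how proportional representation actually converts votes into seats, here is a minimal sketch of one common allocation rule, the D'Hondt highest-averages method, applied to an invented 60/40 two-party vote over 18 seats. The method and the numbers are illustrative only; nothing here is a claim about any real Illinois result.

def dhondt(votes, seats):
    # Allocate `seats` among parties using the D'Hondt highest-averages method:
    # each successive seat goes to the party with the largest votes / (seats_won + 1).
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda party: votes[party] / (alloc[party] + 1))
        alloc[winner] += 1
    return alloc

# Invented statewide two-party totals (60% vs. 40%) and 18 seats.
print(dhondt({"Democrats": 600000, "Republicans": 400000}, 18))
# -> {'Democrats': 11, 'Republicans': 7}, i.e. roughly 61% / 39% of the seats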
---One could argue that compact (for instance, square) districts do reflect communities of interest, on the theory that I have more interest in common with a person of, again for instance, different ethnicity living across the street than with one of my ethnicity living half the state away.
ZORN REPLY -- Certainly SOME of your "interests" are geographically rooted, though very little of those geographically rooted interests -- zoning comes to mind; school and police policies -- are handled by state and federal lawmakers.
The idea behind representative democracy is rooted in the concept that similarly situated people will have more or less common interests/views that can be expressed by an elected person in a legislative body. Not that everyone on your block or your street is of like mind on everything, but that the representative will best express the majority view on most issues.
At its heart it's flawed. Direct democracy on every issue would be better, though dangerous and unwieldy.
==First: Have districts drawn by some sophisticated computer process of randomization. Randomization is a major concept in applied mathematics for which I only have a passing acquaintance.
Sorry, slow getting back. This is certainly possible and you could draw maps according to an improved Iowa methodology that evened the playing field for concentrations of people. My sense is that such a system would then be susceptible to being gamed, but I'm open to it.
===Second: let us have legislatures of some optimal size. The benchmark analysis is in the Federalist Papers. I note that this is still a topic explored by academics when drafting new constitutions.
It sure is, but it doesn't affect the problem of drawing districts if you still go with single member districts.
There are variations on what I wrote above. You could create three or four multi-member districts in the state (the numbers don't come out neatly with 18 seats, so maybe add 2 or 3 at-large reps), with Cook County, the collar counties and downstate as districts, or with downstate divided in two to make four districts, from each of which 4 or 5 reps are elected. Any of these would also increase electoral diversity by producing Republicans in Cook and some Dems downstate. |
package com.car.modules.car.service.impl;

import java.util.Map;

import org.springframework.stereotype.Service;

import com.baomidou.mybatisplus.mapper.EntityWrapper;
import com.baomidou.mybatisplus.plugins.Page;
import com.baomidou.mybatisplus.service.impl.ServiceImpl;
import com.car.common.utils.PageUtils;
import com.car.common.utils.Query;
import com.car.modules.car.dao.MsgTypeDao;
import com.car.modules.car.entity.MsgTypeEntity;
import com.car.modules.car.service.MsgTypeService;

/**
 * Message-type service backed by MyBatis-Plus.
 */
@Service("msgTypeService")
public class MsgTypeServiceImpl extends ServiceImpl<MsgTypeDao, MsgTypeEntity> implements MsgTypeService {

    /**
     * Plain paginated query over message types, with no filtering conditions.
     */
    @Override
    public PageUtils queryPage(Map<String, Object> params) {
        Page<MsgTypeEntity> page = this.selectPage(
                new Query<MsgTypeEntity>(params).getPage(),
                new EntityWrapper<MsgTypeEntity>()
        );
        return new PageUtils(page);
    }

    /**
     * Paginated query that delegates to the mapper's custom queryWithProject
     * statement (presumably to include project-related data in the results).
     */
    @Override
    public PageUtils queryWithProject(Map<String, Object> params) {
        Page<MsgTypeEntity> page = new Query<MsgTypeEntity>(params).getPage();
        page = page.setRecords(this.baseMapper.queryWithProject(page));
        return new PageUtils(page);
    }
}
|
A ceremony has been held to mark the completion of a £1bn gas-fired power station in Pembrokeshire.
RWE npower has said its Pembroke Power Station, the largest of its type in Europe, will power 3.5 million homes.
Wales Office minister Stephen Crabb welcomed the boost to the economy, with 100 long term jobs created.
However, the European Commission is investigating how the permissions for the plant were granted and whether it damages the marine environment.
RWE npower will be pleased this £1bn gas power station is completed, and starting to earn some money by producing electricity for the National Grid.
According to the company's chief operating officer Kevin McCullough, the technology is "state of the art" and the five combined cycle gas turbines are 60% efficient.
Gas is piped beneath the Milford Haven waterway, near to Pembroke Dock.
Some environmentalists, especially Friends of the Earth, believe the technology being used is wasteful - firstly in not using all the waste heat emitted, and more controversially in using water from the Cleddau estuary to cool the five gas-fired turbines before draining it back, 8C warmer, into a special area of conservation.
After months of pressure, the European Commission agreed to look at Friends of the Earth's complaints, and that investigation in Brussels is still ongoing.
Since July others locally have noticed strands of white foam at low tide near the outflow pipes from the power station.
The embarrassment may be the timing just before the official opening, rather than any lasting environmental damage.
The Environment Agency concedes it's unexpected, but says it has been tested and hasn't been found to be harmful.
It's likely to be caused by algae and plankton being broken down organically in the estuary, but the EA will ask RWE to try and reduce the amount of foam in future.
Planning permission for the station was granted by the UK government in 2009, and it was granted a permit by the Environment Agency last November.
After three years of construction work, control of the final part of the plant was handed over to the station team last week.
RWE npower described the facility as one of Europe's largest and most efficient combined cycle gas turbine plants.
It said it would provide a highly flexible and reliable source of energy.
Mr Crabb, who is the MP for Preseli Pembrokeshire, was among 200 guests at the official opening ceremony.
Speaking before what will be his first official visit since joining the Wales Office, he said: "I have no doubt that Pembroke Power Station will play a vital role in maintaining the UK's energy supplies for the future, and make its own contribution to creating economic prosperity for Wales."
Friends of the Earth Cymru has complained that the station, which takes water from the Cleddau estuary as a coolant and discharges it back at a higher temperature, could damage marine life in a special area of conservation.
Environmental campaigners have also criticised the station's technology as "second rate" when Wales should be aiming at more sustainable technology.
David Hughes, head of the European Commission office in Cardiff, confirmed that it was currently dealing with a complaint into the plant, and was in touch with the UK authorities on the matter.
"What the commission is looking at now is a complaint that the permissions that were granted for building and operating the power station were not granted in the proper way," said Mr Hughes.
"The other aspect of the investigation is a possible adverse effect on the Cleddau estuary."
Mr Hughes said the commission hoped to wind the investigation up in the next couple of months and would come to a conclusion on whether or not to proceed with, or drop, the complaint.
Environment Agency Wales said it had set out strict conditions to protect and maintain the environment as part of the permit for the power station.
"This followed extensive consultation with interested parties and detailed investigations into potential environmental impacts," said a spokesman.
Both the Wales Office and the Welsh government have said they are satisfied with the plant and the technology used. |
Surgical Margin Status of Patients with Pancreatic Ductal Adenocarcinoma Undergoing Surgery with Radical Intent: Risk Factors for the Survival Impact of Positive Margins Background: For pancreatic ductal adenocarcinoma (PDAC), surgical margin status is an important pathological factor for evaluating surgical adequacy. In this study, we attempted to investigate predictive factors for the survival impact of positive surgical margins. Materials and Methods: From February 2004 to December 2013, 204 patients were diagnosed with PDAC and underwent surgery with radical intent; 189 patients fulfilled our selection criteria and were enrolled for analysis. Results: For the 189 enrolled patients with PDAC, we found male predominance (112/189, 59%) and a median age of 64 years; most patients were diagnosed with stage IIB disease (n=115, 61%). The positive surgical margin rate was 21% (n=40). Carbohydrate antigen 19-9 (CA19-9) level higher than 246 U/ml (odds ratio (OR)=2.318; 95% confidence interval (CI)=1.037-5.181 p=0.040) and lesion location in the uncinate process (OR=2.996; 95% CI=1.232-7.284 p=0.015) were the only two independent risk factors for positive surgical margins. Positive retroperitoneal soft-tissue margins were the most frequently observed (24/40, 60%). Overall, positive surgical margins had no survival impact in the 189 patients with PDAC who underwent surgery; however, positive surgical margins had an unfavorable survival impact on patients with stage IIA PDAC who underwent surgery. Conclusion: Retroperitoneal soft-tissue was the most common site for positive surgical margins. Additionally, surgical margin positivity was more likely for tumors located in the uncinate process than for other tumors. Positive surgical margins had an unfavorable survival impact on patients with stage IIA PDAC who underwent surgery. Pancreatic ductal adenocarcinoma (PDAC) is a dismal condition with poor prognosis. The 5-year overall survival for PDAC may be as low as 1.3%. This poor outcome is attributable to advanced disease at diagnosis and inefficient treatment modalities. However, surgical resection remains the mainstream treatment for both primary tumor excision and precise staging for adjuvant treatment. Unfortunately, only 15-20% patients present with resectable disease at diagnosis. For resectable PDAC, 5-year overall survival may be improved to up to 18%. For resectable PDAC, surgical margin status is an important pathological factor for evaluating surgical adequacy. However, the impact of this status on long-term clinical outcome remains debatable. In this study, we attempted to investigate predictive factors for positive surgical margins after surgery with radical intent and the survival impact of surgical margin status. Materials and Methods From February 2004 to December 2013, 204 patients were diagnosed with PDAC and underwent surgery with radical intent. Eleven patients were excluded due to involvement of the superior mesentery artery or occult distant metastasis revealed after laparotomy. Four patients with surgical mortality (4/204, 1.96%, hospital mortality within 30 days after surgery) were also excluded. Therefore, 189 patients were enrolled for analysis ( Figure 1). Medical records were reviewed and analyzed; the assessed items included clinical, laboratory, and pathological findings. Conventional Whipple's operation, pylorus-preserving pancreaticoduodenectomy, subtotal or distal pancreatectomy, or total pancreatectomy was performed as appropriate given the location of the lesion. 
Pathological results were reviewed. The location of the lesion was categorized into four major groups: head and neck; body; tail; and uncinate process. The investigated margins included pancreatic margins, the common bile duct margin, the duodenal margin, and the retroperitoneal soft-tissue margin. In our study, positive margins were defined either macroscopically or microscopically. A margin clearance of more than 0 mm was used to define R0 resection. Disease stage was determined based on the seventh edition of the American Joint Committee of Cancer (AJCC) staging manual. Risk factors for positive surgical margin. To explore risk factors for positive surgical margins, the 189 patients were categorized into two groups: negative margins (n=149) and positive margins (n=40). Age, gender, laboratory data, tumor markers, disease stage, macroscopic pathological factors, and microscopic pathological factors were compared between these groups. Factors revealed to be significant by univariate analysis were further assessed via multivariate analysis. Survival impact. Survival analysis was conducted for both groups based on individual margin status. Subgroup analyses based on disease stage were also performed. In addition to univariate analysis, multivariate Cox proportional hazard analysis was also conducted to eliminate confounding effects produced by other factors. Statistical procedures. The threshold for statistical significance was defined as p<0.05. Continuous variables were analyzed using the independent Student's t-test, whereas categorical variables were assessed using the Pearson chi-square test. Multivariate analysis was performed via logistic regression. Survival analysis was conducted using both a log-rank test (univariate) and a Cox hazards model (multivariate). SPSS v.21 (IBM Corp., Armonk, NY, USA) was used for statistical analysis. A value of p<0.05 was considered statistically significant. Table I summarizes the demographic data, disease status, surgical method, and margin status for the study cohort. Male predominance (n=112, 59%) and a median age of 64 years were observed. Most lesions were located in the head and neck (n=132, 70%), and most of the surgical procedures were pancreaticoduodenectomies, either conventional (n=102, 54%) or pylorus-preserving (n=62, 33%). Most patients in the study cohort were diagnosed with stage IIB PDAC (n=115, 61%). The rate of surgical margin positivity was 21% (n=40). Retroperitoneal soft-tissue margins were the most commonly observed positive surgical margins (n=24; 24/40, 60%). When the site of the primary tumor was considered (Table III), the risk of a positive retroperitoneal soft-tissue margin was higher for patients with tumors in the uncinate process (92%) than for those with tumor in other locations (50%). Patients with negative surgical margins (n=149) had overall survival similar to that of patients with positive surgical margins (n=40) (Figure 2). Because of the small number of patients with stage IA or stage IB PDAC, only analyses of stage IIA (n=57) and stage IIB (n=114) were performed using the Cox proportional hazards model. The factors used for analysis included age, gender, tumor location, CA19-9 level, tumor size, histology, mitotic count, lymph node ratio, lymphatic invasion, vascular invasion, peri-neural invasion, margin status, and chemotherapy application. 
Negative surgical margins, normal CA19-9 level, and well-differentiated histology were the three independent factors that favored survival for those with stage IIA disease (Table IV). No independent factors that significantly affected survival were identified for those with stage IIB disease (Table V) Discussion PDAC is a disease with a dismal outcome, and surgical resection is the only method for improving survival. In the seventh edition of the AJCC staging manual, PDAC of stages IA, IB, IIA and IIB is categorized as resectable. In contrast, stage III and stage IV PDAC are categorized as borderline resectable or unresectable. For most pathological diagnoses for malignancies, clearance of the resection margin is an index of surgical quality and an indicator of prognosis. In our study, we attempted to investigate the impact of margin status on overall survival. From the perspective of surgical oncology, optimal margin clearance of solid tumors is the primary objective. However, positive margin status is sometimes inevitable. In our study, the rate of margin positivity was 21%. In previous reports, positive margin rates among PDAC patients undergoing resection with radical intent ranged from 14% to 60%. In our study, we found that high serum CA19-9 level and tumor location in the uncinate process were factors predictive of margin positivity. Furthermore, our analyses indicated that CA19-9 was an independent risk factor. Based on our receiver operating characteristics curve analysis, the optimal cutoff value of CA19-9 for prediction of positive surgical margins was 246 U/ml. CA19-9 has been reported to be a useful biochemical marker for diagnosing pancreatic cancer and a predictor of overall survival for patients with resectable PDAC. Therefore, CA19-9 should be regarded as a predictive marker for positive surgical margins that has versatile clinical applications. Furthermore, our report found that the incidence of margin positivity was elevated for patients with tumors located in the uncinate process (Table II). Our series included 34 patients (17.9%) with a tumor in an uncinate location. However, the incidence of surgical margin positivity was much higher in this group than among patients with non-uncinate lesions (35% versus 18%). Retroperitoneal soft-tissue margins were the most common location for positive surgical margins, accounting for 60% of our cases with such margins. This result was consistent with the corresponding percentages of 80% and 73% reported in prior studies that also suggested that the retroperitoneal soft tissue is the most common site of positive surgical margins. In addition, our results (Table III) also showed a trend that positive retroperitoneal soft-tissue margins were more common for those with tumors primarily located in the uncinate process than for other tumors (p=0.074). For years, studies have attempted to elucidate factors that influence survival for patients with PDAC. Various risk factors, including preoperative biliary stent, CA19-9 level, blood transfusion, R0 resection, tumor size, absence of lymph node or distant metastases, and peri-neural infiltration, have been discussed. For a surgical oncologist, R0 resection may be the most important consideration. Certain studies have demonstrated that surgical margin affects survival, although this effect was not observed in other investigations. One study even proposed that repeated resection for margin clearance did not improve outcome. 
One possible reason for these inconsistent research results is a lack of standardization of margin definitions; another potential reason is the evolution of more efficient adjuvant chemotherapy regimens. In our study, adjuvant chemotherapy regimens were not unified but were instead chosen based on physicians' preferences and patient tolerance. All of these factors may have influenced the results of individual studies. For patients with cancer, different pathological stages can indicate different survival statuses; staging systems exist for this reason. The main purpose of our work was to evaluate the impact of positive surgical margins on survival. Therefore, in our study, it is reasonable to conduct subgroup analysis based on cancer stage, using multivariate analysis to eliminate the confounding effects of other factors. In addition, different disease stages may reflect different extents of involvement. This phenomenon may explain our observations. For stage IIA disease, a positive surgical margin was an independent factor for survival; for stage IIB disease, no local factors for survival were identified. Stage IIA PDAC should be regarded as a more localized condition for which the surgical margin affects survival. Pancreatic surgeons should attempt to perform radical surgery for PDAC by stage IIA in order to achieve favorable survival outcomes. Our study had several limitations. The study period was almost 10 years. Treatment of PDAC has improved during this period with respect to both surgical techniques and systemic treatment. This evolution in treatment may have produced detectable or undetectable survival benefits. In addition, the definition of a positive surgical margin and the standard procedure for pathological examination of the resection margin may have incorporated additional biases into our study. These potential biases render it difficult to compare different studies on this topic.
Conclusion
In our study, we found that the retroperitoneal soft tissue is the most common location for positive margins among patients with PDAC who underwent surgery. In addition, positive margins were more common for tumors located in the uncinate process than for other tumors. With respect to survival impact, positive surgical margins had a negative impact on the survival of patients with stage IIA PDAC who underwent surgery. |
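As a purely illustrative aside, the kind of odds ratio and 95% confidence interval quoted above can be computed from a 2x2 table with the standard log-odds-ratio (Wald) formula. The sketch below uses invented counts, not the study's data, and is a crude unadjusted calculation rather than the multivariate logistic regression the authors describe.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed with event, b = exposed without event,
    #            c = unexposed with event, d = unexposed without event.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: positive vs. negative margins by CA19-9 above/below a cutoff.
or_, lower, upper = odds_ratio_ci(a=25, b=70, c=15, d=79)
print("OR = {:.2f} (95% CI {:.2f}-{:.2f})".format(or_, lower, upper))  # OR = 1.88 (95% CI 0.92-3.85)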
/*-
 * SPDX-License-Identifier: Zlib
 *
 * Copyright (c) 2009-2021 <NAME> <<EMAIL>>
 * For conditions of distribution and use, see LICENSE file
 */
#include <ananas/types.h>
#include <sys/socket.h>
#include <ananas/syscalls.h>

#include "_map_statuscode.h"

int connect(int socket, const struct sockaddr* address, socklen_t address_len)
{
    // Issue the raw system call, then map the kernel status code onto the
    // usual libc return convention for connect(2).
    statuscode_t status = sys_connect(socket, address, address_len);
    return map_statuscode(status);
}
|
#include "CondTools/Ecal/interface/EcalTPGBadTTHandler.h"
#include "OnlineDB/EcalCondDB/interface/EcalLogicID.h"
#include "OnlineDB/EcalCondDB/interface/RunTPGConfigDat.h"
#include "OnlineDB/EcalCondDB/interface/FEConfigMainInfo.h"
#include "OnlineDB/EcalCondDB/interface/FEConfigBadTTInfo.h"
#include "OnlineDB/EcalCondDB/interface/RunList.h"
#include "FWCore/ParameterSet/interface/ParameterSetfwd.h"
#include "FWCore/MessageLogger/interface/MessageLogger.h"
#include "CondFormats/EcalObjects/interface/EcalTPGTowerStatus.h"
#include <iostream>
#include <fstream>
#include <ctime>
#include <unistd.h>
#include <string>
#include <cstdio>
#include <typeinfo>
#include <sstream>
popcon::EcalTPGBadTTHandler::EcalTPGBadTTHandler(const edm::ParameterSet& ps)
: m_name(ps.getUntrackedParameter<std::string>("name", "EcalTPGBadTTHandler")) {
edm::LogInfo("EcalTPGBadTTHandler") << "EcalTPGTowerStatus Source handler constructor.";
m_firstRun = static_cast<unsigned int>(atoi(ps.getParameter<std::string>("firstRun").c_str()));
m_lastRun = static_cast<unsigned int>(atoi(ps.getParameter<std::string>("lastRun").c_str()));
m_sid = ps.getParameter<std::string>("OnlineDBSID");
m_user = ps.getParameter<std::string>("OnlineDBUser");
m_pass = ps.getParameter<std::string>("OnlineDBPassword");
m_locationsource = ps.getParameter<std::string>("LocationSource");
m_location = ps.getParameter<std::string>("Location");
m_gentag = ps.getParameter<std::string>("GenTag");
m_runtype = ps.getParameter<std::string>("RunType");
edm::LogInfo("EcalTPGBadTTHandler") << m_sid << "/" << m_user << "/" << m_location << "/" << m_gentag;
}
popcon::EcalTPGBadTTHandler::~EcalTPGBadTTHandler() {}
void popcon::EcalTPGBadTTHandler::getNewObjects() {
edm::LogInfo("EcalTPGBadTTHandler") << "Started GetNewObjects!!!";
unsigned int max_since = 0;
max_since = static_cast<unsigned int>(tagInfo().lastInterval.since);
edm::LogInfo("EcalTPGBadTTHandler") << "max_since : " << max_since;
edm::LogInfo("EcalTPGBadTTHandler") << "retrieved last payload ";
// here we retrieve all the runs after the last from online DB
edm::LogInfo("EcalTPGBadTTHandler") << "Retrieving run list from ONLINE DB ... ";
edm::LogInfo("EcalTPGBadTTHandler") << "Making connection...";
econn = new EcalCondDBInterface(m_sid, m_user, m_pass);
edm::LogInfo("EcalTPGBadTTHandler") << "Done.";
if (!econn) {
std::cout << " connection parameters " << m_sid << "/" << m_user << std::endl;
throw cms::Exception("OMDS not available");
}
LocationDef my_locdef;
my_locdef.setLocation(m_location);
RunTypeDef my_rundef;
my_rundef.setRunType(m_runtype);
RunTag my_runtag;
my_runtag.setLocationDef(my_locdef);
my_runtag.setRunTypeDef(my_rundef);
my_runtag.setGeneralTag(m_gentag);
readFromFile("last_tpg_badTT_settings.txt");
unsigned int min_run;
if (m_firstRun < m_i_run_number) {
min_run = m_i_run_number + 1;
} else {
min_run = m_firstRun;
}
if (min_run < max_since) {
min_run = max_since + 1; // we have to add 1 to the last transferred one
}
std::cout << "m_i_run_number" << m_i_run_number << "m_firstRun " << m_firstRun << "max_since " << max_since
<< std::endl;
unsigned int max_run = m_lastRun;
edm::LogInfo("EcalTPGBadTTHandler") << "min_run= " << min_run << "max_run= " << max_run;
RunList my_list;
// my_list=econn->fetchRunListByLocation(my_runtag, min_run, max_run, my_locdef);
my_list = econn->fetchGlobalRunListByLocation(my_runtag, min_run, max_run, my_locdef);
std::vector<RunIOV> run_vec = my_list.getRuns();
size_t num_runs = run_vec.size();
std::cout << "number of runs is : " << num_runs << std::endl;
std::string str = "";
unsigned int irun = 0;
if (num_runs > 0) {
// going to query the ecal logic id
std::vector<EcalLogicID> my_TTEcalLogicId_EE;
my_TTEcalLogicId_EE = econn->getEcalLogicIDSetOrdered(
"EE_trigger_tower", 1, 200, 1, 70, EcalLogicID::NULLID, EcalLogicID::NULLID, "EE_offline_towerid", 12);
std::cout << " GOT the logic ID for the EE trigger towers " << std::endl;
for (size_t kr = 0; kr < run_vec.size(); kr++) {
irun = static_cast<unsigned int>(run_vec[kr].getRunNumber());
// retrieve the data :
std::map<EcalLogicID, RunTPGConfigDat> dataset;
econn->fetchDataSet(&dataset, &run_vec[kr]);
std::string the_config_tag = "";
int the_config_version = 0;
std::map<EcalLogicID, RunTPGConfigDat>::const_iterator it;
int nr = 0;
for (it = dataset.begin(); it != dataset.end(); it++) {
++nr;
//EcalLogicID ecalid = it->first;
RunTPGConfigDat dat = it->second;
the_config_tag = dat.getConfigTag();
the_config_version = dat.getVersion();
}
// it is all the same for all SM... get the last one
// here we should check if it is the same as previous run.
if ((the_config_tag != m_i_tag || the_config_version != m_i_version) && nr > 0) {
std::cout << " run= " << irun << " tag " << the_config_tag << " version=" << the_config_version << std::endl;
std::cout << "the tag is different from last transferred run ... retrieving last config set from DB"
<< std::endl;
FEConfigMainInfo fe_main_info;
fe_main_info.setConfigTag(the_config_tag);
fe_main_info.setVersion(the_config_version);
try {
econn->fetchConfigSet(&fe_main_info);
// now get TPGTowerStatus
int badttId = fe_main_info.getBttId();
if (badttId != m_i_badTT) {
FEConfigBadTTInfo fe_badTT_info;
fe_badTT_info.setId(badttId);
econn->fetchConfigSet(&fe_badTT_info);
std::vector<FEConfigBadTTDat> dataset_TpgBadTT;
econn->fetchConfigDataSet(&dataset_TpgBadTT, &fe_badTT_info);
EcalTPGTowerStatus* towerStatus = new EcalTPGTowerStatus;
typedef std::vector<FEConfigBadTTDat>::const_iterator CIfeped;
EcalLogicID ecid_xt;
FEConfigBadTTDat rd_badTT;
// reset the map
// EB
for (int ism = 1; ism <= 36; ism++) {
for (int ito = 1; ito <= 68; ito++) {
int tow_eta = (ito - 1) / 4;
int tow_phi = ((ito - 1) - tow_eta * 4);
int axt = (tow_eta * 5) * 20 + tow_phi * 5 + 1;
EBDetId id(ism, axt, EBDetId::SMCRYSTALMODE);
const EcalTrigTowerDetId towid = id.tower();
int tower_status = 0;
towerStatus->setValue(towid.rawId(), tower_status);
}
}
//EE
for (size_t itower = 0; itower < my_TTEcalLogicId_EE.size(); itower++) {
int towid = my_TTEcalLogicId_EE[itower].getLogicID();
int tower_status = 0;
towerStatus->setValue(towid, tower_status);
}
// now put at 1 those that are bad
int icells = 0;
for (CIfeped p = dataset_TpgBadTT.begin(); p != dataset_TpgBadTT.end(); p++) {
rd_badTT = *p;
int tcc_num = rd_badTT.getTCCId();
int tt_num = rd_badTT.getTTId();
std::cout << " tcc/tt" << tcc_num << "/" << tt_num << std::endl;
if (tcc_num > 36 && tcc_num <= 72) {
// SM number
int smid = tcc_num - 54;
if (tcc_num < 55)
smid = tcc_num - 18;
// TT number
int towerid = tt_num;
int tow_eta = (towerid - 1) / 4;
int tow_phi = ((towerid - 1) - tow_eta * 4);
int axt = (tow_eta * 5) * 20 + tow_phi * 5 + 1;
EBDetId id(smid, axt, EBDetId::SMCRYSTALMODE);
const EcalTrigTowerDetId towid = id.tower();
towerStatus->setValue(towid.rawId(), rd_badTT.getStatus());
++icells;
} else {
// EE data
// TCC number
int tccid = tcc_num;
// TT number
int towerid = tt_num;
bool set_the_tower = false;
int towid;
for (size_t itower = 0; itower < my_TTEcalLogicId_EE.size(); itower++) {
if (!set_the_tower) {
if (my_TTEcalLogicId_EE[itower].getID1() == tccid &&
my_TTEcalLogicId_EE[itower].getID2() == towerid) {
towid = my_TTEcalLogicId_EE[itower].getLogicID();
set_the_tower = true;
break;
}
}
}
if (set_the_tower) {
towerStatus->setValue(towid, rd_badTT.getStatus());
} else {
std::cout << " these may be the additional towers TCC/TT " << tccid << "/" << towerid << std::endl;
}
++icells;
}
}
edm::LogInfo("EcalTPGBadTTHandler") << "Finished badTT reading.";
Time_t snc = (Time_t)irun;
m_to_transfer.push_back(std::make_pair((EcalTPGTowerStatus*)towerStatus, snc));
m_i_run_number = irun;
m_i_tag = the_config_tag;
m_i_version = the_config_version;
m_i_badTT = badttId;
writeFile("last_tpg_badTT_settings.txt");
} else {
m_i_run_number = irun;
m_i_tag = the_config_tag;
m_i_version = the_config_version;
writeFile("last_tpg_badTT_settings.txt");
// std::cout<< " even if the tag/version is not the same, the badTT id is the same -> no transfer needed "<< std::endl;
}
}
catch (std::exception& e) {
std::cout << "ERROR: THIS CONFIG DOES NOT EXIST: tag=" << the_config_tag << " version=" << the_config_version
<< std::endl;
std::cout << e.what() << std::endl;
m_i_run_number = irun;
}
} else if (nr == 0) {
m_i_run_number = irun;
// std::cout<< " no tag saved to RUN_TPGCONFIG_DAT by EcalSupervisor -> no transfer needed "<< std::endl;
} else {
m_i_run_number = irun;
m_i_tag = the_config_tag;
m_i_version = the_config_version;
writeFile("last_tpg_badTT_settings.txt");
}
}
}
delete econn;
edm::LogInfo("EcalTPGBadTTHandler") << "Ecal - > end of getNewObjects -----------";
}
void popcon::EcalTPGBadTTHandler::readFromFile(const char* inputFile) {
//-------------------------------------------------------------
m_i_tag = "";
m_i_version = 0;
m_i_run_number = 0;
m_i_badTT = 0;
FILE* inpFile; // input file
inpFile = fopen(inputFile, "r");
if (!inpFile) {
edm::LogError("EcalTPGBadTTHandler") << "*** Can not open file: " << inputFile;
return;
}
char line[256];
std::ostringstream str;
fgets(line, 255, inpFile);
m_i_tag = to_string(line);
str << "gen tag " << m_i_tag << std::endl; // should I use this?
fgets(line, 255, inpFile);
m_i_version = atoi(line);
str << "version= " << m_i_version << std::endl;
fgets(line, 255, inpFile);
m_i_run_number = atoi(line);
str << "run_number= " << m_i_run_number << std::endl;
fgets(line, 255, inpFile);
m_i_badTT = atoi(line);
str << "badTT_config= " << m_i_badTT << std::endl;
fclose(inpFile); // close inp. file
}
void popcon::EcalTPGBadTTHandler::writeFile(const char* inputFile) {
//-------------------------------------------------------------
std::ofstream myfile;
myfile.open(inputFile);
myfile << m_i_tag << std::endl;
myfile << m_i_version << std::endl;
myfile << m_i_run_number << std::endl;
myfile << m_i_badTT << std::endl;
myfile.close();
}
|
/**
 * @author: 黄聪<<EMAIL>>
 * @since: 2021-08-26 14:26:34
 * @lastTime: 2021-08-26 14:32:15
 * @description: Configuration file
 * @copyright: Copyright (c) 2021, Hand
 */

// The Config type is not imported anywhere in this file, so a minimal shape is
// declared here, inferred from usage below; adjust if the project defines it elsewhere.
interface Config {
  waitTime: number; // wait duration, presumably in milliseconds
}

const config: Config = {
  waitTime: 4000,
};

export default config;
|
from __future__ import absolute_import, unicode_literals

from json import JSONEncoder


# Data about an artist which is a subset of mopidy's Artist object which the frontend requires.
class ArtistDTO:
    def __init__(self, mopidy_artist):
        self.name = mopidy_artist.name
        self.uri = mopidy_artist.uri


# Data about an album which is a subset of mopidy's Album object which the frontend requires.
class AlbumDTO:
    def __init__(self, mopidy_album):
        self.name = mopidy_album.name
        self.uri = mopidy_album.uri
        self.artists = []
        for mopidy_artist in mopidy_album.artists:
            self.artists.append(ArtistDTO(mopidy_artist))
        self.images = []


# Data about a track, which is a subset of mopidy's Track object which the frontend requires.
class TrackDTO:
    def __init__(self, mopidy_track):
        self.uri = mopidy_track.uri
        self.name = mopidy_track.name
        self.artists = []
        for mopidy_artist in mopidy_track.artists:
            self.artists.append(ArtistDTO(mopidy_artist))
        self.images = []
        self.length = mopidy_track.length
        self.album = AlbumDTO(mopidy_track.album)
        if "downvote_sounds" in self.uri:
            self.is_downvote_sound = True
            return
        self.is_downvote_sound = False


# Data about a single user of BAMP
class UserDTO:
    def __init__(self, id="", alias=""):
        self.user_id = id
        self.alias = alias


# Data about an item in BAMP's pending queue
class QueueItemDTO:
    def __init__(self, queue_item):
        self.track_uri = queue_item.track_uri
        self.user_id = queue_item.user_id
        self.upvotes = len(queue_item.upvote_ids)
        self.downvotes = len(queue_item.downvote_ids)
        self.instance = queue_item.instance
        self.epoch = queue_item.epoch
        self.is_downvote_sound = queue_item.is_downvote_sound


# Playback state contains both the mopidy playback state and
# BAMP's own playback enabled state.
class PlaybackStateDTO:
    def __init__(self, mopidy_state="invalid", playback_enabled=False):
        self.mopidy_state = mopidy_state
        self.playback_enabled = playback_enabled
        self.track_length_seconds = 0.0
        self.progress_seconds = 0.0
        self.progress_percent = 0.0


# Data about a track in the history
class HistoryItemDTO:
    def __init__(self, history_item):
        self.track_uri = history_item.track_uri
        self.user_id = history_item.user_id
        self.upvotes = history_item.upvotes
        self.downvotes = history_item.downvotes
        self.was_voted_off = history_item.was_voted_off
        self.epoch = history_item.epoch


# Data about a requested config value
class ConfigValueDTO:
    def __init__(self, name, value):
        self.name = name
        self.value = value


# List of actions available per track
class TrackActions:
    QUEUE = 'queue'        # User can queue the track
    REMOVE = 'remove'      # User can remove the track from the queue
    UPVOTE = 'upvote'      # User can upvote the track
    DOWNVOTE = 'downvote'  # User can downvote the track


# List of reasons for allowing/disallowing actions per track
class TrackActionReasons:
    ON_QUEUE = 'on_queue'                # Track is already on the queue so cannot be queued again!
    OWNER = 'owner'                      # User is the owner, cannot vote, but can remove
    TOO_SOON = 'too_soon'                # It's too soon to be able to queue
    VOTED_OFF_QUEUE = 'voted_off_queue'  # It's too soon to be able to queue as it was voted off the queue
    VOTED_UP = 'voted_up'                # User already voted up
    VOTED_DOWN = 'voted_down'            # User already voted down
    NOT_LOGGED_IN = 'not_logged_in'      # User is not logged in!


# List of available actions, and the list of reasons actions are available/not available
class AvailableTrackActionsDTO:
    def __init__(self, track_uri, actions, reasons):
        self.track_uri = track_uri
        self.actions = actions
        self.reasons = reasons


# JSON encoder for our custom DTO types which returns all fields of the object instance!
class DTOEncoder(JSONEncoder):
    def default(self, z):
        if isinstance(z, TrackDTO):
            return z.__dict__
        if isinstance(z, AlbumDTO):
            return z.__dict__
        if isinstance(z, ArtistDTO):
            return z.__dict__
        if isinstance(z, UserDTO):
            return z.__dict__
        if isinstance(z, QueueItemDTO):
            return z.__dict__
        if isinstance(z, PlaybackStateDTO):
            return z.__dict__
        if isinstance(z, HistoryItemDTO):
            return z.__dict__
        if isinstance(z, AvailableTrackActionsDTO):
            return z.__dict__
        if isinstance(z, ConfigValueDTO):
            return z.__dict__
        else:
            return JSONEncoder.default(self, z)
|
Maternal Death Due to Stroke Associated With Pregnancy-Induced Hypertension Background: The aim of this study was to clarify the clinical features of maternal death due to stroke associated with pregnancy-induced hypertension (PIH) in Japan. Methods and Results: Reported maternal deaths occurring between 2010 and 2012 throughout Japan were analyzed by the Maternal Death Exploratory Committee. Among a total of 154 reports of maternal death, those due to stroke with (n=12) or without (n=13) PIH were compared. Cerebral stroke occurred more frequently in the third tri-mester and during the second stage of labor in deaths with PIH, whereas it occurred at any time point in deaths not involving PIH. Although 83% of patients with PIH who died had experienced initial symptoms in a hospital, more than half of them required maternal transport due to lack of medical resources. Among the patients without PIH, some vascular abnormalities were identified, but no evidence was found among the patients with PIH. In addition, 58% of PIH cases resulting in stroke were complicated by hemolysis, elevated liver enzymes and low platelet count (HELLP) syndrome. Conclusions: Appropriate management of PIH during pregnancy and labor, including anti-hypertensive therapy and early maternal transport to tertiary hospital, may reduce the maternal death rate. HASEGAWA J et al. sion in Pregnancy for Japanese obstetric care providers. 10 PIH was defined as hypertension (blood pressure ≥140/90 mmHg) with or without proteinuria (≥300 mg/24 h) emerging after 20 weeks of gestation and resolving up to 12 weeks after delivery. Furthermore, it is recommended in the guidelines proposed by the Japan Society of Obstetrics and Gynecology that hypotensive drugs, including -methyldopa (250-2,000 mg/day), hydralazine (30-200 mg/day), nifedipine (20-40 mg/day) or labetalol (150-450 mg/day), should be administered, if systolic blood pressure is ≥160 mmHg or if the diastolic blood pressure is ≥110 mmHg. When a sudden elevation of blood pressure occurs during labor (≥160/110 mmHg), the use of hydralazine or nicardipine should also be considered. 11 In Japan, pregnant women usually undergo regular prenatal checkups, which include blood pressure measurement and a urine test every 2 weeks after 26 weeks' gestation and every week after 36 weeks. Thus, patients are evaluated for PIH at least every 2 weeks. Therefore, in the present study, "patients without PIH" were defined as those in whom PIH had not appeared by the final examination in a hospital or in the recent prenatal checkups. The diagnosis and location in the brain of intracerebral hemorrhage (ICH), subarachnoid hemorrhage and ischemic stroke associated with PIH were compared with that without PIH collected by the JAOG and analyzed by the Maternal Death Exploratory Committee. When maternal death occurs in Japan, a detailed report is submitted to JAOG and the individual data are analyzed by the Maternal Death Exploratory Committee (Chairman: T. Ikeda). This committee consists of 15 obstetricians, 4 anesthesiologists, 2 pathologists, an emergency physician and various specialists who attend review sessions each month to make annual recommendations to reduce the maternal mortality rate. The present study was performed as part of a series analyzing maternal deaths in Japan by this committee. 9 In cases of maternal death in which the mother died during pregnancy or within 1 year after delivery, report forms are submitted to the registration system. 
The report form contains 22 pages of approximately 100 questions to elicit detailed information regarding the clinical history of each death and the characteristics of the facility and personnel that participated in the patient's care (Supplementary File 1). All anonymized reports are analyzed for factors associated with maternal mortality and the circumstances of death. The definition and classification of PIH followed the guidelines published by the Japan Society for the Study of Hyperten- AMD, -methyldopa; bl., bilateral; BMI, body mass index; BP, blood pressure; CS, cesarean section; G, gravida; GA, gestational age (GA at delivery used in cases of puerperium onset); HELLP, hemolysis, elevated liver enzymes and low platelet count; ICH, intracerebral hemorrhage; JNS, Japan Neurosurgical Society; L, left; NR, not reported; P, parity; PIH, pregnancy-induced hypertension; R, right. ( Table 1 continued the next page.) Maternal Death Due to Stroke JAOG) were analyzed by the Maternal Death Exploratory Committee between 2010 and 2012. The maternal death rate (per 100,000 births) was 4.8 in 3,236,452 births after 12 weeks of pregnancy in Japan between 2010 and 2012. 7 Of these, 17 met the criteria for PIH at the onset of initial symptoms (11% of all maternal deaths). The characteristics of the patients with maternal death associated with PIH are given in Table 1. The final diagnosis of the direct cause of maternal death was cerebral stroke in 12 cases (71%) of maternal death associated with PIH. Of the remaining 5 maternal deaths associated with PIH, direct cause of death was pulmonary edema in 1 case, cardiac myopathy in 1 case, amniotic fluid embolism in 1 case, and not clearly explained due to the presence of multifactorial factors in 2 cases. The clinical characteristics of the maternal deaths due to stroke associated with PIH were compared with those of the 13 cases without PIH collected by the JAOG and analyzed by the Maternal Death Exploratory Committee. The characteristics of the maternal deaths due to stroke without PIH are listed in Table 2. The clinical features of the maternal deaths due to stroke vs. the presence of PIH are listed in Table 3. The maternal characteristics did not differ between the patients with and without PIH. The median gestational age at the onset of ICH was 38 weeks (range, 33-41 weeks) in the patients with PIH, whereas stroke occurred at any time point, ranging from 9 to 39 weeks' were based on the interpretation of imaging by a radiologist and/or neurosurgeon using computed tomography (CT) and/ or magnetic resonance imaging (MRI), and/or based on the findings during surgery or autopsy. Statistical significance was defined as P<0.05. The data were entered into SPSS (Windows version 20.0 J; SPSS, Chicago, IL, USA). Continuous variables are reported as the median and range according to Mann-Whitney U-test. Categorical variables are reported as frequencies and were compared using Fisher's exact test. Ethics This study was approved by the ethics board of National Cerebral and Cardiovascular Center, Osaka, Japan and the JAOG. This investigation was conducted according to the principles expressed in the Declaration of Helsinki. Informed consent was not obtained from patients and their family, because this study was based on analysis of reported forms from institution, and patient records/information was anonymized and de-identified prior to analysis. 
Results A total of 154 reports of maternal death (reports sent from 151 institutions in a total of 2,683 institutions that provide maternity services across Japan, identified from a hospital list of the JAOG) were analyzed by the Maternal Death Exploratory Committee between 2010 and 2012. Stroke associated with PIH occurred more frequently in the third trimester, especially during the pushing stage of labor, and less frequently after delivery in the patients with PIH, in comparison with maternal deaths due to stroke without PIH. It is thought that pre-existing cerebral vascular disease plays a significant role in the onset of pregnancy-associated hemorrhagic stroke.16 In the present case series, stroke occurred at any time period, ranging from 9 to 39 weeks' gestation in the patients without PIH. It has also been reported that hemorrhagic stroke without pre-existing cerebral vascular disease occurred significantly later than that associated with such disorders (mean, 33.7±8.7 weeks vs. 25.3±9.6 weeks, respectively).16 In patients without PIH, pre-existing brain vascular abnormalities with possible associations with stroke, such as moyamoya disease, cerebral aneurysm and arteriovenous malformation, were reported at imaging facilities in the present study. ICH is a subtype of stroke that occurs within the brain tissue itself and is a serious medical emergency, because it can increase intracranial pressure.17 Pregnancy-related ICH has an estimated mortality rate of 9-38%.13,14,17-19 Because PIH is a disease involving damaged endothelial cells, cerebral ischemia due to spasms and the leakage of cerebral blood vessels may cause cerebral edema and hemorrhage. The higher rate of ICH observed in patients with PIH may be explained by these changes induced by PIH. More than half of all cases of PIH in our series involved ICH complicated by HELLP syndrome. A previous report showed that 45% of maternal deaths due to HELLP syndrome are associated with cerebral hemorrhage.20 In addition to hypertension and endothelial dysfunction of the cerebral vasculature, decreased platelet count and coagulation factors may contribute to the high mortality of ICH associated with HELLP syndrome.17 Cerebral stroke occurred more frequently during the second stage of labor (33%) among the patients with PIH, whereas this symptom was more likely to occur after delivery (40%) among the patients without PIH. Stroke occurred outside of the hospital in 38% of patients without PIH, and in 17% of those with PIH. Whereas 83% of patients with PIH who died had experienced initial symptoms in a general or private hospital, more than half of these patients required maternal transport due to a lack of medical resources, such as specialists (brain surgeons and/or emergency physicians), medical staff, stored blood, imaging modalities, such as CT and MRI, and/or intensive care units. The cause of cerebral stroke was ICH in all patients with PIH, whereas, in the patients without PIH, ICH was noted in 8 (62%), with subarachnoid hemorrhage being diagnosed in 4 of the 13 patients (31%) and hemorrhagic infarction in 1. Among the patients without PIH, moyamoya disease, cerebral aneurysm, arteriovenous malformation and protein S deficiency were considered to be causes of cerebral stroke and maternal death.
Moreover, there were 2 cases of stroke possibly induced by massive bleeding complicated by DIC during delivery. Among patients with PIH, however, no evidence of vascular abnormalities was found except for PIH itself. In addition, 7 of the 12 PIH patients who had ICH (58%) also had hemolysis, elevated liver enzymes and low platelet count (HELLP) syndrome. Discussion In this review of maternal deaths in Japan between 2010 and 2012, 11% of all maternal deaths were associated with PIH. More than 70% of the causes of maternal death associated with PIH were due to stroke (ICH), and 12 of 25 deaths (48%) due to stroke were associated with PIH, similar to the previously reported rate of eclampsia and pre-eclampsia in patients with ICH, ranging from 14% to 50%.12-15 The physiological changes that occur during pregnancy have a significant impact on the vasculature in cases of arteriovenous malformation, and rupture during pregnancy is by no means coincidental.16 The significance of pregnancy-associated ischemic and hemorrhagic stroke has been emphasized in patients with moyamoya disease.21 It should also be noted that not only hypertension during labor, but also pregnancy itself induced stroke in patients with pre-existing vascular abnormalities in the brain.22 After a review of these case series, the Maternal Death Exploratory Committee considered most of the cases of stroke without PIH to be unpreventable as a result of sudden unforeseen onset without control outside of the hospital. In contrast, given that most of the cases of ICH occurred around delivery in women with PIH that was not treated using hypotensive drugs before the onset of initial symptoms, such as headache and consciousness disorder, there may be a possibility to avoid maternal death by allowing for the appropriate control of hypertension, termination of the pregnancy or improvement of the medical resources (transfer to a different hospital). Clark et al reported the results of a retrospective evaluation of maternal deaths from 2007 to 2012 after the introduction of disease-specific protocols that included blood pressure management for severe intrapartum or postpartum hypertension based on 2000-2006 data, and noted that there was a significant decline in the rate of deaths from pre-eclampsia.23 We feel that better recommendations for blood pressure control during pregnancy are needed in Japan. There are limitations, however, associated with the prevention of maternal death, because it remains unclear whether the ICH in women with PIH was associated with pre-existing brain vascular abnormalities. It was previously reported that the detection rate of hemorrhage in patients with cerebral vascular disease is 71.7% during pregnancy, 23.1% at delivery and 33.5% in the postnatal period.22 In addition, even if diagnostic imaging of women with pre-existing occult brain vascular diseases was performed during pregnancy, it is unclear whether these diseases can be detected. It also might be difficult to evaluate the details of the blood pressure control in the present case series, because this study was based on analyses of report forms sent from each institution. Conclusions ICH was the final causative disease in more than two-thirds of maternal deaths associated with PIH. Although many women were hospitalized due to delivery or the management of PIH, they could not be appropriately treated for PIH at their local hospital, and thus initially experienced serious symptoms.
As a result, such women had to be transported to tertiary medical centers due to a lack of medical resources and such delays in receiving proper treatment sometimes resulted in maternal death. Although most maternal deaths are not preventable after the onset of ICH, an increased recognition of PIH, which is directly associated with maternal death, is needed. |
from django.contrib import admin
from feedback.models import GeneralFeedback, SignFeedback, MissingSignFeedback, InterpreterFeedback
class GeneralFeedbackAdmin(admin.ModelAdmin):
list_display = ['user', 'date', 'comment']
list_filter = ['user']
admin.site.register(GeneralFeedback, GeneralFeedbackAdmin)
class SignFeedbackAdmin(admin.ModelAdmin):
list_display = ['user', 'date', 'name']
list_filter = ['user']
admin.site.register(SignFeedback, SignFeedbackAdmin)
class MissingSignFeedbackAdmin(admin.ModelAdmin):
list_display = ['user', 'date']
list_filter = ['user']
admin.site.register(MissingSignFeedback, MissingSignFeedbackAdmin)
class InterpreterFeedbackAdmin(admin.ModelAdmin):
list_display = ['user', 'date']
list_filter = ['user']
admin.site.register(InterpreterFeedback, InterpreterFeedbackAdmin)
|
from datetime import datetime
from app.api.models.adapter_enums import AdapterEnums
from pydantic import BaseModel
class Adapter(BaseModel):
id: int
user_id: int
created_at: datetime
updated_at: datetime
adapter_name: str
cron_expression: str
|
package org.clever.security.session.demo.config;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Configuration;
/**
* Author: lzw<br/>
* Created: 2018-10-01 16:30 <br/>
*/
@Configuration
@Slf4j
public class BeanConfiguration {
}
|
from django.contrib import admin
from .models import Job, JobType, ProfileJob, JobHistory
@admin.register(JobType)
class JobTypeAdmin(admin.ModelAdmin):
pass
@admin.register(Job)
class JobAdmin(admin.ModelAdmin):
pass
@admin.register(ProfileJob)
class ProfileJobAdmin(admin.ModelAdmin):
pass
@admin.register(JobHistory)
class JobHistoryAdmin(admin.ModelAdmin):
pass
|
package com.zhou.ch2.event;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;
/**
*
* @Description: Event publisher class
*
* @author zhk
* @version 2.0 2018年8月26日
*
*/
@Component
public class DemoPublisher {
// Inject the ApplicationContext, which is used to publish events
@Autowired
ApplicationContext applicationContext;
public void publish(String msg) {
applicationContext.publishEvent(new DemoEvent(this, msg));
}
}
|
Q:
Reverse engineering temperature values
I have a black box, where an input exists for a temperature sensor, which has a 10k NTC connected to it, and I see the following values of the 10k NTC from a log file the black box generates, and a reference value I measured with an electronic sensor:
NTC | deg C
886 | open circuit
860 | -2.6 deg C (ice cube water)
820 | 18
785 | 30
720 | 40
700 | 45
I cannot make sense of these values. Is there a way to find the formula to convert the NTC values to deg C? It seems non-linear (if not nonsensical).
A:
Yes, a standard NTC thermistor is non-linear.
There are a number of different models used to pull temp info out of a thermistor, depending on how much accuracy you need and how many parameters you want to use.
The simplest is the simple "beta" equation,
$$ \frac{1}{T} = \frac{1}{T_0}+ \frac{1}{\beta}\ln \left ( \frac{R}{R_0}\right) $$
so
$$R = R_0 \exp \left ( -\beta \left ( \frac{1}{T_0} - \frac{1}{T}\right) \right)$$
Thus, you pick a reference \$ T_0\$ at some temperature near your region of interest, and measure the \$R_0\$ associated with it, and measure some other temperature's R, and calculate \$\beta\$. Then you use that \$\beta\$ for other calculations. Different temperature points will yield you different calculations of \$\beta\$, as the formula is not exact. You'll need to figure out your tolerance for errors.
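If it helps, here is a minimal Python sketch of that two-point beta calculation. It assumes you already have resistance values in ohms; the raw "NTC" numbers in your log would first need to be converted (e.g. from ADC counts through whatever divider the black box uses, which we don't know), and the calibration pairs below are hypothetical examples rather than values from your table.

# Two-point "beta" model sketch. R0/T0 and R1/T1 are hypothetical calibration
# points measured on the actual thermistor; they are not taken from the log.
import math

T0 = 273.15 + 0.0      # reference temperature in kelvin (ice water)
R0 = 27_000.0          # hypothetical resistance at T0, in ohms
T1 = 273.15 + 45.0     # second calibration temperature in kelvin
R1 = 4_400.0           # hypothetical resistance at T1, in ohms

# From 1/T = 1/T0 + (1/beta) * ln(R/R0), solved for beta using the two points
beta = math.log(R1 / R0) / (1.0 / T1 - 1.0 / T0)

def temperature_c(resistance_ohms):
    # Convert a thermistor resistance (ohms) to degrees Celsius with the beta model
    inv_t = 1.0 / T0 + math.log(resistance_ohms / R0) / beta
    return 1.0 / inv_t - 273.15

print("beta = %.0f K" % beta)
print("10 kOhm reads as %.1f deg C" % temperature_c(10_000.0))

Different pairs of calibration points will give slightly different beta values; pick points that bracket the temperature range you actually care about.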
There are more accurate models, like the Steinhart-Hart equations (of which the beta formulation is a simplified case)
I don't know what "open circuit" means in your table, but I don't think it necessarily belongs in your calculations. I also can't tell how you're measuring the resistance (maybe as a voltage from a divider sampled on an ADC??), and this will have an impact.
Be careful not to pass too much current through a thermistor, as they can self heat. |
Harold Camping Doomsday: Why Do People Listen to Him?
Why do people believe Harold Camping’s doomsday prophecies? Glenn Shuck, Assistant Professor of Religion at Williams College in Williamstown, replies. |
. UNLABELLED Optimal modalities of surveillance of colorectal cancers (CRC) resected for cure have not been determined so far and the overall improvement of 5-year survival related to surveillance has not been demonstrated. AIM OF THE STUDY To retrospectively evaluate modalities, results and costs of follow-up of patients during the 5 years following the resection for cure of CRC. METHODS We studied medical and economical data from records of 256 patients registered in the cancer registry of the Herault area who underwent a potentially curative resection of CRC in 1992. We analyzed comparatively modalities of follow-up in patients who were followed according to recommendations from the 1998 French consensus conference (standard follow-up) and in those who had a simplified follow-up. We evaluated cumulative costs of follow-up. RESULTS Nine patients died in the postoperative period. Recurrence rate was 27% (69 patients). Sixty-nine patients had a standard follow-up (30% of the 231 classified patients) and 162 patients (70%) had a simplified follow-up. The specific survival rate (taking into account only death related to CRC) 5 years after resection for cure was 75%. The 5-year specific survival rate after diagnosis of recurrence was 12% in the patients with recurrent disease within the 5 years after initial therapy. The 5-year survival rate after standard and simplified follow-up were 85% and 79%, respectively (P=0.25). Total cost of follow-up of the 256 patients was 1 085 507 French francs (FF). Mean follow-up cost per patient was 5 527 FF. Cost of the examinations not recommended by the consensus conference represented 30% of the expenses. Individual total cost of the follow-up of patients alive 5 years after the diagnosis of the recurrence was 120 356 FF. CONCLUSION In Herault area, clinicians carried out in 70% of the patients a simplified follow-up and in 30% of the cases a reinforced follow-up in comparison with French recommendations. Survival rates were not significantly different between the 2 groups. |
Moderate drinking: alternative treatment goal. There was a misstatement in the abstract of our article "Excipients and additives: hidden hazards in drug products and in product substitution" (Can Med Assoc J 1984; 131: 1449-1452). In the sentence beginning "For example, the United States has legislation requiring complete labelling of all food, drugs and cosmetics that incorporate more than one ingredient" the word "some" should be substituted for the word "all". US regulations require that the presence of tartrazine and a few other excipients and additives be indicated on the label, but they do not require that all possible excipients and additives be listed there. The Health Protection Branch of the Department of National Health and Welfare reviewed its policy on tartrazine in Information Letter #634 (Sept. 10, 1982). It concluded that "colouring agents in drugs should not present a hazard to health and that their concentration should be kept to the minimum for purposes of product identification". It also stated that "manufacturers are asked to make a declaration of the presence of tartrazine in their products to the C.Ph.A.", which would then be published in the "Compendium of Pharmaceuticals and Specialties". For the manufacturer of a new drug or proprietary medicine to obtain an "identification number" Canadian drug regulations require that all submissions contain a quantitative list of ingredients, including colouring agents. Although such information is regarded as confidential, the Health Protection Branch has been instrumental in bringing the consumer-physician and the manufacturer together in difficult cases.
export const LOCAL_WORKFLOWS_PATH = "./.github/workflows";
export const UTF8 = "utf-8";
export const DOCS_URL = "https://github.com/vemel/github_actions_js";
export const EXTENSIONS = [".yml", ".yaml"];
|
/*******************************************************************************
* Copyright 2016 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy
* of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
******************************************************************************/
package com.fab.gui.xmlView;
import com.foc.admin.FocUser;
import com.foc.desc.FocConstructor;
import com.foc.desc.FocObject;
public class UserMenuSelectionHistory extends FocObject{
public UserMenuSelectionHistory(FocConstructor constr) {
super(constr);
newFocProperties();
}
public void setUser(FocUser user){
setPropertyObject(UserMenuSelectionHistoryDesc.FLD_USER, user);
}
public FocUser getUser(){
return (FocUser) getPropertyObject(UserMenuSelectionHistoryDesc.FLD_USER);
}
public void setMenuCode(String menuCode){
setPropertyString(UserMenuSelectionHistoryDesc.FLD_MENU_CODE, menuCode);
}
public String getMenuCode(){
return getPropertyString(UserMenuSelectionHistoryDesc.FLD_MENU_CODE);
}
public void setMenuOrder(int menuOrder){
setPropertyInteger(UserMenuSelectionHistoryDesc.FLD_MENU_ORDER, menuOrder);
}
public int getMenuOrder(){
return getPropertyInteger(UserMenuSelectionHistoryDesc.FLD_MENU_ORDER);
}
}
|
Director Derek Cianfrance, Michelle Williams, and producer Alex Orlovsky. Photo via PatrickMcMullan.com.
It didn’t nab the Golden Globes nod for Best Picture this morning, but the Weinstein Company’s indie darling Blue Valentine received nominations for both of its stars: Michelle Williams and Ryan Gosling, whose gut-wrenching performances made the drama a favorite at Sundance. And hours before the H.F.P.A. announced the pair’s names in the West, the Cinema Society and Piaget fêted the movie in the East. After taking in a screening at the Tribeca Grand Hotel, a posh crowd toasted DeLeon tequila cocktails to Blue Valentine at the Soho Grand’s Club Room Monday night; the group included writer/director Derek Cianfrance, Williams, Gabourey Sidibe, Patricia Clarkson, Paul Haggis, Joan Rivers, and others—including Harvey Weinstein himself. After his recent, rigorous lobbying of the M.P.A.A. over *Blue Valentine*’s rating, we’re sure he earned a celebratory quaff. Cheers to Williams and Gosling!
def test1(self) -> None:
config1 = _get_test_config1()
config2 = _get_test_config2()
act = cconfig.convert_to_dataframe([config1, config2])
act = hunitest.convert_df_to_string(act, index=True)
exp = pd.DataFrame(
{
"build_model.activation": ["sigmoid", "sigmoid"],
"build_targets.target_asset": ["Crude Oil", "Gold"],
"build_targets.preprocessing.preprocessor": [
"tokenizer",
"tokenizer",
],
"meta.experiment_result_dir": ["results.pkl", "results.pkl"],
}
)
exp = hunitest.convert_df_to_string(exp, index=True)
self.assert_equal(str(act), str(exp)) |
// NewCommitmentHistoryMessage is a convenience function for creating a new
// CommitmentHistoryMessage from an array of commitment objects
func NewCommitmentHistoryMessage(c []*Commitment) *CommitmentHistoryMessage {
msg := new(CommitmentHistoryMessage)
msg.Commitments = c
return msg
} |
#include "u.h"
#include "../port/lib.h"
#include "mem.h"
#include "dat.h"
#include "fns.h"
#include "io.h"
#include "../port/error.h"
#include "../port/netif.h"
/*
* currently no DMA or flow control (hardware or software)
*/
enum
{
Stagesize= 1024,
Dmabufsize=Stagesize/2,
Nuart=7, /* max per machine */
CTLS= 023,
CTLQ= 021,
};
typedef struct Uart Uart;
struct Uart
{
QLock;
int opens;
int enabled;
int frame; /* framing errors */
int overrun; /* rcvr overruns */
int soverrun; /* software overruns */
int perror; /* parity error */
int bps; /* baud rate */
uchar bits;
char parity;
int inters; /* total interrupt count */
int rinters; /* interrupts due to read */
int winters; /* interrupts due to write */
int rcount; /* total read count */
int wcount; /* total output count */
int xonoff; /* software flow control on */
int blocked; /* output blocked */
/* buffers */
int (*putc)(Queue*, int);
Queue *iq;
Queue *oq;
UartReg *reg;
/* staging areas to avoid some of the per character costs */
uchar *ip;
uchar *ie;
uchar *op;
uchar *oe;
/* put large buffers last to aid register-offset optimizations: */
char name[NAMELEN];
uchar istage[Stagesize];
uchar ostage[Stagesize];
};
static Uart *uart[Nuart];
static int nuart;
static void
uartset(Uart *p)
{
UartReg *reg = p->reg;
ulong ocr3;
ulong brdiv;
int n;
brdiv = TIMER_HZ/16/p->bps - 1;
ocr3 = reg->utcr3;
reg->utcr3 = ocr3&~(UTCR3_RXE|UTCR3_TXE);
reg->utcr1 = brdiv >> 8;
reg->utcr2 = brdiv & 0xff;
/* set PE and OES appropriately for o/e/n: */
reg->utcr0 = ((p->parity&3)^UTCR0_OES)|(p->bits&UTCR0_DSS);
reg->utcr3 = ocr3;
/* set buffer length according to speed, to allow
* at most a 200ms delay before dumping the staging buffer
* into the input queue
*/
n = p->bps/(10*1000/200);
p->ie = &p->istage[n < Stagesize ? n : Stagesize];
}
/*
* send break
*/
static void
uartbreak(Uart *p, int ms)
{
UartReg *reg = p->reg;
if(ms == 0)
ms = 200;
reg->utcr3 |= UTCR3_BRK;
tsleep(&up->sleep, return0, 0, ms);
reg->utcr3 &= ~UTCR3_BRK;
}
/*
* turn on a port
*/
static void
uartenable(Uart *p)
{
UartReg *reg = p->reg;
if(p->enabled)
return;
uartset(p);
reg->utsr0 = 0xff; // clear all sticky status bits
// enable receive, transmit, and receive interrupt:
reg->utcr3 = UTCR3_RXE|UTCR3_TXE|UTCR3_RIM;
p->blocked = 0;
p->enabled = 1;
}
/*
* turn off a port
*/
static void
uartdisable(Uart *p)
{
p->reg->utcr3 = 0; // disable TX, RX, and ints
p->blocked = 0;
p->xonoff = 0;
p->enabled = 0;
}
/*
* put some bytes into the local queue to avoid calling
* qconsume for every character
*/
static int
stageoutput(Uart *p)
{
int n;
Queue *q = p->oq;
if(q == nil)
return 0;
n = qconsume(q, p->ostage, Stagesize);
if(n <= 0)
return 0;
p->op = p->ostage;
p->oe = p->ostage + n;
return n;
}
static void
uartxmit(Uart *p)
{
UartReg *reg = p->reg;
ulong e = 0;
if(!p->blocked) {
while(p->op < p->oe || stageoutput(p)) {
if(reg->utsr1 & UTSR1_TNF) {
reg->utdr = *(p->op++);
p->wcount++;
} else {
e = UTCR3_TIM;
break;
}
}
}
reg->utcr3 = (reg->utcr3&~UTCR3_TIM)|e;
}
static void
uartrecvq(Uart *p)
{
uchar *cp = p->istage;
int n = p->ip - cp;
if(n == 0)
return;
if(p->putc)
while(n-- > 0)
p->putc(p->iq, *cp++);
else if(p->iq)
if(qproduce(p->iq, p->istage, n) < n){
/* if xonoff, should send XOFF when qwindow(p->iq) < threshold */
p->soverrun++;
//print("qproduce flow control");
}
p->ip = p->istage;
}
static void
uartrecv(Uart *p)
{
UartReg *reg = p->reg;
ulong n;
while(reg->utsr1 & UTSR1_RNE) {
int c;
n = reg->utsr1;
c = reg->utdr;
if(n & (UTSR1_PRE|UTSR1_FRE|UTSR1_ROR)) {
if(n & UTSR1_PRE)
p->perror++;
if(n & UTSR1_FRE)
p->frame++;
if(n & UTSR1_ROR)
p->overrun++;
continue;
}
if(p->xonoff){
if(c == CTLS){
p->blocked = 1;
}else if (c == CTLQ){
p->blocked = 0;
}
}
*p->ip++ = c;
if(p->ip >= p->ie)
uartrecvq(p);
p->rcount++;
}
if(reg->utsr0 & UTSR0_RID) {
reg->utsr0 = UTSR0_RID;
uartrecvq(p);
}
}
static void
uartclock(void)
{
Uart *p;
int i;
for(i=0; i<nuart; i++){
p = uart[i];
if(p != nil)
uartrecvq(p);
}
}
static void
uartkick(void *a)
{
Uart *p = a;
int x = splhi();
uartxmit(p);
splx(x);
}
/*
* UART Interrupt Handler
*/
static void
uartintr(Ureg*, void* arg)
{
Uart *p = arg;
UartReg *reg = p->reg;
ulong m = reg->utsr0;
int dokick;
dokick = p->blocked;
p->inters++;
if(m & (UTSR0_RFS|UTSR0_RID|UTSR0_EIF)) {
p->rinters++;
uartrecv(p);
}
if(p->blocked)
dokick = 0;
if((m & UTSR0_TFS) && (reg->utcr3&UTCR3_TIM || dokick)) {
p->winters++;
uartxmit(p);
}
if(m & (UTSR0_RBB|UTSR0_REB)) {
print("<BREAK>");
/* reg->utsr0 = UTSR0_RBB|UTSR0_REB; */
reg->utsr0 = m & (UTSR0_RBB|UTSR0_REB);
/* hangup? this could adversely affect some things,
like the IR keyboard... what is appropriate to do here?
qhangup(p->iq, 0);
*/
}
}
static void
uartsetup(ulong port, char *name)
{
Uart *p;
if(nuart >= Nuart)
return;
p = xalloc(sizeof(Uart));
uart[nuart++] = p;
strcpy(p->name, name);
p->reg = UARTREG(port);
p->bps = 9600;
p->bits = 8;
p->parity = 'n';
p->iq = qopen(4*1024, 0, 0 , p);
p->oq = qopen(4*1024, 0, uartkick, p);
p->ip = p->istage;
p->ie = &p->istage[Stagesize];
p->op = p->ostage;
p->oe = p->ostage;
intrenable(UARTbit(port), uartintr, p, BusCPU);
}
static void
uartinstall(void)
{
static int already;
if(already)
return;
already = 1;
uartsetup(3, "eia0");
	/* uartsetup(2, "eia1"); */ /* sometimes causes a uart <BREAK> interrupt which halts the kernel */
addclock0link(uartclock);
}
/*
* called by main() to configure a duart port as a console or a mouse
*/
void
uartspecial(int port, int bps, char parity, Queue **in, Queue **out, int (*putc)(Queue*, int))
{
Uart *p;
uartinstall();
if(port >= nuart)
return;
p = uart[port];
if(bps)
p->bps = bps;
if(parity)
p->parity = parity;
uartenable(p);
p->putc = putc;
if(in)
*in = p->iq;
if(out)
*out = p->oq;
p->opens++;
}
Dirtab *uartdir;
int ndir;
static void
setlength(int n)
{
Uart *p;
int i = n;
if(n < 0) {
i = 0;
n = nuart;
}
for(; i < n; i++) {
p = uart[i];
if(p && p->opens && p->iq)
uartdir[3*i].length = qlen(p->iq);
}
}
/*
* all uarts must be uartsetup() by this point or inside of uartinstall()
*/
static void
uartreset(void)
{
int i;
Dirtab *dp;
uartinstall();
ndir = 3*nuart;
uartdir = xalloc(ndir * sizeof(Dirtab));
dp = uartdir;
for(i = 0; i < nuart; i++){
/* 3 directory entries per port */
strcpy(dp->name, uart[i]->name);
dp->qid.path = NETQID(i, Ndataqid);
dp->perm = 0660;
dp++;
sprint(dp->name, "%sctl", uart[i]->name);
dp->qid.path = NETQID(i, Nctlqid);
dp->perm = 0660;
dp++;
sprint(dp->name, "%sstat", uart[i]->name);
dp->qid.path = NETQID(i, Nstatqid);
dp->perm = 0444;
dp++;
}
}
static Chan*
uartattach(char *spec)
{
return devattach('t', spec);
}
static int
uartwalk(Chan *c, char *name)
{
return devwalk(c, name, uartdir, ndir, devgen);
}
static void
uartstat(Chan *c, char *dp)
{
if(NETTYPE(c->qid.path) == Ndataqid)
setlength(NETID(c->qid.path));
devstat(c, dp, uartdir, ndir, devgen);
}
static Chan*
uartopen(Chan *c, int omode)
{
Uart *p;
c = devopen(c, omode, uartdir, ndir, devgen);
switch(NETTYPE(c->qid.path)){
case Nctlqid:
case Ndataqid:
p = uart[NETID(c->qid.path)];
qlock(p);
if(p->opens++ == 0){
uartenable(p);
qreopen(p->iq);
qreopen(p->oq);
}
qunlock(p);
break;
}
return c;
}
static void
uartclose(Chan *c)
{
Uart *p;
if(c->qid.path & CHDIR)
return;
if((c->flag & COPEN) == 0)
return;
switch(NETTYPE(c->qid.path)){
case Ndataqid:
case Nctlqid:
p = uart[NETID(c->qid.path)];
qlock(p);
if(--(p->opens) == 0){
uartdisable(p);
qclose(p->iq);
qclose(p->oq);
p->ip = p->istage;
}
qunlock(p);
break;
}
}
static long
uartstatus(Chan *c, Uart *p, void *buf, long n, long offset)
{
char str[256];
USED(c);
str[0] = 0;
snprint(str, sizeof(str),
"b%d l%d p%c s%d x%d\n"
"opens %d ferr %d oerr %d perr %d baud %d parity %c"
" intr %d rintr %d wintr %d"
" rcount %d wcount %d",
p->bps, p->bits, p->parity, (p->reg->utcr0&UTCR0_SBS)?2:1, p->xonoff,
p->opens, p->frame, p->overrun+p->soverrun, p->perror, p->bps, p->parity,
p->inters, p->rinters, p->winters,
p->rcount, p->wcount);
strcat(str, "\n");
return readstr(offset, buf, n, str);
}
static long
uartread(Chan *c, void *buf, long n, ulong offset)
{
Uart *p;
if(c->qid.path & CHDIR){
setlength(-1);
return devdirread(c, buf, n, uartdir, ndir, devgen);
}
p = uart[NETID(c->qid.path)];
switch(NETTYPE(c->qid.path)){
case Ndataqid:
return qread(p->iq, buf, n);
case Nctlqid:
return readnum(offset, buf, n, NETID(c->qid.path), NUMSIZE);
case Nstatqid:
return uartstatus(c, p, buf, n, offset);
}
return 0;
}
static void
uartctl(Uart *p, char *cmd)
{
int i, n;
/* let output drain for a while (up to 4 secs) */
for(i = 0; i < 200 && (qlen(p->oq) || p->reg->utsr1 & UTSR1_TBY); i++)
tsleep(&up->sleep, return0, 0, 20);
if(strncmp(cmd, "break", 5) == 0){
uartbreak(p, 0);
return;
}
n = atoi(cmd+1);
switch(*cmd){
case 'B':
case 'b':
if(n <= 0)
error(Ebadarg);
p->bps = n;
uartset(p);
break;
case 'f':
case 'F':
qflush(p->oq);
break;
case 'H':
case 'h':
qhangup(p->iq, 0);
qhangup(p->oq, 0);
break;
case 'L':
case 'l':
if(n < 7 || n > 8)
error(Ebadarg);
p->bits = n;
uartset(p);
break;
case 'n':
case 'N':
qnoblock(p->oq, n);
break;
case 'P':
case 'p':
p->parity = *(cmd+1);
uartset(p);
break;
case 'K':
case 'k':
uartbreak(p, n);
break;
case 'Q':
case 'q':
qsetlimit(p->iq, n);
qsetlimit(p->oq, n);
break;
case 'X':
case 'x':
p->xonoff = n;
break;
}
}
static long
uartwrite(Chan *c, void *buf, long n, ulong offset)
{
Uart *p;
char cmd[32];
USED(offset);
if(c->qid.path & CHDIR)
error(Eperm);
p = uart[NETID(c->qid.path)];
switch(NETTYPE(c->qid.path)){
case Ndataqid:
return qwrite(p->oq, buf, n);
case Nctlqid:
if(n >= sizeof(cmd))
n = sizeof(cmd)-1;
memmove(cmd, buf, n);
cmd[n] = 0;
uartctl(p, cmd);
return n;
}
	return 0;
}
static void
uartwstat(Chan *c, char *dp)
{
Dir d;
Dirtab *dt;
if(!iseve())
error(Eperm);
if(CHDIR & c->qid.path)
error(Eperm);
if(NETTYPE(c->qid.path) == Nstatqid)
error(Eperm);
dt = &uartdir[3 * NETID(c->qid.path)];
convM2D(dp, &d);
d.mode &= 0666;
dt[0].perm = dt[1].perm = d.mode;
}
Dev uartdevtab = {
't',
"uart",
uartreset,
devinit,
uartattach,
devdetach,
devclone,
uartwalk,
uartstat,
uartopen,
devcreate,
uartclose,
uartread,
devbread,
uartwrite,
devbwrite,
devremove,
uartwstat,
};
|
import { NgModule } from "@angular/core";
import { PopupMenuListComponent } from "./popup-menu-list.component";
import { PopupMenuItemComponent } from "./popup-menu-item.component";
import { CommonModule } from "@angular/common";
@NgModule({
declarations: [
PopupMenuListComponent,
PopupMenuItemComponent
],
imports: [
CommonModule
],
exports: [
PopupMenuListComponent,
PopupMenuItemComponent
],
})
export class PopupMenuModule {
}
|
Footprints of Selection Derived From Temporal Heterozygosity Patterns in a Barley Nested Association Mapping Population Nowadays, genetic diversity more than ever represents a key driver of adaptation to climate challenges like drought, heat, and salinity. Therefore, there is a need to replenish the limited elite gene pools with favorable exotic alleles from the wild progenitors of our crops. Nested association mapping (NAM) populations represent one step toward exotic allele evaluation and enrichment of the elite gene pool. We investigated an adaptive selection strategy in the wild barley NAM population HEB-25 based on temporal genomic data by studying the fate of 214,979 SNP loci initially heterozygous in individual BC1S3 lines after five cycles of selfing and field propagation. We identified several loci exposed to adaptive selection in HEB-25. In total, 48.7% (104,725 SNPs) of initially heterozygous SNP calls in HEB-25 were fixed in BC1S3:8 generation, either toward the wild allele (19.9%) or the cultivated allele (28.8%). Most fixed SNP loci turned out to represent gene loci involved in domestication and flowering time as well as plant height, for example, btr1/btr2, thresh-1, Ppd-H1, and sdw1. Interestingly, also unknown loci were found where the exotic allele was fixed, hinting at potentially useful exotic alleles for plant breeding. INTRODUCTION Around 10,000 years ago, domestication of crops enabled mankind to settle and to commence agriculture. Year after year, early farmers and breeders selected the best performing plants for the next season. Domestication and selection were accompanied by a progressive depletion of genetic diversity, known as bottleneck effect (Tanksley and McCouch, 1997). To cope with future agricultural challenges, there is a need to replenish the limited elite gene pools with favorable exotic alleles from the wild progenitors of our crops (Zamir, 2008;;). However, the identification of beneficial exotic material can be laborious and challenging (). NAM populations can be developed to investigate a multitude of exotic allele effects in an adapted background. They are created by crossing a diverse set of wild progenitors with one recurrent elite cultivar. This way high allele richness and statistical power are combined to evaluate complex traits through genome-wide association studies (GWAS). Subsequent backcross steps with the elite cultivar may serve as a first step to integrate exotic alleles in adapted breeding material and increase the reliability of estimated wild allele effects. The wild barley NAM population HEB-25 is backcrossbased and comprises 1,420 BC 1 S 3 lines resulting from crosses of 25 highly divergent wild barley accessions (Hordeum vulgare ssp. spontaneum and ssp. agriocrithon) with the elite barley cultivar Barke (). In this population recombination rate () and several important agronomic traits like plant development (;a;), yield formation ;a), grain nutrient concentration (;b), as well as tolerance to abiotic (;;) and biotic ((Vatter et al.,, 2018;) stresses were investigated. Although all of these studies proved useful to find genomic regions controlling the investigated traits, in many cases, it remains unclear which effects can truly be designated as beneficial across certain environments. 
For instance, a clear statement about the usefulness of flowering time affecting wild alleles is often not possible due to their environment-dependent phenotypic plasticity (;), complicating the transfer of beneficial alleles from one environment to another. In this context, outsourcing the job of selection to mother nature may help to define what is beneficial in a certain environment. Over the long term, this enables the prevalence of certain ideotypes with a complex of optimally coordinated properties rather than focusing on single traits in classical selection. This principle of natural selection is key of the evolutionary plant breeding concept (Suneson, 1956;Phillips and Wolfe, 2005;). In the present study, we investigated an adaptive selection strategy in HEB-25 by temporal screening of initially heterozygous loci (where 6.25% of the genome is expected to be heterozygous in each HEB line in generation BC 1 S 3 ) after 5 years of selfing and field propagation without conscious selection as a by-product of population conservation. A clear fixation of exotic alleles could hint at potentially useful exotic alleles that were more successful in contributing to the next field generation. Plant Material The wild barley nested association mapping (NAM) population HEB-25 resulted from parallel crosses of 25 highly divergent wild barley accessions (Hordeum vulgare ssp. spontaneum and ssp. agriocrithon, hereafter named Hsp) with the German elite barley cultivar Barke (Hordeum vulgare ssp. vulgare, hereafter named Hv). F1 plants of the initial crosses were backcrossed with Barke and after three subsequent rounds of selfing the population comprised 1,420 BC 1 S 3 plants (). Due to the mating design, each BC 1 S 3 plant is expected to harbor 71.875% homozygous Barke loci, 6.25% heterozygous loci, and 21.875% homozygous wild loci under the assumption of no selection. Population Conservation In 2011, BC 1 S 3 -derived lines were created by growing the complete progeny of each single BC 1 S 3 plant (i.e., BC 1 S 3:4 ) in small plots (double rows of 1.50 m length) in the field (Halle, Germany, 51°2946.47N; 11°5941.81E) and 20 randomly chosen ears of each line, displaying a representative subset of the whole plot, were harvested at maturity. After threshing and manual seed processing, 60 seeds thereof were randomly selected for sowing in 2012 (i.e., BC 1 S 3:5 ). The same process of harvesting, processing, and sowing was repeated with 60 BC 1 S 3:6 seeds in 2013 and 100 BC 1 S 3:7 seeds in 2014. Then, 20 BC 1 S 3:8 seeds (harvested from the BC 1 S 3:7 plants in 2014) were grown and leaf material from 12 randomly chosen seedlings per line (50-100 mg) was harvested to form pooled samples for DNA extraction. This way, initial heterozygosity could be reconstructed through heterogeneity within the 12 plants (Figure 1). DNA Extraction and SNP Genotyping DNA was extracted according to the manufacturer's protocol, using the BioSprint 96 DNA Plant Kit and a BioSprint96 work station (Qiagen, Hilden, Germany), and finally dissolved in distilled water at approximately 50 ng/l. The original 1,420 BC 1 S 3 plants were genotyped with the barley 9 k Infinium iSelect SNP array (), consisting of 7,864 SNP markers as reported in Comadran et al.. Pooled DNA samples of 12 BC 1 S 3:8 seedlings per HEB-25 line were genotyped with the barley 50 k Infinium iSelect SNP array (;Maurer and Pillen, 2019) at TraitGenetics GmbH, Gatersleben, Germany to reconstruct original heterozygosity. 
SNP calling in Illumina genotyping assays is based on the concept of hybridization technology with specifically designed oligonucleotide probes, where the intensity of two distinct fluorescently labeled target sequences represents the signal strength for each of the two alleles. Then, a cluster algorithm is applied to distinguish the two contrasting homozygous classes and heterozygous calls (). At TraitGenetics GmbH, the cluster files determining the thresholds for allelic discrimination have been manually revised to improve the call quality, both for the 9 k and the 50 k array. After SNP calling, SNP markers that did not meet the quality criteria (polymorphic in at least one HEB family, < 10% failure (i.e., no call) rate, and < 12.5% heterozygous calls, which is twice the expectancy in BC 1 S 3 ) were removed from the dataset. Furthermore, 256 SNPs were removed as they revealed exact segregation among all HEB lines, indicating that they were in complete linkage disequilibrium (LD). Only one of these duplicates was kept. Altogether, 4,717 SNPs, genotyped in both SNP arrays, met the quality criteria. In the present study, 57 of the initial 1,420 lines were eliminated due to clearly inconsistent genotypes between BC 1 S 3 and BC 1 S 3:8. FIGURE 1 | Population conservation and sampling strategy from generation BC 1 S 3 to BC 1 S 3:8. Exemplified for one heterozygous SNP in BC 1 S 3, which is traced through four seasons of field propagation followed by pooled DNA sampling of 12 seedlings in BC 1 S 3:8. Heterogeneity within these 12 seedlings leads to heterozygous allele calls in Illumina 50 k genotyping and is termed "reconstructed heterozygosity". The expected segregation of a single SNP in HEB-25 generation BC 1 S 3 is equal to 71.875% homozygous Barke, 6.25% heterozygous, and 21.875% homozygous wild barley. Without selection, the expected SNP segregation of the offspring of a heterozygous HEB-25 plant in generation BC 1 S 3:8 is equal to 48.4375% homozygous Barke, 3.125% heterozygous, and 48.4375% homozygous wild barley, giving rise to a reconstructed heterozygous genotype (B, resulting in a score of 0 in the reconstructed genotype matrix; Supplementary Table S1). A and C indicate different possible scenarios of allele distribution in case of selection for the Hsp allele (A, resulting in a score of 1 in the reconstructed genotype matrix) or Hv allele (C, resulting in a score of −1 in the reconstructed genotype matrix) in previous generations (here exemplified for BC 1 S 3:7 ). Note that a homozygously fixed allele call in Illumina genotyping can be obtained despite the presence of small amounts of opposite alleles in the pooled sample (C). To determine the relative fixation direction (RFD) in which an originally heterozygous SNP allele moved, a reconstructed genotype matrix was created (Supplementary Table S1) containing fixation values for each SNP*genotype combination where "0" represents SNPs that retained their (reconstructed) heterozygosity state, i.e., both alleles are still present in the 12 sampled plants of a BC 1 S 3:8 line, "-1" represents SNPs that were fixed for the homozygous Hv allele, and "+1" represents SNPs that were fixed for the homozygous Hsp allele. The RFD for each SNP was then determined from these fixation values. Both AFR and RFD were only calculated for SNPs containing at least 10 heterozygous HEB lines in generation BC 1 S 3 (n = 3,872 SNPs) to avoid strong bias.
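As an illustration of how per-SNP statistics can be derived from such a reconstructed genotype matrix, the following Python sketch computes an allele fixation rate (the share of initially heterozygous lines that became fixed) and a signed fixation direction for each SNP. This is one plausible formulation under the coding described above, not necessarily the exact formulas used in the study; the toy matrix, line names and SNP names are hypothetical.

# Illustrative sketch (not the study's exact formulas): per-SNP fixation
# statistics from a reconstructed genotype matrix coded as
#   0 = still heterogeneous, -1 = fixed homozygous Hv (Barke), +1 = fixed homozygous Hsp (wild),
# restricted to lines that were heterozygous at that SNP in BC1S3 (NaN otherwise).
import numpy as np
import pandas as pd

fixation = pd.DataFrame(
    {"SNP_1": [0, 1, -1, -1, np.nan],
     "SNP_2": [1, 1, 1, 0, 0]},
    index=[f"HEB_{i:02d}" for i in range(1, 6)],
)

n_het = fixation.notna().sum()              # initially heterozygous lines per SNP
n_fixed = (fixation.abs() == 1).sum()       # lines fixed to either homozygous class
afr = n_fixed / n_het                       # allele fixation rate (0..1)
rfd = fixation.sum() / n_het                # signed net fixation direction (-1..+1)

print(pd.DataFrame({"n_het": n_het, "AFR": afr, "RFD": rfd}))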
To test for significance of RFD, chi-square goodness-of-fit tests were conducted to measure deviations from a 1:1 ratio of homozygous Hv to homozygous Hsp genotypes in BC 1 S 3:8. For this purpose, a sliding window approach summarizing allele counts of 10 consecutive SNPs was applied. A significant deviation was accepted at a Bonferroni-corrected value of p < 0.01. Pearson's correlations of RFD with Hsp allele SNP effects estimated in the whole HEB-25 population were calculated. For this purpose, SNP effects were obtained from a simple linear model regressing published data on plant height and flowering time, ear number, grain number per ear, grain yield, and threshability (), as well as unpublished data of powdery mildew susceptibility and brittleness of rachis on the quantitative SNP scores (matrix D of Maurer and Pillen ). Expected Probabilities of Allele Fixation During Population Conservation The expected segregation ratio in the BC 1 S 3 generation of HEB-25 is 0.71875: 0.0625: 0.21875 for homozygous Hv allele, heterozygous, and homozygous Hsp allele, respectively. Those 6.25% of heterozygous SNPs will segregate in advanced generations. In each selfing generation, the heterozygous rate is halved and a quarter is going to be fixed in each of the two homozygous classes. The progeny of a single BC 1 S 3 plant, which is termed BC 1 S 3:4, is therefore expected to segregate in a 0.25: 0.5: 0.25 ratio at an initially heterozygous SNP. By following this rule, a segregation of 0.484375: 0.03125: 0.484375 at initially heterozygous loci is expected for BC 1 S 3:8 generation, if no selection occurs during reproduction and plant cultivation ( Figure 1B). Consequently, at an initially heterozygous locus, an unbiased reconstructed genotype in BC 1 S 3:8 should consist of a pooled DNA sample of ~6 Hv and ~ 6 Hsp plants, giving rise to a heterozygous allele call in 50 k genotyping. This way, the initial heterozygosity can be reconstructed. In case of no selection, 100% heterozygous calls are expected in the pooled sample of 12 BC 1 S 3:8 plants originating from an initially heterozygous BC 1 S 3 plant. To estimate the impact of accidentally biased sampling on genotype calling in BC 1 S 3:8 generation, the probabilities of obtaining a biased pooled sample of 12 plants, used for 50 k genotyping, were determined in 1,000,000 binomial trials with regard to different simulated allele segregations (0.05-0.95) of the previously harvested generation. The probabilities of accidental sampling of 12 or ≥ 9 plants of the same homozygous allele class, leading to homozygous genotype calls in Illumina genotyping, were then estimated for each preset allele segregation. RESULTS In total, 214,979 (6,02%) heterozygous SNP calls were obtained from 3,573,466 polymorphic SNP assays in generation BC 1 S 3 of population HEB-25 (Supplementary Table S1), which is close to the expected frequency of 6.25%. This represents an average number of ≈ 46 heterozygous HEB lines per SNP. Altogether, 104,725 (48.7%) SNP calls out of those initially heterozygous SNPs were homozygously fixed after five cycles of selfing in BC 1 S 3:8 generation, based on SNP analysis of a pooled sample of 12 plants for each BC 1 S 3:8 line. This is also visible in the frequency distribution of the allele fixation rate (AFR) of each single SNP (Supplementary Figure S1; Supplementary Table S1), indicating that on average originally heterozygous SNPs did not segregate equally into both allele classes in half of the lines (average AFR = 48.6%). 
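A sliding-window goodness-of-fit test of the kind described in the Methods could be sketched as follows. The window size (10 consecutive SNPs), the 1:1 expectation and the idea of a Bonferroni correction follow the text, but the input arrays, the exact correction factor and the ordering by map position are assumptions made purely for illustration.

# Sketch of a sliding-window chi-square goodness-of-fit test against a 1:1
# ratio of Hv- vs. Hsp-fixed calls. The per-SNP counts below are simulated
# placeholders; real input would be counts ordered by genetic map position.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
n_hv = rng.integers(5, 40, size=200)     # hypothetical per-SNP counts fixed toward Hv
n_hsp = rng.integers(5, 40, size=200)    # hypothetical per-SNP counts fixed toward Hsp

window = 10
n_windows = len(n_hv) - window + 1
alpha = 0.01 / n_windows                 # one possible Bonferroni correction

for start in range(n_windows):
    hv = n_hv[start:start + window].sum()
    hsp = n_hsp[start:start + window].sum()
    expected = (hv + hsp) / 2.0
    stat, p = chisquare([hv, hsp], f_exp=[expected, expected])
    if p < alpha:
        print(f"window {start}-{start + window - 1}: deviation from 1:1 (P={p:.2e})")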
In 42,752 (19.9%) cases, the SNP was fixed toward the Hsp allele, while in 61,973 (28.8%) cases, the SNP was fixed toward the Hv allele (Supplementary Table S1). Mapping these tendencies on the genome revealed clear patterns of genomic regions where the Hsp allele and the Hv allele were favored, respectively (Figure 2), enabling to separate the observed effects from genetic drift, which would not favor a specific allele and therefore would result in light yellow colors on the heat map. We expected that Barke (Hv), as a semi-dwarf European spring barley cultivar with resistance to powdery mildew, should be well adapted to the environmental conditions during propagation in the spring-sown trials in Halle. The obviously relevant loci Ppd-H1, Vrn-H2, mlo, and sdw1/denso were all under selection (Figure 2). AFR was highest for Ppd-H1 (>90%) and showed a clear tendency of fixation for the Hv allele (negative RFD). This was also true for the Vrn-H2 and mlo loci conferring vernalization independency and resistance to powdery mildew, respectively. However, at denso/sdw1, the Hsp allele was clearly favored (positive RFD). Technical Factors Affecting Allele Fixation One challenge of conserving extensive experimental plant populations with remaining heterozygosity is to develop a strategy to maintain the population in an effective manner. We applied a method of maintaining the barley NAM population HEB-25 by harvesting 20 randomly selected representative ears from each of the 1,420 lines during four seasons of field propagation. By comparing genotype data of the original BC 1 S 3 plants with a pooled DNA sample of the resulting progeny in BC 1 S 3:8, we observed that the original heterozygosity present in the HEB lines was halved as indicated by the reconstructed heterozygosity in BC 1 S 3:8. In other words, half of the lines segregating for a specific SNP locus were fixed toward a homozygous allele during population conservation. To interpret this number, one has to consider that the obtained genotype score is the product of leaf sampling for DNA extraction and subsequent microarray-based SNP genotyping and allele calling. Since the initial heterozygosity declines after five selfing generations, we collected leaf material of 12 plants to reconstruct heterozygosity. Noteworthy, the selection of 12 plants itself harbors the potential to bias the true heterozygosity score if by chance only plants with the same homozygous genotype are collected. However, the probability that either only Hv or only Hsp genotypes are collected in this way is <0.2% for each homozygous allele class (Supplementary Table S2), assuming that both alleles occur in an equal expected proportion of 0.484375 in BC 1 S 3:8 (Supplementary Figure S2). In microarray-based SNP genotyping, allele classes are defined with a certain tolerance for the fluorescence signals of both tested nucleotides. This means that a homozygous genotype call can be obtained even if there are up to ~20% of the opposite allele in the pooled DNA sample. If taking this into account, the probability of allele fixation is technically raised, though still rather low (<6% for each homozygous allele class; Supplementary Table S2). The observed AFR of ~0.5 could only be realized if the fractions of harvested seeds from the previous generation(s) are skewed (Supplementary Figure S2), either by natural or artificial selection. 
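The sampling-bias argument can be illustrated with a short simulation in the spirit of the binomial trials mentioned in the Methods: given an assumed frequency of one homozygous class among the harvested seeds, how often does a random pool of 12 plants contain 12, or at least 9, plants of the same class (the situations that could yield a homozygous call from the pooled DNA sample)? The frequencies and the trial count below are illustrative choices, not the study's settings.

# Simulation sketch: probability that a random pool of 12 plants is skewed
# enough that pooled genotyping would look homozygous (all 12, or >= 9 of 12,
# plants from the same homozygous class). Frequencies/trials are illustrative;
# the study used 1,000,000 binomial trials per preset segregation.
import numpy as np

rng = np.random.default_rng(42)
n_plants = 12
n_trials = 100_000

for hsp_freq in (0.25, 0.484375, 0.75):          # simulated frequency of the Hsp class
    hsp_counts = rng.binomial(n_plants, hsp_freq, size=n_trials)
    p_all_hsp = np.mean(hsp_counts == n_plants)
    p_all_hv = np.mean(hsp_counts == 0)
    p_hsp_9plus = np.mean(hsp_counts >= 9)
    p_hv_9plus = np.mean(hsp_counts <= n_plants - 9)
    print(f"Hsp freq {hsp_freq:.3f}: "
          f"P(all 12 Hsp)={p_all_hsp:.4f}, P(all 12 Hv)={p_all_hv:.4f}, "
          f"P(>=9 Hsp)={p_hsp_9plus:.4f}, P(>=9 Hv)={p_hv_9plus:.4f}")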
It must be noted that in addition to the previous remarks also the fact matters that the cluster file for allele discrimination was revised for the 50 k array, possibly leading to a per se reduced rate of heterozygous calls. This can explain the relatively high AFR of ~0.5 throughout the whole genome without indication of a specifically selected locus. Sources of Selection Pressure During Population Conservation Deviations from this background noise may hint on a certain selective pressure. During field propagation of a population, many natural sources of selection pressure may exist. For instance, under extreme environmental conditions, specific genotypes may be lost if they cannot cope with existing selection pressure. Furthermore, plant-plant competition occurs in a plot, leading to an unequal contribution of alleles to subsequent generations. Lower sowing densities could help to mitigate this competition to maintain the genotype during plot propagation. However, also artificial selection can occur during maintaining a population, for instance by unconsciously harvesting the most vital looking ears or by chance harvesting unequal fractions of ears of both genotype classes. Therefore, for future studies, we recommend harvesting whole plots rather than a sample of each plot to avoid a potential source of artificial selection. In our case, we assume that a mixture of both sources of selection acted on HEB-25 lines during field cultivation. Genomic Regions Under Selection Interestingly, we observed a systematic pattern of genomic regions being fixed either toward Hv or Hsp alleles (Figure 2; Supplementary Table S1). Those genomic regions are likely the reasons for selection events that occurred during population conservation. Assigning candidate genes to these regions revealed that most of the loci correspond to well-known genes of barley domestication. Since HEB-25 results from crosses of wild barley and the domesticated elite barley cultivar Barke, this finding can be interpreted as a short story of domestication in barley. Supporting this, there was a clear tendency of selection against the dominant Btr1/Btr2 allele on chromosome 3H () conferring a brittle rachis and shattering of the ear of wild barley at maturity. In BC 1 S 3:8 lines, the genomic region of btr1/btr2 was predominantly fixed toward the cultivated Hv allele. Plausibly, predominately intact ears without brittle rachis were harvested, leading to a fixation of non-brittle ears in future generations. This is rather a source of artificial selection due to harvesting than a natural advantage of non-brittle lines. Likewise, another potentially artificially selected genomic region is the threshability locus thresh-1 on 1H (). Here, the wild allele causes awns and rachis remaining attached to the grain after mechanical threshing. In contrast, the domesticated dominant allele confers a "clean" grain that can directly be used for sowing in the next season. Although for all grains remaining awns and rachis were manually removed, there might have been the tendency that already "clean" seeds were preferred for next season sowing. Another prominent domestication locus, vrs1, leading to the six-rowed ear phenotype in cultivated barley, could not reliably be captured in our study. We would expect a clear shift toward the six-rowed phenotypes, since the probability that their grains contribute to the next generation is increased due to the higher number of grains per ear. 
For the sake of completeness, however, it should be mentioned that this effect could be partly compensated by the increased seedling vigor of larger seeds produced in two-rowed ears (). However, only a single HEB family (F24) shows six-rowed ears, derived from its H. v. ssp. agriocrithon parent HID380. Only one out of 56 lines of F24 was heterozygous at the vrs1 locus (BOPA2_12_30897) in BC 1 S 3. Therefore, a clear statement for this locus is not possible. However, in general, we observed that loci affecting the grain number per ear in HEB-25 () were often associated with genomic regions with an increased AFR. This may indicate that more harvested grains from genotypes carrying a grain numberincreasing locus lead to a higher probability of those grains being selected for next season sowing. This tendency is supported by the slightly positive correlation of 0.28 between Hsp allele SNP effects for grain number and RFD (Figure 3; Supplementary Figure S3). Phenology-Related Effects Strikingly, many of the selection-affected regions correspond to flowering time and plant development genes (Figure 2; Supplementary Table S1). As indicated in Maurer et al. and Maurer et al., eight major flowering time loci could be identified in HEB-25 that explained large proportions of the variance for flowering time and other developmental traits. Interestingly, most of them showed a clear fixation tendency. However, the direction of fixation differed and was not correlated with the accelerating or delaying effects of the Hsp alleles (Figure 3; Supplementary Figure S4). At these loci predominately, the Hv allele was fixed, except at the sdw1 locus, where the Hsp allele was fixed more often. This finding indicates that the reason for the fixation might be a selection for higher plants rather than flowering time, since sdw1 is the main determinant of plant height in HEB-25 and Barke carries the semi-dwarf allele. Either higher plants were favored at manual harvesting avoiding strenuous stooping or higher plants impeded growth of semi-dwarf plants early on. The latter is supported by Raggi et al., who also detected several plant height associated genetic loci under natural selection in machine-harvested trials with barley composite crosses, and by Chen et al., who substantiated that taller plants were more competitive in terms of light interception. Most of the obtained fixed loci in the present study co-localize with QTL for plant height in HEB-25, which was underlined by a correlation of r = 0.48 between Hsp allele effects for plant height at maturity and RFD (Figures 3, 4), indicating that alleles increasing plant height were preferentially fixed. Most likely, those alleles increase competitiveness already early on during plant development. In spring wheat, cultivars with increased plant height showed an increased weed suppression ability (). Conscious selection on early competitiveness might be useful to breed new cultivars with increased weed suppression ability enabling a sustainable herbicide-reduced cropping system. One of the prerequisites for the expansion of barley cultivation during domestication was the development of a spring growth habit. In contrast to wild barley, spring barley has no or reduced vernalization requirement and is insensitive to day length allowing for an extended vegetative growth period under long-day conditions. 
The optimum exploitation of the growth period results in higher grain yield and is, therefore, most likely the reason why major flowering time genes were fixed toward the Hv allele, which also indicates that Barke (Hv) seems to be phenologically optimally adapted to the investigated environment Halle. The most extreme example in this context is Ppd-H1, the main determinant of photoperiod response in barley (). Out of the 78 heterozygous lines in BC 1 S 3 74 (95%) were fixed in BC 1 S 3:8 (Supplementary Table S1, BK_12). Thereof, 67 lines were fixed toward the Hv allele conferring photoperiod insensitivity. This is among the highest AFR observed throughout the whole genome and outlines the importance of Ppd-H1 for adaptation to Central and Northern European climates. However, interestingly, in 7 lines, the Hsp allele of Ppd-H1 was fixed in BC 1 S 3:8. We assume that this indicates random selection events or is due to simultaneous co-selection for another locus, as we do not see any indication of a specific Ppd-H1 haplotype being preferred (). Note that the observed fixation of Ppd-H1 is environment-specific and would probably be less pronounced or even opposite in other latitudes. For instance, in the Mediterranean region, earliness is key to escape early season terminal drought, which would mean a selection advantage for the Hsp allele (Andrs and Coupland, 2012;a;). Further Interesting Insights Besides many known domestication loci, also other genomic regions showed specific allele preferences, although the AFR was not higher than the background noise in the rest of the genome. However, the clearly directed fixation at these loci might point to the fixation of alleles in specific families of the NAM population. Examples are the centromere regions of chromosomes 1H and 3H, where a significant tendency toward the Hsp allele was observed. This was also true for distal parts of chromosome 3HL and 6HS. All these regions might harbor wild barley alleles that create a positive selection pressure, hinting to promising sources of new allelic variation for future breeding. These wild barley alleles may confer (a)biotic stress tolerance, for instance against drought, which frequently occurs in the studied environment Halle. In 2011, the first year of field propagation, the lowest total precipitation sum until maturity was observed (Supplementary Figure S5). However, one has to admit that these loci might also be a source of artificial selection. In contrast to looking for an increased AFR, also the opposite approach is interesting. Loci with a low fixation rate might hint to alleles conferring hybrid vigor and might represent potential candidates for hybrid barley breeding. In this context, the peri-centromeric region of chromosome 4H and a region on the long arm of chromosome 2H might be promising targets. CONCLUSION By screening heterozygosity patterns in a wild elite barley NAM population, we were able to determine loci affected by selective fixation of either the exotic or the elite barley allele during five cycles of reproduction and field cultivation. The factors causing allele frequency changes through competition in composite crosses are manifold (Blijenburg and Sneep, 1975;;;Phillips and Wolfe, 2005;). The selection factors may be grouped into (i) natural selection caused by plant-plant competitiveness, phenological advantages as well as superior abiotic and biotic stress resilience and (ii) artificial selection through cultivation and harvesting practices. 
With our approach, we could unveil the genetic basis of those selection events and define alleles that are superior during reproduction and plant cultivation in the investigated environment. The use of a large segregating mapping population for the temporal screening of heterozygosity enabled us to define alleles in distinct genome regions as drivers of adaptive evolution. Defining such superior alleles could help to select chromosomal regions covering potentially beneficial wild alleles conferring, for example, stress tolerance, which is of special importance to cope with the drastic challenges ahead in times of climate change. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. Original genotype matrices that were used for the analysis are available in Maurer et al. and Maurer and Pillen. The resulting fixation matrix is available in Supplementary Table S1. AUTHOR CONTRIBUTIONS AM analyzed data and wrote the manuscript. KP coordinated the project and co-wrote the paper. All authors contributed to the article and approved the submitted version. |
Use of computed tomography in the management of colorectal cancer. Computed tomography (CT) plays an important role in the management of colorectal cancer (CRC). The use of CT (colonography) as a screening tool for CRC has been validated and is expected to rise over time. The results of prior studies suggest that CT is suboptimal for assessment of local T stage and moderate for N stage disease. Recent advances in CT technology are expected to lead to some improvement in staging accuracy. At present, the main role of CT in pre-treatment imaging assessment lies in its use for the detection of distant metastases, especially in the liver. In a select group of patients, routine post-treatment surveillance with CT confers survival benefits. The role of CT for post-treatment assessment has been radically altered and improved with the advent of fusion positron emission tomography/CT. Perfusion CT shows promise as another functional imaging modality but further experience with this technique is necessary before it can be applied to routine clinical practice. INTRODUCTION The majority of patients suffering from colorectal cancer (CRC) are over 50 years of age, with a relatively equal gender incidence. Recent declines in CRC incidence and mortality are attributable to reduced risk factor exposure, early detection and prevention through polypectomy, and improved treatment. Despite this, CRC remains the third commonest adult cancer, with approximately 1 in 19 adults diagnosed with CRC during their lifetime. Imaging plays an important role in screening for CRC. According to the current American Cancer Society guidelines for CRC screening, 5-yearly computed tomography (CT) colonography (CTC) is recommended for asymptomatic patients with average risk. In patients with known CRC, CT plays an important role both in pretreatment staging of disease and in assessing response to treatment. Traditionally, this has been done by anatomical imaging assessment on CT. Advances in technology have further increased the role of CT by facilitating functional imaging with positron emission tomography (PET) and perfusion studies. Screening Although elevated serum carcinoembryonic antigen (CEA) levels are often present in CRC, they are neither sensitive nor specific enough to be used as a screening tool for asymptomatic patients. CTC (otherwise known as virtual colonoscopy) allows a minimally invasive imaging examination of the entire colon and rectum. Compared to optical colonoscopy, the risk of colonic perforation during screening is extremely low, being 0.005% for asymptomatic patients and up to 0.06% for symptomatic patients. Use of carbon dioxide delivered by a pressure-regulating insufflator, rather than room air, for gas insufflation of the colon may further reduce the incidence of perforation. In CTC, high resolution image acquisition of the entire large intestine in a single breath hold is permitted by the use of multi-row detector CT. Integrated 3D and 2D analysis with specialised post-processing software allows for ease of polyp detection, characterization of lesions and localization. For optimal assessment, adequate bowel preparation and gaseous distension of the colon are essential. Newer techniques such as faecal tagging reduce the need for vigorous bowel preparation and decrease false positives from the presence of adherent faecal matter. In contrast with optical colonoscopy, extracolonic structures are also evaluated in the same examination. 
Hellström et al showed that potentially important extracolonic findings, such as lymphadenopathy, aortic aneurysms and solid hepatic and renal masses, were present in 23% of patients. The American College of Radiology Imaging Network National CT Colonography Trial, which included 2500 patients across 15 institutions in the United States, has shown comparable accuracy between CTC and standard colonoscopy. Pickhardt et al reported a sensitivity of 89% for adenomas greater than 5 mm. For invasive CRC, the pooled CTC sensitivity was higher at 96%. As with other screening techniques, CTC accuracy improves with lesion size. All patients with one or more polyps larger than 10 mm, or three or more polyps larger than 6 mm, should be referred for colonoscopy. However, the management of patients with fewer than three polyps in which the largest polyp measures 6 to 9 mm remains controversial at present. For patients with suspected CRC, the diagnostic accuracies of contrast-enhanced CTC were even better: using the tumour, node, and metastasis system, staging accuracies of 95%, 85%, and 100% were achieved. The sensitivity of both CTC and optical colonoscopy for cancer detection was 100%, while the overall sensitivity of CT colonography was even higher than initial colonoscopy for polyp detection (90% vs 78%, P = 0.001, Figure 1). The main drawback of CTC is radiation exposure. A single CTC study results in an estimated organ dose to the colon of 7 to 13 mSv, which adds an estimated 0.044% to the lifetime risk of colon cancer. More efficient low-dose protocols (estimated organ dose ranges of 5 to 8 mSv) have been shown to be feasible, with encouraging results. Pre-treatment staging Preoperative CT is typically performed for the following indications: suspected haematogenous or distal nodal (e.g. paraaortic) metastases; suspected invasion into adjacent organs or abscess formation; unexplained or atypical symptoms; and unusual histologic results. The major goal of CT is to determine if there is direct invasion of adjacent organs, enlargement of local nodes, or evidence of distant metastases. On CT, CRC commonly manifests as focal thickening of the bowel wall and luminal narrowing; hence adequate distension of the bowel is crucial for accurate assessment. CT has a role in the detection of potential complications, such as perforation, fistulation and intussusception, which may require early surgical intervention. The clinical use of CT for local tumour (T) staging of rectal cancer is limited, with a reported accuracy of around 70%. This is attributable to the lack of attenuation differences between tumour and normal visceral soft tissue. In a study by O'Neil looking at patients with rectal cancer, CT consistently overestimated tumour volume and underestimated distance from the anal verge compared to magnetic resonance imaging (MRI). CT is also poor for the assessment of levator ani invasion in low rectal lesions, although it may assess the more proximal lesions with reasonable accuracy (Figure 2). Similarly, for the more proximal large bowel, CT fares suboptimally, with a sensitivity and specificity of 60% and 67%, respectively, for the detection of extramural spread of tumour. This is largely due to failure to detect microscopic disease. CT can be considered to be more efficacious for nodal and metastases (N and M) staging than for T staging. 
A large meta-analysis by Bipat et al that included 90 studies showed similar accuracies between ultrasound, CT and MRI for the assessment of nodal involvement by rectal cancer. In a study of 137 patients, Valls et al showed good accuracy (85.1%), a high positive predictive value (96.1%) and a low false-positive rate (3.9%) of CT for the detection of liver metastases. For the detection of CRC metastases, CT imaging in the portal venous phase is the technique of choice. The addition of hepatic arterial phase imaging has been shown not to increase sensitivity, even though it improves the specificity in diagnosing liver metastases in a small number of cases. At present, the optimal imaging strategy for the pretreatment distant staging of CRC remains controversial. For instance, chest CT often detects indeterminate lung lesions, of which only a small proportion develop into definite metastases. Similarly, in rectal cancer, where pelvic MRI has already been performed, CT of the abdomen and pelvis will not provide additional value. Therefore, further studies are required to define optimal preoperative imaging. Other than the liver, the peritoneum is a major site for metastatic disease (Figure 3). The presence of peritoneal metastasis predicts a higher local recurrence rate. Furthermore, the Peritoneal Cancer Index, an assessment of the tumour burden attributed to peritoneal disease, has been recognized as an independent prognostic indicator for long-term outcomes. The role of CT in the detection of peritoneal carcinomatosis is limited for small metastases. In the study by de Bree et al, CT detection of peritoneal metastases was only moderate (ranging from 9% for subcentimeter lesions to 66% for lesions larger than 5 cm) with significant interobserver differences. A more recent study by Koh et al echoed these findings, with a sensitivity of 11% for lesions smaller than 0.5 cm contrasting with 94% for lesions larger than 5 cm, significantly underestimating the Peritoneal Cancer Index. [Figure caption: A: Surveillance axial contrast-enhanced CT image shows a metastatic deposit in the right rectus abdominis muscle (arrow); B: a second metastatic lesion is present in the left paracolic gutter (arrow). The high spatial resolution of CT and the contrast with the adjacent fat allow for easy detection of metastatic disease in these areas.] Post-treatment assessment For routine surveillance, the American Society of Clinical Oncology currently recommends CEA assays every 3 mo for the first 3 years, CT scan of the chest, abdomen and pelvis annually for the first 3 years, and colonoscopy at 3 years in patients with stage 2 and stage 3 CRC. Local disease recurrence is evidenced on CT by the serial progression of a mass, its nodular configuration and invasion of adjacent structures. However, CT cannot reliably differentiate tumour from post-treatment scar formation. For both local and nodal assessment of rectal cancer after neoadjuvant chemoradiation therapy, CT may not be able to reliably predict pathological response, and has a tendency to overstage disease. The study by Huh et al looked at 80 rectal cancer patients following neoadjuvant chemoradiation therapy. It was found that the overall accuracy of CT for restaging the depth of tumour invasion and lymph node metastasis was 46.3% and 70.4%, respectively, while complete pathology-proved remission (11 patients) could not be correctly predicted. 
Nevertheless, for the diagnosis of recurrent hepatic metastases, CT has already been shown to be more helpful than laboratory studies (liver function tests, measurement of CEA level). Specifically, there is a 25% lower mortality in patients undergoing liver imaging compared with non-imaging strategies. This is further supported by the study of 530 patients conducted by Chau et al, in which routine post-treatment surveillance with CT and CEA levels in asymptomatic patients was shown to confer a median survival advantage of 13.8 mo over patients who were symptomatic. The reader should note, however, that given the increased costs, the use of routine CT surveillance in these patients is only justified for those who are surgically fit to undergo metastasectomy. Therefore, CT currently still plays an important role in the postoperative surveillance of CRC. 18F-fluorodeoxyglucose (FDG) is the most widely used substrate for PET imaging. Fusion PET/CT combines the functional evaluation by PET with the anatomic detail provided by CT (Figures 4 and 5). PET/CT is increasingly shown to be superior to the other imaging modalities in demonstrating recurrent disease activity and has become an integral part of the surveillance strategy for CRC. It has the potential to replace CT as the first-line diagnostic tool for restaging patients for recurrent CRC. In one study, PET/CT revealed unsuspected disease and modified the scope of surgery in around 10% of patients. In another study, FDG PET/CT altered treatment plans in 38% of patients, largely through the detection of unsuspected lymphadenopathy. For local disease, PET/CT can improve preoperative target volume delineation by CT for conformal radiation therapy in rectal cancer. Preoperative PET/CT colonography may yield valuable information on the presence of synchronous tumours and for surgical planning. However, by far the greatest value of PET/CT in the management of CRC lies in its ability for whole body lesion detection. In one study, PET/CT showed high accuracy for the detection of liver metastases, with a reported accuracy of up to 99%, sensitivity up to 100% and specificity up to 98%. In the meta-analysis conducted by Kinkel et al that included 110 studies, PET/CT afforded the highest mean weighted sensitivity (92%) and was significantly more sensitive for the detection of hepatic metastases from gastrointestinal cancers than CT. Rappeport et al showed that PET/CT was superior to CT alone for the detection of extrahepatic metastases in CRC patients, with sensitivity and specificity rates of 83% and 96% for PET/CT and 58% and 87% for CT. Contrast-enhanced PET/CT and PET/CT colonography show promise for improving accuracy in staging of disease. PET/CT can distinguish between tumour recurrence and post-surgical scar, as well as pinpoint the site of recurrence in cases with an unexplained rise in serum CEA. It is therefore recommended for evaluation of equivocal findings on serial CT and MRI. To detect recurrent nodal disease, PET/CT is superior to MRI, with a sensitivity of 93%. PET/CT is superior to contrast-enhanced CT in detecting local recurrences at the colorectal anastomosis, intrahepatic recurrences and extrahepatic disease, with sensitivity rates close to or exceeding 90%. Quantitative measurements of standardised uptake value and tumour volume may be used as a marker of tumour burden in cases of tumour recurrence. Note that PET/CT should be performed more than 6 wk following local therapy, as inflammatory changes can result in false positives. 
In one study, PET/CT correctly assessed the response of liver metastases to bevacizumab-based therapy in 70% of cases, compared to 35% for CT. For evaluation of liver metastases after radiofrequency ablation, PET/CT is comparable to MRI. In the study by Kuehl et al, the accuracy and sensitivity for the detection of liver metastases were 91% and 83% for PET/CT and 92% and 75% for MRI, respectively. After treatment of liver metastases with Y-90 microspheres, metabolic response on PET/CT correlates better with CEA levels than anatomic response with CT or MRI. This having been said, it should be noted that complete metabolic response on FDG-PET after neoadjuvant chemotherapy does not necessarily imply complete pathologic response. Therefore, currently, curative resection of liver metastases should not be deferred solely on the basis of FDG-PET findings. Perfusion CT Novel techniques such as perfusion CT and combined perfusion CT/PET CT show promise. Perfusion CT is performed at various time intervals after the injection of contrast. A precontrast scan is required for determination of the increase in Hounsfield attenuation. Standard imaging protocols are to image at 45 and 130 s after contrast injection. For perfusion CT, iodinated contrast needs to be injected at a high rate, typically 5 mL/s. Tissue blood flow, blood volume, mean transit time, and vascular permeability-surface area product are calculated from the enhancement curves. Aggressive tumours with poor differentiation are thought to be more vascular, and may therefore be distinguished from more well differentiated lesions with the use of perfusion CT. In the study by Sahani et al, rectal cancer showed higher tissue blood flow and shorter mean transit times than normal rectum. Similar findings were echoed in another study, in which CT perfusion was able to differentiate cancer from inflammation secondary to diverticulitis. An elevated liver perfusion index has also been found to be associated with the presence of hepatic metastases. Increased arterial perfusion appears to be an indicator of liver metastases, whereas reduced portal perfusion may indicate progressive disease. Perfusion CT may also play a role in predicting progression to metastatic disease. In the study by Goh et al, tumour blood flow differed significantly between disease-free and metastatic patients (76.0 mL/min per 100 g tissue vs 45.7 mL/min per 100 g tissue, respectively). Using blood flow < 64 mL/min per 100 g tissue as a cut-off, the sensitivity and specificity for the development of metastases were 100% and 73%, respectively. Perfusion CT has potential for predicting the response of rectal cancer to combined neoadjuvant chemotherapy and radiation therapy. In a study of 19 patients, blood flow, blood volume and permeability-surface area product significantly decreased after combined chemotherapy and radiation therapy (P < 0.009). To date, however, the technique of perfusion CT remains the subject of research. The main drawback of this technique is the additional exposure to ionising radiation (estimated at 10 mSv), which translates to an added lifetime cancer risk of roughly 1 in 2000. To mitigate this, the radiation dose should be carefully optimised on a per-patient basis. There is also a need for standardisation of techniques. For example, the position and size of the tumour region of interest used for analysis and observer variation have been found to substantially influence perfusion values. 
Region-of-interest analysis of the outlined entire tumour is more reliable for perfusion measurements and more appropriate clinically than the use of arbitrarily determined smaller ROIs, although this may mean increased post-processing times. CONCLUSION CT plays an important role in the management of CRC. The use of CT (colonography) as a screening tool for CRC has been validated and is expected to rise over time. The results of prior studies suggest that CT is suboptimal for the assessment of local T stage and moderate for N stage disease. Recent advances in CT technology are expected to lead to some improvement in staging accuracy. At present, the main role of CT in pre-treatment imaging assessment lies in its use for the detection of distant metastases, especially in the liver. In a select group of patients, routine post-treatment surveillance with CT confers survival benefits. The role of CT for post-treatment assessment has been radically altered and improved with the advent of fusion PET/CT. Perfusion CT shows promise as another functional imaging modality, but further experience with this technique is necessary before it can be applied to routine clinical practice. |
Q:
"Another" and other adjectives
Let me show you an example of this: blah blah
Here is another subtle example: blah blah
Is it technically strange to combine "another" and "subtle" (or whatever adjective) here, when the first "example" is not "subtle"? Do I have to say it like this?
Here is another example which is subtle: blah blah
Compare it with these sentences which I think are perfectly valid:
Let me show you a subtle example of this: blah blah
Here is another subtle example: blah blah
because both "examples" are "subtle".
A:
As pointed out, adding subtle to the second sentence makes the sentence seem out of place due to a lack of parallelism. You could, however, specify that the second example is more subtle than the former. This would maintain readability and provide the intended differentiation from the previous sentence.
Let me show you an example of this: blah blah
Here is another, more subtle example: blah blah blah
Or
Let me show you an example of this: blah blah
Here is a more subtle example of this: blah blah
A:
Let me show you an example of the mesomeric effect: ...
Here is another interesting example: ...
Adding an adjective this way is perfectly okay. The implication is that the previous example was also interesting.
If you want to make it clear that it is only the second example that seems particularly interesting to you, you might put it thus:
Let me show you an example of the mesomeric effect: ...
Here is another example, and it's interesting: ...
P.S. User83984's example
Here is a more subtle example of this: blah blah
is great, it looks very natural. |
NEW YORK -- He is not the sort of toe-the-line star New York baseball fans became accustomed to as the Yankees won championships with Derek Jeter and the Core Four, and the say-no-evil David Wright felt like the lone beam of sunshine for the Mets. Forget that. If you ask Mets ace Matt Harvey who is best positioned to be the face of New York baseball now that Jeter has retired, Harvey could not tell a lie: He'd like the nomination himself.
"I'm going out there, I'm going to fight, I'm going to do everything to win -- not just for my teammates, but for New York," Harvey said early in spring training, after his electric first appearance in which he broke two bats and struck out four of the six hitters he faced with a fastball that topped out at 99 mph. Since then, Harvey has only continued to defuse worries about how he'd come back from a 1½-year layoff because of Tommy John surgery.
There's no one template for what makes an athlete the face of New York sports, beyond this simple requirement: You have to win. Big.
After that? Being as colorful and blunt as Harvey is isn't a necessity. But it sure makes everything a hell of a lot more fun if you show some flair like Reggie Jackson and Doc Gooden did, or you make your bones in this town by pulling off some uncanny magic in the biggest moments. Think of the Jets' Joe Namath and The Guarantee, Eli Manning at the end of both Super Bowls he won, Mark Messier when the Rangers snapped their 54-year-old Stanley Cup drought in 1994.
Harvey, who is still just 25, seems like a star capable of delivering magic. If you ask him to define his attacking pitching style, he'll often smirk and call it "primal." He's got swagger. He's got gall. He's nakedly ambitious too. But all of it is offset by his compensating sense of amusement -- he often seems to be smiling at some inside joke only he's in on -- and his unapologetic determination to make sure his run in New York is the time of his life.
"Dirty martinis and music -- that's the big motto in our family," Harvey once joked.
Given all the attention he attracts, it's easy to forget Harvey's entire big league résumé still consists of only a 12-10 record and 237 2/3 innings pitched. But Harvey's talent was so arresting after he splashed down here and became the National League's starter in the All-Star Game at Citi Field in the summer of 2013, it didn't take long before he sought -- or got -- all the perks that come with being a superstar jock in New York: the secret phone numbers that celebs are given to hot restaurants to get a table whenever they like; the guest shot on Jimmy Fallon's late-night show; a personal shopper at the hip clothier John Varvatos; and the supermodel girlfriends, starting with Russian-born Anne Vyalitsyna, who appeared nine straight years in Sports Illustrated's swimsuit issue.
(The day the New York Post outed Harvey and Vyalitsyna as a couple in a photo, Harvey got a standing ovation from his teammates the next time he walked into the Mets' clubhouse for a game.)
Facts like that floated up in two of the more revealing interviews Harvey has given so far -- this story with Men's Journal in 2013 and another that posted this week in New York magazine. A recurring theme in both profiles is Harvey's rapacious appetite to be tops in everything he does. He seems to fit New York like New York fits him.
Who wants to sleepwalk through life in the city that never sleeps?
Harvey said he seeks fashion tips from debonair Rangers goaltender Henrik Lundqvist ("He always makes the best-dressed lists. Well, I want to be on those lists"). He volunteered he looks forward to the day his superagent Scott Boras gets him his first free-agency windfall ("I've gotta wait for that $200 million contract. If I'm going to buy an apartment, it has to be the best apartment in the city"), and he actively sought Jeter's advice on how to handle his off-field exploits.
"That guy is the model," Harvey told Men's Journal. "I mean, first off, let's just look at the women he's dated. Obviously, he goes out -- he's meeting these girls somewhere -- but you never hear about it. That's where I want to be."
It's admissions like that -- and similar statements like "I'm young, I'm single, I want to be in the mix ... I will never apologize for having a life ... I am going to go out" -- that have earned Harvey comparisons to Namath. Both have a wink-wink lust for New York nightlife. But the memory of Namath posing in panty hose way back when -- which was considered risqué at the time -- seems laughably tame compared to Harvey's decision to pose nude for ESPN The Magazine's "Body" issue in a hotel hallway, with only a room-service tray slapped over his midsection.
If Harvey pitches well and wins, all of these things will be a blithe part of his legend. And if he doesn't? He'll be criticized for having too many distractions and too many run-ins with Mets management.
But Harvey, who grew up not far away in Mystic, Connecticut, knows such trade-offs are all part of the bargain. He seems unworried that his King of the City ambitions won't work out. Quite the opposite. He has spoken of going to the window of his 10th-floor apartment in the East Village and looking out over Manhattan and thinking, "Yes, New York. I'm here."
It's possible to think of the Mets making the playoffs this season with Harvey.
But it's nearly impossible to see them doing it without him.
Right now, it's also hard to see anyone eclipsing him as the new face of New York baseball. But here are some dark-horse contenders:
MASAHIRO TANAKA
The fondness Yankees fans have for Tanaka's Japanese countryman Hideki Matsui, especially after Matsui's 2009 World Series MVP honors, suggests that the face of New York baseball needn't be an English speaker. And Tanaka is not. But Tanaka, like Harvey, does have a tantalizing array of pitches and a sense of comfort in the spotlight that dates back to his superstar career in Japan.
WHY TANAKA CAN ECLIPSE HARVEY: Tanaka can collect strikeouts at the same clip, and he has a bulldog mentality, same as Harvey does. Tanaka could've played it safe with the partially torn ulnar collateral ligament in his elbow last season. But he went ahead and made two late starts at the end of the year though the Yanks were out of the playoffs.
WHY TANAKA CAN'T: Tanaka already has a lot of miles on his arm, and his elbow could blow at any minute. Even if he stays healthy but decides to reduce the wear on his elbow by throwing fewer splitters -- his out pitch -- he might not be the same pitcher.
DAVID WRIGHT
Wright is the last remaining homegrown star of his baseball generation in New York. He's been so loyal to the Mets -- taking less money than he could've gotten on the free-agent market, sticking with the team through the lean years and the Madoff scandal -- he is moving into the same sentimental territory that the Yanks' Don Mattingly did late in his career. Fans are rooting for Wright to be rewarded with at least one World Series before he's done.
WHY WRIGHT CAN ECLIPSE HARVEY: Wright opted out of offseason shoulder surgery, but after rehabbing says his swing and power should no longer be inhibited.
WHY WRIGHT CAN'T: What if Wright finds out that even though the Mets moved in the fences at Citi Field a second time, his decrease in production was not attributable to injuries or the previously large dimensions crawling into his head? What if the reality is worse: At age 32, Wright is inexorably moving toward the twilight of his career.
MICHAEL PINEDA
Pineda, who had shoulder surgery in 2012, has pitched better at times than any Yankees starter during spring training. He could become the ace of the Yanks' 2015 staff as CC Sabathia ages out and Tanaka hopes his elbow holds up (wishful thinking that didn't work out for the Mets' Zack Wheeler).
Pineda has a gregarious personality that drifts a little toward Pedro Martinez's sometime wackiness too. And Pineda's chin-trembling vulnerability after he was caught last April pitching with a glob of pine tar on his neck was received as almost, well ... endearing, not just slapstick. It was his signature Yankees moment so far. And yet, rather than just screech "Did Pineda really think he could get away with THAT?" and howl with laughter, a lot of folks complained Pineda's veteran teammates and manager Joe Girardi should've given the kid better guidance on how to hide the pine tar, since nearly every pitcher uses it. So ... wait, it was their fault?
WHY PINEDA CAN ECLIPSE HARVEY: He could be the power pitcher who leads the Yanks to the playoffs while the Mets don't contend.
WHY PINEDA CAN'T: The Yanks need him to be very good, he still has a lot to learn about the art of pitching, and it's hard to predict how he'll handle expectations.
JACOB DEGROM
The Mets' second-year pitcher was the 2014 National League Rookie of the Year. DeGrom is still green, but that didn't prevent him from often being the best pitcher in town last season after Tanaka missed time. DeGrom could win even more this year if the Mets' offense improves as much as the team hopes.
WHY DEGROM CAN ECLIPSE HARVEY: He gets to enjoy life as Harvey's wingman, and not feel the weight of being the stopper who has to end losing streaks, the guy who starts opposite other teams' staff aces.
WHY DEGROM CAN'T: DeGrom didn't have much pressure last year. This year, the Mets will be playing for the postseason and, with Wheeler gone, he will be expected to duplicate or exceed his rookie year success against a league that's now had a long look at him. As hitters adjust, will he?
ALEX RODRIGUEZ
A-Rod has hit two home runs in spring training, which may be two more than cynics expected. Presuming he doesn't have to play the field much -- he's looked like a statue at times during spring training -- Rodriguez will be free for the first time in his career to concentrate on just hitting, the thing he used to do best.
Paradoxical as this sounds, it's easy to imagine scenarios in which A-Rod would not be the face of New York baseball if he has a good to great offensive year because people will just start charging he's back on PEDs. But he could be the face of baseball around here if he stinks and both clubs flop. His failure would be emblematic of the Yankees' strategy of relying on Rodriguez and so many other aging-out, highly paid stars like oft-injured Mark Teixeira at first, and brittle Carlos Beltran as their everyday right fielder though he turns 38 in a month. Who thought that was a good idea?
WHY A-ROD CAN ECLIPSE HARVEY: A-Rod's potential to hog headlines is huge either way. If the Yanks find themselves in only a race to the bottom, no one sparks criticism and loathing like A-Rod does. And if A-Rod is the X factor that helps the Yanks become the only playoff team in town, it'll be a talking point.
WHY A-ROD CAN'T: Because he's A-Rod. And with A-Rod, it's always something. |
OLD SAYBROOK >> Pitches with Attitude, the Salve Regina all-female a cappella group, will end its 2014 tour 3 p.m., Saturday, May 24, at the Town Green Gazebo on Main Street.
Senior Kristina Levick leads the group of 12 other singers, freshmen through seniors, studying a variety of majors including nursing, studio art, elementary, special and music education, psychology, administration of justice, accounting, biology, and health administration.
Julia Casberg, a Pitches with Attitude member and a 2012 graduate of Old Saybrook High School, is a sophomore secondary education English major at Salve Regina University and in the Pell honors program. She is also a part of the SRU Dance club and University Chorus and Madrigals.
The May 24 performance ends a five-day tour of the girls’ hometowns, which also included stops in Rockport, Massachusetts and Orono, Maine.
The free hour-long concert includes contemporary pop to easy listening selections made famous by Sara Bareilles, Mariah Carey and others. Bring family, friends and a chair or blanket for a fun afternoon filled with music.
For more information visit Pitches with Attitude’s Facebook page, https://www.facebook.com/pitcheswithattitude?ref=br_tf, or email athanasula@comcast.net. |
U.S. stocks started November with a strong rally Thursday, following mostly upbeat jobs reports. All three major indexes jumped more than 1%, as Wall Street continues to recover from a two-day trading suspension due to Superstorm Sandy.
The Dow Jones industrial average rose 135 points, or 1%, the Nasdaq increased 1.4% and the S&P 500 jumped 1.1%.
The gains came as investors parsed through a bevy of corporate and economic news, including planned job cuts, private sector job gains and initial unemployment claims. Those numbers are all a prelude to the government's monthly jobs report, which will be released before markets open Friday.
While outplacement firm Challenger, Gray & Christmas reported the number of planned job cuts surged to a five-month high in October, two other reports were positive. Payroll processor ADP said U.S. private-sector employers added 158,000 jobs in October, which was above expectations, and weekly initial jobless claims fell by 9,000 to 363,000 last week, coming in lower than forecasts.
"The employment numbers look a little stronger than we were anticipating," said Kim Forrest, senior equity analyst at Fort Pitt Capital. But Forrest warned that doesn't necessarily mean the October jobs report will be stronger than expected.
"From all the chatter that we've been listening to during company earnings calls, it doesn't seem like American companies have been doing much hiring," she said. "There may be some job growth at small businesses, but even with that, I'd be surprised to see a strong employment number."
Economists surveyed by CNNMoney are expecting the economy to have added 125,000 jobs in October, up from 114,000 the prior month. They're also expecting the unemployment rate to tick up to 7.9%, from 7.8% in September.
Meanwhile, U.S. manufacturing activity continued to rebound in October, and the Conference Board's consumer confidence index rose to the highest level since February 2008. Construction spending increased in September by the most in three months, though the jump was slightly less than analysts were looking for.
Companies: The corporate world was also busy Thursday. Better-than-expected earnings sent shares of Exxon Mobil (XOM) slightly higher.
Netflix (NFLX) shares retreated after a massive 14% runup Wednesday. The previous day's gains came after famed corporate raider Carl Icahn disclosed he bought a 10% stake in Netflix, and strongly hinted he'd like a larger company to buy the streaming video and DVD service.
Japan-based Panasonic (PC) released an earnings report Thursday that was full of bad news. The electronics company posted a loss, dramatically lowered its forecast for the year, and announced it will suspend its dividend. Business conditions are expected to become "much more severe." Shares of the company declined.
Also in Japan, Sony (SNE) reported a narrower loss for its fiscal second quarter and reaffirmed its full-year forecast for a swing to profit. Its U.S.-traded shares gained ground.
Shares of Ford (F) rose after the company announced that Alan Mulally would remain president and CEO through at least 2014, and named Mark Fields as chief operating officer.
Shares of Visa (V) gained after its Wednesday earnings report topped analysts' estimates. Sirius (SIRI) also gained after the satellite radio company reported its new subscriber sign-ups were strong last quarter.
Pfizer (PFE) slipped after it reported quarterly revenue that fell far below analysts' estimates.
GM (GM) said its U.S. sales jumped 5% in October to the highest levels since 2007.
After the closing bell, AIG (AIG), Starbucks (SBUX) and LinkedIn (LNKD) reported results.
Shares of Starbucks spiked in after-hours trading after the coffee chain hiked its dividend as it beat its profit and revenue forecast.
AIG also topped expectations, but shares of the insurance giant slipped in after-hours trading.
LinkedIn's stock bounced in after-hours trading after the company topped Wall Street's expectations.
World Markets: European stocks closed sharply higher. Britain's FTSE 100 rose 1.3%, the DAX in Germany increased 1% and France's CAC 40 added 1.4%.
Asian markets closed higher. The Shanghai Composite had the strongest gains, up 1.7%, while the Hang Seng in Hong Kong jumped 0.8%, and Japan's Nikkei rose 0.2%.
China's government reported earlier in the day that its official purchasing manager's index jumped to 50.2 in October, from 49.8 the previous month. Any reading above 50 indicates that factory conditions are improving in the manufacturing sector.
Currencies and commodities: The dollar rose versus the euro, the British pound and the Japanese yen.
Oil for December delivery added 85 cents to settle at $87.09 a barrel.
Gold futures for December delivery fell $3.60 to settle at $1,715.50 an ounce.
Bonds: The price on the benchmark 10-year U.S. Treasury edged lower, pushing the yield up to 1.72% from 1.69% late Wednesday. |
// src/tmxparser.h
/**
* Minimal parser for Tiled (.tmx) map files, built on top of tinyxml2.
*/
#ifndef _LIB_TMX_PARSER_H_
#define _LIB_TMX_PARSER_H_
#include <string>
#ifdef __GXX_EXPERIMENTAL_CXX0X__
#include <unordered_map>
#else
#include <map>
#endif
#include <vector>
#include <tinyxml2.h>
namespace tmxparser
{
typedef enum
{
kSuccess,
kErrorParsing,
kMissingRequiredAttribute,
kMissingMapNode,
kMissingDataNode,
kMalformedPropertyNode,
} TmxReturn;
#ifdef __GXX_EXPERIMENTAL_CXX0X__
typedef std::unordered_map<std::string, std::string> TmxPropertyMap_t;
#else
typedef std::map<std::string, std::string> TmxPropertyMap_t;
#endif
/**
* Used to identify tmx file encoding type for data tags
*/
typedef enum
{
kEncodingXml, //!< No encoding in tiled means XML
kEncodingBase64,//!< kEncodingBase64
kEncodingCsv //!< kEncodingCsv
} TmxDataNodeEncodingType;
typedef enum
{
kCompressionNone,
kCompressionZlib,
kCompressionGzip,
} TmxDataCompressionType;
typedef enum
{
kOrthogonal,
kIsometric,
kStaggered
} TmxOrientation;
typedef enum
{
kPolygon,
kPolyline,
kEllipse,
kSquare,
} TmxShapeType;
typedef struct
{
std::string name;
std::string value;
} TmxProperty;
typedef struct
{
unsigned int id;
TmxPropertyMap_t propertyMap;
} TmxTileDefinition;
typedef std::vector<TmxTileDefinition> TmxTileDefinitionCollection_t;
typedef struct
{
int x;
int y;
} TmxTileOffset;
typedef struct
{
std::string format;
std::string source;
std::string transparentColor;
unsigned int width;
unsigned int height;
} TmxImage;
typedef struct
{
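// Global id of the first tile in this tileset; tile gids used in layers are offset by this value.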
unsigned int firstgid;
std::string name;
unsigned int tileWidth;
unsigned int tileHeight;
unsigned int tileSpacingInImage;
unsigned int tileMarginInImage;
TmxImage image;
TmxTileDefinitionCollection_t _tiles;
} TmxTileset;
typedef std::vector<TmxTileset> TmxTilesetCollection_t;
typedef struct
{
unsigned int gid;
unsigned int tilesetIndex;
unsigned int tileInTilesetIndex;
} TmxLayerTile;
typedef std::vector<TmxLayerTile> TmxLayerTileCollection_t;
typedef struct
{
std::string name;
unsigned int width;
unsigned int height;
float opacity;
bool visible;
TmxPropertyMap_t propertyMap;
TmxLayerTileCollection_t tiles;
} TmxLayer;
typedef std::vector<TmxLayer> TmxLayerCollection_t;
typedef std::vector<std::pair<int, int> > TmxShapePointCollection_t;
typedef struct
{
std::string name;
std::string type;
int x;
int y;
unsigned int width;
unsigned int height;
float rotation;
unsigned int referenceGid;
bool visible;
TmxPropertyMap_t propertyMap;
TmxShapeType shapeType;
TmxShapePointCollection_t shapePoints;
} TmxObject;
typedef std::vector<TmxObject> TmxObjectCollection_t;
typedef struct
{
std::string name;
std::string color;
float opacity;
bool visible;
TmxPropertyMap_t propertyMap;
TmxObjectCollection_t objects;
} TmxObjectGroup;
typedef std::vector<TmxObjectGroup> TmxObjectGroupCollection_t;
typedef struct
{
std::string version;
std::string orientation;
unsigned int width;
unsigned int height;
unsigned int tileWidth;
unsigned int tileHeight;
std::string backgroundColor;
TmxPropertyMap_t propertyMap;
TmxTilesetCollection_t tilesetCollection;
TmxLayerCollection_t layerCollection;
TmxObjectGroupCollection_t objectGroupCollection;
} TmxMap;
/**
* Parse a tmx from a filename.
* @param fileName
* @param outMap
* @return
*/
TmxReturn parseFromFile(const std::string& fileName, TmxMap* outMap);
/**
* Parse a tmx file from memory.
* @param data Tmx file in memory
* @param length Size of the data buffer
* @param outMap
* @return
*/
TmxReturn parseFromMemory(void* data, size_t length, TmxMap* outMap);
}
#endif /* _LIB_TMX_PARSER_H_ */
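/*
* Illustrative usage sketch (not part of the library): the file name below is an
* assumption, and error handling is reduced to a simple success check.
*
*   tmxparser::TmxMap map;
*   tmxparser::TmxReturn ret = tmxparser::parseFromFile("assets/level1.tmx", &map);
*   if (ret == tmxparser::kSuccess)
*   {
*       for (size_t i = 0; i < map.layerCollection.size(); ++i)
*       {
*           const tmxparser::TmxLayer& layer = map.layerCollection[i];
*           // layer.tiles holds one TmxLayerTile per map cell
*       }
*   }
*/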
|
extern crate iron;
use iron::prelude::*;
fn main() {
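// Chain wraps the handler so middleware (logging, sessions, etc.) could be linked in before or after it.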
let chain = Chain::new(hello_world);
Iron::new(chain).http("localhost:8556").unwrap();
}
//TODO: Remove this and put the actual web implementation in.
fn hello_world(_: &mut Request) -> IronResult<Response> {
Ok(Response::with((iron::status::Ok, "Hello World")))
}
|
Last week at DevNation we announced the MicroProfile, which is work we're doing with IBM, TomiTribe, Payara and the London Java Community, amongst others. Since then I've seen a few people write articles or talk about it on social media and there appear to be a few things we definitely need to clarify.
For a start, the work we're doing is not a standard. As I mentioned during the keynote, we may eventually take it to a standards body (more on that later), but at this stage the world of microservices is relatively new and evolving, let alone the world of enterprise Java-based microservices. In general, standardising too early results in ineffective or otherwise irrelevant standards so we don't want to consider that until we're further down the line. Now that doesn't mean we won't be using standards to develop. Far from it, as I mentioned we're thinking about a minimum profile based on Java EE (probably 7 since 8 isn't yet finalised) initially. Portability and interoperability are key things we want to achieve with this work. We may never be able to get everyone to agree on a single implementation, but at least if they can port their applications or communicate within heterogeneous deployments, then that's a much more realistic goal. After all microservices, and SOA before it, isn't prescriptive about an implementation, and probably never should be.
Although we're starting with Java EE as a basis, we're not going to tie ourselves to that. If you look at some of the other approaches to microservices, such as Netflix OSS, or OpenShift, there are features such as logging, or events or even asynchrony, which aren't currently available as part of EE. Again, I mentioned this during the announcement, but we all expect this work to evolve enterprise Java in these and other areas as we progress. Java EE represents an evolution of enterprise middleware and we all believe that enterprise Java has to evolve beyond where it is today. Maybe we'll take these evolutions to a standards body too, but once again it's way too early to commit to any of that.
Another thing which we brought out during the announcement was that we want this work to be driven through good open source principles. We're working in the open, with a public repository and mailing list for collaboration. We're also not restricting the people or companies that can be involved. In fact we want as wide participation as possible, something which we have seen grow since the original announcement, which is good! This means that our initial thoughts on what constitutes the minimum profile are also open for discussion: we had to put a stick in the ground for the announcement, but we're willing to change our position based on the community collaboration. We've placed few limitations on ourselves other than the fact we feel it important to get an agreed initial (final) profile out by around September 2016.
I think this leaves me with just one other thing to address: which standards body? The obvious candidate would be the JCP given that we're starting with Java EE. However, as I mentioned earlier we may find that we need to evolve the approach to incorporate things which go way beyond the existing standard, which may make a different standards body more appropriate. We simply don't know at this stage and certainly don't want to rule anything in or out. There's enough time for us think on that without rushing to a decision. |
/* Generated by RuntimeBrowser
Image: /System/Library/PrivateFrameworks/PhotosPlayer.framework/PhotosPlayer
*/
@interface ISVideoAnalyzer : NSObject {
long long __currentRequestID;
NSObject<OS_dispatch_queue> * _isolationQueue;
NSOperationQueue * _operationQueue;
NSMutableDictionary * _operationsByRequestID;
}
@property (setter=_setCurrentRequestID:, nonatomic) long long _currentRequestID;
+ (id)defaultAnalyzer;
- (void).cxx_destruct;
- (long long)_currentRequestID;
- (void)_handleAllFrameTimesRequestFinishedForTime:(double)arg1 frameTimes:(id)arg2 completion:(id /* block */)arg3;
- (void)_setCurrentRequestID:(long long)arg1;
- (id)init;
- (long long)requestAllFrameTimesInAsset:(id)arg1 completion:(id /* block */)arg2;
- (long long)requestLastFrameTimeBeforeTime:(double)arg1 inAsset:(id)arg2 completion:(id /* block */)arg3;
@end
|
MEMPHIS, Tenn. — Memphis radio legend George Klein was laid to rest Sunday afternoon at Memorial Park in Memphis.
Klein passed away at the age of 83 on February 5, 2019. He is survived by his wife, Dara Patterson.
The popular radio personality became friends with Elvis Presley in high school and stayed close to the King of Rock ‘n’ Roll throughout his career.
Presley’s former wife, Priscilla Presley, told The Associated Press that Klein died at a hospice in Memphis. Priscilla said Klein had been suffering from illness, including pneumonia, for about two weeks. |
const { dialog, ipcMain } = require('electron');
import { Convert } from './convert';
import { Logging } from './logging';
import { Serial } from './serial';
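// One persistent serial connection per hardware module (GPS receiver and LoRa radio).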
const serial = {
gps: new Serial(),
lora: new Serial()
};
// Receiving data sent from the renderer process to the main process
ipcMain.on('serialList', (event, arg) => {
Serial.list().then(ports => {
event.reply('serialList-reply', ports);
}).catch(message => {
event.reply('error', message || 'Failed to get the list of serial ports');
});
});
ipcMain.on('baudrateList', (event, arg) => {
event.reply('baudrateList-reply', Serial.boudRates);
});
ipcMain.on('connect', (event, arg) => {
serial[arg.module].connect({
baudRate: arg.baudRate,
autoOpen: true,
path: arg.path,
on: (data: string) => {
Logging.data(event, arg.module, data);
event.reply(arg.module + '-data-received', data);
}
}).catch(message => {
event.reply('error', message || 'An unexpected error occurred');
});
});
ipcMain.on('disconnect', (event, arg) => {
serial[arg].disconnect();
event.reply(arg + '-disconnect-reply', 'ok');
});
ipcMain.on('openSavePathDialog', (event, arg) => {
const path = dialog.showOpenDialogSync({ properties: ['openDirectory', 'createDirectory', 'promptToCreate'] });
event.reply('openSavePathDialog-reply', path);
});
ipcMain.on('logging', (event, arg) => {
Logging.toggle(event, arg.savePath);
event.reply('logging-reply', Logging.status);
});
ipcMain.on('convertToGeoJson', (event, arg: string) => {
if (!arg) {
event.reply('error', 'Please select a file');
return;
}
try {
Convert.toGeoJson(arg);
} catch (error) {
event.reply('error', error.message);
}
})
|
package mrwms.com.m_tact.sampleproject.util.runtask;
import android.os.AsyncTask;
import android.os.Handler;
/**
* Created by NeverMore on 2018/03/10.
*
* A utility for running several request tasks in parallel and collecting their results.
*
* Example:
* new TaskUtil(() -> Requester.getString(), () -> Requester.getInt()) {
*     public void onPostExecute() {
*         Object[] results = getResults();
*         String a = (String) results[0];
*         int b = (int) results[1];
*     }
* }.execute();
*/
public abstract class TaskUtil {
final static int TIMEOUT=10000;
public TaskUtil(CustomSupplier<?> ... suppliers){
this.suppliers=suppliers;
results=new Object[suppliers.length];
// Requests could also be started here automatically, but execution is left to the caller.
// execute();
}
// The suppliers that perform each request.
private final CustomSupplier<?>[] suppliers;
// The collected responses, one slot per supplier.
private final Object[] results;
// Number of completed requests.
private int overTaskCount=0;
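// Ensures the user callback fires exactly once, even if several tasks finish back to back.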
private boolean flag=false;
public void execute(){
for(int i=0;i<suppliers.length;i++){
// Copy the loop index into an effectively final local so it can be used inside the anonymous class.
final int c=i;
AsyncTask<Void,Void,Void> task = new AsyncTask<Void,Void,Void>(){
@Override
protected Void doInBackground(Void... voids) {
results[c]=suppliers[c].get();
return null;
}
@Override
protected void onPostExecute(Void aVoid) {
// onPostExecute always runs on the main thread, so the counter is only ever touched from one thread.
overTaskCount++;
if(overTaskCount==suppliers.length){
synchronized (this){
if(!flag) {
flag=true;
//UIThread onPostExecute()
TaskUtil.this.onPostExecute();
}
}
}
}
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
Handler handler = new Handler();
handler.postDelayed(()->{
if(task.getStatus()== AsyncTask.Status.RUNNING)
task.cancel(true);
},TIMEOUT);
}
}
// Returns the collected responses, in the order the suppliers were passed to the constructor.
public Object[] getResults(){
return results;
}
/**
* Called on the main thread once every request has completed.
*/
public abstract void onPostExecute();
}
|
// Return the status of the file fname according to mode.
//
check_t
filestat::check_file(const char *fname, int mode)
{
if (!fname) {
save_err("Null filename.");
return (NOGO);
}
while (isspace(*fname)) fname++;
if (!*fname) {
save_err("Empty filename.");
return (NOGO);
}
GFTtype rt = filestat::get_file_type(fname);
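// A nonexistent path: reads fail outright, while writes are probed by attempting to create the file.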
if (rt == GFT_NONE) {
if (errno == ENOENT) {
if (mode == R_OK) {
save_err("File %s does not exist.", fname);
return (NO_EXIST);
}
FILE *fp = fopen(fname, "w");
if (fp) {
fclose(fp);
return (WRITE_OK);
}
}
}
if (rt == GFT_FILE) {
if (mode == R_OK) {
if (!access(fname, R_OK))
return (READ_OK);
}
else {
if (!access(fname, W_OK))
return (WRITE_OK);
}
}
const char *msg = "Error: can't %s file %s.";
save_err(msg, (mode == R_OK) ? "read" : "write", fname);
return (NOGO);
} |
// FetchOrganization returns the Organization with the given id
func (pd *API) FetchOrganization(orgID int) (Organization, error) {
url := fmt.Sprintf(pd.Endpoints.Organization, orgID)
res, err := pd.getEndpoint(url)
if err != nil {
return Organization{}, err
}
var pres struct {
apiResult
Data Organization
}
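// Tee the body into a buffer so the raw JSON can be logged on decode errors and re-read for custom field mapping.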
var buf bytes.Buffer
tee := io.TeeReader(res.Body, &buf)
err = json.NewDecoder(tee).Decode(&pres)
if err != nil {
logrus.Errorf("Error decoding result: %s", buf.String())
return Organization{}, err
}
o := pres.Data
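// If a custom organization field mapper is configured, decode the same payload into a generic map and let the hook copy custom fields onto the result.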
if pd.mapFieldsOrg != nil {
var cvres struct {
apiResult
Data map[string]interface{}
}
err = json.Unmarshal(buf.Bytes(), &cvres)
if err != nil {
logrus.Errorf("Error decoding result: %s", buf.String())
return Organization{}, err
}
pd.mapFieldsOrg(&o, cvres.Data)
}
return o, nil
} |
Lenticular rubidium uptake in hypertensive 'cataract-prone' salt-sensitive rats We have previously reported a high incidence of cataract formation in adult hypertensive salt-sensitive rats, suggesting that hypertension may be an important cataractogenic risk factor. Weanling salt-sensitive rats that eventually developed cataracts showed a marked increase in the pressor response to a high-sodium diet compared to salt-sensitive rats that did not develop cataracts. A lens and aqueous fluid electrolyte imbalance occurred in all adult salt-sensitive rats examined, but was greater in the salt-sensitive rats that developed cataracts, suggesting an alteration in lens and/or ciliary ion transport in cataracts associated with hypertension. In the present study, lens 86Rb uptake was measured in adult hypertensive salt-sensitive rats prior to cataract formation. 'Cataract-prone' salt-sensitive hypertensive rats (increased pressor response to a high-sodium diet given at weanling age), salt-sensitive hypertensive rats unlikely to develop cataracts and control salt-resistant rats were studied at the age of 16 weeks. Total and ouabain-insensitive lens 86Rb uptake were measured for the determination of ouabain-sensitive uptake, an index of Na+,K+-ATPase activity. Lens ouabain-sensitive 86Rb uptake was low in adult hypertensive cataract-prone salt-sensitive rats before cataract formation compared with values in control resistant rats. Intermediate values were observed in hypertensive salt-sensitive rats unlikely to develop cataracts. These data suggest that altered ion transport may play a pivotal role in cataractogenesis associated with this model of hypertension. The data are also consistent with the concept of a generalized defect in epithelial ion transport, at least in salt-sensitive hypertension. |
Two-Year Sustained Benefit of an Absorbable Implant for the Treatment of NVC Nasal airway obstruction (NAO) is one of the most frequently presenting symptoms in a typical otolaryngology practice. Therapies to correct nasal valve collapse (NVC) 1 include invasive functional rhinoplasty procedures involving lateral wall grafting with autologous 2 or synthetic nonabsorbable graft materials 3 and nonsurgical solutions such as nasal strips or cones. Recently, a minimally invasive procedure that entails placement of an absorbable nasal implant to support upper and lower lateral cartilage was developed to address NVC. 1 Case Report The patient was a 53-year-old white man with a 3-year history of NAO. The patient underwent prior septoplasty and turbinate reduction procedures. Examination using the modified Cottle maneuver revealed NVC was a contributor to the patient's symptoms. The patient completed the Nasal Obstruction Symptom Evaluation (NOSE) instrument at baseline and at subsequent follow-up visits. The patient had a baseline NOSE score of 65, indicative of severe NAO symptoms. 4 Surgical correction consisting of functional rhinoplasty using cartilaginous grafts, suturing techniques, or an absorbable implant was offered. The patient opted for and received the absorbable implant as part of a clinical study. The study protocol and informed consent were reviewed and approved by the governing ethics committees and the Federal Institute for Drugs and Medical Devices (BfArM) prior to subject enrollment, and the study was registered on clinicaltrials.gov (NCT02188589). The study was supported with research funding by Spirox (Redwood City, California). The implant comprises a 70:30 blend of poly-L-lactide (PLA), is predominantly cylindrical, and measures approximately 1 mm in diameter and 24 mm in length (Latera; Spirox). The implant is introduced through an endonasal insertion technique using a delivery tool consisting of a 16-gauge cannula. The target location of the implant was identified to provide maximum support to the upper and lower lateral cartilages (Figure 1A) at the area of maximal collapse. A skin hook was used to evert the alar rim, and the delivery device cannula was used to pierce the vestibular skin in the area of a conventional marginal incision. The cannula was advanced toward the vestibular lining and the caudal edge of the lower lateral cartilage. The implant was fully deployed in the target location. Follow-up visits occurred at 1 week and 1, 3, 6, 12, 18, and 24 months postprocedure. Throughout the 2-year follow-up period, no adverse events were reported, and the patient did not require intranasal steroids, external nasal device usage, surgeries, or other treatments. The NOSE score improved from a preoperative classification of severe to a postoperative classification of mild at all follow-up time points (week 1 = 25, month 1 = 25, month 3 = 15, month 6 = 5, month 12 = 5, month 18 = 20, and month 24 = 25). 
Although the NOSE scores fluctuated across the follow-up time points (5 to 25), all of the scores were indicative of mild symptoms. Cosmetic changes were assessed using 4 photographic views obtained under both static and full inhalation (frontal view, left side, right side, and chin up). An independent physician reviewer assessed cosmetic changes by comparing baseline images to follow-up images. This evaluation confirmed the absence of cosmetic changes from baseline to all follow-up time points (Figure 1B, static baseline and Figure 1C, 24 months postprocedure). Discussion This case report presents a patient with persistent nasal obstruction symptoms due to NVC. The patient was treated with an absorbable implant to support the upper and lower lateral cartilages. Recently, a meta-analysis was conducted by Rhee et al 5 of studies covering conventional invasive surgical procedures, such as septoplasty, turbinate reduction, and functional rhinoplasty, for treatment of NAO. The meta-analysis showed a mean improvement from a baseline NOSE score of 42 points. The patient in this case study achieved lasting benefits through 24 months with an average NOSE score improvement of 48 points. Future studies will need to confirm the benefit of this new technology, including objective assessments of NVC such as the grading system for lateral nasal wall insufficiency proposed by Lipan and Most. 4 In contrast to spreader grafts and batten grafts, this technique can be completed in a minimally invasive manner and does not require donor cartilage harvesting, shaping, and invasive surgical placement. In addition, use of an absorbable copolymer that is incorporated into tissue over time may lower risks associated with extrusion compared to nonabsorbable alloplastic materials that are associated with high extrusion rates. While the absorbable implant presents these advantages, the long-term improvement beyond the absorption profile of the implant 2 has not been fully evaluated. However, as this case study suggests, patient improvement may continue past the absorption profile. Here we report on a successful case study with 24-month follow-up using a minimally invasive technique to provide support to the lateral cartilages as an alternative to functional rhinoplasty in this patient with NVC. Author Contributions Marion San Nicol, principal investigator (conception of the work, data acquisition, analysis and interpretation, drafting, final approval, accountability for all aspects of the work); Alexander Berghaus, coprincipal investigator (conception of the work, data acquisition, analysis and interpretation, drafting, final approval, accountability for all aspects of the work). Disclosures Competing interests: Marion San Nicol is a consultant for Spirox and has received research funding. Sponsorships: Spirox, including approval of manuscript, clinical database management, and monitoring. |
Silicon substrate surface modification with nanodiamonds for CVD-synthesis of polycrystalline diamond The use of polycrystalline diamond films is promising in photonics and electronics, as well as in other fields of science and technology. At present, it is limited by the complexity of obtaining high-quality films of the required size, associated with crack formation at the film periphery caused by thermal stresses. Another key point is to increase the film growth rate without sacrificing continuity and high quality. Substrate surface preparation makes it possible to increase the initial rate of film formation and to form a continuous layer of diamond film on the substrate surface. This work presents the results of polycrystalline film synthesis and the selection of the optimal deposition regime. These results make it possible to obtain high-quality polycrystalline diamond films of a larger area, which will significantly expand the scope of their application. Introduction High-quality single-crystal and polycrystalline diamond films can be obtained using synthesis from the gas phase (CVD). At the same time, precise control of the synthesis parameters (temperature, pressure, flow rate and purity of the gases used) makes it possible to obtain films with the required structure and, correspondingly, the required properties with high repeatability. CVD-synthesis of polycrystalline diamond films can be realized using substrates of various materials. The quality of the resulting films directly depends on the properties of the substrate used; in order to avoid crack formation along the plate periphery and discontinuities in the deposited film, the substrate must have a number of special properties. Silicon substrates are the most widely used due to the specific properties of silicon: high heat resistance and thermal conductivity, a low coefficient of thermal expansion and a high probability of diamond nucleation on its surface. Increasing the number of nucleation centers is a promising direction for improving the continuity of the deposited film and for obtaining diamond films with the required grain size and, correspondingly, properties. Special substrate pretreatment, including seeding with nanodiamonds, can be used to increase the number of nucleation centers. At the present time, there are different approaches for modifying the silicon substrate surface with nanodispersed diamonds. This work presents one variant of seeding a silicon substrate with nanodiamonds for the synthesis of continuous polycrystalline diamond films. Materials and methods In this work, polycrystalline diamond CVD-films were deposited on silicon substrates with a diameter of 62.5 mm and orientation. We used diamond nanopowder manufactured by the company "Diamond Center" (St. Petersburg, Russia) as centers of nucleation. The quality of the nanodiamonds and their dimensional characteristics were studied using a JEOL JEM 2100F transmission electron microscope. The selection of the dispersing medium was carried out using a Malvern Zetasizer Nano ZS analyzer and was based on the study of colloidal solution stability (measurement of the particle zeta potential) and on measuring the average size of their agglomerates. We used unique scientific equipment for dispersion and dispersion control of our system: the hardware and software complex for analysis and production of nanodispersed systems by chemical methods and the unique research stand for high-intensity cavitation effects (UNIS VKV, NUST "MISIS", Moscow, Russia).
We used an Ardis 300 to deposit a polycrystalline diamond film of optimal quality. The gas phase consisted of methane with a purity of 99.5% and hydrogen with a purity of 99.9999%; the microwave power was maintained at 3800 W, the pressure in the system was 8.7 kPa, and the substrate temperature was 900 °C. Figure 1 shows an image of diamond nanoparticles and the electron diffraction pattern of the selected area, taken with the JEOL JEM 2100F transmission electron microscope. The absence of ring blurring shows that the nanodiamonds consist of crystalline particles that do not have X-ray amorphous films or particles on their surface. Preparation of nanodiamond suspension We studied the following substances as a dispersing medium for nanodiamond deposition on the silicon substrate: double-distilled water obtained using a Merck Millipore Direct Q8 UV; heptane; and ethyl alcohol 95%. Experiments using the Zetasizer Nano showed that the best dispersion was achieved in ethyl alcohol. This is confirmed by measuring the zeta potential of the colloidal solution of nanodiamond in each medium: the zeta potential was minus 34.8 mV for the double-distilled water with nanodiamonds, minus 44.1 mV for heptane, and minus 43.8 mV for ethanol. Taking into account the proximity of the zeta potentials of the diamond nanopowder colloidal solutions in heptane and ethyl alcohol, as well as the higher cost of heptane in comparison with ethyl alcohol, we decided to use ethyl alcohol as the dispersing medium in this work. Figure 2 demonstrates a typical bar graph of the nanodiamond agglomerate size distribution in ethanol after using the UNIS VKV. The average size of the agglomerates is about 20 nm. This is connected with strong bonds between individual particles due to the peculiarities of the diamond nanopowder production process, with the formation of a double electrical layer in ethanol, and with the tendency of nanoparticles to compensate their high surface energy by reducing the specific surface area. The activation and modification (functionalization) of the nanodiamond surface was carried out using the unique research stand for high-intensity cavitation effects (UNIS VKV) at NUST "MISIS" (Moscow, Russia). For this, we loaded the optimal amount of nanodiamond powder (0.002 g) into the UNIS VKV reactor and then added ethanol up to 0.1 l. Then the installation was turned on for 4 min at an ultrasonic power of 500 W. To control the dispersion of the system we used the hardware and software complex for analysis and production of nanodispersed systems by chemical methods. Substrate preparation In this work, we used silicon wafers with a diameter of 62.5 mm as substrates for the deposition of diamond polycrystalline CVD-films. One of these substrates is shown in figure 3. To create homogeneous seeding for the subsequent operation of forming crystallization centers, the plate must be polished to a roughness of less than 1 µm. Figure 4 shows a silicon wafer after polishing. Formation of crystallization centers To form crystallization centers, we applied the colloidal solution to a silicon substrate immediately after preparation using a pipette and then rubbed it for 30 s with a lint-free tissue. Then we left the samples to dry naturally in a ventilation hood until they were completely dry, after which the samples were ready for deposition of the diamond CVD-layer. Polycrystalline diamond synthesis on silicon wafers In this study, we used an Ardis 300 to deposit polycrystalline diamond films of optimal quality.
The gas phase consisted of methane with a purity of 99.5% and hydrogen with a purity of 99.9999%; the microwave power was maintained at 3800 W, the pressure in the system was 8.7 kPa, and the substrate temperature was 900 °C. By varying the gas flows, we experimentally chose the optimal values for obtaining high-quality films: hydrogen 591 cm³/min and methane 9 cm³/min. Based on these flow values, we calculated the percentage of methane in the plasma, which was 1.5%; the percentage of hydrogen, respectively, was 98.5%. In this research, we obtained polycrystalline diamond layers on silicon substrates; the film thickness was 20 µm (figure 5). Silicon substrate etching Etching is used to separate the obtained polycrystalline diamond film from the silicon substrate. In order to reduce gas formation during etching, it is advisable to use a solution of ammonium fluoride with hydrofluoric acid. After etching was completed, we washed the separated films in distilled water twice. After that, the resulting films were dried; they can also be subjected to further operations, for example, annealing in air at 540 °C to remove the graphite-like phase and clarify the obtained polycrystalline diamond films. Conclusion In this work, we optimized the polycrystalline diamond film deposition process. We obtained the parameters at which a high-quality polycrystalline diamond film grows: power of microwave radiation from the magnetron 3.8 kW; total gas pressure in the reactor 8.7 kPa. The developed technology of polycrystalline diamond film deposition makes it possible to synthesize experimental samples of CVD diamond polycrystalline wafers for use in high-power lasers.
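As a quick arithmetic check of the plasma composition quoted above, the methane percentage follows directly from the two stated flow rates (no values beyond those given are assumed):

x_{\mathrm{CH_4}} = \frac{Q_{\mathrm{CH_4}}}{Q_{\mathrm{CH_4}} + Q_{\mathrm{H_2}}} = \frac{9}{9 + 591} = \frac{9}{600} = 0.015 = 1.5\%, \qquad x_{\mathrm{H_2}} = \frac{591}{600} = 0.985 = 98.5\%

Both values match the percentages quoted for the deposition regime. |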
package com.redpois0n.gscrot.keys;
import java.util.ArrayList;
import java.util.List;
import org.jnativehook.keyboard.NativeKeyEvent;
import org.jnativehook.keyboard.NativeKeyListener;
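/**
 * Global keyboard hook listener (JNativeHook). Keeps track of the raw key codes
 * that are currently held down and fires a {@link KeyBinding.Type} as soon as
 * every key of that binding is pressed at the same time.
 */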
public class KeyListener implements NativeKeyListener {
private static final List<Integer> pressed = new ArrayList<Integer>();
public static boolean isPressed(int keycode) {
return pressed.contains(keycode);
}
private static void removePressed(int keycode) {
for (int i = 0; i < pressed.size(); i++) {
if (pressed.get(i).equals(keycode)) {
pressed.remove(i);
i--; // step back: the next element has shifted into this index
}
}
}
@Override
public void nativeKeyPressed(NativeKeyEvent e) {
// Ignore OS key-repeat events so the same key code is not added twice
if (!pressed.contains(e.getRawCode())) {
pressed.add(e.getRawCode());
}
for (KeyBinding.Type k : KeyBindings.KEYBINDINGS.keySet()) {
KeyBinding kb = KeyBindings.KEYBINDINGS.get(k);
boolean trigger = false;
for (int i : kb.getKeys()) {
if (i != 0) {
if (pressed.contains(i)) {
trigger = true;
} else {
trigger = false;
break;
}
}
}
if (trigger) {
k.trigger();
}
}
}
@Override
public void nativeKeyReleased(NativeKeyEvent e) {
removePressed(e.getRawCode());
}
@Override
public void nativeKeyTyped(NativeKeyEvent e) {
}
}
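A minimal registration sketch (assuming the standard JNativeHook GlobalScreen API from the same org.jnativehook package, and that KeyBindings has been populated elsewhere; the bootstrap class name below is hypothetical):

import org.jnativehook.GlobalScreen;
import org.jnativehook.NativeHookException;

public class HotkeyBootstrap {
public static void main(String[] args) {
try {
GlobalScreen.registerNativeHook(); // install the OS-level hook
GlobalScreen.addNativeKeyListener(new KeyListener()); // attach the listener above
} catch (NativeHookException e) {
e.printStackTrace();
}
}
}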
|
ANN ARBOR, Mich. – University of Michigan researchers have identified the first biomarker of graft-versus-host disease of the skin. The discovery makes possible a simple blood test that should solve a treatment dilemma facing doctors with patients who frequently develop rashes after bone marrow transplants. The biomarker also makes it possible to predict who is at greatest risk of dying of graft-versus-host disease, or GVHD.
GVHD is a serious, frequently fatal complication of allogeneic bone marrow transplants. These transplants, in which a person’s own bone marrow cells are replaced with bone marrow cells from a donor, are a common treatment for children and adults with sickle cell anemia, leukemia, lymphoma, myeloma and other blood diseases.
Rashes are very common in patients after bone marrow transplants. They may signal the onset of acute GVHD. But until now, a skin biopsy was the only reliable way for doctors to determine whether the rash is caused by antibiotics commonly used to treat bone marrow transplant patients, or is instead GVHD of the skin, where the disease appears in about half of cases.
Because a firm diagnosis is not easy and the threat of GVHD is grave, many doctors who suspect a rash is due to GVHD prescribe systemic high-dose steroids to suppress GVHD, which further weaken a patient’s already compromised immune system.
The U-M scientists identified a key biomarker or signature protein of GVHD of the skin called elafin. Elafin levels can be measured in a blood test to identify which bone marrow transplant patients with skin rashes actually have GVHD.
The test, which U-M hopes to make available to clinicians soon, will make informed treatment possible, says James Ferrara, M.D., Ruth Heyn Endowed Professor of Pediatrics and Communicable Diseases and director of the bone marrow transplant program at U-M. He is senior author of the study, which appears online this week in Science Translational Medicine.
“This blood test can determine the risk a patient may have for further complications, and thus physicians will be able to adjust therapy to the degree of risk, rather than treating every patient in exactly the same way,” says Ferrara.
The researchers also found that bone marrow transplant patients with high levels of elafin were more likely to die of GVHD than people with low levels. That information also could guide treatment choices. A method to evaluate treatment options is badly needed because transplant patients today often require more than 20 different medications a day, many with very serious side effects.
“This is a good example of how proteomics, the large-scale study of proteins, can help lead to personalized medicine in the future,” says Ferrara.
More than 18,000 people in 2005 had allogeneic bone marrow transplants or autologous transplants, in which a person’s own cells are used. About half of allogeneic transplant patients develop significant GVHD, in which cells from the donor attack and destroy the patient’s cells. Affecting the skin, liver and gastrointestinal tract, GVHD is a serious complication for an otherwise life-saving treatment.
Using mass spectrometry, the scientists screened a large number of proteins in the blood and skin of bone marrow transplant patients to search for biomarkers involved in GVHD of the skin. A biomarker is a protein present in blood or other bodily fluids whose level can be measured to determine if a disease is present.
Elafin emerged as a significant biomarker. It is made in the surface layer of skin cells in response to certain inflammatory proteins involved in GVHD.
Using blood samples from bone marrow transplant patients with and without GVHD, the researchers found that people with GVHD overproduced elafin in their epidermis. The researchers then looked at 500 patients with skin rashes, and found high levels of elafin in those with GVHD rashes, but not in people with other rashes.
By tracking people with high and low elafin levels over time, they also found that those with high levels of elafin died from bone marrow transplant complications three times more often than patients with low levels.
Other U-M authors: Thomas M. Braun, John E. Levine, Jeffrey Crawford, Bryan Coffing, Stephen Olsen, Sung W. Choi, Carrie Kitko, Shin Mineishi, Gregory Yanik, Edward Peres, David Hanauer, Ying Wang and Pavan Reddy.
Authors at the Fred Hutchinson Cancer Research Center: Jason Hogan, Hong Wang, Vitor Faca, Sharon Pitteri, Qing Zhang, Alice Chin and Samir Hanash. |
Don't Lose Your Head! Program on Prevention and Early Detection of Head and Neck Cancers in Poland in the Years 2017-2019 Amount raised: 1 635 652,11 Polish Zloty (PLN; about 480,000 USD) Background and context: Head and neck cancers (HNCs) (ICD-10: C00-C15, C30-C33, C69, C73) are a significant clinical and social problem. While the overall number of new cases is stable at roughly the same level (∼6000 new cases each year), an increase in HNC incidence among young adults (<40 y.o.) is observed. This phenomenon is mostly connected with HPV infections, because the great majority of this group has never smoked and never abused alcohol (smoking and drinking high-percentage alcohol are well-known risk factors for HNCs). Because there is no screening program for HNCs and the treatment prognosis for these cancers is unfavorable, preventive actions are the basic and most effective tool for decreasing HNC incidence and mortality. Aim: To implement a pilot prophylactic program on early detection of HNCs in 5 Polish voivodeships. Strategy/Tactics: The main objective will be achieved by influencing the 5 basic causal areas of the problem of late HNC recognition in Poland. These are: 1) awareness of HNC risk factors in Polish society, 2) competences of medical staff in prophylaxis, health education and diagnostics of HNCs (120 doctors and nurses - especially from primary health care - and 100 dentists), 3) access to preventive examinations (800 people from 5 voivodeships), 4) launching mechanisms of HNC prophylaxis through the involvement of representatives of nongovernmental and local government organizations who have constant contact with people in HNC risk groups, 5) increasing knowledge of the incidence of oncogenic HPV varieties in the oral cavity of healthy people and the frequency of HPV infection in the oral cavity in relation to smoking and drinking alcohol. Apart from the trainings for health professionals, trainings for street workers are also provided in this program. Program process: Maria Sklodowska-Curie Institute - Oncology Center successfully applied for funds for the implementation of the created project. The program is cofinanced by the European Union, from European Social Funds within the Operational Program Knowledge Education Development 2014-2020, V. Priority axis: Support for the health area, Measure 5.1: Preventive programs, and is free of charge for participants. At present, the project team is conducting procedures aimed at, among other things, recruitment of participants, cooperation with NGOs, creation of the agenda of the meetings, and preparation of the awareness campaign. Costs and returns: The main obstacles and costs are connected with administrative difficulties and doctors' tight schedules (lack of time for additional activities). The biggest return will be improvement of early HNC detection and a decrease in mortality caused by these cancers. What was learned: Preliminary observations show that patients are very interested in participating in the HNC early detection and prevention program. Moreover, in many cases they have never participated in any actions concerning HNC education. |
/**
* Determines whether a given JVMCI AMD64.CPUFeature is present on the current hardware. Because
* the CPUFeatures available vary across different JDK versions, the features are queried via
* their name, as opposed to the actual enum.
*/
private static boolean isFeaturePresent(String featureName, AMD64LibCHelper.CPUFeatures cpuFeatures) {
switch (featureName) {
case "CX8":
return cpuFeatures.fCX8();
case "CMOV":
return cpuFeatures.fCMOV();
case "FXSR":
return cpuFeatures.fFXSR();
case "HT":
return cpuFeatures.fHT();
case "MMX":
return cpuFeatures.fMMX();
case "AMD_3DNOW_PREFETCH":
return cpuFeatures.fAMD3DNOWPREFETCH();
case "SSE":
return cpuFeatures.fSSE();
case "SSE2":
return cpuFeatures.fSSE2();
case "SSE3":
return cpuFeatures.fSSE3();
case "SSSE3":
return cpuFeatures.fSSSE3();
case "SSE4A":
return cpuFeatures.fSSE4A();
case "SSE4_1":
return cpuFeatures.fSSE41();
case "SSE4_2":
return cpuFeatures.fSSE42();
case "POPCNT":
return cpuFeatures.fPOPCNT();
case "LZCNT":
return cpuFeatures.fLZCNT();
case "TSC":
return cpuFeatures.fTSC();
case "TSCINV":
return cpuFeatures.fTSCINV();
case "AVX":
return cpuFeatures.fAVX();
case "AVX2":
return cpuFeatures.fAVX2();
case "AES":
return cpuFeatures.fAES();
case "ERMS":
return cpuFeatures.fERMS();
case "CLMUL":
return cpuFeatures.fCLMUL();
case "BMI1":
return cpuFeatures.fBMI1();
case "BMI2":
return cpuFeatures.fBMI2();
case "RTM":
return cpuFeatures.fRTM();
case "ADX":
return cpuFeatures.fADX();
case "AVX512F":
return cpuFeatures.fAVX512F();
case "AVX512DQ":
return cpuFeatures.fAVX512DQ();
case "AVX512PF":
return cpuFeatures.fAVX512PF();
case "AVX512ER":
return cpuFeatures.fAVX512ER();
case "AVX512CD":
return cpuFeatures.fAVX512CD();
case "AVX512BW":
return cpuFeatures.fAVX512BW();
case "AVX512VL":
return cpuFeatures.fAVX512VL();
case "SHA":
return cpuFeatures.fSHA();
case "FMA":
return cpuFeatures.fFMA();
default:
throw VMError.shouldNotReachHere("Missing feature check: " + featureName);
}
} |
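A hedged sketch of how a check like this is typically driven: iterate the JVMCI enum of the running JDK and query each constant by name, so the switch above stays the single source of truth. AMD64.CPUFeature is the JVMCI enum referenced in the comment; the helper name below is hypothetical.

// File-level imports assumed: java.util.EnumSet, jdk.vm.ci.amd64.AMD64
private static EnumSet<AMD64.CPUFeature> presentFeatures(AMD64LibCHelper.CPUFeatures cpuFeatures) {
EnumSet<AMD64.CPUFeature> result = EnumSet.noneOf(AMD64.CPUFeature.class);
for (AMD64.CPUFeature feature : AMD64.CPUFeature.values()) {
if (isFeaturePresent(feature.name(), cpuFeatures)) { // query by name, not by enum identity
result.add(feature);
}
}
return result;
}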
/**
* Convert this object to an XML string, writing empty elements out explicitly.
*
* @param outputFragment whether to prepend the XML declaration fragment to the output
* @param formattedOutput whether to pretty-print (indent) the output
* @param encoding charset encoding used for the declaration and the output
* @return the XML string, or null if marshalling fails
* @throws XmlException the xml exception
*/
public String toXML(boolean outputFragment, boolean formattedOutput, String encoding) throws XmlException {
StringWriter stringWriter = null;
try{
if (encoding == null) {
encoding = Globals.DEFAULT_ENCODING;
}
stringWriter = new StringWriter();
XMLStreamWriter xmlWriter = XMLOutputFactory.newInstance().createXMLStreamWriter(stringWriter);
CDataStreamWriter streamWriter = new CDataStreamWriter(xmlWriter);
JAXBContext jaxbContext = JAXBContext.newInstance(this.getClass());
Marshaller marshaller = jaxbContext.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, formattedOutput);
marshaller.setProperty(Marshaller.JAXB_ENCODING, encoding);
marshaller.setProperty(Marshaller.JAXB_FRAGMENT, true);
marshaller.marshal(this, streamWriter);
streamWriter.flush();
streamWriter.close();
if (formattedOutput) {
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.ENCODING, encoding);
transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4");
String xml = stringWriter.toString();
stringWriter = new StringWriter();
transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(stringWriter));
}
if (outputFragment) {
if (formattedOutput) {
return StringUtils.replace(FRAGMENT, "{}", encoding) + "\n" + stringWriter;
} else {
return StringUtils.replace(FRAGMENT, "{}", encoding) + stringWriter;
}
} else {
return stringWriter.toString();
}
} catch (Exception e) {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("Error stack message: ", e);
}
return null;
} finally {
IOUtils.closeStream(stringWriter);
}
} |
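A minimal usage sketch, assuming toXML() lives in a base class that JAXB-annotated beans extend (the base-class name BeanObject below is hypothetical; XmlException is the exception type declared above and is assumed to come from the same library):

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "user")
public class User extends BeanObject { // hypothetical base class providing toXML()
@XmlElement
private String name = "alice";

public static void main(String[] args) throws XmlException {
// Pretty-printed UTF-8 output without the fragment prefix
System.out.println(new User().toXML(false, true, "UTF-8"));
}
}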
Design Features for Employment-Supportive Personal Assistance Services in Medicaid Programs To support the employment of persons with disabilities, state Medicaid programs are allowed to offer personal assistance services (PAS) at work as well as at home. These employment-supportive personal assistance services (EPAS) can be used to help persons with disabilities obtain or retain employment. This article explores Medicaid-funded EPAS through examination of programs in seven states and analysis of results from PAS user focus groups and employer interviews. Medicaid policy recommendations include: link EPAS with existing PAS programs in Medicaid state plans or waivers, distinguish EPAS from job- or task-related assistance that is the responsibility of the employer, and specify explicit coverage for personal care services at home, at the work site, and for work-related transportation. |
// Shader.h
#pragma once
#include <string>
#include <vector>
enum class SHADER_TYPE {
VERTEX, FRAGMENT
};
class Shader {
public:
void load(std::string code);
void compile();
void use();
unsigned int GetShader() const;
static Shader CreateShader(const SHADER_TYPE type);
private:
Shader(const SHADER_TYPE type);
void destroy();
std::string m_code;
SHADER_TYPE m_type;
unsigned int m_shader;
static std::vector<Shader> m_shaders;
}; |
# src/waldur_rancher/migrations/0035_drop_spl.py
from django.db import migrations
from waldur_core.core.migration_utils import build_spl_migrations
class Migration(migrations.Migration):
dependencies = [
('structure', '0021_project_backend_id'),
('waldur_rancher', '0034_delete_catalogs_without_scope'),
]
operations = build_spl_migrations(
(
'application',
'cluster',
'ingress',
'service',
),
)
|
National Chief of the Assembly of First Nations Perry Bellegarde speaks during the AFN Special Chiefs Assembly in Gatineau, Que., on May 1, 2018.
The Liberal commitment to recognizing and reinforcing Indigenous rights faced a major litmus test Tuesday as a special gathering of the Assembly of First Nations embarked on a close, critical look at whether Justin Trudeau’s government truly has their best interests at heart.
“We called this special chiefs assembly because there is so much happening. Several pieces of legislation are coming at us and we want to be sure First Nations benefit from all of them,” National Chief Perry Bellegarde told participants as the two-day event got underway.
One key focus is the government’s rights recognition and implementation framework Trudeau announced in February, billing it as a significant turning point towards the recognition of Indigenous rights and in fixing Canadian laws, policies and practices.
But questions remain on how the framework will be structured, how it will impact treaties, bands and reserves and how it will fit with the federal government’s stated intention to “move beyond the Indian Act,” Bellegarde said.
He also made the point that it’s far from clear whether Trudeau will continue to be prime minister after October 2019.
The day’s plenary sessions examined several bills currently making their way through Parliament, including legislation to protect Aboriginal languages, ensure safe drinking water on reserves, and even the Cannabis Act, the government’s much-discussed plan to legalize marijuana.
Delegates also debated the merits of ongoing environmental and regulatory reviews, as well as Bill C-262, a private member’s bill that aims to ensure Canada’s laws are in harmony with the United Nations declaration on the rights of Indigenous peoples.
As discussions got underway Tuesday, a number of chiefs raised concerns that consultations on the framework promised by Trudeau have been lacking and that legislation on rights and Indigenous languages is being rushed in order to see it passed before the 2019 election.
Others expressed distrust of the federal government, dismissing Trudeau’s frequent promises of a new nation-to-nation approach as little more than empty platitudes. Trudeau himself is scheduled to speak to the assembly Wednesday.
Carolyn Bennett, minister of Crown-Indigenous relations, insisted Tuesday that no final decisions have been made on the rights recognition and implementation framework, which she said has yet to be even written.
“We know that the process isn’t perfect and we welcome any advice. We want your communities to know that and we want to continue to improve how we engage,” Bennett told the delegates.
Conservative Leader Andrew Scheer took the podium later in the day, striking a conciliatory tone – “open hearts and open minds,” he said – and taking pains to accuse Trudeau’s Liberals of being all talk, no action on Indigenous issues.
“Building that trust and mutual respect comes from listening to the Indigenous people of Canada, comes from opening a dialogue and that’s what I’m here to do today,” Scheer said.
NDP Leader Jagmeet Singh highlighted his party’s long-standing commitment to forwarding and promoting Indigenous issues.
Chiefs and delegates raised their own concerns about the progress of government’s framework and questioned how they are being respected as sovereign nations if their rights are being enshrined in legislation over which they have no direct control.
A detailed resolution calling on government to comprehensively engage with and consult First Nations on the framework was met with so many amendments and concerns from the floor it was tabled for later discussion. Chiefs met in a huddle in the evening to discuss how to proceed with the resolution when it is brought back for a vote Wednesday. |
There are some footballers whose reputation has always preceded their talent, some who have wasted a chance of glory due to persistent ill discipline and poor decisions, and some who had such a gift for football that no matter what they did off the pitch they will be fondly remembered for their achievements as footballers.
So for every Mbulelo Mabizela (above) there is an Eric Cantona.
Two of the world's greatest ever players, Diego Maradona and George Best led famously controversial and often illegal lifestyles.
However, for every high there is a low. In the case of several footballers, that meant taking their own lives or being caught in the wrong place at the wrong time, as was the case for tragic Colombian star Andrés Escobar Saldarriaga in 1994.
Here in no particular order is a list of the 50 most troubled stars to have played professional football. |
Computer hackers from five countries have been charged with stealing more than 40 million credit card numbers from nine retailers. The more fortunate victims did business with four of the retailers. They disclosed the breach, allowing customers to cancel their credit cards and reduce the chances of identity theft.
The other five decided not to disclose the security breach because their internal investigations didn’t show their customers were at risk, The Wall Street Journal reported.
The biggest of the companies that did disclose the breach, TJX, which runs the Marshalls and T.J. Maxx chains, has spent more than $200 million settling claims after hackers found a way to tap into credit card transactions at their stores. To its credit, TJX informed its customers shortly after the breach was discovered.
The federal indictments of 11 hackers also named OfficeMax, Barnes & Noble, Sports Authority, Boston Market and Forever 21 as having their credit card data breached. Those companies told federal investigators that they did not disclose the possibility of security breaches because they could not determine that any were made.
However, investigators found that customers of a North Carolina OfficeMax store had been the victims of identity theft after a breach was traced to that store.
Forty states, California among them, have laws that mandate informing people whose financial information may be compromised. Most companies take the security of their customers seriously, both in protecting them at the cash register and informing them when a breach is suspected.
The indictments of the 11 accused masterminds are a huge step forward in stopping this crime. To save their skins, the 11 men, from the United States, China, Ukraine and other countries, doubtlessly will tell the feds how they were able to steal the 40 million credit card numbers, thus enabling companies to plug holes in their security.
Meanwhile, companies that suspect a breach has been made have the moral duty, irrespective of the law, to tell their customers they may be at risk for identity theft. |
Hotspots of Ecological and Environmental Risk Research in China Based on Multidimensional Scaling Analysis and Cluster Analysis Taking the China Academic Journal Network Publishing Database as a platform, we constructed a co-word matrix from the high-frequency keywords of ecological/environmental risk related papers published in core journals. We performed multidimensional scaling analysis and cluster analysis using SPSS and discussed the status of ecological risk research. The results show that the research hotspots can be divided into five aspects, including ecological risk assessment of heavy metals in bottom mud, ecological risk assessment of toxic organic pollutants, regional landscape ecological risk, environmental safety accidents and risk management, and other types of ecological risk such as transgenics and biological invasion. |
A Bayesian-inferred physical module to estimate robust mitigation pathways with cost-benefit IAMs Cost-benefit integrated assessment models (IAMs) include a simplified representation of both the anthropogenic and natural components of the Earth system, and of the interactions and feedbacks between them. As such, they embed economic- and physics-based equations, and the uncertainty in one domain will inevitably affect the other. Most often, however, the physical uncertainty is explored by testing the sensitivity of the optimal mitigation pathway to a few key physical parameters; but for robust decision-making, the optimal pathway itself should ideally embed the uncertainty. |
Elevated Levels of Toxic Bile Acids in Serum of Cystic Fibrosis Patients with CFTR Mutations Causing Pancreatic Insufficiency Hepatobiliary involvement is a hallmark in cystic fibrosis (CF), as the causative CF Transmembrane Conductance Regulator (CFTR) defect is expressed in the biliary tree. However, bile acid (BA) compositions in regard to pancreatic insufficiency, which is present at an early stage in about 85% of CF patients, have not been satisfactorily understood. We assess the pattern of serum BAs in people with CF (pwCF) without CFTR modulator therapy in regard to pancreatic insufficiency and the CFTR genotype. In 47 pwCF, 10 free and 12 taurine- and glycine-conjugated BAs in serum were prospectively assessed. Findings were related to genotype, pancreatic insufficiency prevalence (PIP)-score, and hepatic involvement indicated by serum liver enzymes, as well as clinical and ultrasound criteria for CF-related liver disease. Serum concentrations of total primary BAs and free cholic acid (CA) were significantly higher in pwCF with higher PIP-scores (p = 0.025, p = 0.009, respectively). Higher total BAs were seen in pwCF with PIP-scores ≥0.88 (p = 0.033) and with pancreatic insufficiency (p = 0.034). Free CA was higher in patients with CF-related liver involvement without cirrhosis, compared to pwCF without liver disease (2.3-fold, p = 0.036). pwCF with severe CFTR genotypes, as assessed by the PIP-score, reveal more toxic BA compositions in serum. Subsequent studies assessing changes in BA homeostasis during new highly effective CFTR-modulating therapies are of high interest. Introduction Cystic fibrosis (CF) is the most frequent life-threatening inherited disease in populations of Caucasian descent, characterized by multi-organ involvement due to impaired ion transport in apical membranes of the exocrine glands. Over the last decades, pulmonary disease has been in scientific and clinical focus, as about 90% of people with CF (pwCF) die prematurely from pulmonary destruction. With a marked improvement in survival, the involvement of other organs, including hepatobiliary pathology, is currently coming into focus, as the underlying CF Transmembrane Conductance Regulator (CFTR) defect is equally expressed in biliary ducts and gallbladder epithelia. As a result, hepatobiliary involvement is the third most frequent cause of premature death in pwCF. Whereas CF-related liver disease (CFLD) generally manifests asymptomatically, the most
Figure 1. CFTR dysfunction on the apical side of the cholangiocytes leads to secretion of bile with high viscosity, occluding the bile canaliculi. Cystic fibrosis liver disease (CFLD) is caused by accumulation of hydrophobic, toxic, glycine-conjugated BAs promoting neutrophil activation and inflammation, which damages hepatocytes and bile ducts. Mucus plugs and dysbiosis due to increased acidity, antibiotic use, and swallowed contaminated saliva promote deconjugation of BAs, resulting in higher concentrations of toxic secondary BAs (LCA, DCA) and decreased enteric BA reabsorption (enterohepatic circulation). Impaired BA resorption caused by bowel wall thickening. (a) Active resorption of BAs activates FXR, stimulating the synthesis of FGF19. (b) FGF19 exerts negative feedback on 7-alpha-hydroxylase, the key enzyme in BA synthesis. (c) FXR activation is also hypothesized to downregulate ASBT channels. BA malabsorption in CF results in impaired FXR-FGF19 signaling.
BAs are primarily regarded as detergents, as their central function is to eliminate cholesterol from the body via the intestinal lumen and feces. BAs also play a key role in the solubilization, digestion, and absorption of dietary lipids, as well as lipid-soluble vitamins. As recently demonstrated, BAs also act as signaling molecules in liver regeneration after partial hepatectomy and partial liver transplantation. Primary BAs (chenodeoxycholic acid (CDCA) and cholic acid (CA)) are de novo synthesized from cholesterol by hepatocytes in the liver as a result of hydroxylation processes at carbon positions of different steroid nuclei. After their synthesis in the liver and before being secreted into the intestine, free primary BAs conjugate with glycine or taurine, thereby increasing their water solubility (hydrophilicity; low pKa values) and, consequently, resulting in bile acid anions. Such an increase in BA solubility facilitates their return to the liver, either by passive absorption across the entire small intestine or by active transport in the terminal ileum. In the small bowel, conjugated BAs become metabolized by bile salt hydrolase enzymes to release unconjugated and more hydrophobic BAs, which may be excreted with the feces or biotransformed into more toxic secondary BA species. Differences in intestinal bacterial flora composition induce variations in bile salt composition. In healthy individuals, approximately 95% of BAs are reabsorbed during their passage through the intestine and returned to the liver as part of the enterohepatic circulation. Reabsorption occurs through active transport in the terminal ileum by the apical sodium-dependent bile salt transporter (ASBT) and by passive diffusion along the entire axis of the intestine. After reabsorption, the remaining 5% of BAs becomes substrate for significant microbial biotransforming reactions in the large bowel or is excreted in feces. Many factors are directly involved in BA malabsorption in CF (see Figure 1): defective CFTR channels, small intestine bacterial overgrowth (SIBO), increased BA losses, decreased BA resorption in the terminal ileum, and an impaired BA interaction with the hepatic and intestinal farnesoid X receptor (FXR), which modulates cholesterol 7-alpha-hydroxylase (CYP7A1), the rate-limiting enzyme in BA synthesis. To date, however, the exact underlying mechanism of BA malabsorption remains unknown.
In general, pwCF reveal a more toxic BA profile, which may be caused by the inherently altered viscous mucoid secretion in bile ducts and the consequent retention of cytotoxic BAs. Although still debatable, pwCF have been reported to show higher levels of primary and secondary BAs, which are potentially more toxic due to increased deconjugation by the altered intestinal flora, as compared to healthy controls. On the other hand, the observation of abnormally high fecal excretion of BAs together with the similarity in duodenal BA concentrations found in pwCF and controls may imply an increase in de novo BA synthesis in the liver of pwCF. High levels of hydrophobic BAs have been hypothesized to contribute to the development of CFLD. In addition, the identification of the non-CFTR genetic polymorphism SERPINA1 Z allele was mentioned as a risk factor of liver disease in CF. More recently, only one study has explored the association between BA concentrations in serum and the degree of liver involvement (LI) in pwCF, wherein it is suggested that serum deoxycholic acid and its glycine conjugate have the potential to serve as biomarkers to differentiate between pwCF with non-cirrhotic LI and pwCF with no detectable liver disease. Nevertheless, there is a lack of studies investigating the relationship between CFTR genotype/phenotype and BAs observed in pwCF. The objective of this study was to assess the composition patterns of free, taurine- and glycine-conjugated BAs from pwCF in regard to exocrine pancreatic insufficiency, according to recently defined CFTR genotype (pancreatic insufficiency prevalence score) and CFLD classifications. This allows for assessment of the role of CF patients' CFTR genotype and phenotype in BA homeostasis. Results The total BA concentration in serum was (median) 2.1 (1.3, 3.6) µmol/L. A correlation between AP and the total BA concentration was found to be significant (r = 0.43; p = 0.003) (Figure 2F). However, no correlation was observed between BA concentrations and the 17 CF-relevant pathologies examined by abdominal US. Although tertiary BAs were included in the quantification of total BA concentrations, the concentration of each tertiary BA showed no association with CFTR genotype or phenotype classifications. Bile Acids in pwCF in Relation to CFTR Genotype and Phenotype Bile acid distributions in pwCF with mild and severe CFTR genotypes are shown in Figures 3 and 4. Therein, it can be seen that G-CDCA is predominant in both groups, followed by CA. A slightly higher amount of G-CDCA was observed in pwCF with the severe CFTR genotype (29.6% vs. 24.5%). Concentrations of total BAs in serum were significantly higher in pwCF with severe CFTR genotypes (2.1-fold; p = 0.033), as measured by the PIP score. More specifically, free CA concentrations were found to be 3.4-fold higher in pwCF with severe CFTR genotypes (p = 0.009). In a similar way, total CDCA tended to be lower in pwCF with mild CFTR genotypes, although this result did not attain statistical significance (0.5-fold; p = 0.123).
Correspondingly, G-CA tended towards higher values in pwCF with severe CFTR genotypes without reaching statistical significance (1.5-fold; p = 0.123). The sum of all CA concentrations, i.e., CA+G-CA+T-CA, was 2.7-fold higher in pwCF with severe CFTR genotypes than in those with the mild CFTR genotype (p = 0.004) (Figure 2 and Table 2). Additionally, the concentration of total free primary BAs, i.e., CA+CDCA, in serum was significantly higher in pwCF with severe CFTR genotypes (4.3-fold; p = 0.020). In general, total primary BAs, i.e., CA+G-CA+T-CA+CDCA+G-CDCA+T-CDCA, were found to be higher in pwCF with severe CFTR genotypes (2.4-fold; p = 0.025). The two ratios of free CA/CDCA and conjugated CA/CDCA had a tendency towards higher values in pwCF with severe CFTR genotypes (1.9-fold; p = 0.299 and 1.3; p = 0.566, respectively). Similarly, the sum of glycine-conjugated CA and CDCA (G-CA+G-CDCA) tended towards higher values in pwCF with severe CFTR genotypes (2.0-fold; p = 0.069) (Figure 2 and Table 2). In contrast, the ratio between taurine-conjugated CA and CDCA (T-CA/T-CDCA) had a tendency towards higher values in pwCF with mild CFTR genotypes compared to pwCF with severe CFTR genotypes (1.3-fold; p = 0.755). Secondary BA differences in relation to CFTR genotype did not reach significance (Table 3). Regarding liver function tests, AST and AP levels were higher in the CF subgroup with severe CFTR genotypes (p = 0.022 for both). Although ALT, GLDH, and γ-GT tended to show higher values in the severe CFTR genotype cohort, those changes did not attain statistical significance (Table 4). Similarly to the group with severe CFTR genotype, total CA, total primary BAs, total BAs, and the G:T ratio were significantly elevated in pwCF with PI status (Table 5 and Figure 5). In addition to that, total CDCA, i.e., CDCA+G-CDCA+T-CDCA, was significantly higher in the subgroup with PI status than in the pancreatic-sufficient subgroup. Furthermore, significantly higher levels of AST were observed in pwCF with PI (0.39 (0.31, 0.56) vs. 0.21 (0.19, 0.37); p = 0.033), whereas the other parameters tended to be elevated without reaching statistical significance.
Bile Acids in Relation to CF Liver Disease (CFLD) Free CA was significantly higher in the CFLI w/o LC subgroup, compared to the CF w/o LI subgroup (2.3-fold; p = 0.036). G-CDCA was elevated in the CFLI w/o LC subgroup and was higher than that in pwCF w/o LI, but lower than in the CFLD with LC subgroup, although no significance was achieved for these comparisons. In all subgroups, median concentrations of free and glycine-conjugated primary BAs were higher than medians of taurine conjugates (Figure 6). Additionally, total BAs showed the highest values in the CFLI w/o LC subgroup. Furthermore, T-CA/T-CDCA was significantly elevated in the CFLD with LC subgroup compared to CF w/o LI (2.4-fold; p = 0.038) and CFLI w/o LC (2.8-fold; p = 0.036) subgroups. CA/CDCA and G-CA+G-CDCA showed the highest values in the CFLI w/o LC subgroup (Table 6). Comparisons with respect to CFLD revealed that all liver function test parameters were significantly higher in pwCF with CFLD (p = 0.005, 0.033, 0.0002, 0.039, and 0.036 for ALT, AST, γ-GT, AP, and GLDH, respectively) (Table 4). Discussion In this prospective study, we assessed the association between serum BA levels in pwCF and the status of pancreatic insufficiency, represented clinically and by the pancreatic insufficiency prevalence (PIP) score. This surrogate measure classifies the severity of specific CFTR mutations, associating higher scores with pancreatic insufficiency (PI) and, conversely, lower scores with an increased risk for pancreatitis. The complex pattern of bile acids in serum from pwCF associates increased total BA concentrations with clinical pancreatic insufficiency and with higher PIP scores. Specifically, we found that higher PIP scores ≥ 0.88 are significantly associated with increased serum concentrations of total primary BAs. In particular, free CA concentration was 3.4-fold higher when compared to concentrations in pwCF carrying a mild CFTR genotype. To our knowledge, this is the first study showing an association between CFTR genotype and BAs. Our findings are supported by reports by Smith et al. showing that histological markers of CF-related liver injury (severity of fibrosis and degree of inflammation) are significantly associated with elevation of CA. Similarly, Azer and colleagues found high levels of CA to be associated with progression of hepatic injury. More recently, Drzymaa et al. observed higher CA concentrations in serum from pwCF compared to healthy subjects. In addition, the authors found CA concentrations to be higher in patients with some degree of liver involvement, including cirrhosis, than in pwCF without a diagnosis of liver disease. Although we did not find a strong correlation between PIP scores, the pancreatic status, and CFLD, increased CA levels in the serum of pwCF with severe CFTR genotypes could be seen as a pro-inflammatory response and, consequently, as a risk for progression to CFLD. Furthermore, liver injury in CF with higher tissue permeability, more frequent in pwCF with more severe CFTR genotypes, may contribute to higher levels of bile acids in the serum of these pwCF. Previously, elevated sums of G-CA+G-CDCA had been reported to be a marker for early hepatic allograft dysfunction in transplanted pwCF. Interestingly, in our CF cohort, this sum tended to be elevated, accounting for almost 45% of BAs in pwCF with severe CFTR genotypes. Moreover, a significantly increased G:T ratio was observed in the severe CFTR genotype subgroup.
As pointed out in previous studies, the predominance of toxic hydrophobic glycine conjugates and, correspondingly, decreased taurine conjugates could contribute to the maintenance of a potentially harmful cytotoxicity and induce hepatocyte apoptosis. The imbalance in the G:T ratio observed in pwCF with severe CFTR genotypes may, at least partially, derive from bowel wall abnormalities, an important factor impairing the enterohepatic circuit in pwCF. In a previous study including abdominal ultrasound, higher rates of pathologies, including thickened bowel walls (TBW) > 4 mm, were found in pwCF with PI and with more severe class I-III CFTR mutations. Furthermore, taurine deficiency in pwCF with severe CFTR genotypes has been attributed to decreased BA resorption in the terminal ileum, a pathology supposedly more frequent in pwCF with TBW. Similar to the results obtained with the PIP score, phenotype classification with regard to the pancreatic status revealed a more toxic BA pattern in pwCF with PI. This is to a large extent expected, as this classification is associated with CFTR genotype severity as measured with the PIP score. Furthermore, the above-described toxic BA pattern associated with higher CA concentrations was also observed in pwCF with CF-related liver involvement (CFLI) w/o LC. This is in agreement with the results of Smith et al. and O'Brien et al., who proposed that pwCF with CFLI still preserve some residual liver function and, therefore, accumulate more BAs in the canaliculi obstructed with viscous bile. Moreover, cirrhotic pwCF revealed, as expected, lower CA levels than the CFLI w/o LC group. A similar pattern was observed by Drzymaa et al., reporting lower CA levels in patients with liver cirrhosis compared to those without liver involvement. Compared to the CFLI w/o LC group, this appears to be a consequence of the impaired hepatic bile synthesis in cirrhosis. Accordingly, Vlahcevic et al. found a reduction in CA and CDCA synthesis in patients with alcohol-related cirrhosis, concluding that the reduction in bile acid synthesis present in patients with cirrhosis is caused by both defective feedback control regulating bile acid synthesis and defective BA synthesis in the liver. In cirrhotic pwCF, the T-CA/T-CDCA ratio was significantly higher than in the other subgroups, i.e., CF without LI and CFLI without LC. Analogously to G-CA+G-CDCA, this ratio was observed to be a marker of early hepatic allograft dysfunction. To our knowledge, however, no further studies have been conducted validating those findings. The increased T-CA/T-CDCA ratio in cirrhotic pwCF may be a consequence of a decreased amount of T-CDCA in the BA pool, resulting from CDCA's higher hydrophobicity and, thus, higher toxicity. Other studies have postulated that bacteria of several genera have evolutionarily developed mechanisms to protect themselves from bile acid toxicity via bile salt hydrolase (BSH) activity. According to this, BSH activity results in the transformation of BAs into deconjugated BA species that are less toxic, resulting in fewer glycine conjugates and the apparent resistance of T-CDCA to being deconjugated by intestinal bacteria due to its lower toxicity (toxicity of glycine > taurine conjugates). Following this hypothesis, this would imply a decreased proportion of T-CDCA returning to the liver and, consequently, to the taurine pool in the intestine.
However, given the limited sample size of cirrhotic pwCF considered herein, studies with larger subgroups of patients are necessary to assess this hypothesis. Further studies addressing the role of taurine supplements as a therapeutic approach to shift the BA pool to a less toxic pattern are lacking. Other factors could be attributed to the complex etiology of the impaired BA metabolism in the enterohepatic circuit in CF, such as: (A) an impaired microflora (due to increased acidity, antibiotic use, and swallowed contaminated saliva) promoting BA deconjugation, more toxic secondary BAs (LCA, DCA), and increased BA elimination; (B) a thickened bowel wall decreasing BA resorption in the terminal ileum; (C) increased BA excretion (BA losses); and (D) impaired FXR-FGF19 signaling by a defective feedback control regulating BA synthesis and, consequently, promoting BA accumulation. However, the exact mechanism remains unknown (see Figure 1). Altogether, the impaired BA pattern in pwCF with severe CFTR genotypes, characterized by increased CA and the predominance of glycine conjugates, appears to be related to more hepatotoxic effects contributing to the complex multifactorial etiology of CFLD. Although a phenotype/genotype CFLD correlation has not yet been established, it was recently proposed that modifier genes contribute to the risk of severe CFLD. Risk factors such as class I-III mutations on both alleles, meconium ileus, and male gender have been identified as contributing to the development of liver involvement. In line with this, according to Drzymaa et al., cirrhotic and non-cirrhotic liver involvement is characterized by several determinants, such as high BA levels and severe CFTR genotypes. Nevertheless, data regarding genotype severity have not yet been available for bile acid profiles. This field requires a better understanding in order to identify potential targets for modulating liver disease severity in CF. Limitations In terms of limitations, our results are being published many years after recruitment finalization and the analysis of the prospectively obtained serum samples. However, this delay allowed us to implement new categorizations of pwCF regarding CFLD criteria, as defined by Debray et al. in 2011, and PIP scores, as defined by Ooi et al. in 2011. The delayed publication of these important classifications by Ooi et al. and Debray et al. demonstrates the lack of attention abdominal involvement received in previous decades, when pwCF tended to die at young ages due to pulmonary destruction. This is reflected in the relatively lower number of publications regarding hepatic and biliary involvement compared to pulmonary disease in CF. Furthermore, the limited number of cirrhotic pwCF (n = 4) examined in our cohort may not sufficiently represent the BA values in this subgroup. At the same time, our publication has the advantage of assessing a cohort naïve to CFTR-modulating therapies. Thus, it emphasizes the need to perform consecutive studies assessing the effects of CFTR modulators on bile homeostasis. Participants and Settings This prospective study was performed by recruiting pwCF of all ages (4-66 years) who were attended between 2004 and 2005 at the CF Center of the Jena University Hospital, Germany. The study included n = 47 pwCF. The inclusion criteria were: a diagnosis of CF determined by two positive sweat tests (sweat chloride of ≥30 mEq/L) and/or detection of 2 disease-causing CFTR mutations with evidence of end organ involvement.
Ethical Statement The study was approved by the Jena University ethics committee (registration number: 1222-11/03) and all methods were performed in accordance with the relevant guidelines and regulations. This study was conducted in strict accordance with the ethical guidelines in the Declaration of Helsinki. All pwCF and parents or guardians of minors provided written informed consent. Measures of Clinical Data BA analysis was performed with a modified version of a previously described method. A total of 10 free and 12 taurine- and glycine-conjugated bile acids were analyzed in the serum of pwCF using high performance liquid chromatography (HPLC) with postcolumn derivatization and fluorescence detection. The established PIP score adapted from Ooi et al., as published in 2011, was used to measure the severity of specific CFTR mutations in regard to pancreatic function. PwCF carrying mutations not included in the study by Ooi et al. were excluded from the PIP-genotype analysis (8/47 pwCF); the CFTR mutations of 4 pwCF had not been identified, and those of 4 other pwCF had not been described in the PIP cohort of Ooi et al. Mutations of pwCF with a PIP score ≤ 0.40 were classified as mild CFTR genotypes (n = 9), and those with a PIP score ≥ 0.88 as severe genotypes (n = 30). It is important to mention that these cutoffs differ from those originally described by Ooi et al. (classified as either "mild" (≤0.25) or "severe" (>0.25) on the basis of the PIP score). As none of the included pwCF revealed PIP scores between 0.4 and 0.88, we excluded moderate as a classification and defined pwCF's CFTR genotype severity as either mild or severe. Pancreatic insufficiency (PI) was defined as a clinical diagnosis by the need for pancreatic enzyme replacement therapy (PERT). Ultrasound (US) examinations were performed in all pwCF and included the evaluation of 17 CF-relevant pathologies in abdominal US. Furthermore, CFLD was determined retrospectively in 46 of the 47 pwCF, according to criteria defined in 2011 by Debray et al. Based on a consensus among hepatologists at a meeting of the North American CF Foundation in 2007, pwCF were classified into three categories: cystic fibrosis without evidence of liver disease (CF w/o LD) (n = 32), cystic fibrosis-related liver involvement without cirrhosis (CFLI w/o LC) (n = 10), and cystic fibrosis-related liver disease with cirrhosis (CFLD with LC) (n = 4). Bile acid composition did not show any significant differences according to ursodeoxycholic acid (UDCA) administration. Therefore, UDCA and its conjugates were excluded from our analysis. Data Analysis All statistical analyses were performed using SPSS v.25.0 (IBM Corp., Armonk, NY, USA). Normality in the distributions of the samples was tested using the Kolmogorov-Smirnov test. As all BA data samples failed to meet normality assumptions, Mann-Whitney U tests were performed to determine statistical differences between the medians of two independent samples. Results are reported as medians with first and third quartiles and are represented in boxplots. Pairwise correlations between variables were calculated using Pearson's correlation coefficient. A p-value ≤0.05 indicated a significant difference or correlation. Figures were created with GraphPad Prism version 8.4.3 for Windows (GraphPad Software, San Diego, CA, USA). 
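As a minimal, hypothetical sketch of the statistical workflow described in the Data Analysis section (normality testing, Mann-Whitney U comparison of two independent groups, Pearson correlation, significance at p ≤ 0.05), the following Python code uses SciPy on invented example data; the group sizes and concentration values are assumptions for illustration and do not reproduce the study's SPSS analysis.

# Sketch of the analysis steps described above, on invented example data.
import numpy as np
from scipy import stats

# Hypothetical serum cholic acid (CA) concentrations for two genotype groups
ca_mild = np.array([1.1, 0.8, 1.4, 0.9, 1.2, 0.7, 1.0, 1.3, 0.6])
ca_severe = np.array([2.4, 3.1, 1.9, 2.8, 2.2, 3.5, 2.0, 2.7, 3.0, 2.5])

# Normality check (the study used the Kolmogorov-Smirnov test)
print(stats.kstest(ca_mild, "norm", args=(ca_mild.mean(), ca_mild.std(ddof=1))))

# Non-parametric comparison of the medians of two independent samples
u_stat, p_value = stats.mannwhitneyu(ca_mild, ca_severe, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")  # significant if p <= 0.05

# Pairwise correlation between two continuous variables (e.g., PIP score vs. CA)
pip_scores = np.array([0.88, 0.96, 0.90, 1.00, 0.92, 0.94, 0.89, 0.91, 0.95, 0.93])
r, p_corr = stats.pearsonr(pip_scores, ca_severe)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")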
Conclusions We assessed the concentrations of 22 BAs in the serum of pwCF, including primary, secondary, and tertiary BAs, as well as their respective glycine and taurine conjugates. Higher concentrations of total BAs were significantly associated with both CFTR genotype severity and pancreatic insufficiency. When measuring each BA individually, CA levels were significantly associated with more severe CFTR genotypes (as quantified by the PIP score), with pancreatic insufficiency, and with non-cirrhotic CF-related liver involvement. Our study highlights the relevance of CFTR genotype severity in the assessment of enterohepatic circulation. Clinically, improving BA homeostasis is of high importance, as hepatobiliary involvement is the third most frequent cause of premature death in CF. In this regard, using BAs as potential surrogate markers when assessing the impact of highly effective CFTR modulator therapies on liver function may provide new insights into the pathophysiology of CFLD. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. All pwCF and parents or guardians of minors provided written informed consent. |
#pragma once
#include "ofMain.h"
#include <opencv2/core.hpp>           // cv::Mat used by the members below (assumed missing from the original include list)
#include <opencv2/imgproc/types_c.h>
#include <memory>
enum CVAlgorithm
{
Preview,
GrayScale,
Canny,
Blur,
FindFeatures,
Sepia
};
class ofApp : public ofBaseApp{
public:
ofApp();
void setup();
void update();
void draw();
void keyPressed(int key);
void keyReleased(int key);
void mouseMoved(int x, int y );
void mouseDragged(int x, int y, int button);
void mousePressed(int x, int y, int button);
void mouseReleased(int x, int y, int button);
void windowResized(int w, int h);
void dragEvent(ofDragInfo dragInfo);
void gotMessage(ofMessage msg);
void setAlgorithm(CVAlgorithm algorithm);
private:
void updateFrame(unsigned char* buffer, int width, int height);
void processFrame();
void applyGrayFilter();
void applyCannyFilter();
void applySepiaFilter();
void applyBlur();
void applyFindFeatures();
ofVideoGrabber vidGrabber;
ofTexture videoTexture;
int camWidth;
int camHeight;
cv::Mat m_cvInput;
cv::Mat m_cvOutput;
CVAlgorithm m_algorithm;
};
|
/**
* Initializes the timer as if it was started at the given point of time.
*
* @param start the start point of time (in milliseconds)
*
* @since 1.00
*/
private void start(long start) {
this.state = TimerState.START;
this.start = start;
this.value = 0;
} |
package com.ilya40umov.badge.security;
import org.springframework.security.web.RedirectStrategy;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
/**
* A redirect strategy that results in no redirect.
*
* @author isorokoumov
*/
public class NoRedirectStrategy implements RedirectStrategy {
@Override
public void sendRedirect(HttpServletRequest request,
HttpServletResponse response,
String url) throws IOException {
// no redirect
}
}
|
Deficiency of Qi (healthy energy) and blood stasis are the basic pathological changes of hepatic fibrosis according to the theories of traditional Chinese medicine. Fuzheng Huayu Capsule, a compound Chinese herbal medicine for hepatic fibrosis, was developed in light of this pathological mechanism. More than a decade of clinical studies and experimental research shows that this medicine protects hepatic cells, relieves liver injury, and controls the development of hepatic fibrosis. It has well-defined mechanisms of anti-fibrotic action. It is a safe and effective medicine for hepatic fibrosis and deserves to be introduced more widely into clinical practice. |
package com.shirashyad.gettysdk.search;
public enum Orientation {
None,
Horizontal,
Panoramic_Horizontal,
Panoramic_Vertical,
Square,
Vertical
}
|
UNLABELLED Streptococcus pneumoniae is a cause of many infectious diseases, from common respiratory tract infections to severe bacterial sepsis, which is often a cause of patient death. Infection spreads via droplets or sometimes by direct contact. Symptomatic pneumococcal infections most often present as otitis, sinusitis, bronchopneumonia and lobar pneumonia, or as exacerbations of chronic obstructive pulmonary disease or bronchial asthma; they can also be the cause of many other illnesses, such as meningitis and encephalitis, endocarditis, epicarditis, peritonitis, arthritis and sepsis. The aim of the study was to evaluate anti-streptococcal vaccinations and to analyze cardiology and general practice patients' knowledge about Streptococcus pneumoniae. MATERIAL AND METHODS A total of 312 cardiology and general practice patients from the Outpatient Clinic in Katowice were included in the study. Additionally, national registers of anti-streptococcal vaccination and streptococcal infection data from 2006 to 2009 were analyzed. Information about anti-streptococcal vaccination and data evaluating knowledge about streptococcal infections were obtained from a questionnaire designed especially for this study. RESULTS The study showed that patients' knowledge about anti-streptococcal vaccination is very poor. Of the 312 patients included in the study, only 16 were vaccinated and 118 had no knowledge about Streptococcus pneumoniae. Data from the national registers showed that in the years 2006-2009 the numbers of patients with the invasive form of streptococcal infection were similar - 273 and 274, respectively - and in Silesia, 28 and 26 patients, respectively. CONCLUSIONS Knowledge about anti-streptococcal vaccination is very poor and the number of people vaccinated is small. There is a need to provide more information to raise the number of vaccinated persons, especially in groups at increased risk, and consequently reduce absenteeism from work and financial losses. |
package net.ziyoung.lox.semantic;
import net.ziyoung.lox.ast.Position;
import java.util.ArrayList;
public class SemanticErrorList extends ArrayList<SemanticError> {
public void add(Position position, String msg) {
this.add(new SemanticError(position, msg));
}
}
|
/* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package aos.example;
import aos.IO.IOCreditHistory;
import aos.IO.IOQualityHistory;
import aos.IO.IOSelectionHistory;
import aos.aos.AOSMOEA;
import aos.aos.AOSStrategy;
import aos.creditassigment.ICreditAssignment;
import aos.creditassignment.offspringparent.ParentDomination;
import aos.creditassignment.offspringparent.ParentIndicator;
import aos.creditassignment.setcontribution.ParetoFrontContribution;
import aos.creditassignment.setimprovement.BiCreteria;
import aos.creditassignment.setimprovement.OffspringParetoFrontDominance;
import aos.nextoperator.IOperatorSelector;
import aos.operator.AOSVariation;
import aos.operatorselectors.ProbabilityMatching;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.moeaframework.Instrumenter;
import org.moeaframework.algorithm.NSGAII;
import org.moeaframework.analysis.collector.InstrumentedAlgorithm;
import org.moeaframework.core.EpsilonBoxDominanceArchive;
import org.moeaframework.core.NondominatedSortingPopulation;
import org.moeaframework.core.PopulationIO;
import org.moeaframework.core.Problem;
import org.moeaframework.core.Variation;
import org.moeaframework.core.comparator.ChainedComparator;
import org.moeaframework.core.comparator.CrowdingComparator;
import org.moeaframework.core.comparator.ParetoDominanceComparator;
import org.moeaframework.core.operator.RandomInitialization;
import org.moeaframework.core.operator.TournamentSelection;
import org.moeaframework.core.operator.real.PM;
import org.moeaframework.core.operator.real.SBX;
import org.moeaframework.core.spi.OperatorFactory;
import org.moeaframework.problem.CEC2009.UF1;
import org.moeaframework.problem.DTLZ.DTLZ2;
import org.moeaframework.problem.DTLZ.DTLZ3;
import org.moeaframework.problem.DTLZ.DTLZ4;
import org.moeaframework.problem.WFG.WFG1;
import org.moeaframework.problem.WFG.WFG2;
import org.moeaframework.problem.WFG.WFG6;
import org.moeaframework.problem.WFG.WFG8;
import org.moeaframework.problem.WFG.WFG9;
import org.moeaframework.problem.ZDT.ZDT1;
import org.moeaframework.problem.ZDT.ZDT4;
/**
*
* @author nozomihitomi
*/
public class Sample1 {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
//create the desired problem
// UF1 prob = new UF1();
//ArrayList<Problem> prob = new ArrayList();
int obj = 4;
//DTLZ2 prob = new DTLZ2(obj);
//DTLZ3 prob = new DTLZ3(obj);
//DTLZ4 prob = new DTLZ4(obj);
//WFG1 prob = new WFG1(obj-1,10,obj);
//WFG2 prob = new WFG2(obj-1,10,obj);
//WFG6 prob = new WFG6(obj-1,10,obj);
//WFG8 prob = new WFG8(obj-1,10,obj);
WFG9 prob = new WFG9(obj-1,10,obj);
//ZDT1 prob = new ZDT1();
//ZDT4 prob = new ZDT4();
//create the desired algorithm
// for(int s=0; s < prob.size();s++){
int numberofSeeds = 1;
int[] populationSize = new int[]{1000};
//int[] populationSize = new int[]{100,300,500,1000};
//int populationSize = 1000;
for(int i=1;i<=numberofSeeds;i++){
System.out.println("Seeds=" + i);
for(int k=0;k<populationSize.length;k++){
System.out.println("PopulationSize=" + populationSize[k]);
AOSVariation variation = new AOSVariation();
NondominatedSortingPopulation population = new NondominatedSortingPopulation();
EpsilonBoxDominanceArchive archive = new EpsilonBoxDominanceArchive(0.01);
TournamentSelection selection = new TournamentSelection(2,
new ChainedComparator(
new ParetoDominanceComparator(),
new CrowdingComparator()));
RandomInitialization initialization = new RandomInitialization(prob, populationSize[k]);
NSGAII nsgaii = new NSGAII(prob, population, archive, selection, variation, initialization);
//example of operators you might use
ArrayList<Variation> operators = new ArrayList();
Properties prop = new Properties();
prop.put("populationSize", populationSize[k]);
OperatorFactory of = OperatorFactory.getInstance();
operators.add(of.getVariation("um", prop, prob));
operators.add(of.getVariation("sbx+pm", prop, prob));
operators.add(of.getVariation("de+pm", prop, prob));
operators.add(of.getVariation("pcx+pm", prop, prob));
operators.add(of.getVariation("undx+pm", prop, prob));
operators.add(of.getVariation("spx+pm", prop, prob));
//create operator selector
IOperatorSelector operatorSelector = new ProbabilityMatching(operators, 0.8, 0.1);//(operators,alpha,pmin)
//create credit assignment
// ICreditAssignment creditAssignment1 = new ParentDomination(1, 0, 0);
ICreditAssignment creditAssignment2 = new ParetoFrontContribution(1, 0);
ICreditAssignment creditAssignment3 = new ParentIndicator(prob,0.6);
//ICreditAssignment creditAssignment = new OffspringParetoFrontDominance(1, 0);//error
//create AOS
AOSStrategy aosStrategy = new AOSStrategy(creditAssignment3,creditAssignment2, operatorSelector);
AOSMOEA aos = new AOSMOEA(nsgaii,variation, aosStrategy);
//attach collectors
Instrumenter instrumenter = new Instrumenter()
.withFrequency(5)
.attachElapsedTimeCollector();
InstrumentedAlgorithm instAlgorithm = instrumenter.instrument(aos);
//conduct search
//int maxEvaluations = populationSize[k] * 100;
int maxEvaluations = populationSize[k] * 3000;
int gen = 0;
while (!instAlgorithm.isTerminated() &&
(instAlgorithm.getNumberOfEvaluations() < maxEvaluations)) {
gen += 1;
instAlgorithm.step();
try {
//one way to save current population
//System.out.println(prob);
// System.out.println("generation=" + gen);
//PopulationIO.writeObjectives(new File("output13/Popsize"+populationSize[k]+"/DTLZ2_"+ obj +"/archive_gen" + gen +"_seed"+i+".txt"), aos.getArchive());
//PopulationIO.writeObjectives(new File("output13/Popsize"+populationSize[k]+"/WFG1_"+ obj +"/archive_gen" + gen +"_seed"+i+".txt"), aos.getArchive());
//PopulationIO.writeObjectives(new File("output13/Popsize"+populationSize[k]+"/ZDT4/archive_gen" + gen +"_seed"+i+".txt"), aos.getArchive());
//PopulationIO.writeObjectives(new File("output3/Popsize"+populationSize[k]+"/test.txt"), aos.getArchive());
PopulationIO.writeObjectives(new File("Sample1/Popsize"+populationSize[k]+"/WFG9_"+ obj +"/archive_gen" + gen +"_seed"+i+".txt"), aos.getArchive());
} catch (IOException ex) {
Logger.getLogger(TestCase.class.getName()).log(Level.SEVERE, null, ex);
}
//save AOS results
/*
IOSelectionHistory iosh = new IOSelectionHistory();
iosh.saveHistory(aos.getSelectionHistory(), "output3/Popsize"+populationSize[k]+"/ZDT4/selection.csv", ",");
IOCreditHistory ioch = new IOCreditHistory();
ioch.saveHistory(aos.getCreditHistory(), "output3/Popsize"+populationSize[k]+"/ZDT4/credit.csv", ",");
IOQualityHistory ioqh = new IOQualityHistory();
ioqh.saveHistory(aos.getQualityHistory(), "output3/Popsize"+populationSize[k]+"/ZDT4/quality.csv", ",");
//}
*/
}
}
}
}
}
|
Associations between digital media use and brain surface structural measures in preschool-aged children The American Academy of Pediatrics recommends limits on digital media use (screen time), citing cognitive-behavioral risks. Media use in early childhood is ubiquitous, though few imaging-based studies have been conducted to quantify impacts on brain development. Cortical morphology changes dynamically from infancy through adulthood and is associated with cognitive-behavioral abilities. The current study involved 52 children who completed MRI and cognitive testing at a single visit. The MRI protocol included a high-resolution T1-weighted anatomical scan. The child's parent completed the ScreenQ composite measure of media use. MRI measures included cortical thickness (CT) and sulcal depth (SD) across the cerebrum. ScreenQ was applied as a predictor of CT and SD first in whole-brain regression analyses and then for regions of interest (ROIs) identified in a prior study of screen time involving adolescents, controlling for sex, age and maternal education. Higher ScreenQ scores were correlated with lower CT in right-lateralized occipital, parietal, temporal and fusiform areas, and also lower SD in right-lateralized inferior temporal/fusiform areas, with substantially greater statistical significance in ROI-based analyses. These areas support primary visual and higher-order processing and align with prior findings in adolescents. While differences in visual areas likely reflect maturation, those in higher-order areas may suggest under-development, though further studies are needed. Magnetic resonance imaging (MRI). Details of play-based acclimatization techniques prior to MRI have been described previously 49. The protocol involved structural and functional MRI, but only the T1-weighted structural scan was used for the current study. Children were awake and non-sedated during MRI, which was conducted using a 3-Tesla Philips Ingenia scanner with a 32-channel head coil. High-resolution, 3D T1-weighted anatomical images were acquired (TR/TE = 8.1/3.7 ms; duration 5.25 min; FOV = 256 × 256 mm; matrix = 256 × 256; in-plane resolution = 1 × 1 mm; slice thickness = 1 mm; number of slices = 180, sagittal plane). Processing utilized the Computational Anatomy Toolbox (CAT12, Structural Brain Mapping Group, Jena, Germany), which performs non-linear transformations for voxel-based preprocessing, then computes surface-based morphometric (cortical thickness) measures. Individual subjects were mapped to a standard template space (~ 2 mm spacing) using age-matched a priori tissue probability maps generated from the TOM8 toolbox 50 for tissue segmentation. After this voxel-based spatial registration, the central surface and morphometric measures (CT, SD) were determined using the projection-based thickness method. The central surface was then spatially registered to the Freesurfer "FsAverage" template. Finally, measures of CT and SD were projected onto the template space and then smoothed along the surface with a 10 mm and 15 mm full-width half-maximum Gaussian kernel, respectively. Subjects with weighted image quality (calculated based on resolution, signal-to-noise ratio, and bias field strength) of 2 or more standard deviations below the group mean and/or subjects with a mean correlation coefficient of CT 2 standard deviations or more below the group mean were excluded as outliers. Regions of interest for MRI analyses. 
To increase statistical power, regions of interest (ROIs) were selected based on the largest effect sizes involving digital media use and CT and SD, respectively, in a recently published MRI study involving a large sample of young adolescents 7. These were selected from group factor analyses 1 and 3 in that study, which loaded most strongly on overall digital media use, which was considered most similar to the ScreenQ measure, as opposed to specific usage factors such as social media. These were defined via the Desikan-Killiany cortical atlas 51, as in the prior work. Given the young age of the subjects, where many cortical functions are less likely to have lateralized (e.g., language), bilateral ROIs were included. For CT, the ROIs selected were bilateral cuneus, fusiform, inferior temporal, lateral occipital, lingual, pericalcarine, postcentral, precuneus, superior parietal and supramarginal gyri. For SD, the ROIs were bilateral cuneus, fusiform, inferior temporal, lateral occipital, lingual and pericalcarine gyri. MRI analyses. Analyses involved multiple regression modeling with CT and SD as the respective dependent variable, applying ScreenQ score (continuous) as the predictor and controlling for covariates sex (categorical), age (continuous) and maternal education level (categorical). Maternal education level was chosen as a proxy for socioeconomic status (SES), as it has been cited as most strongly associated with child cognitive and social-emotional development 52. Smoothed thickness maps were fit to these models to estimate the effect of ScreenQ total scores on CT and SD across the cerebrum. These were then computed for the ROIs identified above in respective analyses of CT and SD, controlling for these covariates. To account for multiple comparisons testing, False Discovery Rate (FDR) correction was applied for all analyses using thresholds of α = 0.05 and also a more liberal α = 0.10, with a two-sided test. Results Sample characteristics and ScreenQ scores. A total of 58 children completed MRI, 52 of them with acceptable image quality for analyses, applying criteria described above (aged 52.7 ± 7.7 months, range 37-63; 29 girls, 23 boys). The mean ScreenQ score for those included was 10.1 ± 4.5 (range 3-21). ScreenQ scores were negatively associated with maternal education level (Pearson r = −0.41, p < 0.001). These data are summarized in Table 1. MRI analyses. In whole-brain analyses, higher ScreenQ scores were correlated with lower CT in extensive clusters located in bilateral yet right-lateralized occipital, parietal, temporal and fusiform regions, controlling for sex and age, though with marginal statistical significance (two-tailed p-FDR < 0.10), shown in Fig. 1A and detailed in Table 2. When adding maternal education (SES) as an additional covariate, these associations did not reach statistical significance (Fig. 1B, Table 2). Higher ScreenQ scores were also correlated with lower SD in two clusters located in the right fusiform cortex, controlling for sex and age (two-tailed p-FDR < 0.05), shown in Fig. 2A and detailed in Table 3. When adding maternal education (SES) as a covariate, the extent of these associations was similar yet with marginal statistical significance (p-FDR < 0.10), shown in Fig. 2B and summarized in Table 3. For the ROI-based analyses, higher ScreenQ scores were correlated with lower CT in bilateral cuneus, left lingual gyrus and right precuneus, superior parietal and supramarginal gyri, controlling for sex and age (two-tailed p-FDR < 0.05), shown in Fig. 
3A and detailed in Table 4. When applying maternal education (SES) as an additional covariate, the extent of associations was similar, yet with marginal statistical significance (two-tailed p-FDR < 0.10), shown in Fig. 3B and detailed in Table 4. Higher ScreenQ scores were also correlated with greater SD in the right cuneus and lesser SD in the right fusiform gyrus (two-tailed p-FDR < 0.05) and marginally lesser SD in the left inferior temporal gyrus, shown in Fig. 4A and detailed in Table 5. When applying maternal education (SES) as an additional covariate, the extent of associations was nearly identical and remained statistically significant at p-FDR < 0.05 for the cuneus and fusiform areas, shown in Fig. 4B and detailed in Table 5. Discussion Brain development is a dynamic, non-linear process influenced by genetic and environmental factors. Environmental influences include relationships and experiences and can be nurturing, adverse or neutral. Given the prominent and increasing role of digital media for families beginning in infancy, it is critical to understand the direct and indirect impacts of various aspects of use on emerging skills and underlying neurobiology. These are likely to be greatest during early childhood, when brain networks develop rapidly and plasticity is high, manifest via differences in gray and white matter structure 30. However, currently, very little is known about these potential impacts. The purpose of this study was to examine associations between digital media use and established measures of cortical morphology (CT, SD) at this formative age. In line with our hypotheses, in both whole-brain and ROI-based analyses, higher media use was related to differences in CT (all lesser) and SD (primary visual greater, higher-order lesser) in both primary visual and higher-order association areas. Cortical thickness (CT) reflects synaptic density and supporting cellular architecture 53. While overall CT reaches maximal levels by age 2, that of limbic and sensory areas precedes higher-order (e.g., association, executive) areas, which do not achieve local maxima until adolescence 35. It has been suggested that thickness may even be a marker for "lower" sensory processes (thinning occurs earlier) versus "higher" associative and integrative processes (thinning occurs later) 54. Changes reflect cortical remodeling in response to environmental stimulation, which can be accretive (e.g., synaptogenesis) or reductive (e.g., pruning) 53. The current study involved 3- to 5-year-old children, whose overall CT is expected to have largely peaked, though not yet in higher-order areas. Despite limited statistical power, particularly when controlling for maternal education, significant (ROI-based) and/or marginally significant (whole-brain) associations were identified between higher screen-based media use and lesser CT involving both primary and higher-order areas. The most extensive and significant clusters were in right-lateralized occipital and superior parietal regions (Figs. 1 and 3) that support both sensory (e.g., cuneus) and multi-modal associative (e.g., supramarginal gyrus) processes, suggesting impacts in areas expected to be mature at this age and in others that are expected to still be developing. Synchronous thinning in functionally related areas has been linked to environmental factors (e.g., visual network via visual stimuli) 42. Thinning in visual cortices has also been attributed to higher maturation and efficiency 7. 
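To make the modeling step concrete, the following is a minimal, hypothetical sketch of the ROI-level regression described in the MRI analyses subsection (mean cortical thickness regressed on ScreenQ score with sex, age and maternal education as covariates, followed by FDR correction across ROIs). It uses statsmodels on randomly generated data; the variable names, values and ROI labels are assumptions for illustration and do not reproduce the study's CAT12-based pipeline.

# Hypothetical sketch of the ROI-level analysis described above; all data are randomly generated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n = 52  # sample size reported in the study

df = pd.DataFrame({
    "screenq": rng.integers(3, 22, n),      # ScreenQ total score
    "age_months": rng.integers(37, 64, n),  # child age in months
    "sex": rng.integers(0, 2, n),           # 0 = female, 1 = male
    "maternal_edu": rng.integers(1, 5, n),  # ordinal proxy for SES
})

roi_labels = ["cuneus", "fusiform", "superior_parietal", "supramarginal"]
p_values = []
for roi in roi_labels:
    # Hypothetical mean cortical thickness (mm) for this ROI
    df["ct"] = 2.7 + 0.002 * df["age_months"] - 0.01 * df["screenq"] + rng.normal(0, 0.1, n)
    fit = smf.ols("ct ~ screenq + age_months + C(sex) + C(maternal_edu)", data=df).fit()
    p_values.append(fit.pvalues["screenq"])

# Benjamini-Hochberg FDR correction across ROIs, alpha = 0.05
rejected, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for roi, p, sig in zip(roi_labels, p_fdr, rejected):
    print(f"{roi:>18}: p-FDR = {p:.3f}{' *' if sig else ''}")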
Association between higher ScreenQ scores and lower CT in bilateral, right-lateralized occipital areas (cuneus) in the current study is consistent with these models, possibly via greater exposure to screen-based media during early childhood. Higher ScreenQ scores were also associated with lower CT in the right superior parietal lobe, which is a major node in the "top-down" dorsal attention network, particularly involving visual-spatial stimuli 55. Whether this finding reflects accelerated maturation via more frequent and/or stimulating screen-based media use, or under-development via less exposure to non-screen stimuli (e.g., shared reading) is unclear and in need of further study. By contrast to primary visual areas, lower CT in the lingual gyrus, which is considered to be a higher-order visual-association area, was left-lateralized (especially ROI-based, Fig. 3), suggesting asynchronous thinning that tends to occur in these specialized brain areas. Adjacent to the parahippocampus, the lingual gyrus is involved with complex visual memory encoding, including facial and emotional expressions, core social-cognitive processes 56. Lower CT in the lingual gyrus has been linked to lower episodic memory and social cognition in adults 57. The lingual gyrus has also been found to support printed letter recognition, a pre-reading skill that typically develops in the preschool-age range, with greater left-lateralization linked to higher skill 58,59. As both social cognition and emergent literacy skills are typically in early stages of development in the formative preschool age range, lower CT found here may reflect under-development rather than efficiency, though this is speculative and in need of further study. Association between higher ScreenQ scores and lower CT in the postcentral gyrus, whose major role is somatosensory processing, is more counter-intuitive. A reasonable potential mechanism involves the stimulation of mirror neurons during the processing of imagined sensations in video scenes 60,61. Indeed, these clusters with lower CT were in the more posterior Brodmann Area 2, where mirror neurons are well-documented 62 and which supports higher-order somatosensory processing and social cognition 63. Thus, if this mechanism is accurate, a major question is whether somatosensory cortical remodeling via digitally presented scenes is of functional relevance compared to thinning that may manifest via real-world human-interactive situations. In contrast to primary sensory areas where thinning is generally adaptive, CT in higher-order areas (e.g., executive, association) has been positively associated with cognitive performance, including IQ, language, social cognition and emergent literacy skills 64,65. Thus, akin to findings involving the lingual gyrus, it is less clear whether associations between higher ScreenQ scores and lower CT in the right inferior parietal lobe, which supports multi-modal (e.g., visual, somatosensory, emotional) processing 66 and also learned and creative skills such as music 67 and math 68, are benign or maladaptive in nature. 
Similarly, higher media use was associated with lower CT in the right supramarginal gyrus (SMG), a higher-order area not expected to have peaked at preschool age. The right SMG supports empathy (in children, overcoming egocentricity bias) 69,70, and lower CT in this area has been linked to conduct disorder in adolescents 71. While not assessed here, excessive and inappropriate digital media use has been linked to lower empathy 72, and a "video-deficit" in social cognition described in preschool-age children 73. Thus, while speculative, findings in the current study may reflect SMG under-development at this age, an additional potential early biomarker of impacts of higher media use on social cognition. Interestingly, the postcentral gyrus is also involved with emotional processing and empathy (largely via the mirror neuron system), with lower CT possibly suggesting maladaptive neurodevelopment in these domains 63. Further studies involving measures of social cognition are needed to better characterize these potential impacts. Table 2. Details of significant clusters from Fig. 1. Corrected p-value, location, atlas labels and major function of clusters with lower cortical thickness (CT) correlated with higher ScreenQ scores controlling for child sex and age, shown in Fig. 1A (thresholds: two-sided p-FDR < 0.10 and p-FDR < 0.05). Montreal Neurological Institute (MNI) coordinates are left-right, posterior-anterior and inferior-superior relative to the anterior commissure. Regions indicates the percentage of each cluster residing in the respective Desikan-Killiany DK40 atlas-defined area. Table 3. Details of significant clusters from Fig. 2. Corrected p-value, location, atlas labels and major function of clusters with lower sulcal depth (SD) correlated with higher ScreenQ scores controlling for child sex and age, shown in Fig. 2A and also controlling for maternal education shown in Fig. 2B (thresholds: two-sided p-FDR < 0.05 for 2A and p-FDR < 0.10 for 2B). MNI coordinates are left-right, posterior-anterior and inferior-superior relative to the anterior commissure. Regions indicates the percentage of each cluster residing in the respective Desikan-Killiany DK40 atlas-defined area. The current findings align with those from the large, ongoing "ABCD" study involving early-adolescent children, where higher media use was associated with lower CT in both sensory (e.g., primary visual, postcentral) and higher-order (e.g., fusiform, SMG) areas 7. The authors attributed these findings to accelerated maturation of the visual system, with impacts on other, non-functionally homologous areas less clear. At a minimum, findings in the current study involving visual areas are consistent with those in the ABCD study, suggesting that relationships between higher media use and brain structure begin to manifest in early childhood and may become more extensive over time. They are also consistent with recent functional MRI studies involving preschool-age children presented with stories in illustrated and animated formats, where functional connectivity involving primary visual networks was substantially higher during the animated story, a potential mechanism for accelerated thinning 74,75. Sulcal depth (SD) is an established measure of cortical surface area, which exhibits more gradual maturational changes with age, reaching overall maxima in late childhood 35,53,76. 
The current study found significant association between higher ScreenQ scores and significantly greater SD in primary visual cortex (right cuneus), which may reflect accelerated maturation in concert with lower CT. By contrast, higher ScreenQ scores were associated with significantly lesser SD in the right fusiform gyrus, which supports higher-order processing of complex visual stimuli (e.g., faces, places, shapes) 77,78. The fusiform cortex also includes the putative Visual Word Form Area (VWFA), which gradually develops to rapidly process letters and words during reading 79. Greater SD (and also CT) in the fusiform cortex has been associated with higher reading abilities 41,80, including at young ages before formal reading instruction 81 and with higher emergent literacy skills 43. They also align with associations between higher media use (ScreenQ) and both lower emergent literacy skills and white matter microstructural integrity supporting these skills found in a related study involving preschool-age children 26. Thus, while speculative, the current findings may be a biomarker of impacts of higher screen-based media use on cortical surface area (SD) supporting reading at this age, though further studies are needed. This study has limitations that should be noted. While 17% of participants met poverty criteria, the sample was largely of higher income and maternal education, and results might be different with greater socioeconomic diversity. There were few significant findings applying maternal education level as a covariate alongside child age and sex, attributable to limited statistical power and moderate correlation between this covariate and ScreenQ scores, which is consistent with prior studies linking media use to numerous aspects of SES 82. However, these analyses still generated significant and/or marginally significant results aligned with previous studies involving early adolescents 7, to inform more expansive research. Analyses were limited to children completing MRI and meeting necessary motion criteria, which may bias results towards those with higher self-regulation and other behavioral characteristics. The cross-sectional nature prohibits comment on causality, which requires a longitudinal design. It is also impossible to discern whether associations between higher media use and differences in CT and SD stemmed from direct (e.g., visual stimulation) or indirect (e.g., displacement of reading) mechanisms. While differences in cortical morphology related to higher use were found at a single time point, rates of change may be more relevant to cognitive development 83. Finally, while there were structural differences in areas known to support higher-order skills (e.g., social cognition, emergent literacy), only measures related to emergent literacy were administered (all negatively correlated, reported previously) 26,44, rendering brain-behavior relationships speculative. Future studies incorporating a range of cognitive-behavioral measures at this formative age are needed. This study also has important strengths. It involves a reasonably large sample of very young children, where there have been few MRI-based studies involving media use, and none to our knowledge involving cortical structure. Rather than a single aspect of use, it applies ScreenQ as its predictor variable, which is a validated, composite measure 25,46 capturing evidence-based facets of use cited in AAP recommendations 1. 
Analyses involved CT and SD, complementary measures with non-uniform developmental trajectories, reflecting synapse-level changes and brain growth 35. All controlled for age and sex, minimizing the influence of general maturation rather than environment 34,84,85. While impacting statistical power, significant and/or marginally significant results were found controlling for maternal education, which has been cited as a major SES-related predictor of child cognitive and social-emotional development 52. All analyses applied conservative false-discovery rate (FDR) correction, reducing the likelihood of false positive results. Perhaps most importantly, the current findings align with those involving CT and SD in the large ABCD study involving older children 7, and complement previous studies at this age involving differences in cognitive skills, functional connectivity and white matter microstructure 26,74,75. Altogether, while several findings are unclear and/or speculative, attributable to the complex nature of cortical development, this study provides novel evidence that differences in brain structure related to screen-based media use are evident during early childhood. Longitudinal studies, ideally beginning in infancy given trends in digital media use and prevalence of portable devices 86,87, are needed to characterize longer-term impacts on cognitive, social-emotional and overall health outcomes. Table 4. Cohen's d effect sizes and false-discovery rate (FDR) corrected p-values for associations between ScreenQ and cortical thickness for selected regions of interest (ROIs) defined by the Desikan-Killiany cortical atlas and shown in Fig. 3. Negative signs added to effect sizes indicate a negative association between ScreenQ score and cortical thickness. p-FDR false-discovery rate corrected p-value controlling for age and sex, p-FDR SES false-discovery rate corrected p-value controlling for age, sex and socioeconomic status (maternal education), C cortex, G gyrus. *Signifies that p-FDR is less than 0.05, defined as statistically significant (p-FDR < 0.10 is defined as marginally statistically significant). Conclusions This study found associations between higher digital media use and lower cortical thickness and sulcal depth in brain areas supporting primary visual processing and higher-order functions such as top-down attention, complex memory encoding, letter recognition and social cognition. These findings are consistent with those from a large study involving adolescents, suggesting that differences in cortical structure related to screen use may begin to manifest in early childhood. They also complement associations between higher media use and lower cognitive skills and related white matter microstructure previously found at this age. Further studies are needed to determine the longer-term evolution and relevance of these structural differences in terms of cognitive, social-emotional and overall development. 
Data and code availability All survey and MRI data for this study were newly acquired via methods described. These data will be made available to the scientific community in a deidentified manner upon notice of publication via written request to the corresponding author (JH). Requests must include description of the project (e.g., project outline) and also acknowledgment of the data source in any grant submissions, presentations or publications. The rationale for a written request is that no repository currently exists and creation would exceed the scope and current funding resources of the study team. Any costs associated with data transfer will be the responsibility of the requesting parties. Software utilized in the current analyses is freely available and described in the methods section. Table 5. Cohen's d effect sizes and false-discovery rate (FDR) corrected p-values for associations between ScreenQ and sulcal depth for selected regions of interest (ROIs) defined by the Desikan-Killiany cortical atlas and shown in Fig. 4. Negative signs added to effect sizes indicate a negative association between ScreenQ score and sulcal depth. p-FDR false-discovery rate corrected p-value controlling for age and sex, p-FDR SES false-discovery rate corrected p-value controlling for age, sex and socioeconomic status, C cortex, G gyrus. *Signifies that p-FDR is less than 0.05, defined as statistically significant (p-FDR < 0.10 is defined as marginally statistically significant). |
Biodegradable nanoparticles have received increasing attention as versatile drug delivery scaffolds to enhance the efficacy of therapeutics. Effectiveness of delivery, however, can be influenced by the particle size and morphology, as these parameters can greatly affect the biological function and fate of the material. [Zweers, M. L. T.; Grijpma, D. W.; Engbers, G. H. M.; Feijen, J., J. Controlled Release 2003, 87, 252-254.] Narrowly dispersed particles are highly preferred for use in delivery or sensing applications with respect to monitoring and predicting their behavior, as they exhibit a more constant response to external stimuli. [Lubetkin, S.; Mulqueen, P.; Paterson, E. Pesti. Sci. 1999, 55, 1123-1125.]
One disadvantage of conventional methods is the irreproducibility in the size and shape of the particles, since these can be profoundly influenced by the stabilizer and the solvent used. [Kumar, M. N. V. R.; Bakowsky, U.; Lehr, C. M., Biomaterials 2004, 25, 1771-1777.] Another major drawback of conventional biodegradable nanoparticles, based on poly(ε-caprolactone) and other aliphatic polyesters, is the lack of pendant functional groups, which can make physiochemical, mechanical, and biological properties difficult to modify. [(a) Riva, R.; Lenoir, S.; Jerome, R.; Lecomte, P. Polymer 2005, 46, 8511-8518. (b) Sasatsu, M.; Onishi, H.; Machida, Y. Inter. J. Pharm. 2006, 317, 167-174.] The availability of functional groups is a desirable means of tailoring the properties of a particle, including hydrophilicity, biodegradation rate, and bioadhesion.
Therefore, there remains a need for methods and compositions that overcome these deficiencies and that effectively provide functionalized, degradable nanoparticles with reproducibility in particle size and shape. |
package com.kkteam.simplefiler;
public class IsDirectoryException extends Exception {
/**
*
*/
private static final long serialVersionUID = -7001031942896307096L;
}
|
Structural Redesigning Arabidopsis Lignins into Alkali-Soluble Lignins through the Expression of p-Coumaroyl-CoA:Monolignol Transferase PMT1 Arabidopsis lignins, which are genetically p-coumaroylated up to the grass lignin level, display dramatic structural changes that make them more amenable to solubilization in alkali at room temperature. Grass lignins can contain up to 10% to 15% by weight of p-coumaric esters. This acylation is performed on monolignols under the catalysis of p-coumaroyl-coenzyme A monolignol transferase (PMT). To study the impact of p-coumaroylation on lignification, we first introduced the Brachypodium distachyon Bradi2g36910 (BdPMT1) gene into Arabidopsis (Arabidopsis thaliana) under the control of the constitutive maize (Zea mays) ubiquitin promoter. The resulting p-coumaroylation was far lower than that of lignins from mature grass stems and had no impact on stem lignin content. By contrast, introducing either the BdPMT1 or the Bradi1g36980 (BdPMT2) gene into Arabidopsis under the control of the Arabidopsis cinnamate-4-hydroxylase promoter boosted the p-coumaroylation of mature stems up to the grass lignin level (8% to 9% by weight), without any impact on plant development. The analysis of purified lignin fractions and the identification of diagnostic products confirmed that p-coumaric acid was associated with lignins. BdPMT1-driven p-coumaroylation was also obtained in the fah1 (deficient for ferulate 5-hydroxylase) and ccr1g (deficient for cinnamoyl-coenzyme A reductase) lines, albeit to a lower extent. Lignins from BdPMT1-expressing ccr1g lines were also found to be feruloylated. In Arabidopsis mature stems, substantial p-coumaroylation of lignins was achieved at the expense of lignin content and induced lignin structural alterations, with an unexpected increase of lignin units with free phenolic groups. This higher frequency of free phenolic groups in Arabidopsis lignins doubled their solubility in alkali at room temperature. These findings suggest that the formation of alkali-leachable lignin domains rich in free phenolic groups is favored when p-coumaroylated monolignols participate in lignification in a grass in a similar manner. |
# Flask view fragment: `get_comment` and `db` (the SQLAlchemy handle) are assumed
# to be defined elsewhere in the application.
from flask import redirect, url_for

def comment_delete(id):
    """Delete the given comment and return to the comment management page."""
    comment = get_comment(id)
    db.session.delete(comment)
    db.session.commit()
    return redirect(url_for('manage.manage_comments')) |
/*
* A test for sequence of child cooperation deregistration.
*/
#include <iostream>
#include <sstream>
#include <so_5/all.hpp>
struct msg_child_started : public so_5::signal_t {};
void
create_and_register_agent(
so_5::environment_t & env,
int ordinal,
int max_deep );
class a_test_t : public so_5::agent_t
{
typedef so_5::agent_t base_type_t;
public :
a_test_t(
so_5::environment_t & env,
int ordinal,
int max_deep )
: base_type_t( env )
, m_ordinal( ordinal )
, m_max_deep( max_deep )
, m_self_mbox(
env.create_mbox( mbox_name( ordinal ) ) )
{
}
~a_test_t()
{
}
void
so_define_agent()
{
so_subscribe( m_self_mbox )
.event( &a_test_t::evt_child_started );
}
void
so_evt_start()
{
if( m_ordinal != m_max_deep )
create_and_register_agent(
so_environment(),
m_ordinal + 1,
m_max_deep );
else
notify_parent();
}
void
evt_child_started(
const so_5::event_data_t< msg_child_started > & )
{
if( m_ordinal )
notify_parent();
else
so_environment().stop();
}
private :
const int m_ordinal;
const int m_max_deep;
so_5::mbox_t m_self_mbox;
static std::string
mbox_name( int ordinal )
{
std::ostringstream s;
s << "agent_" << ordinal;
return s.str();
}
void
notify_parent()
{
so_environment().create_mbox( mbox_name( m_ordinal - 1 ) )->
deliver_signal< msg_child_started >();
}
};
std::string
create_coop_name( int ordinal )
{
std::ostringstream s;
s << "coop_" << ordinal;
return s.str();
}
void
create_and_register_agent(
so_5::environment_t & env,
int ordinal,
int max_deep )
{
so_5::coop_unique_ptr_t coop = env.create_coop(
create_coop_name( ordinal ) );
if( ordinal )
coop->set_parent_coop_name( create_coop_name( ordinal - 1 ) );
coop->add_agent( new a_test_t( env, ordinal, max_deep ) );
env.register_coop( std::move( coop ) );
}
class a_test_starter_t : public so_5::agent_t
{
typedef so_5::agent_t base_type_t;
public :
a_test_starter_t( so_5::environment_t & env )
: base_type_t( env )
{}
void
so_evt_start()
{
create_and_register_agent( so_environment(), 0, 5 );
}
};
const std::string STARTER_COOP_NAME = "starter_coop";
struct init_deinit_data_t
{
std::vector< std::string > m_init_sequence;
std::vector< std::string > m_deinit_sequence;
};
class test_coop_listener_t
: public so_5::coop_listener_t
{
public :
test_coop_listener_t( init_deinit_data_t & data )
: m_data( data )
, m_active_coops( 0 )
{}
virtual void
on_registered(
so_5::environment_t &,
const std::string & coop_name )
{
std::lock_guard< std::mutex > lock{ m_lock };
std::cout << "registered: " << coop_name << std::endl;
if( STARTER_COOP_NAME != coop_name )
{
m_data.m_init_sequence.push_back( coop_name );
++m_active_coops;
}
}
virtual void
on_deregistered(
so_5::environment_t & env,
const std::string & coop_name,
const so_5::coop_dereg_reason_t & reason )
{
bool need_stop = false;
{
std::lock_guard< std::mutex > lock{ m_lock };
std::cout << "deregistered: " << coop_name
<< ", reason: " << reason.reason() << std::endl;
if( STARTER_COOP_NAME != coop_name )
{
m_data.m_deinit_sequence.insert(
m_data.m_deinit_sequence.begin(),
coop_name );
--m_active_coops;
if( !m_active_coops )
need_stop = true;
}
}
if( need_stop )
env.stop();
}
static so_5::coop_listener_unique_ptr_t
make( init_deinit_data_t & data )
{
return so_5::coop_listener_unique_ptr_t(
new test_coop_listener_t( data ) );
}
private :
std::mutex m_lock;
init_deinit_data_t & m_data;
int m_active_coops;
};
std::string
sequence_to_string( const std::vector< std::string > & s )
{
std::string r;
for( auto i = s.begin(); i != s.end(); ++i )
{
if( i != s.begin() )
r += ", ";
r += *i;
}
return r;
}
class test_env_t
{
public :
void
init( so_5::environment_t & env )
{
env.register_agent_as_coop(
STARTER_COOP_NAME, new a_test_starter_t( env ) );
}
so_5::coop_listener_unique_ptr_t
make_listener()
{
return test_coop_listener_t::make( m_data );
}
void
check_result() const
{
if( m_data.m_init_sequence != m_data.m_deinit_sequence )
throw std::runtime_error( "Wrong deinit sequence: init_seq: " +
sequence_to_string( m_data.m_init_sequence ) +
", deinit_seq: " +
sequence_to_string( m_data.m_deinit_sequence ) );
}
private :
init_deinit_data_t m_data;
};
int
main()
{
try
{
test_env_t test_env;
so_5::launch(
[&test_env]( so_5::environment_t & env )
{
test_env.init( env );
},
[&test_env]( so_5::environment_params_t & params )
{
params.coop_listener( test_env.make_listener() );
params.disable_autoshutdown();
} );
test_env.check_result();
}
catch( const std::exception & ex )
{
std::cerr << "Error: " << ex.what() << std::endl;
return 1;
}
return 0;
}
|
import { randomInt } from "crypto";
import type {
ButtonInteraction,
CommandInteraction,
EmojiIdentifierResolvable,
} from "discord.js";
import { MessageActionRow, MessageButton } from "discord.js";
import {
ButtonComponent,
Discord,
Slash,
SlashChoice,
SlashOption,
} from "../../../src/index.js";
enum RPSChoice {
Rock,
Paper,
Scissors,
}
type RPSButtonIdType = `RPS-${RPSChoice}`;
enum RPSResult {
WIN,
LOSS,
DRAW,
}
class RPSProposition {
public static propositions = [
new RPSProposition(RPSChoice.Rock, "💎", `RPS-${RPSChoice.Rock}`),
new RPSProposition(RPSChoice.Paper, "🧻", `RPS-${RPSChoice.Paper}`),
new RPSProposition(RPSChoice.Scissors, "✂️", `RPS-${RPSChoice.Scissors}`),
];
public choice: RPSChoice;
public emoji: EmojiIdentifierResolvable;
public buttonCustomID: RPSButtonIdType;
constructor(
choice: RPSChoice,
emoji: EmojiIdentifierResolvable,
buttonCustomID: RPSButtonIdType
) {
this.choice = choice;
this.emoji = emoji;
this.buttonCustomID = buttonCustomID;
}
public static nameToClass(choice: RPSChoice) {
return this.propositions.find(
(proposition) => choice === proposition.choice
);
}
public static buttonCustomIDToClass(buttonCustomID: string) {
return this.propositions.find(
(proposition) => buttonCustomID === proposition.buttonCustomID
);
}
}
const defaultChoice = new RPSProposition(
RPSChoice.Rock,
"💎",
`RPS-${RPSChoice.Rock}`
);
@Discord()
export class RockPaperScissors {
@Slash("rock-paper-scissors", {
description:
"What could be more fun than play Rock Paper Scissors with a bot?",
})
private async RPS(
@SlashChoice(
{
name: RPSChoice[RPSChoice.Rock] ?? "unknown",
value: RPSChoice.Rock,
},
{
name: RPSChoice[RPSChoice.Paper] ?? "unknown",
value: RPSChoice.Paper,
},
{
name: RPSChoice[RPSChoice.Scissors] ?? "unknown",
value: RPSChoice.Scissors,
}
)
@SlashOption("choice", {
description:
"Your choose. If empty, it will send a message with buttons to choose and play instead.",
required: false,
type: "NUMBER",
})
choice: RPSChoice | undefined,
interaction: CommandInteraction
) {
await interaction.deferReply();
if (choice) {
const playerChoice = RPSProposition.nameToClass(choice);
const botChoice = RockPaperScissors.RPSPlayBot();
const result = RockPaperScissors.isWinRPS(
playerChoice ?? defaultChoice,
botChoice
);
interaction.followUp(
RockPaperScissors.RPSResultProcess(
playerChoice ?? defaultChoice,
botChoice,
result
)
);
} else {
const buttonRock = new MessageButton()
.setLabel("Rock")
.setEmoji("💎")
.setStyle("PRIMARY")
.setCustomId(`RPS-${RPSChoice.Rock}`);
const buttonPaper = new MessageButton()
.setLabel("Paper")
.setEmoji("🧻")
.setStyle("PRIMARY")
.setCustomId(`RPS-${RPSChoice.Paper}`);
const buttonScissor = new MessageButton()
.setLabel("Scissors")
.setEmoji("✂️")
.setStyle("PRIMARY")
.setCustomId(`RPS-${RPSChoice.Scissors}`);
const buttonRow = new MessageActionRow().addComponents(
buttonRock,
buttonPaper,
buttonScissor
);
interaction.followUp({
components: [buttonRow],
content: "Ok let's go. 1v1 Rock Paper Scissors. Go choose!",
});
setTimeout((inx) => inx.deleteReply(), 10 * 60 * 1000, interaction);
}
}
@ButtonComponent(`RPS-${RPSChoice.Rock}`)
@ButtonComponent(`RPS-${RPSChoice.Paper}`)
@ButtonComponent(`RPS-${RPSChoice.Scissors}`)
private async RPSButton(interaction: ButtonInteraction) {
await interaction.deferReply();
const playerChoice = RPSProposition.buttonCustomIDToClass(
interaction.customId
);
const botChoice = RockPaperScissors.RPSPlayBot();
const result = RockPaperScissors.isWinRPS(
playerChoice ?? defaultChoice,
botChoice
);
interaction.followUp(
RockPaperScissors.RPSResultProcess(
playerChoice ?? defaultChoice,
botChoice,
result
)
);
setTimeout(
(inx) => {
try {
inx.deleteReply();
} catch (err) {
console.error(err);
}
},
30000,
interaction
);
}
private static isWinRPS(
player: RPSProposition,
bot: RPSProposition
): RPSResult {
switch (player.choice) {
case RPSChoice.Rock: {
if (bot.choice === RPSChoice.Scissors) {
return RPSResult.WIN;
}
if (bot.choice === RPSChoice.Paper) {
return RPSResult.LOSS;
}
return RPSResult.DRAW;
}
case RPSChoice.Paper: {
if (bot.choice === RPSChoice.Rock) {
return RPSResult.WIN;
}
if (bot.choice === RPSChoice.Scissors) {
return RPSResult.LOSS;
}
return RPSResult.DRAW;
}
case RPSChoice.Scissors: {
if (bot.choice === RPSChoice.Paper) {
return RPSResult.WIN;
}
if (bot.choice === RPSChoice.Rock) {
return RPSResult.LOSS;
}
return RPSResult.DRAW;
}
}
}
private static RPSPlayBot(): RPSProposition {
return RPSProposition.propositions[randomInt(3)] ?? defaultChoice;
}
private static RPSResultProcess(
playerChoice: RPSProposition,
botChoice: RPSProposition,
result: RPSResult
) {
switch (result) {
case RPSResult.WIN:
return {
content: `${botChoice.emoji} ${botChoice.choice} ! Well, noob ${playerChoice.emoji} ${playerChoice.choice} need nerf plz...`,
};
case RPSResult.LOSS:
return {
content: `${botChoice.emoji} ${botChoice.choice} ! Okay bye, Easy!`,
};
case RPSResult.DRAW:
return {
content: `${botChoice.emoji} ${botChoice.choice} ! Ha... Draw...`,
};
}
}
}
|
1. Field of the Invention
The invention relates to a metal foil for secondary battery and a secondary battery.
2. Description of the Related Art
In recent years, along with the popularization of portable devices such as a cell phone or a laptop computer and along with the development and practical application of an electric vehicle or a hybrid car, demand of a compact high-capacity battery has been increasing. Especially a lithium-ion battery has been used in various fields due to its lightweight and high energy density.
The lithium-ion battery is basically composed of a cathode, an anode, a separator for insulating the cathode from the anode, and an electrolyte which permits ion movement between the cathode and the anode.
In general, a metal foil formed of a band-shaped aluminum foil in which an active material such as lithium cobalt oxide is applied to front and back surfaces thereof is used as the cathode. Meanwhile, a metal foil formed of a band-shaped copper foil in which an active material such as a carbon material is applied to front and back surfaces thereof is used as the anode.
A drawback of simply applying the active material to the front and back surfaces of the copper or aluminum foil is that the foil does not integrate well with the active material, which tends to fall off. A known countermeasure is to form through-holes in the metal foil so that the active material layers applied to the front and back surfaces connect through the holes and anchor each other, preventing the material from falling off (see JP-A 2002-198055).
In one approach, numerous conical through-holes are formed in the metal foil by a rolling process in which the foil is passed between a pair of rollers bearing conical projections, giving the foil a three-dimensional shape.
Alternatively, through-holes can be formed by a lath process, in which cuts are made in a thin plate and the plate is then stretched in a direction orthogonal to the cuts (see, e.g., JP-A 2002-216775).
However, the rolling method of forming through-holes generates cutting chips, which must then be removed. The lath process, for its part, cannot achieve a high aperture ratio: as the expansion rate is increased, the foil lacks sufficient strength against the tension generated when the active material is applied.
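The trade-off above hinges on the aperture ratio, i.e., the fraction of the foil area opened up by the through-holes. As a rough illustration only (the patent text gives no formula or dimensions; the hole diameter, pitch, and square-grid layout below are assumptions), a small TypeScript sketch:

// Hypothetical illustration (not from the patent): aperture ratio of a foil
// perforated with circular holes of diameter d (mm) on a square grid of pitch p (mm).
function apertureRatio(holeDiameterMm: number, pitchMm: number): number {
  const holeArea = Math.PI * Math.pow(holeDiameterMm / 2, 2); // area of one hole
  const cellArea = pitchMm * pitchMm;                         // foil area per hole
  return holeArea / cellArea;                                 // fraction of open area
}

// e.g. 1 mm holes on a 2 mm pitch -> ~0.196, i.e. about 20% open area
console.log(apertureRatio(1, 2).toFixed(3));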
// lozarcher/KnightFight - Classes/Attacker.h
//
// Attacker.h
// KnightFight
//
// Created by <NAME> on 05/05/2011.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//
#import <Foundation/Foundation.h>
#import "cocos2d.h"
#import "GameSprite.h"
extern float *const velocity;
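// Attacker: an enemy sprite that can chase the player directly or follow a
// path computed on a background thread (see chasePlayer: and createPathToPlayer).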
@interface Attacker : GameSprite {
CGPoint lastPosition;
BOOL chasingPlayer;
BOOL followingPath;
NSMutableArray *path;
NSThread *thread;
}
@property (nonatomic) CGPoint lastPosition;
@property (nonatomic) BOOL chasingPlayer;
@property (nonatomic) BOOL followingPath;
@property (nonatomic, retain) NSMutableArray *path;
@property (nonatomic, retain) NSThread *thread;
+(id) attacker;
-(void)chasePlayer:(GameSprite *)player;
-(void)createPathToPlayer;
-(void)getPath:(NSArray *)tilePositions;
@end
Another Continental Vulture Crisis: Africa's Vultures Collapsing toward Extinction

Vultures provide critical ecosystem services, yet populations of many species have collapsed worldwide. We present the first estimates of a 30-year Pan-African vulture decline, confirming that declines have occurred on a scale broadly comparable with those seen in Asia, where the ecological, economic, and human costs are already documented. Populations of eight species we assessed had declined by an average of 62%; seven had declined at a rate of 80% or more over three generations. Of these, at least six appear to qualify for uplisting to Critically Endangered. Africa's vultures are facing a range of specific threats, the most significant of which are poisoning and trade in traditional medicines, which together accounted for 90% of reported deaths. We recommend that national governments urgently enact and enforce legislation to strictly regulate the sale and use of pesticides and poisons, to eliminate the illegal trade in vulture body parts, as food or medicine, and to minimize mortality caused by power lines and wind turbines.

Introduction

Vultures provide essential ecosystem services, yet they are among the most threatened groups of birds worldwide (Ogada, ). Currently, 69% of vultures and condors are listed as threatened or near-threatened by the IUCN, the majority classed as Endangered or Critically Endangered (BirdLife International 2014). The "Asian Vulture Crisis" of the late 1990s saw populations of three species of Gyps vulture collapse throughout South Asia, by >96% in just 10 years, due to incidental Diclofenac poisoning (Prakash 1999;;). Because vultures suppress the number of mammalian scavengers at carcasses, resulting in fewer contacts between potentially infected individuals, levels of disease transmission are likely to be greater in the absence of vultures (Ogada, ). Consequently, the Asian Vulture Crisis has resulted in a parallel increase in feral dog populations, which are now the major consumers of carcasses in urban areas in India (), and also the main reservoir of diseases such as rabies (). The growth in feral dog numbers, following the collapse of vulture populations, will contribute to the risks associated with rabies transmission, both in Africa () and in Asia, where it is estimated to have added $34 billion to healthcare costs in India between 1993 and 2006 (). Vultures also freely dispose of organic waste in towns. Egyptian Vultures, for example, consumed up to 22% of annual waste in towns on Socotra off the Horn of Africa (). In Africa, significant vulture declines have been reported from widely scattered locations since the turn of the century, by numerous authors using various methods and working at very different spatial and temporal scales (e.g., Thiollay 2001; Rondeau & Thiollay 2004;;;). Collectively, these reports suggest that there may be a continental-scale problem, similar in extent to, though more protracted than, the Asian situation, and as yet poorly documented. As representatives of the IUCN Vulture Specialist Group, we present published and unpublished data on Africa's vulture populations, to provide the first comprehensive assessment of their conservation status. We review the major threats to Africa's vultures, identify important knowledge gaps that need to be addressed, and suggest policy-level actions required of governments if they are to ensure the long-term survival of Africa's vultures.
Methods

The impetus for this review came as a result of discussions during the vulture round-table meeting at the 2012 Pan-African Ornithological Congress. Much of the information that forms the basis of this article was discussed and compiled during the Pan-African Vulture Summit 2012 (). This assessment was made using information derived from two main sources: an extensive review of literature already known to the authors, augmented with publications sourced through Google Scholar and unpublished data, mainly from road surveys and counts of dead vultures. Data for specific countries or regions were searched during June-July 2012 using the name of the country/region followed by the term "and vulture." The data collected included: habitat change, timeframe, survey size, methods used, assessment criteria, species surveyed, percent decline, general trends, and threats. These data were collected using a range of methods, including road counts, foot and aerial surveys, and bird atlas records. We eliminated data sets that we judged to be speculative, that were based on too few survey days, or for which the majority of species historically present in the country had not been assessed.

Comparisons of declines across countries and regions

Where information was available, each vulture species in each country was assigned to a decline category. We recognised four categories on an ordinal scale, based on the quantitative or qualitative information available on the species' population change in each country, over a period of 19-55 years. The four categories were: extinct or in severe decline (>50%), strong decline (>25%), moderate decline (<25%), and no decline. We included only those countries for which the information was country-wide or included the majority of vulture habitats, and was available for the majority of species present (currently or historically; Table S1). Using a subset of these data, from which change rates could be calculated (i.e., for studies yielding both the degree of change and the time period over which it occurred), we estimated median annualized population change rates for species surveyed in each region (southern, west, east, and north) using bootstrapping analysis (n = 1,000 replicates; Figure 1; Table S2).

[Figure 1 caption: Population change by region. Changes in each species' abundance were converted to annualized rates of change, which were pooled across species and study areas. Bootstrapping analysis (n = 1,000 replicates) was used to estimate the median annualized change rate for each region (r; vertical bars indicate 95% percentiles). North Africa is represented by a single species and study area; southern Africa by five species, seven areas; West by six species, four areas; East by six species, four areas.]

Declines within regions and in relation to protected area status

Vulture numbers were estimated from road counts carried out in West Africa (Burkina Faso, Mali, and Niger) and East Africa (Kenya, Uganda) over two time periods: 1969-1973 and 2003 in West Africa, and 1974-1988 and 2008-2013 in East Africa (Figure 2; Table S3). Field methods used in West Africa are described by Rondeau & Thiollay, who surveyed the same routes (transects) during the same months in each time period. In East Africa during 1974-1988, surveys were also carried out by J.M.T., using the same methods as applied in West Africa. The methods used in East Africa during 2008-2013 (;) were similar to those applied by J.M.T., but involved different observers.
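As a rough sketch of the bootstrap step described above (illustrative only: the per-study annualized rates below are invented, and the original analysis was carried out in R, not TypeScript):

// Bootstrap the median of a set of annualized change rates (n = 1,000 resamples),
// returning the observed median and a 95% percentile interval, as in the Methods.
function bootstrapMedian(rates: number[], reps = 1000): { median: number; lo: number; hi: number } {
  const median = (xs: number[]): number => {
    const s = [...xs].sort((a, b) => a - b);
    const m = Math.floor(s.length / 2);
    return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
  };
  const medians: number[] = [];
  for (let i = 0; i < reps; i++) {
    // Resample the rates with replacement and record the resample's median.
    const resample = rates.map(() => rates[Math.floor(Math.random() * rates.length)]);
    medians.push(median(resample));
  }
  medians.sort((a, b) => a - b);
  return {
    median: median(rates),
    lo: medians[Math.floor(0.025 * reps)],
    hi: medians[Math.ceil(0.975 * reps) - 1],
  };
}

// Invented example: annualized change rates from several studies in one region.
console.log(bootstrapMedian([-0.061, -0.043, -0.052, -0.048, -0.070]));

The percentile bounds here correspond to the 95% interval shown as vertical bars in Figure 1.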
To ensure comparability, we limited the analysis to those transects showing at least partial spatial overlap between the two time periods. To determine the effects of protected area (PA) status on changes in vulture abundance, we distinguished between transects situated within PAs (National Parks, National Reserves, Wildlife Reserves, Forest Reserves, Conservancies, and Game Ranches) and those outside of PAs. In each case, the number of birds detected per 100 km was calculated for each transect in each survey year, and the mean (±s.e.) number of birds detected per 100 km was calculated for all transect years, grouped according to PA status, region, and time period. Sufficient data were available to compare detection rates for five of the eight species considered.

Projected population changes over three generations

We estimated the annualized rate of change in abundance for each of the eight vulture species, across a range of study sites surveyed in more than one time period, extracted from 13 published and 3 unpublished accounts (Table S2). The survey methods used often varied between locations, and included road transects (individuals detected per 100 km of transect) and breeding surveys (occupied nests at cliff sites, tree nest densities). In each case, we converted the overall change (C), observed over a specified time period (t), in years, to an annualized rate of change (r), using the formula r = -(1 - (1 + C)^(1/t)). For each species, we calculated the median, Q1 and Q3 annualized change rates from all locations for which estimates were available, using the quantile function in R (3.0.1: R Development Core Team 2009; Tables 1 & S4). Details of the algorithm used are given in the Supplementary Information: "Quartile estimation". From the median, Q1 and Q3 annualized change rates, we calculated the rates of change expected over three generations, using the formula -(1 - (1 + r)^(3·gl)), where gl = estimated generation length for the species in question. Generation length estimates were provided by BirdLife International (unpublished data; Table S4).

Major threats to vulture populations

We assessed the major threats to African vultures based on quantitative data drawn from peer-reviewed articles, unpublished and newspaper reports documenting vulture deaths during 1961-2014 (Table S5). These were assigned to four categories: Poisoning, including intentional killing (e.g., retaliation by poachers to avoid detection) and unintentional killing (feeding on poison-laced carcasses intended to kill livestock predators); Trade in traditional medicine; Killing for food; and Electrical infrastructure: collision with power lines and wind turbines, and electrocution. The interrelated effects of changes in habitat, food availability, and human disturbance were also considered, and are discussed below.

Results

Population change assessments were available from 22 African countries, covering 58% of Africa's land surface. Of 95 national populations assessed, 85 (89%) were either nationally extinct or had experienced severe declines (>50%) or strong declines (>25%; Table S1). Tanzania is the only country in which only half of the species historically present have shown evidence of a decline. Populations are declining throughout Africa, with West and East Africa showing the greatest declines per annum (Figure 1). Although declines were generally greater in unprotected areas, substantial declines were also evident within protected areas for the five species assessed in both East and West Africa (Figure 2).
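Returning to the two conversion formulas given in the Methods above, a minimal TypeScript sketch (the example numbers are illustrative only, drawn loosely from the abstract rather than from Table S4):

// Annualized rate of change r from an overall proportional change C over t years:
// r = -(1 - (1 + C)^(1/t)), equivalently (1 + C)^(1/t) - 1.
function annualizedRate(overallChange: number, years: number): number {
  return Math.pow(1 + overallChange, 1 / years) - 1;
}

// Change projected over three generations, given r and a generation length gl (years):
// -(1 - (1 + r)^(3 * gl)).
function threeGenerationChange(r: number, generationLengthYears: number): number {
  return -(1 - Math.pow(1 + r, 3 * generationLengthYears));
}

// Illustrative only: a 62% decline observed over 30 years, with a 16.6-year generation length.
const r = annualizedRate(-0.62, 30);         // roughly -0.032 per year
console.log(threeGenerationChange(r, 16.6)); // roughly -0.80, i.e. about an 80% decline over three generations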
These trends have been broadly consistent between regions and across species, where this has been measured. To estimate decline rates across Africa, we determined the median decline rate for each species, drawn from 16 studies conducted in 12 countries (Table 1). While sample sizes varied between species, and were particularly small in the case of Bearded, Egyptian, and Cape Vulture (three sites each), the majority of species were each surveyed in at least six countries. The most rapid declines had occurred in White-headed, Rüppell's, Cape, and Egyptian Vulture (Table 1; Figure 3). The median decline rates for these four species varied between 5.1% and 6.1% p.a., and averaged 4.6% p.a. for the eight species assessed. Combined with long generation lengths (mean: 16.6 years) and low annual fecundity, these declines meet or exceed the threshold for species qualifying as Critically Endangered (IUCN 2012; seven species) or Endangered (African populations of the Bearded Vulture; Figure 3). To further evaluate each species' threat status, we examined three measures: the extent to which the species' median and quartile decline rates exceeded the CR threshold (an 80% decline over three generations), the proportion of range states for which trend data were available (BirdLife International 2014), and the extent to which the species' global range lies within Africa. These measures suggest that the case for uplisting species to CR is more robust for Rüppell's and Cape Vulture, followed by White-backed, Hooded, and White-headed Vulture (scoring equally), and by Lappet-faced Vulture (Table S6). The global threat status of Egyptian and Bearded Vulture is less clear, since their ranges outside of Africa extend from southern Europe to South Asia. Of 7,819 vulture deaths recorded across 26 countries (Table S5), 61% were attributed to poisoning, 29% to trade in traditional medicine, 1% to killing for food, and 9% to electrocution or collision with electrical infrastructure (Figure 4). Note, however, that since detection and reporting rates are likely to vary in relation to threat category, these comparisons should be treated with caution.

[Figure 4 caption (partial): ... Table S5). "Poisoning" includes dead vultures that were victims of intentional or unintentional poisoning. "Trade in traditional medicine" indicates the number of vultures found dead without their heads, or the number of vultures or their parts counted on sale in markets. "Killing for food" indicates the number of dead vultures or their parts counted either when traders were observed at markets or after they were arrested. "Electrical infrastructure" is the number of vultures found electrocuted below power lines or other electrical infrastructure.]

Discussion

Just as in Asia, African vultures are in crisis, their populations declining at a rate which, in at least six cases, meets or exceeds the threshold for species qualifying as Critically Endangered. There are, however, two important distinctions between the Asian and African vulture crises. First, to date, the rate of decline evident among the four worst-affected African species (equivalent to 41-50% per decade) has been substantially lower than in Asia (>96% per decade), affording governments the opportunity to enact and enforce legislation to regulate the use of pesticides and other poisons, and hence to reduce the key threat to vultures.
There thus remains the potential to avoid the environmental consequences of a collapse in this functionally important group, and the complexities and expense associated with captive breeding and reintroduction. Second, while poisoning and trade in traditional medicine together pose the most serious threat to African vultures, there is a range of other factors involved in their decline that may prove difficult to resolve. African vultures are often the unintended victims of poisoning incidents, in which carcasses are baited with highly toxic agricultural pesticides to kill carnivores such as lions, hyenas, and jackals (Ogada 2014), or to control feral dog populations (Abebe 2013). Furthermore, the recent rapid increase in elephant and rhino poaching throughout Africa has led to a substantial increase in vulture mortality, as poachers have turned to poisoning carcasses specifically to eliminate vultures, whose overhead circling might otherwise reveal the poachers' illicit activities (Roxburgh & McDougall 2012; Ogada 2014). Consequently, the decline rates estimated here may have accelerated sharply in recent years; since July 2011, there have been at least 10 poisoning incidents that have, collectively, killed at least 1,500 vultures in six southern African countries (37-600 birds per incident; Ogada 2014). The illegal trade in vulture body parts for use in traditional medicine is a significant threat that is increasing in intensity (;Saidu & Buij 2013). Vulture body parts have long been valued in many African cultures, especially in South and West Africa, where some believe that they cure a range of physical and mental illnesses, improve success in gambling and business ventures, or increase intelligence in children (Beilis & Esterhuizen 2005;;Saidu & Buij 2013). Similarly, although the consumption of vultures as bushmeat in some West African countries (e.g., Nigeria and Ivory Coast; Rondeau & Thiollay 2004; Thiollay 2006; Saidu & Buij 2013) may be a particular regional concern, smoked vulture meat is known to be trafficked internationally (Rondeau & Thiollay 2004), and our findings suggest that, together, poisoning and the illegal trade in vulture body parts for medicines or as bushmeat pose a substantial threat, and on a continental scale. African vultures are also frequent victims of electrocution, particularly in southern and North Africa, where there has been an increase in electrical infrastructure development from power lines and wind farms. "Green energy" initiatives such as wind farms can be detrimental to vultures if bird-friendly designs and careful placement of turbines and power lines are not observed (;Rushworth & Krüger 2014). Other threats that are more difficult to quantify include reduction of habitat, disturbance at nest sites, and food declines. Habitat loss reduces nest site availability for disturbance-sensitive, tree-nesting vultures (Monadjem & Garcelon 2005;), including Hooded, White-backed, Lappet-faced, and White-headed Vultures. Disturbance around breeding cliffs has resulted in nest failures (Borello & Borello 2002), and the illicit harvesting of eggs and chicks (G. Abert, in litt. in Rondeau & Thiollay 2004; Ogada & Buij 2011), as well as recreational rock climbing (Rondeau & Thiollay 2004), all further threaten Africa's vultures. The impact of wildlife declines on the food supply of vultures is difficult to assess, but has likely affected populations, most substantially inside West Africa's protected areas. Craigie et al.
recorded a composite 59% decline in large mammal populations inside protected areas in 18 countries during 1970-2005, with the greatest regional decline (85%) recorded inside West Africa's protected areas. However, large vulture declines in West Africa during this period were greatest outside of protected areas (-98%), where wild ungulates were already scarce in the 1960s (Thiollay 2006). Also, Hooded Vultures already depended almost entirely on anthropogenic food resources in the 1960s, while the other vulture species fed extensively on livestock (Thiollay 1977; Scholte 1998), populations of which have more than doubled since the 1960s (FAO 2014). This increase will have been offset partly by the modernization of livestock management and improved sanitation at slaughterhouses, impacting mainly on Hooded and Egyptian Vultures (Thiollay 2006; Ogada & Buij 2011;).

Conservation needs and actions

The situation in Africa requires that a number of environmental and cultural issues are addressed. These were outlined in a resolution to African governments by the participants of the 2012 Pan-Africa Vulture Summit, where the following specific recommendations were made ():
- Effectively regulate the import, manufacture, sale, and use of poisons, including agricultural chemicals and pharmaceutical products known to be lethal to vultures.
- Legislate and enforce stringent measures to prosecute and impose harsh penalties on perpetrators of poisoning and those illegally trading in vultures and/or their body parts.
- Ensure appropriate levels of protection and management for vultures and their breeding sites.
- Ensure that all new energy infrastructure is vulture-friendly and that existing unsafe infrastructure is modified accordingly.
- Support research, capacity building, and outreach programs for the conservation and survival of healthy vulture populations.
We suggest prioritizing the regulation of pesticides and other poisons as an action likely to have the most significant and positive impact, not just for vultures but for all scavengers and predators targeted by pastoralists. The Vulture Specialist Group of the IUCN has made a similar appeal to African governments (IUCN 2013). In November 2014, the Conference of the Parties of the Convention on Migratory Species (CMS) formally adopted a set of guidelines to tackle causes of poisoning (CMS 2014a), which, although not legally binding, will be a significant step toward recognizing and reducing vulture poisoning. The CMS recommendations include prohibiting the use of poison baits for predator control, creating or improving enforcement legislation, and restricting access to highly toxic substances (CMS 2014b). National governments must work with conservation NGOs to halt the illegal trade in vultures as bushmeat and for traditional medicine. More effective law enforcement is needed to curb the illegal hunting and sale of vulture meat and body parts. Public awareness campaigns are also needed to highlight the dangers and the potential health implications of extracting traditional medicines from vultures, of which approximately 40% are killed using poisons (;Saidu & Buij 2013). Further study is needed to determine the residue levels of toxic pesticides in vultures used for traditional medicine. National Energy Ministries and energy companies need to work with conservation NGOs to ensure that existing and future energy-related developments are vulture-friendly, and to modify unsafe designs.
Many African countries have adopted the resolution on Power Lines and Migratory Birds that applies to migratory vultures (CMS 2011). Best practice guidelines for power lines () and for wind energy () need to be integrated into national legislation, and proposed developments that will adversely affect important vulture populations should be modified or relocated (Rushworth & Krüger 2014). Research is urgently needed over vast areas of Africa where energy developments are proposed (see http://www.africa-energy.com), but the potential impacts on vultures and other at-risk species are not known. The Asian Vulture Crisis has highlighted the important link between vultures and human health, and shown us that when vulture numbers fall to critical thresholds, reestablishing their populations is a slow, difficult, and expensive undertaking (). Our findings provide the first continent-wide estimates of decline rates in Africa's vultures. They confirm that African vulture populations are in steep decline and will require governments to act now to avoid the environmental and social consequences of losing what are arguably nature's most important scavengers.

Acknowledgments

... for earlier discussions, and Will Cresswell for advice on parts of the analyses. Ian Rushworth, Corinne Kendall, and two anonymous reviewers provided comments that improved the manuscript.

Supporting Information

Additional Supporting Information may be found in the online version of this article at the publisher's web site:
Table S1. Declines in vulture populations, by country
Table S2. Sources used to estimate rates of change in vulture populations
Table S3. Vulture detection rates in West and East Africa during two survey periods
Table S4. Median, Q1 and Q3 change rates projected over three generations
Table S5. Sources used to estimate major threats to vultures
Table S6. Degree to which change rates are likely to represent global trends
List S1. Published sources cited in Tables S1-S6