package io.craigmiller160.mvp.listener;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;
import java.util.List;
/**
* An abstract implementation of <tt>ListenerDialog</tt> that
* mirrors the <tt>AbstractListenerView</tt> class. The main
* difference between the two is that this class does not
* also implement the <tt>PropertyChangeView</tt> interface,
* as dialogs have a short lifespan and in general won't
* persist long enough to need to receive <tt>PropertyChangeEvent</tt>s.
* If a unique case arises where such functionality would be
* necessary, that interface can still be implemented by
* a subclass of this.
* <p>
* Like <tt>AbstractListenerView</tt>, this class has been implemented to accept multiple external
* <tt>ActionListener</tt>s and store them in a list. This class
* also implements <tt>actionPerformed(ActionEvent)</tt> inherited
* from <tt>ActionListener</tt> as a final method. This implementation
* simply creates a new <tt>ActionEvent</tt> with the same action command
* as the one that was passed to it, but with this class as the event's
* source. This allows for the external listening controller(s) to
* abstractly access this class, and the <tt>getValueForAction(String)</tt>
* method specifically.
* <p>
* The <tt>getValueForAction(String)</tt> method inherited from the
* <tt>ListenerView</tt> interface will also need to be implemented by
* subclasses. The documentation for <tt>ListenerView</tt> should be
* consulted for the best way to implement this method.
* <p>
* <b>NOTE:</b> A view extending this class cannot also extend a GUI component class,
* so views using this API will have to rely on composition (wrapping around
* an instance of the GUI component they're building) rather than inheritance
* to create components.
* <p>
* <b>NOTE:</b> External listeners to receive <tt>ActionEvent</tt>s
* should be added to instances of <tt>ListenerDialog</tt> before the
* <tt>showDialog()</tt> method is invoked. For maximum flexibility,
* <tt>AbstractListenerView</tt> has been designed to also serve
* as a listener for dialogs implementing this interface, in case
* the dialog is created within a GUI class and can't add the controller
* directly as an <tt>ActionListener</tt>. If <tt>AbstractListenerView</tt>
* receives an <tt>ActionEvent</tt> from this dialog, it will safely
* pass it along to the controller without changing its source.
* <p>
* <b>THREAD SAFETY:</b> Swing is NOT thread safe. All
* methods in this class MUST be invoked on the <tt>EventDispatchThread</tt>.
*
* @author craig
* @version 2.0
* @see io.craigmiller160.mvp.listener.AbstractListenerView AbstractListenerView
*/
public abstract class AbstractListenerDialog
implements ListenerDialog {
/**
* List of listeners/controllers assigned to this class.
*/
private final List<ActionListener> listeners;
/**
* Constructor that initializes the listener list.
*/
public AbstractListenerDialog() {
listeners = new ArrayList<>();
}
/**
* Add the specified <tt>ActionListener</tt> to this
* dialog. The listener should function as an external
* controller for executing actions invoked by this dialog.
*
* @param listener the listener to add to this dialog.
*/
@Override
public void addActionListener(ActionListener listener) {
listeners.add(listener);
}
/**
* Remove the specified <tt>ActionListener</tt> from this
* dialog.
*
* @param listener the listener to remove from this dialog.
*/
@Override
public void removeActionListener(ActionListener listener) {
listeners.remove(listener);
}
/**
* Passes an event from an actionable component in the dialog to the
* listeners assigned to this dialog.
*
* @param event the event that needs to be executed.
*/
@Override
public void actionPerformed(ActionEvent event) {
ActionEvent newEvent = new ActionEvent(
this, ActionEvent.ACTION_PERFORMED, event.getActionCommand());
for(ActionListener l : listeners){
l.actionPerformed(newEvent);
}
}
}
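The event re-sourcing described in the Javadoc can be seen in a minimal, self-contained sketch. The class names below (`MiniListenerDialog`, `SaveDialog`, `RelayDemo`) are illustrative stand-ins, not part of this library; the interface is stripped down to only the methods exercised here:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal stand-in for the ListenerDialog interface,
// reduced to the two methods this sketch needs.
interface MiniListenerDialog extends ActionListener {
    void addActionListener(ActionListener listener);
}

// Sketch of a concrete dialog: inner components fire events at the
// dialog, which relays them with itself as the new event source.
class SaveDialog implements MiniListenerDialog {
    private final List<ActionListener> listeners = new ArrayList<>();

    @Override
    public void addActionListener(ActionListener listener) {
        listeners.add(listener);
    }

    @Override
    public void actionPerformed(ActionEvent event) {
        // Re-source the event so controllers see the dialog, not the button.
        ActionEvent newEvent = new ActionEvent(
                this, ActionEvent.ACTION_PERFORMED, event.getActionCommand());
        for (ActionListener l : listeners) {
            l.actionPerformed(newEvent);
        }
    }
}

public class RelayDemo {
    public static void main(String[] args) {
        SaveDialog dialog = new SaveDialog();
        final Object[] seen = new Object[2];
        dialog.addActionListener(e -> {
            seen[0] = e.getSource();
            seen[1] = e.getActionCommand();
        });
        // Simulate a button inside the dialog firing an event.
        dialog.actionPerformed(new ActionEvent(
                new Object(), ActionEvent.ACTION_PERFORMED, "SAVE"));
        System.out.println(seen[0] == dialog); // true: source was remapped
        System.out.println(seen[1]);           // SAVE: command preserved
    }
}
```

Because the relayed event's source is the dialog itself, an external controller can cast the source back to the `ListenerDialog` type and call `getValueForAction(String)` without knowing anything about the dialog's internal components.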
Soldiers could be out of the country by the summer, one official said.
An Afghan boy who shot to fame after wearing a Messi shirt made out of a bag has fled his home after the Taliban threatened to kill him.
Thirty-two G4S employees were also injured, five seriously, in the suicide attack on a compound on Wednesday.
No-one claimed responsibility for the attack, but both Taliban and Islamic State group insurgents are active in Kabul.
The clerics were attacked as the Prophet Mohammed’s birthday was marked.
Afghanistan's first parliamentary elections in eight years have been marred by violence, with attacks leaving at least 36 people dead.
Elin Ersson booked a seat on a flight from Gothenburg to Turkey after hearing that a failed asylum seeker was being deported on the flight.
At least 12 people were killed before the start of the holiday ceasefire brokered with the Taliban.
The rise comes despite a major Afghan-US operation to wipe the terror group out on the Afghanistan-Pakistan border.
Defence Secretary Gavin Williamson said the fees to apply for indefinite leave to remain should be waived.
The Partisan Foundations of Judicial Campaign Finance

In this comprehensive empirical analysis of judicial campaign finance, we find a predictive relationship between contributions to judges and judicial decisions favorable to contributors, but we also conclude that the intuitive narrative of direct exchanges of money for decisions between individual contributors and judges is too simplistic to describe the larger partisan foundations of modern judicial elections. The Republican and Democratic Parties broker the connections between contributors and their candidates, and we argue in our work that parties, not elections, seem to be the key to money's influence on judges. We identify broad liberal and conservative political coalitions, allied roughly with the Democratic and Republican Parties, whose collective contributions exercise systematic ideological influence on judges who receive their money. Although the Supreme Court recognized the potential for judicial bias in cases involving major campaign contributors, we find that campaign finance predicts judicial decisions not simply in the most extreme cases, but systematically along partisan lines across the range of cases. We argue, based on our findings, that parties play an indispensable, but so far underrecognized, role in connecting campaign contributions and judges. Just as importantly, however, we identify a striking partisan asymmetry in judicial campaign finance between the major parties. While Republican judges respond only to campaign finance contributions from conservative sources and do not appear to be influenced by those from liberal sources, Democratic judges are influenced by campaign support from both liberal and conservative sources and thus are uniquely cross-pressured from opposite directions.
Our analysis, as a result, shows that the influence of campaign finance helps reinforce Republican conservatism and destabilize Democratic liberalism in judicial decision making, netting out in a conservative direction between the two parties.
SULLIVAN, MO – The Sullivan Police Department is on the lookout for a missing woman with dementia. The department says 69-year-old Betty Alexander has not been seen since this past Thursday, April 11th around 3 pm.
Ms. Alexander is a white female, 5 foot 2 inches, 145 pounds, has brown hair and blue eyes. She wears her hair at shoulder length and may not be wearing shoes.
She takes medication but is not believed to be in possession of any at this time.
If you have seen or know of the whereabouts of Betty Alexander, please call 911 or the Sullivan Police Department at 573-468-8001.
Volunteers joined first responders on Monday going block by block in search for Alexander. She lives near the fire station on S. Church Street. Her family says Alexander does not drive.
Alexander’s daughter Tonya Tolliver hopes the public will keep their eyes open and report anything that might help.
“Even if you think it’s something little, the Sullivan Police Department is willing to look into it to see if we can find any leads,” she said.
“That's what we need is a phone tip,” said Sullivan Fire Protection District Capt. Damon Sumpter.
// Parses resolution preset argument to enum value.
ResolutionPreset ParseResolutionPreset(const std::string& resolution_preset) {
if (resolution_preset.compare(kResolutionPresetValueLow) == 0) {
return ResolutionPreset::kLow;
} else if (resolution_preset.compare(kResolutionPresetValueMedium) == 0) {
return ResolutionPreset::kMedium;
} else if (resolution_preset.compare(kResolutionPresetValueHigh) == 0) {
return ResolutionPreset::kHigh;
} else if (resolution_preset.compare(kResolutionPresetValueVeryHigh) == 0) {
return ResolutionPreset::kVeryHigh;
} else if (resolution_preset.compare(kResolutionPresetValueUltraHigh) == 0) {
return ResolutionPreset::kUltraHigh;
} else if (resolution_preset.compare(kResolutionPresetValueMax) == 0) {
return ResolutionPreset::kMax;
}
return ResolutionPreset::kAuto;
}
In the latest scientific study to pin the blame for these kids today on everyone but their self-absorbed parents, the Journal Of Studies On Alcohol And Drugs has published findings that say children who are allowed to watch R-rated films are much more likely to start drinking at an early age. According to the report, researchers who surveyed nearly 3,600 New England middle-school students over the course of two years discovered that “3 percent of the kids who said their parents never allowed them to watch R-rated movies said they had started drinking alcohol, compared with 19 percent of those who were sometimes allowed to watch R-rated movies and 25 percent of those who said they were allowed to watch such movies ‘all the time.’” And as with most studies involving kids and their behavior, correlation naturally equals causation, with author Dr. James D. Sargent pointing out that his findings seem to support previous, similar studies that suggested exposure to “adult content” can also lead to drinking, smoking, sex at a young age, and violence. Well, that's ironclad then, isn't it?
Of course, what it does not point out—as with most studies that try to explain kids’ behavior using pop culture as a scapegoat, and we can’t believe we’re still having this conversation in 2010—is that kids who are “allowed to watch R-rated movies all the time” most likely have parents who are already fairly libertine about what their kids are doing, which means many of them (say, 25 percent) basically do whatever they want, including drinking underage—or, at least, bragging to some nerd scientist that they do. Also not taken into consideration, as PopWatch points out: The sweeping generalization that all R-rated movies are created the same, and that the “change in personality” they can jumpstart in impressionable young folk is always a bad thing; imagine, for example, the sheltered lives being led by those kids who have been raised to think that all R-rated films are inherently immoral. But then, we don’t have a random polling of middle-school kids to quantify any of that, so science wins this round.
Capitalism
I have always been a firm believer in rewarding people for the risks they take but in the last few years I have had to consider this more closely in the context of the financial sector, capitalism and neoliberalism as my understanding of the definitions of each was flawed.
I always thought the concept of capitalism is that entrepreneurial risk deserves an appropriate reward for taking the risk in the first place, i.e. I decide to set up a business selling chocolate teapots and I am loss making in the early years, have pumped every penny of available funds I have into the business to the point I am struggling with my bills, putting food on the table, etc… Then one day my chocolate teapots become trendy and everyone is buying them. The years of hardship are all eventually worthwhile and I begin to make significant profits and live a comfortable life. At that stage, there is a divergence of opinion as to the appropriate amount of tax I should pay once my business succeeds but the general idea that I took a risk and was now being rewarded for taking that risk is something that I believe is essential for innovation and growth in local economies. How that innovation and growth is managed in terms of taxation and regulation is a separate matter.
Upon reading a little further on the actual definition of capitalism, I realized that it can be basically defined as follows (per http://www.investopedia.com/terms/c/capitalism.asp):
“A system of economics based on the private ownership of capital and production inputs, and on the production of goods and services for profit. The production of goods and services is based on supply and demand in the general market (market economy), rather than through central planning (planned economy). Capitalism is generally characterized by competition between producers.”
I don’t have an issue with the above, in theory, because it is basically saying that if I have an opportunity to innovate, I take a risk and invest in my idea and I am then rewarded for that risk at some point in the future (if I am successful). Alternatively, my idea fails and I take the hit on whatever investment I made in developing that idea and taking that risk.
The problem arises with the next piece of the definition:
“Other facets, such as the participation of government in production and regulation , vary across models of capitalism.”
In my naïve version of capitalism, everyone in society would have an equal opportunity to take an entrepreneurial risk and either succeed or fail with their idea. Some people may not think like an entrepreneur and will form part of the “labour force” but will still be rewarded fairly and appropriately for the work they do to ensure they live a comfortable life. However, moving between the two categories should be encouraged and facilitated.
What has actually happened is that the profitability of the successful company or individual that took the risk on Day 1 has been engulfed in this never-ending thirst for profitability and success. The difference in wealth between the entrepreneur and the labour force would likely be fairly large even in a scenario where the labour force are well paid and enjoy a comfortable standard of living and happiness – there is nothing wrong with that given the level of risk required when the entrepreneur starts out. However, in this endless thirst for profit and wealth, we now have a situation where the labour force are no longer valued and are just used as pawns in maximising wealth and profit for the entrepreneurs at the top, i.e. the labour force will likely have lower wages, few benefits, longer hours, etc. The gap in wealth increases drastically when the entrepreneur gets sucked into this thirst for excessive profit and wealth at the expense of the labour force.
However, the game does not stop there. You then have Government participation to try and regulate, drive production and tax appropriately. In theory, this should be a positive thing whereby the Government ensures that a robust system of taxation exists to ensure that the entrepreneurs are supported during their early years and that the entrepreneurs then give back through taxation when they hit the dizzy heights of success. Regulation may assist by ensuring that no company becomes a monopoly to ensure competitive pricing and a better deal for consumers. Most importantly, regulators should be there to ensure that nobody becomes “Too Big To Fail” as we have seen with the financial crisis in 2008!
Effectively, human nature has taken over to ensure that capitalism has created a platform for human greed to take centre stage. The regulators have encouraged that greed and embraced the idea that wealth creation and soaring profits for the few (1%) are a reasonable platform on which society can exist – resulting in the “Too Big To Fail” phenomenon. The taxation systems domestically and internationally have facilitated the greed by failing to follow the logic I set out above whereby entrepreneurs are assisted, encouraged, incentivized to innovate, invent, create, develop, etc. but repay that faith through the tax system when they become successful. It is not a difficult concept!
Neoliberalism – The Cuckoo of Economics
As an added kick in the teeth to the concepts set out above, let’s throw neoliberalism into the mix! Neoliberalism can be defined as follows:
“Neoliberalism is an approach to economic and social studies in which control of economic factors is shifted from the public sector to the private sector . Drawing upon principles of neoclassical economics, neoliberalism suggests that governments reduce deficit spending, limit subsidies, reform tax law to broaden the tax base, remove fixed exchange rates, open up markets to trade by limiting protectionism, privatize state-run businesses, allow private property and back deregulation.”
My naïve definition of capitalism suggests that it is a system designed for entrepreneurs to innovate, invent, develop, etc. by taking on the financial risk of failure of their business idea (if it fails) but also enjoying the reward (i.e. profitability) if the business becomes successful.
Neoliberalism does not encourage entrepreneurialism! Neoliberalism is the “cuckoo” of economics, i.e. cuckoos don’t bother building their own nests – they just lay eggs that perfectly mimic those of other birds and take over their nests (it’s the best analogy I could come up with!). Effectively, the public sector invests in developing public assets and companies that operate for the benefit of the taxpayer, e.g. energy, transport, health, education, etc. Then, under the concept of neoliberalism, private companies swoop in, lobby politicians and sweeten them up, use the media to portray the weakness and poor state of these state assets, then purchase these assets (privatization) or enter into lucrative service contracts (outsourcing such as PFI) so that wealth shifts out of the public sector and into the private sector (also known as shrinking the state).
There was no entrepreneurial risk for these private companies. They did not innovate, invent, create, develop, etc. anything of value to society. They simply used their wealth to pry valuable assets from the public sector with promises of efficiency, reduced costs to consumers, etc. (which of course has been proved to be nonsense with the privatisation of assets that has taken place in the UK since the IMF bailout in the 70’s and the Thatcher era). These assets are usually purchased at below market value and the value of the assets increases in a short period following the purchase by the private company (triggering quick profits for shareholders). Neoliberalism is based purely on greed and will never benefit the majority of people in society.
Too Big To Fail
Neoliberalism is also the first step into the “Too Big To Fail” fantasy. How did this happen? Privatisation of money creation, i.e. 97% of the money created in the UK is created by the private banking sector through debt. That was the first step on the ladder to the financial sector increasing their asset values so much through derivatives trading and other casino type financial activities. Certain banks were deemed to be so high value and so inextricably linked to the economy of countries like the UK that they were deemed to be “Too Big To Fail”. Investopedia explain “Too Big To Fail” as follows:
“The idea that a business has become so large and ingrained in the economy that a government will provide assistance to prevent its failure. “Too big to fail” describes the belief that if an enormous company fails, it will have a disastrous ripple effect throughout the economy“.
As well as the casino banking tendencies, Governments have facilitated the creation of “Too Big To Fail” banks through deregulation. The extract below from a Motley Fool article written by John Maxfield is worth digesting:
Banks are too big to fail because we — or, more accurately, our representatives in Washington with the help of the financial industry’s lobbyists — made them that way. Over the last 40 years, Americans have been force-fed the notion that oversight of the financial industry was unnecessary because, as then-Federal Reserve Chairman Alan Greenspan put it in 1998, participants in financial markets are “predominantly professionals that simply do not require the customer protections that may be needed by the general public”. One year later, Congress voted overwhelmingly in favor of the Gramm-Leach-Bliley Act, which repealed what was left of the Glass Steagall Act’s prohibitions against the intermingling of commercial and investment banking activities. “We have learned that government is not the answer,” Senator Phil Gramm said at the time. “We have learned that freedom and competition are the answers”.
The deregulation that took place in London’s financial sector equally facilitated the excessive growth in the financial sector to create these “Too Big To Fail” banks. Capitalist greed had kicked in although I still struggle to see the true entrepreneurialism at play here apart from very clever people finding different ways to package and sell debts and make bets on some. Is a gambler in a casino an entrepreneur or just someone who is very skilled at their job? Probably a mixture of both.
However, once the banks have become “Too Big To Fail”, the risk element is removed because Governments will step in to bail out the banks, thus making them risk free! All of the financial loss is transferred from the private banks to the taxpayers, which is effectively neoliberalism in reverse! Confusing, eh?!
So what is the solution? Well I would go hardcore and let the financial system crash and start afresh! If you believe in capitalism then that is exactly what should have happened! However, this infused neoliberalism means that there is a constant interaction between our Governments and the private sector whereby the citizens of a country are never at the forefront of Government policy and decision making (in the UK and USA specifically). Instead, the “labour force”, “taxpayers”, “lemmings” will be treated as inferior and will not be valued so that wealth can be stripped out of our hands and into the hands of the 1% who pull the Government and media strings!
An alternative solution proposed for the “Too Big To Fail” issue is to ensure that companies (banks in this instance) are forced to break up before they get to be “Too Big To Fail”. Almost like anti-monopoly laws. However, it would take a Government with great courage and values to regulate in this way to avoid “Too Big To Fail” organisations causing another huge shift in financial loss from the private to public sector.
There are no easy answers to these issues but hopefully this blog will provide some food for thought.
An Evaluation of Electroacupuncture at the Weizhong Acupoint (BL-40) as a Means of Relieving Pain Induced by Extracorporeal Shock Wave Lithotripsy

Background. Extracorporeal shock wave lithotripsy (ESWL) is the preferred option for urolithiasis treatment. However, it can induce considerable pain, so sedative anesthetics or analgesics are usually needed. The aim of this study was to develop an improved acupuncture-assisted anesthesia approach to pain relief. Methods. We conducted a single-blind, randomized controlled study in China Medical University Hospital. Patients treated by ESWL for upper urolithiasis were randomly divided into a control group, a sham-EA group, and a 100 Hz EA group. High-frequency electroacupuncture (EA) was applied at the Weizhong acupoint (100 Hz EA group) for 20 minutes prior to the ESWL. In the sham-EA group, the same procedures were performed as in the 100 Hz EA group, but no electric current was given to stimulate the acupoints. In the control group, no action was taken before the operation. Information including the number and dosage of analgesic requests, pain scores, vital signs, and satisfaction with the procedure was collected. Results. A total of 74 subjects were recruited, and we found that the interval to the first analgesic request, the number/total dosage of additional analgesics, the recovery time from anesthesia, and satisfaction were all better in both the 100 Hz EA and the sham-EA groups. The 100 Hz EA also showed better relief of painful sensations by delaying the onset of pain. Conclusions. Both 100 Hz EA and sham-EA can effectively relieve pain due to ESWL as well as reducing the dosage of opium-derivative analgesic used.

Introduction. Urolithiasis is one of the most commonly diagnosed diseases of the urinary system, and the prevalence rate in Taiwan is as high as 9%. This rate is showing a very significant increase with time.
The causes of urolithiasis can be quite varied and complicated; normally, it is classified as being of either external or internal origin. External origins refer to environmental factors such as geographic distribution, climate, season, water uptake, diet, and occupation. Internal origins refer to congenital biochemical factors (physiological characteristics) or anatomic characteristics such as heredity, age, and gender. At present, extracorporeal shock wave lithotripsy (ESWL) is the preferred option for the treatment of upper urolithiasis. In the early stages of lithotripsy's development, the electrical shock waves tended to be stronger, which caused perceptible pain for patients; as a consequence, general, spinal, or epidural anesthesia was often performed. In recent years, the development of new lithotripter models and a trend towards outpatient lithotripsy has resulted in lithotripsy moving toward being a painless treatment. This is because patients tend to benefit from a better and faster postoperational recovery in such circumstances. This trend has also been helped by better and more diversified sedative and anesthetic techniques. It is anticipated that the improvements in anesthetic technology and the enhancement of the lithotripsy apparatus will substantially reduce the amount of sedative anesthetic and analgesic administered, and this, in turn, will decrease the side effects that arise from these drugs. This will lead to a reduction in patient recovery time for both outpatients and inpatients as well as shorter hospital stays. Acupuncture is an important treatment approach in traditional Chinese medicine, and acupuncture anesthesia has long been critically acclaimed by medical researchers. This study was aimed at acquiring an in-depth understanding of the functionality of 100 Hz EA as a pain relief method among patients undergoing ESWL.
Patient Selection. After obtaining the consent of the institutional review board, seventy-four patients were recruited from the China Medical University Hospital. These patients suffered from upper urolithiasis and had a confirmed diagnosis from the urologist indicating treatment by ESWL. These patients were viewed as types ASA-I and ASA-II, which classify patients as generally in good health status or with only minor systemic disorders without functional abnormalities. The patient indications that were considered to be appropriate for ESWL included symptoms of hematuria, pain, hydronephrosis, or other urinary infections. The criteria for ESWL fitted the following profile: the size and width of the calculi in the ureter were equal to or smaller than 1.0 cm and/or the size of the calculi in ureter was equal to or smaller than 0.5 cm with obvious obstruction as observed by IVP examination. Experiment Grouping. Patients were divided by randomization into three groups, namely, the control group, the sham-EA group, and the 100 Hz EA group (each group was composed of 24-25 people). In the control group, no action was taken before operation. After lying prone to rest for 20 minutes, the patients underwent extracorporeal shock wave lithotripsy via a Compact Delta prototype produced by the Dornier Company, Germany. For the sham-EA group, before the operation, the patients lying prone on the treatment bed were subject to 75% ethanol sterilization of the Weizhong acupoint and a nonmeridian, nonacupoint target site 3 cm away from the Weizhong site. Two 30 gauge stainless steel acupuncture needles were inserted at the Weizhong acupoint and the nonacupoint on the leg of the affected side of urolithiasis. For both acupuncture sites, no "de-qi" was induced. 
Both acupuncture sites were set up with the electrostimulator (Trio 300 electrostimulator, 3-3-3 Toyotama-Minami, Nerima, Tokyo, Japan) connected to the patients (negative pole attached to the Weizhong site and the positive pole attached to the nonmeridian/collateral nonacupoint location). However, although the function key of the electrostimulator was switched on, no electric current was produced to stimulate the acupoint. After 20 minutes of sham-EA, the patients were subjected to ESWL. With the 100 Hz EA group, before the operation, the patients lay prone on the treatment bed and were subject to 75% ethanol sterilization of the Weizhong acupoint and a nonmeridian, nonacupoint target site 3 cm away from the Weizhong site. Two 30 gauge stainless steel acupuncture needles were inserted at the Weizhong acupoint and the nonacupoint, which was 3 cm away from the Weizhong, on the leg of the affected side of urolithiasis. Only the Weizhong acupoint was stimulated and "qi" was induced (the patient reported the sensation of "de-qi"). The other point had no "de-qi". Both acupuncture sites were set up with the electrostimulator (Trio 300 electrostimulator, 3-3-3 Toyotama-Minami, Nerima, Tokyo, Japan) connected to the patients (negative pole attached to the Weizhong site and the positive pole attached to the nonmeridian/collateral nonacupoint location) to form a pair of electric circuits using a 100 Hz frequency. The pulse wave (sphygmogram) was set with a width of 100 μs and an appropriate amperage of 1∼2 mA, at a level where the patient was aware of the sensation and the muscles were observed to pulsate slightly. The patients were stimulated for 20 minutes and then underwent ESWL. Extracorporeal Shock Wave Lithotripsy Protocol.
After the preoperative X-ray images of patients were read to confirm the location and size of the calculi, physiological monitors (EKG, BP, and PaO2) were connected to the patients, and dormicum (0.04 mg/kg) was injected through an intravenous drip and lithotripsy initiated. When a patient started to raise a hand or move in a manner that interfered with the lithotripsy operation, a pain score was obtained from the patient. When the pain score exceeded 3 points, this immediately prompted one administration of alfentanil (3 μg/kg). The duration of the lithotripsy operations was between 50 and 60 minutes and the number of shock waves generated was about 3000. During the process of lithotripsy, the strength of the shock wave gradually increased. The strength of the shock wave was programmed at 11 kV for shots 1-100, at 12 kV for shots 101-500, and at 13 kV for the remaining shots. Recorded Items. The following information was recorded during the operation: patients' requests for analgesia indicated by raising their hand, recorded as the time of first raising the hand; the number of times the hand was raised for analgesia; the pain visual analogue scale (VAS) values indicated by the patients; the dosage of opium-derivative analgesic given; the vital signs such as blood pressure, heart rate, and arterial oxygen partial pressure; and any relevant side effects induced by sedatives and opium-derivative analgesics such as nausea, vomiting, dizziness, and itching. In addition, the following information was also recorded in the recovery room: any side effects after the operation; the recovery time after anesthesia; and a grading of the level of satisfaction with respect to pain control during the operation. Statistical Analysis. SAS 8.01 computer software was used for the statistical analysis.
The distribution of age, height, weight, size of calculi, strength of lithotripsy shock waves, number of shock waves generated, operational time of the lithotripsy, number of times the patients' hands were raised, the dosage of the drugs, the recovery time after anesthesia, the VAS values, and so forth were all expressed as median (25%-75%) and compared by the Kruskal-Wallis test. A Scheffé test was performed post hoc to confirm any statistically significant differences (p < 0.05) between the three groups. When the first pain score was collected, Fisher's least significant difference method was used post hoc to confirm any statistically significant differences (p < 0.05). The time when the patients first raised their hands was analyzed using the Kaplan-Meier method for survival analysis in order to estimate the survival function of the time of first raising a hand, and the log-rank test was carried out to verify any statistically significant difference (p < 0.05) between the groups. Gender, ASA, location of calculi, side effects of the analgesic, patient satisfaction regarding pain controllability, and relevant side effects caused by the opium-derivative analgesic were assessed by the χ² test or Fisher's exact test; specifically, when fewer than 20% of cells had expected values smaller than 5, the χ² test was used and, conversely, when more than 20% of cells had expected values smaller than 5, Fisher's exact test was used. The raising of a hand during the operation to request additional drug dosage was evaluated by the χ² test and, whenever a significant difference was detected between groups, a logistic regression approach was adopted post hoc to identify the groups with significant variation. Demographic Data Analysis of Each Group.
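The test-selection rule described above (χ² unless too many cells have small expected counts) can be sketched as follows. This is an illustrative pure-Python helper, not the authors' SAS code; it returns only which test the rule selects:

```python
def choose_categorical_test(table):
    """Decide between the chi-square test and Fisher's exact test for a
    contingency table, per the rule in the text: chi-square when no more
    than 20% of cells have an expected count below 5, Fisher's otherwise."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Expected count for cell (i, j) under independence: row_i * col_j / n
    expected = [[r * c / n for c in col_totals] for r in row_totals]
    cells = [e for row in expected for e in row]
    frac_small = sum(1 for e in cells if e < 5) / len(cells)
    return "fisher_exact" if frac_small > 0.20 else "chi_square"
```

In practice the selected test would then be run with a statistics package (e.g. `scipy.stats.chi2_contingency` or `scipy.stats.fisher_exact`).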
When the demographic data of the seventy-four patients undergoing treatment with ESWL were analyzed, it was found that gender, ASA physical status, age, height, and weight did not show any statistically significant variation (Table 1). Analysis of the Location and Size of the Calculi. The location of the upper urolithiasis, whether on the left or right side, at the ureteropelvic junction (UPJ), or at the upper part of the ureter, together with the size of the calculi, did not show any statistically significant variation (Table 2). Analysis of the Strength of the Lithotripsy Shock Waves, the Shock Wave Count, and the Duration of the Lithotripsy Operation. The factors associated with the lithotripsy operation for upper urolithiasis did not demonstrate any statistically significant variation (Table 3). Drug-Related Data Analysis during Operation (Table 4). Number of patients who did not raise their hands for analgesic during the operation: there were more such patients in the 100 Hz EA group than in the control group, and the difference was statistically significant (p < 0.05). The time until the first raising of a hand for analgesia during the operation: compared to the control group, the 100 Hz EA group and the sham-EA group took 35 and 21 minutes longer, respectively, before raising their hands, and this was statistically significant (p < 0.001). The number of hand-raising events for analgesia during the operation in the 100 Hz EA group and the sham-EA group was two and one fewer than in the control group, respectively, and this was statistically significant (p < 0.001). The total dosage of administered anesthetic in each group showed no statistical difference. The dosage of analgesic provided to the 100 Hz EA group was 210.00 μg/kg less than that provided to the control group, and this was statistically significant (p < 0.01).
The total dosage of analgesic requested by raising a hand during the operation in the 100 Hz EA group and the sham-EA group was 462.00 μg/kg and 309.17 μg/kg less than in the control group, respectively, and this was statistically significant (p < 0.001). When the recovery time after anesthesia was analyzed, the 100 Hz EA group and the sham-EA group required 10 minutes less recovery time than the control group, and this was a statistically significant variation (p < 0.001). Time Survival Analysis for First Hand Raising for More Analgesic (Figure 1). In total, 25% of the patients had requested their first analgesic by 3.25 minutes in the control group, by 10 minutes in the sham-EA group, and by 18.25 minutes in the 100 Hz EA group. In total, 40% of the patients had requested their first analgesic by 7 minutes in the control group, by 24 minutes in the sham-EA group, and by 25 minutes in the 100 Hz EA group. At the end of the study, 20% of the patients had not raised their hands for analgesic in the control group, 45.83% had not in the sham-EA group, and 56% had not in the 100 Hz EA group. Pain Score Analysis during Operation (Table 5). The pain score for the 100 Hz EA group at first hand raising for analgesic was 3 points lower than that of the control group, and this was statistically significant (p < 0.05). The highest pain score during the operation for the 100 Hz EA group was 4 points less than that of the control group, and this was statistically significant (p < 0.01). The pain scores for the 100 Hz EA group and the sham-EA group after analgesic was administered were 1 point lower than in the control group, and this was statistically significant (p < 0.001). Each group was analyzed and compared based on the side effects of the analgesia (Table 6), but there was no statistically significant variation.
Pain Controllability Satisfaction Analysis (Table 7). Within the 100 Hz EA group, 80% of the patients were very satisfied, 20% were satisfied, and 0% were slightly satisfied, unsatisfied, or completely unsatisfied. Within the sham-EA group, 54% of the patients were very satisfied, 46% were satisfied, and 0% were slightly satisfied, unsatisfied, or completely unsatisfied. Within the control group, 16% of the patients were very satisfied, 32% were satisfied, 52% were slightly satisfied, and 0% were unsatisfied or completely unsatisfied. Thus both the 100 Hz EA group and the sham-EA group had a high percentage of patients showing a high level of satisfaction compared to the control group, and this was statistically significant. Discussion. In acupuncture theory, the bladder meridian separates into two submeridians at the posterior neck and passes through the lumbar region; the two submeridians then merge at the Weizhong acupoint behind the knee joint. Hence, the Weizhong acupoint can be used as an important acupoint in treating back and waist pain problems, and on this basis, diseases originating from the waist area can be treated. According to traditional Chinese medicine, upper urolithiasis is located in this waist area, where the bladder meridian enters the abdominal cavity connecting the kidneys to the bladder. Furthermore, the peripheral and organ pain felt during ESWL also arises inside this waist area. On this basis, we targeted the Weizhong acupoint as the priority site for acupuncture before proceeding with ESWL. The Weizhong acupoint is located behind the knee joint, at the midpoint of the popliteal crease, between the biceps femoris tendon and the semitendinosus tendon. Current ESWL-related acupuncture studies have used different acupoint selections, and most studies have adopted simultaneous stimulation at multiple sets of acupoints. Sun et al.
implemented electroacupuncture to treat patients undergoing ESWL and showed that 85% of patients did not require analgesic drugs to relieve pain. Similarly, the results of Wang et al. showed that 85% of electroacupuncture and 70% of manual acupuncture patients did not require analgesic drugs. Chan et al. also reported similar results. This suggests that acupuncture and electroacupuncture are both methods of effective analgesia for patients undergoing ESWL, which agrees with the results of this study. Specifically, this study abides by traditional Chinese medical theory and follows the ideas that "diseases are managed where meridians pass by," "meridians flow by where the principal target treatment site resides," and "if the disease attacks the head, the foot should be treated; if the disease attacks the waist, the popliteal area should be treated." Based on the above approach, the Weizhong acupoint is justified as the site of choice in this study, as it lies on the bladder meridian. [Tables 5 and 6 (excerpt): max pain score 4.00 (3.00-5.00), 3.00 (0.00-3.50), and 0.00 (0.00-3.00) in groups I-III, p = 0.009 (I > III); controlled pain score 1.00 (0.00-2.00), 0.00 (0.00-0.00), and 0.00 (0.00-0.00), p < 0.001 (I > II, I > III); values are median (25%-75%), with *p < 0.05, **p < 0.01, ***p < 0.001. Postoperative dizziness score 1 in 7 (28%), 1 (4%), and 4 (16%) patients; score 2 in 1 (4%), 1 (4%), and 0 (0%); scores 3 and 4 in none.] The best-known mechanism of acupuncture analgesia is via endogenous opiates and their receptors. Different kinds of endogenous opiates, such as β-endorphin, enkephalin, endomorphin, and dynorphin, reportedly act as frequency-dependent factors in EA.
EA at low frequency (2 Hz) accelerated the release of β-endorphin and enkephalin in the CNS, whereas EA at high frequency (100 Hz) accelerated the release of dynorphin. However, in our previous study, we found that high frequency EA is more effective than low frequency EA in clinical evaluation. We examined the effects of preoperative EA at classical bilateral acupuncture points (Zusanli, also known as ST-36) on postoperative pain. Patients undergoing lower abdominal surgery were randomly assigned to four treatment regimens: control; sham-EA (needle insertion without electrical stimulation); low-EA (2 Hz electrical stimulation); and high-EA (100 Hz electrical stimulation). Postoperative pain was evaluated by recording the total amount of morphine required by PCA. We found that, during the first 24 h, the total amount of morphine required was decreased by 21, 43, and 61% in the sham-, low-, and high-EA groups, respectively. Therefore, the present study included only three groups, namely, a control group, a sham-EA group, and a 100 Hz EA group. No low frequency (2 Hz) electroacupuncture group was included because many studies already support the efficacy of high frequency (100 Hz) electroacupuncture over low frequency (2 Hz) electroacupuncture. In this study, no significant difference in analgesia between the 100 Hz EA group and the sham-EA group was found. This is presumed to be due to a number of factors. Firstly, the type of pain and its intensity as induced by the ESWL procedure are different from the pain endured after an anesthetic has worn off during operations at other anatomical sites, such as the thoracic cavity and abdominal cavity. It is well recognized that the most painful sensations among all types of operations are those involving postoperative pain associated with the thoracic cavity or upper abdominal area. This is followed by lower abdominal operations, and peripheral operations are generally the least painful.
The primary origin of the pain generated in lithotripsy is the shock wave passing through the skin, which induces peripheral pain; this is followed by organ pain induced by the effects of the focused shock wave on the area around the kidney, where nerves are distributed in the capsule. Thus, in general, the pain intensity of ESWL is lower. Moreover, the pain intensity at the ureteropelvic junction (UPJ) and the upper part of the ureter is lower than that around the kidney during the ESWL operation. Secondly, the structure of the Weizhong acupoint is complicated, and its characteristics are different from those of the acupoints we have used previously [11,. Those acupoints, such as Zusanli (足三里), Sanyinjiao (三陰交), and Yanglingquan (陽陵泉), are located close to muscle structures. The distribution of nerves and blood vessels is denser and more complicated at the Weizhong acupoint, and therefore any manipulation of the Weizhong acupoint demands delicate needle movements. This requirement might compromise qi induction. Thirdly, sham-EA can alleviate pain, as has been demonstrated previously. This may be because, during the practitioner's accurate insertion of the needle at the Weizhong acupoint, the quivering of the needle might stimulate the acupoint in the sham-EA group, causing qi induction. Fourthly, the dosage level of the acupuncture is low, with this study selecting only one acupoint, the Weizhong; this differs from our previous studies, where multiple acupoints were used. Consequently, the dosage of acupuncture applied in this study is relatively low. Fifthly, the effect of the analgesic is optimal and its pain relief is quick. Furthermore, alfentanil is a better analgesic and has a superior recovery profile compared with other commonly used operating room drugs such as fentanyl. Sixthly, due to the characteristics of the Weizhong acupoint, electroacupuncture-induced electrical stimuli are not evident to the patient.
Finally, the number of participating patients in this study is still insufficient. All of the above factors lead to the conclusion that the analgesic effects of the 100 Hz EA group and the sham-EA group do not show statistically significant variation. To the best of our knowledge, this is the first study to report that 100 Hz EA and sham-EA can effectively relieve pain due to ESWL as well as reduce the dosage of opium-derivative analgesic used. However, it is still unclear how the placebo effect of EA contributes to the analgesic effect in the present study, since sham-EA was also effective. Future studies with different control group designs are needed. We have previously reviewed control group design in acupuncture randomized controlled trials (RCTs) and proposed four different strategies for designing control groups: absence of acupuncture needle insertion, different location of inserted acupuncture needles, different depth of insertion, and the use of assistant tools. These four strategies can be considered when designing the next study. The number of lithotripsy shock waves is usually programmed in accordance with the patient's heart rate, and if the patient exhibits abnormal heart beats, medication is often administered to control the patient's heart rate within a normal range. As a result, differences in the duration of the lithotripsy period between the groups were quite minor. The Weizhong acupoint is capable of restoring the urinary bladder's qi transformation and modulating lower-jiao qi activity (下焦氣機), as well as producing quick pain relief through the circulation of qi. It also helps to facilitate hemocirculation to remove stasis and move qi in order to quickly alleviate smooth muscle spasms. This feature will help relieve the operational pain and move a foreign object such as the calculi downwards for excretion.
Further studies are warranted to confirm whether targeting the Weizhong acupoint for acupuncture may reduce the operation time for lithotripsy. Analysis of the opiate-related side effects shows that the occurrence rates for each group were generally low. It is presumed that the rationale for the low level of alfentanil-related side effects is, firstly, that the single dosage of alfentanil is already low (3 μg/kg) and, secondly, that the superior effectiveness of analgesia with electroacupuncture makes the dosage of alfentanil needed in the electroacupuncture group lower, which, in turn, leads to fewer side effects. Recovery from anesthesia is evaluated by the anesthesiologist, and the standards of recovery are as follows: consciousness, no dizziness when moving from a sitting position to upright standing, ability to walk around voluntarily without assistance, and a lack of discomfort or anguish even upon standing up. Resim et al. showed that electroacupuncture can effectively reduce the side effects of ESWL, which benefits early recovery. This study further supports the analgesic effect of electroacupuncture, and it is clear that, despite the different acupoints selected in various anatomical areas using a diversity of selection principles, they are all able to fulfill the objective of pain relief. This study also implies that further research is warranted to explore which operations should be combined with which acupuncture point(s).
Detection of trends in extreme streamflow due to climate variability in the Lake Naivasha basin, Kenya ABSTRACT Variability of streamflow has far-reaching impacts, especially in developing countries. This is aggravated by climate change, which has adversely affected water resources and food security. This paper presents the characterization of trends in extreme streamflow regimes, with a view to providing information for planning local coping mechanisms for climate variability and change, using streamflow data recorded from 1959 to 2008 in the Lake Naivasha basin in Kenya. The maxima and percentiles of streamflow distributions were investigated to identify changes in extreme intensity and frequency, respectively, using the Mann-Kendall test. The results indicate significant increases in annual maxima at all gauging stations. The flows in the month of November increased significantly at gauging stations 2GB4 and 2GC4. Flow percentile exceedance revealed that the annual 95th percentile exceedance increased significantly at gauging stations 2GB1 and 2GB5, with a decrease in annual 90th and 97th percentile exceedance at gauging stations 2GB4 and 2GC4. The results presented in this paper are useful for climate change adaptation planning and management, especially in water supply, hydropower generation and agriculture.
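The Mann-Kendall trend test named in the abstract can be sketched in a few lines. The version below is a minimal illustration using the normal approximation without the tie correction a real streamflow analysis would need; the function is ours, not the authors' implementation:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction).

    Returns the S statistic and the two-sided p-value from the normal
    approximation. Positive S suggests an increasing trend; for a strictly
    increasing series of length n, S equals n*(n-1)/2."""
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value via the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, p
```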
#include <bits/stdc++.h>
using namespace std;

// Count the ways to climb a staircase of N steps, moving +1 or +2 steps
// at a time while avoiding the M broken steps listed in vec (1-indexed,
// ascending). Answer is reported modulo 1e9+7.
int main() {
    int64_t N, M;
    cin >> N >> M;
    vector<int64_t> vec(M + 2);          // broken steps, zero-padded
    for (int64_t i = 0; i < M; i++) {
        cin >> vec.at(i);
    }
    vector<int64_t> ans(N + 2);          // ans[j] = ways to reach step j+1
    int64_t n_exist = 0;                 // broken steps consumed so far
    ans.at(0) = 1;
    ans.at(1) = 2;
    if (vec.at(0) == 1) {                // step 1 broken
        ans.at(0) = 0;
        ans.at(1) = 1;
        n_exist++;
    }
    if (vec.at(0) == 2) {                // step 2 broken (first in list)
        ans.at(1) = 0;
        n_exist++;
    }
    if (vec.at(1) == 2) {                // step 2 broken (second in list)
        ans.at(1) = 0;
        n_exist++;
    }
    for (int64_t j = 2; j < N; j++) {
        if (n_exist < M && vec.at(n_exist) - 1 == j) {
            n_exist++;                   // step j+1 is broken: 0 ways
            continue;
        }
        ans.at(j) = (ans.at(j - 1) + ans.at(j - 2)) % 1000000007;
    }
    cout << ans.at(N - 1) << endl;
}
This is a fine show, and one that is definitely worth checking out.
The second season of Maverick has just been released to DVD.
One of the best seasons of this classic TV Western finally available on DVD.
The true literary origins of some of our classic Western heroes.
For any Star Trek fan this is a nice addition to your collection.
The Magnificent Seven movies ride again.
Classic 1957 Sonny Rollins album remastered and reissued with bonus tracks.
// examples/app/src/main/java/com/getfastah/examples/ui/dashboard/DashboardViewModel.java
package com.getfastah.examples.ui.dashboard;
import android.app.Application;
import androidx.annotation.NonNull;
import androidx.lifecycle.AndroidViewModel;
import com.getfastah.examples.NetworkLatencyLiveData;
public class DashboardViewModel extends AndroidViewModel {
private NetworkLatencyLiveData mLatencyLive;
public DashboardViewModel(@NonNull Application application) {
super(application);
mLatencyLive = new NetworkLatencyLiveData(application.getApplicationContext());
}
public NetworkLatencyLiveData getNetworkLatencyData() {
return mLatencyLive;
}
}
/* zephyr/projects/nissa/src/usbc.c */
/* Copyright 2021 The Chromium OS Authors. All rights reserved.
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file.
*/
#include "charge_state_v2.h"
#include "chipset.h"
#include "hooks.h"
#include "usb_mux.h"
#include "usbc_ppc.h"
#include "driver/tcpm/tcpci.h"
#include "driver/tcpm/raa489000.h"
#include "sub_board.h"
#define CPRINTS(format, args...) cprints(CC_USBCHARGE, format, ## args)
struct ppc_config_t ppc_chips[] = {};
unsigned int ppc_cnt = ARRAY_SIZE(ppc_chips);
struct tcpc_config_t tcpc_config[CONFIG_USB_PD_PORT_MAX_COUNT] = {
{
.bus_type = EC_BUS_TYPE_I2C,
.i2c_info = {
.port = I2C_PORT_USB_C0_TCPC,
.addr_flags = RAA489000_TCPC0_I2C_FLAGS,
},
.drv = &raa489000_tcpm_drv,
/* RAA489000 implements TCPCI 2.0 */
.flags = TCPC_FLAGS_TCPCI_REV2_0,
},
{ /* sub-board */
.bus_type = EC_BUS_TYPE_I2C,
.i2c_info = {
.port = I2C_PORT_USB_C1_TCPC,
.addr_flags = RAA489000_TCPC0_I2C_FLAGS,
},
.drv = &raa489000_tcpm_drv,
/* RAA489000 implements TCPCI 2.0 */
.flags = TCPC_FLAGS_TCPCI_REV2_0,
},
};
struct usb_mux usb_muxes[CONFIG_USB_PD_PORT_MAX_COUNT] = {
{
.usb_port = 0,
.driver = &virtual_usb_mux_driver,
.hpd_update = &virtual_hpd_update,
},
{ /* sub-board */
.usb_port = 1,
.driver = &virtual_usb_mux_driver,
.hpd_update = &virtual_hpd_update,
},
};
__override uint8_t board_get_usb_pd_port_count(void)
{
switch (nissa_get_sb_type()) {
default:
return 1;
case NISSA_SB_C_A:
case NISSA_SB_C_LTE:
return 2;
}
}
void board_set_charge_limit(int port, int supplier, int charge_ma,
int max_ma, int charge_mv)
{
int icl = MAX(charge_ma, CONFIG_CHARGER_INPUT_CURRENT);
/*
* Assume charger overdraws by about 4%, keeping the actual draw
* within spec. This adjustment can be changed with characterization
* of actual hardware.
*/
icl = icl * 96 / 100;
charge_set_input_current_limit(icl, charge_mv);
}
int board_is_sourcing_vbus(int port)
{
int regval;
tcpc_read(port, TCPC_REG_POWER_STATUS, &regval);
return !!(regval & TCPC_REG_POWER_STATUS_SOURCING_VBUS);
}
int board_set_active_charge_port(int port)
{
int is_real_port = (port >= 0 &&
port < CONFIG_USB_PD_PORT_MAX_COUNT);
int i;
int old_port;
if (!is_real_port && port != CHARGE_PORT_NONE)
return EC_ERROR_INVAL;
old_port = charge_manager_get_active_charge_port();
CPRINTS("New chg p%d", port);
/* Disable all ports. */
if (port == CHARGE_PORT_NONE) {
for (i = 0; i < CONFIG_USB_PD_PORT_MAX_COUNT; i++)
tcpc_write(i, TCPC_REG_COMMAND,
TCPC_REG_COMMAND_SNK_CTRL_LOW);
return EC_SUCCESS;
}
/* Check if port is sourcing VBUS. */
if (board_is_sourcing_vbus(port)) {
CPRINTS("Skip enable p%d", port);
return EC_ERROR_INVAL;
}
/*
* Turn off the other ports' sink path FETs, before enabling the
* requested charge port.
*/
for (i = 0; i < CONFIG_USB_PD_PORT_MAX_COUNT; i++) {
if (i == port)
continue;
if (tcpc_write(i, TCPC_REG_COMMAND,
TCPC_REG_COMMAND_SNK_CTRL_LOW))
CPRINTS("p%d: sink path disable failed.", i);
}
/*
* Stop the charger IC from switching while changing ports. Otherwise,
* we can overcurrent the adapter we're switching to. (crbug.com/926056)
*/
if (old_port != CHARGE_PORT_NONE)
charger_discharge_on_ac(1);
/* Enable requested charge port. */
if (tcpc_write(port, TCPC_REG_COMMAND,
TCPC_REG_COMMAND_SNK_CTRL_HIGH)) {
CPRINTS("p%d: sink path enable failed.", port);
charger_discharge_on_ac(0);
return EC_ERROR_UNKNOWN;
}
/* Allow the charger IC to begin/continue switching. */
charger_discharge_on_ac(0);
return EC_SUCCESS;
}
uint16_t tcpc_get_alert_status(void)
{
uint16_t status = 0;
int regval;
/*
* The interrupt line is shared between the TCPC and BC1.2 detector IC.
* Therefore, go out and actually read the alert registers to report the
* alert status.
*/
if (!gpio_get_level(GPIO_USB_C0_PD_INT_ODL)) {
if (!tcpc_read16(0, TCPC_REG_ALERT, &regval)) {
/* The TCPCI Rev 1.0 spec says to ignore bits 14:12. */
if (!(tcpc_config[0].flags & TCPC_FLAGS_TCPCI_REV2_0))
regval &= ~((1 << 14) | (1 << 13) | (1 << 12));
if (regval)
status |= PD_STATUS_TCPC_ALERT_0;
}
}
/* TODO(b:212490923) ignore C1 interrupts if port is not present. */
if (!gpio_get_level(GPIO_USB_C1_PD_INT_ODL)) {
if (!tcpc_read16(1, TCPC_REG_ALERT, &regval)) {
/* TCPCI spec Rev 1.0 says to ignore bits 14:12. */
if (!(tcpc_config[1].flags & TCPC_FLAGS_TCPCI_REV2_0))
regval &= ~((1 << 14) | (1 << 13) | (1 << 12));
if (regval)
status |= PD_STATUS_TCPC_ALERT_1;
}
}
return status;
}
int pd_check_vconn_swap(int port)
{
/* Allow VCONN swaps if the AP is on. */
return chipset_in_state(CHIPSET_STATE_ANY_SUSPEND | CHIPSET_STATE_ON);
}
void pd_power_supply_reset(int port)
{
/* Disable VBUS */
tcpc_write(port, TCPC_REG_COMMAND, TCPC_REG_COMMAND_SRC_CTRL_LOW);
/* Notify host of power info change. */
pd_send_host_event(PD_EVENT_POWER_CHANGE);
}
int pd_set_power_supply_ready(int port)
{
int rv;
if (port >= CONFIG_USB_PD_PORT_MAX_COUNT)
return EC_ERROR_INVAL;
/* Disable charging. */
rv = tcpc_write(port, TCPC_REG_COMMAND, TCPC_REG_COMMAND_SNK_CTRL_LOW);
if (rv)
return rv;
/* Our policy is not to source VBUS when the AP is off. */
if (chipset_in_state(CHIPSET_STATE_ANY_OFF))
return EC_ERROR_NOT_POWERED;
/* Provide Vbus. */
rv = tcpc_write(port, TCPC_REG_COMMAND, TCPC_REG_COMMAND_SRC_CTRL_HIGH);
if (rv)
return rv;
/* Notify host of power info change. */
pd_send_host_event(PD_EVENT_POWER_CHANGE);
return EC_SUCCESS;
}
void board_reset_pd_mcu(void)
{
/*
* TODO(b:147316511): could send a reset command to the TCPC here
* if needed.
*/
}
/*
* Because the TCPCs and BC1.2 chips share interrupt lines, it's possible
* for an interrupt to be lost if one asserts the IRQ, the other does the same
* then the first releases it: there will only be one falling edge to trigger
* the interrupt, and the line will be held low. We handle this by running a
* deferred check after a falling edge to see whether the IRQ is still being
* asserted. If it is, we assume an interrupt may have been lost and we need
* to poll each chip for events again.
*/
#define USBC_INT_POLL_DELAY_US 5000
static void poll_c0_int(void);
DECLARE_DEFERRED(poll_c0_int);
static void poll_c1_int(void);
DECLARE_DEFERRED(poll_c1_int);
static void usbc_interrupt_trigger(int port)
{
schedule_deferred_pd_interrupt(port);
task_set_event(PD_PORT_TO_TASK_ID(port), USB_CHG_EVENT_BC12);
}
#define USBC_INT_POLL_DATA(port) poll_c ## port ## _int_data
#define USBC_INT_POLL(port) \
static void poll_c ## port ## _int (void) \
{ \
if (!gpio_get_level(GPIO_USB_C ## port ## _PD_INT_ODL)) { \
usbc_interrupt_trigger(port); \
hook_call_deferred(&USBC_INT_POLL_DATA(port), \
USBC_INT_POLL_DELAY_US); \
} \
}
USBC_INT_POLL(0)
USBC_INT_POLL(1)
void usb_c0_interrupt(enum gpio_signal gpio)
{
/*
* We've just been called from a falling edge, so there's definitely
* no lost IRQ right now. Cancel any pending check.
*/
hook_call_deferred(&USBC_INT_POLL_DATA(0), -1);
/* Trigger polling of TCPC and BC1.2 in respective tasks */
usbc_interrupt_trigger(0);
/* Check for lost interrupts in a bit */
hook_call_deferred(&USBC_INT_POLL_DATA(0), USBC_INT_POLL_DELAY_US);
}
void usb_c1_interrupt(enum gpio_signal gpio)
{
hook_call_deferred(&USBC_INT_POLL_DATA(1), -1);
usbc_interrupt_trigger(1);
hook_call_deferred(&USBC_INT_POLL_DATA(1), USBC_INT_POLL_DELAY_US);
}
static void usbc_init(void)
{
gpio_enable_interrupt(GPIO_USB_C0_PD_INT_ODL);
if (board_get_usb_pd_port_count() == 2)
gpio_enable_interrupt(GPIO_USB_C1_PD_INT_ODL);
}
DECLARE_HOOK(HOOK_INIT, usbc_init, HOOK_PRIO_DEFAULT);
Washington: People living in India experience the health problems associated with ageing at an earlier stage than those living in Japan or Switzerland, according to a first-of-its-kind study published in The Lancet Public Health.
Researchers at the University of Washington in the US and colleagues found that a 30-year gap separates countries with the highest and lowest ages at which people experience the health problems of a 65-year-old.
They found 76-year-olds in Japan and Switzerland, and 46-year-olds in Papua New Guinea have the same level of age-related health problems as an “average” person aged 65.
The analysis also found that people living in India experience similar health problems well before they turn 60. “These disparate findings show that increased life expectancy at older ages can either be an opportunity or a threat to the overall welfare of populations, depending on the ageing-related health problems the population experiences regardless of chronological age,” said Angela Y. Chang, lead author of the study and postdoctoral fellow at the University of Washington in the US.
“Age-related health problems can lead to early retirement, a smaller workforce, and higher health spending. Government leaders and other stakeholders influencing health systems need to consider when people begin suffering the negative effects of ageing,” Chang said in a statement. These negative effects include impaired functions and loss of physical, mental, and cognitive abilities resulting from the 92 conditions analysed, five of which are communicable and 81 non-communicable, along with six injuries.
The study is the first of its kind, according to Chang. Where traditional metrics of ageing examine increased longevity, this study explores both chronological age and the pace at which ageing contributes to health deterioration.
The study uses estimates from the Global Burden of Disease study (GBD). The researchers measured “age-related disease burden” by aggregating all disability-adjusted life years (DALYs), a measurement of loss of healthy life, related to the 92 diseases.
Although most countries have similar rankings between age-standardised, age related and all-burden rates, countries such as Ethiopia, Nigeria, and South Africa perform better in age-related disease burden relative to all burden.
Charged polymer in an electric field. The Brownian motion random-walk model of a polymer gives unphysical results for the case of a charged polymer in an electric field. To avoid these difficulties we use two stochastic processes in which the finiteness of the monomer size is retained. For a continuum model we use Kac's telegrapher process. The relation of this to the Brownian motion picture corresponds to the relation between a Poisson process and its corresponding Wiener process. In both cases idealized and unrealistic properties of the Wiener process are avoided. Explicit results in any dimension are obtained by going over to a completely discrete process. By both methods, and in contrast to Brownian motion predictions, physically reasonable O(N^2) dependence is found for the mean-squared extension of the size-N polymer.
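As a rough illustration of the scaling claim, the discrete picture can be simulated directly. The bias model below (bond i stepping forward with probability proportional to i times the field, mimicking the tension from the charges it supports) is our own toy assumption, not the paper's telegrapher-process calculation; it merely shows how a linearly growing bias yields superlinear, roughly quadratic, growth of the mean extension with chain length:

```python
import random

def mean_extension(n_monomers, field=0.01, trials=2000, seed=0):
    """Toy Monte Carlo: a 1-D freely jointed chain in which bond i steps
    forward with probability 0.5*(1 + i*field), mimicking a tension that
    grows linearly along the chain. Returns the mean end-to-end distance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = 0
        for i in range(1, n_monomers + 1):
            p_up = min(1.0, 0.5 * (1 + i * field))
            x += 1 if rng.random() < p_up else -1
        total += x
    return total / trials
```

In this toy the expected extension is field * N(N+1)/2, so doubling N roughly quadruples it; note the paper's quoted O(N^2) result concerns the mean-squared extension of its more careful finite-monomer processes, not this simplified mean.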
/*
* Copyright (C) 2009 - present by OpenGamma Inc. and the OpenGamma group of companies
*
* Please see distribution for license.
*/
package com.opengamma.strata.math.impl;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;
import org.junit.jupiter.api.Test;
/**
* Test.
*/
public class ComplexNumberTest {
private static final ComplexNumber Z1 = new ComplexNumber(1, 2);
private static final ComplexNumber Z2 = new ComplexNumber(1, 2);
private static final ComplexNumber Z3 = new ComplexNumber(1, 3);
private static final ComplexNumber Z4 = new ComplexNumber(2, 2);
private static final ComplexNumber Z5 = new ComplexNumber(2, 3);
@Test
public void testByteValue() {
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> Z1.byteValue());
}
@Test
public void testIntValue() {
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> Z1.intValue());
}
@Test
public void testLongValue() {
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> Z1.longValue());
}
@Test
public void testFloatValue() {
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> Z1.floatValue());
}
@Test
public void testDoubleValue() {
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> Z1.doubleValue());
}
@Test
public void test() {
assertThat(Z1.getReal()).isEqualTo(1d);
assertThat(Z1.getImaginary()).isEqualTo(2d);
assertThat(Z1).isEqualTo(Z2);
assertThat(Z1.hashCode()).isEqualTo(Z2.hashCode());
assertThat(Z1.toString()).isEqualTo("1.0 + 2.0i");
assertThat(new ComplexNumber(1, 0).toString()).isEqualTo("1.0 + 0.0i");
assertThat(new ComplexNumber(0, 2.3).toString()).isEqualTo("0.0 + 2.3i");
assertThat(new ComplexNumber(-1, 0).toString()).isEqualTo("-1.0 + 0.0i");
assertThat(new ComplexNumber(0, -2.3).toString()).isEqualTo("0.0 - 2.3i");
assertThat(Z1.equals(Z3)).isFalse();
assertThat(Z1.equals(Z4)).isFalse();
assertThat(Z1.equals(Z5)).isFalse();
}
}
/**
* @file seg_maths.cpp
* @author <NAME>
* @date 01/01/2014
*
* Copyright (c) 2014, University College London. All rights reserved.
* Centre for Medical Image Computing (CMIC)
* See the LICENSE.txt file in the nifty_seg root folder
*
*/
#include <iostream>
#include <time.h>
#include <cctype>   // isspace
#include <cstring>  // strcmp
#include <cmath>    // fabs, sqrt, powf, round, isnan
#include <cstdlib>  // strtod, atoi, exit
#include <limits>   // numeric_limits
#include <new>      // set_new_handler
#include "_seg_common.h"
#include "_seg_tools.h"
#include <Eigen/Core>
#include <Eigen/LU>
#include <Eigen/Cholesky>
#include <cfloat>
using namespace std;
#define SegPrecisionTYPE float
void Usage(char *exec)
{
printf("* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n");
printf("\nMath tools:\nUsage:\t%s <input> <operation> <output>.\n\n",exec);
printf("\t* * Operations on 3-D and 4-D images* *\n");
printf("\t-mul\t<float/file>\tMultiply image <float> value or by other image.\n");
printf("\t-div\t<float/file>\tDivide image by <float> or by other image.\n");
printf("\t-add\t<float/file>\tAdd <float> to the image or add other image.\n");
printf("\t-sub\t<float/file>\tSubtract <float> from the image or subtract other image.\n");
printf("\t-pow\t<float>\t\tImage to the power of <float>.\n");
printf("\t-thr\t<float>\t\tThreshold the image below <float>.\n");
printf("\t-uthr\t<float>\t\tThreshold image above <float>.\n");
printf("\t-smo\t<float>\t\tGaussian smoothing by std <float> (in voxels and up to 4-D).\n");
printf("\t-equal\t<int>\t\tGet voxels equal to <int>\n");
printf("\t-replace <int1> <int2>\tReplaces voxels equal to <int1> with <int2>\n");
printf("\t-sqrt \t\t\tSquare root of the image.\n");
printf("\t-exp \t\t\tExponential of the image.\n");
printf("\t-log \t\t\tLog of the image.\n");
printf("\t-recip \t\t\tReciprocal (1/I) of the image.\n");
printf("\t-abs \t\t\tAbsolute value of the image.\n");
printf("\t-bin \t\t\tBinarise the image.\n");
printf("\t-otsu \t\t\tOtsu thresholding of the current image.\n");
printf("\t-edge\t<float>\t\tCalculate the edges of the image using a threshold <float>.\n");
printf("\t-sobel3\t<float>\t\tCalculate the edges of all timepoints using a Sobel filter with a 3x3x3 kernel and applying <float> gaussian smoothing.\n");
printf("\t-sobel5\t<float>\t\tCalculate the edges of all timepoints using a Sobel filter with a 5x5x5 kernel and applying <float> gaussian smoothing.\n");
printf("\t-min\t<file>\t\tGet the min per voxel between <current> and <file>.\n");
printf("\n\t* * Operations on 3-D images * *\n");
printf("\t-smol\t<float>\t\tGaussian smoothing of a 3D label image.\n");
printf("\t-dil\t<int>\t\tDilate the image <int> times (in voxels).\n");
printf("\t-ero\t<int>\t\tErode the image <int> times (in voxels).\n");
printf("\t-pad\t<int>\t\tPad <int> voxels with NaN value around each 3D volume.\n");
printf("\t-crop\t<int>\t\tCrop <int> voxels around each 3D volume.\n");
printf("\n\t* * Operations binary 3-D images * *\n");
printf("\t-lconcomp\t\tTake the largest connected component\n");
printf("\t-concomp6\t\tLabel the different connected components with a 6NN kernel\n");
printf("\t-concomp26\t\tLabel the different connected components with a 26NN kernel\n");
printf("\t-fill\t\t\tFill holes in binary object (e.g. fill ventricle in brain mask).\n");
printf("\t-euc\t\t\tEuclidean distance transform\n");
printf("\t-geo <float/file>\tGeodesic distance according to the speed function <float/file>\n");
printf("\n\t* * Dimensionality reduction operations: from 4-D to 3-D * *\n");
printf("\t-tp <int>\t\tExtract time point <int>\n");
printf("\t-tpmax\t\t\tGet the time point with the highest value (binarise 4D probabilities)\n");
printf("\t-tmean\t\t\tMean value of all time points.\n");
printf("\t-tmax\t\t\tMax value of all time points.\n");
printf("\t-tmin\t\t\tMin value of all time points.\n");
printf("\n\t* * Dimensionality increase operations: from 3-D to 4-D * *\n");
printf("\t-merge\t<i> <d> <files>\tMerge <i> images and the working image in the <d> dimension \n");
printf("\t-splitlab\t\tSplit the integer labels into multiple timepoints\n");
printf("\t-splitinter <x/y/z>\t\tSplit interleaved slices in direction <x/y/z> into separate time points\n");
printf("\n\t* * Image similarity: Local metrics * *\n");
printf("\t-lncc\t<file> <std>\tLocal CC between current img and <file> on a kernel with <std>\n");
printf("\t-lssd\t<file> <std>\tLocal SSD between current img and <file> on a kernel with <std>\n");
printf("\n\t* * Normalisation * *\n");
printf("\t-llsnorm\t<file_norm>\t\t Linear LS normalisation between current and <file_norm>\n");
printf("\t-lltsnorm\t<file_norm> <float>\t Linear LTS normalisation assuming <float> percent outliers\n");
printf("\t-qlsnorm\t<order> <file_norm>\t LS normalisation of <order> between current and <file_norm>\n");
printf("\n\t* * NaN handling * *\n");
printf("\t-removenan\t\tRemove all NaNs and replace them with 0\n");
printf("\t-isnan\t\t\tBinary image equal to 1 if the value is NaN and 0 otherwise\n");
printf("\t-masknan <file_norm>\tAssign everything outside the mask (mask==0) with NaNs \n");
printf("\n\t* * Sampling * *\n");
printf("\t-subsamp2\t\tSubsample the image by 2 using NN sampling (qform and sform scaled) \n");
printf("\n\t* * Image header operations * *\n");
printf("\t-hdr_copy <file> \tCopy header from working image to <file> and save in <output>.\n");
printf("\t-scl\t\t\tReset scale and slope info.\n");
printf("\t-4to5\t\t\tFlip the 4th and 5th dimension.\n");
printf("\n\t* * Output * *\n");
printf("\t-odt <datatype> \tSet output <datatype> (char, short, int, uchar, ushort, uint, float, double).\n");
printf("\t-range\t\t\tReset the image range to the min max\n");
printf("\t-v\t\t\tVerbose.\n");
#if defined (_OPENMP)
printf("\t-omp <int>\t\tNumber of openmp threads [%d]\n",omp_get_max_threads());
#endif
#ifdef _GIT_HASH
printf("\t--version\t\tPrint current source code git hash key and exit\n\t\t\t\t(%s)\n",_GIT_HASH);
#endif
printf("\n\t* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n");
return;
}
bool isEdge(float a,float b,double threshold) {
float max=a>b?a:b;
return (fabs(a-b)/max>threshold);
}
int isNumeric (const char *s)
{
if(s==NULL || *s=='\0' || isspace(*s))
return 0;
char * p;
strtod (s, &p);
return *p == '\0';
}
void no_memory ()
{
cout << "Failed to allocate memory!\n";
exit (1);
}
int main(int argc, char **argv)
{
try
{
set_new_handler(no_memory);
if (argc <= 2)
{
Usage(argv[0]);
return 0;
}
if(strcmp(argv[1], "-help")==0 || strcmp(argv[1], "-Help")==0 ||
strcmp(argv[1], "-HELP")==0 || strcmp(argv[1], "-h")==0 ||
strcmp(argv[1], "--h")==0 || strcmp(argv[1], "--help")==0)
{
Usage(argv[0]);
return 0;
}
char * filename_in=argv[1];
nifti_image * InputImage=nifti_image_read(filename_in,true);
if(InputImage == NULL)
{
fprintf(stderr,"* Error when reading the input image\n");
return 1;
}
if(InputImage->datatype!=NIFTI_TYPE_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(InputImage);
}
SegPrecisionTYPE * InputImagePtr = static_cast<SegPrecisionTYPE *>(InputImage->data);
ImageSize * CurrSize = new ImageSize [1]();
CurrSize->numel=(long)(InputImage->nx*InputImage->ny*InputImage->nz);
CurrSize->xsize=InputImage->nx;
CurrSize->ysize=InputImage->ny;
CurrSize->zsize=InputImage->nz;
CurrSize->usize=(InputImage->nu>1)?InputImage->nu:1;
CurrSize->tsize=(InputImage->nt>1)?InputImage->nt:1;
float Scalling[4]= { 1.0f, 1.0f, 1.0f, 1.0f };
bool verbose=false;
int datatypeoutput=NIFTI_TYPE_FLOAT32;
SegPrecisionTYPE ** bufferImages = new SegPrecisionTYPE * [2];
bufferImages[0] = new SegPrecisionTYPE [CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize];
bufferImages[1] = new SegPrecisionTYPE [CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize];
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
bufferImages[0][i]=InputImagePtr[i];
}
int current_buffer=0;
for(long i=2; i<(argc-1); i++)
{
if(strcmp(argv[i], "-help")==0 || strcmp(argv[i], "-Help")==0 ||
strcmp(argv[i], "-HELP")==0 || strcmp(argv[i], "-h")==0 ||
strcmp(argv[i], "--h")==0 || strcmp(argv[i], "--help")==0)
{
Usage(argv[0]);
return 0;
}
#if defined (_OPENMP)
else if(strcmp(argv[i], "-omp")==0 || strcmp(argv[i], "--omp")==0)
{
omp_set_num_threads(atoi(argv[++i]));
}
#endif
// ********************* MULTIPLY *************************
else if(strcmp(argv[i], "-mul") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double multfactor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]*multfactor;
}
current_buffer=current_buffer?0:1;
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]*NewImagePtr[i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "
<<NewImage->nx<<","<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* ADD *************************
else if( strcmp(argv[i], "-add") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double addfactor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]+addfactor;
current_buffer=current_buffer?0:1;
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]+NewImagePtr[i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* SUBTRACT *************************
else if(strcmp(argv[i], "-sub") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double factor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]-factor;
current_buffer=current_buffer?0:1;
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]-NewImagePtr[i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* mask *************************
else if(strcmp(argv[i], "-masknan") == 0)
{
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(NewImagePtr[i]>0)?bufferImages[current_buffer][i]:std::numeric_limits<float>::quiet_NaN();
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
// ********************* remove NaNs *************************
else if(strcmp(argv[i], "-removenan") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=isnan(bufferImages[current_buffer][i])?0:bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
// ********************* pad voxels *************************
else if(strcmp(argv[i], "-pad") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890-+")== string::npos)
{
int padding=(int)strtod(parser.c_str(),NULL)*2;
long new_size=((CurrSize->xsize+padding)*(CurrSize->ysize+padding)*(CurrSize->zsize+padding)*CurrSize->tsize*CurrSize->usize);
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]=new SegPrecisionTYPE [new_size];
for(long ii=0; ii<new_size; ii++)
bufferImages[current_buffer?0:1][ii]=std::numeric_limits<SegPrecisionTYPE>::quiet_NaN();
long old_volume=CurrSize->xsize*CurrSize->ysize*CurrSize->zsize;
long new_volume=(CurrSize->xsize+padding)*(CurrSize->ysize+padding)*(CurrSize->zsize+padding);
for (long t=0;t<CurrSize->tsize*CurrSize->usize;t++) {
for(long z=0; z<CurrSize->zsize; z++) {
for(long y=0; y<CurrSize->ysize; y++) {
for(long x=0; x<CurrSize->xsize; x++) {
long big=t*new_volume+x+(padding/2)+(y+(padding/2))*(CurrSize->xsize+padding)+(z+(padding/2))*(CurrSize->xsize+padding)*(CurrSize->ysize+padding);
long small=t*old_volume+x+y*CurrSize->xsize+z*(CurrSize->xsize*CurrSize->ysize);
bufferImages[current_buffer?0:1][big]=bufferImages[current_buffer][small];
}
}
}
}
current_buffer=current_buffer?0:1;
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]=new SegPrecisionTYPE [new_size];
for(long ii=0; ii<new_size; ii++)
bufferImages[current_buffer?0:1][ii]=0;
CurrSize->xsize+=padding;
CurrSize->ysize+=padding;
CurrSize->zsize+=padding;
CurrSize->numel=CurrSize->xsize*CurrSize->ysize*CurrSize->zsize;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* crop voxels *************************
else if(strcmp(argv[i], "-crop") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890-+")== string::npos)
{
int cropping=(int)strtod(parser.c_str(),NULL)*2;
long new_size=((CurrSize->xsize-cropping)*(CurrSize->ysize-cropping)*(CurrSize->zsize-cropping)*CurrSize->tsize*CurrSize->usize);
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]=new SegPrecisionTYPE [new_size];
for(long ii=0; ii<new_size; ii++)
bufferImages[current_buffer?0:1][ii]=0;
long old_volume=CurrSize->xsize*CurrSize->ysize*CurrSize->zsize;
long new_volume=(CurrSize->xsize-cropping)*(CurrSize->ysize-cropping)*(CurrSize->zsize-cropping);
for (long t=0;t<CurrSize->tsize*CurrSize->usize;t++) {
for(long x=cropping/2; x<CurrSize->xsize-cropping/2; x++) {
for(long y=cropping/2; y<CurrSize->ysize-cropping/2; y++) {
for(long z=cropping/2; z<CurrSize->zsize-cropping/2; z++) {
long small=t*new_volume+x-(cropping/2)+(y-(cropping/2))*(CurrSize->xsize-cropping)+(z-(cropping/2))*((CurrSize->xsize-cropping)*(CurrSize->ysize-cropping));
long big=t*old_volume+x+y*CurrSize->xsize+z*(CurrSize->xsize*CurrSize->ysize);
bufferImages[current_buffer?0:1][small]=bufferImages[current_buffer][big];
}
}
}
}
current_buffer=current_buffer?0:1;
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]=new SegPrecisionTYPE [new_size];
for(long ii=0; ii<new_size; ii++)
bufferImages[current_buffer?0:1][ii]=0;
CurrSize->xsize-=cropping;
CurrSize->ysize-=cropping;
CurrSize->zsize-=cropping;
CurrSize->numel=CurrSize->xsize*CurrSize->ysize*CurrSize->zsize;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* edge *************************
else if(strcmp(argv[i], "-edge") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double threshold=strtod(parser.c_str(),NULL);
float * Img1prt = bufferImages[current_buffer];
for(int index=0; index<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; index++)
{
bool edge=false;
if((index+1)<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize) {
if(isEdge(Img1prt[index],Img1prt[index+1],threshold)) edge=true;
}
if((index-1)>=0) {
if(isEdge(Img1prt[index],Img1prt[index-1],threshold)) edge=true;
}
if((index+CurrSize->xsize)<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize) {
if(isEdge(Img1prt[index],Img1prt[index+CurrSize->xsize],threshold)) edge=true;
}
if((index-CurrSize->xsize)>=0) {
if(isEdge(Img1prt[index],Img1prt[index-CurrSize->xsize],threshold)) edge=true;
}
if((index+CurrSize->xsize*CurrSize->ysize)<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize) {
if(isEdge(Img1prt[index],Img1prt[index+CurrSize->xsize*CurrSize->ysize],threshold)) edge=true;
}
if((index-CurrSize->xsize*CurrSize->ysize)>=0) {
if(isEdge(Img1prt[index],Img1prt[index-CurrSize->xsize*CurrSize->ysize],threshold)) edge=true;
}
if(edge)
{
bufferImages[current_buffer?0:1][index]=bufferImages[current_buffer][index];
}
else {
bufferImages[current_buffer?0:1][index]=0;
}
}
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* sobel 3x3x3 *************************
else if(strcmp(argv[i], "-sobel3") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
float * Img1prt = bufferImages[current_buffer];
long tp=0;
#ifdef _OPENMP
#pragma omp parallel for \
private(tp)\
shared(CurrSize,bufferImages,Img1prt,factor,InputImage)
#endif
for(tp=0; tp<(long)(CurrSize->tsize*CurrSize->usize); tp++){
//create dummy nii
nifti_image * TMPnii = nifti_copy_nim_info(InputImage);
TMPnii->dim[1]=CurrSize->xsize;
TMPnii->dim[2]=CurrSize->ysize;
TMPnii->dim[3]=CurrSize->zsize;
TMPnii->dim[4]=TMPnii->nt=1;
TMPnii->dim[5]=TMPnii->nu=1;
nifti_update_dims_from_array(TMPnii);
//copy pointer, run gaussian, and set to null
TMPnii->data=static_cast<void*>(&Img1prt[CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*tp]);
if(factor>0) GaussianSmoothing5D_nifti(TMPnii,NULL,factor);
TMPnii->data=NULL;
//As TMPnii->data=NULL, the free will not cause any harm
nifti_image_free(TMPnii);
float *imgsort=new float [CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
imgsort[i]=Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
HeapSort(imgsort,CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1);
float max=imgsort[(int)(round((1-0.02)*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
float min=imgsort[(int)(round(0.02*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
float newMax=1,newMin=0;
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
if(min>Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=min;
if(max<Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=max;
Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=newMin+(Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]-min)*(newMax-newMin)/(max-min);
}
int inz=0;
float xkernel[3][3][3]={
{{-1,-2,-1},{-2,-4,-2},{-1,-2,-1}},
{{ 0, 0, 0},{ 0, 0, 0},{ 0, 0, 0}},
{{ 1, 2, 1},{ 2, 4, 2},{ 1, 2, 1}}
};
float ykernel[3][3][3]={
{{ 1, 2, 1},{ 0, 0, 0},{-1,-2,-1}},
{{ 2, 4, 2},{ 0, 0, 0},{-2,-4,-2}},
{{ 1, 2, 1},{ 0, 0, 0},{-1,-2,-1}}
};
float zkernel[3][3][3]={
{{-1, 0, 1},{-2, 0, 2},{-1, 0, 1}},
{{-2, 0, 2},{-4, 0, 4},{-2, 0, 2}},
{{-1, 0, 1},{-2, 0, 2},{-1, 0, 1}}
};
#ifdef _OPENMP
#pragma omp parallel for \
private(inz)\
shared(CurrSize,bufferImages,Img1prt,xkernel,ykernel,zkernel)
#endif
for(inz=0; inz<CurrSize->zsize; inz++) {
for(int iny=0; iny<CurrSize->ysize; iny++) {
for(int inx=0; inx<CurrSize->xsize; inx++) {
float sumx=0,sumy=0,sumz=0;
for(int i=-1;i<=1;i++) {
for(int j=-1;j<=1;j++) {
for(int k=-1;k<=1;k++) {
if(inx+k>=0 && iny+j>=0 && inz+i>=0 &&
inx+k<CurrSize->xsize && iny+j<CurrSize->ysize && inz+i<CurrSize->zsize) {
int index=(inx+k)+(iny+j)*CurrSize->xsize+(inz+i)*(CurrSize->xsize*CurrSize->ysize);
sumx+=xkernel[k+1][j+1][i+1]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
sumy+=ykernel[j+1][k+1][i+1]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
sumz+=zkernel[i+1][k+1][j+1]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
}
}
}
int index=inx+iny*CurrSize->xsize+inz*(CurrSize->xsize*CurrSize->ysize);
float val=sqrt(sumx*sumx+sumy*sumy+sumz*sumz);
bufferImages[current_buffer?0:1][index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=val;
}
}
}
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
imgsort[i]=bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
HeapSort(imgsort,CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1);
max=imgsort[(int)(round((1-0.02)*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
min=imgsort[(int)(round(0.02*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
if(min>bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=min;
if(max<bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=max;
bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=newMin+(bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]-min)*(newMax-newMin)/(max-min);
}
delete [] imgsort;
}
}
else {
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
current_buffer=current_buffer?0:1;
}
// ********************* sobel 5x5x5 *************************
else if(strcmp(argv[i], "-sobel5") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
float * Img1prt = bufferImages[current_buffer];
long tp=0;
#ifdef _OPENMP
#pragma omp parallel for \
private(tp)\
shared(CurrSize,bufferImages,Img1prt,factor,InputImage)
#endif
for(tp=0; tp<(long)(CurrSize->tsize*CurrSize->usize); tp++){
//create dummy nii
nifti_image * TMPnii = nifti_copy_nim_info(InputImage);
TMPnii->dim[1]=CurrSize->xsize;
TMPnii->dim[2]=CurrSize->ysize;
TMPnii->dim[3]=CurrSize->zsize;
TMPnii->dim[4]=TMPnii->nt=1;
TMPnii->dim[5]=TMPnii->nu=1;
nifti_update_dims_from_array(TMPnii);
//copy pointer, run gaussian, and set to null
TMPnii->data=static_cast<void*>(&Img1prt[CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*tp]);
if(factor>0) GaussianSmoothing5D_nifti(TMPnii,NULL,factor);
TMPnii->data=NULL;
//As TMPnii->data=NULL, the free will not cause any harm
nifti_image_free(TMPnii);
float *imgsort=new float [CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
imgsort[i]=Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
HeapSort(imgsort,CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1);
float max=imgsort[(int)(round((1-0.02)*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
float min=imgsort[(int)(round(0.02*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
float newMax=1,newMin=0;
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
if(min>Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=min;
if(max<Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=max;
Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=newMin+(Img1prt[i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]-min)*(newMax-newMin)/(max-min);
}
float xkernel[5][5][5]={
{{-1,-4, -6,-4,-1},{-2, -8,-12, -8,-2},{-4,-16,-24,-16,-4},{-2, -8,-12, -8,-2},{-1,-4, -6,-4,-1}},
{{-2,-8,-12,-8,-2},{-4,-16,-24,-16,-4},{-8,-32,-48,-32,-8},{-4,-16,-24,-16,-4},{-2,-8,-12,-8,-2}},
{{ 0, 0, 0, 0, 0},{ 0, 0, 0, 0, 0},{ 0, 0, 0, 0, 0},{ 0, 0, 0, 0, 0},{ 0, 0, 0, 0, 0}},
{{ 2, 8, 12, 8, 2},{ 4, 16, 24, 16, 4},{ 8, 32, 48, 32, 8},{ 4, 16, 24, 16, 4},{ 2, 8, 12, 8, 2}},
{{ 1, 4, 6, 4, 1},{ 2, 8, 12, 8, 2},{ 4, 16, 24, 16, 4},{ 2, 8, 12, 8, 2},{ 1, 4, 6, 4, 1}}
};
float ykernel[5][5][5]={
{{ 1, 4, 6, 4, 1},{ 2, 8, 12, 8, 2},{ 0, 0, 0, 0, 0},{-2, -8,-12, -8,-2},{-1, -4, -6, -4,-1}},
{{ 2, 8, 12, 8, 2},{ 4, 16, 24, 16, 4},{ 0, 0, 0, 0, 0},{-4,-16,-24,-16,-4},{-2, -8,-12, -8,-2}},
{{ 4, 16, 24, 16, 4},{ 8, 32, 48, 32, 8},{ 0, 0, 0, 0, 0},{-8,-32,-48,-32,-8},{-4,-16,-24,-16,-4}},
{{ 2, 8, 12, 8, 2},{ 4, 16, 24, 16, 4},{ 0, 0, 0, 0, 0},{-4,-16,-24,-16,-4},{-2, -8,-12, -8,-2}},
{{ 1, 4, 6, 4, 1},{ 2, 8, 12, 8, 2},{ 0, 0, 0, 0, 0},{-2, -8,-12, -8,-2},{-1, -4, -6, -4,-1}}
};
float zkernel[5][5][5]={
{{-1, -2, 0, 2, 1},{ -2, -4, 0, 4, 2},{ -4, -8, 0, 8, 4},{ -2, -4, 0, 4, 2},{-1, -2, 0, 2, 1}},
{{-4, -8, 0, 8, 4},{ -8,-16, 0, 16, 8},{-16,-32, 0, 32, 16},{ -8,-16, 0, 16, 8},{-4, -8, 0, 8, 4}},
{{-6,-12, 0, 12, 6},{-12,-24, 0, 24, 12},{-24,-48, 0, 48, 24},{-12,-24, 0, 24, 12},{-6,-12, 0, 12, 6}},
{{-4, -8, 0, 8, 4},{ -8,-16, 0, 16, 8},{-16,-32, 0, 32, 16},{ -8,-16, 0, 16, 8},{-4, -8, 0, 8, 4}},
{{-1, -2, 0, 2, 1},{ -2, -4, 0, 4, 2},{ -4, -8, 0, 8, 4},{ -2, -4, 0, 4, 2},{-1, -2, 0, 2, 1}}
};
int inz=0;
#ifdef _OPENMP
#pragma omp parallel for \
private(inz)\
shared(CurrSize,bufferImages,Img1prt,xkernel,ykernel,zkernel)
#endif
for(inz=0; inz<CurrSize->zsize; inz++) {
for(int iny=0; iny<CurrSize->ysize; iny++) {
for(int inx=0; inx<CurrSize->xsize; inx++) {
float sumx=0,sumy=0,sumz=0;
for(int i=-2;i<=2;i++) {
for(int j=-2;j<=2;j++) {
for(int k=-2;k<=2;k++) {
if(inx+k>=0 && iny+j>=0 && inz+i>=0 &&
inx+k<CurrSize->xsize && iny+j<CurrSize->ysize && inz+i<CurrSize->zsize) {
int index=(inx+k)+(iny+j)*CurrSize->xsize+(inz+i)*(CurrSize->xsize*CurrSize->ysize);
sumx+=xkernel[k+2][j+2][i+2]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
sumy+=ykernel[j+2][k+2][i+2]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
sumz+=zkernel[i+2][k+2][j+2]*Img1prt[index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
}
}
}
int index=inx+iny*CurrSize->xsize+inz*(CurrSize->xsize*CurrSize->ysize);
float val=sqrt(sumx*sumx+sumy*sumy+sumz*sumz);
bufferImages[current_buffer?0:1][index+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=val;
}
}
}
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
imgsort[i]=bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize];
}
HeapSort(imgsort,CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1);
max=imgsort[(int)(round((1-0.02)*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
min=imgsort[(int)(round(0.02*(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize-1)))];
for(long i=0; i<CurrSize->xsize*CurrSize->ysize*CurrSize->zsize; i++) {
if(min>bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=min;
if(max<bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]) bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=max;
bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]=newMin+(bufferImages[current_buffer?0:1][i+tp*CurrSize->xsize*CurrSize->ysize*CurrSize->zsize]-min)*(newMax-newMin)/(max-min);
}
delete [] imgsort;
}
}
else {
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
current_buffer=current_buffer?0:1;
}
// ********************* DIVIDE *************************
else if( strcmp(argv[i], "-div") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double divfactor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]/divfactor;
current_buffer=current_buffer?0:1;
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]/NewImagePtr[i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* POWER *************************
else if(strcmp(argv[i], "-pow") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
float factor=strtof(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=powf(bufferImages[current_buffer][i],factor);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* Is NAN *************************
else if(strcmp(argv[i], "-isnan") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=isnan(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* square_root *************************
else if(strcmp(argv[i], "-sqrt") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=sqrtf(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* Exponential *************************
else if(strcmp(argv[i], "-exp") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=expf(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* Logarithm *************************
else if(strcmp(argv[i], "-log") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=logf(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* reciprocal *************************
else if(strcmp(argv[i], "-recip") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=1/(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* absolute value *************************
else if(strcmp(argv[i], "-abs") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=fabs(bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
// ********************* bin value *************************
else if(strcmp(argv[i], "-bin") == 0)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer][i]>0?1.0f:0.0f);
current_buffer=current_buffer?0:1;
}
// ********************* THRESHOLD below *************************
else if(strcmp(argv[i], "-thr") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0 ) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer][i]>factor)?bufferImages[current_buffer][i]:0;
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* EQUAL *************************
else if(strcmp(argv[i], "-equal") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0 ) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer][i]==factor)?1:0;
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* Replace value *************************
else if(strcmp(argv[i], "-replace") == 0)
{
string parser=argv[++i];
string parser2=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0 ) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
double factor2=strtod(parser2.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer][i]==factor)?factor2:bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* THRESHOLD ABOVE *************************
else if(strcmp(argv[i], "-uthr") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || (parser.length()==1 && parser.find("0")!=string::npos)))
{
double factor=strtod(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer][i]<factor)?bufferImages[current_buffer][i]:0;
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " is not a valid number"<<endl;
i=argc;
}
}
// ********************* Dilate *************************
else if(strcmp(argv[i], "-dil") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double factor=strtod(parser.c_str(),NULL);
Dillate(bufferImages[current_buffer],(int)round(factor),CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer > 0"<<endl;
i=argc;
}
}
// ********************* Erosion *************************
else if(strcmp(argv[i], "-ero") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double factor=strtod(parser.c_str(),NULL);
Erosion(bufferImages[current_buffer],(int)round(factor),CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer > 0"<<endl;
i=argc;
}
}
// ********************* Erosion *************************
else if(strcmp(argv[i], "-erot") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double factor=strtod(parser.c_str(),NULL);
Erosion(bufferImages[current_buffer],(int)round(factor),CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer > 0"<<endl;
i=argc;
}
}
// ********************* Smooth Label *************************
else if(strcmp(argv[i], "-smol") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
double factor=strtod(parser.c_str(),NULL);
SmoothLab(bufferImages[current_buffer],factor,CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer > 0"<<endl;
i=argc;
}
}
// ********************* Euclidean Distance Transform *************************
else if(strcmp(argv[i], "-euc") == 0)
{
bool * Lable= new bool [CurrSize->numel];
float * Speed= new float [CurrSize->numel];
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
Lable[i]=bufferImages[current_buffer][i];
Speed[i]=1.0f;
}
float * Distance = DoubleEuclideanDistance_3D(Lable,Speed,CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=Distance[i];
current_buffer=current_buffer?0:1;
delete [] Distance;
delete [] Lable;
delete [] Speed;
}
// ********************* Geodesic Distance Transform *************************
else if(strcmp(argv[i], "-geo") == 0)
{
string parser=argv[++i];
if(parser.find_first_not_of("1234567890.-+")== string::npos)
{
if(strtod(parser.c_str(),NULL)<=0)
{
cout<< "ERROR: -geo speed should be larger than zero"<<endl;
return 1;
}
bool * Lable= new bool [CurrSize->numel];
float * Speed= new float [CurrSize->numel];
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
Lable[i]=bufferImages[current_buffer][i];
Speed[i]=strtod(parser.c_str(),NULL);
}
float * Distance = DoubleEuclideanDistance_3D(Lable,Speed,CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=Distance[i];
current_buffer=current_buffer?0:1;
delete [] Distance;
delete [] Lable;
delete [] Speed;
}
else
{
if( (strtod(parser.c_str(),NULL)!=0 && (parser.find(".nii")==string::npos && parser.find(".img")==string::npos && parser.find(".hdr")==string::npos )) ||(parser.length()==1 && parser.find("0")!=string::npos))
{
cerr<<"ERROR: "<<argv[i]<<" has to be an image"<<endl;
exit(1);
}
bool * Lable= new bool [CurrSize->numel];
float * Speed= new float [CurrSize->numel];
nifti_image * SpeedImage=nifti_image_read(parser.c_str(),true);
SpeedImage->nu=(SpeedImage->nu>1)?SpeedImage->nu:1;
SpeedImage->nt=(SpeedImage->nt>1)?SpeedImage->nt:1;
if(SpeedImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(SpeedImage);
}
float * SpeedImagePtr = static_cast<float *>(SpeedImage->data);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
Lable[i]=bufferImages[current_buffer][i];
Speed[i]=SpeedImagePtr[i]>0.0001?SpeedImagePtr[i]:0.0001;
}
float * Distance = DoubleEuclideanDistance_3D(Lable,Speed,CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=Distance[i];
current_buffer=current_buffer?0:1;
delete [] Distance;
delete [] Lable;
delete [] Speed;
nifti_image_free(SpeedImage);
}
}
// ********************* Linear LS Normalise *************************
else if(strcmp(argv[i], "-llsnorm") == 0)
{
string parser=argv[++i];
if( (strtod(parser.c_str(),NULL)!=0 && (parser.find(".nii")==string::npos && parser.find(".img")==string::npos && parser.find(".hdr")==string::npos ))
||(parser.length()==1 && parser.find("0")!=string::npos))
{
cerr<<"ERROR: "<<argv[i]<<" has to be an image"<<endl;
exit(1);
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
// Y=a*X+b
float a=0;
float b=0;
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
LS_Vecs(bufferImages[current_buffer],NewImagePtr,NULL, (CurrSize->xsize*CurrSize->ysize*CurrSize->zsize),&a, &b);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=a*NewImagePtr[i]+b;
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "
<<NewImage->nx<<","<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* Linear LTS Normalise *************************
else if(strcmp(argv[i], "-lltsnorm") == 0)
{
string parser=argv[++i];
string parserout=argv[++i];
float percent_outlier=strtod(parserout.c_str(),NULL);
percent_outlier=percent_outlier>0.5?0.5:(percent_outlier<0?0:percent_outlier);
if( (strtod(parser.c_str(),NULL)!=0 && (parser.find(".nii")==string::npos && parser.find(".img")==string::npos && parser.find(".hdr")==string::npos ))
||(parser.length()==1 && parser.find("0")!=string::npos))
{
cerr<<"ERROR: "<<argv[i]<<" has to be an image"<<endl;
exit(1);
}
else
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
// Y=a*X+b
float a=0;
float b=0;
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
LTS_Vecs(bufferImages[current_buffer],NewImagePtr,NULL,percent_outlier,20, 0.001, (CurrSize->xsize*CurrSize->ysize*CurrSize->zsize),&a, &b);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=a*NewImagePtr[i]+b;
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "
<<NewImage->nx<<","<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* Quadratic LS Normalise *************************
else if(strcmp(argv[i], "-qlsnorm") == 0)
{
string order_str=argv[++i];
int order=(int)round(strtod(order_str.c_str(),NULL));
if(order>4){
cout << "ERROR: Order is too high... using order 4"<<endl;
order=4;
}
if(order<1){
cout << "ERROR: Order is too low... using order 1"<<endl;
order=1;
}
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
const long nvox=CurrSize->xsize*CurrSize->ysize*CurrSize->zsize;
Eigen::MatrixXf Img1(nvox,order+1);
Eigen::VectorXf Img2(nvox,1);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
Img2(i)=bufferImages[current_buffer][i];
for(int j=0; j<(order+1); j++)
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
Img1(i,j)=pow(NewImagePtr[i],j);
Eigen::MatrixXf Img1TransImg1=Img1.transpose()*Img1;
Eigen::VectorXf Img1TransImg2=Img1.transpose()*Img2;
Eigen::VectorXf x;
x=Img1TransImg1.lu().solve(Img1TransImg2); // using a LU factorization
cout<<x<<endl;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]=x(0);
}
for(int j=1; j<(order+1); j++){
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]+=x(j)*pow(NewImagePtr[i],j);
}
}
current_buffer=current_buffer?0:1;
nifti_image_free(NewImage);
}
else if(strcmp(argv[i], "-qlsnorm_mask") == 0)
{
string order_str=argv[++i];
int order=(int)round(strtod(order_str.c_str(),NULL));
if(order>4){
cout << "ERROR: Order is too high... using order 4"<<endl;
order=4;
}
if(order<1){
cout << "ERROR: Order is too low... using order 1"<<endl;
order=1;
}
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
parser=argv[++i];
nifti_image * MaskImage=nifti_image_read(parser.c_str(),true);
MaskImage->nu=(MaskImage->nu>1)?MaskImage->nu:1;
MaskImage->nt=(MaskImage->nt>1)?MaskImage->nt:1;
if(MaskImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(MaskImage);
}
float * MaskImagePtr = static_cast<float *>(MaskImage->data);
size_t nvoxmax=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
nvoxmax++;
}
}
Eigen::MatrixXf Img1(nvoxmax+1,order+1);
Eigen::VectorXf Img2(nvoxmax+1);
size_t nvox=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
Img2(nvox)=bufferImages[current_buffer][i];
nvox++;
}
}
nvox=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
for(int j=0; j<(order+1); j++){
Img1(nvox,j)= (j==0)? 1 : pow(NewImagePtr[i],j) ;
}
nvox++;
}
}
cout<<nvox<<endl;
Eigen::MatrixXf Img1TransImg1=Img1.transpose()*Img1;
Eigen::VectorXf Img1TransImg2=Img1.transpose()*Img2;
Eigen::VectorXf x;
x=Img1TransImg1.lu().solve(Img1TransImg2); // using a LU factorization
cout<<x<<endl;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]=x(0);
}
for(int j=1; j<(order+1); j++){
cout <<x(j)<<endl;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]+=x(j)*pow(NewImagePtr[i],j);
}
}
current_buffer=current_buffer?0:1;
nifti_image_free(NewImage);
nifti_image_free(MaskImage);
}
else if(strcmp(argv[i], "-qlsnorm2_mask") == 0)
{
string order_str=argv[++i];
int order=(int)round(strtod(order_str.c_str(),NULL));
if(order>4){
cout << "ERROR: Order is too high... using order 4"<<endl;
order=4;
}
if(order<1){
cout << "ERROR: Order is too low... using order 1"<<endl;
order=1;
}
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
parser=argv[++i];
nifti_image * MaskImage=nifti_image_read(parser.c_str(),true);
MaskImage->nu=(MaskImage->nu>1)?MaskImage->nu:1;
MaskImage->nt=(MaskImage->nt>1)?MaskImage->nt:1;
if(MaskImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(MaskImage);
}
float * MaskImagePtr = static_cast<float *>(MaskImage->data);
size_t nvoxmax=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
nvoxmax++;
}
}
Eigen::MatrixXf Img1(nvoxmax+1,order);
Eigen::VectorXf Img2(nvoxmax+1);
size_t nvox=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
Img2(nvox)=bufferImages[current_buffer][i];
nvox++;
}
}
nvox=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
if(MaskImagePtr[i]>0 && isnan(bufferImages[current_buffer][i])==0 && isnan(NewImagePtr[i])==0)
{
for(int j=1; j<(order+1); j++){
Img1(nvox,j-1)= pow(NewImagePtr[i],j) ;
}
nvox++;
}
}
Eigen::MatrixXf Img1TransImg1=Img1.transpose()*Img1;
Eigen::VectorXf Img1TransImg2=Img1.transpose()*Img2;
Eigen::VectorXf x;
x=Img1TransImg1.lu().solve(Img1TransImg2); // using a LU factorization
cout<<x<<endl;
// initialise the output buffer to zero before accumulating (qlsnorm2 has no intercept term)
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
bufferImages[current_buffer?0:1][i]=0;
for(int j=1; j<(order+1); j++){
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]+=x(j-1)*pow(NewImagePtr[i],j);
}
}
current_buffer=current_buffer?0:1;
nifti_image_free(NewImage);
nifti_image_free(MaskImage);
}
// ********************* Quadratic LS Histogram Normalise *************************
else if(strcmp(argv[i], "-qlshnorm") == 0)
{
string order_str=argv[++i];
int order=(int)round(strtod(order_str.c_str(),NULL));
if(order>4){
cout << "ERROR: Order is too high... using order 4"<<endl;
order=4;
}
if(order<1){
cout << "ERROR: Order is too low... using order 1"<<endl;
order=1;
}
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
// copy image, sort and fill vector
size_t img3Dsize=(NewImage->nx*NewImage->ny*NewImage->nz);
size_t countnan=0;
for(size_t index=0; index<img3Dsize; index++)
countnan+=isnan(NewImagePtr[index])?0:1;
float * imgsort=new float [countnan];
size_t countindex=0;
for(size_t index=0; index<img3Dsize; index++)
if(isnan(NewImagePtr[index])==0){
imgsort[countindex]=NewImagePtr[index];
countindex++;
}
HeapSort(imgsort,countnan-1);
Eigen::VectorXf Img2(1000,1);
for(int percentile=0; percentile<1000; percentile++)
Img2(percentile)=imgsort[(long)(floor(( (float)(percentile) / 1000.0f ) * (float)( countnan-1 )))];
delete [] imgsort;
// copy image, sort and fill vector
img3Dsize=(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
countnan=0;
for(size_t index=0; index<img3Dsize; index++)
countnan+=isnan(bufferImages[current_buffer][index])?0:1;
imgsort=new float [countnan];
countindex=0;
for(size_t index=0; index<img3Dsize; index++)
if(isnan(bufferImages[current_buffer][index])==0){
imgsort[countindex]=bufferImages[current_buffer][index];
countindex++;
}
HeapSort(imgsort,countnan-1);
Eigen::MatrixXf Img1(1000,order+1);
for(int j=0; j<(order+1); j++)
for(int percentile=0; percentile<1000; percentile++){
Img1(percentile,j)=pow(imgsort[(long)(floor(( (float)(percentile) / 1000.0f ) * (float)( countnan-1 )))] , j );
}
delete [] imgsort;
Eigen::MatrixXf Img1TransImg1=Img1.transpose()*Img1;
Eigen::VectorXf Img1TransImg2=Img1.transpose()*Img2;
Eigen::VectorXf x;
x=Img1TransImg1.lu().solve(Img1TransImg2); // using a LU factorization
cout<<x<<endl;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]=x(0);
}
for(int j=1; j<(order+1); j++)
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]+=x(j)*pow(bufferImages[current_buffer][i],j);
}
current_buffer=current_buffer?0:1;
nifti_image_free(NewImage);
}
else if(strcmp(argv[i], "-qlshnorm_mask") == 0)
{
string order_str=argv[++i];
int order=(int)round(strtod(order_str.c_str(),NULL));
if(order>4){
cout << "ERROR: Order is too high... using order 4"<<endl;
order=4;
}
if(order<1){
cout << "ERROR: Order is too low... using order 1"<<endl;
order=1;
}
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
float * NewImagePtr = static_cast<float *>(NewImage->data);
parser=argv[++i];
nifti_image * MaskImage=nifti_image_read(parser.c_str(),true);
MaskImage->nu=(MaskImage->nu>1)?MaskImage->nu:1;
MaskImage->nt=(MaskImage->nt>1)?MaskImage->nt:1;
if(MaskImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(MaskImage);
}
float * MaskImagePtr = static_cast<float *>(MaskImage->data);
// copy image, sort and fill vector
size_t img3Dsize=(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
size_t countnan=0;
size_t numbsamples=1000;
for(size_t index=0; index<img3Dsize; index++){
if(isnan(NewImagePtr[index])==0&&isnan(bufferImages[current_buffer][index])==0&&MaskImagePtr[index]>0){
countnan++;
}
}
float * imgsort=new float [countnan];
size_t countindex=0;
for(size_t index=0; index<img3Dsize; index++){
if(isnan(NewImagePtr[index])==0&&isnan(bufferImages[current_buffer][index])==0&&MaskImagePtr[index]>0){
imgsort[countindex]=NewImagePtr[index];
countindex++;
}
}
HeapSort(imgsort,countnan-1);
Eigen::VectorXf Img2(numbsamples,1);
for(size_t percentile=0; percentile<numbsamples; percentile++){
Img2(percentile)=imgsort[(long)(floor(( (float)(percentile) / (float)(numbsamples) ) * (float)( countnan-1 )))];
}
// copy image, sort and fill vector
countindex=0;
for(size_t index=0; index<img3Dsize; index++){
if(isnan(NewImagePtr[index])==0&&isnan(bufferImages[current_buffer][index])==0&&MaskImagePtr[index]>0){
imgsort[countindex]=bufferImages[current_buffer][index];
countindex++;
}
}
HeapSort(imgsort,countnan-1);
Eigen::MatrixXf Img1(numbsamples,order+1);
for(size_t percentile=0; percentile<numbsamples; percentile++){
for(int j=0; j<(order+1); j++){
Img1(percentile,j)=pow(imgsort[(long)(floor(( (float)(percentile) / (float)(numbsamples) ) * (float)( countnan-1 )))] , j );
}
}
delete [] imgsort;
Eigen::MatrixXf Img1TransImg1=Img1.transpose()*Img1;
Eigen::VectorXf Img1TransImg2=Img1.transpose()*Img2;
Eigen::VectorXf x;
x=Img1TransImg1.lu().solve(Img1TransImg2); // using a LU factorization
cout<<x<<endl;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]=x(0);
}
for(int j=1; j<(order+1); j++){
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++){
bufferImages[current_buffer?0:1][i]+=x(j)*pow(bufferImages[current_buffer][i],j);
}
}
current_buffer=current_buffer?0:1;
nifti_image_free(NewImage);
nifti_image_free(MaskImage);
}
// ********************* GAUSSIAN SMOOTHING *************************
else if(strcmp(argv[i], "-smo") == 0)
{
string parser=argv[++i];
if((strtod(parser.c_str(),NULL)!=0 ))
{
float factor=strtof(parser.c_str(),NULL);
for(long tp=0; tp<(long)(CurrSize->tsize*CurrSize->usize); tp++){
//create dummy nii
nifti_image * TMPnii = nifti_copy_nim_info(InputImage);
TMPnii->dim[1]=CurrSize->xsize;
TMPnii->dim[2]=CurrSize->ysize;
TMPnii->dim[3]=CurrSize->zsize;
TMPnii->dim[4]=TMPnii->nt=1;
TMPnii->dim[5]=TMPnii->nu=1;
nifti_update_dims_from_array(TMPnii);
//copy pointer, run gaussian, and set to null
TMPnii->data=static_cast<void*>(&bufferImages[current_buffer][CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*tp]);
GaussianSmoothing5D_nifti(TMPnii,NULL,factor);
TMPnii->data=NULL;
//As TMPnii->data=NULL, the free will not cause any harm
nifti_image_free(TMPnii);
}
//current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be a number > 0"<<endl;
i=argc;
}
}
// ********************* GAUSSIAN SMOOTHING with NaNs *************************
else if(strcmp(argv[i], "-smoNaN") == 0)
{
string filename=argv[++i];
nifti_image * MaskImage=nifti_image_read(filename.c_str(),true);
MaskImage->nu=(MaskImage->nu>1)?MaskImage->nu:1;
MaskImage->nt=(MaskImage->nt>1)?MaskImage->nt:1;
if(MaskImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(MaskImage);
}
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer][i]=(bufferImages[current_buffer][i])?
bufferImages[current_buffer][i]:
std::numeric_limits<float>::quiet_NaN();
for(long tp=0; tp<(long)(CurrSize->tsize*CurrSize->usize); tp++){
//create dummy nii
nifti_image * TMPnii = nifti_copy_nim_info(InputImage);
TMPnii->dim[1]=CurrSize->xsize;
TMPnii->dim[2]=CurrSize->ysize;
TMPnii->dim[3]=CurrSize->zsize;
TMPnii->dim[4]=TMPnii->nt=1;
TMPnii->dim[5]=TMPnii->nu=1;
nifti_update_dims_from_array(TMPnii);
//copy pointer, run gaussian, and set to null
TMPnii->data=static_cast<void*>(&bufferImages[current_buffer][CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*tp]);
GaussianSmoothing4D_Nan_nifti(TMPnii,MaskImage);
TMPnii->data=NULL;
//As TMPnii->data=NULL, the free will not cause any harm
nifti_image_free(TMPnii);
}
// free the mask once, outside the timepoint loop, to avoid a double free when tsize*usize>1
nifti_image_free(MaskImage);
//current_buffer=current_buffer?0:1;
}
// ********************* GAUSSIAN sharpening (NOT WORKING) *************************
else if(strcmp(argv[i], "-sharp") == 0)
{
string parser=argv[++i];
if((strtod(parser.c_str(),NULL)!=0 ))
{
float factor=strtof(parser.c_str(),NULL);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
GaussianFilter4D_cArray(&bufferImages[current_buffer][0], factor, CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=(bufferImages[current_buffer?0:1][i]-bufferImages[current_buffer][i]);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be a number > 0"<<endl;
i=argc;
}
}
// ********************* Min *************************
else if(strcmp(argv[i], "-min") == 0)
{
string parser=argv[++i];
if(!(parser.find_first_not_of("1234567890.-+")== string::npos))
{
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=min(bufferImages[current_buffer][i],NewImagePtr[i]);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
}
// ********************* Otsu thresholding *************************
else if(strcmp(argv[i], "-otsu") == 0)
{
otsu(bufferImages[current_buffer],NULL,CurrSize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i];
current_buffer=current_buffer?0:1;
}
// ********************* Fill *************************
else if(strcmp(argv[i], "-fill") == 0)
{
if(CurrSize->tsize==1)
{
Close_Forground_ConnectComp<float,float>(static_cast<void*>(bufferImages[current_buffer]),static_cast<void*>(bufferImages[current_buffer?0:1]),CurrSize);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image to -fill is not 3D"<<endl;
i=argc;
}
}
// ********************* Largest Connected Component *************************
else if(strcmp(argv[i], "-lconcomp") == 0)
{
if(CurrSize->tsize==1)
{
Largest_ConnectComp<float,float>(static_cast<void*>(bufferImages[current_buffer]),static_cast<void*>(bufferImages[current_buffer?0:1]),CurrSize);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image to -lconcomp is not 3D"<<endl;
i=argc;
}
}
// ********************* Connected Components 6NN *************************
else if(strcmp(argv[i], "-concomp6") == 0)
{
if(CurrSize->tsize==1)
{
ConnectComp6NN<float,float>(static_cast<void*>(bufferImages[current_buffer]),static_cast<void*>(bufferImages[current_buffer?0:1]),CurrSize);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image to -concomp6 is not 3D"<<endl;
i=argc;
}
}
// ********************* Connected Components 26NN *************************
else if(strcmp(argv[i], "-concomp26") == 0)
{
if(CurrSize->tsize==1)
{
ConnectComp26NN<float,float>(static_cast<void*>(bufferImages[current_buffer]),static_cast<void*>(bufferImages[current_buffer?0:1]),CurrSize);
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image to -concomp26 is not 3D"<<endl;
i=argc;
}
}
// ********************* Range *************************
else if(strcmp(argv[i], "-range") == 0)
{
float min=FLT_MAX;
float max=-FLT_MAX;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
{
max=bufferImages[current_buffer][i]>max?bufferImages[current_buffer][i]:max;
min=bufferImages[current_buffer][i]<min?bufferImages[current_buffer][i]:min;
}
InputImage->cal_max=max;
InputImage->cal_min=min;
}
// ********************* Extract time point *************************
else if(strcmp(argv[i], "-tp") == 0)
{
string parser=argv[++i];
if(((strtod(parser.c_str(),NULL)!=0) || parser=="0") && strtod(parser.c_str(),NULL)<CurrSize->tsize )
{
float factor=strtof(parser.c_str(),NULL);
InputImage->dim[4]=InputImage->nt=CurrSize->tsize=1;
InputImage->dim[0]=3;
InputImage->dim[5]=InputImage->nu=CurrSize->usize=1;
for(long i=0; i<CurrSize->numel; i++)
bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i+(int)round(factor)*CurrSize->numel];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer between 0 and tsize-1"<<endl;
i=argc;
}
}
// ********************* Split Labels *************************
else if(strcmp(argv[i], "-splitlab") == 0)
{
int maxlab=0;
for(long index=0; index<(CurrSize->numel*(CurrSize->tsize*CurrSize->usize)); index++)
maxlab=(round(bufferImages[current_buffer][index])>maxlab)?(int)round(bufferImages[current_buffer][index]):maxlab;
maxlab=maxlab+1;
if(maxlab>0 && CurrSize->tsize<=1&& CurrSize->usize<=1)
{
CurrSize->tsize=maxlab;
CurrSize->usize=1;
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]= new SegPrecisionTYPE [CurrSize->numel*maxlab];
for(long index=0; index<(CurrSize->numel*maxlab); index++)
bufferImages[current_buffer?0:1][index]=0.0f;
for(long index=0; index<(CurrSize->numel); index++)
bufferImages[current_buffer?0:1][index+(int)round(bufferImages[current_buffer][index])*CurrSize->numel]=1.0f;
delete [] bufferImages[current_buffer];
bufferImages[current_buffer]= new SegPrecisionTYPE [CurrSize->numel*maxlab];
for(long index=0; index<(CurrSize->numel*maxlab); index++)
bufferImages[current_buffer][index]=0;
current_buffer=current_buffer?0:1;
}
else
{
if(CurrSize->tsize<=1&& CurrSize->usize<=1)
{
cout << "ERROR: Found only "<< maxlab << " labels"<<endl;
}
else
{
cout << "ERROR: Working image is not 3D"<<endl;
}
i=argc;
}
}
// ********************* Split interleaved *************************
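// -splitinter de-interleaves the working 3D volume along x, y or z: the even and
// odd positions along the chosen axis are separated into the two time points of
// a 4D image whose size along that axis is halved (assuming an even dimension).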
else if(strcmp(argv[i], "-splitinter") == 0)
{
string direction=argv[++i];
if(CurrSize->tsize<=1&& CurrSize->usize<=1){
CurrSize->tsize=2;
CurrSize->usize=1;
int oldxsize=CurrSize->xsize;
int oldysize=CurrSize->ysize;
int xincrement=1;
int yincrement=1;
int zincrement=1;
if(direction==string("x")){
CurrSize->xsize=round(CurrSize->xsize/2);
xincrement=2;
Scalling[0]= 0.5f;
}
else if(direction==string("y")){
CurrSize->ysize=round(CurrSize->ysize/2);
yincrement=2;
Scalling[1]= 0.5f;
}
else if(direction==string("z")){
CurrSize->zsize=round(CurrSize->zsize/2);
zincrement=2;
Scalling[2]= 0.5f;
}
else{
cout << "ERROR: Direction "<< direction << " is not x, y or z"<<endl;
exit(1);
}
CurrSize->numel=(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
for(long indexZ=0, indexZold=0; indexZ<CurrSize->zsize; indexZ++, indexZold+=zincrement){
for(long indexY=0, indexYold=0; indexY<CurrSize->ysize; indexY++, indexYold+=yincrement){
for(long indexX=0, indexXold=0; indexX<CurrSize->xsize; indexX++, indexXold+=xincrement){
bufferImages[current_buffer?0:1][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
bufferImages[current_buffer][indexXold+indexYold*oldxsize+indexZold*oldysize*oldxsize];
bufferImages[current_buffer?0:1][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize+CurrSize->numel]=
bufferImages[current_buffer][(indexXold+xincrement-1)+(indexYold+yincrement-1)*oldxsize+(indexZold+zincrement-1)*oldysize*oldxsize];
}
}
}
current_buffer=current_buffer?0:1;
}
}
// ********************* Normalise interleaved *************************
else if(strcmp(argv[i], "-splitnorm") == 0)
{
string direction=argv[++i];
if(CurrSize->tsize<=1&& CurrSize->usize<=1){
CurrSize->tsize=1;
CurrSize->usize=1;
int xincrement=1;
int yincrement=1;
int zincrement=1;
//bool isdirectionsizeodd=0;
if(direction==string("x") || direction==string("1")){
xincrement=2;
//isdirectionsizeodd=(CurrSize->xsize%2)==0;
}
else if(direction==string("y") || direction==string("2")){
yincrement=2;
//isdirectionsizeodd=(CurrSize->ysize%2)==0;
}
else if(direction==string("z") || direction==string("3")){
zincrement=2;
//isdirectionsizeodd=(CurrSize->zsize%2)==0;
}
else{
cout << "ERROR: Direction "<< direction << " is not x, y or z"<<endl;
exit(1);
}
//double regul=5.0f;
std::vector<float> sortedimg;
for(long indexZ=0; indexZ<(CurrSize->zsize); indexZ+=zincrement){
for(long indexY=0; indexY<(CurrSize->ysize); indexY+=yincrement){
for(long indexX=0; indexX<(CurrSize->xsize); indexX+=xincrement){
sortedimg.push_back(bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]);
}
}
}
std::sort(sortedimg.begin(), sortedimg.end());
float thresh=0.5f*sortedimg.at(round(sortedimg.size()*0.5f)); // Find a rough background threshold to ignore non-brain tissues
sortedimg.clear();
std::vector<float> sortedvec;
CurrSize->numel=(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
for(long indexZ=(zincrement-1); indexZ<(CurrSize->zsize-(zincrement-1)); indexZ++){
for(long indexY=(yincrement-1); indexY<(CurrSize->ysize-(yincrement-1)); indexY++){
for(long indexX=(xincrement-1); indexX<(CurrSize->xsize-(xincrement-1)); indexX++){
double previous_next_mean_val=(bufferImages[current_buffer][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]+
bufferImages[current_buffer][(indexX-xincrement+1)+(indexY-yincrement+1)*CurrSize->xsize+(indexZ-zincrement+1)*CurrSize->ysize*CurrSize->xsize])/(2.0f);
double current_val=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
bool oddeven=(xincrement>1?indexX%2==0:(yincrement>1?indexY%2==0:(zincrement>1?indexZ%2==0:0)));
float curRat=(current_val*previous_next_mean_val)/(oddeven?(current_val*current_val):(previous_next_mean_val*previous_next_mean_val));
if(!(curRat!=curRat) && current_val>thresh){
sortedvec.push_back(curRat);
}
}
}
}
std::sort(sortedvec.begin(), sortedvec.end());
double compensation_ratio=sortedvec.at(round(sortedvec.size()/2.0f)); // Get the median ratio
cout<<compensation_ratio<<endl;
sortedvec.clear();
for(long indexZ=0; indexZ<(CurrSize->zsize); indexZ+=zincrement){
for(long indexY=0; indexY<(CurrSize->ysize); indexY+=yincrement){
for(long indexX=0; indexX<(CurrSize->xsize); indexX+=xincrement){
bufferImages[current_buffer?0:1][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
compensation_ratio*bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
if((indexX+xincrement-1)<CurrSize->xsize && (indexY+yincrement-1)<CurrSize->ysize && (indexZ+zincrement-1)<CurrSize->zsize)
{
bufferImages[current_buffer?0:1][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]=
bufferImages[current_buffer][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize];
}
}
}
}
current_buffer=current_buffer?0:1;
}
}
// // ********************* Split Lables *************************
// else if(strcmp(argv[i], "-splitnorm2") == 0)
// {
// string direction=argv[++i];
// if(CurrSize->tsize<=1&& CurrSize->usize<=1){
// int cur_dims[8]={3,CurrSize->xsize,CurrSize->ysize,CurrSize->zsize,1,1,1,1};
// nifti_image * NewImage1=nifti_copy_nim_info(InputImage);
// NewImage1->data= (void *) calloc(InputImage->nvox, sizeof(float));
// nifti_image * NewImage2=nifti_copy_nim_info(InputImage);
// NewImage2->data=(void *) calloc(InputImage->nvox, sizeof(float));
// float* NewImage1_ptr=static_cast<SegPrecisionTYPE *>(NewImage1->data);
// float* NewImage2_ptr=static_cast<SegPrecisionTYPE *>(NewImage2->data);
// int xincrement=1;
// int yincrement=1;
// int zincrement=1;
// if(direction==string("x")){
// xincrement=2;
// }
// else if(direction==string("y")){
// yincrement=2;
// }
// else if(direction==string("z")){
// zincrement=2;
// }
// else{
// cout << "ERROR: Direction "<< direction << " is not x, y or z"<<endl;
// exit(1);
// }
//// double regul=1.0e-15f;
// CurrSize->numel=(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
// for(long indexZ=(zincrement-1); indexZ<(CurrSize->zsize-(zincrement-1)); indexZ++){
// for(long indexY=(yincrement-1); indexY<(CurrSize->ysize-(yincrement-1)); indexY++){
// for(long indexX=(xincrement-1); indexX<(CurrSize->xsize-(xincrement-1)); indexX++){
// if(xincrement>1?indexX%2==0:(yincrement>1?indexY%2==0:(zincrement>1?indexZ%2==0:0))){
// NewImage1_ptr[indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
// (bufferImages[current_buffer][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]+
// bufferImages[current_buffer][(indexX-xincrement+1)+(indexY-yincrement+1)*CurrSize->xsize+(indexZ-zincrement+1)*CurrSize->ysize*CurrSize->xsize])/(2.0f);
// NewImage2_ptr[indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
// bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
// }
// else{
// NewImage2_ptr[indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
// (bufferImages[current_buffer][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]+
// bufferImages[current_buffer][(indexX-xincrement+1)+(indexY-yincrement+1)*CurrSize->xsize+(indexZ-zincrement+1)*CurrSize->ysize*CurrSize->xsize])/(2.0f);
// NewImage1_ptr[indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
// bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
// }
// }
// }
// }
// nifti_set_filenames(NewImage1,"img1.nii.gz",0,0);
// nifti_image_write(NewImage1);
// nifti_set_filenames(NewImage2,"img2.nii.gz",0,0);
// nifti_image_write(NewImage2);
// // Y=a*X+b
// float a=0;
// float b=0;
// LS_Vecs(NewImage1_ptr,NewImage2_ptr,NULL, (CurrSize->xsize*CurrSize->ysize*CurrSize->zsize),&a, &b);
// cout<<a<<" "<<b<<endl;
// for(long indexZ=0; indexZ<(CurrSize->zsize); indexZ+=zincrement){
// for(long indexY=0; indexY<(CurrSize->ysize); indexY+=yincrement){
// for(long indexX=0; indexX<(CurrSize->xsize); indexX+=xincrement){
// bufferImages[current_buffer?0:1][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=
// a*bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
// if((indexX+xincrement-1)<CurrSize->xsize && (indexY+yincrement-1)<CurrSize->ysize && (indexZ+zincrement-1)<CurrSize->zsize)
// {
// bufferImages[current_buffer?0:1][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]=
// bufferImages[current_buffer][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize];
// }
// }
// }
// }
// current_buffer=current_buffer?0:1;
// }
// }
// ********************* Join interleaved *************************
else if(strcmp(argv[i], "-joininter") == 0)
{
string direction=argv[++i];
if(CurrSize->tsize==2&& CurrSize->usize<=1){
CurrSize->tsize=1;
CurrSize->usize=1;
int oldxsize=CurrSize->xsize;
int oldysize=CurrSize->ysize;
int oldzsize=CurrSize->zsize;
int xincrement=1;
int yincrement=1;
int zincrement=1;
if(direction==string("x")){
CurrSize->xsize=round(CurrSize->xsize*2);
xincrement=2;
Scalling[0]= 2.0f;
}
else if(direction==string("y")){
CurrSize->ysize=round(CurrSize->ysize*2);
yincrement=2;
Scalling[1]= 2.0f;
}
else if(direction==string("z")){
CurrSize->zsize=round(CurrSize->zsize*2);
zincrement=2;
Scalling[2]= 2.0f;
}
else{
cout << "ERROR: Direction "<< direction << " is not x, y or z"<<endl;
exit(1);
}
CurrSize->numel=(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
long oldnumel=(long)(oldxsize*oldysize*oldzsize);
for(long indexZ=0, indexZold=0; indexZ<CurrSize->zsize; indexZ+=zincrement, indexZold++){
for(long indexY=0, indexYold=0; indexY<CurrSize->ysize; indexY+=yincrement, indexYold++){
for(long indexX=0, indexXold=0; indexX<CurrSize->xsize; indexX+=xincrement, indexXold++){
bufferImages[current_buffer?0:1][(indexX)+(indexY)*CurrSize->xsize+(indexZ)*CurrSize->ysize*CurrSize->xsize]=
bufferImages[current_buffer][indexXold+indexYold*oldxsize+indexZold*oldysize*oldxsize];
bufferImages[current_buffer?0:1][(indexX+xincrement-1)+(indexY+yincrement-1)*CurrSize->xsize+(indexZ+zincrement-1)*CurrSize->ysize*CurrSize->xsize]=
bufferImages[current_buffer][indexXold+indexYold*oldxsize+indexZold*oldysize*oldxsize+oldnumel];
}
}
}
current_buffer=current_buffer?0:1;
}
else{
cout << "ERROR: Number of time points is not 2"<<endl;
exit(1);
}
}
// ********************* merge time points *************************
else if(strcmp(argv[i], "-merge") == 0)
{
string parser=argv[++i];
string parsertp=argv[++i];
if(strtod(parser.c_str(),NULL)>0)
{
long numberof_new_images=(int)strtof(parser.c_str(),NULL);
long dim=(int)strtof(parsertp.c_str(),NULL);
long old_tsize=CurrSize->tsize;
long old_usize=CurrSize->usize;
long new_tsize=CurrSize->tsize;
long new_usize=CurrSize->usize;
if(dim==4)
{
new_tsize=CurrSize->tsize+(int)numberof_new_images;
}
else if(dim==5)
{
new_usize=CurrSize->usize+(int)numberof_new_images;
}
else{
cout<< "ERROR: dim has to be 4 or 5"<<endl;
return 1;
}
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]= new SegPrecisionTYPE [CurrSize->numel*(new_tsize*new_usize)];
for(long index=0; index<(CurrSize->numel*(old_tsize*old_usize)); index++)
bufferImages[current_buffer?0:1][index]=bufferImages[current_buffer][index];
delete [] bufferImages[current_buffer];
bufferImages[current_buffer]= new SegPrecisionTYPE [CurrSize->numel*(new_tsize*new_usize)];
for(long index=0; index<(CurrSize->numel*(old_tsize*old_usize)); index++)
bufferImages[current_buffer][index]=bufferImages[current_buffer?0:1][index];
current_buffer=current_buffer?0:1;
CurrSize->usize=new_usize;
CurrSize->tsize=new_tsize;
for(long tp=0; tp<(long)numberof_new_images; tp++)
{
string parser_image_name=argv[++i];
if(parser_image_name.find(".nii")!=string::npos || parser_image_name.find(".img")!=string::npos || parser_image_name.find(".hdr")!=string::npos)
{
nifti_image * NewImage=nifti_image_read(parser_image_name.c_str(),true);
if(NewImage == NULL)
{
cout<< "ERROR: Failed to read image "<<parser_image_name<<endl;
return 1;
}
if(dim==4){
if(NewImage->nx==InputImage->nx&&NewImage->ny==InputImage->ny&&NewImage->nz==InputImage->nz && NewImage->nt<=1)
{
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
for(long index=0; index<(long)CurrSize->numel; index++)
bufferImages[current_buffer?0:1][index+(old_tsize+tp)*CurrSize->numel]=NewImagePtr[index];
}
else
{
cout<< "ERROR: Image "<<parser_image_name<<" [nx,ny,nz] do not match or nt>1"<<endl;
return 1;
}
}
else if(dim==5){
if(NewImage->nx==InputImage->nx&&NewImage->ny==InputImage->ny&&NewImage->nz==InputImage->nz&&NewImage->nt==InputImage->nt && NewImage->nu<=1)
{
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
for(long index=0; index<(long)CurrSize->numel*old_tsize; index++)
bufferImages[current_buffer?0:1][index+(old_tsize+old_tsize*tp)*CurrSize->numel]=NewImagePtr[index];
}
else
{
cout<< "ERROR: Image "<<parser_image_name<<" [nx,ny,nz,nt] do not match or nu>1"<<endl;
return 1;
}
}
}
}
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: "<< parser << " has to be an integer > 0"<<endl;
i=argc;
}
}
// ********************* Subsample by 2 *************************
else if(strcmp(argv[i], "-subsamp2") == 0)
{
int newx=(int)floor(CurrSize->xsize/2.0f);
int newy=(int)floor(CurrSize->ysize/2.0f);
int newz=(int)floor(CurrSize->zsize/2.0f);
int newnumel=newx*newy*newz;
for(long tp=0; tp<(long)(CurrSize->tsize*CurrSize->usize); tp++){
//create dummy nii
nifti_image * TMPnii = nifti_copy_nim_info(InputImage);
TMPnii->dim[1]=CurrSize->xsize;
TMPnii->dim[2]=CurrSize->ysize;
TMPnii->dim[3]=CurrSize->zsize;
TMPnii->dim[4]=TMPnii->nt=1;
TMPnii->dim[5]=TMPnii->nu=1;
nifti_update_dims_from_array(TMPnii);
//copy pointer, run gaussian, and set to null
TMPnii->data=static_cast<void*>(&bufferImages[current_buffer][CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*tp]);
GaussianSmoothing5D_nifti(TMPnii,NULL,0.5);
TMPnii->data=NULL;
//As TMPnii->data=NULL, the free will not cause any harm
nifti_image_free(TMPnii);
}
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]= new SegPrecisionTYPE [newnumel*CurrSize->tsize];
Scalling[0]=0.5;
Scalling[1]=0.5;
Scalling[2]=0.5;
for(long indexT=0; indexT<CurrSize->tsize; indexT++)
for(long indexZ=0; indexZ<newz; indexZ++)
for(long indexY=0; indexY<newy; indexY++)
for(long indexX=0; indexX<newx; indexX++)
bufferImages[current_buffer?0:1][indexX+indexY*newx+indexZ*newy*newx+indexT*newnumel]=bufferImages[current_buffer][indexX*2+indexY*2*CurrSize->xsize+indexZ*2*CurrSize->xsize*CurrSize->ysize+indexT*CurrSize->numel];
delete [] bufferImages[current_buffer];
bufferImages[current_buffer]= new SegPrecisionTYPE [newnumel*CurrSize->tsize];
current_buffer=current_buffer?0:1;
CurrSize->xsize=newx;
CurrSize->ysize=newy;
CurrSize->zsize=newz;
CurrSize->numel=newnumel;
}
// ********************* Subsample by 2 in x and y *************************
else if(strcmp(argv[i], "-subsamp2xy") == 0)
{
int newx=(int)floor(static_cast<float>(CurrSize->xsize)/2.0f);
int newy=(int)floor(static_cast<float>(CurrSize->ysize)/2.0f);
int newz=(int)floor(static_cast<float>(CurrSize->zsize));
int newnumel=newx*newy*newz;
delete [] bufferImages[current_buffer?0:1];
bufferImages[current_buffer?0:1]= new SegPrecisionTYPE [newnumel*CurrSize->tsize];
Scalling[0]=0.5;
Scalling[1]=0.5;
Scalling[2]=1;
for(long indexT=0; indexT<CurrSize->tsize; indexT++)
for(long indexZ=0; indexZ<newz; indexZ++)
for(long indexY=0; indexY<newy; indexY++)
for(long indexX=0; indexX<newx; indexX++)
bufferImages[current_buffer?0:1][indexX+indexY*newx+indexZ*newy*newx+indexT*newnumel]=bufferImages[current_buffer][indexX*2+indexY*2*CurrSize->xsize+indexZ*CurrSize->xsize*CurrSize->ysize+indexT*CurrSize->numel];
delete [] bufferImages[current_buffer];
bufferImages[current_buffer]= new SegPrecisionTYPE [newnumel*CurrSize->tsize];
current_buffer=current_buffer?0:1;
CurrSize->xsize=newx;
CurrSize->ysize=newy;
CurrSize->zsize=newz;
CurrSize->numel=newnumel;
}
// ********************* Get max TP *************************
else if(strcmp(argv[i], "-tmax") == 0)
{
for(long i=0; i<CurrSize->numel; i++)
{
float tmax=(float)-FLT_MAX;
for(long tp=0; tp<(long)CurrSize->tsize; tp++)
{
if(tmax<bufferImages[current_buffer][i+(long)(tp)*(long)CurrSize->numel])
tmax=bufferImages[current_buffer][i+(long)(tp)*(long)CurrSize->numel];
}
bufferImages[current_buffer?0:1][i]=tmax;
}
CurrSize->tsize=1;
current_buffer=current_buffer?0:1;
}
// ********************* Get TP with maxval *************************
else if(strcmp(argv[i], "-tpmax") == 0)
{
for(long i=0; i<CurrSize->numel; i++)
{
float tmax=(float)-FLT_MAX;
float tmaxindex=-1;
for(long tp=0; tp<(long)CurrSize->tsize; tp++)
{
if(bufferImages[current_buffer][i+(long)(tp)*(long)CurrSize->numel]>tmax)
{
tmax=bufferImages[current_buffer][i+(long)(tp)*(long)CurrSize->numel];
tmaxindex=(float)tp;
}
}
bufferImages[current_buffer?0:1][i]=(float)tmaxindex;
}
InputImage->cal_max=CurrSize->tsize;
CurrSize->tsize=1;
current_buffer=current_buffer?0:1;
}
// ********************* Get mean TP *************************
else if(strcmp(argv[i], "-tmean") == 0)
{
for(long i=0; i<CurrSize->numel; i++)
{
float tmean=0;
for(long tp=0; tp<(long)CurrSize->tsize; tp++)
{
tmean+=bufferImages[current_buffer][i+(long)(tp)*CurrSize->numel];
}
bufferImages[current_buffer?0:1][i]=tmean/CurrSize->tsize;
}
CurrSize->tsize=1;
current_buffer=current_buffer?0:1;
}
// ********************* Get min TP *************************
else if(strcmp(argv[i], "-tmin") == 0)
{
for(long i=0; i<CurrSize->numel; i++)
{
float tmin=(float)FLT_MAX;
for(long tp=0; tp<CurrSize->tsize; tp++)
{
if(tmin>bufferImages[current_buffer][i+(int)(tp)*CurrSize->numel])
tmin=bufferImages[current_buffer][i+(int)(tp)*CurrSize->numel];
}
bufferImages[current_buffer?0:1][i]=tmin;
}
CurrSize->tsize=1;
current_buffer=current_buffer?0:1;
}
// ********************* Reset SCL *************************
else if(strcmp(argv[i], "-scl") == 0)
{
InputImage->scl_inter=0;
InputImage->scl_slope=1;
}
// ********************* Copy Header *************************
else if(strcmp(argv[i], "-hdr_copy") == 0)
{
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->nu<1)
NewImage->dim[5]=1;
if(NewImage->nt<1)
NewImage->dim[4]=1;
nifti_update_dims_from_array(NewImage);
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
if(NewImage->nx==CurrSize->xsize&&NewImage->ny==CurrSize->ysize&&NewImage->nz==CurrSize->zsize&&NewImage->nt==CurrSize->tsize&&NewImage->nu==CurrSize->usize)
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize*CurrSize->tsize*CurrSize->usize); i++)
bufferImages[current_buffer?0:1][i]=NewImagePtr[i];
current_buffer=current_buffer?0:1;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
exit(1);
}
nifti_image_free(NewImage);
}
// ********************* Swap dims 4 and 5 *************************
else if(strcmp(argv[i], "-4to5") == 0)
{
int tempT=CurrSize->tsize;
int tempU=CurrSize->usize;
InputImage->dim[4]=InputImage->nt=CurrSize->tsize=tempU;
InputImage->dim[5]=InputImage->nu=CurrSize->usize=tempT;
}
// ********************* Get LSSD *************************
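// -lssd computes a locally normalised SSD: both images are locally zero-meaned
// and variance-normalised with a Gaussian window of the given std, the
// normalised difference is Gaussian-smoothed, and the result is squared
// voxel-wise. The 0.01*allstd terms regularise the local variance estimates in
// flat regions.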
else if(strcmp(argv[i], "-lssd") == 0)
{
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<SegPrecisionTYPE>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
string parserstd=argv[++i];
if(strtod(parserstd.c_str(),NULL)>0)
{
if(NewImage->nt<2&&NewImage->nx==InputImage->nx&&NewImage->ny==InputImage->ny&&NewImage->nz==InputImage->nz)
{
float * NewImageMean=new float [NewImage->nx*NewImage->ny*NewImage->nz];
float * NewImageStd=new float [NewImage->nx*NewImage->ny*NewImage->nz];
float * InputImageMean=new float [InputImage->nx*InputImage->ny*InputImage->nz];
float * InputImageStd=new float [InputImage->nx*InputImage->ny*InputImage->nz];
float allmeanNew=0;
float allmeanInput=0;
float allstdNew=0;
float allstdInput=0;
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
allmeanNew+=NewImagePtr[index];
allmeanInput+=bufferImages[current_buffer][index];
NewImageMean[index]=NewImagePtr[index];
NewImageStd[index]=NewImagePtr[index]*NewImagePtr[index];
InputImageMean[index]=bufferImages[current_buffer][index];
InputImageStd[index]=bufferImages[current_buffer][index]*bufferImages[current_buffer][index];
}
allmeanNew=allmeanNew/(InputImage->nx*InputImage->ny*InputImage->nz);
allmeanInput=allmeanInput/(InputImage->nx*InputImage->ny*InputImage->nz);
GaussianFilter4D_cArray(NewImageMean,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(NewImageStd,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(InputImageMean,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(InputImageStd,strtod(parserstd.c_str(),NULL),CurrSize);
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
allstdNew+=(NewImagePtr[index]-allmeanNew)*(NewImagePtr[index]-allmeanNew);
allstdInput+=(bufferImages[current_buffer][index]-allmeanInput)*(bufferImages[current_buffer][index]-allmeanInput);
}
allstdNew=allstdNew/(InputImage->nx*InputImage->ny*InputImage->nz);
allstdInput=allstdInput/(InputImage->nx*InputImage->ny*InputImage->nz);
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
NewImageStd[index]=NewImageStd[index]-NewImageMean[index]*NewImageMean[index];
InputImageStd[index]=InputImageStd[index]-InputImageMean[index]*InputImageMean[index];
bufferImages[current_buffer?0:1][index]=(bufferImages[current_buffer][index]-InputImageMean[index])/(sqrt(InputImageStd[index]+0.01*allstdInput))-(NewImagePtr[index]-NewImageMean[index])/(sqrt(NewImageStd[index]+0.01*allstdNew));
}
GaussianFilter4D_cArray(bufferImages[current_buffer?0:1],strtod(parserstd.c_str(),NULL),CurrSize);
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
bufferImages[current_buffer?0:1][index]=bufferImages[current_buffer?0:1][index]*bufferImages[current_buffer?0:1][index];
}
current_buffer=current_buffer?0:1;
delete [] NewImageMean;
delete [] NewImageStd;
delete [] InputImageMean;
delete [] InputImageStd;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
}
nifti_image_free(NewImage);
}
// ********************* Get LNCC *************************
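// -lncc computes a local normalised cross-correlation with Gaussian-window
// statistics, roughly:
//   LNCC(v) = (G*(I.J) - (G*I)(G*J)) / (sqrt(Var_G(I)*Var_G(J)) + eps)
// where G* denotes Gaussian smoothing with the user-supplied std,
// Var_G(X) = G*(X^2) - (G*X)^2, and eps is derived from the global variances
// (the sqrt(0.01*(allstdNew+allstdInput)) term below). NaNs are propagated.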
else if(strcmp(argv[i], "-lncc") == 0)
{
string parser=argv[++i];
nifti_image * NewImage=nifti_image_read(parser.c_str(),true);
NewImage->nu=(NewImage->nu>1)?NewImage->nu:1;
NewImage->nt=(NewImage->nt>1)?NewImage->nt:1;
if(NewImage->datatype!=DT_FLOAT32)
{
seg_changeDatatype<float>(NewImage);
}
SegPrecisionTYPE * NewImagePtr = static_cast<SegPrecisionTYPE *>(NewImage->data);
string parserstd=argv[++i];
if(strtod(parserstd.c_str(),NULL)>0)
{
if(NewImage->nt<2&&NewImage->nx==InputImage->nx&&NewImage->ny==InputImage->ny&&NewImage->nz==InputImage->nz)
{
float * NewImageMean=new float [NewImage->nx*NewImage->ny*NewImage->nz];
float * NewImageStd=new float [NewImage->nx*NewImage->ny*NewImage->nz];
float * InputImageMean=new float [InputImage->nx*InputImage->ny*InputImage->nz];
float * InputImageStd=new float [InputImage->nx*InputImage->ny*InputImage->nz];
float allmeanNew=0;
float allmeanInput=0;
float allstdNew=0;
float allstdInput=0;
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
if(!isnan(NewImagePtr[index]) && !isnan(bufferImages[current_buffer][index])){
allmeanNew+=NewImagePtr[index];
NewImageMean[index]=NewImagePtr[index];
NewImageStd[index]=NewImagePtr[index]*NewImagePtr[index];
allmeanInput+=bufferImages[current_buffer][index];
InputImageMean[index]=bufferImages[current_buffer][index];
InputImageStd[index]=bufferImages[current_buffer][index]*bufferImages[current_buffer][index];
}
}
allmeanNew=allmeanNew/(InputImage->nx*InputImage->ny*InputImage->nz);
allmeanInput=allmeanInput/(InputImage->nx*InputImage->ny*InputImage->nz);
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
if(!isnan(NewImagePtr[index]) && !isnan(bufferImages[current_buffer][index])){
allstdNew+=(NewImagePtr[index]-allmeanNew)*(NewImagePtr[index]-allmeanNew);
allstdInput+=(bufferImages[current_buffer][index]-allmeanInput)*(bufferImages[current_buffer][index]-allmeanInput);
bufferImages[current_buffer][index]=NewImagePtr[index]*bufferImages[current_buffer][index];
}
}
allstdNew=allstdNew/(InputImage->nx*InputImage->ny*InputImage->nz);
allstdInput=allstdInput/(InputImage->nx*InputImage->ny*InputImage->nz);
cout << allstdInput <<" "<< allstdNew<<endl;
GaussianFilter4D_cArray(bufferImages[current_buffer],strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(NewImageMean,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(NewImageStd,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(InputImageMean,strtod(parserstd.c_str(),NULL),CurrSize);
GaussianFilter4D_cArray(InputImageStd,strtod(parserstd.c_str(),NULL),CurrSize);
for(long index=0; index<InputImage->nx*InputImage->ny*InputImage->nz; index++)
{
if(!isnan(NewImagePtr[index]) && !isnan(bufferImages[current_buffer][index])){
NewImageStd[index]=NewImageStd[index]-NewImageMean[index]*NewImageMean[index];
InputImageStd[index]=InputImageStd[index]-InputImageMean[index]*InputImageMean[index];
bufferImages[current_buffer?0:1][index]=(bufferImages[current_buffer][index]-InputImageMean[index]*NewImageMean[index])/(sqrt(NewImageStd[index]*InputImageStd[index])+sqrt(0.01*(allstdNew+allstdInput)));
}
else{
bufferImages[current_buffer?0:1][index]=std::numeric_limits<float>::quiet_NaN();
}
}
current_buffer=current_buffer?0:1;
delete [] NewImageMean;
delete [] NewImageStd;
delete [] InputImageMean;
delete [] InputImageStd;
}
else
{
cout << "ERROR: Image "<< parser << " is the wrong size - original = ( "<<CurrSize->xsize<<","
<<CurrSize->ysize<<","<<CurrSize->zsize<<","<<CurrSize->tsize<<","<<CurrSize->usize<<" ) New image = ( "<<NewImage->nx<<","
<<NewImage->ny<<","<<NewImage->nz<<","<<NewImage->nt<<","<<NewImage->nu<<" )"<<endl;
i=argc;
}
}
else
{
cout << "ERROR: "<< parserstd << " is not a float"<<endl;
i=argc;
}
nifti_image_free(NewImage);
}
// ********************* z score ****************************
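// -z standardises each 3D volume independently:
//   z(v) = (I(v) - mean(I)) / std(I)
// with the mean and standard deviation taken over all voxels of that
// time/u point.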
else if(strcmp(argv[i], "-z") == 0)
{
for (long tup=0; tup<(CurrSize->tsize*CurrSize->usize); tup++)
{
float mean=0;
int img3Dsize=(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
mean+=bufferImages[current_buffer][i+img3Dsize*tup];
}
mean/=(float)(img3Dsize);
float std=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
std+=powf((bufferImages[current_buffer][i+img3Dsize*tup]-mean),2);
}
std/=(float)img3Dsize;
std=sqrt(std);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
bufferImages[current_buffer?0:1][i+img3Dsize*tup]=(bufferImages[current_buffer][i+img3Dsize*tup]-mean)/std;
}
}
current_buffer=current_buffer?0:1;
}
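The `-z` branch above standardizes each 3D volume independently: per-volume mean, population standard deviation, then (x - mean)/std. A minimal self-contained sketch of that computation (function name and container are illustrative, not part of the tool):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Standardize one volume in place: subtract its mean and divide by its
// population standard deviation (variance divided by N, as in the code above).
inline void zscore(std::vector<float>& vol)
{
    float mean = 0.f;
    for (float v : vol) mean += v;
    mean /= (float)vol.size();

    float var = 0.f;
    for (float v : vol) var += (v - mean) * (v - mean);
    var /= (float)vol.size();   // population variance, matching the tool
    float sd = std::sqrt(var);

    for (float& v : vol) v = (v - mean) / sd;
}
```

After the call the volume has zero mean and unit population variance, which is what makes intensities from different scans comparable.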
// ********************* z score ****************************
else if(strcmp(argv[i], "-zr") == 0)
{
string parser=argv[i+1];
if(strtod(parser.c_str(),NULL)==0 )
{
cout<<"ERROR: The <float> range in option -zr is not a number or is not within the range."<<endl;
return 0;
}
float percentile = atof(argv[++i])/100.0f;
percentile=percentile>1?1:percentile;
percentile=percentile<0?0:percentile;
for (long tup=0; tup<(CurrSize->tsize*CurrSize->usize); tup++)
{
long img3Dsize=(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize);
float * imgsort=new float [img3Dsize];
long curindex=0;
for(long index=0; index<img3Dsize; index++)
{
imgsort[curindex]=bufferImages[current_buffer][index+img3Dsize*tup];
curindex++;
}
HeapSort(imgsort,img3Dsize-1);
float lowThresh=imgsort[(long)(round(percentile*(img3Dsize-1)))];
float highThresh=imgsort[(long)(round((1-percentile)*(img3Dsize-1)))];
delete [] imgsort;
float mean=0;
long count=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
if(bufferImages[current_buffer][i+img3Dsize*tup]<highThresh && bufferImages[current_buffer][i+img3Dsize*tup]>lowThresh)
{
mean+=bufferImages[current_buffer][i+img3Dsize*tup];
count++;
}
}
mean/=(float)(count);
float std=0;
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
if(bufferImages[current_buffer][i+img3Dsize*tup]<highThresh && bufferImages[current_buffer][i+img3Dsize*tup]>lowThresh)
{
std+=powf((bufferImages[current_buffer][i+img3Dsize*tup]-mean),2);
}
}
std/=(float)(count);
std=sqrt(std);
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
bufferImages[current_buffer?0:1][i+img3Dsize*tup]=(bufferImages[current_buffer][i+img3Dsize*tup]-mean)/std;
}
}
current_buffer=current_buffer?0:1;
}
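The `-zr` branch is a robust variant: intensities are sorted, both tails beyond a user-supplied percentile are discarded, and the mean and standard deviation are estimated only from the remaining samples. A hedged sketch of the same idea, using `std::sort` in place of the tool's `HeapSort`:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Robust z-score: estimate mean/std only from samples strictly between the
// p-th and (1-p)-th percentiles, then standardize every sample with them.
inline void robust_zscore(std::vector<float>& vol, float p)
{
    std::vector<float> sorted(vol);
    std::sort(sorted.begin(), sorted.end());
    long n = (long)sorted.size();
    float lo = sorted[(long)std::round(p * (n - 1))];
    float hi = sorted[(long)std::round((1.f - p) * (n - 1))];

    float mean = 0.f;
    long count = 0;
    for (float v : vol)
        if (v > lo && v < hi) { mean += v; ++count; }
    mean /= (float)count;

    float var = 0.f;
    for (float v : vol)
        if (v > lo && v < hi) var += (v - mean) * (v - mean);
    var /= (float)count;
    float sd = std::sqrt(var);

    // Every sample is standardized, including the trimmed outliers.
    for (float& v : vol) v = (v - mean) / sd;
}
```

Trimming makes the estimates insensitive to a small fraction of extreme voxels (e.g. vessels or artifacts), which would otherwise inflate the standard deviation.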
else if(strcmp(argv[i], "-flipNM") == 0) // Neuromorphometric Lab Flip for version 3
{
for(long i=0; i<(long)(CurrSize->xsize*CurrSize->ysize*CurrSize->zsize); i++)
{
switch((int)floor(bufferImages[current_buffer][i])){
case 0: bufferImages[current_buffer?0:1][i]=0; break; // Background and skull
case 1: bufferImages[current_buffer?0:1][i]=1; break; // NonBrain Low
case 2: bufferImages[current_buffer?0:1][i]=2; break; // NonBrain Mid
case 3: bufferImages[current_buffer?0:1][i]=3; break; // NonBrain High
case 4: bufferImages[current_buffer?0:1][i]=4; break; // Non-ventricular CSF
case 5: bufferImages[current_buffer?0:1][i]=5; break; // 3rd Ventricle
case 12: bufferImages[current_buffer?0:1][i]=12; break; // 4th Ventricle
case 16: bufferImages[current_buffer?0:1][i]=16; break; // 5th Ventricle
case 24: bufferImages[current_buffer?0:1][i]=31; break; // Right to Left Accumbens Area
case 31: bufferImages[current_buffer?0:1][i]=24; break; // Left to Right Accumbens Area
case 32: bufferImages[current_buffer?0:1][i]=33; break; // Right to Left Amygdala
case 33: bufferImages[current_buffer?0:1][i]=32; break; // Left to Right Amygdala
case 35: bufferImages[current_buffer?0:1][i]=35; break; // Pons
case 36: bufferImages[current_buffer?0:1][i]=36; break; // Brain Stem
case 37: bufferImages[current_buffer?0:1][i]=38; break; // Right to Left Caudate
case 38: bufferImages[current_buffer?0:1][i]=37; break; // Left to Right Caudate
case 39: bufferImages[current_buffer?0:1][i]=40; break; // Right to Left Cerebellum Exterior
case 40: bufferImages[current_buffer?0:1][i]=39; break; // Left to Right Cerebellum Exterior
case 41: bufferImages[current_buffer?0:1][i]=42; break; // Right to Left Cerebellum White Matter
case 42: bufferImages[current_buffer?0:1][i]=41; break; // Left to Right Cerebellum White Matter
case 43: bufferImages[current_buffer?0:1][i]=44; break; // Right to Left Cerebral Exterior
case 44: bufferImages[current_buffer?0:1][i]=43; break; // Left to Right Cerebral Exterior
case 47: bufferImages[current_buffer?0:1][i]=47; break; // 3rd Ventricle (Posterior part)
case 48: bufferImages[current_buffer?0:1][i]=49; break; // Right to Left Hippocampus
case 49: bufferImages[current_buffer?0:1][i]=48; break; // Left to Right Hippocampus
case 50: bufferImages[current_buffer?0:1][i]=51; break; // Right to Left Inf Lat Vent
case 51: bufferImages[current_buffer?0:1][i]=50; break; // Left to Right Inf Lat Vent
case 52: bufferImages[current_buffer?0:1][i]=53; break; // Right to Left Lateral Ventricle
case 53: bufferImages[current_buffer?0:1][i]=52; break; // Left to Right Lateral Ventricle
case 54: bufferImages[current_buffer?0:1][i]=55; break; // Right to Left Lesion
case 55: bufferImages[current_buffer?0:1][i]=54; break; // Left to Right Lesion
case 56: bufferImages[current_buffer?0:1][i]=57; break; // Right to Left Pallidum
case 57: bufferImages[current_buffer?0:1][i]=56; break; // Left to Right Pallidum
case 58: bufferImages[current_buffer?0:1][i]=59; break; // Right to Left Putamen
case 59: bufferImages[current_buffer?0:1][i]=58; break; // Left to Right Putamen
case 60: bufferImages[current_buffer?0:1][i]=61; break; // Right to Left Thalamus Proper
case 61: bufferImages[current_buffer?0:1][i]=60; break; // Left to Right Thalamus Proper
case 62: bufferImages[current_buffer?0:1][i]=63; break; // Right to Left Ventral DC
case 63: bufferImages[current_buffer?0:1][i]=62; break; // Left to Right Ventral DC
case 64: bufferImages[current_buffer?0:1][i]=65; break; // Right to Left vessel
case 65: bufferImages[current_buffer?0:1][i]=64; break; // Left to Right vessel
case 66: bufferImages[current_buffer?0:1][i]=67; break; // Right to Left Ventricular Lining
case 67: bufferImages[current_buffer?0:1][i]=66; break; // Left to Right Ventricular Lining
case 70: bufferImages[current_buffer?0:1][i]=70; break; // Optic Chiasm
case 72: bufferImages[current_buffer?0:1][i]=72; break; // Cerebellar Vermal Lobules I-V
case 73: bufferImages[current_buffer?0:1][i]=73; break; // Cerebellar Vermal Lobules VI-VII
case 74: bufferImages[current_buffer?0:1][i]=74; break; // Cerebellar Vermal Lobules VIII-X
case 77: bufferImages[current_buffer?0:1][i]=76; break; // Right to Left Basal Forebrain
case 76: bufferImages[current_buffer?0:1][i]=77; break; // Left to Right Basal Forebrain
case 81: bufferImages[current_buffer?0:1][i]=89; break; // WM region flips
case 82: bufferImages[current_buffer?0:1][i]=90; break;
case 83: bufferImages[current_buffer?0:1][i]=91; break;
case 84: bufferImages[current_buffer?0:1][i]=92; break;
case 85: bufferImages[current_buffer?0:1][i]=93; break;
case 86: bufferImages[current_buffer?0:1][i]=94; break;
case 87: bufferImages[current_buffer?0:1][i]=87; break;
case 89: bufferImages[current_buffer?0:1][i]=81; break;
case 90: bufferImages[current_buffer?0:1][i]=82; break;
case 91: bufferImages[current_buffer?0:1][i]=83; break;
case 92: bufferImages[current_buffer?0:1][i]=84; break;
case 93: bufferImages[current_buffer?0:1][i]=85; break;
case 94: bufferImages[current_buffer?0:1][i]=86; break;
case 96: bufferImages[current_buffer?0:1][i]=97; break; // Right to Left claustrum
case 97: bufferImages[current_buffer?0:1][i]=96; break; // Left to Right claustrum
case 101: bufferImages[current_buffer?0:1][i]=102; break; // Right to Left ACgG anterior cingulate gyrus
case 102: bufferImages[current_buffer?0:1][i]=101; break; // Left to Right ACgG anterior cingulate gyrus
case 103: bufferImages[current_buffer?0:1][i]=104; break; // Right to Left AIns anterior insula
case 104: bufferImages[current_buffer?0:1][i]=103; break; // Left to Right AIns anterior insula
case 105: bufferImages[current_buffer?0:1][i]=106; break; // Right to Left AOrG anterior orbital gyrus
case 106: bufferImages[current_buffer?0:1][i]=105; break; // Left to Right AOrG anterior orbital gyrus
case 107: bufferImages[current_buffer?0:1][i]=108; break; // Right to Left AnG angular gyrus
case 108: bufferImages[current_buffer?0:1][i]=107; break; // Left to Right AnG angular gyrus
case 109: bufferImages[current_buffer?0:1][i]=110; break; // Right to Left Calc calcarine cortex
case 110: bufferImages[current_buffer?0:1][i]=109; break; // Left to Right Calc calcarine cortex
case 113: bufferImages[current_buffer?0:1][i]=114; break; // Right to Left CO central operculum
case 114: bufferImages[current_buffer?0:1][i]=113; break; // Left to Right CO central operculum
case 115: bufferImages[current_buffer?0:1][i]=116; break; // Right to Left Cun cuneus
case 116: bufferImages[current_buffer?0:1][i]=115; break; // Left to Right Cun cuneus
case 117: bufferImages[current_buffer?0:1][i]=118; break; // Right to Left Ent entorhinal area
case 118: bufferImages[current_buffer?0:1][i]=117; break; // Left to Right Ent entorhinal area
case 119: bufferImages[current_buffer?0:1][i]=120; break; // Right to Left FO frontal operculum
case 120: bufferImages[current_buffer?0:1][i]=119; break; // Left to Right FO frontal operculum
case 121: bufferImages[current_buffer?0:1][i]=122; break; // Right to Left FRP frontal pole
case 122: bufferImages[current_buffer?0:1][i]=121; break; // Left to Right FRP frontal pole
case 123: bufferImages[current_buffer?0:1][i]=124; break; // Right to Left FuG fusiform gyrus
case 124: bufferImages[current_buffer?0:1][i]=123; break; // Left to Right FuG fusiform gyrus
case 125: bufferImages[current_buffer?0:1][i]=126; break; // Right to Left GRe gyrus rectus
case 126: bufferImages[current_buffer?0:1][i]=125; break; // Left to Right GRe gyrus rectus
case 129: bufferImages[current_buffer?0:1][i]=130; break; // Right to Left IOG inferior occipital gyrus
case 130: bufferImages[current_buffer?0:1][i]=129; break; // Left to Right IOG inferior occipital gyrus
case 133: bufferImages[current_buffer?0:1][i]=134; break; // Right to Left ITG inferior temporal gyrus
case 134: bufferImages[current_buffer?0:1][i]=133; break; // Left to Right ITG inferior temporal gyrus
case 135: bufferImages[current_buffer?0:1][i]=136; break; // Right to Left LiG lingual gyrus
case 136: bufferImages[current_buffer?0:1][i]=135; break; // Left to Right LiG lingual gyrus
case 137: bufferImages[current_buffer?0:1][i]=138; break; // Right to Left LOrG lateral orbital gyrus
case 138: bufferImages[current_buffer?0:1][i]=137; break; // Left to Right LOrG lateral orbital gyrus
case 139: bufferImages[current_buffer?0:1][i]=140; break; // Right to Left MCgG middle cingulate gyrus
case 140: bufferImages[current_buffer?0:1][i]=139; break; // Left to Right MCgG middle cingulate gyrus
case 141: bufferImages[current_buffer?0:1][i]=142; break; // Right to Left MFC medial frontal cortex
case 142: bufferImages[current_buffer?0:1][i]=141; break; // Left to Right MFC medial frontal cortex
case 143: bufferImages[current_buffer?0:1][i]=144; break; // Right to Left MFG middle frontal gyrus
case 144: bufferImages[current_buffer?0:1][i]=143; break; // Left to Right MFG middle frontal gyrus
case 145: bufferImages[current_buffer?0:1][i]=146; break; // Right to Left MOG middle occipital gyrus
case 146: bufferImages[current_buffer?0:1][i]=145; break; // Left to Right MOG middle occipital gyrus
case 147: bufferImages[current_buffer?0:1][i]=148; break; // Right to Left MOrG medial orbital gyrus
case 148: bufferImages[current_buffer?0:1][i]=147; break; // Left to Right MOrG medial orbital gyrus
case 149: bufferImages[current_buffer?0:1][i]=150; break; // Right to Left MPoG postcentral gyrus medial segment
case 150: bufferImages[current_buffer?0:1][i]=149; break; // Left to Right MPoG postcentral gyrus medial segment
case 151: bufferImages[current_buffer?0:1][i]=152; break; // Right to Left MPrG precentral gyrus medial segment
case 152: bufferImages[current_buffer?0:1][i]=151; break; // Left to Right MPrG precentral gyrus medial segment
case 153: bufferImages[current_buffer?0:1][i]=154; break; // Right to Left MSFG superior frontal gyrus medial segment
case 154: bufferImages[current_buffer?0:1][i]=153; break; // Left to Right MSFG superior frontal gyrus medial segment
case 155: bufferImages[current_buffer?0:1][i]=156; break; // Right to Left MTG middle temporal gyrus
case 156: bufferImages[current_buffer?0:1][i]=155; break; // Left to Right MTG middle temporal gyrus
case 157: bufferImages[current_buffer?0:1][i]=158; break; // Right to Left OCP occipital pole
case 158: bufferImages[current_buffer?0:1][i]=157; break; // Left to Right OCP occipital pole
case 161: bufferImages[current_buffer?0:1][i]=162; break; // Right to Left OFuG occipital fusiform gyrus
case 162: bufferImages[current_buffer?0:1][i]=161; break; // Left to Right OFuG occipital fusiform gyrus
case 163: bufferImages[current_buffer?0:1][i]=164; break; // Right to Left OpIFG opercular part of the inferior frontal gyrus
case 164: bufferImages[current_buffer?0:1][i]=163; break; // Left to Right OpIFG opercular part of the inferior frontal gyrus
case 165: bufferImages[current_buffer?0:1][i]=166; break; // Right to Left OrIFG orbital part of the inferior frontal gyrus
case 166: bufferImages[current_buffer?0:1][i]=165; break; // Left to Right OrIFG orbital part of the inferior frontal gyrus
case 167: bufferImages[current_buffer?0:1][i]=168; break; // Right to Left PCgG posterior cingulate gyrus
case 168: bufferImages[current_buffer?0:1][i]=167; break; // Left to Right PCgG posterior cingulate gyrus
case 169: bufferImages[current_buffer?0:1][i]=170; break; // Right to Left PCu precuneus
case 170: bufferImages[current_buffer?0:1][i]=169; break; // Left to Right PCu precuneus
case 171: bufferImages[current_buffer?0:1][i]=172; break; // Right to Left PHG parahippocampal gyrus
case 172: bufferImages[current_buffer?0:1][i]=171; break; // Left to Right PHG parahippocampal gyrus
case 173: bufferImages[current_buffer?0:1][i]=174; break; // Right to Left PIns posterior insula
case 174: bufferImages[current_buffer?0:1][i]=173; break; // Left to Right PIns posterior insula
case 175: bufferImages[current_buffer?0:1][i]=176; break; // Right to Left PO parietal operculum
case 176: bufferImages[current_buffer?0:1][i]=175; break; // Left to Right PO parietal operculum
case 177: bufferImages[current_buffer?0:1][i]=178; break; // Right to Left PoG postcentral gyrus
case 178: bufferImages[current_buffer?0:1][i]=177; break; // Left to Right PoG postcentral gyrus
case 179: bufferImages[current_buffer?0:1][i]=180; break; // Right to Left POrG posterior orbital gyrus
case 180: bufferImages[current_buffer?0:1][i]=179; break; // Left to Right POrG posterior orbital gyrus
case 181: bufferImages[current_buffer?0:1][i]=182; break; // Right to Left PP planum polare
case 182: bufferImages[current_buffer?0:1][i]=181; break; // Left to Right PP planum polare
case 183: bufferImages[current_buffer?0:1][i]=184; break; // Right to Left PrG precentral gyrus
case 184: bufferImages[current_buffer?0:1][i]=183; break; // Left to Right PrG precentral gyrus
case 185: bufferImages[current_buffer?0:1][i]=186; break; // Right to Left PT planum temporale
case 186: bufferImages[current_buffer?0:1][i]=185; break; // Left to Right PT planum temporale
case 187: bufferImages[current_buffer?0:1][i]=188; break; // Right to Left SCA subcallosal area
case 188: bufferImages[current_buffer?0:1][i]=187; break; // Left to Right SCA subcallosal area
case 191: bufferImages[current_buffer?0:1][i]=192; break; // Right to Left SFG superior frontal gyrus
case 192: bufferImages[current_buffer?0:1][i]=191; break; // Left to Right SFG superior frontal gyrus
case 193: bufferImages[current_buffer?0:1][i]=194; break; // Right to Left SMC supplementary motor cortex
case 194: bufferImages[current_buffer?0:1][i]=193; break; // Left to Right SMC supplementary motor cortex
case 195: bufferImages[current_buffer?0:1][i]=196; break; // Right to Left SMG supramarginal gyrus
case 196: bufferImages[current_buffer?0:1][i]=195; break; // Left to Right SMG supramarginal gyrus
case 197: bufferImages[current_buffer?0:1][i]=198; break; // Right to Left SOG superior occipital gyrus
case 198: bufferImages[current_buffer?0:1][i]=197; break; // Left to Right SOG superior occipital gyrus
case 199: bufferImages[current_buffer?0:1][i]=200; break; // Right to Left SPL superior parietal lobule
case 200: bufferImages[current_buffer?0:1][i]=199; break; // Left to Right SPL superior parietal lobule
case 201: bufferImages[current_buffer?0:1][i]=202; break; // Right to Left STG superior temporal gyrus
case 202: bufferImages[current_buffer?0:1][i]=201; break; // Left to Right STG superior temporal gyrus
case 203: bufferImages[current_buffer?0:1][i]=204; break; // Right to Left TMP temporal pole
case 204: bufferImages[current_buffer?0:1][i]=203; break; // Left to Right TMP temporal pole
case 205: bufferImages[current_buffer?0:1][i]=206; break; // Right to Left TrIFG triangular part of the inferior frontal gyrus
case 206: bufferImages[current_buffer?0:1][i]=205; break; // Left to Right TrIFG triangular part of the inferior frontal gyrus
case 207: bufferImages[current_buffer?0:1][i]=208; break; // Right to Left TTG transverse temporal gyrus
case 208: bufferImages[current_buffer?0:1][i]=207; break; // Left to Right TTG transverse temporal gyrus
default: bufferImages[current_buffer?0:1][i]=bufferImages[current_buffer][i]; break; // labels not listed are copied unchanged
}
}
current_buffer=current_buffer?0:1;
for(long indexZ=0; indexZ<CurrSize->zsize; indexZ++)
for(long indexY=0; indexY<CurrSize->ysize; indexY++)
for(long indexX=0; indexX<CurrSize->xsize; indexX++)
bufferImages[current_buffer?0:1][((CurrSize->xsize-1-indexX)+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize)]=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
current_buffer=current_buffer?0:1;
}
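`-flipNM` is ultimately a fixed permutation of label IDs followed by a left-right mirror of the grid. The long switch can be summarized as a lookup table; the sketch below uses only a few illustrative pairs, not the full Neuromorphometrics list:

```cpp
#include <cassert>
#include <map>

// Swap left/right label IDs; labels absent from the table are kept unchanged.
inline int flip_label(int label, const std::map<int,int>& pairs)
{
    auto it = pairs.find(label);
    return it == pairs.end() ? label : it->second;
}

// Illustrative subset of the left/right pairs handled by -flipNM.
inline std::map<int,int> example_pairs()
{
    return { {24,31},{31,24},   // Accumbens Area
             {32,33},{33,32},   // Amygdala
             {48,49},{49,48} }; // Hippocampus
}
```

A table-driven version keeps the pairing data in one place, so adding or auditing a pair does not require touching control flow.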
else if(strcmp(argv[i], "-fliplab") == 0)
{
for(long indexZ=1; indexZ<(CurrSize->zsize-1); indexZ++){
for(long indexY=1; indexY<(CurrSize->ysize-1); indexY++){
for(long indexX=1; indexX<(CurrSize->xsize-1); indexX++){
int indexcur=indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize;
float curval=bufferImages[current_buffer][indexcur];
if( curval!= 52 &&
curval!= 53 &&
curval!= 47 &&
curval!= 50 &&
curval!= 51 &&
curval!= 46 &&
curval!= 45){
int shiftrealsize=1;
int shiftspacing=1;
int stop=0;
for(int shiftz=-shiftrealsize; shiftz<=shiftrealsize; shiftz+=shiftspacing){
for(int shifty=-shiftrealsize; shifty<=shiftrealsize; shifty+=shiftspacing){
for(int shiftx=-shiftrealsize; shiftx<=shiftrealsize; shiftx+=shiftspacing){
int index1=(indexX+shiftx)+CurrSize->xsize*(indexY+shifty)+CurrSize->xsize*CurrSize->ysize*(indexZ+shiftz);
int index2=(indexX-shiftx)+CurrSize->xsize*(indexY-shifty)+CurrSize->xsize*CurrSize->ysize*(indexZ-shiftz);
float curval1=bufferImages[current_buffer][index1];
float curval2=bufferImages[current_buffer][index2];
if(stop==0 && (fabs(shiftx)+fabs(shifty)+fabs(shiftz))<2 ){
if(curval1==46){
if(curval2==47|| curval2==51|| curval2==53){
bufferImages[current_buffer?0:1][indexcur]=46;
stop=1;
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
else if(curval1==45 ){
if(curval2==52 || curval2==50 || curval2==47 ){
bufferImages[current_buffer?0:1][indexcur]=45;
stop=1;
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
else if(curval1==47|| curval1==51|| curval1==53){
if(curval2==46 ){
bufferImages[current_buffer?0:1][indexcur]=46;
stop=1;
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
else if(curval1==52 || curval1==50 || curval1==47 ){
if(curval2==45 ){
bufferImages[current_buffer?0:1][indexcur]=45;
stop=1;
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
}
}
}
}
else{
bufferImages[current_buffer?0:1][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize]=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
}
}
}
}
current_buffer=current_buffer?0:1;
}
else if(strcmp(argv[i], "-fliplab2") == 0)
{
for(long indexZ=1; indexZ<(CurrSize->zsize-1); indexZ++){
for(long indexY=1; indexY<(CurrSize->ysize-1); indexY++){
for(long indexX=1; indexX<(CurrSize->xsize-1); indexX++){
int indexcur=indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize;
float curval=bufferImages[current_buffer][indexcur];
if( curval==46 || curval== 45){
int shiftrealsize=1;
int shiftspacing=1;
int stop=0;
for(int shiftz=-shiftrealsize; shiftz<=shiftrealsize; shiftz+=shiftspacing){
for(int shifty=-shiftrealsize; shifty<=shiftrealsize; shifty+=shiftspacing){
for(int shiftx=-shiftrealsize; shiftx<=shiftrealsize; shiftx+=shiftspacing){
if( stop==0 && (fabs(shiftz)+fabs(shifty)+fabs(shiftx))<2 ){
int index2=(indexX-shiftx)+CurrSize->xsize*(indexY-shifty)+CurrSize->xsize*CurrSize->ysize*(indexZ-shiftz);
float curval2=bufferImages[current_buffer][index2];
if(curval2==47|| curval2==51|| curval2==53){
bufferImages[current_buffer?0:1][indexcur]=67;
stop=1;
//cout<<"hit"<<endl;
}
else if(curval2==52 || curval2==50 || curval2==47 ){
bufferImages[current_buffer?0:1][indexcur]=66;
stop=1;
//cout<<"hat"<<endl;
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
}
}
}
}
else{
bufferImages[current_buffer?0:1][indexcur]=bufferImages[current_buffer][indexcur];
}
}
}
}
current_buffer=current_buffer?0:1;
}
else if(strcmp(argv[i], "-flipimgx") == 0) // X flip image
{
for(long indexZ=0; indexZ<CurrSize->zsize; indexZ++)
for(long indexY=0; indexY<CurrSize->ysize; indexY++)
for(long indexX=0; indexX<CurrSize->xsize; indexX++)
bufferImages[current_buffer?0:1][((CurrSize->xsize-1-indexX)+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize)]=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
current_buffer=current_buffer?0:1;
}
else if(strcmp(argv[i], "-flipimgy") == 0) // Y flip image
{
for(long indexZ=0; indexZ<CurrSize->zsize; indexZ++)
for(long indexY=0; indexY<CurrSize->ysize; indexY++)
for(long indexX=0; indexX<CurrSize->xsize; indexX++)
bufferImages[current_buffer?0:1][((indexX)+(CurrSize->ysize-1-indexY)*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize)]=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
current_buffer=current_buffer?0:1;
}
else if(strcmp(argv[i], "-flipimgz") == 0) // Z flip image
{
for(long indexZ=0; indexZ<CurrSize->zsize; indexZ++)
for(long indexY=0; indexY<CurrSize->ysize; indexY++)
for(long indexX=0; indexX<CurrSize->xsize; indexX++)
bufferImages[current_buffer?0:1][((indexX)+indexY*CurrSize->xsize+(CurrSize->zsize-1-indexZ)*CurrSize->ysize*CurrSize->xsize)]=bufferImages[current_buffer][indexX+indexY*CurrSize->xsize+indexZ*CurrSize->ysize*CurrSize->xsize];
current_buffer=current_buffer?0:1;
}
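All three `-flipimg*` options are the same pattern: copy each voxel to the position with one index mirrored. A minimal sketch of the x-axis case for an x-fastest linear layout (names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Mirror a 3D volume (x fastest, then y, then z) along x into a new buffer.
inline std::vector<float> flip_x(const std::vector<float>& img,
                                 long nx, long ny, long nz)
{
    std::vector<float> out(img.size());
    for (long z = 0; z < nz; ++z)
        for (long y = 0; y < ny; ++y)
            for (long x = 0; x < nx; ++x)
                out[(nx - 1 - x) + y * nx + z * ny * nx] =
                    img[x + y * nx + z * ny * nx];
    return out;
}
```

The y and z flips differ only in which term of the linear index is replaced by its mirrored value.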
// ********************* output data type *************************
else if(strcmp(argv[i], "-v") == 0)
{
verbose=1;
}
else if(strcmp(argv[i], "-odt") == 0)
{
string parser=argv[++i];
if(parser.find("uchar")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_UINT8;
}
else if(parser.find("ushort")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_UINT16;
}
else if(parser.find("uint")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_UINT32;
}
else if(parser.find("char")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_INT8;
}
else if(parser.find("short")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_INT16;
}
else if(parser.find("int")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_INT32;
}
else if(parser.find("float")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_FLOAT32;
}
else if(parser.find("double")!=string::npos)
{
datatypeoutput=NIFTI_TYPE_FLOAT64;
}
else
{
cout << "ERROR: Datatype "<< parser << " is unknown"<<endl;
i=argc;
}
}
#ifdef _GIT_HASH
else if( strcmp(argv[i], "--version")==0)
{
printf("%s\n",_GIT_HASH);
return 0;
}
#endif
else
{
cout << "Option "<< string(argv[i]) << " unknown"<<endl;
i=argc;
return 0;
}
}
string parser=argv[argc-1];
if(parser.find(string(".nii"))!=string::npos || parser.find(string(".img"))!=string::npos || parser.find(string(".hdr"))!=string::npos)
{
// saving output
char * filename_out=argv[argc-1];
nifti_image * OutputImage = nifti_copy_nim_info(InputImage);
OutputImage->datatype=datatypeoutput;
nifti_set_filenames(OutputImage,filename_out,0,0);
OutputImage->dim[1]=OutputImage->nx=CurrSize->xsize;
OutputImage->dim[2]=OutputImage->ny=CurrSize->ysize;
OutputImage->dim[3]=OutputImage->nz=CurrSize->zsize;
OutputImage->dim[4]=OutputImage->nt=CurrSize->tsize;
OutputImage->dim[5]=OutputImage->nu=CurrSize->usize;
OutputImage->dim[6]=OutputImage->nv=1;
OutputImage->dim[7]=OutputImage->nw=1;
OutputImage->dim[0]=2;
OutputImage->dim[0]=(OutputImage->dim[3]>1?3:OutputImage->dim[0]);
OutputImage->dim[0]=(OutputImage->dim[4]>1?4:OutputImage->dim[0]);
OutputImage->dim[0]=(OutputImage->dim[5]>1?5:OutputImage->dim[0]);
OutputImage->dim[0]=(OutputImage->dim[6]>1?6:OutputImage->dim[0]);
OutputImage->dim[0]=(OutputImage->dim[7]>1?7:OutputImage->dim[0]);
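The cascade of ternaries above sets the NIfTI `dim[0]` field to the highest axis whose extent exceeds one, with a floor of 2. The same rule as a small helper (illustrative, not the tool's code):

```cpp
#include <cassert>

// NIfTI convention: dim[0] is the number of meaningful dimensions and
// dim[1..7] hold the extents of each axis.
inline int ndim_from_dims(const int dim[8])
{
    int ndim = 2;                 // floor used by the code above
    for (int d = 3; d <= 7; ++d)
        if (dim[d] > 1) ndim = d;
    return ndim;
}
```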
//mat44 *affineTransformation = (mat44 *)calloc(1,sizeof(mat44));
bool scalingdiff=false;
for(long i=0; i<4; i++)
{
OutputImage->sto_xyz.m[i][i]/=Scalling[i];
OutputImage->pixdim[i+1]/=Scalling[i];
if(Scalling[i]!=1)
{
scalingdiff=true;
}
}
if(scalingdiff)
{
cout << "A scaling factor is present. Removing Sform"<<endl;
OutputImage->sform_code=0;
}
// OutputImage->qoffset_x=translation[0];
// OutputImage->qoffset_y=translation[1];
// OutputImage->qoffset_z=translation[2];
if(verbose)
{
cout << "Output Dim = [ ";
for(long i=0; i<8; i++)
{
cout<<(float)OutputImage->dim[i];
if(i<7)
{
cout<<" , ";
}
}
cout<<" ] "<<endl;
flush(cout);
}
nifti_update_dims_from_array(OutputImage);
nifti_datatype_sizes(OutputImage->datatype,&OutputImage->nbyper,&OutputImage->swapsize);
if(datatypeoutput==NIFTI_TYPE_UINT8)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(unsigned char));
unsigned char * OutputImagePtr = static_cast<unsigned char *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(unsigned char)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_UINT16)
{
OutputImage->data = (void *) calloc(OutputImage->nvox, sizeof(unsigned short));
unsigned short * OutputImagePtr = static_cast<unsigned short *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(unsigned short)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_UINT32)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(unsigned int));
unsigned int * OutputImagePtr = static_cast<unsigned int *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(unsigned int)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_INT8)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(char));
char * OutputImagePtr = static_cast<char *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(char)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_INT16)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(short));
short * OutputImagePtr = static_cast<short *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(short)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_INT32)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(int));
int * OutputImagePtr = static_cast<int *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(int)round(bufferImages[current_buffer][i]);
}
}
else if(datatypeoutput==NIFTI_TYPE_FLOAT32)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(float));
float * OutputImagePtr = static_cast<float *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(float)bufferImages[current_buffer][i];
}
}
else if(datatypeoutput==NIFTI_TYPE_FLOAT64)
{
OutputImage->data = (void *) calloc(CurrSize->numel*CurrSize->tsize*CurrSize->usize, sizeof(double));
double * OutputImagePtr = static_cast<double *>(OutputImage->data);
for(long i=0; i<(long)(CurrSize->numel*CurrSize->tsize*CurrSize->usize); i++)
{
OutputImagePtr[i]=(double)bufferImages[current_buffer][i];
}
}
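Each datatype branch above repeats one copy loop: allocate the output buffer, then round-and-cast every float sample. The shared pattern can be sketched once with a template; note that a real float or double output should copy without rounding, as the FLOAT32 branch does:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Round-and-cast a float working buffer into an integer output type T.
template <typename T>
std::vector<T> cast_buffer(const std::vector<float>& in)
{
    std::vector<T> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = (T)std::round(in[i]);   // round half away from zero
    return out;
}
```

One templated helper per integer target (uchar, short, int, ...) removes the duplicated loops while keeping the per-type allocation explicit.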
nifti_image_write(OutputImage);
nifti_image_free(OutputImage);
}
delete [] bufferImages[0];
delete [] bufferImages[1];
delete [] bufferImages;
delete [] CurrSize;
}
catch(std::exception & e)
{
std::cerr << "Standard exception: " << e.what() << std::endl;
}
catch(...)
{
std::cerr << "Unhandled Exception: Something went wrong! Please report the error to mjorgecardoso"<<(char) 64<<"gmail.com" << std::endl;
}
return 0;
}
package cc;
import cc.registry.CCRegistry;
import net.fabricmc.api.ModInitializer;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import java.io.File;
public class Main implements ModInitializer
{
public static final Logger LOGGER = LogManager.getLogger ();
public static final File WORKING_DIRECTORY = new File ( System.getProperty ( "user.dir" ) );
@Override public void onInitialize()
{
long start = System.nanoTime ();
CCRegistry.registerAll ();
long stop = System.nanoTime ();
long difference = stop - start;
System.out.println ( "Time taken: " + difference / 1000000f + "ms" );
}
}
def peekleft(self):
    # Return the data stored at the tail without removing it, or None if empty.
    if self._container.tail:
        return self._container.tail.data
    return None
package com.dchip.door.smartdoorsdk.deviceControl.nativeLev;
import android.util.Log;
/**
* Created by jelly on 2017/11/8.
*/
public class Pn512Lock {
// Loads the native 'devicecontrol' library on application startup.
static {
System.loadLibrary("devicecontrol");
}// Loads the .so file; here it is the JNI library for the PN512.
public static final String TAG = "Pn512Lock";
public static final int CMD_READ = 3;
public static final int IO_LOCK_CTRL = 0;
public static final int IO_DOOR1_ST = 1;
public static final int IO_DOOR2_ST = 2;
public static final int IO_LOCK_JUDGE = 3;
public static final int IO_FB1_ELSE = 4;
public static final int IO_FB2_ELSE = 5;
private boolean isOpen =false;
public boolean openDevice(){
isOpen = open();
return isOpen;
}
public int control(int cmd, int arg){
if (isOpen)
return ioctl(cmd,arg);
else {
Log.e(TAG,"control fail,device not open!");
return -1;
}
}
public void closeDevice(){
isOpen = false;
close();
}
/*
* Class: com_dchip_hd_led_gpio_LedCtrl
* Method: ioctl
* cmd is 0x00, 0x01 or 0x02: 0x00 sets the IO level high (1), 0x01 sets it low (0), and 0x02 reads the current IO level.
* arg selects the IO pin: 0 = LOCK_CTRL, 1 = DOOR1_ST, 2 = DOOR2_ST, 3 = LOCK_JUDGE, 4 = FB1_ELSE.
* Only the LOCK_CTRL and FB1_ELSE pins can be driven high or low; the level of every pin can be read.
* Signature: (II)I
*/
private native int ioctl(int cmd,int arg);
private native boolean open();
private native void close();
}
1. Field of the Invention
The present invention relates to the prediction of the shape of a resist pattern formed by exposing and developing a resist.
2. Description of the Related Art
In lithography, a projection exposure apparatus transfers a reticle pattern onto a resist (photosensitive agent) applied on a substrate (e.g., a semiconductor wafer or glass plate), and a developing device develops the resist, thereby obtaining a resist pattern. The resist pattern shape can be measured using an SEM (scanning electron microscope).
Along with dramatic increases in the degree of integration of semiconductor devices, minimum line widths (design rules) continue to shrink. To keep pace, the resolution is enhanced by shortening the wavelength of exposure light and increasing the numerical aperture of the projection optical system.
Unfortunately, such approaches to increasing the resolution cannot meet the required minimum line widths. To combat this shortcoming, a pattern correction technique that exploits the optical proximity effect is employed.
The pattern correction must be executed for the entire reticle pattern. For this reason, a reticle pattern correction operation takes a very long time.
Patent reference 1 and non-patent references 1 and 2 disclose methods of predicting the resist pattern shape by computing an aerial image, that is, the light intensity distribution formed on a resist by a reticle pattern, and by taking the optical contour at an arbitrary light intensity level as the resist pattern shape. The computation accuracies of these prediction methods are relatively low due to approximation errors introduced when the computation model equations are created.
In the method of patent reference 2, the light intensity distribution of an aerial image formed on a resist is computed using a reticle pattern. Based on the resultant light intensity distribution, an exponential decay function of two parameters, a process factor and an edge light intensity shift, is obtained. The convolution integral of the light intensity distribution and the exponential decay function is then calculated to compute the resist pattern shape. Calculating this convolution integral, however, takes a long time.
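A rough numerical sketch can make the convolution approach described above (patent reference 2) concrete. Everything below is illustrative: the 1-D Gaussian stand-in for an aerial image, the process factor `sigma`, and the development threshold are placeholder values, not parameters from the references.

```python
import numpy as np

def predict_resist_edges(intensity, dx, sigma, threshold):
    """Convolve an aerial-image intensity profile with an exponential decay
    kernel and return a boolean mask of where the blurred image exceeds the
    development threshold (the predicted resist pattern)."""
    x = np.arange(-5 * sigma, 5 * sigma + dx, dx)
    kernel = np.exp(-np.abs(x) / sigma)   # exponential decay function
    kernel /= kernel.sum()                # normalize so total weight is 1
    blurred = np.convolve(intensity, kernel, mode="same")
    return blurred > threshold

# Hypothetical 1-D aerial image: a single bright line on a dark background.
xs = np.linspace(-1.0, 1.0, 401)
image = np.exp(-(xs / 0.2) ** 2)          # Gaussian stand-in, peak intensity 1
mask = predict_resist_edges(image, dx=xs[1] - xs[0], sigma=0.05, threshold=0.5)
print("predicted line width (a.u.):", mask.sum() * (xs[1] - xs[0]))
```

The expensive step in practice is the 2-D convolution over the full reticle pattern, which is why the reference's method is slow at chip scale.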
[Patent Reference 1] U.S. Pat. No. 6,643,616
[Patent Reference 2] Japanese Patent Laid-Open No. 2000-58417
[Non-patent Reference 1] Mathematical and CAD Framework for Proximity Correction (1996 SPIE Vol. 2726 P 208-222, Optical Microlithography)
[Non-patent Reference 2] Experimental Results on Optical Proximity Correction with Variable Threshold Resist Model (1997 SPIE Vol. 3051 P 458-468, Optical Microlithography) |
[prMac.com] Charlottesville, Virginia - Colleges and universities have seen explosive growth in remedial writing classes. High school graduates are not meeting basic writing standards for essay construction and argument development. High School Writing 3.0, by Niles Technology, is one way to ensure students arrive at college prepared to write well. Whether a pro-active parent or a teacher with high standards, High School Writing, for the iPad and iPhone, is a cost-effective solution to learning the writing skills colleges expect.
"I usually start most presentations about Niles Technology apps with two basic facts: 1) the lowest scoring part of virtually all achievement tests is the writing section, and, 2) businesses are losing upwards of $4 billion per year because of poor employee writing skills," explains Michael A. Niles, president of Niles Technology Group. "The end results are that students arrive at college unprepared to write critically structured essays; they ultimately perform lower than they should; and their future earning potential is negatively affected. Consumers lose by having to pay higher prices to cover these avoidable losses. Overall, because of weak writing skills, employees and businesses are less efficient. Everyone loses when production is lower for the same dollar spent. High School Writing 3.0 is one way to raise student writing standards, thereby helping everyone."
Writing, in general, is usually mistaken to be only sentences and grammar. While sentences and grammar are important parts of writing, they are really the second and third parts - they qualify meaning. However, the fundamental purpose of writing, which is thoughtful, clear communication, comes from critical thinking and argument development done before a single sentence is written. Too often, students attempt to write before they really know and understand what they think; Niles Technology apps teach students to think and to develop ideas/arguments first and to write second.
The incorporation of mobile learning into today's classroom is moving into high gear, and Niles Technology Group is dedicated to providing top-of-the-line educational apps that fit into students' mobile lifestyles. Contact Michael Niles at Niles Technology Group for more information about essay writing apps.
High School Writing 3.0 is currently $17.99 USD (or equivalent amount in other currencies) and available worldwide exclusively through the App Store in the Education category.
Niles Technology Group was founded in 2007 to develop software for emerging technologies and is developing a series of mobile computing applications dedicated to teaching superior writing and logical thinking skills. With its experience in the technology and content required to develop full-featured products for students, Niles Technology Group is a leading iPhone app publisher, and the Achievers Writing Center and Essay Writing Wizard apps have sold successfully worldwide. The key to Niles Technology Group's success is specificity. Each app is specific to the writing task at hand. Michael A. Niles, the founder, was formerly, for eight years, the President and CEO of The Right Education, Inc. (TRE), a web-based educational technology company that developed The Learning Accelerator. He looks forward to continuing to bring top-line education products to the mobile computing marketplace. Copyright (C) 2007-2011 Niles Technology Group. All Rights Reserved. Apple, the Apple logo, iPhone, iPod and iPad are registered trademarks of Apple Inc. in the U.S. and/or other countries. |
/**
* @author Christian Sadilek <csadilek@redhat.com>
*/
public class MethodBuilderAbstractOption<T> implements Finishable<T> {
protected ThrowsDeclaration throwsDeclaration = ThrowsDeclaration.none();
protected MethodBuildCallback<T> callback;
public MethodBuilderAbstractOption(final MethodBuildCallback<T> callback) {
this.callback = callback;
}
public T throws_(final Class<? extends Throwable>... exceptionTypes) {
throwsDeclaration = ThrowsDeclaration.of(exceptionTypes);
return callback.callback(null, null, new DefModifiers(Modifier.Abstract), throwsDeclaration, null, null);
}
public T throws_(final MetaClass... exceptions) {
throwsDeclaration = ThrowsDeclaration.of(exceptions);
return callback.callback(null, null, new DefModifiers(Modifier.Abstract), throwsDeclaration, null, null);
}
@Override
public T finish() {
if (callback != null) {
return callback.callback(null, null, new DefModifiers(Modifier.Abstract), throwsDeclaration, null, null);
}
return null;
}
} |
from typing import Optional
from fastapi import FastAPI
from sqlmodel import (
SQLModel,
Field,
create_engine,
select,
Session
)
engine = create_engine('sqlite:///database.db')
class Pessoa(SQLModel, table=True):
id : Optional[int] = Field(default=None, primary_key=True)
nome: str
    idade: str  # note: age is stored as a string here; `int` is probably intended
SQLModel.metadata.create_all(engine)
app = FastAPI()
@app.get('/')
def home():
return {'message' : 'Deu bom!!!'}
@app.get('/pessoa')
def pessoa():
    # Return every Pessoa row.
    query = select(Pessoa)
    with Session(engine) as session:
        return session.exec(query).all()

@app.get('/pessoas-nome')
def pessoas_nome():
    # Return only the `nome` column of each Pessoa.
    # (Renamed from `pessoa` to avoid reusing the handler name above.)
    query = select(Pessoa.nome)
    with Session(engine) as session:
        return session.exec(query).all()
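For readers unfamiliar with SQLModel's `select`, the two routes above reduce to plain SQL. Here is a standard-library `sqlite3` sketch of the equivalent queries; the table mirrors the `Pessoa` model and the rows are made up:

```python
import sqlite3

# In-memory table shaped like the Pessoa model (id, nome, idade).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pessoa (id INTEGER PRIMARY KEY, nome TEXT, idade TEXT)")
conn.executemany("INSERT INTO pessoa (nome, idade) VALUES (?, ?)",
                 [("Ana", "30"), ("Bruno", "25")])

# /pessoa -> select(Pessoa): every column of every row
todos = conn.execute("SELECT id, nome, idade FROM pessoa").fetchall()

# /pessoas-nome -> select(Pessoa.nome): just the `nome` column
nomes = [row[0] for row in conn.execute("SELECT nome FROM pessoa")]
print(todos, nomes)
```

Selecting a single model attribute is why the second route returns a flat list of strings rather than full objects.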
|
How Important Is Diabetes as a Risk Factor for Cardiovascular and Other Diseases in Older Adults? Patel and Kengne discuss a new study in PLoS Medicine which found a 2-fold increased risk of cardiovascular death associated with diabetes in people over 65 years old. Perspectives October 2006 | Volume 3 | Issue 10 | e424. It is well established that diabetes mellitus is associated with adverse health outcomes. Data from general population cohorts indicate a 2- to 3-fold increase in cardiovascular risks and about a 50 percent increase in the risks of non-cardiovascular mortality associated with this condition. These associations appear largely consistent across populations in different regions of the world. There is some evidence that diabetes may be a more important determinant of cardiovascular risk for women than men. However, the relative effect of diabetes on vascular and other diseases among older, compared with younger, individuals is less certain. Heterogeneity by age in the association between diabetes and cardiovascular disease has been reported, with a consistently weaker association observed among older individuals. Given this possible age-dependency in the epidemiological associations, and the frequent observation that cardiovascular risk factors are often managed less aggressively in older people than in younger people, a better understanding of the relationship between diabetes and disease-specific causes of death among older people is important. A New Cohort Study In this regard, the data provided by Kronmal and colleagues in PLoS Medicine make an important contribution to our knowledge of morbidity and mortality associated with diabetes mellitus in older adults. The researchers evaluated a randomly selected cohort of 5,372 people aged 65 years and over, of whom 8.8 percent were known to have a diagnosis of diabetes and were treated with oral hypoglycaemic agents and/or insulin at baseline. 
After an average of 11.1 years follow-up, over 40 percent of the cohort had died, with approximately 50 to 60 percent of these deaths attributed to cardiovascular causes. Compared to those without diabetes, and after adjustment for a wide range of covariates, individuals with known, treated diabetes had an estimated excess risk of death ranging between approximately 30 and 100 percent, depending on whether or not they were treated with insulin. For cardiovascular mortality, there was a 2-fold increased risk associated with diabetes. However, it is likely that the reported hazard ratios in this study underestimate the true strength of associations between diabetes and cause-specific mortality, for two important reasons. First, participants without a diagnosis of diabetes, but with a fasting blood glucose level consistent with this diagnosis, were considered not to have diabetes. Second, for cardiovascular mortality, inclusion of subclinical atherosclerosis as a covariate probably represents overadjustment. Nonetheless, these data provide reliable evidence that diabetes is an important adverse risk factor among older adults, with estimates of the strength of the associations comparable to published data from younger cohorts. Furthermore, in this population of (albeit limited) age range, no age interactions in any of these associations were observed. Kronmal and colleagues also report on analyses suggesting that the relative risks of non-cardiovascular disease mortality associated with diabetes, particularly with respect to death due to infectious or renal causes, were significantly greater among individuals treated with insulin compared with those receiving oral hypoglycaemic agents alone. They further report that women with diabetes on insulin had a particularly high risk of death compared with women without diabetes. 
These findings are interesting but, in relation to any implication that insulin use may lead to poorer outcomes, can only be considered hypothesis-generating. One of the strengths of this new study is the availability of data relating to a large number of potentially confounding variables at baseline, and adjustment for these factors has been appropriately made. However, as is always the case with observational data, one cannot account for unmeasured (e.g., duration of diabetes) or unknown risk factors, and residual confounding remains a highly plausible explanation for these findings. Clinical Implications So what are the implications of the results of this study for clinical practice? Primarily, these data confirm that older adults with diabetes are at very high absolute risk of death from cardiovascular causes (four to five percent per year). Thus, strategies aimed at reducing these risks should be aggressively pursued among such individuals, wherever possible. Fortunately, a range of preventive treatments of proven efficacy are at our disposal, including blood pressure lowering and the use of statins. Intensive glucose lowering in type 2 diabetes has been shown to reduce microvascular (retinal and renal) events. However, the balance of risks and benefits of lowering haemoglobin A1c levels below seven percent (as recommended by many current guidelines), particularly with respect to macrovascular events such as myocardial infarction and stroke, remains uncertain. At least two large-scale randomised clinical trials evaluating this question are ongoing, one of which has no upper age restriction while the other includes participants aged up to 80 years at randomisation. Importantly, to reach such targets for intensive glucose lowering, insulin therapy will frequently be required. 
Should the trials demonstrate that the benefits of intensive glucose lowering outweigh the risks, these data, rather than observational data suggesting possible harm associated with the use of insulin, should take precedence in guiding clinical practice. |
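The risk figures quoted in this perspective fit together with simple arithmetic. The sketch below uses the hazard ratios from the text (1.3 to 2.0 overall excess mortality; roughly 4-5 percent per year absolute cardiovascular mortality); the exact baseline value and follow-up length chosen here are only illustrative:

```python
# Arithmetic linking the quoted figures; baseline values are illustrative.

def excess_risk_pct(hazard_ratio):
    """Excess relative risk implied by a hazard ratio (HR 1.3 -> ~30%)."""
    return (hazard_ratio - 1.0) * 100.0

def cumulative_mortality(annual_risk, years):
    """Cumulative probability of death given a constant annual risk."""
    return 1.0 - (1.0 - annual_risk) ** years

# HR 1.3 and 2.0 correspond to excess risks of about 30% and 100%.
print(excess_risk_pct(1.3), excess_risk_pct(2.0))
# A constant 4.5%/year cardiovascular mortality over ~11 years of follow-up:
print(cumulative_mortality(0.045, 11))
```

The compounding in `cumulative_mortality` is why a "modest" annual risk translates into a large fraction of the cohort dying over a decade of follow-up.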
<filename>src/main/java/hu/akarnokd/reactive/ResourceFlowableJust.java
/*
* Copyright 2015-2017 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in
* compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is
* distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See
* the License for the specific language governing permissions and limitations under the License.
*/
package hu.akarnokd.reactive;
import java.util.concurrent.atomic.AtomicInteger;
import org.reactivestreams.*;
import io.reactivex.functions.Consumer;
final class ResourceFlowableJust<T> extends ResourceFlowable<T> {
final T item;
final Consumer<? super T> release;
ResourceFlowableJust(T item, Consumer<? super T> release) {
this.item = item;
this.release = release;
}
@Override
protected void subscribeActual(Subscriber<? super T> subscriber) {
subscriber.onSubscribe(new ResourceScalarSubscription<>(subscriber, item, release));
}
@Override
public Consumer<? super T> release() {
return release;
}
static final class ResourceScalarSubscription<T>
extends AtomicInteger implements Subscription {
private static final long serialVersionUID = -4021292785194851945L;
final Subscriber<? super T> actual;
final T item;
final Consumer<? super T> release;
static final int START = 0;
static final int REQUESTED = 1;
static final int COMPLETE = 2;
static final int CANCELLED = 3;
ResourceScalarSubscription(Subscriber<? super T> actual, T item, Consumer<? super T> release) {
this.actual = actual;
this.item = item;
this.release = release;
}
@Override
public void request(long n) {
if (compareAndSet(START, REQUESTED)) {
actual.onNext(item);
if (compareAndSet(REQUESTED, COMPLETE)) {
actual.onComplete();
}
}
}
@Override
public void cancel() {
int s = getAndSet(CANCELLED);
if (s == START) {
releaseItem(item, release);
}
}
}
}
|
<filename>Sources/Engine/External/OpenALSoft/alc/backends/base.cpp
#include "config.h"
#include "base.h"
#include <atomic>
#include <thread>
#include "AL/al.h"
#include "alcmain.h"
#include "alnumeric.h"
#include "atomic.h"
ClockLatency GetClockLatency(ALCdevice *device)
{
BackendBase *backend{device->Backend.get()};
ClockLatency ret{backend->getClockLatency()};
ret.Latency += device->FixedLatency;
return ret;
}
/* BackendBase method implementations. */
BackendBase::BackendBase(ALCdevice *device) noexcept : mDevice{device}
{ }
BackendBase::~BackendBase() = default;
ALCboolean BackendBase::reset()
{ return ALC_FALSE; }
ALCenum BackendBase::captureSamples(void*, ALCuint)
{ return ALC_INVALID_DEVICE; }
ALCuint BackendBase::availableSamples()
{ return 0; }
ClockLatency BackendBase::getClockLatency()
{
ClockLatency ret;
ALuint refcount;
do {
while(((refcount=ReadRef(mDevice->MixCount))&1) != 0)
std::this_thread::yield();
ret.ClockTime = GetDeviceClockTime(mDevice);
std::atomic_thread_fence(std::memory_order_acquire);
} while(refcount != ReadRef(mDevice->MixCount));
/* NOTE: The device will generally have about all but one periods filled at
* any given time during playback. Without a more accurate measurement from
* the output, this is an okay approximation.
*/
ret.Latency = std::chrono::seconds{maxi(mDevice->BufferSize-mDevice->UpdateSize, 0)};
ret.Latency /= mDevice->Frequency;
return ret;
}
|
import java.io.StringWriter;
import java.sql.ResultSet;
import java.util.List;

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

import org.json.JSONObject;
// Assumed: IndentingXMLStreamWriter comes from the stax-utils / JAXB RI
// dependency used elsewhere in this project.
import com.sun.xml.txw2.output.IndentingXMLStreamWriter;

/**
 * Representation of a geodetic datum.
 */
public class Datum {
/** The type of the datum.*/
public String datumType;
/** An identifier for the datum.*/
public String datumName;
/**The description of the ellipsoid/geoid which this datum uses.*/
public Ellipsoid ellipsoid;
/**Optional description of a prime meridian.*/
public PrimeMeridian primeMeridian;
    /** Optional name of the celestial body (e.g. Earth) this datum applies to. */
    public String interstellarbody;
    /** Optional list of usage scopes in which this datum is valid. */
    public List<String> usagescope;
    public String toProj() {
        // TODO: export to a PROJ string is not implemented yet; returns "".
        StringBuilder builder = new StringBuilder();
        return builder.toString();
    }
public JSONObject toProjJSON() {
JSONObject result=new JSONObject();
result.put("name",datumName);
result.put("type",datumType);
result.put("ellipsoid", ellipsoid.toProjJSON());
return result;
}
public String toGML() {
XMLOutputFactory factory = XMLOutputFactory.newInstance();
StringWriter strwriter=new StringWriter();
XMLStreamWriter writer;
try {
writer = new IndentingXMLStreamWriter(factory.createXMLStreamWriter(strwriter));
if(datumType.contains("ReferenceFrame")) {
writer.writeStartElement("gml:"+datumType.replace("ReferenceFrame", "Datum"));
}else {
writer.writeStartElement("gml:"+datumType);
}
writer.writeStartElement("gml:datumName");
writer.writeCharacters(this.datumName);
writer.writeEndElement();
if(primeMeridian!=null) {
writer.writeStartElement("gml:usesPrimeMeridian");
writer.writeCharacters(System.lineSeparator());
writer.flush();
strwriter.write(primeMeridian.toGML()+System.lineSeparator());
writer.writeEndElement();
}
if(ellipsoid!=null) {
writer.writeStartElement("gml:usesEllipsoid");
writer.writeCharacters(System.lineSeparator());
writer.flush();
strwriter.write(this.ellipsoid.toGML()+System.lineSeparator());
writer.writeEndElement();
}
writer.writeEndElement();
writer.flush();
} catch (XMLStreamException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return strwriter.toString();
}
@Override
public String toString() {
return "Datum [datumType=" + datumType + ", datumName=" + datumName + ", ellipsoid=" + ellipsoid + "]";
}
public String toWKT() {
StringBuilder builder=new StringBuilder();
if(datumName==null)
return builder.toString();
builder.append("DATUM["+"\""+datumName.replace("Datum:","").trim()+"\","+System.lineSeparator());
builder.append(ellipsoid.toWKT()+"]"+System.lineSeparator());
if(primeMeridian!=null) {
builder.append(","+primeMeridian.toWKT()+System.lineSeparator());
}
//builder.append("]");
return builder.toString();
}
public ResultSet datumQuery() {
return null;
}
} |
Poor in vivo efficacy of caspofungin, micafungin and amphotericin B against wild-type Candida krusei clinical isolates does not correlate with in vitro susceptibility results We determined micafungin, caspofungin and amphotericin B (AMB) minimum inhibitory concentrations (MICs) and killing rates in RPMI-1640 and in RPMI-1640 with 50% serum against three Candida krusei bloodstream isolates. MIC ranges in RPMI-1640 were 0.125–0.25, 0.25 and 0.125–0.5 mg/L; in RPMI-1640 with 50% serum, MICs were 64–128-, 8- and 4–16-fold higher, respectively. In RPMI-1640, micafungin and caspofungin at 1, 4, 16 and 32 mg/L as well as AMB at 2 mg/L were fungicidal against all isolates in ≤3.96, ≤4.42 and 14.96 h, respectively. In RPMI-1640 with 50% serum, caspofungin was fungicidal for all isolates only at 32 mg/L; micafungin and AMB were fungistatic. In neutropenic mice, 5 mg/kg caspofungin and 1 mg/kg AMB were ineffective against two of the three isolates. Thus, in vivo efficacy of echinocandins and AMB is weak or absent against C. krusei. Prescribers treating C. krusei infections with echinocandins should be alert for clinical resistance and therapeutic failure. |
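The serum effect reported in the abstract can be sanity-checked by multiplying the RPMI-1640 MIC ranges by the reported fold-shifts. The sketch below combines the range endpoints pairwise, which gives only rough serum MIC estimates, not measured values:

```python
# Back-of-envelope estimate of serum MICs from the abstract's figures.
# RPMI-1640 MIC ranges (mg/L) and reported fold-increases in 50% serum.
rpmi_mic = {"micafungin": (0.125, 0.25), "caspofungin": (0.25, 0.25), "AMB": (0.125, 0.5)}
fold = {"micafungin": (64, 128), "caspofungin": (8, 8), "AMB": (4, 16)}

serum_mic = {}
for drug in rpmi_mic:
    lo = rpmi_mic[drug][0] * fold[drug][0]
    hi = rpmi_mic[drug][1] * fold[drug][1]
    serum_mic[drug] = (lo, hi)
    print(f"{drug}: estimated serum MIC roughly {lo}-{hi} mg/L")
```

These estimates suggest why micafungin appears fungistatic in 50% serum: its estimated serum MIC overlaps the highest concentrations tested.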
The number housed in the Bossier Medium Security jail grew from 54 on Monday to more than 100 on Wednesday.
Bossier Parish is holding more than 100 men, primarily from Central American countries, who are accused of entering the United States unlawfully, and the number is to grow.
That number is up from the 54 immigrants the parish said it had accepted Monday. Another busload of about 50 men was expected to arrive in the parish Wednesday afternoon.
Bossier Parish Sheriff Julian Whittington said Wednesday that the parish could house 240 people being held on the authority of U.S. Immigration and Customs Enforcement (ICE). The men are being housed at the Bossier Medium Security jail in Plain Dealing.
The parish receives $62 per day for each of the federal inmates.
"These are what we call Level 1 individuals," Whittington said. "I understand some of them may have misdemeanor offenses on their record. I don't think so far that we've received any of those."
Whittington said ICE asked about 10 days ago about the parish's willingness to house the detainees.
The 102 ICE detainees being held in Bossier Parish as of Wednesday morning hailed from Nicaragua, China, Brazil, Guatemala, Honduras, Ecuador, Colombia, Sri Lanka, Mexico and Peru. Most of the men are from Guatemala and Honduras, the sheriff's department said.
The ICE detainees being held in Bossier had not been to a processing center, Whittington said. They were recently picked up from south Texas and transported to Bossier Parish.
There have been minor hiccups since taking the men in this week, Whittington said.
The language barrier remains an issue. The department has employed school teachers who are out for the summer to help communicate with the men. Medical screenings also have been a primary concern, Whittington said.
Inside the Bossier medium-security prison, the men live away from parish inmates and together inside two large rooms filled with bunk beds, tables and a row of showers. A third, identical room remains empty.
"So far, these individuals up there have been super cooperative (and) kept their dorms immaculate," Whittington said.
The men will be held until they can appear in federal court for deportation hearings. The average stay, Whittington said, is around 45 days. Some of the men will be transported to Alexandria for court appearances. Others will appear in court via video, Whittington said.
The parish will be reimbursed for costs associated with transportation, Whittington said.
"It's part of national security," Whittington said of housing the men.
Another factor, Whittington said, was the possible decrease in the payout parishes receive from the state for taking in state prisoners. The state rate now is $24.39 per inmate per day but may drop to $19 as a result of the state budget crunch, Whittington said.
The department moved 180 state prisoners to facilities in other parishes to make room for the detainees.
"If the state follows through with a $5 per diem cut on state inmates, that would affect us to the tune of $1.8 million (a year)," he said. "The state needs to get their act together and take care of their business."
The Bossier Parish Sheriff's Office has been approved by the federal government to house federal inmates for more than 20 years. ICE officials first contacted Bossier Parish a year ago about potentially housing immigration inmates, Whittington said.
Louisiana has two permanent ICE processing centers: one in Jena and the other in Pine Prairie, according to the agency's website.
Only one entity in Louisiana — the East Baton Rouge Parish Sheriff’s Office — has a 287(g) agreement with ICE. That agreement permits local or state law enforcement agencies to collaborate with the federal government to enforce federal immigration laws, according to ICE's website.
"Really, it's a sad situation if you really want to know the truth," Whittington said. "A lot of these people risking their lives to get here and unfortunately we have people that live here, born here — American citizens — that don't appreciate what they have. It kind of gives you a new outlook on life." |
__author__ = '<NAME>'
__all__ = ['ArtilleryGrid']
import typing
import numpy as np
from balltic.core.grid import EulerianGrid
from balltic.core.guns import ArtilleryGun
from balltic.core.gunpowder import GunPowder
class ArtilleryGrid(EulerianGrid):
    """
    Solves the main problem of interior ballistics in a gas-dynamic
    formulation on a moving grid, using an Eulerian method.

    Parameters
    ----------
    gun: ArtilleryGun
        Named tuple of initial conditions and artillery-gun parameters
    gunpowder: str
        Name of the powder
    nodes: int
        Number of grid nodes (interfaces)
    omega_q: int or float, optional
        Ratio of charge mass to projectile mass
    denload: int or float, optional
        Loading density
    barrel: int or float, optional
        Length of the driving (rifled) part of the barrel
    kurant: int or float, optional
        Courant number
    boostp: int or float, optional
        Shot-start (boost) pressure

    Returns
    -------
    solution:
    """
def __str__(self):
return 'ArtilleryGrid Class'
def __repr__(self):
return (f'{self.__class__.__name__}(gun, gunpowder)')
def __init__(self, gun: ArtilleryGun, gunpowder: str, nodes: int = 100,
omega_q: typing.Union[int, float] = None,
denload: typing.Union[int, float] = None,
barrel: typing.Union[int, float] = None,
kurant: typing.Union[int, float] = None,
boostp: typing.Union[int, float] = None) -> None:
if isinstance(gun, ArtilleryGun):
self.gun = gun
else:
            raise ValueError('The gun parameter must be an ArtilleryGun')
self.gunpowder = GunPowder(gunpowder)
self.nodes = nodes
if boostp is not None:
self.gun = self.gun._replace(boostp=boostp)
if barrel is not None:
self.gun = self.gun._replace(barrel=barrel)
if kurant is not None:
self.gun = self.gun._replace(kurant=kurant)
if omega_q is not None:
self.gun = self.gun._replace(omega_q=omega_q)
if denload is not None:
self.gun = self.gun._replace(denload=denload)
self.omega = self.gun.omega_q * self.gun.shell
self.gun = self.gun._replace(cs_area=np.pi * self.gun.caliber ** 2 / 4)
self.gun = self.gun._replace(
chamber=self.omega / self.gun.denload / self.gun.cs_area)
self.ro_cell = np.full(self.nodes, self.gun.denload)
self.v_cell = np.full(self.nodes, 0.0)
self.zet_cell = np.full(self.nodes, 0.0)
self.press_cell = np.full(self.nodes, self.gun.press_vsp)
self.psi_cell = self._psi()
self.energy_cell = np.full(
self.nodes,
self.press_cell / (self.gunpowder.k - 1)
* (1 / self.ro_cell - (
(1 - self.psi_cell) / self.gunpowder.ro + self.gunpowder.alpha_k * self.psi_cell))
+ (1 - self.psi_cell) * self.gunpowder.f / (self.gunpowder.k - 1)
)
self.c_cell = np.full(
self.nodes,
1 / self.ro_cell
* np.sqrt(self.gunpowder.k * self.press_cell
/ (1 / self.ro_cell - (1 - self.psi_cell) / self.gunpowder.ro - self.gunpowder.alpha_k * self.psi_cell)
)
)
        # Mach numbers at the cell interfaces
self.mah_cell_minus = np.full(self.nodes - 1, 0.0)
self.mah_cell_plus = np.full(self.nodes - 1, 0.0)
        # source term for the flux q update (H vectors)
self.h_param = np.full(
self.nodes,
self.ro_cell * self.press_cell / self.gunpowder.I_k
)
        # for computing the flux f (Φ vectors)
self.F_param_p = np.array(
[
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0)
]
)
self.F_param_m = np.array(
[
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0)
]
)
        # parameters at the interfaces
self.c_interface = np.full(self.nodes - 1, 0.0)
self.mah_interface = np.full(self.nodes - 1, 0.0)
self.press_interface = np.full(self.nodes - 1, 0.0)
self.v_interface = np.full(self.nodes - 1, 0.0)
self.x_interface = np.full(self.nodes - 1, 0.0)
        # state and flux vectors
self.f_param = np.array(
[
np.full(self.nodes - 1, 0.0),
self.press_cell[1:],
np.full(self.nodes - 1, 0.0),
np.full(self.nodes - 1, 0.0)
]
)
self.q_param = np.array(
[
self.ro_cell,
self.ro_cell * self.v_cell,
self.ro_cell * (self.energy_cell + self.v_cell ** 2 / 2),
self.ro_cell * self.zet_cell
]
)
self.is_solved = False
        self._run()  # __init__ must not return a value; just launch the solver
    # TODO: the gas-generation (burn) function needs to be sorted out
# def _psi(self):
# """
    #     Gas-generation (burn) function
# """
# if_cond = self.zet_cell <= 1
# elif_cond = self.zet_cell <= self.powder.z_k
# else_cond = self.zet_cell > self.powder.z_k
# answer = np.zeros_like(self.zet_cell)
# answer[if_cond] = self.powder.k_1 \
# * self.zet_cell[if_cond] \
# * (1 + self.powder.lambda_1 * self.zet_cell[if_cond])
# answer[elif_cond] = self.powder.k_2 \
# * (self.zet_cell[elif_cond] - 1) \
# * (1 + self.powder.lambda_2 * (self.zet_cell[elif_cond] - 1))
# answer[else_cond] = np.ones_like(self.zet_cell[else_cond])
# return answer
def _psi(self):
        """
        Gas-generation (powder burn) function psi(z)
        """
buf = []
for zet in self.zet_cell:
if (zet <= 1):
buf.append(self.gunpowder.k_1 * zet * (1 + self.gunpowder.lambda_1 * zet))
elif (zet <= self.gunpowder.z_k):
buf.append(self.gunpowder.k_2 * (zet - 1) * (1 + self.gunpowder.lambda_2 * (zet - 1)))
else:
buf.append(1.0)
# else: buf.append(self.k_2 * (self.z_k - 1) * (1 + self.lambda_2 * (self.z_k - 1)))
return np.asarray(buf)
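The per-element loop in `_psi` can be vectorized without the pitfall in the commented-out draft above (overlapping boolean masks applied in sequence, so the `zet <= 1` cells are overwritten by the `zet <= z_k` branch): `np.select` applies the first matching condition only. A sketch with made-up powder constants (`k_1`, `lambda_1`, `k_2`, `lambda_2`, `z_k` below are illustrative, not real `GunPowder` data):

```python
import numpy as np

# Illustrative powder constants (placeholders, not real GunPowder values).
k_1, lambda_1 = 0.7, 0.2
k_2, lambda_2 = 0.5, -0.1
z_k = 1.5

def psi_vectorized(zet):
    """Piecewise burn law; np.select picks the FIRST condition that holds."""
    zet = np.asarray(zet, dtype=float)
    return np.select(
        [zet <= 1.0, zet <= z_k],
        [k_1 * zet * (1.0 + lambda_1 * zet),
         k_2 * (zet - 1.0) * (1.0 + lambda_2 * (zet - 1.0))],
        default=1.0,
    )

def psi_loop(zet):
    # Reference loop, mirroring the structure of the method above.
    out = []
    for z in zet:
        if z <= 1.0:
            out.append(k_1 * z * (1.0 + lambda_1 * z))
        elif z <= z_k:
            out.append(k_2 * (z - 1.0) * (1.0 + lambda_2 * (z - 1.0)))
        else:
            out.append(1.0)
    return np.asarray(out)

zs = np.linspace(0.0, 2.0, 9)
assert np.allclose(psi_vectorized(zs), psi_loop(zs))
```

On large grids the vectorized form avoids a Python-level loop per time step, which matters since `_psi` is called on every update.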
def _get_q(self):
self.h_param = self.ro_cell * self.press_cell / self.gunpowder.I_k
coef_stretch = self._previous_cell_lenght / self.x_interface[1]
self.q_param[0][1:-1] = coef_stretch * (
self.q_param[0][1:-1]
- self.tau / self._previous_cell_lenght
* (self.f_param[0][1:] - self.f_param[0][:-1])
)
self.q_param[1][1:-1] = coef_stretch * (
self.q_param[1][1:-1]
- self.tau / self._previous_cell_lenght
* (self.f_param[1][1:] - self.f_param[1][:-1])
)
self.q_param[2][1:-1] = coef_stretch * (
self.q_param[2][1:-1]
- self.tau / self._previous_cell_lenght
* (self.f_param[2][1:] - self.f_param[2][:-1])
)
self.q_param[3][1:-1] = coef_stretch * (
self.q_param[3][1:-1]
- self.tau / self._previous_cell_lenght
* (
self.f_param[3][1:] - self.f_param[3][:-1]
- self.h_param[1:-1] * self.x_interface[1]
)
)
self.ro_cell = self.q_param[0]
self.v_cell = self.q_param[1] / self.q_param[0]
self.energy_cell = self.q_param[2] / self.q_param[0] - self.v_cell ** 2 / 2
self.zet_cell = self.q_param[3] / self.q_param[0]
self.psi_cell = self._psi()
self.press_cell = \
(self.energy_cell - (1 - self.psi_cell) * self.gunpowder.f / (self.gunpowder.k - 1)) \
* (self.gunpowder.k - 1) \
/ (1 / self.ro_cell - ((1 - self.psi_cell) / self.gunpowder.ro + self.gunpowder.alpha_k * self.psi_cell))
self.c_cell = 1 / self.ro_cell \
* np.sqrt(self.gunpowder.k * self.press_cell / (1 / self.ro_cell - (1 - self.psi_cell) / self.gunpowder.ro - self.gunpowder.alpha_k * self.psi_cell))
self._border()
def _get_f(self):
self.f_param[0] = self.c_interface / 2 * (
self.mah_interface
* (self.F_param_p[0] + self.F_param_m[0])
- abs(self.mah_interface)
* (self.F_param_p[0] - self.F_param_m[0])
)
self.f_param[1] = self.c_interface / 2 * (
self.mah_interface
* (self.F_param_p[1] + self.F_param_m[1])
- abs(self.mah_interface)
* (self.F_param_p[1] - self.F_param_m[1])
) + self.press_interface
self.f_param[2] = self.c_interface / 2 * (
self.mah_interface
* (self.F_param_p[2] + self.F_param_m[2])
- abs(self.mah_interface)
* (self.F_param_p[2] - self.F_param_m[2])
) + self.press_interface * self.v_interface
self.f_param[3] = self.c_interface / 2 * (
self.mah_interface
* (self.F_param_p[3] + self.F_param_m[3])
- abs(self.mah_interface)
* (self.F_param_p[3] - self.F_param_m[3])
)
def _get_F_mines(self):
self.F_param_m[0] = self.ro_cell[:-1]
self.F_param_m[1] = self.ro_cell[:-1] * self.v_cell[:-1]
self.F_param_m[2] = self.ro_cell[:-1] * (
self.energy_cell[:-1]
+ self.v_cell[:-1] ** 2 / 2
+ self.press_cell[:-1] / self.ro_cell[:-1]
)
self.F_param_m[3] = self.ro_cell[:-1] * self.zet_cell[:-1]
def _get_F_plus(self):
self.F_param_p[0] = self.ro_cell[1:]
self.F_param_p[1] = self.ro_cell[1:] * self.v_cell[1:]
self.F_param_p[2] = self.ro_cell[1:] * (
self.energy_cell[1:]
+ self.v_cell[1:] ** 2 / 2
+ self.press_cell[1:] / self.ro_cell[1:]
)
self.F_param_p[3] = self.ro_cell[1:] * self.zet_cell[1:]
def _border(self):
"""
Метод "граничных условий"
Переопределяет значения вектора q в первой и последней ячейке,
а также скорость газа в первой ячейке, чтобы выполнялись граничные условия
"""
self.q_param[0][0] = self.q_param[0][1]
self.q_param[0][-1] = self.q_param[0][-2]
self.v_cell[0] = -self.v_cell[1]
self.q_param[1][0] = self.ro_cell[0] * self.v_cell[0]
self.q_param[1][-1] = self.q_param[0][-1] \
* (2 * self.v_interface[-2] - self.v_cell[-2])
self.q_param[2][0] = self.q_param[2][1]
self.q_param[2][-1] = self.q_param[2][-2]
self.q_param[3][0] = self.q_param[3][1]
self.q_param[3][-1] = self.q_param[3][-2]
def _end_vel_x(self):
"""
Возвращает скорость и координату последней границы
Если давление не достигает давления форсирования,
то [0] = 0, [1] = const
"""
if self.press_cell[-2] < self.gun.boostp:
return 0, self.x_interface[-1]
else:
acceleration = self.press_cell[-2] * self.gun.cs_area / self.gun.shell
velocity = self.v_interface[-1] + acceleration * self.tau
x = self.x_interface[-1] + self.v_interface[-1] * self.tau \
+ acceleration * self.tau ** 2 / 2
return velocity, x
|
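The update loop in the solver above follows the standard conservative finite-volume pattern q_i ← q_i − (τ/h)(f_{i+1/2} − f_{i−1/2}), with one ghost cell at each end for the boundary conditions. A minimal self-contained sketch of that same pattern on a toy linear-advection problem (all names here are hypothetical and not taken from the solver):

```python
import numpy as np

def fv_step(q, tau, h, a=1.0):
    """One conservative finite-volume step for q_t + a q_x = 0 (a > 0).

    q holds cell averages, including one ghost cell at each end.
    With the upwind flux f_{i+1/2} = a * q_i, each interior cell is
    updated as q_i <- q_i - tau/h * (f_{i+1/2} - f_{i-1/2}).
    """
    f = a * q[:-1]              # upwind interface fluxes, one per face
    q = q.copy()
    q[1:-1] -= tau / h * (f[1:] - f[:-1])
    # transmissive boundary conditions via the ghost cells
    q[0] = q[1]
    q[-1] = q[-2]
    return q
```

With a CFL number τa/h = 1, this scheme translates a profile exactly one cell per step, which is a convenient sanity check.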
/* MIT License
Copyright (c) 2016 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#ifndef OGREDEMOQT_H
#define OGREDEMOQT_H
#include <QtWidgets/QApplication>
#include <QtGui/QKeyEvent>
#include <QtGui/QWindow>
#include <Ogre.h>
#include "OgreRoot.h"
#include "OgreOverlaySystem.h"
#include "OgreTextAreaOverlayElement.h"
// The Ogre window inherits from QWindow.
class OgreDemoQt : public QWindow, public Ogre::FrameListener {
/* Declare the Q_OBJECT keyword to ensure Qt's intermediate compiler
can do the necessary wireup between our class and the rest of Qt. */
Q_OBJECT
public:
explicit OgreDemoQt(QWindow *parent = NULL);
~OgreDemoQt();
public slots:
bool eventFilter(QObject *target, QEvent *event);
protected:
/* Core variables for Ogre window */
Ogre::Root *mRoot;
Ogre::Camera *mCamera;
Ogre::CompositorWorkspace *mWorkspace;
Ogre::RenderWindow *mRenderWindow;
Ogre::SceneManager *mSceneManager;
/* Overlays to show the shadow maps and screen text */
Ogre::v1::OverlaySystem *mOverlaySystem;
Ogre::v1::Overlay *mOverlayPSSM;
Ogre::v1::TextAreaOverlayElement *mOverlayText;
Ogre::v1::Overlay *mOverlaySpotlights;
Ogre::v1::TextAreaOverlayElement *mOverlayTextShadow;
/* The lights and objects defined in the scene */
std::vector<Ogre::SceneNode*> mLightNodes;
std::vector<Ogre::SceneNode*> mSceneNodes;
bool m_update_pending;
bool event(QEvent *event);
void addCubes(void);
void addLights(void);
void addPlane(void);
void createCamera(void);
void createScene(void);
void createSceneManager(void);
void createShadowMapOverlays(void);
void createTextOverlay(void);
void exposeEvent(QExposeEvent *event);
int initialise(void);
void initialiseQtWindow(void);
void registerHlms(void);
void render(void);
void renderLater(void);
void renderNow(void);
void setResourceLocations(void);
};
#endif // OGREDEMOQT_H
|
/*
* DatabaseConnectionProvider.java
*
* 09.06.2016
*
* (c) by HealthCarion
*
*/
package archimedes.connections;
/**
* Classes which implement this interface provide methods to maintain a collection of
* database connections.
*
* @author <NAME>
*
* @changed OLI 09.06.2016 - Added.
*/
public interface DatabaseConnectionProvider {
/**
* Adds a new database connection to the list (if it is not already contained).
*
* @param dc The database connection which is to be added.
*
* @changed OLI 15.01.2015 - Added.
* @changed OLI 09.06.2016 - Took from "Archimedes.DiagrammModel".
*/
abstract public void addDatabaseConnection(DatabaseConnection dc);
/**
* Returns the database connection with the passed name.
*
* @param name The name of the database connection which is to return.
* @return The database connection with the passed name or <CODE>null</CODE> if no
* connection with the passed name exists.
*
* @changed OLI 15.01.2015 - Added.
* @changed OLI 09.06.2016 - Took from "Archimedes.DiagrammModel".
*/
abstract public DatabaseConnection getDatabaseConnection(String name);
/**
* Returns a list of all database connections stored in the diagram.
*
* @return A list of all database connections stored in the diagram.
*
* @changed OLI 15.01.2015 - Added.
* @changed OLI 09.06.2016 - Took from "Archimedes.DiagrammModel".
*/
abstract public DatabaseConnection[] getDatabaseConnections();
/**
* Removes the database connection with the passed name (if there is one in the diagram).
*
* @param name The name of the database connection which is to be removed from the diagram.
* @return <CODE>true</CODE> if the database connection is removed.
*
* @changed OLI 15.01.2015 - Added.
* @changed OLI 09.06.2016 - Took from "Archimedes.DiagrammModel".
*/
abstract public boolean removeDatabaseConnection(String name);
} |
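The interface above specifies a name-keyed registry: add only if not already contained, return `null` for unknown names, and report removal success as a boolean. That contract can be sketched in a few lines (Python used here for brevity; the connection objects are stand-ins, not the Archimedes `DatabaseConnection` class):

```python
class DatabaseConnectionRegistry:
    """Name-keyed registry mirroring the DatabaseConnectionProvider contract."""

    def __init__(self):
        self._connections = {}           # name -> connection object

    def add(self, name, connection):
        # Add only if not already contained, as the interface specifies.
        self._connections.setdefault(name, connection)

    def get(self, name):
        # None plays the role of Java's null return value.
        return self._connections.get(name)

    def get_all(self):
        return list(self._connections.values())

    def remove(self, name):
        # True only if a connection with that name was actually removed.
        return self._connections.pop(name, None) is not None
```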
// Adding a timer task is O(log n)
void timer_t::add_timer_in_loop(timer_task_t *task, size_t id)
{
task->id = id;
auto it = std::shared_ptr<timer_task_t>(task);
timer_set.emplace(it);
timer_map.emplace(id, it);
log_debug("timer(id=%zu) has been added to the Timer", id);
} |
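The C++ snippet keeps each task in both an ordered set (keyed by deadline) and an id map, so insertion costs O(log n) and lookup by id is O(1). A rough Python analogue of the same bookkeeping (a hypothetical API, not a translation of the class above) uses a heap with lazy deletion for cancellation:

```python
import heapq
import itertools

class Timer:
    """Deadline-ordered timer: O(log n) add, O(1) cancel via lazy deletion."""

    def __init__(self):
        self._heap = []                  # (deadline, id) entries, heap-ordered
        self._tasks = {}                 # id -> callback, mirrors timer_map
        self._ids = itertools.count()

    def add(self, deadline, callback):
        task_id = next(self._ids)
        heapq.heappush(self._heap, (deadline, task_id))   # O(log n)
        self._tasks[task_id] = callback
        return task_id

    def cancel(self, task_id):
        # Lazy deletion: drop the map entry; the heap entry is skipped later.
        return self._tasks.pop(task_id, None) is not None

    def run_due(self, now):
        """Fire every task whose deadline is <= now; return how many ran."""
        fired = 0
        while self._heap and self._heap[0][0] <= now:
            _, task_id = heapq.heappop(self._heap)
            cb = self._tasks.pop(task_id, None)
            if cb is not None:           # skip entries cancelled earlier
                cb()
                fired += 1
        return fired
```

Lazy deletion trades a little heap garbage for not having to erase from the middle of the heap, which `heapq` does not support directly.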
# SOURCE: https://blog.bartab.fr/fastapi-logging-on-the-fly/
from __future__ import annotations
from typing import Any, List, Optional
# pylint: disable=no-name-in-module
from pydantic import BaseModel
class LoggerPatch(BaseModel):
name: str
level: str
# ListLoggerModel = ForwardRef("List[LoggerModel]")
class LoggerModel(BaseModel):
name: str
level: Optional[int]
# children: Optional[List["LoggerModel"]] = None
# fixes: https://github.com/samuelcolvin/pydantic/issues/545
children: Optional[List[Any]] = None
# children: ListLoggerModel = None
LoggerModel.update_forward_refs()
|
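The recursive `children` field suggests `LoggerModel` is meant to mirror Python's logger hierarchy (the linked blog post is about adjusting log levels on the fly). A stdlib-only sketch of how such a tree could be assembled from the `logging` registry, using plain dicts in place of the pydantic models (function name and shape are assumptions, not from the post):

```python
import logging

def logger_tree(root=None):
    """Build a nested {name, level, children} dict for the logger hierarchy."""
    root = root or logging.root
    node = {"name": root.name, "level": root.level, "children": []}
    prefix = "" if root is logging.root else root.name + "."
    for name, logger in logging.root.manager.loggerDict.items():
        if isinstance(logger, logging.PlaceHolder):
            continue                     # placeholders carry no level of their own
        # direct children only: exactly one more dotted component
        if name.startswith(prefix) and "." not in name[len(prefix):]:
            node["children"].append(logger_tree(logger))
    return node
```

Each dict in the tree carries the same fields as `LoggerModel`, so validating it with `LoggerModel(**node)` should be straightforward.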
The line, “Neither flesh nor fleshless,” is from T.S. Eliot’s poem “Burnt Norton.” It is something that I remember for a moment, but it is soon gone. In fact, lots of words and phrases are buzzing in my head when I walk into the gallery: tragic, universal, melancholic, essential, meditative. One by one, these clichés leave until finally, sitting on one of the padded benches the gallery thoughtfully provided, I find myself staring at “Untitled” (1955), trying to figure out where the middle rectangle in the painting ends and the ground begins. I can make out the top and bottom rectangle but not the middle one. The middle rectangle is there and then it isn’t. It becomes an emptiness you feel as much as you see, like staring into a cave unable to make out anything in the darkness. I am frustrated but also strangely comforted. This is why the words I had in my head left me — they were too definitive.
The painting, “Untitled” (1955) is included in the exhibition, Mark Rothko: Dark Palette at Pace (November 4, 2016 – January 7, 2017). It has three rectangles, with the top and bottom ones vaporous and the middle one even less substantial. The top rectangle is a muted violet with most of the top ever so slightly darker than the interior, while the one at the bottom is a muted dark blue, with the left half of the top edge slightly brighter than the rest. Over time I become increasingly sensitive to the minute transitions and shifts of tone and color within the painting without ever feeling quite satisfied with what I know. And yet, as I suggested earlier, I don’t feel dissatisfied either. Is it because I sympathize with Rothko’s refusal to be definitive in this painting? Is it because he immersed himself in murky color sensations much more than I am able to? I can’t say. Rather, like the hovering vaporous planes in many of his paintings, I feel between being moored and unmoored.
I get up and walk around, wishing there was no one else in the gallery on this rainy afternoon. Rust browns, red browns, blue blacks, and dark plum reds — these are some of the crepuscular colors Rothko used. Some of his rectangles, edged with an aura, seem almost never to come into focus, while others feel almost crisp. Looking at “Untitled (Dark Gray on Maroon)” (1963), which is in the collection of the National Gallery of Art in Washington DC, I feel as if I am losing the ability to distinguish, to see the three rectangles hovering inside the painting’s rectangle, which is over 11 feet high by six feet wide. Each one is different enough from the others that I find myself registering the increments separating evanescence and density. The edges of the middle, squarish shape are darker than the interior. At times the maroon ground seems to have bled through the skin of the darker color that Rothko has laid over it. There is something disconcerting about the experience, even as you are flooded with an odd tranquility.
In fact, I find it a relief that a painting such as “Untitled (Plum and Brown)” (1964) is occupied by a single large, vertical rectangle hovering close to the top of a canvas measuring nearly seven feet tall by slightly less than seven feet wide. At the same time, the more I look at the paintings in this stunning exhibition, the more I find myself drawn to the ones percolating with a feeling of instability. In these paintings, where the two or three rectangles ranged across the surface differ from the others in substantiality or, perhaps more accurately, insubstantiality, I feel that Rothko has gotten to a state of unparalleled vulnerability in his work. But I also sense the degree to which he counteracted that perception, infusing his colors with the solidity of a shadow.
It’s as if, through the act of rubbing the thinned colors into the canvas, he was rubbing the skin of the painting until it turned the color of dried blood, or a blue or violet bruise. And yet, once again, I have become too definitive. This is not to suggest there is something vague about the paintings. There is nothing uncertain about how subtle and exquisite the colors are, or about how the light seems to have been suddenly snuffed out, with everything settling into darkness. These descriptions are dramatic in a theatrical way. Rothko’s paintings are dramatic without being theatrical, which isn’t to say that they are hushed or muffled. Despite the limits he imposed upon his practice, the paintings are as different as faces in a crowd: you need only look. Philip Guston may have gotten sick of purity, but I don’t think that Rothko was the least bit interested in purity. There is something dirty about his blacks and rust reds — they reek of the earth even as they try to shed their materiality.
What’s great about Rothko’s paintings is their refutation of language, the way they push back against conclusions. They imagine a domain in which materiality has surrendered to the tenebrous. There are planes with feathery edges, or with edges where a faint aura glows. Sometimes both are in the same painting. Between the planes a strip of orange rust cuts across, like the slashed throat of a setting sun. The color breathes in these works — some of them brimming with suppressed agitation, or what Rothko called “curbed desire.”
Rothko wanted to make a naked painting, which I take to mean he wanted to merge seeing and feeling with nothing to protect him. My eyes and my mood keep adjusting because there is something apprehensive about these paintings — the joy of making mixed with the sorrow of seeing. I am reminded of something that Pierre Bonnard wrote, “There is always color, it has yet to become light,” though in Rothko’s case, I hear him say, “it has yet to become darkness.” And then in an email from Thomas Nozkowski, I read the line: “Henry James says we paint pictures because there are things we cannot say.”
Mark Rothko: Dark Palette continues at Pace (510 West 25th Street, Chelsea, Manhattan) through January 7, 2017. |
/**
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
*/
import * as React from 'react';
import {db} from './db.server';
import SidebarNote from './SidebarNote';
interface NoteListProps {
searchText: string;
}
const NoteList: React.FC<NoteListProps> = ({searchText}) => {
// const notes = fetch('http://localhost:4000/notes').json();
// WARNING: This is for demo purposes only.
// We don't encourage this in real apps. There are far safer ways to access
// data in a real application!
const notes = db.query(
`select * from notes where title ilike $1 order by id desc`,
['%' + searchText + '%']
).rows;
// Now let's see how the Suspense boundary above lets us not block on this.
// fetch('http://localhost:4000/sleep/3000');
return notes.length > 0 ? (
<ul className="notes-list">
{notes.map((note) => (
<li key={note.id}>
<SidebarNote note={note} />
</li>
))}
</ul>
) : (
<div className="notes-empty">
{searchText
? `Couldn't find any notes titled "${searchText}".`
: 'No notes created yet!'}{' '}
</div>
);
};
export default NoteList;
|
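The search query in the component above passes `searchText` as a bound parameter (`$1`) rather than splicing it into the SQL string, which is what keeps the `ilike` search injection-safe despite the demo's warning about the data layer. The same pattern in the stdlib `sqlite3` module (a hedged illustration, not the app's actual database layer; SQLite's `LIKE` is case-insensitive for ASCII by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table notes (id integer primary key, title text)")
conn.executemany("insert into notes (title) values (?)",
                 [("Groceries",), ("Meeting notes",), ("Ideas",)])

def search_notes(conn, search_text):
    """Case-insensitive title search; the pattern is bound, never interpolated."""
    pattern = "%" + search_text + "%"
    return conn.execute(
        "select id, title from notes where title like ? order by id desc",
        (pattern,),
    ).fetchall()
```

Note that the `%` wildcards are concatenated onto the *value*, not onto the SQL text, exactly as `'%' + searchText + '%'` is in the component.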
Epidemiological trends in congenital toxoplasmosis and CMV are extremely divergent. While there were only 39 cases of congenital toxoplasmosis in Switzerland between 1982 and 2015, there was an equivalent number of cases of congenital CMV, 38 in total, in 2017 alone. Serological screening for toxoplasmosis was logically abandoned in Switzerland in 2008. Regarding CMV, there is no recommendation for serological screening or neonatal screening in Switzerland, whereas early diagnosis can improve prognosis through the rapid initiation of antiviral treatment. The epidemiological data generated by sentinel surveillance of congenital CMV infections in Switzerland may or may not justify such a measure in our country in the future.
def mark_closed(self, tactic_application: TacticApplication):
assert self.closed is not None
if self.closed:
return
assert tactic_application.parent.index == self.index
for subgoal in tactic_application.subgoals:
assert subgoal.closed
self.closed = True
self.ignored = True
for tac_app in self.successful_attempts:
if not tac_app.closed:
for subgoal in tac_app.subgoals:
subgoal.update_ignore()
for subgoal_ref in self.parents:
subgoal_ref.tactic_application.update_closed() |
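The method above marks a goal closed once one tactic application has all of its subgoals closed, then asks the goal's parents to re-check themselves. A toy model of that upward propagation (a deliberately simplified structure with hypothetical names, not the prover's actual `TacticApplication` API):

```python
class Goal:
    """A goal is closed when any one attempt closes all of its subgoals."""

    def __init__(self):
        self.closed = False
        self.parents = []        # goals that have this goal as a subgoal
        self.attempts = []       # each attempt is a list of subgoal Goals

    def add_attempt(self, subgoals):
        self.attempts.append(subgoals)
        for sg in subgoals:
            sg.parents.append(self)

    def update_closed(self):
        if self.closed:
            return
        if any(all(sg.closed for sg in att) for att in self.attempts):
            self.closed = True
            # Propagate upward, as mark_closed does via update_closed().
            for parent in self.parents:
                parent.update_closed()

def close_leaf(goal):
    goal.closed = True
    for parent in goal.parents:
        parent.update_closed()
```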
Cognitive Performance, Sleepiness, and Mood in Partially Sleep Deprived Adolescents: The Need for Sleep Study. STUDY OBJECTIVES To investigate the effects of sleep restriction (7 nights of 5 h time in bed (TIB)) on cognitive performance, subjective sleepiness, and mood in adolescents. METHODS A parallel-group design was adopted in the Need for Sleep Study. Fifty-six healthy adolescents (25 males, age = 15-19 y) who studied in top high schools and were not habitual short sleepers were randomly assigned to Sleep Restriction (SR) or Control groups. Participants underwent a 2-week protocol consisting of 3 baseline nights (TIB = 9 h), 7 nights of sleep opportunity manipulation (TIB = 5 h for the SR group and 9 h for the control group), and 3 nights of recovery sleep (TIB = 9 h) at a boarding school. A cognitive test battery was administered three times each day. RESULTS During the manipulation period, the SR group demonstrated incremental deterioration in sustained attention, working memory and executive function, an increase in subjective sleepiness, and a decrease in positive mood. Subjective sleepiness and sustained attention did not return to baseline levels even after 2 recovery nights. In contrast, the control group maintained baseline levels of cognitive performance, subjective sleepiness, and mood throughout the study. Incremental improvement in speed of processing, as a result of repeated testing and learning, was observed in the control group but was attenuated in the sleep-restricted participants, who, despite two recovery sleep episodes, continued to perform worse than the control participants. CONCLUSIONS A week of partial sleep deprivation impairs a wide range of cognitive functions, subjective alertness, and mood even in high-performing high school adolescents. Some measures do not recover fully even after 2 nights of recovery sleep. COMMENTARY A commentary on this article appears in this issue on page 497.
Actor Vivek Oberoi, who will next be seen in Tamil action thriller Vivegam, also featured in Bank Chor this year. That's all that he had lined up for 2017. Last year, he just starred in Great Grand Masti. Before that, he was seen in two fully etched roles in the 2013 movies Krrish 3 and Grand Masti. In a recent interview with news agency IANS, the actor revealed why he's restricted his roster to one or two films a year. "I'm still choosy about my work. I like to do one film a year and sometimes even two, but not more. I'm enjoying watching my children grow. I like to spend more time with them. I also have my businesses and charitable organizations that keep me busy," IANS quoted him as saying.
Vivek Oberoi married Priyanka Alva, late politician Jeevaraj Alva's daughter, in 2010 - eight years after he made his acting debut with Ram Gopal Varma's Company. The couple are parents to two children - daughter Ameyaa and son Vivaan.
Vivek Oberoi is making his debut in Tamil cinema with Ajith Kumar's Vivegam. The Darna Mana Hai actor features as the antagonist in the film. Sharing his experience of working with Ajith Kumar, he told IANS: "I was touched by his humility. On the first day of the shoot, he walked up to me and thanked me for being part of the project. He referred me as sir and I didn't know how to react. But I could sense he was really humble and not faking it."
Directed by Siva, Vivegam will arrive in theatres on August 24. |
The student drove to the curb outside the airport early Thursday, left his car running, hopped a fence and boarded the plane.
MELBOURNE, Fla. — Authorities in Florida say a 26-year-old student pilot boarded a vacant Airbus 321 American Airlines aircraft in a maintenance facility, causing a short lockdown at Orlando-Melbourne International Airport.
Airport spokeswoman Lori Booker tells news outlets the student drove to the curb outside the airport early Thursday, left his car running, hopped a fence and boarded the plane. A maintenance worker spotted him and police took him into custody.
Booker said officials conducted a sweep of the airfield before re-opening the airport. She says no passengers were in the area at the time.
Booker said the man was born in Trinidad and entered the United States through Canada. She says he has a Florida driver's license, but didn't know what school he attended. |
RNA interferencemediated simultaneous down-regulation of urokinase-type plasminogen activator receptor and cathepsin B induces caspase-8mediated apoptosis in SNB19 human glioma cells The invasive character of gliomas depends on proteolytic cleavage of the surrounding extracellular matrix. Cathepsin B and urokinase-type plasminogen activator receptor (uPAR) together are known to be overexpressed in gliomas and, as such, are attractive targets for gene therapy. In the present study, we used plasmid constructs to induce the RNA interference (RNAi)mediated down-regulation of uPAR and cathepsin B in SNB19 human glioma cells. We observed that the simultaneous down-regulation of uPAR and cathepsin B induces the up-regulation of proapoptotic genes and initiates a collapse in mitochondrial. Cathepsin B and uPAR down-regulated cells showed increases in the expression of activated caspase-8 and DFF40/caspase-activated DNase. Nuclear translocation of AIF and Fas ligand translocation to the cell membrane were also observed. Ki67 and X-linked inhibitor of apoptosis protein levels decreased, thereby indicating apoptosis. These results suggest the involvement of uPAR-cathepsin B complex on the cell surface and its role in maintaining the viability of SNB19 glioma cells. In conclusion, RNAi-mediated down-regulation of uPAR and cathepsin B initiates a partial extrinsic apoptotic cascade accompanied by the nuclear translocation of AIF. Our study shows the potential of RNAi-mediated down-regulation of uPAR and cathepsin B in developing new therapeutics for gliomas. |
package dht
import (
"net"
"time"
log "github.com/sirupsen/logrus"
"github.com/bytom/consensus"
"github.com/bytom/errors"
)
var (
errInvalidIP = errors.New("invalid ip address")
errDNSTimeout = errors.New("get dns seed timeout")
errDNSSeedsEmpty = errors.New("dns seeds is empty")
dnsTimeout = 5 * time.Second
)
// QueryDNSSeeds queries the DNS seeds and returns the resolved seed addresses.
func QueryDNSSeeds(lookupHost func(host string) (addrs []string, err error)) ([]string, error) {
if len(consensus.ActiveNetParams.DNSSeeds) == 0 {
return nil, errDNSSeedsEmpty
}
resultCh := make(chan *[]string, 1)
for _, dnsSeed := range consensus.ActiveNetParams.DNSSeeds {
go queryDNSSeeds(lookupHost, resultCh, dnsSeed, consensus.ActiveNetParams.DefaultPort)
}
select {
case result := <-resultCh:
return *result, nil
case <-time.After(dnsTimeout):
return nil, errDNSTimeout
}
}
func queryDNSSeeds(lookupHost func(host string) (addrs []string, err error), resultCh chan *[]string, dnsSeed, port string) {
var seeds []string
//TODO add proxy
addrs, err := lookupHost(dnsSeed)
if err != nil {
log.WithFields(log.Fields{"module": logModule, "err": err, "dnsSeed": dnsSeed}).Error("fail on look up host")
return
}
for _, addr := range addrs {
if ip := net.ParseIP(addr); ip == nil {
log.WithFields(log.Fields{"module": logModule, "err": errInvalidIP, "dnsSeed": dnsSeed}).Error("fail on parse IP")
return
}
seeds = append(seeds, net.JoinHostPort(addr, port))
}
if len(seeds) == 0 {
return
}
//if channel is full, drop it
select {
case resultCh <- &seeds:
default:
}
}
|
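`QueryDNSSeeds` races one goroutine per seed, takes whichever non-empty answer arrives first on the channel, and errors after a timeout; per-seed failures are logged and swallowed. A rough Python counterpart using `concurrent.futures` (hypothetical names; the injected `lookup_host` stands in for the DNS resolver, just as in the Go code):

```python
import concurrent.futures

def query_dns_seeds(lookup_host, seeds, port, timeout=5.0):
    """Return host:port strings from whichever seed answers first."""
    if not seeds:
        raise ValueError("dns seeds is empty")

    def query(seed):
        addrs = lookup_host(seed)          # may raise on resolver failure
        return [f"{addr}:{port}" for addr in addrs]

    with concurrent.futures.ThreadPoolExecutor(len(seeds)) as pool:
        futures = [pool.submit(query, s) for s in seeds]
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            try:
                result = fut.result()
                if result:                  # skip seeds that resolved nothing
                    return result
            except Exception:
                continue                    # a failed seed is not fatal
    raise TimeoutError("get dns seed timeout")
```

One difference from the Go version: the executor's context manager waits for the remaining lookups on exit, whereas the Go goroutines are simply abandoned after the first send on the buffered channel.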
Visual word pairs for automatic image annotation The bag-of-visual-words is a popular representation for images that has proven to be quite effective for automatic annotation. In this paper, we extend this representation to include weak geometrical information by using visual word pairs. We show on a standard benchmark dataset that this new image representation significantly improves the performance of an automatic annotation system.
The L/N-Type Calcium Channel Blocker, Cilnidipine, Reduces Heart Rate and Albuminuria in Patients with Type 2 Diabetes This study was designed to investigate whether the L/N-type calcium channel blocker, cilnidipine, had a renoprotective effect compared with other calcium channel blockers. Twenty-five hypertensive patients with concomitant type 2 diabetes who had a urinary albumin-creatinine ratio (ACR) of 10 − 300 mg albumin/g creatinine and who had been treated with oral calcium channel blockers other than cilnidipine for more than 3 months were included. Patients' medication was changed to cilnidipine 10 mg/day or 20 mg/day without a washout period. Blood pressure and renal function were measured before and at 3 months after the new treatment. Heart rate was also determined as a marker for sympathetic nervous activity. After substitution of cilnidipine, blood pressure did not change significantly, but heart rate decreased significantly from 73.9 ± 7.1 beats/min to 72.0 ± 8.4 beats/min, and the log-transformed urinary ACR decreased to 82.9 ± 49.4% of baseline values. The changes in urinary ACR and heart rate showed a significant positive correlation. Thus, there was a strong indication that cilnidipine may exert its renoprotective effect by inhibiting sympathetic nervous activity. |
Triangulated 3-Manifolds: from Haken's normal surfaces to Thurston's algebraic equation

We give a brief summary of some of our work and our joint work with Stephan Tillmann on solving Thurston's equation and Haken's equation on triangulated 3-manifolds in this paper. Several conjectures on the existence of solutions to Thurston's equation and Haken's equation are made. Resolutions of these conjectures will lead to a new proof of the Poincaré conjecture without using the Ricci flow. We approach these conjectures by a finite-dimensional variational principle so that its critical points are related to solutions to Thurston's gluing equation and Haken's normal surface equation. The action functional is the volume. This is a generalization of an earlier program by Casson and Rivin for compact 3-manifolds with torus boundary.

Introduction

This paper is based on several talks given by the author at the conference "Interactions Between Hyperbolic Geometry, Quantum Topology and Number Theory" at Columbia University in 2009 and a few more places. The goal of the paper is to give a quick summary of some of our work and our joint work with Stephan Tillmann on triangulated 3-manifolds. Our work is an attempt to connect the geometry and topology of compact 3-manifolds from the point of view of triangulations. We will recall Haken's normal surface theory, Thurston's work on the construction of hyperbolic structures, Neumann-Zagier's work, the notion of angle structures introduced by Casson, Rivin and Lackenby, and the work of several other people. One important point we would like to emphasize is the role that the Neumann-Zagier Poisson structure plays in these theories. It is conceivable that the Neumann-Zagier Poisson structure will play an important role in the discretization and quantization of SL(2,C) Chern-Simons theory in dimension three.
A combination of the recent work of Segerman-Tillmann, Futer-Guéritaud and Luo-Tillmann has prompted us to make several conjectures on the solutions of Thurston's equation and Haken's normal surface equations. The resolution of some of these conjectures will produce a new proof of the Poincaré conjecture without using the Ricci flow method. Let us begin with a recall of closed triangulated pseudo 3-manifolds. Take a disjoint union of tetrahedra. Identify codimension-1 faces of tetrahedra in pairs by affine homeomorphisms. The quotient space is a triangulated closed pseudo 3-manifold. (See §2.1 for more details.) In particular, closed triangulated 3-manifolds are closed triangulated pseudo 3-manifolds, and ideally triangulated 3-manifolds are pseudo 3-manifolds with vertices removed. Given a closed triangulated oriented pseudo 3-manifold, there are linear and algebraic equations associated to the triangulation. Besides the homology theories, the most prominent ones are Haken's equation of normal surfaces and Thurston's algebraic gluing equation for the construction of hyperbolic metrics using hyperbolic ideal tetrahedra. Haken's theory is topological and studies surfaces in 3-manifolds, while Thurston's equation is geometric and tries to construct hyperbolic metrics from the triangulation. In the most general setting, Thurston's equation tries to find representations of the fundamental group into PSL(2, C). Much work has been done on both normal surface theory and Thurston's equation with fantastic consequences in the past fifty years. Haken's normal surface equation is linear. A basis for the solution space was found recently by Kang-Rubinstein. In particular, there are always solutions to Haken's equation with non-zero quadrilateral coordinates. The situation for solving Thurston's equation is different. The main problem which motivates our investigation is the following.
Main Problem. Given a closed oriented triangulated pseudo 3-manifold (M, T), when does there exist a solution to Thurston's gluing equation?

The most investigated cases in solving Thurston's equation are associated to ideally triangulated 3-manifolds with torus boundary so that the complex numbers z are in the upper half-plane (see for instance, among many others). However, we intend to study Thurston's equation and its solutions in the most general setting of closed oriented triangulated pseudo 3-manifolds, in particular, on closed triangulated 3-manifolds. Even though a solution to Thurston's equation in the general setting does not necessarily produce a hyperbolic structure, one can still obtain important information from it. For instance, it was observed that each solution of Thurston's equation produces a representation of the fundamental group of the pseudo 3-manifold with vertices of the triangulation removed to PSL(2, C). A simplified version of a recent theorem of Segerman-Tillmann states the following.

Theorem 1.1 (Segerman-Tillmann). If (M, T) is a closed triangulated oriented 3-manifold so that the triangulation supports a solution to Thurston's equation, then each edge in T either has two distinct end points or is homotopically essential in M.

In particular, their theorem says any one-vertex triangulation of a simply connected 3-manifold cannot support a solution to Thurston's equation. A combination of theorem 1.1 and a related result gives an interesting solution to the main problem for closed 3-manifolds. Namely, a closed triangulated 3-manifold (M, T) supports a solution to Thurston's equation if and only if there exists a representation ρ : π1(M) → PSL(2, C) so that ρ(e) ≠ 1 for each edge e having the same end points. The drawback of this solution is that the representation has to be given a priori.
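Since the precise form of the gluing equation is deferred to §3, it may help to recall the standard version here (a sketch in one common normalization, not the paper's own statement): each ideal tetrahedron σ carries a shape parameter z_σ ∈ C ∖ {0, 1}, its three pairs of opposite edges receiving the moduli z_σ, 1/(1 − z_σ) and (z_σ − 1)/z_σ; Thurston's equation demands that around every edge e of the triangulation the incident moduli multiply to 1:

```latex
\prod_{(\sigma,\, e')\,:\ e' \subset \sigma,\ e' \sim e} w(\sigma, e') = 1,
\qquad
w(\sigma, e') \in \left\{ z_\sigma,\ \frac{1}{1 - z_\sigma},\ \frac{z_\sigma - 1}{z_\sigma} \right\},
```

where the product runs over pairs of a tetrahedron σ and an edge e' of σ identified with e. For geometric (hyperbolic) solutions one additionally requires Im z_σ > 0 and that the arguments around each edge sum to 2π.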
Our recent work suggests another way to resolve the main problem using Haken's normal surface equation. To state the corresponding conjecture, let us recall that a solution to Haken's normal surface equation is said to be of 2-quad-type if it has exactly one or two non-zero quadrilateral coordinates. A cluster of three 2-quad-type solutions to Haken's equation consists of three 2-quad-type solutions x1, x2 and x3 so that there is a tetrahedron containing three distinct quadrilaterals q1, q2, q3 with xi(qi) ≠ 0 for i = 1, 2, 3. A triangulation of a 3-manifold is called minimal if it has the smallest number of tetrahedra among all triangulations of the 3-manifold. The main focus of our investigation will be around the following conjecture. We thank Ben Burton and Henry Segerman for providing supporting data which helped us formulate it in the current form.

Conjecture 1. Suppose (M, T) is a minimally triangulated closed orientable 3-manifold. Then either there exists a solution to Thurston's equation or there exists a cluster of three 2-quad-type solutions to Haken's normal surface equation.

Using a theorem of Futer-Guéritaud, we proved the following result, which supports conjecture 1.

Theorem 1.2. Suppose (M, T) is a closed triangulated oriented pseudo 3-manifold. Then either there exists a solution to the generalized Thurston equation or there exists a cluster of three 2-quad-type solutions to Haken's normal surface equation.

In our joint work with Tillmann, using Jaco-Rubinstein's work, we proved the following theorem concerning the topology of 3-manifolds satisfying part of conjecture 1.

Theorem 1.3. Suppose (M, T) is a minimally triangulated orientable closed 3-manifold so that there exists a cluster of three 2-quad-type solutions to Haken's normal surface equation. Then (a) M is reducible, or (b) M is toroidal, or (c) M is a Seifert fibered space, or (d) M contains the connected sum #_{i=1}^3 RP^2 of three copies of the projective plane.

Using theorems 1.1 and 1.3, one can deduce the Poincaré conjecture from conjecture 1 (without using the Ricci flow) as follows. Suppose M is a simply connected closed 3-manifold. By the Kneser-Milnor prime decomposition theorem, we may assume that M is irreducible. Take a minimal triangulation T of M. By the work of Jaco-Rubinstein on 0-efficient triangulations, we may assume that T has only one vertex, i.e., each edge is a loop.
By Segerman-Tillmann's theorem above, we see that (M, T) cannot support a solution to Thurston's equation. By conjecture 1, there exists a cluster of three 2-quad-type solutions to Haken's equation. By theorem 1.3, the minimality of T and the irreducibility of M, we conclude that M = S³. Theorem 1.2 is proved in , where we proposed a variational principle associated to the triangulation to approach conjecture 1. In this approach, 2-quad-type solutions to Haken's equation arise naturally from non-smooth maximum points. We generalize the notion of angle structures, introduced by Casson, Lackenby and Rivin (for ideally triangulated cusped 3-manifolds), to the circle-valued angle structure (or S¹-angle structure, or SAS for short) and its volume for any closed triangulated pseudo 3-manifold. It is essentially proved in , and more specifically in , that an SAS exists on any closed triangulated pseudo 3-manifold (M, T). The space SAS(T) of all circle-valued angle structures on (M, T) is shown to be a closed smooth manifold. Furthermore, each circle-valued angle structure has a natural volume defined by the Milnor-Lobachevsky function. This defines a continuous but not necessarily smooth volume function vol on the space SAS(T). In particular, the volume function vol achieves a maximum point in SAS(T). The two conclusions in theorem 1.2 correspond to the maximum point being smooth or not for the volume function. More details of the results obtained so far and our approaches to resolving conjecture 1 will be discussed in sections 4 and 5. We remark that conjecture 1 itself is independent of angle structures and there are other ways to approach it. There are several interesting problems arising from the approach taken here. For instance, how does one relate the critical values of the volume function on SAS(T) to the Gromov norm of the 3-manifold? The Gromov norm of a closed 3-manifold is probably the most important topological invariant for 3-manifolds.
Yet its computation is not easy. It seems highly likely that for a triangulation without a cluster of three 2-quad-type solutions to Haken's equation, the Gromov norm of the manifold (multiplied by the volume of the regular ideal tetrahedron) is among the critical values of the volume function on SAS(T). In our recent work with Tillmann and Yang, we have solved this problem for closed hyperbolic manifolds. An affirmative resolution of this problem for all 3-manifolds may provide insights which help to resolve the volume conjecture for closed 3-manifolds. Futer and Guéritaud have written a very nice paper on volume and angle structures which is closely related to the material covered in this paper. We remark that this is not a survey paper on the subject of triangulations of 3-manifolds. Important work in the field, in particular the work of Jaco-Rubinstein on efficient triangulations of 3-manifolds, is not discussed in the paper. The paper is organized as follows. In section 2, we recall the basic material on triangulations and Haken's normal surface theory. In section 3, we discuss Neumann-Zagier's Poisson structure and Thurston's gluing equation. In section 4, we discuss circle-valued angle structures, their volume, some of our work and a theorem of Futer-Guéritaud. In section 5, we introduce a Z₂ version of Thurston's equation (Z₂-taut structure). We thank the editors of the conference proceedings for inviting us to write the paper, and S. Tillmann and the referee for suggestions on improving the writing of this paper. We would like to thank in particular David Futer and François Guéritaud for allowing us to present their unpublished theorem. The proof of this theorem was also supplied by them. Triangulations and normal surfaces Normal surface theory, developed by Haken in the 1950's, is a beautiful chapter in 3-manifold topology. In the late 1970's, Thurston introduced the notion of spun normal surfaces and used it to study 3-manifolds.
We will revisit normal surface theory and follow the expositions in and closely in this section. Some of the notations used in this section are new. The work of Tollefson, Kang-Rubinstein, Tillmann, and Jaco on characterizing the quadrilateral coordinates of normal surfaces will be discussed. Some useful facts about tetrahedra The following lemma will be used frequently in the sequel. The proof is very simple and will be omitted. To start, suppose σ is a tetrahedron with vertices v₁, ..., v₄ and edges carrying weights a_ij on the edge v_i v_j. (b) If the sum of the weights of the edges from each vertex is a constant, i.e., a_ij + a_ik + a_il is independent of the indices, then the weights of opposite edges are the same, i.e., a_ij = a_kl for {i, j, k, l} = {1, 2, 3, 4}. (c) If the tetrahedron is oriented and the edges are labelled by a, b, c so that opposite edges are labelled by the same letter (see figure 1(a)), then the cyclic order a → b → c → a is independent of the choice of the vertices and depends only on the orientation of σ. Triangulated closed pseudo 3-manifolds and Haken's normal surface equation Let X be a union of finitely many disjoint oriented Euclidean tetrahedra. The collection of all faces of the tetrahedra in X is a simplicial complex T* which is a triangulation of X. Identify codimension-1 faces in X in pairs by affine orientation-reversing homeomorphisms. The quotient space M is a closed oriented pseudo 3-manifold with a triangulation T whose simplices are the quotients of the simplices in T*. Let V, E, F, T (and V*, E*, F* and T*) be the sets of all vertices, edges, triangles and tetrahedra in T (in T*, respectively). The quotient of a simplex x ∈ T* will be denoted by x̄ in T. We call x ∈ T* the unidentified simplex and x̄ the quotient simplex. Since the sets of tetrahedra in T* and T are bijective under the quotient map, we will identify a tetrahedron σ ∈ T* with its quotient σ̄, i.e., σ = σ̄ and T = T*. We use x > y to denote that y is a face of x, and |Y| to denote the cardinality of a set Y.
Note that in this definition of triangulation, we do not assume that simplices in T are embedded in M. For instance, it may well be that |V| = 1. Furthermore, the non-manifold points of M are contained in the set of vertices. According to Haken, a normal surface in a triangulated pseudo 3-manifold M is an embedded surface S ⊂ M so that for each tetrahedron σ, topologically the intersection S ∩ σ consists of a collection of planar quadrilaterals and planar triangles, i.e., inside each tetrahedron, topologically the surface S looks like planes cutting through the tetrahedron generically. Haken's theory puts this geometric observation into an algebraic setting. According to , a normal arc in X is an embedded arc in a triangle face so that its end points are in different edges, and a normal disk in X is an embedded disk in a tetrahedron so that its boundary consists of 3 or 4 normal arcs. These are called normal triangles and normal quadrilaterals, respectively. A normal isotopy is an isotopy of X leaving each simplex invariant. Haken's normal surface theory deals with normal isotopy classes of normal disks and normal surfaces. For simplicity, we will interchange the use of a normal disk with the normal isotopy class of a normal disk. The projections of normal arcs and normal disks from X to M constitute the normal arcs and normal disks in the triangulated space (M, T). For each tetrahedron, there are four normal triangles and three normal quadrilaterals inside it up to normal isotopy. See figure 1(b). Note that there is a natural one-one correspondence between normal disks in T* and T. In the sequel, we will not distinguish normal disks in T or T*, and we will use △ and □ to denote the sets of all normal isotopy classes of normal triangles and quadrilaterals in the triangulation T and also T*. The sets of normal arcs in T* and T are denoted by A* and A, respectively. There are relationships among the sets V, E, F, T, △, □, A.
These incidence relations, which will be recalled below, are the basic ingredients for defining Haken's and Thurston's equations. Take t ∈ △, a ∈ A, q ∈ □, and σ ∈ T. The following notations will be used. We write a < t (and a < q) if there exist representatives x ∈ a, y ∈ t (and z ∈ q) so that x is an edge of y (and of z). We write t ⊂ σ and q ⊂ σ to denote that representatives of t and q are in the tetrahedron σ. In this case, we say the tetrahedron σ contains t and q. As a convention, we will always use the letters σ, e and q to denote a tetrahedron, an edge and a quadrilateral in the triangulation T, respectively. The normal surface equation is a system of linear equations defined in the space R^△ × R^□, introduced by W. Haken. It is defined as follows. For each normal arc a ∈ A, suppose σ, σ' are the two tetrahedra adjacent to the triangular face which contains a. (Note that σ may be σ'.) Then there is a homogeneous linear equation for x ∈ R^△ × R^□ associated to a: x(t) + x(q) = x(t') + x(q'), where t, q ⊂ σ, t', q' ⊂ σ' and t, t', q, q' > a. See figure 2(a). Recall that we identify the set of edges E with the quotient of E*. The index i : E* × □ → Z is defined as follows: i(y, q) = 1 if y, q lie in the same tetrahedron σ ∈ T* so that y ∩ q = ∅, and i(y, q) = 0 in all other cases. The index i : E × □ → Z is defined to be i(e, q) = Σ_{y∈e} i(y, q). See figure 2(b) for a picture of i(e, q) = 1, 2. For simplicial triangulations, i(e, q) = 1 means that the quadrilateral q faces the edge e in a tetrahedron σ, i.e., q ∩ e = ∅ and e, q ⊂ σ. In general, i(e, q) ∈ {0, 1, 2}; however, for simplicial triangulations, i(e, q) ∈ {0, 1}. Normal surfaces and tangential angle structures Given x ∈ R^△ × R^□, we will call x(t) (t ∈ △) and x(q) (q ∈ □) the t-coordinate and q-coordinate (triangle and quadrilateral coordinates) of x. Haken's normal surface equation addresses the following question.
Given a finite set of normal triangles and normal quadrilaterals in a triangulation T, when can one construct a normal surface with these given triangles and quadrilaterals as its intersections with the tetrahedra? Haken's equation (2.1) is a set of necessary conditions. Spun normal surface theory addresses the following question, first investigated by Thurston. Suppose we are given a finite set of quadrilaterals in each tetrahedron. When can one construct a normal surface whose quadrilateral set is the given one? We can phrase it in terms of the normal coordinates as follows. Given a vector z ∈ R^□, when does there exist a solution to Haken's equation (2.1) whose projection to R^□ is z? The question was completely solved in , , and . We will interpret their results in terms of angle structures. Recall that a (Euclidean type) angle structure, introduced by Casson, Rivin and Lackenby, is a vector x ∈ R^□_{>0} so that for each tetrahedron σ ∈ T, Σ_{q⊂σ} x(q) = π, (2.4) and for each e ∈ E, Σ_{q∈□} i(e, q)x(q) = 2π. (2.5) These two conditions (2.4) and (2.5) have very natural geometric meanings. Suppose a hyperbolic manifold admits a geometric triangulation by ideal hyperbolic tetrahedra. The first equation (2.4) says that a normal triangle in a hyperbolic ideal tetrahedron is Euclidean, and the second equation (2.5) says that the sum of the dihedral angles around each edge is 2π. By definition, a tangential angle structure is a tangent vector to the space of all angle structures. The following is a result proved by Tollefson (for closed 3-manifolds) and by Kang-Rubinstein and Tillmann in all cases. The result was also known to Jaco. Let S_ns be the space of all solutions to Haken's homogeneous linear equations (2.1). Given a finite set X, the standard basis of R^X will be denoted by X* = {x* ∈ R^X | x ∈ X}, so that x*(y) = 0 if y ∈ X − {x} and x*(x) = 1.
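As a toy illustration (not taken from the paper), Haken's matching equation (2.1) can be checked mechanically once the arc incidences are listed. The sketch below uses a purely hypothetical incidence table; the vertex-linking solution, with every triangle coordinate 1 and every quadrilateral coordinate 0, satisfies every matching equation automatically, since both sides reduce to 1 + 0.

```python
# Hypothetical sketch: each normal arc 'a' yields one Haken matching equation
#   x(t) + x(q) = x(t') + x(q'),
# where (t, q) and (t', q') are the triangle/quad pairs meeting 'a' in the
# two tetrahedra adjacent to the face containing 'a'.

def satisfies_haken(x, arcs):
    """x: dict mapping normal-disk labels to coordinates.
    arcs: list of ((t, q), (t2, q2)) incidence pairs, one per normal arc."""
    return all(x[t] + x[q] == x[t2] + x[q2] for (t, q), (t2, q2) in arcs)

# Invented incidence data for two tetrahedra glued along one face:
arcs = [(("t1", "q1"), ("t2", "q2")),
        (("t3", "q1"), ("t4", "q2")),
        (("t5", "q3"), ("t6", "q4"))]

labels = {d for pair in arcs for disks in pair for d in disks}
# Vertex-linking solution: all triangle coords 1, all quad coords 0.
vertex_linking = {d: (1 if d.startswith("t") else 0) for d in labels}
print(satisfies_haken(vertex_linking, arcs))  # True: 1 + 0 = 1 + 0 per arc
```

The same checker rejects any assignment that unbalances a single arc equation, which is the algebraic shadow of a surface failing to close up across a face.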
We give R^X the standard inner product (·,·) so that X* forms an orthonormal basis. For a triangulated closed pseudo 3-manifold (M, T), let Proj : R^△ × R^□ → R^□ be the projection, where R^□ has the standard inner product so that {q* | q ∈ □} is an orthonormal basis. For a short proof of this theorem, see . This result is very important for us to relate normal surfaces to critical points of the volume function on the space of all circle-valued angle structures. Neumann-Zagier Poisson structure and Thurston's gluing equation The Neumann-Zagier Poisson structure on R^□, introduced in , is of fundamental importance for studying triangulated 3-manifolds and in particular for Thurston's gluing equation. We will recall its definition and derive some of its properties in this section. See also and for different proofs. The Neumann-Zagier Poisson structure Recall that our triangulated pseudo 3-manifolds (M, T) are oriented so that each tetrahedron has the induced orientation. Since a pair of opposite edges {e, e'} in a tetrahedron σ is the same as a normal quadrilateral q ⊂ σ with i(e, q) = 0, by lemma 2.1, for each tetrahedron σ in T, there exists a natural cyclic order on the three quadrilaterals q₁, q₂, q₃ in σ. We denote the cyclic order by q₁ → q₂ → q₃ → q₁, and write q → q' in σ if q, q' are in the same tetrahedron σ and q → q' in the cyclic order. Define a map w : □ × □ → R by w(q, q') = 1 if q → q', w(q, q') = −1 if q' → q, and w(q, q') = 0 otherwise. The Neumann-Zagier skew-symmetric bilinear form, still denoted by w : R^□ × R^□ → R, is defined to be w(x, y) = Σ_{q,q'∈□} w(q, q') x(q) y(q'). From the definition, it is evident that w(x, y) = −w(y, x). Let Z be the linear subspace {x ∈ R^□ | for all σ ∈ T, Σ_{q⊂σ} x(q) = 0}. Then the Neumann-Zagier symplectic 2-form is the restriction of w to Z. It provides an identification between Z and the dual space Z*. Proof. We need to show, for any x ∈ R^E and y ∈ Z, that the two sides of the claimed identity agree. Indeed, the left-hand side is Σ_e A(y)(e) x(e) = Σ_{e,q} x(e) i(e, q) y(q). The right-hand side is Σ_{e,q} i(e, q) x(e) y(q). Here the last equation comes from (3.3).
This ends the proof. If both end points of e are v, then the edge e is counted twice in the summation Σ_{e>v} x(e). The dual map B* : R^V → R^E is given by B*(y)(e) = Σ_{v<e} y(v). Proof. (See also .) Since the second sequence is the dual of the first, it suffices to prove that one of them is exact. First, BA = 0 follows from the definition of Z. Furthermore, it is easy to see that B* is injective. Indeed, if B*(y) = 0 for some y ∈ R^V, then by definition, y(v) + y(v') = 0 whenever v, v' form the end points of an edge. Now for any v ∈ V, take a triangle in T with vertices v₁ = v, v₂, and v₃. Then the equations y(vᵢ) + y(vⱼ) = 0 for i ≠ j in {1, 2, 3} imply that y(vᵢ) = 0, i.e., y(v) = 0. It remains to prove that ker(A*) ⊂ Im(B*). Suppose x ∈ R^E is such that A*(x) = 0, i.e., for all q ∈ □, Σ_e i(e, q) x(e) = 0. Spelling out the details of the above equation, we see that it is equivalent to the existence, in each tetrahedron σ, of a function y(·, σ) on the vertices of σ with x(ē) = y(v, σ) + y(v', σ) for each edge e = vv' of σ. We claim that the above equation implies that y(v, σ) = y(v, σ') for any other tetrahedron σ' > v. Assuming this claim, and taking y(v) = y(v, σ), we then have x(e) = Σ_{v<e} y(v), i.e., x = B*(y), or x ∈ Im(B*). To see the claim, let us first assume that σ and σ' share a common triangle face which has v as a vertex. Say the three vertices of the triangle face are v₁ = v, v₂, and v₃, and write x_ij for the value of x at the edge v_i v_j. Then equation (3.4) says that y(vᵢ, σ) + y(vⱼ, σ) = x_ij = y(vᵢ, σ') + y(vⱼ, σ') for i ≠ j. This system of three equations has a unique solution, namely y(vᵢ, σ) = y(vᵢ, σ') = (x_ik + x_ij − x_jk)/2 for {i, j, k} = {1, 2, 3}. Now in general, if σ and σ' are two tetrahedra in T which have a common vertex v, then by the definition of pseudo 3-manifolds, there exists a sequence of tetrahedra σ₁ = σ, σ₂, ..., σ_n = σ' so that for each index i, σᵢ and σᵢ₊₁ share a common triangle face which has v as a vertex. Thus, by repeating the same argument just given, we see that y(v, σ) = y(v, σ'). By (3.3) and proposition 3.1(b), the above is equal to the claimed expression, and this ends the proof. If the right-hand side of (3.6) equals 1 for all edges, we say that the assignment satisfies Thurston's algebraic equation.
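The unique solution of the 3×3 system in the proof above can be verified directly; the following sketch (with arbitrary illustrative edge weights) checks that y(vᵢ) = (x_ik + x_ij − x_jk)/2 reproduces x_ij = y(vᵢ) + y(vⱼ) for every pair.

```python
from itertools import combinations

# Arbitrary illustrative edge weights x_ij on a triangle v1 v2 v3.
x = {(1, 2): 5.0, (1, 3): -2.0, (2, 3): 7.5}

def xval(i, j):
    return x[(min(i, j), max(i, j))]

# Claimed unique solution: y(v_i) = (x_ik + x_ij - x_jk) / 2.
y = {}
for i in (1, 2, 3):
    j, k = [m for m in (1, 2, 3) if m != i]
    y[i] = (xval(i, k) + xval(i, j) - xval(j, k)) / 2

# Check the defining equations y(v_i) + y(v_j) = x_ij.
print(all(abs(y[i] + y[j] - xval(i, j)) < 1e-12
          for i, j in combinations((1, 2, 3), 2)))  # True
```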
Thurston's equation Since a pair of opposite edges in a tetrahedron is the same as the normal isotopy class of a quadrilateral, we see that Thurston's equation is defined on C^□. To be more precise, given z ∈ C^□, we say z satisfies the generalized Thurston equation if the following assertions hold: if q → q' in σ, then z(q') = 1/(1 − z(q)), and if e ∈ E, then ∏_q z(q)^{i(e,q)} = ±1. (3.7) If the right-hand side of (3.7) equals 1 for all edges, we say z satisfies Thurston's equation. This equation was introduced by Thurston in 1978. He used it to construct the complete hyperbolic metric on the figure-eight knot complement. Since then, many authors have studied Thurston's equation. See for instance , , , , and others. This equation was originally defined for ideally triangulated 3-manifolds with torus boundary, i.e., closed triangulated pseudo 3-manifolds (M, T) so that each vertex link is a torus. We would like to point out that Thurston's equation (3.6) is defined on any closed triangulated oriented pseudo 3-manifold. It was first observed by Yoshida that a solution to Thurston's equation produces a representation of the fundamental group π₁(M − V) to PSL(2, C), where V is the set of all vertices. Thus, in the broader setting, solving Thurston's equation amounts to finding PSL(2, C) representations of the fundamental group. The recent work of seems to have rediscovered equation (3.6) independently while working on TQFT. Let D(T) be the space of all solutions to Thurston's equation in C^□. By definition, D(T) is an algebraic set. There are several very nice results known for D(T). Let H = {w ∈ C | Im(w) > 0} be the upper-half plane. Theorem 3.4 (Choi) The set D(T) ∩ H^□ is a smooth complex manifold. Her proof makes an essential use of Neumann-Zagier's symplectic form (theorem 3.3). Another result on Thurston's equation is the work of Tillmann and Yoshida relating degenerations of solutions of Thurston's equation to normal surface theory. See also the work of and .
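The three shape parameters of an oriented tetrahedron, related cyclically by z' = 1/(1 − z) and z'' = 1 − 1/z, always satisfy z z' z'' = −1, so for z in the upper half plane the three arguments sum to π. A quick numerical sanity check (illustrative values only):

```python
import cmath

def shapes(z):
    """Return the cyclically ordered shape parameters (z, z', z'')."""
    return z, 1 / (1 - z), 1 - 1 / z

for z in (0.5 + 0.8j, 2.0 + 0.1j, cmath.exp(1j * cmath.pi / 3)):
    z0, z1, z2 = shapes(z)
    # Product of the three shapes is always -1 ...
    assert abs(z0 * z1 * z2 + 1) < 1e-12
    # ... so for z in the upper half plane the three arguments sum to pi,
    # matching the Euclidean-triangle condition (2.4) for angle structures.
    angle_sum = cmath.phase(z0) + cmath.phase(z1) + cmath.phase(z2)
    assert abs(angle_sum - cmath.pi) < 1e-12
print("ok")
```

The last test value, z = e^{iπ/3}, is the shape of the regular ideal tetrahedron, for which all three parameters coincide.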
The geometry behind their construction was first observed by Thurston. Though this work does not address conjecture 1 in the introduction, it does indicate a relationship between Thurston's equation and Haken's equation. Here is Tillmann's construction. Suppose z_n ∈ D(T) is an unbounded sequence of solutions to Thurston's equation (3.7) so that for each q ∈ □, the limit u(q) = lim_{n→∞} ln |z_n(q)| / (1 + Σ_{q'∈□} (ln |z_n(q')|)²) exists. Take the logarithm of equation (3.7) for z_n, divide the resulting equation by 1 + Σ_{q'∈□} (ln |z_n(q')|)², and let n → ∞. We obtain, for each edge e ∈ E, Σ_q i(e, q) u(q) = 0. (3.8) By definition, u(q) = 0 unless lim_{n→∞} z_n(q) = 0 or ∞. Furthermore, if lim_n z_n(q) = 1 and q → q' → q'', then lim_n z_n(q') = ∞ and lim_n z_n(q'') = 0, and u(q'') = −u(q'). Let I = {q ∈ □ | lim_n z_n(q) = 1}, and for q ∈ I, let a_q = u(q') ≥ 0 where q → q'. Substituting into (3.8), we obtain for each e ∈ E, Σ_{q∈I} v(q) W(e, q) = 0, (3.9) where v = Σ_{q∈I} a_q q* and W(e, q) = i(e, q') − i(e, q'') for q → q' → q''. Equation (3.9) appeared in the work of Tollefson, in which he proved that if (M, T) is a closed 3-manifold, then (3.9) gives a complete characterization of the quadrilateral coordinates of solutions to Haken's equation. Namely, if M is closed, a vector v ∈ R^□ is in Proj(S_ns) if and only if (3.9) holds for all e ∈ E. Thus, by Tollefson's theorem, the specific v = Σ_{q∈I} a_q q* belongs to Proj(S_ns). As a consequence, one has Theorem 3.5 (Tillmann) For a closed triangulated 3-manifold (M, T), the logarithmic limits of D(T) correspond to solutions of Haken's normal surface equation. We remark that Tillmann's theorem in is more general and works for all pseudo 3-manifolds. We state it in the above form for simplicity. Furthermore, Tillmann observed in that the solution v has the property that there is at most one non-zero quadrilateral coordinate in each tetrahedron. Thus if all coefficients a_q are non-negative integers, then the vector v produces an embedded normal surface in the manifold. It follows from the definition that for each e ∈ E, the vector u_e = Σ_q W(e, q) q* (3.10) is in TAS(T).
What Tollefson proved, using the language of TAS, is that for a closed triangulated 3-manifold (M, T), the set {u_e | e ∈ E} generates the linear space TAS(T). A generating set of TAS(T) for all closed triangulated pseudo 3-manifolds (M, T) was found in the work of Kang-Rubinstein and Tillmann. In recent work , Yang is able to construct many solutions of Thurston's equation on closed triangulated 3-manifolds (M, T) with the property that each edge has distinct end points. Let SAS(T) be the set of all S¹-angle structures on the triangulation T. If x ∈ SAS(T) and v ∈ TAS(T), then xe^{iv}, defined by (xe^{iv})(q) = x(q)e^{iv(q)}, is still in SAS(T). We use this to identify the tangent space of SAS(T) with TAS(T). The Lobachevsky-Milnor volume (or simply the volume) of an S¹-angle structure x is defined to be vol(x) = Σ_{q∈□} Λ(arg(x(q))), where arg(w) is the argument of a complex number w and Λ(t) = −∫₀ᵗ ln |2 sin(s)| ds. The volume formula is derived from the volume of an ideal hyperbolic tetrahedron. See Milnor . It is well known that Λ : R → R is a continuous function with period π. Thus, vol : SAS(T) → R is a continuous function. Our goal is to relate the critical points of vol with the topology and geometry of the 3-manifold. Using volume maximization to find geometric structures based on angle structures for manifolds with cusps was introduced by Casson and Rivin. In a recent work, Guéritaud used the tool to prove the existence of hyperbolic metrics on once-punctured torus bundles over the circle with Anosov holonomy. Our approach follows the same path in a more general setting. Existence of SAS and critical points of volume In , we proved a general theorem on the existence of real-valued prescribed-curvature angle structures on a triangulated pseudo 3-manifold. One can check that the proof in implies the following proposition. Also see for a proof.
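The Lobachevsky function Λ(t) = −∫₀ᵗ ln |2 sin s| ds admits the Fourier expansion Λ(t) = ½ Σ_{n≥1} sin(2nt)/n², which gives a quick way to evaluate it numerically. A small sketch (not from the paper) checking the well-known value 3Λ(π/3) ≈ 1.0149, the volume of the regular ideal hyperbolic tetrahedron:

```python
import math

def lobachevsky(t, terms=200000):
    """Lobachevsky function via its Fourier series
    Lambda(t) = (1/2) * sum_{n>=1} sin(2 n t) / n**2."""
    return 0.5 * sum(math.sin(2 * n * t) / n**2
                     for n in range(1, terms + 1))

# Volume of the regular ideal tetrahedron: all dihedral angles pi/3,
# so its volume is 3 * Lambda(pi/3) ~ 1.0149.
v3 = 3 * lobachevsky(math.pi / 3)
print(round(v3, 4))  # 1.0149

# Lambda has period pi (each term sin(2nt) is pi-periodic in t).
assert abs(lobachevsky(0.7) - lobachevsky(0.7 + math.pi)) < 1e-6
```

The same Λ evaluates the volume vol(x) of an S¹-angle structure by summing over the quadrilateral arguments.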
Following , we give a short proof of it for real-valued angle structures on ideally triangulated 3-manifolds with torus boundary (i.e., closed triangulated pseudo 3-manifolds so that each vertex link is a torus). The main idea of the proof for the general case is the same. Suppose otherwise that such a manifold (M, T) does not support a real-valued angle structure. Consider the linear map h : R^□ → R^T ⊕ R^E so that h(x)(σ) = Σ_{q⊂σ} x(q) and h(x)(e) = Σ_q i(e, q)x(q). Let Φ ∈ R^T ⊕ R^E be given by Φ(σ) = π and Φ(e) = 2π. Then the assumption that (M, T) does not support a real-valued angle structure means Φ ∉ h(R^□). Therefore, there exists a vector f ∈ R^T ⊕ R^E so that f is perpendicular to the image h(R^□) and (f, Φ) ≠ 0. This means that (f, h(x)) = 0 for all x ∈ R^□, i.e., h*(f) = 0, where h* is the transpose of h. In particular, the sum of the values of f at opposite edges in σ is independent of the choice of the edge pair. By lemma 2.1(b), we see that there is a map g defined on the pairs (v, σ) with v < σ so that f(e) = g(v, σ) + g(v', σ), where v, v' < e. By the same argument as the one we used in the proof of theorem 3.3, we see that g(v, σ) is independent of the choice of tetrahedra, i.e., we obtain g : V → R. The last equality is due to the fact that the number of vertices of a triangulation of the torus is equal to half of the number of triangles in the triangulation. Also, in the summations Σ_{σ>v} 1 and Σ_{e>v} 1, we count σ and e with multiplicities, i.e., if σ (or e) has k vertices which are v, then σ (or e) is counted k times in the sum. This ends the proof for manifolds with torus boundary. Proposition 4.1 guarantees that critical points for the volume function always exist. Here the concept of a critical point of the non-smooth function vol has to be clarified. It can be shown ( ) that for any point p ∈ SAS(T) and any tangent vector v of SAS(T) at p, the one-sided limit lim_{t→0⁺} (vol(pe^{itv}) − vol(p))/t exists. The main focus of our research is to extract topological and geometric information from critical points of the volume function on SAS(T).
Pursuing this direction, we have proved the following. Here are the key steps in the proof of theorem 4.2. Given x ∈ SAS(T), we say a tetrahedron σ ∈ T is flat with respect to x if x(q) = ±1 for all q ⊂ σ, and partially flat if x(q) = ±1 for one q ⊂ σ. Let U be the set of all partially flat but not flat tetrahedra and W = {q | x(q) = ±1, q ⊂ σ, σ ∈ U}. By analyzing the derivative of Λ(t) = −∫₀ᵗ ln |2 sin(s)| ds, we obtain the following main identity for u ∈ TAS(T). By taking u = u_e given by (3.10) and assuming x is a smooth critical point, we obtain a solution z ∈ C^□ to the generalized Thurston equation, where z(q) = (sin(arg(x(q'))) / sin(arg(x(q'')))) x(q) and q → q' → q''. This argument was known to Casson and Rivin. One may find a detailed argument in or . If x is a non-smooth critical point, then we deduce from (4.1) two equations holding for all u ∈ TAS(T), where g(u) is a linear function in u. Now we use the following simple lemma. Then for each index i there exist j ≠ i and λ_ij ∈ R so that the i-th linear function is λ_ij times the j-th. Using lemma 4.3 for (4.3), where the vector space V is TAS(T) and the linear functions are the u(q) with x(q) = ±1 together with g, we conclude that for each q with x(q) = ±1, there exists q₁ so that u(q) = u(q₁) for all u ∈ TAS(T). This shows that for all u ∈ TAS(T), the inner product (u, q* − q₁*) = 0. By theorem 2.2, q* − q₁* is in Proj(S_ns). Thus theorem 4.2(a) follows. Futer-Guéritaud's theorem In an unpublished work, David Futer and François Guéritaud proved a very nice theorem concerning the non-smooth maximum points of the volume function. The proof given below is supplied by Futer and Guéritaud. We are grateful to them for allowing us to present their proof in this paper. Proof (Futer-Guéritaud). Suppose x is a non-smooth maximum volume point in SAS(T). Let U be the set of all partially flat but not flat tetrahedra in x and W = {q ⊂ σ | σ ∈ U, x(q) = ±1} as above. Note that, by assumption, for each tetrahedron σ, there is at most one quadrilateral in W contained in σ.
To see the second condition, we use (4.2). By (3.9), for e ∈ E, the vector u_e = Σ_q W(e, q) q*, i.e., u_e(q) = W(e, q), is in TAS(T). Taking this u_e to be the vector u in (4.2), we obtain Σ_{q∈W} W(e, q) = 0. The last equation says Σ_q i(e, q') v(q') = 0. This verifies the claim. Now back to the proof of the theorem. For each point p ∈ SAS(T), let N(p) be the number of partially flat but not flat tetrahedra in p. For the maximum point x, we may assume N(x) > 0. We will produce a new maximum point y so that N(y) < N(x) as follows. Let v be the tangential angle structure constructed in the claim above. Consider the smooth path r(t) = xe^{itv} in SAS(T). Note that, by definition, for |t| small, N(r(t)) = N(x). Take t₀ with |t₀| smallest so that N(r(t)) = N(x) for all |t| < |t₀| and N(r(t₀)) < N(x). Indeed, by Futer-Guéritaud's theorem, we can produce a non-smooth maximum point y so that there are three distinct quadrilaterals q₁, q₂, q₃ in a tetrahedron with y(qᵢ) = ±1. Now we use theorem 4.2 to produce the corresponding 2-quad-type solutions xᵢ, one for each qᵢ, with xᵢ(qᵢ) ≠ 0. Note that we do not assume that x₁, x₂, x₃ are pairwise distinct. A stronger version of conjecture 1 is the following. Minimal triangulations with a cluster of three 2-quad-type solutions Our recent joint work with Stephan Tillmann shows the following. By the work of W. Thurston and others, it is known, without using the Ricci flow method, that manifolds in class (d) but not in cases (a), (b), (c) above are either Haken or hyperbolic. See for instance . Indeed, an irreducible, non-Haken, atoroidal, non-Seifert-fibered 3-manifold containing RP² # RP² # RP² has a two-fold cover which is a closed 3-manifold of Heegaard genus at most 2. Such a manifold admits a Z₂ action with 1-dimensional fixed point set. By Thurston's orbifold theorem, one concludes that the manifold is hyperbolic. The proof of theorem 1.3 makes essential use of Jaco-Rubinstein's work on 0-efficient triangulations.
We analyze carefully the cluster of three 2-quad-type solutions of Haken's normal surface equation constructed from theorem 1.2. Theorem 1.3 takes care of the topology of closed minimally triangulated 3-manifolds which have non-smooth maximum volume points. We do not know if theorem 1.3 can be improved by using only one 2-quad-type solution instead of a cluster of three 2-quad-type solutions. Such an improvement would help in reproving the Poincaré conjecture. For instance, one could weaken conjecture 1 by replacing the cluster of three 2-quad-type solutions by one 2-quad-type solution. Another related conjecture is the following. Conjecture 3 Suppose (M, T) is a minimally triangulated closed orientable 3-manifold so that one edge of T has the same end points and is null homotopic in M. Then there exists a cluster of three 2-quad-type solutions on T. By theorem 1.3, one sees that conjecture 3 implies the Poincaré conjecture without using the Ricci flow. Some open problems Another potential approach to conjecture 1 is to use volume optimization on a space closely related to SAS(T). Let W(T) be the space {z ∈ C^□ | if q → q', then z(q') = 1/(1 − z(q)), and for each edge e, the product ∏_q z(q)^{i(e,q)} is a positive real number}. The volume function vol : W(T) → R is still defined. The maximum points of the volume are related to the solutions of Thurston's equation. In fact, a critical point of the volume function in the set W(T) ∩ (C − R)^□ gives a solution to Thurston's equation. It is conceivable that the following holds. The first step in carrying out this approach is to find conditions on the triangulation T so that W(T) is non-empty. To this end, we consider solving Thurston's equation over the real numbers, i.e., z ∈ R^□. Here is a step toward producing a real-valued solution to Thurston's equation. The motivation for the definition comes from taut triangulations and real-valued solutions to Thurston's equation.
Indeed, if z is a real-valued solution to Thurston's equation, then there is an associated Z₂-taut structure f defined by: f(q) = 0 if z(q) > 0 and f(q) = 1 if z(q) < 0. Another motivation comes from taut triangulations. Suppose T is a taut triangulation, i.e., there is a map g : □ → {0, π} so that for each tetrahedron σ, Σ_{q⊂σ} g(q) = π, and for each edge e, Σ_q i(e, q)g(q) = 2π. Then one defines a Z₂-taut structure by f(q) = g(q)/π. A very interesting question is to find conditions on T so that Z₂-taut structures exist. Is it possible that the non-existence of Z₂-taut structures implies the existence of some special solutions to Haken's normal surface equation? Tillmann and I observed that the equations for Z₂-taut structures are non-linear but quadratic in f(q). Indeed, a vector f ∈ Z₂^□ is a Z₂-taut structure if and only if condition (b) in definition 5.1 holds and for each tetrahedron σ, Σ_{q⊂σ} f(q) = 1, (5.1) and Σ_{q≠q', q,q'⊂σ} f(q)f(q') = 0. Condition (b) in definition 5.1 together with (5.1) should be considered as the definition of a Z₂-angle structure. We end the paper with several questions. Question 1. Given a triangulated pseudo 3-manifold (M, T), when does there exist a Z₂-taut structure? Can one relate the non-existence of Z₂-taut structures to some special solutions to Haken's equation? Question 2. When is a critical point of the volume function of Morse type (i.e., when is the Hessian matrix non-degenerate), and when is the volume function a Morse function? Let v₃ be the volume of the ideal regular hyperbolic tetrahedron. Question 3. Is the Gromov norm of a closed 3-manifold, multiplied by v₃, among the critical values of the volume function? Question 4. Is it possible to produce a Floer-type homology theory associated to the volume function on SAS(T) which would be a topological invariant of the 3-manifold?
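Within a single tetrahedron, the two quadratic conditions above can be checked by brute force over Z₂³. The sketch below (illustrative, not from the paper) confirms that exactly the three vectors with a single non-zero coordinate satisfy both, i.e., the per-tetrahedron equations pick out one quadrilateral per tetrahedron.

```python
from itertools import product

# Per-tetrahedron conditions over Z2 = {0, 1} on the three quad values:
#   sum_q f(q) = 1 (mod 2)   and   sum_{q != q'} f(q) f(q') = 0 (mod 2).
def taut_in_tetrahedron(f):
    linear = sum(f) % 2 == 1
    quadratic = (f[0]*f[1] + f[0]*f[2] + f[1]*f[2]) % 2 == 0
    return linear and quadratic

solutions = [f for f in product((0, 1), repeat=3) if taut_in_tetrahedron(f)]
print(solutions)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

Note that (1, 1, 1) satisfies the linear condition alone but fails the quadratic one, which is why both equations are needed.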
SOFTWARE-HARDWARE COMPLEX OF QUALIFICATION EVALUATION OF MI-171 HELICOPTER SIMULATOR The information model of the simulator should be as similar as possible to the information model of the real helicopter. Consequently, the basic components of the simulator are the imitation systems conveying to the sense organs of the crew the information that creates an adequate picture of the flight, including eyesight - a visualization system, flight control equipment, etc.; hearing - a system of aviation noise simulation; vestibular apparatus - a motion generation system; tactile channel - a system for loading the control levers. The research objective. The listed systems form the informational model of the simulator, which should be coordinated with the movement of the helicopter. A mathematical model of the helicopter movement dynamics and the models of the mentioned systems provide this coordination. To provide the operation of the complex flight simulator, nonlinear mathematical models of helicopter dynamics based on the modified discrete vortex method have been developed. The models describe the flow around the volumetric design of the propeller apparatus and allow simulating a real-time flight in different modes, including the "post-stall" condition. The statement of basic materials. The principles of and approaches to the qualification evaluation of complex flight helicopter simulators in accordance with the requirements of the EU (CS-FSTD (H)) and ICAO (Doc 9625) are analyzed. The performance capabilities of a complex full-flight Mi-171 helicopter simulator created by SPA "AVIA" are described. The necessity of certification of flight simulators in compliance with international standards is substantiated. The analysis of the validation procedure is performed. The structure and functioning of the software complex designed to automate validation tests are described. Conclusions. An algorithm for obtaining a conclusion on the test result for one of the tests is presented. Urgency of the research.
Flight safety is a pressing practical issue whose solution influences the future of Ukraine as a transport state. As a consequence of technical progress, aviation technology is becoming more and more sophisticated and reliable. However, the intensity of the impact on a person caused by various adverse factors, including information overloads, is constantly increasing. Statistics show that up to 80 % of accidents and disasters occur due to pilot errors. The reason for about 35 % of these errors is lack of professional training, and about 40 % of the errors are caused by inexperience of the crew. Target setting. The cost of aircraft, the cost of crew training and the "price" of error increase simultaneously. The cost of professional training of helicopter crews on complex flight simulators is an order of magnitude lower than on real helicopters. Therefore, today the focus of increasing the safety of flights is on improving the level of flight training and flight experience via the use of flight simulators with a high level of information adequacy to a real helicopter. Actual scientific researches and issues analysis. In order to ensure the possibility for the trained crew to obtain the official documents stating their professional training level, the simulator must be certified according to national and international requirements, i.e. the adequacy of its handling qualities to the corresponding qualities of the simulated helicopter must be guaranteed. Consequently, a complex scientific and practical task is the development of an adequate model of the flight dynamics of a helicopter. The research objective. In order to enable helicopter crews to receive high-level professional training, SPA "AVIA" (Kremenchuk) developed and produced a complex full-flight simulator (FSTD) of the Mi-171 helicopter, type V (by one classification) or level FFS(D) (by another). 
The equipment allows simulating the behavior of the helicopter in all flight modes, including critical ones: control failure, landing in the mode of main lift rotor autorotation, etc.; developing practical recommendations for the flight crew; as well as training the flight crew to find ways out of emergencies. Receiving information about the flight mode, the parameters of the onboard systems, the external environment, etc., the crew forms the information model of the flight. The information model of the simulator should be as similar as possible to the information model of the real helicopter. Consequently, the basic components of the simulator are the imitation systems providing the influence of the information creating an adequate picture of the flight on the sense organs of the crew, including eyesight - a visualization system, flight control equipment, etc.; hearing - a system of aviation noise simulation; vestibular apparatus - a motion generation system; tactile channel - a system for loading control levers. The listed systems form the informational model of the simulator, which should be coordinated with the movement of the helicopter. A mathematical model of the helicopter movement dynamics and the models of the mentioned systems provide this coordination. To provide the operation of the complex flight simulator, nonlinear mathematical models of helicopter dynamics based on the modified discrete vortex method have been developed. The models describe the flow around the volumetric design of the propeller apparatus and allow simulating a real-time flight in different modes, including the "post-stall" condition. The statement of basic materials. The guidance on the criteria for qualifying flight simulators designates the following levels of adequacy: "N (None or Not Applicable)" - not required; "G (Generic)" - basic; "R (Representative)" - typical; "S (Specific)" - high. 
For instance, the high level of adequacy S means that a helicopter of a specific type is being simulated, and initial and periodic validation tests should be made on the basis of an objective comparison of the simulator data with the approved data of the helicopter. FSTD characteristics important for training, testing and checking flight crew members need to be evaluated. They include the reactions of the FSTD in the longitudinal and lateral motion directions; flight technical characteristics during takeoff, hovering and moving, climbing, cruising flight, descent, landing approach, power-on landing and landing in autorotation; characteristics while performing all-weather flights, as well as while checking control systems; and, if necessary, the functions performed at the pilots' and the instructor's workplaces. To guarantee correct functioning, the performance of the systems simulating acceleration, vibrational, visual and sound effects is also evaluated. Validation data of flight tests are the performance data, flying qualities and other necessary parameters recorded on a helicopter by means of a calibrated data-acquisition system with sufficient resolution and experimentally proved accuracy, which allow forming a set of parameters that can be compared with similar FSTD parameters. The approved data are the performance data of the helicopter, collected through the application of appropriate engineering practice and accepted by the National Aviation Administration, which is responsible for the qualification for use. The best sources of such data are helicopter producers, although data from other competent sources may also be considered. For instance, the validation tests of the simulator by SPA "AVIA" rest on data obtained in flight tests, agreed with the National Aviation Administration, which state that "The Mi-171, Mi-8AMT, Mi-172 and Mi-8MTV helicopters have the same flight performance and operational characteristics". 
In order to compare the performance data of the helicopter with those of the helicopter simulator, a system of tests in the form of a table of validation tests is given. The requirements of one of the tests, comparing the balancing curves of the helicopter and the simulator in horizontal flight, are given in Table 1. The resulting balancing curves should be within the tolerance limits relative to the results of flight tests. To automate the validation tests, the software package TSFlightChart (Fig. 1) has been developed. It allows making the following processes operational:
- receiving real-time flight parameters of the simulator and displaying their diagrams on the flight control officer's monitor;
- managing the recording of flight parameters (recording, stopping, pausing, putting custom labels, etc.);
- saving flight information records;
- graphing flight parameter changes in time;
- editing diagrams (changing the set of displayed parameters, cutting out the desired areas, changing the scale, etc.);
- saving the edited records in digital and graphic formats;
- processing flight information in accordance with the tasks of a particular test;
- generating a formal report on the results of the test.
The examples below demonstrate a fragment of the content of the flight task and a general algorithm for forming the conclusion about the result of test 1.f, Level Flight Performance and Trimmed Flight Control Positions.
1. Generating a values array of the parameters to be evaluated.
2. Preparing the zones where a constant test speed was maintained during the test. Determining the beginning and end reference numbers of the zones (according to the labels placed during the test), their quantity and length.
3. Calculating the mean speed on each of the zones. Generating an array of the speeds which were kept constant during the test.
4. Calculating the mean values of the parameters to be evaluated on each of the zones.
5. 
Approximating the discrete dependencies, on the average speed, of the mean values of the parameters obtained on stage 4 by power polynomials of increasing degree until a sufficiently large coefficient of approximation reliability is obtained.
6. Obtaining continuous balancing curves by spline interpolation of the discrete dependencies obtained on stage 5.
7. Comparing the balancing curves obtained on stage 6 with those obtained in the flight tests. Identifying the maximum deviation of the simulator. Making a conclusion about the test result.
Adjusting the actual performance data of the simulator to the tolerance limits determined by the requirements of the CS-FSTD(H) validation tests is used as a method for refining the parameters of the mathematical model of the complex helicopter simulator. The list of available characteristics of stability and control, performance data and field performance (obtained as a result of flight tests) for refining the parameters of the mathematical model is determined by comparing the list of required characteristics and parameters contained in CS-FSTD(H) with the corresponding characteristics and parameters contained in the Helicopter Test Acts. Conclusions. Thus, the developed training complex is certified according to the rules of CS-FSTD, implemented in batch production at SPA "AVIA" and accepted for supply by the Armed Forces of Ukraine in accordance with an Order of the Minister of Defense of Ukraine. To make validation tests automatic, the software package TSFlightChart has been developed. The next stage of the software and hardware complex development should be the development and implementation of an automatic flight control system enabling automatic validation tests. This, in turn, will significantly accelerate validation tests and reduce their cost.
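As a rough illustration of the zone-averaging and tolerance-comparison core of the test-1.f algorithm described above, the following Python sketch can be considered. All names and the data layout are assumptions for illustration; the polynomial-approximation and spline-interpolation stages are elided, and each zone's mean value is simply compared with the flight-test value at the nearest reference speed.

```python
# Sketch of the core of validation test 1.f (level-flight balancing curves).
# Hypothetical data layout: a recorded flight log as (speed, value) samples,
# constant-speed zones as sample-index ranges, and approved flight-test data
# as a speed-to-value mapping.

def mean(xs):
    return sum(xs) / len(xs)

def evaluate_test(log, zones, reference, tolerance):
    """Return (max_deviation, passed) for one trimmed-flight parameter.

    log       -- list of (speed, value) samples recorded on the simulator
    zones     -- list of (i0, i1) sample-index ranges where speed was constant
    reference -- dict {speed: flight-test value} (the approved data)
    tolerance -- allowed absolute deviation
    """
    # Stages 3-4: mean speed and mean parameter value per constant-speed zone.
    points = []
    for i0, i1 in zones:
        segment = log[i0:i1]
        points.append((mean([s for s, _ in segment]),
                       mean([v for _, v in segment])))
    # Stage 7 (simplified): maximum deviation from the approved data,
    # taken at the nearest reference speed for each zone.
    max_dev = 0.0
    for spd, val in points:
        nearest = min(reference, key=lambda r: abs(r - spd))
        max_dev = max(max_dev, abs(val - reference[nearest]))
    # Conclusion on the test result: pass if within the tolerance limits.
    return max_dev, max_dev <= tolerance
```

For example, two zones averaging to (100, 1.1) and (120.5, 2.1) against reference values 1.0 at 100 and 2.0 at 120 give a maximum deviation of about 0.1, which would pass a 0.5 tolerance.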
The blood pressure lowering effect of the orally effective converting-enzyme inhibitor captopril (SQ 14.225) was investigated in a double-blind study in 20 patients with moderate essential hypertension. During treatment with captopril alone (150 mg t.i.d.) blood pressure dropped from an average of 167/111 to 148/99 mm Hg lying and from 164/113 to 143/99 mm Hg upright, whereby 4 patients showed normalisation of their blood pressure (less than 145/95 mm Hg). There was no significant change in pulse frequency. Addition of hydrochlorothiazide led to normalisation of blood pressure in all patients. Even after 6 months of treatment, tolerance had not developed. The antihypertensive effect of captopril alone paralleled that of hydrochlorothiazide, and there were no side effects. Activity of the converting enzyme in serum as well as plasma concentrations of angiotensin II and aldosterone were clearly lowered by captopril, whereas plasma renin activity increased significantly. Renal sodium and water excretion and glomerular filtration rate were not influenced.
Antiplasmodial Sesquiterpenoid Lactones from Trichospira verticillata: Structure Elucidation by Spectroscopic Methods and Comparison of Experimental and Calculated ECD Data. A dichloromethane extract of Trichospira verticillata from the Natural Products Discovery Institute was discovered to have good antiplasmodial activity (IC50 ∼5 μg/mL). After purification by liquid-liquid partition and C18 reversed-phase HPLC, four new germacranolide-type sesquiterpenoid lactones named trichospirolides A-D were isolated. The structures of the new compounds were elucidated by analysis of their 1D and 2D NMR and MS data. The relative and absolute configurations were assigned based on a comparison of calculated and experimental ECD and UV spectra, specific rotations, internuclear distances, and coupling constants for all possible diastereomers of each compound. Among these four compounds, the conjugated dienone 1 displayed the most potent antiplasmodial activity, with an IC50 value of 1.5 μM.
This post has been updated to reflect comment from Ron Paul.
Ron Paul won't attend the Conservative Political Action Conference this March after he was denied the $50,000 speaking fee he requested, Whispers has learned.
A CPAC spokeswoman confirmed that the American Conservative Union, which runs the conference, had invited the former Texas congressman and presidential candidate and that he declined the invitation.
"We do not give speaking fees, and unfortunately that is a barrier to some speakers," CPAC spokeswoman Laura Rigas tells Whispers.
Former Texas Rep. Ron Paul inspired legions of dedicated supporters during his 2008 and 2012 campaigns for the presidency. (Charlie Riedel/AP)
Megan Stiles, a spokeswoman for Paul's political organization Campaign for Liberty, says Paul isn't attending because he's retired. The former congressman's typical speaking fee is $50,000.
Though Paul attended CPAC most of the years he was in Congress, he skipped the conference last year to focus on campaigning, making him the only Republican presidential candidate not to attend.
When Paul addressed CPAC in 2011, he told an enthusiastic crowd that he was "glad to see the revolution was continuing."
But while Paul won't attend this year's conference, the American Conservative Union is keen to draw the kind of crowd he usually attracts: young and passionate.
The theme of this year's conference is "America's Future: The Next Generation of Conservatives," and the keynote closing address will be delivered by the young, fiery Republican Sen. Ted Cruz, of Texas.
CPAC also this year changed its student rate (a rate that applied to any age) to a "young conservative" rate. Any person under the age of 24 can now attend the three-day conference for $40, one-fifth the cost of the general rate.
Update, 4:45 p.m.:
Ron Paul issued a statement saying he declined to speak at CPAC due to a scheduling conflict.
In a statement provided to Whispers, Paul said that he did not turn down the invitation to speak because CPAC would not pay a speaker’s fee.
"While I have enjoyed speaking there in years past, I am unable to attend this year due to a previously scheduled engagement,” he said. Paul spokeswoman Stiles could not confirm whether Paul asked for his customary speaker's fee to deliver an address to CPAC.
Michael Hirsh was national editor for Politico Magazine from 2014–2016.
It’s time to acknowledge that Donald Trump and Boris Johnson have far more in common than funny hair, and that the movement once known as conservatism—to which both men retain only the barest connection—is taking on a new form, that of an unabashed, xenophobic nationalism. Trump and Johnson have tapped into a profound trend in world politics that isn’t going away anytime soon. Let’s call it the New Nationalism: a bitter populist rejection of the status quo that global elites have imposed on the international system since the Cold War ended, and which lower-income voters have decided—understandably—is unfair.
Displaced working people of the world are uniting—in their demand, paradoxically, for disunification. The common refrain is “we want our country back.” Back from whom or what is unclear, but the biggest bogeymen appear to be international institutions, open trade and (let’s be honest) the influx of brown-skinned migrants. It hardly seems an accident that Trump has made his slogan “America First” (and is often accused of racism and bigotry against Mexicans and Muslims), while the homicidal lunatic who shot and stabbed the anti-Brexit MP Jo Cox to death days before the Brexit vote shouted, over and over, “Put Britain first” (and was apparently a purchaser of white supremacist literature). Or that Trump has signaled his distaste for NATO and the U.S. alliance system around the world while a majority of Britons have rejected the greatest unification project in world history, the EU, and Europhobe-in-chief Boris Johnson, who could now take over the Tories, has all but assumed the mantle of ultranationalist party leader Nigel Farage as he declares this to be Britain’s “independence day.”
Perhaps most unsettling of all is that the U.S. and Europe are only catching up to a trend that has already taken hold elsewhere in the major industrialized nations: In Russia, Vladimir Putin was perhaps the harbinger of this new global nationalism (and Putin is no doubt gloating over the prospective weakening of the EU, whose unity—as well as sanctions—have threatened him). Putin rose to power exploiting the sense of humiliation that Moscow's proud elites felt at the hands of the West after the Soviet Union collapsed in late 1991, which was followed almost immediately by the Clinton administration's relentless efforts to bring what used to be the Soviet bloc countries—and post-Soviet Russia itself—into the Western sphere. That policy started with the high-handed (and mostly failed) economic advice Washington gave to Moscow about free-market economics in the early '90s—the era of "privatization" (ordinary Russians called it “grabitization”), which led directly to the reign of the hated oligarchs. Meanwhile NATO expanded fecklessly into the old Soviet bloc, aggravating anew the raw nerve of Russian paranoia about Western intentions. Cue Putin, and the fierce Russian nationalism he has used to lay claim to Crimea and part of Ukraine.
But in China too, for different reasons, nationalism is the order of the day. For the past two decades the mandarins of the communist party have encouraged nationalist fervor as a replacement for their failed socialist ideology; hence Chinese President Xi Jinping’s slogan of “realizing the great rejuvenation of the Chinese nation.” This has taken the form of aggression in the South China Sea, recalcitrance over open trade and rousing the masses to oppose China’s “humiliation” at the hands of foreign powers, as a 2013 official editorial put it. Even in Japan, which along with Great Britain has been America’s most loyal postwar ally, President Barack Obama was greeted by surprisingly large anti-U.S. protests on his visit in May, when Prime Minister Shinzo Abe hosted the G-7 leaders at the Ise Shinto shrine, which to some critics evoked Japan’s fanatic nationalism before and during World War II.
The question now is how far it will all go, this potential unwinding of the international economic system that too many of us have taken for granted—and which was designed in large part to preserve world peace. To be sure, the mere departure of Great Britain from its uneasy marriage to the European Union does not bode destruction; even in Britain, many voters (especially younger people) see the benefits of a united Europe and international alliance system that is far deeper than any that existed before in history. But it seems very likely that Brexit represents only a beginning, not an end, in the European story. “Brexit is German re-unification in reverse," Ivan Krastev of the Center for Liberal Strategies in Sofia, Bulgaria, told Politico Europe. “A period in European history that started in 1945 has ended today.”
Will things now start to disintegrate? History is not an encouraging guide.
What exactly is starting anew? Will the next to go be the Netherlands, where anti-immigration politician Geert Wilders is demanding a "Nexit" vote? Perhaps even France, the sine qua non of European unity? Maybe euro-disadvantaged countries like Italy and Spain? Or Hungary, where the increasingly autocratic Prime Minister Viktor Orban has sought to emulate Putin and embraced what he calls “a particular, nationalist approach,” declaring: “The new state that we are building in Hungary today is not a liberal state.” In some of the Western European countries such as France and Austria, the nationalist impulses are cloaked as a defense of the liberal Christian West against the Islamic threat. But the unavoidable fact is that many of these European nationalist parties, which have dwelt in obscurity for decades, are now enjoying real legitimacy; in May Norbert Hofer’s anti-immigrant Freedom Party just barely lost the presidency of Austria, winning 49.7 percent of the vote.
What is at stake most immediately is the world economy, as the $2 trillion drop in world market values on Friday showed, and U.S. and European markets took another bad hit on Monday. But in the longer run, world peace could be threatened as well. It’s important to note that, starting with the Bretton Woods agreement on world trade and the creation of the United Nations in 1945, this integrated postwar trading-and-alliance system was intended not to make our elites rich, but to keep the peace. The primary impetus behind the EU also was the prevention of another war. While the particulars of the 1992 Maastricht Treaty that created the eurozone were dryly economic, the unspoken subtext was always unmistakably political: Europeans had to unite, if only because continued disunity would keep them at the edge of the abyss. To put it more bluntly, everyone (especially the French, the original architects of the European Union) wanted to be protected from the Germans, and the Germans wanted to be protected from themselves, as then-Chancellor Helmut Kohl used to suggest publicly, saying repeatedly that the question of a monetary union was one of "war or peace." The German decision to support the European Monetary Union was a frank quid pro quo with the French for allowing German reunification: If the rest of you Europeans allow Germany to grow powerful again, we will hitch our future permanently and peacefully to a larger Europe.
Will things now start to disintegrate? History is not an encouraging guide. Most people, except for historians, forget that the pre-August 1914 era of globalization was also a moment of peace that the world took for granted—a halcyon time when, as John Maynard Keynes wrote, “The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth.” Back then there was complacency too, like that of Norman Angell, who infamously argued in The Great Illusion in 1910, only four years before the destruction of the Great War, that economic interdependence should prevent another major war. Instead there were two.
***
Today our elites would like to believe that conditions are much different, that democracy and global trade are far more entrenched and institutionalized, and the threat of nuclear Armageddon makes war too scary to contemplate. But perhaps conditions are not entirely different: The unaccountable monarchs of pre-World War I Europe have been replaced by the not-terribly-accountable elites of post-World War II Europe, and no one has a good solution for the dysfunction inside the EU.
How did a positive idea—that of more open trade, and peace-promoting international postwar institutions like the EU—come to be identified with this unsustainable economic system? It certainly wasn’t intentional, but faulty economics and an excess of faith in markets was largely to blame. The advocates of globalization and trade deals and capital markets in both parties plainly underestimated how badly the middle class would be hurt, even as they oversold the benefits of agreements like NAFTA. On the other side of the Atlantic, the problem of unaccountable elites dictating from Brussels has dogged the European project from the start, and in Britain, particularly, Euro-phobia—especially resentment of Germany—has always run just beneath the surface. "We can't get over the fact that they [the Germans] are much more powerful than we are," British historian Timothy Garton Ash said in the late 1990s. The war "was our finest hour—and our last." Boris Johnson and other advocates of Brexit claimed that the EU was choking the U.K. economy with what Johnson called “an opaque system of legislation: the vast and growing corpus of law enacted by a European Court of Justice from which there can be no appeal.” They argued that leaving the EU would mean an extra 350 million pounds a week for the ailing British National Health Service (though after the vote they appeared to back off this claim) and would dramatically cut immigration.
Today, the European Union remains a chimera, neither true working unity nor separate nationalities but something in between that never seems to run smoothly; this is especially true of the Eurozone, whose administrators refuse to confront the central contradiction of the euro concept (of which Britain is not a part): If the weaker and more indebted economies have no monetary means to recover—because they can't devalue their own currencies—then all European Monetary Union members have to submit to some form of fiscal integration that reduces their individual power over spending and taxes. This was one of the demands made by the rescue deal offered to Greece: Athens lost control of part of its budget. But the Germans, who dominate policy-making, have resisted submitting themselves to such a regime.
Thus, the long-awaited popular backlash has begun. Tellingly, in both the U.K. and the U.S., the rebellion against globalization and integration is embraced by both right and left. Even a former close ally of David Cameron, Steve Hilton, wrote in May that the EU has “become so complicated, so secretive, so impenetrable that it’s way beyond the ability of any British government to make it work to our advantage." Hilton called the EU "a stinking cesspit of corporate corruption gussied up in the garb of idealistic internationalism," and a majority of voters seemed to agree.
And that is someone who was known as a conservative. In the United States, Trump has also turned GOP conservatism on its head, seizing the party base for his own after its leaders failed to realize that the angry, often white, often older voters they took for granted no longer embraced trickle-down free trade.
In Europe, much of this trend is driven by anti-immigration fervor, especially since hundreds of thousands began fleeing the civil war in Syria and other unsettled places of the Middle East. But in Europe, as in the United States, the rising anti-immigrant sentiment seems more of a symptom than a cause; xenophobia becomes virulent usually only when people at home feel threatened; and they feel threatened when their jobs do. For the bottom half of societies in the West, the jobs are either not there or not considered good enough.
Postwar globalization achieved two major things: Open trade made conditions more equal between countries, but at the cost of creating more inequality within countries, thanks to the flood of industrial jobs that fled to cheaper shores, seeking a lower “China price,” as it was once called. Under U.S. trade policy embraced by both the Democratic and Republican parties, these trends were only encouraged, and their economic impact dismissed as a minor trade-off in exchange for cheaper consumer prices.
For the bottom half of societies in the West, the jobs are either not there or not considered good enough.
That is at the heart of the present rebellion by lower-income voters, who have borne the brunt of globalization in the major economies, including the U.S. and Britain. In the aggregate global trade does bring growth—but in the richer countries, that growth has been largely captured by elites and white-collar workers, and it often comes at the expense of people who actually make the things that get traded. As the Nobel-winning economist Michael Spence wrote in 2011, fully 98 percent of new U.S. jobs since 1990 have been lower-paying “nontradeable” (meaning not in goods and services that are traded abroad) jobs, especially in government and health care, while “employment barely grew in the tradeable sector of the U.S. economy, the sector that produces goods and services that can be consumed anywhere, such as manufactured products, engineering and consulting services. That sector, which accounted for more than 34 million jobs in 1990, grew by a negligible 600,000 jobs between 1990 and 2008.” Another study showed that the U.S. goods trade deficit with China alone from 2001 to 2013 eliminated or displaced 3.2 million U.S. jobs, three-fourths of which were in manufacturing.
The 2008 financial crisis and Great Recession dramatically accelerated this loss of wealth among lower-income voters, and just as dramatically widened the income gap. That in turn transformed our politics far more than both political parties understood, leading to the Trump-Sanders backlash. Nor is there any fix in sight because neither political party is willing to create a whole new welfare state to help the displaced, who are only going to be worse off, Spence argues, because “it is unlikely that government and health care in the U.S. will continue growing as much as it had before the current economic crisis.”
The EU, meanwhile, with a chronically malfunctioning Monetary Union at its heart, has indeed proved a bureaucratic nightmare. In a 2010 House of Commons study, the UK government estimated that about 50 percent of UK legislation with “significant economic impact” originated from EU legislation, mostly relating to agriculture, fisheries and trade with non-EU states. According to the advocacy group Business for Britain, which campaigned for a renegotiated deal with the EU, Brussels has issued no fewer than 3,589 new regulations totaling 13 million words only since David Cameron was elected prime minister in May 2010.
Thus, we are all finding out that internationalism often isn’t pretty. And in Britain, in other European countries and in the United States what we are witnessing now is a broad-based rejection of the tarnished idea that a world of transnational institutions bodes some kind of happier end state—or even, in what now seems a charmingly quaint idea, the “end of history.”
***
In the frustrating void left by that flawed ideology—and with nothing else to fill it, since parties across the spectrum refused to deal forthrightly with their various nations’ inequality problems—it’s only natural that the age-old standby, nationalism, would return in force. In the West, perhaps the most common thread is that people appear to feel that their political parties no longer represent them; just as in the United States both Democrats and Republicans signed on to the free-trade globalization agenda for a generation, in the UK both major parties ended up mostly pro-Europe (at least since Margaret Thatcher). And just as clueless about the backlash they were creating as their counterparts across the Atlantic.
It’s not just Bernie. It’s not just Trump. It’s a political earthquake, and it’s gone global.
The split in the Conservative Party mirrors to some degree what has happened to the Republicans, and Cameron seems as surprised about what just happened to him as Marco Rubio, John Kasich and Jeb Bush were when Trump humiliated them. The same goes for Hillary Clinton, who can’t seem to grasp why so many Democrats love Bernie and hate her. Trump and Sanders simply threw out the old conventional wisdom, which had taken hold of both parties; just as pro-EU ideology dominated both the Tories and Labour in Britain.
That’s why Clinton, the Grand Mistress of this global Status Quo, ought to be quaking over what happened last week: It’s not just Bernie. It’s not just Trump. It’s a political earthquake, and it’s gone global. And Clinton ought to be aware that every time a member of the GOP elite comes forward to endorse her—the latest being Hank Paulson, he of “Government Sachs” provenance—it’s probably only worse news for her campaign.
Is there any way of altering these now-ingrained trends—besides a return to protectionism, mercantilism and perhaps war? It is even possible that Brexit will shock EU bureaucrats into changes they have resisted until now: In the immediate aftermath of the British vote, the foreign ministers of Germany and France issued a paper calling for a European security compact, a common European asylum and migration policy and improvements to the monetary union. Franco-German business leaders, meanwhile, demanded "immediate, credible and visible measures to strengthen the governance" of the Euro area and said their countries should pursue "national reforms to make our economies stronger and more competitive to assure the sustainability of our social model." Even Boris Johnson sounded a bit conciliatory on Monday, writing in an op-ed in The Telegraph: “I cannot stress too much that Britain is part of Europe, and always will be. … British people will still be able to go and work in the EU; to live; to travel; to study; to buy homes and to settle down.”
But European ministers representing 28 member-states tend to issue a lot of papers, and politicians make a lot of promises they can’t keep (Johnson’s claim about Brits working and living in the EU suggests Britain will not be able to shut down immigration as readily as the Brexiters promised). And even if they went further and took some kind of action, most economists say it’s far too late to counter the most dire effects of globalization. The enrichment of developing countries at the expense of the rest—countries that are now producing high-value-added components that 30 years ago were the exclusive purview of advanced economies—“is a permanent, irreversible change,” says Spence. But what’s clear is that politicians interested in preserving the postwar order will need to figure out a new way to address the political discontent that springs out of the deeply flawed economic model the order is built on. It won’t be an easy adjustment, or an easy gulf to bridge: That model has been far better to politicians and administrators than the voters to whom they owe their jobs.
import fairygui = fgui;
Barzillai J. Chambers
Early life
Barzillai Jefferson Chambers was born in 1817 in Montgomery County, Kentucky, the son of Walker Chambers and Talitha Cumi Mothershead Chambers. Chambers lived on his father's Kentucky farm for the first twenty years of his life. In 1837, he followed his uncle, Thomas Jefferson Chambers, to Texas. The elder Chambers, who had lived in Texas since 1830, had raised a regiment to join the Texas Revolution and commissioned his nephew as a captain and his aide-de-camp. After helping his uncle to recruit soldiers in Kentucky, the two departed for Texas but arrived too late to see action. Chambers was discharged from the Texas army in 1838 and remained in the state, becoming a surveyor in the southern part of the state. The next year, he was appointed deputy surveyor for north central Texas, between the Brazos and Trinity rivers. The area had little white settlement at the time, and Chambers "narrowly escaped Indian attacks on several occasions."
Chambers continued to work as a surveyor in the 1840s while also engaging in land speculation. By 1850, he was promoted to district surveyor; he had, by then, acquired 10,000 acres of land in Navarro and Johnson counties. He devoted some of the land to farming, but also donated some lots to Johnson County for the erection of the county seat, in Cleburne. He became a lawyer in 1860, but never developed a sizable law practice. Chambers married three times: first in 1852 to Susan Wood, who died the next year; secondly in 1854 to Emma Montgomery, who died in childbirth the next year; and finally in 1861 to Harriet Killough. Chambers had no children who survived infancy with the first two wives, but with his third wife he would have three children: Mary, Patrick, and Isabella.
Business and political career
After the election of Republican Abraham Lincoln as President of the United States in 1860, Chambers, a Democrat, served on two committees in Navarro County that drafted resolutions opposed to Lincoln and racial equality. He supported Texas's secession from the Union in 1861 and adhesion to the newly formed Confederacy, but, being 44 years old at the subsequent outbreak of the Civil War, did not immediately enlist. Although he was exempted from the Confederacy's military draft, Chambers thought the exemption unjust, and wrote to Confederate President Jefferson Davis to protest it. In 1864, he did enlist for six months in the 1st Regiment of Texas State Troops, but saw no action. After the war, Chambers returned to farming but also became a half-owner of Cleburne's only bank from 1871 to 1875. By that time, Chambers had the largest land holdings in Johnson County. He also actively promoted the Dallas and Cleburne Railroad, working unsuccessfully to get the railroad to make its stock available to all local citizens.
Chambers's political opinions after the war were at first concerned with government debt. In an 1868 article in the Cleburne Chronicle, Chambers argued against all interest-bearing national debts, and for a general policy of inflationism. Running as a Democrat, he was elected an alderman of Cleburne and was a delegate to Texas's constitutional convention of 1875. The proposed constitution that emerged was primarily an attempt to reverse the changes made by Republicans during Reconstruction. Chambers opposed its adoption, not for that reason, but because "while taxation was unequally thrown upon property alone, unqualified suffrage was given to every man". He also opposed the homestead exemption, "which protect[s] the debtor class from the just demands of the creditor class." Nevertheless, the constitution was adopted by a two-to-one majority.
Greenback party
When the Democratic Party's platform did not endorse his inflationist views in 1876, Chambers quit the party. The new Greenback Party (sometimes called the Greenback-Labor Party) had not yet begun to organize in Texas, but its positions better suited Chambers and he joined in 1877. The Greenbackers, in the words of one author, "anticipated by almost fifty years the progressive legislation of the first quarter of the twentieth century." Their platform advocated an eight-hour work day, safety regulations in factories, and prohibition of child labor. Their most prominent position, however, and the one from which their name was derived, was support for the continued issuance of the greenback dollar.
During the Civil War, Congress had authorized "greenbacks," a new form of fiat money that was redeemable not in gold but in government bonds. The greenbacks had helped to finance the war when the government's gold supply did not keep pace with the expanding costs of maintaining the armies. When the crisis had passed, many in both parties, especially in the East, wanted to return the nation's currency to a gold standard as soon as possible. The Specie Payment Resumption Act, passed in 1875, ordered that greenbacks be gradually withdrawn and replaced with gold-backed currency beginning in 1879. At the same time, economic depression had made it more expensive for debtors to pay debts they had contracted when currency was less valuable. Farmers and laborers, especially, clamored for the return of coinage in both gold and silver, believing the increased money supply would restore wages and property values. With neither major party willing to endorse inflation, some partisans of loose money joined the new Greenback faction.
Chambers rose quickly in the new party's leadership, serving as a delegate to their state convention in July 1878. Initially a candidate for state land commissioner, Chambers withdrew his name in August of that year to allow former Republican Jacob Kuechler to win a unanimous nomination; he was nominated instead for a seat in the state legislature. Ten Greenbackers were elected to the legislature that year, but Chambers was not among them. During the campaign, Chambers published a pamphlet attacking the Democratic candidates and calling for Congress to create "a sufficient amount of paper money, making it equal to gold and silver, and full legal tender for all debts". While it did not help his own election, the pamphlet was widely circulated within the party and raised Chambers's profile among Greenbackers.
1880 election
By 1879, the Greenback coalition had splintered, and Chambers became affiliated with the faction most prominent in the South and West, called the "Union Greenback Labor Party," led by Marcus M. "Brick" Pomeroy. Pomeroy's faction was more radical and emphasized its independence, suggesting that Eastern Greenbackers were likely to "sell out the party at any time to the Democrats." Chambers was chosen as a delegate to the Pomeroy faction's 1880 national convention in St. Louis. The 212 delegates nominated Stephen D. Dillaye of New York for President and Chambers for Vice President. Accepting the nomination, Chambers called for unity between the Greenback factions, restated his belief in the party's goals, and attacked bankers as "Huns and Vandals." To further promote his views, he purchased a small newspaper, the Cleburne Avalanche, and spoke at the state convention in Texas in May 1880.
The more Eastern-oriented faction of the Greenbacks, called the "National Greenback Party," held its convention in Chicago that June, and Chambers attended along with others from the Pomeroy faction, hoping to heal the party split. The two factions resolved their differences and wrote a joint platform. The Greenbackers also agreed to admit forty-four delegates of the Socialist Labor Party. Dillaye stepped aside to allow the nomination of James B. Weaver of Iowa, a Civil War general and Congressman. Chambers was proposed for Vice President by the reunited party, as was Absolom M. West of Mississippi; Chambers was victorious on the first ballot, by 403 votes to 311. West moved that the nomination be made unanimous, which it was.
Chambers gave speeches on his way back to Texas, castigating the banks and defending the admission of socialists to the convention as "simply a body of men enlisted in the cause of human rights." In his official acceptance letter, he called for expansion of the currency, immigration restriction to help workingmen compete "with Chinese serf labor," and the forfeiture of all unfulfilled railroad grants. On July 8, before reaching home, Chambers fell as he exited his train in Kosse, Texas, and broke two ribs. He was confined to bed for several weeks and considered withdrawing from the race, but decided against it. His efforts, however, were limited by his injuries, and his only contribution to the campaign was to publish his newspaper, renamed the Cleburne Greenbacker. Greenbackers had high hopes for the 1880 election, but were disappointed with the result: Weaver and Chambers won just over 300,000 votes (3.3% of the popular vote) and did not carry a single state in the electoral college.
Post-election life
Chambers remained active in politics after the 1880 election. He served as chairman of the Texas Greenback Party in 1882 as George Washington Jones received the party's endorsement for governor. Jones was unsuccessful, and Chambers worried that the party was becoming "disorganized and disintegrated beyond the hope of a successful rally." In 1884, Jones ran again for governor and Chambers broke with him on the question of whether the state should lease public lands or let cattlemen use them without payment (Chambers favored the former option). He also criticized the party's presidential nominee that year, Benjamin Butler of Massachusetts, for attacking monopolies without offering any suggestions on how to reform them. After the 1884 election, Chambers had little involvement in politics. The Greenback Party fell apart by 1888, but many of its ideas and members found a home in the People's Party, which arose in the early 1890s. Chambers made his last foray into politics in 1890 in two letters to the Southern Mercury, a newspaper of the Farmers' Alliance, in which he again condemned monopolies and corporations, and suggested that all laws creating them be repealed. He was encouraged by the growth of the People's Party, but old age and ill health kept him from being an active member. Chambers died at his home on September 16, 1895, and was buried in Cleburne Memorial Cemetery.
Modeling and Analysis of Eddy-Current Damping for High-Precision Magnetic Levitation of a Small Magnet

This paper presents modeling and analysis of eddy-current damping produced by a conductive plate placed below the levitating object in order to suppress vibrations and ensure stability. It is demonstrated that vibrations should be damped to preserve stability and precision, especially for stepwise motion. The levitated object is a small permanent magnet in our experiments. A magnetic drive unit is used for vertical motion of the magnet. The eddy-current distribution in the plate is calculated by solving the diffusion equation for the magnetic vector potential. The eddy force applied to the object is derived from a coil model representation. It is shown that if a 20 mm radius, 9 mm thick aluminum circular plate is used for eddy-current damping, the levitated object can closely follow a step input with a steady-state precision varying between 0.04 and 0.07 mm depending on the plate-object distance. Eddy-current damping is a key technique that improves levitation performance and increases the diversity of applications of magnetic levitation systems in micromanipulation and microelectronic fabrication.
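The stabilizing effect described above can be illustrated with a toy model: treat the levitation axis as a mass-spring-damper, with the conductive plate contributing a velocity-proportional eddy-force term c·z'. This is only a sketch with illustrative parameter values, not the paper's diffusion-equation/coil model:

```python
# Toy model of the levitation axis: m*z'' = -k*(z - z_ref) - c*z'.
# The plate's eddy-current force is approximated as -c*z' (viscous damping).
# All parameter values are illustrative assumptions, not from the paper.
import numpy as np

def step_response(c, m=1e-3, k=10.0, dt=1e-4, t_end=2.0):
    """Semi-implicit Euler integration of the step response z(t)."""
    n = int(t_end / dt)
    z, v = 0.0, 0.0
    z_ref = 1.0  # unit step command (arbitrary units)
    out = np.empty(n)
    for i in range(n):
        a = (-k * (z - z_ref) - c * v) / m
        v += a * dt
        z += v * dt
        out[i] = z
    return out

undamped = step_response(c=0.0)  # no plate: sustained oscillation
damped = step_response(c=0.5)    # plate present: oscillation suppressed

print("overshoot without plate:", max(undamped) - 1.0)
print("overshoot with plate:   ", max(damped) - 1.0)
```

With the damping term absent, the magnet overshoots the setpoint by roughly the full step amplitude and keeps ringing; with it, the response settles monotonically, which is the qualitative behavior the plate is introduced to achieve.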
import { ImageType } from '../../pages/lab/types';
export type MediaType = {
image: ImageType;
video: {
asset: {
url: string;
};
};
mediaType?: string;
};
export type ImageOverlayT = {
contentHeight: number;
};
export type WindowT = {
windowHeight: number;
};
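A hypothetical consumer of these types might look like the sketch below. The `ImageTypeStub` stands in for the real `ImageType` imported from `../../pages/lab/types`, whose exact shape is not shown here, so it is an assumption made only to keep the example self-contained:

```typescript
// Sketch of how the MediaType shape might be consumed (names and the
// ImageTypeStub are assumptions; the real ImageType lives elsewhere).
type ImageTypeStub = { asset: { url: string } };

type MediaTypeStub = {
  image: ImageTypeStub;
  video: { asset: { url: string } };
  mediaType?: string;
};

const media: MediaTypeStub = {
  image: { asset: { url: "https://example.com/photo.jpg" } },
  video: { asset: { url: "https://example.com/clip.mp4" } },
  mediaType: "video",
};

// Use the optional mediaType field as a light discriminator
// for choosing between the image and video assets.
const assetUrl: string =
  media.mediaType === "video" ? media.video.asset.url : media.image.asset.url;

console.log(assetUrl);
```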
Effects of Topography and Modified Layer by Plasma-Shot Treatment on High-Speed Steel

In this study, plasma shot (PS) treatment was applied to high-speed steel (HSS) surfaces using a titanium carbide electrode to confirm the effect of discharge current (Ip) on the formation of a single dimple and analyze a modified layer. The roughness of modified surfaces increased when Ip increased, and energy-dispersive X-ray spectrometry showed an increase in titanium atom density when Ip and electrode consumption volume (Ve) increased. A friction test confirmed that the friction of the modified surfaces was reduced by discharge dimples under low-load conditions. A Vickers hardness test confirmed that the hardness of the modified surface was ~300-600 HV higher than that of an untreated HSS surface. Moreover, it increased with an increase in Ip. However, application of PS treatment to the edge of surfaces on the workpiece caused shape deterioration. The deterioration size of the edge of the modified layer increased when Ip increased. To solve this issue, we propose a novel method named position-adjusted PS (PA-PS) treatment. PA-PS treatment adjusts the end of the electrode in the order of tens of micrometers from the edge of the workpiece to avoid deterioration of the edge form. Under Ip = 21 A, PA-PS formed a modified layer without deteriorating the edge shape of the workpiece, thus confirming the PS characteristics applied to HSS surfaces. Moreover, PA-PS treatment solved the shape deterioration of the edge on modified surfaces via PS treatment.

Introduction

At present, products that have considerably improved our standard of living are produced by various manufacturing methods. In machine processing, machine parts are accurately produced using machine tools such as lathes, milling machines, drilling machines, and machining centers. The industry depends on the machining process to machine the various shapes of products using cutting tools.
High-efficiency cutting has been increasingly demanded for high-mix, low-volume production; however, tool life is a major bottleneck for realizing more efficient manufacturing because cutting tools are exposed to severe environments of high temperatures and pressures at high cutting speeds, which considerably increases the wear rate. To protect such tools from wear, surfaces are treated using conventional surface treatment technology, such as chemical vapor deposition (CVD) and physical vapor deposition (PVD), to form wearproof layers. These surface treatment technologies are important for increasing the machining performance of cutting tools. The disadvantages of these surface treatment technologies include low adhesion of coating and thickness nonuniformity when applied to complex surfaces. However, plasma-shot (PS) treatment can overcome certain disadvantages of the conventional technologies. PS treatment was developed by applying electric discharge machining (EDM). EDM is a manufacturing method that can be used to machine hard materials into complex shapes with high precision. PS treatment is completely the opposite of EDM and transfers electrode materials to the workpiece surface to form modified surfaces. Figure 1 shows the mechanism of PS treatment. In this process, arc discharges occur between the workpiece and the electrode, which partially melts and transfers to the workpiece surface during the discharge. In fact, ~10,000 arc discharges occur every second, and the gap between the electrode and workpiece during processing is controlled in the order of tens of micrometers using servo motors. Spectroscopic analysis has confirmed that the temperature of the arc discharge reaches ~6000-7000 K. Moreover, the heat flux at > 10⁹ W/m² from the arc discharge instantaneously melts and evaporates the electrode. The molten electrode material is laminated in the molten pool on the surface of the workpiece.
Usually, the green compact electrode, which is easily consumed, is used to transfer the electrode material to the workpiece surface during processing. The advantages of PS treatment are as follows:

1. The deformation on workpiece surfaces is considerably less because the shrinkage of the workpiece is limited to a shallow range via local pulse discharge during PS treatment.
2. Local treatment is possible by controlling the shape of the electrode material and vibration pattern during processing.
3. The modified layer is highly adhesive because the electrode material is melted by the heat of the arc discharge and mixes with the workpiece material.
4. PS treatment uses appropriate electrode materials to yield various surface characteristics.

Previously, a modified layer with high hardness was successfully formed on Fe-based materials using a titanium carbide (TiC) electrode. In conventional research, the detailed characteristics of high-speed steel (HSS) after PS treatment have not been studied. Therefore, to clarify HSS characteristics after PS treatment, we observed and analyzed the shape of a single dimple sample and the modified surface, the amount of transferred titanium (Ti) atoms, and the parameters (e.g., Sa, Sku, and Ssk). Then, we evaluated the mechanical characteristics of modified surfaces by the Vickers hardness (HV) and friction tests. Finally, a new PS treatment was developed that addressed shape deterioration in the cutting edge by adjusting the electrode's position, which is known as position-adjusted PS (PA-PS) treatment. This treatment solves the disadvantage of shape deterioration known as "sagging" on application of PS treatment to the edge of cutting tools.

Methodology

The electrode material used in the experiment was TiC, which improved surface wear resistance. The workpiece material was HSS (C: 2.0%, Si: 0.5%, Mn: 0.3%, Cr: 3.8%, Mo: 2.5%, V: 5.1%, W: 14.3%, Co: 11.0%).
In this experiment, an electric discharge machine (Mitsubishi Electric Corporation, ES041-A) was used. Moreover, for this process, the workpiece and TiC electrode had positive polarity. Figure 2 and Table 1 show the setup and PS treatment conditions, respectively.

Observation of a Single Dimple

In this study, scanning electron microscopy (SEM), white light interferometry (WLI), and energy-dispersive X-ray spectroscopy (EDX) were used to evaluate the surfaces of the modified layers. A TiC electrode was used to apply PS treatment to HSS surfaces to confirm the effect of discharge current (Ip) on the formation of a single dimple, which was observed using SEM and WLI. Table 1 lists the experimental conditions, in which Ip was 1, 3, 10, and 21 A. A single dimple was observed because the modified surface is the outcome of a series of single dimples. Figure 3 shows the SEM images of the single dimple at each Ip. Figure 4 shows the diameter of the single dimple at each Ip measured from Fig. 3. We used the WLI to measure the 3D topography of a single discharge and calculated the average value of the three points where the height of peaks and depth of craters were large for the measurement area of each Ip. Figure 5 shows the roughness of the single dimple at each Ip. The round shape of the single dimple formed at each Ip was confirmed in Fig. 3; the black part in the image was the crater. The shapes of the single dimples were complicated by the effect of the surrounding conditions and the flow of the machining fluid when they were once melted and resolidified. The diameters of the individual dimples increased along with an increase in Ip; however, their growth became saturated after Ip = 10 A in Fig. 4. A previous study reported that the capacity of workpiece melting was increased by an increase in discharge current energy. Thus, the diameter of a single dimple increased along with an increase in Ip.
On the other hand, dimples with the same diameters were formed under Ip = 10 and 21 A. Thus, we confirmed that the height of the peak and the depth of the crater in the discharge dimple increased along with an increase in Ip; however, the height of the peak saturated after Ip = 10 A (Fig. 5). This tendency was similar to that of cast iron.

Observation and Analysis of the Modified Layer

We used SEM to observe the surfaces of modified layers when Ip and the electrode consumption volume (Ve) were varied. The experimental conditions were the same as the ones given in Table 1. The HSS surface was subjected to PS treatment under Ip = 3, 10, and 21 A and Ve = 0.025, 0.05, and 0.1 mm³. The PS treatment time was increased along with an increase in Ve. Figure 6 shows the SEM image of the modified layer when Ip and Ve changed. The modified layer had rough surfaces under each condition, and the size and number of dimples increased when Ip increased under each Ve. In addition, we used EDX to evaluate the amount of transferred titanium (Ti) atoms from the TiC electrode to the surface of the modified layer when Ip and Ve were varied. Figure 7 shows the change in the Ti atomic number concentration when Ip and Ve changed. An increase in the Ti atomic number concentration along with an increase in Ve for each Ip was confirmed. The thickness of the TiC-modified layer increased along with an increase in Ve. Moreover, a considerable increase was observed in the Ti atomic number concentration under Ip = 3 A and Ve = 0.1 mm³. This increase was due to the increase in transferred TiC from the electrode to workpiece surfaces owing to longer treatment time and a decrease in Ip.

Evaluation of the Modified Surface Topography

The roughness of the modified layer is closely related to the friction of the surface. Therefore, we analyzed the surface and evaluated its parameters (e.g., Sa, Sku, and Ssk) when Ve and Ip were varied.
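The areal parameters evaluated here (Sa, Ssk, Sku) can be computed directly from a measured height map using their standard statistical definitions; the sketch below uses a synthetic random surface, not data from the study:

```python
# Sa, Ssk, Sku from a 2-D height map z (standard areal-parameter definitions).
import numpy as np

def surface_params(z):
    """Return (Sa, Ssk, Sku) of height map z after removing the mean height."""
    h = z - z.mean()
    sq = np.sqrt(np.mean(h ** 2))    # Sq: RMS height
    sa = np.mean(np.abs(h))          # Sa: arithmetic mean deviation
    ssk = np.mean(h ** 3) / sq ** 3  # Ssk > 0: peak-dominated; < 0: crater-dominated
    sku = np.mean(h ** 4) / sq ** 4  # Sku > 3: sharp/spiky; < 3: dull/rounded
    return sa, ssk, sku

# Sanity check on a Gaussian random surface, which should give Ssk near 0
# and Sku near 3 (the dividing values used in the interpretation below).
rng = np.random.default_rng(0)
sa, ssk, sku = surface_params(rng.normal(0.0, 1.0, (256, 256)))
print(sa, ssk, sku)
```

This makes the interpretation in the text concrete: surfaces with Sku > 3 and Ssk > 0 are spiky and peak-dominated, while surfaces with Sku < 3 and Ssk < 0 are rounded and crater-dominated.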
Figure 8 shows the topography of the modified surface evaluated by WLI. The height difference between the top and bottom and the roughness of the modified layer increased. The increase in roughness of the single dimple along with an increase in Ip affected this result. The roughness of the modified layer did not change when Ve increased. Figure 9 shows the quantitative evaluation of the Sa increase when Ip increased. The increase in Sa was due to the high treatment temperature. A change in Sa because of the increase in Ve was not confirmed. Sku is the sharpness of the surface, while Ssk is the symmetry of the surface and characterizes height distribution. Tables 2 and 3 show the relationship between Sku and the sharpness of the surfaces and the relationship between Ssk and the symmetry of the surfaces, respectively. Figures 10 and 11 show Sku and Ssk of the modified layer. Figure 10 shows that the surface treated under Ip = 3 A has Sku > 3, which suggests a sharp roughness shape. The surfaces treated under Ip = 10 and 21 A have Sku < 3, which indicates a dull roughness shape. Figure 11 shows that the surface treated under Ip = 3 A has Ssk > 0, which shows that the surface has many peaks. The surfaces treated under Ip = 10 and 21 A have Ssk < 0, which indicates that the surfaces have multiple craters. The surfaces treated under Ip = 10 and 21 A have low friction because the dimples supply oil to the surfaces via hydrodynamic lubrication.

Evaluation of the Edge Shape of Modified Surfaces

We applied PS treatment to the edge of the HSS workpiece, as would be done for the edge of cutting tools. We used SEM and WLI to observe and evaluate the edges of the treated surfaces from above and from the side, respectively, to confirm the relationship between the change in Ip and the generation of "sagging." The experimental conditions were the same as those given in Table 1, and Ve = 0.05 mm³. Figure 12 shows the observation and evaluation methods.
Figure 13 shows that the shape deterioration called "sagging" on the edge of the workpiece was generated at each Ip. The size of sagging increased when Ip increased. This result can be attributed to the tendency of the diameter of the dimple and Sa on the surfaces to increase when Ip increases. Figure 14 shows the topography of the edge of the workpieces from the side at each Ip. The topography under Ip = 21 A shows a partial error of high roughness to the side. The heights of sagging to the side are ~8 µm under Ip = 3 A and ~18 µm under Ip = 21 A. The tendency of the increase in shape deterioration with Ip was the same as that of Sa.

Mechanical Characteristics of the Treated Surfaces

We conducted HV and friction tests to evaluate the mechanical characteristics of the surfaces treated by PS treatment. We used the HV test to measure the hardness of the treated surface and investigate the change in HV when Ip was increased. In this test, a microscope was used to determine the analysis area, and an indent was generated by a Vickers indenter. Hardness was measured by dividing the applied load P by the surface area of the generated indentation. Figure 15 shows the generated indentations. The surface area was calculated using L1 and L2. The following equation was used for HV:

HV = 2P sin(136°/2) / d² ≈ 1.8544 P / d², where d = (L1 + L2) / 2,

and L1 and L2 are the diagonal lengths of the diamond-shaped indentations and P is the load. Table 2 shows the test conditions. We evaluated the HSS workpiece and the surfaces treated as in Table 1 for each Ve = 0.05 mm³ when Ip increased. Figure 16 shows HV when Ip increased. The HV test confirmed that the hardness of the modified surface was ~300-600 HV larger than that of an untreated HSS surface.
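The hardness computation described above (load divided by the indent's surface area) reduces to the familiar closed form HV ≈ 1.8544·P/d² for the 136° Vickers pyramid. A quick sketch, with conventional kgf-and-millimetre units; the example load and diagonal values are illustrative assumptions, not measurements from the paper:

```python
import math

def vickers_hv(load_kgf: float, d1_mm: float, d2_mm: float) -> float:
    """Vickers hardness: HV = 2*P*sin(136/2 deg)/d^2 ~= 1.8544*P/d^2,
    with d the mean of the two indent diagonals (kgf and mm units)."""
    d = 0.5 * (d1_mm + d2_mm)
    return 2.0 * load_kgf * math.sin(math.radians(136.0 / 2.0)) / d ** 2

# Illustrative values only: a 0.3 kgf load with ~21 um diagonals.
hv = vickers_hv(0.3, 0.0215, 0.0213)
print(round(hv))
```

Because the result scales with 1/d², small errors in reading the diagonals dominate the uncertainty at microindentation scales.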
Moreover, the hardness increased along with an increase in Ip, primarily because the thickness of the TiC-modified layer increased along with an increase in the depth of the treated area when Ip increased in the PS treatment process. We conducted a reciprocating friction test under fluid lubrication to evaluate the friction characteristics of the modified surfaces. Table 3 gives the test conditions, whereas Table 4 gives the lubricating oil specifications. Figure 17 shows a schematic of the reciprocating friction test with a glass surface under fluid lubrication. The kinematic viscosity of the fluid was 8.108 mm²/s, and the friction test was used to evaluate four types of workpieces: HSS workpieces and workpieces modified by PS treatment under the conditions of Ip = 3, 10, and 21 A and Ve = 0.1 mm³. The load was increased from 25 to 500 g in the friction test. Figure 18 shows the friction test results. The friction coefficient of the modified surfaces was ~0.05-0.10 less than that of the HSS surface at any load. We speculate that dimples on the modified surfaces remained and transferred fluid between the modified surfaces and the glass during the reciprocating movement. Friction reduction by discharge dimples was confirmed, especially under low-load conditions from 25 to 100 g. This reduction was attributed to the wedge effect, which generates hydrodynamic pressure in the narrow gaps. However, the friction coefficient at Ip = 21 A increased under high-load conditions from 200 g, which was attributed to contact between peaks on the modified surfaces and the glass because the gaps decreased as the load increased. Therefore, the friction reduction effect owing to hydrodynamic pressure was significant under low load, but a friction increase was generated by the contact between peaks on the modified surfaces and the glass under large load.
We concluded that PS treatment improved the friction characteristics of the surface materials, as shown by a decrease in the friction coefficient under fluid lubrication (Tables 5, 6).

Position-Adjusted PS Treatment

We discovered that PS treatment had the disadvantage of causing shape deterioration known as "sagging" when it was applied to the edge of the cutting tools in Sect. 2.5. The size of sagging increased as Ip increased; thus, this study introduces a new PS treatment that addresses shape deterioration in the cutting edge by adjusting the electrode's position, which is known as position-adjusted PS (PA-PS) treatment. PS treatment was applied to the edge of the HSS workpiece at Ip = 3 and 21 A. The experimental conditions were the same as those given in Table 1, and Ve = 0.05 mm³. Figures 19 and 20 show the appearance and process of PA-PS treatment, respectively. First, this treatment aligned the side of untreated workpiece I with the side of workpiece II (Fig. 20). Second, the position of the end of the electrode material was adjusted to the edge of workpiece I through the consumption of the side surface of the electrode material by arc discharges between the side of the electrode material and that of workpiece II during the treatment process. Finally, this treatment prevented the edge of workpiece I from suffering shape deterioration because of the adjusted electrode material in the PS process. After the above process, we used PA-PS treatment to observe and evaluate the edge of the modified surface. Figure 21 shows the digital microscopy (DM) images of the edge of the modified surfaces using PA-PS treatment. The black and silver parts represent craters and peaks on the modified surfaces in Fig. 21, respectively. The edges of the modified surfaces under Ip = 3 and 21 A were successfully prevented from shape deterioration (Fig. 21).
We used a DM to measure the maximum distance from the edge of the workpiece to the modified surface at each Ip to evaluate the appropriate Ip. The measurement results show that the maximum distance from the edge of the workpiece to the modified surface is ~150 µm when Ip = 3 A and ~80 µm when Ip = 21 A. This indicates that PA-PS treatment under Ip = 21 A can successfully treat the edge of the modified surface. We argue that the distance from the edge of the workpiece to the modified surface was reduced because of the increase in the diameter of a single dimple when Ip increased in Fig. 4. Moreover, deterioration of the side surface of the electrode material was not generated, and the large dimple reached the edge of the modified surface at Ip = 21 A. Therefore, PA-PS treatment at Ip = 21 A, with its high hardness, was applied to the surface of the cutting tool.

Conclusions

In this study, the characteristics of the modified surfaces formed by PS treatment on the HSS workpiece were identified using the TiC electrodes. Based on the experimental results and further discussion, the following conclusions were drawn: The diameter of a single dimple increases along with an increase in Ip; however, its growth saturates after Ip = 10 A. The height of a peak and the depth of a crater in a discharge dimple increase along with an increase in Ip; however, their growth saturates after Ip = 10 A. The modified layer subjected to PS treatment has rough surfaces, and the roughness of the modified surface increases as Ip increases. The Ti atomic number concentration in the modified layer increases as Ve increases for each Ip. The peak shapes on the modified surfaces under Ip = 3 A and under Ip = 10 and 21 A are sharp and dull, respectively, at any Ve. The modified surface treated under Ip = 3 A has multiple peaks, and the modified surfaces treated under Ip = 10 and 21 A have multiple craters.
- The size of the "sagging" increases as Ip increases; the heights of the sagging on the side surface are ~8 μm at Ip = 3 A and ~18 μm at Ip = 21 A.
- The hardness of the modified surface is ~300-600 HV higher than that of an untreated HSS surface, and the hardness increases with Ip.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The difference between a good fantasy player and a great one is all in how they use the waiver wire. Great players look for value all the time, while less effective owners only swing for home runs. Having looked extensively at who is available in many fantasy leagues, I have a few suggestions of players who are still available in some leagues, if not most, that you may want to pick up, if only for the short term. They may not be “keeper-worthy”, but they should fill in adequately and post some good numbers.
Alfred Morris RB WAS – He looked good in the pre-season, and just before their first game against the Saints, Shanahan showed his cards by naming Morris the starter. Even though his YPC was only 3.4, he was a big reason for their win. Look for him to get the start in week 2 against St. Louis and put up some decent numbers, because after all, it is against the Rams. Morris should be owned in all fantasy formats, but if he has snuck under the radar, grab him.
C.J. Spiller RB BUF – With Fred Jackson out anywhere from 3 to 8 weeks, CJ is likely to get a lot of touches. He looked good against the Jets’ defence (really, did anyone besides him look good?), putting up an amazing 169 yards on 14 carries and a TD. Look for him to have another good week against Kansas City. Spiller should be owned in all formats until Jackson is back and getting the brunt of the workload again.
Stephen Hill WR NYJ – I followed him through training camp, as I do with most receivers who come out of Georgia Tech. It’s interesting to see how many former GT players become superstars. But I was unsatisfied with his production, whether he was to blame or the QBs who were supposed to be getting him the ball. I watched the game and admit that he did a great job of finding open space and racking up some nice yards after the catch. His athleticism makes Sanchez’s not-so-accurate throws manageable. Given his production, you need to pick up Hill as soon as possible, as he might end up being Sanchez’s go-to guy.
Jonathan Dwyer RB PIT – Redman isn’t the answer to the loss of Mendenhall; he hasn’t demonstrated the ability to handle first-string touches in the backfield. Apparently the coaching staff are now talking about handing off to Jonathan Dwyer. Dwyer looked good against Denver’s defence, putting up a respectable 4.8 YPC. If Dwyer can put up decent production this week against the Jets, he might be the go-to guy even when Mendenhall returns. If you have a roster spot available, you need to add Dwyer.
Randall Cobb WR GB – Just an all-around great athlete with great skills. Green Bay was lining him up all over the field on Sunday. He is going to be a big part of a high-scoring offense. This is a bonus if your league awards return yards as well as PPR. He will be a nice flex play this week against the Chicago Bears.
Kevin Ogletree WR DAL – I’m sure after his amazing performance last week against the Giants he has already been picked up in most fantasy formats, but if not, he’ll be a nice addition. He has great open-field speed and good hands that will help Romo find a target besides Dez Bryant.
Flying under the Radar
Andrew Hawkins WR CIN – I watched this game and was amazed by his speed and quickness. He looked like the Tasmanian Devil cutting through and around blockers, gaining yards after the catch. He was a video replay away from adding another 20 yards to his total, which would have given him 8 catches for 106 yards. If those were his final numbers, you could almost guarantee there would be a lot of hype about him this week. Hawkins could become a PPR monster. If you have a spot on your roster to stash him, it might not be a bad move. If you don’t have a spot, monitor him this week to see how he does against Cleveland. You can almost guarantee that if he puts up similar numbers, he’ll be all over next week’s waiver wires.
There you have a few considerations for fantasy waiver-wire pick-ups. Some will already have been taken, but they are certainly worth checking, because they have been undervalued until now and there is a chance they are still available. If so, grab ’em up!
A two-domain mechanism for group A streptococcal adherence through protein F to the extracellular matrix. Streptococcus pyogenes binds to the extracellular matrix (ECM) and a variety of host cells and tissues, causing diverse human diseases. Protein F, an S. pyogenes adhesin that binds fibronectin (Fn), contains two binding domains: a repeated domain (RD2) and an additional domain (UR), located immediately N-terminal to RD2. Both domains are required for maximal Fn binding. In this study, we characterize RD2 and UR precisely and compare their functions and binding sites in Fn. The minimal functional unit of RD2 consists of 44 amino acids, with contributions from two adjacent RD2 repeats flanked by a novel MGGQSES motif. RD2 binds to the N-terminal fibrin-binding domain of Fn. UR contains 49 amino acids, of which six are from the first repeat of RD2. It binds to Fn with higher affinity than RD2 and recognizes a larger fragment that contains the fibrin- and collagen-binding domains. Expression of UR and RD2 independently on the surface-exposed region of an unrelated streptococcal protein demonstrates that both mediate adherence of the bacteria to the ECM. We describe here a mechanism of adherence of a pathogen that involves two pairs of sites located on a single adhesin molecule and directed at the same host receptor.
//===----- SpecConstants.cpp - SYCL Specialization Constants Pass ---------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// See comments in the header.
//===----------------------------------------------------------------------===//
#include "SpecConstants.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/StringMap.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/ErrorHandling.h"
using namespace llvm;
namespace {
// __sycl* intrinsic names are Itanium ABI-mangled; this is common prefix for
// all mangled names of __sycl_getSpecConstantValue intrinsics, which differ by
// the template type parameter and the specialization constant value type.
constexpr char SYCL_GET_SPEC_CONST_VAL[] = "_Z27__sycl_getSpecConstantValue";
// Unmangled base name of all __spirv_SpecConstant intrinsics which differ by
// the value type.
constexpr char SPIRV_GET_SPEC_CONST_VAL[] = "__spirv_SpecConstant";
// Metadata ID string added to calls to __spirv_SpecConstant to record the
// original symbolic spec constant ID.
constexpr char SPEC_CONST_SYM_ID_MD_STRING[] = "SYCL_SPEC_CONST_SYM_ID";
static void AssertRelease(bool Cond, const char *Msg) {
if (!Cond)
report_fatal_error((Twine("SpecConstants.cpp: ") + Msg).str().c_str());
}
StringRef getStringLiteralArg(const CallInst *CI, unsigned ArgNo,
SmallVectorImpl<Instruction *> &DelInsts,
GlobalVariable *&SymGlob) {
Value *V = CI->getArgOperand(ArgNo)->stripPointerCasts();
if (auto *L = dyn_cast<LoadInst>(V)) {
// Must be a
// vvvvvvvvvvvvvvvvvvvv
// @.str = private unnamed_addr constant[10 x i8] c"SpecConst\00", align 1
// ...
// %TName = alloca i8 addrspace(4)*, align 8
// ...
// store i8 addrspace(4)* addrspacecast(
// i8* getelementptr inbounds([10 x i8], [10 x i8] * @.str, i32 0, i32 0)
// to i8 addrspace(4)*), i8 addrspace(4)** %TName, align 8, !tbaa !10
// %1 = load i8 addrspace(4)*, i8 addrspace(4)** %TName, align 8, !tbaa !10
// %call = call spir_func zeroext
// i1 @_Z27__sycl_getSpecConstantValueIbET_PKc(i8 addrspace(4)* %1)
// ^^^^^^^^^^^^^^^^^^^^
// sequence, w/o any intervening stores and calls between the store and load
// so that %1 is trivially known to be the address of the @.str literal.
AllocaInst *TmpPtr =
cast<AllocaInst>(L->getPointerOperand()->stripPointerCasts());
// find the store of the literal address into TmpPtr
StoreInst *Store = nullptr;
for (User *U : TmpPtr->users()) {
if (StoreInst *St = dyn_cast<StoreInst>(U)) {
AssertRelease(!Store, "single store expected");
Store = St;
#ifndef NDEBUG
break;
#endif // NDEBUG
}
}
AssertRelease(Store, "unexpected spec const IR pattern 0");
DelInsts.push_back(Store);
#ifndef NDEBUG
// verify there are no intervening stores/calls
AssertRelease(L->getParent() == Store->getParent(), "same BB expected");
for (const Instruction *I = Store->getNextNode(); I; I = I->getNextNode()) {
if (I == L) {
DelInsts.push_back(L);
L = nullptr; // mark as met
break;
}
AssertRelease(!I->mayHaveSideEffects(),
"unexpected spec const IR pattern 1");
}
AssertRelease(!L, "load not met after the store");
#endif // NDEBUG
AssertRelease(Store, "store not met");
V = Store->getValueOperand()->stripPointerCasts();
}
const Constant *Init = cast<GlobalVariable>(V)->getInitializer();
SymGlob = cast<GlobalVariable>(V);
StringRef Res = cast<ConstantDataArray>(Init)->getAsString();
if (Res.size() > 0 && Res[Res.size() - 1] == '\0')
Res = Res.substr(0, Res.size() - 1);
return Res;
}
// TODO support spec constant types other than integer or
// floating-point.
Value *genDefaultValue(Type *T, Instruction *At) {
if (T->isIntegerTy())
return ConstantInt::get(T, 0);
if (T->isFloatingPointTy())
return ConstantFP::get(T, 0.0);
llvm_unreachable("non-numeric specialization constants are NYI");
return nullptr;
}
std::string manglePrimitiveType(Type *T) {
if (T->isFloatTy())
return "f";
if (T->isDoubleTy())
return "d";
assert(T->isIntegerTy() &&
"unsupported spec const type, must've been guarded in headers");
switch (T->getIntegerBitWidth()) {
case 1:
return "b";
case 8:
return "a";
case 16:
return "s";
case 32:
return "i";
case 64:
return "x";
default:
llvm_unreachable("unsupported spec const integer type");
}
return "";
}
// This is a very basic mangler which can mangle non-templated and non-member
// functions with primitive types in the signature.
std::string mangleFuncItanium(StringRef BaseName, FunctionType *FT) {
std::string Res =
(Twine("_Z") + Twine(BaseName.size()) + Twine(BaseName)).str();
for (unsigned I = 0; I < FT->getNumParams(); ++I)
Res += manglePrimitiveType(FT->getParamType(I));
return Res;
}
void setSpecConstMetadata(Instruction *I, StringRef SymID, int IntID) {
LLVMContext &Ctx = I->getContext();
MDString *SymV = MDString::get(Ctx, SymID);
ConstantAsMetadata *IntV =
ConstantAsMetadata::get(ConstantInt::get(Ctx, APInt(32, IntID)));
MDNode *Entry = MDNode::get(Ctx, {SymV, IntV});
I->setMetadata(SPEC_CONST_SYM_ID_MD_STRING, Entry);
}
std::pair<StringRef, unsigned> getSpecConstMetadata(Instruction *I) {
const MDNode *N = I->getMetadata(SPEC_CONST_SYM_ID_MD_STRING);
if (!N)
return std::make_pair("", 0);
const auto *MDSym = cast<MDString>(N->getOperand(0));
const auto *MDInt = cast<ConstantAsMetadata>(N->getOperand(1));
unsigned ID = static_cast<unsigned>(
cast<ConstantInt>(MDInt->getValue())->getValue().getZExtValue());
return std::make_pair(MDSym->getString(), ID);
}
static Value *getDefaultCPPValue(Type *T) {
if (T->isIntegerTy())
return Constant::getIntegerValue(T, APInt(T->getScalarSizeInBits(), 0));
if (T->isFloatingPointTy())
return ConstantFP::get(T, 0);
llvm_unreachable("unsupported spec const type");
return nullptr;
}
} // namespace
PreservedAnalyses SpecConstantsPass::run(Module &M,
ModuleAnalysisManager &MAM) {
int NextID = 0;
StringMap<unsigned> IDMap;
// Iterate through all calls to
// template <typename T> T __sycl_getSpecConstantValue(const char *ID)
// intrinsic and lower them depending on the SetValAtRT setting (see below).
bool IRModified = false;
for (Function &F : M) {
if (F.isDeclaration())
continue;
SmallVector<CallInst *, 32> SCIntrCalls;
for (Instruction &I : instructions(F)) {
auto *CI = dyn_cast<CallInst>(&I);
Function *Callee = nullptr;
if (!CI || CI->isIndirectCall() || !(Callee = CI->getCalledFunction()))
continue;
StringRef Name = Callee->getName();
if (!Name.startswith(SYCL_GET_SPEC_CONST_VAL))
continue;
SCIntrCalls.push_back(CI);
}
IRModified = IRModified || (SCIntrCalls.size() > 0);
for (auto *CI : SCIntrCalls) {
// 1. Find the symbolic ID (string literal) passed as the actual argument
// to the intrinsic - this should always be possible, as only string
// literals are passed to it in the SYCL RT source code, and application
// code can't use this intrinsic directly.
SmallVector<Instruction *, 3> DelInsts;
DelInsts.push_back(CI);
GlobalVariable *SymGlob = nullptr;
StringRef SymID = getStringLiteralArg(CI, 0, DelInsts, SymGlob);
Type *SCTy = CI->getType();
if (SetValAtRT) {
// 2. Spec constant value will be set at run time - then add the literal
// to a "spec const string literal ID" -> "integer ID" map, uniquing
// the integer ID if this is new literal
auto Ins = IDMap.insert(std::make_pair(SymID, 0));
if (Ins.second)
Ins.first->second = NextID++;
// 3. Transform to spirv intrinsic _Z*__spirv_SpecConstant*.
LLVMContext &Ctx = F.getContext();
// Generate arguments needed by the SPIRV version of the intrinsic
// - integer constant ID:
Value *ID = ConstantInt::get(Type::getInt32Ty(Ctx), NextID - 1);
// - default value:
Value *Def = genDefaultValue(SCTy, CI);
// ... Now replace the call with SPIRV intrinsic version.
Value *Args[] = {ID, Def};
constexpr size_t NArgs = sizeof(Args) / sizeof(Args[0]);
Type *ArgTys[NArgs] = {nullptr};
for (unsigned int I = 0; I < NArgs; ++I)
ArgTys[I] = Args[I]->getType();
FunctionType *FT = FunctionType::get(SCTy, ArgTys, false /*isVarArg*/);
Module &M = *F.getParent();
std::string SPIRVName = mangleFuncItanium(SPIRV_GET_SPEC_CONST_VAL, FT);
FunctionCallee FC = M.getOrInsertFunction(SPIRVName, FT);
assert(FC.getCallee() && "SPIRV intrinsic creation failed");
CallInst *SPIRVCall =
CallInst::Create(FT, FC.getCallee(), Args, "", CI);
CI->replaceAllUsesWith(SPIRVCall);
// Mark the instruction with <symbolic_id, int_id> pair for later
// recollection by collectSpecConstantMetadata method.
setSpecConstMetadata(SPIRVCall, SymID, NextID - 1);
// Example of the emitted call when spec constant is integer:
// %6 = call i32 @_Z20__spirv_SpecConstantii(i32 0, i32 0), \
// !SYCL_SPEC_CONST_SYM_ID !22
} else {
// 2a. Spec constant must be resolved at compile time - just replace
// the intrinsic with default C++ value for the spec constant type.
CI->replaceAllUsesWith(getDefaultCPPValue(SCTy));
}
for (auto *I : DelInsts) {
assert(I->getNumUses() == 0 && "removing live instruction");
I->removeFromParent();
I->deleteValue();
}
// Don't delete SymGlob here, as it may be referenced from multiple
// functions if __sycl_getSpecConstantValue is inlined.
}
}
return IRModified ? PreservedAnalyses::none() : PreservedAnalyses::all();
}
bool SpecConstantsPass::collectSpecConstantMetadata(
Module &M, std::map<StringRef, unsigned> &IDMap) {
bool Met = false;
for (Function &F : M) {
if (F.isDeclaration())
continue;
SmallVector<CallInst *, 32> SCIntrCalls;
for (Instruction &I : instructions(F)) {
auto *CI = dyn_cast<CallInst>(&I);
Function *Callee = nullptr;
if (!CI || CI->isIndirectCall() || !(Callee = CI->getCalledFunction()))
continue;
std::pair<StringRef, unsigned> Res = getSpecConstMetadata(CI);
if (!Res.first.empty()) {
IDMap[Res.first] = Res.second;
Met = true;
}
}
}
return Met;
}
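As a side note on the pass above: the minimal Itanium mangling it performs in `manglePrimitiveType` and `mangleFuncItanium` can be sketched outside of LLVM. The following is a hypothetical Python illustration (the type-code table mirrors the switch in `manglePrimitiveType`; it is not part of the pass itself):

```python
# Hypothetical sketch of the pass's minimal Itanium mangling
# (non-templated free function, primitive parameter types only).
TYPE_CODES = {"bool": "b", "char": "a", "short": "s",
              "int": "i", "long long": "x",
              "float": "f", "double": "d"}

def mangle_func_itanium(base_name, param_types):
    # _Z <length of name> <name> <one code per parameter type>
    codes = "".join(TYPE_CODES[t] for t in param_types)
    return f"_Z{len(base_name)}{base_name}{codes}"

print(mangle_func_itanium("__spirv_SpecConstant", ["int", "int"]))
# -> _Z20__spirv_SpecConstantii (matches the example emitted call in the pass)
```

This is the same shape as the `_Z20__spirv_SpecConstantii` name shown in the pass's own comment for an integer spec constant.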
Detecting true relationships in time series data with different orders of integration

Abstract It is fairly well known that proper time series analysis requires estimated equations to be balanced. Numerous scholars mistake this to mean that one cannot mix orders of integration. Previous studies have clarified the distinction between equation balance and mixing orders of integration, and shown that mixing orders of integration does not increase the risk of type I error when using the general error correction/autoregressive distributed lag (GECM/ADL) models, so long as equations are balanced (and other modeling assumptions are met). This paper builds on that research to assess the consequences for type II error when employing those models. Specifically, we consider cases where a true relationship exists, the left- and right-hand sides of the equation mix orders of integration, and the equation is still balanced. In the asymptotic case, we find that the different orders of integration do not preclude identification of the true relationship using the GECM/ADL. We then highlight that estimation is trickier in practice, in finite samples, as the data sometimes do not reveal the underlying process. But simulations show that even in these cases, researchers will typically draw accurate inferences as long as they select their models based on the observed characteristics of the data and test to be sure that standard model assumptions are met. We conclude by considering the implications for researchers analyzing or conducting simulations with time series data.
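The core setup the abstract describes can be illustrated with a toy simulation: regress a stationary I(0) series on the first difference of an I(1) regressor, so the equation mixes orders of integration yet remains balanced, and check that OLS recovers the true relationship. This is an illustrative sketch only; the sample size, coefficient and noise level are invented, not the authors' simulation design:

```python
import random

random.seed(1)
T = 4000
beta = 0.5  # true effect (illustrative)

# x_t is I(1): a pure random walk, so its first difference dx_t is I(0).
x, level = [], 0.0
for _ in range(T):
    level += random.gauss(0, 1)
    x.append(level)
dx = [x[t] - x[t - 1] for t in range(1, T)]

# y_t is I(0) and responds to dx_t: the equation is balanced even though
# y (I(0)) and x (I(1)) have different orders of integration.
y = [beta * d + random.gauss(0, 1) for d in dx]

# Simple OLS slope of y on dx recovers beta.
mx = sum(dx) / len(dx)
my = sum(y) / len(y)
slope = (sum((d - mx) * (v - my) for d, v in zip(dx, y))
         / sum((d - mx) ** 2 for d in dx))
print(slope)  # close to the true beta of 0.5
```

The point of the sketch is the balance condition: both sides of the estimated equation are I(0), so the different orders of integration of y and x pose no obstacle to identifying the relationship.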
Applications of Business Process Redesign in Hotel Daily Operations

Abstract The growing yet competitive global economy requires businesses to continuously seek ways to execute more efficiently while delivering products or services that meet customer expectations of quality and timeliness. Business process redesign (BPR) is an approach to analyzing, evaluating, and changing existing processes and sub-processes in the product/service manufacture and delivery cycle. Delivery of services is more difficult because of the nebulousness of the customer's final interpretation and the rapid evaporation of the delivery occurrence. The drop in American travel since September 11, 2001 has devastated hotel occupancy. Hotels delivering services that match or exceed customer expectations will be more likely to survive the ebb in customer room-night sales. This project applied BPR to the airport van pickup process of a large US hotel. Following customer complaints, the hotel gathered staff to discuss and propose solutions for reducing customer wait times at the airport curb. An initial analysis by management indicated that communications and employee training were two interventions that would likely improve the process. During the analysis period, customers riding in the van were surveyed to determine their satisfaction with the pickup service. The process was redesigned to add more communication devices to the concierge work area and to improve van driver training. Customers were resurveyed following the intervention. Significant improvement was noted in operator courtesy and friendliness and in decreased wait time. Allowing time to analyze processes within an operation can yield rapid improvements, often without major financial expenditure.
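The abstract reports a significant reduction in wait time after the redesign but gives no figures; a pre/post comparison of this kind is commonly assessed with a two-sample test. Below is a hypothetical sketch with invented wait-time data (the means, spreads and sample sizes are made up, not the hotel's survey results):

```python
import math
import random
import statistics

random.seed(42)
# Invented wait-time samples (minutes) before and after the redesign.
pre = [random.gauss(14.0, 4.0) for _ in range(60)]
post = [random.gauss(9.0, 3.0) for _ in range(60)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / \
           math.sqrt(va / len(a) + vb / len(b))

t_stat = welch_t(pre, post)
print(t_stat)  # well above ~2, i.e. the drop in wait time is significant
```

With survey data like this, a t statistic far above conventional critical values (~2 for these sample sizes) is what would justify the abstract's "significant improvement" claim for wait time.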
Resolved allergen-specific IgE sensitization among females and early polysensitization among males impact IgE sensitization up to age 24 years

To the Editor: Up to half of the adult population has allergen-specific immunoglobulin E antibodies (sIgE).1,2 Even though females and males are not equally affected, relatively little is known regarding the mechanisms behind differences in IgE sensitization between females and males. IgE polysensitization has been defined as IgE reactivity to several non-related (or not obviously related) allergenic source materials and has been shown to increase with age and to correlate with disease expression and multimorbidity.3,4 Although females and males do not seem to differ regarding IgE polysensitization in adulthood,5 a recent study indicates that IgE polysensitization is more common among boys than girls in childhood.5 Interestingly, findings from the longitudinal Isle of Wight study indicate that more females than males tend to outgrow their IgE sensitization.6 Investigation of the natural history of IgE sensitization requires large longitudinal population-based studies including repeated measurements of sIgE. We therefore undertook a study to explore the dynamics of IgE sensitization from early childhood to young adulthood in relation to sex, including new and resolved IgE sensitization and polysensitization, as well as IgE trajectories over time. In the Swedish population-based birth cohort BAMSE (N = 4089), participants have been followed up to young adulthood, including collection of blood at ages 4, 8, 16 and 24 years for analysis of specific IgE to 14 common food (peanut, soy, wheat, milk, egg, cod) and airborne (timothy, birch, mugwort, cat, dog, horse, house dust mites and Cladosporium herbarum) allergens.7 Females (n = 656) and males (n = 570) with complete data on sIgE at all four follow-ups were included in the current study.
Any IgE sensitization was defined as sIgE ≥ 0.35 kUA/L to one or more of the tested food and/or airborne allergens. Polysensitization was defined as sIgE ≥ 0.35 kUA/L to four or more specific allergens at the same follow-up, as in previous reports from BAMSE.8 New IgE sensitization: any IgE sensitization at a follow-up in an individual with no IgE sensitization at the previous follow-up. Resolved IgE sensitization: no IgE sensitization to any of the specific allergens at a follow-up in an individual who displayed any IgE sensitization at the previous follow-up. The study was approved by the regional ethics committee in Stockholm (ethics approval number 2016/138031/2), and participants and/or caregivers provided written informed consent. Details on data collection and definitions are presented in the Methods section in this article's supporting information. A similar proportion of females (22.6%) and males (23.7%) were IgE-sensitized to any of the 14 allergens at 4 years (Figure 1). With increasing age, the prevalence of any IgE sensitization increased for both sexes, but a steeper increase was seen for males. Figure 1 also shows how many allergens females and males were sensitized to at the different ages. Fewer IgE-sensitized females than males were polysensitized to four or more allergens at all ages, with the most pronounced difference at age 4 years: 12.6% (20/148) vs. 31.1% (42/135). The corresponding rates at 8 years were 30.7% (65/212) vs. 42.9% (85/198). Differences diminished further with age, and at 16 and 24 years, differences were small. Very few polysensitized individuals outgrew their sensitization between follow-ups: zero individuals between 4 and 8 years, one between 8 and 16 years and five between 16 and 24 years. Thus, in total four females and two males with polysensitization outgrew their IgE sensitization over time.
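The sensitization categories defined above translate directly into simple predicates over a panel of sIgE measurements. The sketch below is purely illustrative; the helper names and the example panels are invented, and only the 0.35 kUA/L cut-off and the four-allergen rule come from the text:

```python
THRESHOLD = 0.35  # kUA/L cut-off from the study definitions

def any_sensitized(panel):
    """panel: dict mapping allergen name -> sIgE level (kUA/L)."""
    return any(level >= THRESHOLD for level in panel.values())

def poly_sensitized(panel):
    # sIgE >= 0.35 kUA/L to four or more allergens at the same follow-up
    return sum(level >= THRESHOLD for level in panel.values()) >= 4

def new_sensitization(prev_panel, cur_panel):
    # sensitized now, not sensitized at the previous follow-up
    return not any_sensitized(prev_panel) and any_sensitized(cur_panel)

def resolved_sensitization(prev_panel, cur_panel):
    # sensitized at the previous follow-up, not sensitized now
    return any_sensitized(prev_panel) and not any_sensitized(cur_panel)

# Invented example: sensitized to cat and birch at age 8, nothing at age 16.
age8 = {"cat": 2.1, "birch": 0.9, "peanut": 0.1}
age16 = {"cat": 0.2, "birch": 0.1, "peanut": 0.0}
print(resolved_sensitization(age8, age16))  # True
print(poly_sensitized(age8))                # False (only two allergens >= 0.35)
```

Classifying each participant at each follow-up with predicates like these is what yields the new/resolved/polysensitized counts reported in the letter.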
We further explored IgE polysensitization in relation to sex over time using generalized estimating equations, and the same analysis was used to explore new and resolved IgE sensitization (Figure 2). Covariates included in Table S2 were tested as potential confounders using backward selection, but none of the factors impacted the associations between sex and any of the outcomes; therefore, unadjusted analyses are presented. Male sex was significantly associated with IgE polysensitization at 4 years (OR: 2.75, 95% CI: 1.41-5.37) but not at other ages (Figure 2). Sex was not associated with new IgE sensitization at ages 4 and 8 years. However, at ages 16 and 24 years, more males than females developed new IgE sensitization. The overall OR for the impact of male sex on new IgE sensitization was 1.15, 95% CI: 0.97-1.33. In contrast, resolved IgE sensitization was less common among males than females at all ages, overall OR: 0.37, 95% CI: 0.25-0.53. Resolved IgE sensitization was seen for both food and airborne allergens, with comparable sex differences (data not shown). Both new and resolved IgE sensitization were characterized by significantly lower levels of specific IgE at all ages, while IgE polysensitization was associated with significantly higher levels of specific IgE at all ages (data not shown).
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

The current study includes only 30% of the original cohort. To facilitate evaluation of generalizability, we present baseline characteristics, allergic diseases and any IgE sensitization at ages 4, 8 and 16 years for the 1226 included individuals and for the original cohort (Table S1). As shown in Table S1, there were some differences in basic characteristics. Also, allergic diseases and IgE sensitization tended to be more prevalent, especially at higher ages, in the group that provided blood at all follow-ups, even though differences were small. A higher proportion of females (656/2024, 32.4%) than males (570/2064, 27.6%) provided blood at all four follow-ups and were included in the current study. To evaluate whether selection bias might impact the sex differences found in our study, we compared background factors between included females and males. Differences were small (Table S2). In accordance with the data in Table S1, any IgE sensitization tended to be more prevalent among both included females and males. However, differences were not more pronounced among males (data not shown). In this longitudinal population-based study, we show that a comparable number of females (22.6%) and males (23.7%) display any IgE sensitization at 4 years. With age, IgE sensitization became increasingly more common among both females and males.
The increase was steeper among males, with 52.6% of males displaying any IgE sensitization at age 24 years compared to 38.7% of females. We found that differences in trajectories of IgE sensitization from childhood to young adulthood between females and males can largely be explained by resolved IgE sensitization among females and early IgE polysensitization among males.

FIGURE 1 IgE sensitization from 4 to 24 years among females (n = 656) and males (n = 570) in the BAMSE birth cohort. Bars represent the proportion of IgE-sensitized females and males with any IgE sensitization at the respective age. Colour sections in bars show the proportions with IgE sensitization to one, two or three, four to seven and more than seven of the 14 tested specific allergens at each follow-up. *p-value <.05 regarding differences in any IgE sensitization between females and males.

FIGURE 2 Impact of male sex on new IgE sensitization, resolved IgE sensitization and IgE polysensitization up to age 24 years in the BAMSE cohort. Odds ratios were obtained by generalized estimating equations including 570 males and 656 females (reference group).

The finding that females outgrow their IgE sensitization to a higher degree than males confirms previous results from the Isle of Wight study, in which females tended to be more likely to outgrow their sensitization than males.6 Polysensitized individuals were unlikely to outgrow their sensitization. Thus, the accumulating sex differences in overall sensitization seen with increasing age (Figure 1) are probably partly explained by the high degree of early polysensitization among boys. Our results are in accordance with data from a cross-sectional population-based Polish study including 1409 individuals aged 6 to 44 years.
They found polyvalent sensitization (defined as sIgE to two or more allergens) to be significantly more prevalent among 6- to 7-year-old boys compared with girls of the same age, while there were no significant differences in the older age groups.5 Results from the German Multi-Centre Allergy Study birth cohort indicate that the earlier the sensitization onset, the stronger the tendency for polysensitization.9 Thus, even though a similar proportion of females and males displayed IgE sensitization at 4 years, it is possible that males developed their sensitization at a younger age than females (i.e. below age 4). The genetic influence on polysensitization has been evaluated in only a limited number of studies, which suggest associations with the HLA and C11orf30-LRRC32 regions, as well as Th2 signalling genes.3 Sex-specific genetic effects on sensitization and allergic diseases have been reported in the literature,10 but no consistent picture or explanation of the underlying biology has emerged. Whether primarily genetic, hormonal or environmental factors underlie the observed sex differences in IgE sensitization trajectories in our study, resolved sensitization in females and polysensitization in males, remains to be investigated. Strengths of our study include the population-based design, the long follow-up time and the collection of blood for analyses of sIgE at four time points. Adjustment for several potential confounders did not affect the results, although residual confounding can never be ruled out in an observational study. A limitation is that only 30% of the original cohort were included, and rates of sensitization are probably higher than in the general population, which may affect the generalizability of the findings. We explored three outcomes, and if the Bonferroni method had been applied, a p-value of .017 would be significant.
The associations for sex and IgE poly-sensitization at 4 years, as well as all associations regarding sex and resolved IgE sensitization, are significant at that level. At the same level, the association for sex and new IgE sensitization was significant at 16 years but not at 24 years. A weakness of our study is that IgE sensitization was not measured before age 4 years. In summary, sex impacts IgE trajectories from childhood to young adulthood. We identified two factors that contribute to these sex differences: resolution of allergen-specific sensitization in females and higher rates of allergen poly-sensitization in males. Further analyses of the underlying determinants for these immunological events are warranted.

ACKNOWLEDGMENTS
We thank the children and parents participating in the BAMSE cohort and all staff involved in the study through the years. We would also like to thank Professor Magnus Wickman, former PI of the BAMSE study, for valuable input.

CONFLICT OF INTEREST
EM has received lecture fees from Novartis, Sanofi and Thermo Fisher Scientific outside the submitted work. NB has received consultancy fees from Pfizer and Sanofi outside the submitted work. MvH has received lecture fees from Thermo Fisher Scientific and ALK, and consultancy fees from Biomay AG, Vienna, Austria, and Hycor Biomedical LLC, CA, US, outside the submitted work. Dr. Westman reports personal fees from ALK (consultancy fees) outside the submitted work. The other authors report no conflict of interest relevant to this article.

AUTHOR CONTRIBUTIONS
Data collection was managed by EM, IK and AB. Statistical analysis was conducted by NB and EM. Analysis and drafting of the manuscript were conducted by NB and EM. All authors participated in critical revision of the manuscript, provided important intellectual input and approved the final version.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. |
/*------------------------------------------------------------------------------
*
* Copyright (c) 2011-2020, EURid vzw. All rights reserved.
* The YADIFA TM software product is provided under the BSD 3-clause license:
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of EURid nor the names of its contributors may be
* used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
*------------------------------------------------------------------------------
*
*/
#include "dnscore/dnscore-config.h"
#include "dnscore/pool.h"
#include "dnscore/logger.h"
extern logger_handle *g_system_logger;
#define MODULE_MSG_HANDLE g_system_logger
static mutex_t pool_chain_mtx = MUTEX_INITIALIZER;
static pool_s *pool_chain = NULL;
static void pool_reset_nop(void *ptr, void *args)
{
(void)ptr;
(void)args;
}
void pool_init_ex(pool_s *pool, pool_allocate_callback *allocate_, pool_free_callback *free_, pool_reset_callback *reset_, void *allocate_args, const char* name)
{
#if DEBUG
// ensure there are no double initialisations
pool_s *first = pool_chain;
while(first != NULL)
{
if(first == pool)
{
abort();
}
first = first->next;
}
#endif
ptr_vector_init(&pool->pool);
pool->allocate_method = allocate_;
pool->free_method = free_;
pool->reset_method = reset_;
pool->allocate_args = allocate_args;
mutex_init(&pool->mtx);
pool->allocated_count = 0;
pool->released_count = 0;
pool->name = name;
pool->max_size = 0;
pool->current_count = 0;
pool->peak_count = 0;
pool->hard_limit = FALSE;
pool->maxed = FALSE;
pool_set_size(pool, 0x10000);
mutex_lock(&pool_chain_mtx);
pool->next = pool_chain;
pool_chain = pool;
mutex_unlock(&pool_chain_mtx);
}
void
pool_init(pool_s *pool, pool_allocate_callback *allocate_, pool_free_callback *free_, void *allocate_args, const char *name)
{
pool_init_ex(pool, allocate_, free_, pool_reset_nop, allocate_args, name);
}
void
pool_log_stats_ex(pool_s *pool, logger_handle* handle, u32 level)
{
if(pool != NULL)
{
logger_handle_msg(handle, level, "pool '%s' handled %llu allocations and %llu releases; pooled %i maxed at %i; using %u peaked at %u",
pool->name, pool->allocated_count, pool->released_count,
pool->pool.offset + 1, pool->max_size,
pool->current_count, pool->peak_count);
}
else
{
logger_handle_msg(handle, MSG_ERR, "pool is NULL");
}
}
void
pool_log_stats(pool_s *pool)
{
pool_log_stats_ex(pool, MODULE_MSG_HANDLE, MSG_DEBUG);
}
void
pool_log_all_stats_ex(logger_handle* handle, u32 level)
{
mutex_lock(&pool_chain_mtx);
pool_s *p = pool_chain;
while(p != NULL)
{
pool_log_stats_ex(p, handle, level);
p = p->next;
}
mutex_unlock(&pool_chain_mtx);
}
void
pool_log_all_stats()
{
pool_log_all_stats_ex(MODULE_MSG_HANDLE, MSG_DEBUG);
}
void
pool_finalize(pool_s *pool)
{
#if DEBUG
pool_log_stats(pool);
#endif
mutex_lock(&pool_chain_mtx);
pool_s **pp = &pool_chain;
while(*pp != NULL)
{
if(*pp == pool)
{
*pp = pool->next;
break;
}
pp = &(*pp)->next;
}
mutex_unlock(&pool_chain_mtx);
u64 delta;
mutex_lock(&pool->mtx);
delta = pool->allocated_count - pool->released_count;
for(s32 i = 0; i <= pool->pool.offset; i++)
{
pool->free_method(pool->pool.data[i], pool->allocate_args);
pool->pool.data[i] = NULL;
}
ptr_vector_destroy(&pool->pool);
mutex_unlock(&pool->mtx);
mutex_destroy(&pool->mtx);
pool_log_stats(pool);
if(delta != 0)
{
        log_warn("pool '%s' leaked: %llu items", pool->name, delta); // delta is u64, so %llu, not %d
}
#if DEBUG
memset(pool, 0xe0, sizeof(pool_s));
#endif
}
void*
pool_alloc(pool_s *pool)
{
void *p;
mutex_lock(&pool->mtx);
if(pool->hard_limit)
{
if(pool->current_count >= pool->max_size + 1)
{
if(!pool->maxed) // the maxed flag helps to only complain once the limit is reached
{
log_warn("pool '%s' : pool usage reached maximum %i > %i", pool->name, pool->peak_count, pool->max_size);
pool->maxed = TRUE;
}
mutex_unlock(&pool->mtx);
return NULL;
}
pool->maxed = FALSE;
}
pool->allocated_count++;
if(++pool->current_count > pool->peak_count)
{
pool->peak_count = pool->current_count;
}
if(pool->pool.offset >= 0)
{
p = ptr_vector_pop(&pool->pool);
mutex_unlock(&pool->mtx);
pool->reset_method(p, pool->allocate_args);
}
else
{
mutex_unlock(&pool->mtx);
p = pool->allocate_method(pool->allocate_args);
}
log_debug7("pool '%s': alloc %p", pool->name, p);
return p;
}
void
pool_release(pool_s *pool, void *p)
{
log_debug7("pool '%s': release %p", pool->name, p);
mutex_lock(&pool->mtx);
if((--pool->current_count) < 0)
{
log_err("pool '%s': <0: %d", pool->name, pool->current_count);
}
if(pool->pool.offset < pool->max_size)
{
ptr_vector_append(&pool->pool, p);
}
else
{
pool->free_method(p, pool->allocate_args);
}
pool->released_count++;
mutex_unlock(&pool->mtx);
}
void
pool_set_size(pool_s *pool, s32 max_size)
{
yassert(ptr_vector_size(&pool->pool) <= max_size);
ptr_vector_resize(&pool->pool, max_size);
pool->max_size = max_size - 1;
}
|
#ifndef _RC4_H_
#define _RC4_H_
#include <stddef.h> /* size_t, used in the declarations below */
typedef struct _RC4_CTX
{
unsigned char S[256];
unsigned char x, y;
} RC4_CTX;
void arc4_set_key(RC4_CTX *ctx, const unsigned char *in_key, int key_len);
void arc4_crypt(RC4_CTX *ctx, unsigned char *byte);
void arc4_crypt_message(RC4_CTX *ctx, const void *msg, size_t msg_len, void *dst);
void EncryptRc4(const void *key, size_t keylen, void *dst, const void *src, size_t len);
#endif
|
A review of optical coherence tomography angiography (OCTA)

Optical coherence tomography angiography (OCTA) is a new, non-invasive imaging technique that generates volumetric angiography images in a matter of seconds. This is a nascent technology with potentially wide applicability for retinal vascular disease. At present, level 1 evidence of the technology's clinical applications does not exist. In this paper, we introduce the technology, review the available English-language publications regarding OCTA, and compare it with the current angiographic gold standards, fluorescein angiography (FA) and indocyanine green angiography (ICGA). Finally, we summarize its potential application to retinal vascular diseases. OCTA is quick and non-invasive, and provides volumetric data with the clinical capability of specifically localizing and delineating pathology, along with the ability to show both structural and blood flow information in tandem. Its current limitations include a relatively small field of view, inability to show leakage, and a proclivity for image artifact due to patient movement or blinking. Published studies hint at OCTA's potential efficacy in the evaluation of common ophthalmologic diseases such as age-related macular degeneration (AMD), diabetic retinopathy, artery and vein occlusions, and glaucoma. OCTA can detect changes in choroidal blood vessel flow and can elucidate the presence of choroidal neovascularization (CNV) in a variety of conditions, but especially in AMD. It provides a highly detailed view of the retinal vasculature, which allows for accurate delineation of the foveal avascular zone (FAZ) in diabetic eyes and detection of subtle microvascular abnormalities in diabetic and vascular-occlusive eyes. Optic disc perfusion in glaucomatous eyes is notable as well on OCTA.
Further studies are needed to more definitively determine OCTA's utility in the clinical setting and to establish whether this technology may offer a non-invasive option for visualizing the retinal vasculature in detail.

Introduction
Optical coherence tomography angiography (OCTA) is a new non-invasive imaging technique that employs motion contrast imaging of high-resolution volumetric blood flow information to generate angiographic images in a matter of seconds. OCTA compares the decorrelation signal (differences in the backscattered OCT signal intensity or amplitude) between sequential OCT b-scans taken at precisely the same cross-section in order to construct a map of blood flow. Axial bulk motion from patient movement is eliminated, so sites of motion between repeated OCT b-scans represent strictly erythrocyte movement in retinal blood vessels. OCTA requires higher imaging speeds than most currently available OCT systems can provide in order to obtain a densely sampled volume. Conventional OCT device scanning speeds would result in too much trade-off between decreased field of view, lower image quality, and greatly increased scanning time.

Comparing OCTA with FA and ICGA
Fluorescein angiography (FA) and indocyanine green angiography (ICGA) are both invasive tests that require intravenous administration of dye and imaging for up to 10-30 minutes. They provide two-dimensional image sets that allow for dynamic visualization of blood flow with a wide field of view. Therefore, patterns of dye leakage, pooling, and staining can be appreciated and are well documented in the literature. FA remains the gold standard for the detection of choroidal neovascularization (CNV), as well as retinal neovascularization such as neovascularization of the disc (NVD) and neovascularization elsewhere (NVE).
However, retinal pathology can be obscured by this leakage as well as by hemorrhage or media opacities, and localization of the depth of a lesion and size delineation of neovascularization can be difficult due to dye leakage and poor stereopsis, and because these imaging modalities are not depth resolved. As a result, segmentation of different layers is not routinely possible with FA or ICGA. Therefore, identification of the axial location of pathology requires an understanding of patterns of blockage and leakage. For example, differentiation between type 1 CNV, which is found between the retinal pigment epithelium (RPE) and Bruch's membrane, and type 2 CNV, which is found in the subretinal space above the RPE, requires understanding that the RPE blocks underlying fluorescence, so type 1 CNV requires a larger amount of dye to accumulate before hyperfluorescence is apparent. FA and ICGA have other drawbacks that can limit their widespread use. Since they are invasive, relatively expensive, and time-consuming, they are not ideal techniques to use on a regular basis in a busy clinical setting. Although considered safe, the dyes pose risks ranging from nausea to allergic reactions, including anaphylaxis in rare instances. Aside from allergic reactions, the likelihood of which increases with frequency of use, indocyanine green dye is contraindicated in pregnancy and kidney disease. For the evaluation of patients requiring frequent follow-up exams or of those who may not tolerate injection of intravenous dye, a rapid non-invasive technique to visualize retinal and choroidal vessels would be beneficial. OCTA, in comparison, is a non-invasive technique that acquires volumetric angiographic information without the use of dye. Each three-dimensional scan set takes approximately six seconds to obtain.
The en-face images (OCT angiograms) can then be scrolled outward from the internal limiting membrane (ILM) to the choroid to visualize the individual vascular plexuses and segment the inner retina, outer retina, choriocapillaris, or other area of interest. The en-face acquisition areas currently range from 2 × 2 mm to 12 × 12 mm, with the scan quality greatly decreased with a widened field of view, since the same number of OCT b-scans is used for all scanning areas. The 12 × 12 mm scan is only available on research prototypes. The 3 × 3 mm OCT angiograms appear to be of higher resolution than the currently available FA/ICGA images, and a study by Matsunaga et al. deduced that they were at least equivalent in showing important vascular detail. Use of the montage technique allows for a larger field of view, much like FA/ICGA, while maintaining this improved resolution (Figure 1; de Carlo TE et al., unpublished data in review). Carl Zeiss, Inc (Carl Zeiss Meditec, Dublin, CA) is developing automatic wide-field montage software, which employs motion tracking to track the eyes and stitch images together. OCTA provides flow information at a fixed point in time. Although leakage is not appreciable, exact delineation and size measurements can be performed for pathology such as CNV (de Carlo TE et al., unpublished data in review). This is especially useful for identification of type 1 CNV, where localization is inferential and therefore may be inaccurate with FA/ICGA. Retinal blood flow on OCTA can be obscured by hemorrhage, as this decreases the ability of light to penetrate into the deeper layers of the eye. OCTA provides both structural and functional (i.e. blood flow) information in tandem. The "corresponding" OCT b-scans can be co-registered with the simultaneous OCT angiograms, so the operator is able to scroll through the OCT angiogram like a cube scan. As a result, the precise location of pathology can be viewed on the corresponding OCT b-scans.
The axial resolution of the corresponding OCT b-scans is lower than that of the typical highly sampled line scans and is similar to the resolution of individual OCT b-scans within a volumetric cube scan. Both the retinal and the choroidal microvasculature can be visualized using OCTA, whereas FA is used for seeing the retinal vessels and ICGA is better suited for imaging the choroid. Using the present technology, OCTA is more prone to artifact than FA or ICGA. The larger retinal vessels cause a "ghost image", referred to as a shadow artifact, when segmenting deeper layers, especially the outer retina. This can make it more difficult to appreciate the presence of abnormal vasculature in the deeper layers. Because OCTA uses the principle that movement in the back of the eye represents blood flow, it is prone to motion artifact. White lines (representing decorrelation signal over the entire b-scan) appear in areas of bulk patient movement, such as when the patient loses fixation or moves. Conversely, blinks appear as a black line across the OCT angiogram because the OCT signal is blocked from reaching the retina and the software therefore detects no movement. Although erythrocytes should be the only moving objects in the retina, some non-vascular structures such as fine tissue may also cause a decorrelation signal, especially if the patient is moving. For example, the edges of a retinal pigment epithelial detachment (RPED) often show up on OCTA as white noise artifact in cases of increased patient movement. It is postulated that because the RPE is a fine structure, in areas of disruption such as an RPED it can presumably move and therefore be detected on the OCT angiogram. On the other hand, OCTA can also miss areas of slow blood flow, such as in microaneurysms or fibrotic CNV.
Since OCTA relies on change between consecutive b-scans, it will detect flow only above a minimum threshold, the slowest detectable flow, which is determined by the time between the two sequential OCT b-scans. Lesions that have flow below the slowest detectable flow would therefore not be visualized using this imaging technique. Increasing the time between consecutive OCT b-scans could allow for increased flow detection but would offer a trade-off due to increased movement artifact. One of the advantages of a higher-speed system is that multiple volumetric sets can be obtained at each cross-section, so the threshold can be altered later by selecting different time frames between the OCT b-scans to determine the optimal image quality. Therefore, if a low-flow vessel is undetectable using the first and second OCT b-scans at a given cross-section, the image may be processed using the first and third OCT b-scans to increase the time between the OCT b-scans, thereby decreasing the minimum threshold. A couple of publications have qualitatively compared OCTA with FA. Spaide et al. described the peripapillary retinal vascular layers in 12 normal eyes, finding that OCTA provided improved visualization of all the vascular layers, including the radial peripapillary and deep capillary networks that were not well distinguished on FA. OCTA imaging of the perifoveal region was reported by Matsunaga et al., demonstrating that the ability to see the normal retinal vasculature was equivalent to that of FA.

OCTA of normal eyes
The most widely available prototype OCTA system is the AngioVue software of the RTVue XR Avanti spectral-domain OCT (SD-OCT) (Optovue, Inc, Fremont, CA), which uses a split-spectrum amplitude decorrelation angiography (SSADA) algorithm. The device obtains volumetric scans of 304 × 304 A-scans at 70,000 A-scans per second in approximately 3.0 seconds.
The software offers the option of 2 × 2 mm, 3 × 3 mm, 6 × 6 mm, and 8 × 8 mm OCT angiograms (Figure 2A-C) and automated segmentation of these full-thickness retinal scans into the "superficial" and "deep" inner retinal vascular plexuses, outer retina, and choriocapillaris (Figure 2E-H). The OCT angiogram segmentation of the superficial inner retina contains a projection of the vasculature in the retinal nerve fiber layer (RNFL) and ganglion cell layer (GCL) (Figure 2E). The deep inner retina OCT angiogram segmentation shows a composite of the vascular plexuses at the border of the inner plexiform layer (IPL) and inner nuclear layer (INL) and the border of the INL and outer plexiform layer (OPL) (Figure 2F). The OCTA prototype with the fastest acquisition rate was developed by the Massachusetts Institute of Technology using a swept-source OCT (SS-OCT) device (Department of Electrical Engineering and Computer Science and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA). This ultra-high-speed prototype employs a vertical cavity surface emitting laser (VCSEL) operating at 1060 nm wavelength, which allows increased light penetration into pigmented tissues and improved choroidal blood flow visualization compared to the light source used in SD-OCT. The SS-OCTA system obtains scans of 500 × 500 A-scans at 400,000 A-scans per second in approximately 3.8 seconds. This ultra-high speed allows for imaging of wider fields of view. The prototype can be manipulated to obtain OCT angiograms up to 12 × 12 mm; however, it is most commonly used to create 3 × 3 mm and 6 × 6 mm OCT angiograms of great detail (Figure 3A-B). Full-thickness scans are manually segmented into the superficial (plexus at the RNFL), intermediate (plexus at the GCL), and deep (plexuses at the IPL/INL and INL/OPL borders) inner retinal vascular plexuses, outer retina, choriocapillaris, and choroidal layers (Figure 3D-F).
Using this OCTA system, the choriocapillaris and choroidal vessels were described in normal eyes by Choi et al.

OCTA of dry (Non-Neovascular) AMD
Dry age-related macular degeneration (AMD) is characterized by drusen, pigmentary changes, and photoreceptor and RPE loss, called geographic atrophy (GA). Decreased foveolar choroidal blood flow is associated with AMD and increased drusen extent, and it has been hypothesized that choroidal blood flow may predict disease progression. Choi et al. (unpublished data, presented sometimes associated with drusen. Figures 4 and 5 demonstrate discrete areas of decreased signal at the choriocapillaris level below many but not all drusen in three eyes. These areas of alteration did not appear to be due to shadowing (from material in the drusen), and some choroidal vessels were appreciated below these areas. However, further studies would be necessary to determine if the choriocapillaris changes associated with the drusen are true areas of flow impairment. Choriocapillaris flow alterations are also shown in two eyes along the border of GA in Figure 6.

OCTA of wet (Neovascular) AMD
Several publications concerning OCTA of eyes with wet AMD appear in the literature. In July 2014, Jia et al. first described the ability of a prototype SS-OCTA system to visualize and quantify CNV that had been seen on FA in five eyes. Then in November 2014, Moult and Choi et al. described CNV in 16 of 19 eyes with neovascularization, noting that the majority of these eyes (14/16, 88%) also demonstrated choriocapillaris alteration surrounding the CNV. De Carlo et al. described qualitative and quantitative characteristics of CNV in 48 eyes. The group determined the sensitivity and specificity of the prototype AngioVue software, using FA as the ground truth, to be 50% (4/8) and 91% (20/22) respectively, hypothesizing that the low sensitivity was due to the small sample size and blockage from large amounts of retinal hemorrhage in some patients.
Figures 7 and 8 illustrate three examples of CNV, including one type 3 CNV (retinal angiomatous proliferation, RAP), on OCTA confirmed with FA/ICGA, using the AngioVue OCTA software of the RTVue XR Avanti (Optovue, Inc., Fremont, CA). Figure 9 shows two OCTA examples of CNV, one of which was treatment-naïve, using the SS-OCT prototype (Department of Electrical Engineering and Computer Science and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA).

OCTA of diabetes
There are few published papers as of early 2015 on OCTA of diabetic retinopathy. Choi et al. (unpublished data) demonstrated that OCTA of diabetic eyes ranging from no retinopathy to proliferative diabetic retinopathy (PDR) showed choriocapillaris abnormalities and/or retinal microvascular abnormalities such as microaneurysms, vascular remodeling adjacent to the foveal avascular zone (FAZ), an enlarged FAZ, and capillary tortuosity and dilation. OCTA and FA were compared in unpublished data by Salz et al. The group supported the utility of OCTA in evaluating the FAZ and the perifoveal intercapillary area, showing that they were sequentially enlarged at each stage of diabetic retinopathy (from normal eyes to PDR). The data showed that OCTA visualized the majority, but not all, of the microaneurysms visualized by FA, likely because OCTA is limited by the principle of slowest detectable flow. However, OCTA was able to appreciate some microaneurysms that were not detected by FA. OCTA also successfully detected other abnormalities that were not evident on FA, such as areas of retinal non-perfusion, reduced capillary density, and increased vessel tortuosity. de Carlo et al. (unpublished data in review) described a wide-field OCTA montage of an eye with newly proliferative diabetic retinopathy. The wide-field montage OCTA image also successfully allowed visualization of an enlarged FAZ, the perifoveal intercapillary area, and multiple microaneurysms.
It also provided a larger field of view, allowing more peripheral detection of microvascular changes, early NVE, and areas of capillary non-perfusion, including areas too small to visualize on FA.

(B2) Full-thickness 3 × 3 mm OCT angiogram, which provides improved detail over 6 × 6 mm OCT angiograms, demonstrates higher sensitivity in detecting microvascular abnormalities. The FAZ appears enlarged. Aneurysms seen on FA in B1 that are also seen on OCTA are circled in yellow. Aneurysms on FA that are seen as areas of capillary non-perfusion on OCTA are circled in blue.

Figure 10 shows an enlarged FAZ on OCTA and compares OCTA and FA in the identification of microaneurysms in two eyes with non-proliferative diabetic retinopathy (NPDR). Capillary non-perfusion and other retinal microvascular abnormalities are demonstrated in Figure 11. OCTA examples of NVD and NVE in PDR eyes are shown in Figure 12.

OCTA of artery and vein occlusion
Retinal vascular occlusions have yet to be described in the literature using OCTA as an imaging modality. However, preliminary work at the New England Eye Center in Boston, MA shows that OCTA may be useful for evaluating these diseases. Unpublished data in review by de Carlo et al. described a case of branch retinal vein occlusion (BRVO) using a wide-field montage technique. The OCTA showed a large wedge-shaped area of capillary non-perfusion in the inferotemporal macula with clear delineation of the boundary of ischemia, and vascular abnormalities such as microaneurysms, telangiectasis, and anastomoses. Figure 13 shows OCT angiograms of an acute branch retinal artery occlusion (BRAO) and a subacute central retinal artery occlusion (CRAO). The BRAO demonstrates wedge-shaped areas of capillary non-perfusion that correlate to areas of abnormalities on the retinal thickness map. This illustrates the potential use of OCTA in pinpointing areas of ischemia and edema.
The CRAO shows diffuse capillary non-perfusion in areas supplied by the central retinal artery, as seen on the same-day FA. Flow is still seen in the major retinal vessels. Around the optic disc, there is an absence of blood flow in the superficial disc vasculature supplied by the central retinal artery, but the lamina cribrosa blood flow remains intact. As OCTA provides a snapshot in time, it does not demonstrate delayed arteriovenous transit time as FA does. A case of BRVO and a case of central retinal vein occlusion (CRVO) are illustrated in Figure 14. OCTA of the BRVO shows capillary non-perfusion superotemporally along the superior arcade extending into the FAZ, and telangiectatic vessels, capillary loops, and possible microaneurysms at the border of the ischemic areas.

Figure 11 OCTA of NPDR. The right eye (A) and left eye (B) of a 58-year-old Caucasian man with non-proliferative diabetic retinopathy and diabetic macular edema (DME), using the AngioVue optical coherence tomography angiography (OCTA) software of the RTVue XR Avanti (Optovue, Inc., Fremont, CA). (A1) Full-thickness (internal limiting membrane to Bruch's membrane) 6 × 6 mm OCT angiogram shows microvascular abnormalities such as areas of capillary non-perfusion (yellow arrows), capillary loops, and microaneurysms. (A2) En-face structural OCT with a red line corresponding to the highly sampled OCT b-scan in A3. (A3) 12 mm highly sampled OCT b-scan through the fovea demonstrating DME and hard exudates. (B1) Full-thickness 3 × 3 mm OCT angiogram, which provides improved detail over 6 × 6 mm OCT angiograms, shows microvascular abnormalities such as areas of capillary non-perfusion (yellow arrows), capillary loops, and microaneurysms. (B2) En-face structural OCT with a red line corresponding to the highly sampled OCT b-scan in B3. (B3) 12 mm highly sampled OCT b-scan through the fovea demonstrating DME and hard exudates.
The OCTA of the chronic CRVO demonstrates diffuse capillary non-perfusion continuous with the FAZ, and telangiectatic vessels.

OCTA of glaucoma
OCTA is a useful tool for evaluating optic disc perfusion in glaucomatous eyes. The normally dense peripapillary microvascular network is attenuated in both the superficial disc vasculature and the deeper lamina cribrosa. Averaging the decorrelation signal in OCT angiograms approximates the area of microvasculature and allows the user to calculate the flow index, which is decreased in eyes with glaucoma. The flow index has been shown to have both a very high sensitivity and specificity in differentiating glaucomatous eyes from normal eyes.

Conclusions
OCTA is a new technology that has great potential for use in the clinical setting. Compared with FA and ICGA, the current retinal angiographic gold standards, OCTA's advantages are that it is non-invasive, acquires volumetric scans that can be segmented to specific depths, uses motion contrast instead of intravenous dye, can be obtained within seconds, provides accurate size and localization information, visualizes both the retinal and choroidal vasculature, and shows structural and blood flow information in tandem. Disadvantages of OCTA are its limited field of view, inability to view leakage, increased potential for artifacts (blinks, movement, vessel ghosting), and inability to detect blood flow below the slowest detectable flow. OCTA has been shown to be a useful imaging modality for the evaluation of common ophthalmologic diseases such as AMD, diabetic retinopathy, artery and vein occlusions, and glaucoma. In some cases OCTA has even been shown to detect pathology not seen on FA. In the future, faster scanning speeds will be crucial to obtaining larger fields of view with higher resolution. More studies are needed to determine OCTA's utility in the clinical setting and to determine whether this technology may offer a non-invasive option for visualizing the retinal vasculature in detail. |
package com.currencycheck.util;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
/**
 * Unit tests for {@link CurrencyFormatter}.
 */
public class CurrencyFormatterTest {
@Test
public void formatTest(){
//TODO: Implement test method
Assertions.fail();
}
}
|
Almost four years after the brutal murders of Anna Catherina businesswoman Jennifer Persaud and her two young sons, police have arrested a man who has reportedly confessed to the crime after his wife, tired of the constant abuse she suffered at his hands, tipped off the police.
Police sources told Stabroek News yesterday that the man detailed how he committed the gruesome crime in September 2012 after he had gone to rob the shop that Persaud operated. Suspicion had initially fallen on Persaud’s partner and the man currently in custody had not been considered a suspect. However, his wife, who kept his secret for years, told the police to question him about the murders after she got tired of his constant abuse.
The man is likely to be charged soon.
In September 2012, Persaud, 41, the owner of a liquor store and bar at Anna Catherina, and her sons Afridi Bacchus, 6, and Jadon Persaud, who was 18 months old, were murdered and their bodies found at her Anna Catherina, West Coast Demerara home. The trio's throats had been slit and they had also suffered stab wounds. No one has ever been charged, and the police had been heavily criticised for their sloppy investigation into the murders, including by the Director of Public Prosecutions.
Yesterday, police sources said the suspect’s wife went to the police station to file a domestic violence report against her husband. Stabroek News was told that after making the report, the woman told the law enforcement officials that they should also question her husband about the murder of Persaud and her sons.
Acting on the tip provided, investigators arrested the alleged killer. Under interrogation, he confessed to committing the triple murder, sources close to the investigation said.
The man told investigators that he would normally be at Persaud’s shop. He said on the night of the murders, he went with the intention of robbing the liquor store. He explained to the ranks that he gained access to the building by pushing his hand through an open space to open a door and entered the business place.
The man told investigators that he went upstairs after he did not collect a substantial amount of cash from the shop. He related that once upstairs, he collected a bag and Persaud woke up and saw him in the house. It was at this point, he told investigators, he stabbed the woman.
The man said Persaud’s elder son woke up and saw his face and he decided to slit his throat. Bacchus screamed when his throat was slit and the baby woke up, the source said. According to the source, after the alleged killer noticed that the baby was awake, he killed the 18-month-old.
The source related that the suspect further told investigators that he went home and his body was bloodied and he related to his wife what he did. The wife kept the secret but on Saturday decided to tip off the police as a result of the constant domestic abuse she suffered at his hands.
Persaud’s reputed husband had initially been arrested and questioned but police never had any substantial evidence to charge him. The reputed husband and Persaud were known to have domestic disputes.
Police had worked on the theory that it was not a case of robbery. It was explained that more than $40,000 was found by the woman’s bed head along with a quantity of jewellery. The police had worked on the theory that the killing could have been a crime of passion judging by the number of stabs inflicted on Persaud.
An Appraisal of Methods Recently Recommended for Testing Salt Sensitivity of Blood Pressure

h. Blood pressure determined with a 24-hour blood pressure monitoring device from the average of readings taken every 20 minutes during the day between 6 am and 9:59 pm, and every 30 minutes during the night. Cutoff for classifying subjects as SS set as a change in 24-hour average MAP of ≥ 10 mmHg.

i. Diet not controlled during the salt loading phase. Dietary instructions differed between the salt restriction phase and the salt loading phase. Potassium intake estimated to be approximately 90 mmol per day based on a single 24-hour urine collection study performed in each phase of the study.

j. Reproducibility of the testing protocol for classifying subjects as salt sensitive was determined from the results of 24-hour measurements of MAP. In an additional analysis, reproducibility was determined from the results of casual measurements of blood pressure. Based on the casual BP measurements, reproducibility of classifying the same subjects as SS on both tests was 23% and reproducibility of classifying the same subjects as SR on both tests was 76%. The casual blood pressure values were determined by averaging the results of 2 measurements taken 1 minute apart in sitting subjects.

k. In this study, the sample size represents the number of subjects consistently classified in both rounds of testing and does not represent the number of subjects that were in a particular category on initial testing. In addition to the 17 SS subjects and 13 SR subjects that were consistently classified, another 15 subjects gave inconsistent results on repeat testing. Of the subjects with inconsistent results on repeat testing, the numbers initially classified as SS versus SR were not reported.

l. Values for salt intake represent the ranges for mean salt intake estimated from measurements of 24-hour urine sodium excretion. Absolute values for MAP in SS and SR subgroups were not reported.

m. Blood pressure determined with a random-zero sphygmomanometer with measurements taken in sitting subjects at the end of each diet phase. Absolute values for salt-induced changes in MAP in the SS and SR subgroups were not reported. Cutoff for classifying subjects as SS set as a change in MAP of ≥ 5 mmHg.

n. Diet not controlled throughout the entire study. Diet potassium content and urinary potassium excretion not reported.

o. Of the total number of subjects entered into the study, 66% were consistently classified in repeat tests. The number of subjects classified as SS on initial testing that failed to be classified as SS on repeat testing was not reported.

p. In this study, the sample size represents the number of subjects consistently classified in both rounds of testing and does not represent the number of subjects that were in a particular category on initial testing. In addition to the 15 SS subjects and 25 SR subjects that were consistently classified, another 35 subjects gave inconsistent results on repeat testing. Of the subjects with inconsistent results on repeat testing, the numbers initially classified as SS versus SR were not reported.

q. Values for salt intake represent the mean salt intake estimated from measurements of 24-hour urine sodium excretion. Target salt intake was approximately 50 mmol/day in the low salt phase and 150 mmol/day in the high salt phase. Absolute values for MAP in SS and SR subgroups were not reported.

r. Blood pressure determined with a random-zero sphygmomanometer in sitting subjects. The pressure measurements were not made on the last day of each diet phase as recommended in the preferred dietary protocol. Rather, blood pressure was determined from the mean of 5 pairs of measurements taken over the

t. Of the total number of normotensive subjects studied, 53% were consistently classified in repeat tests. The number of subjects classified as SS on initial testing that failed to be classified as SS on repeat testing was not reported.

u. In this study, the sample size represents the number of subjects consistently classified in both rounds of testing and does not represent the number of subjects that were in a particular category on initial testing. In addition to the 22 SS subjects and 11 SR subjects that were consistently classified, another 21 subjects gave inconsistent results on repeat testing. Of the subjects with inconsistent results on repeat testing, the numbers initially classified as SS versus SR were not reported.

v. Of the total number of hypertensive subjects studied, 61% were consistently classified in repeat tests. The number of subjects classified as SS on initial testing that failed to be classified as SS on repeat testing was not reported.

w. Blood pressure determined with a 24-hour blood pressure monitoring device from the average of readings taken at 15-minute intervals during the day between 7 am and 10:00 pm, and every 30 minutes during the night. Absolute values for salt-induced changes in MAP in the SS and SR subgroups were not reported. Cutoff for classifying subjects as SS set as a change in 24-hour average MAP of ≥ 10 mmHg.

x. Controlled diet provided 65 mmol of potassium per day. See the published study for additional diet details.

y. Results reflect the analysis performed on 24-hour blood pressure recordings. When the analysis was performed on clinic blood pressure values determined from the average of 3 measurements obtained over 15 minutes in sitting subjects, the reproducibility of classifying subjects as SS in repeat testing was 50% and of classifying subjects as SR on repeat testing was 70%.
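The classification rule running through these footnotes is simple: a subject is salt sensitive (SS) if mean arterial pressure (MAP) rises by at least a protocol-specific cutoff (10 mmHg in some studies, 5 mmHg in others) between the low-salt and high-salt phases. A minimal sketch of that rule, using the standard MAP approximation (diastolic plus one third of pulse pressure); the class and method names are illustrative only:

```java
// Sketch of the SS/SR classification rule described in the footnotes above.
// MAP formula is the standard clinical approximation; names are illustrative.
public final class SaltSensitivity {

    /** MAP approximated as diastolic + one third of the pulse pressure. */
    public static double map(double systolic, double diastolic) {
        return diastolic + (systolic - diastolic) / 3.0;
    }

    /** True if MAP rose by at least the protocol cutoff on salt loading. */
    public static boolean isSaltSensitive(double lowSaltMap,
                                          double highSaltMap,
                                          double cutoffMmHg) {
        return highSaltMap - lowSaltMap >= cutoffMmHg;
    }

    public static void main(String[] args) {
        double low = map(120, 75);   // 90.0 mmHg on the low-salt phase
        double high = map(138, 86);  // about 103.3 mmHg on the high-salt phase
        // With the 10 mmHg cutoff used in several of the protocols above:
        System.out.println(isSaltSensitive(low, high, 10.0));
    }
}
```

The poor test-retest reproducibility reported above (e.g., 23% for SS on casual BP) is a property of the measurements, not of the rule itself: small measurement noise around the cutoff flips the classification.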
ISAR Consensus Guidelines on Safety and Ethical Practices in In vitro Fertilization Clinics

Study Question: What are the safe and ethical practices for ART applicable in India?

What is Already Known: The Indian IVF industry is booming; with the mushrooming of assisted reproductive technology (ART) clinics in the country, the need for regulation is immense. The ISAR has taken up this initiative to lead the way forward in establishing practice guidelines for the safe and ethical use of ARTs in our country. These guidelines discuss the points to consider before starting an IVF unit, the design of the laboratory, the staffing pattern and experience recommendations, laboratory safety guidelines, documentation and patient traceability, gamete traceability, handling of biological material, consumables and media, and the different consents and checklists, and also propose key performance indicators for the Indian scenario.

Study Design, Size, Duration: This is the report of a 2-day consensus meeting where two moderators were assigned to a group of experts to collate information on safe and ethical IVF practices in India. The meeting drew on surveys, available scientific evidence, and personal laboratory experience, presented by experts on pre-decided specific topics.

Participants/Materials, Setting, Methods: Expert professionals from ISAR representing the clinical and embryology fields.

Main Results and the Role of Chance: The report is divided into various components, including the regulations, the various requirements for an ART center, qualifications and training, recommendations on good practices, and quality management. The report and recommendations of the expert panel reflect the discussion on each of the topics and try to lay down good practice points for labs to follow.

Limitations, Reasons for Caution: The recommendations are solely based on expert opinion. Future availability of data may warrant an update.
Wider Implications of the Findings: These guidelines can help labs across the country to standardise their ART services and improve clinical outcomes.

Study Funding/Competing Interest(s): The consensus meeting and writing of the paper were supported by funds from CooperSurgical India.

Introduction

Infertility is a significant health problem across the reproductive age group in India. As a result, the demand for medically assisted modalities to alleviate infertility is growing. Since the birth of the first baby through in vitro fertilization (IVF) in 1978 in the UK, more than 8 million births have taken place worldwide through assisted reproductive technology (ART). The rapid evolution of ART for the treatment of the infertile couple has been one of the extraordinary medical accomplishments of our time. Infertility has long been a social taboo; although attitudes have evolved with the rapid developments of modern science, the desire for a child and a family successor remains a significant concern. Researchers have continued to make dynamic advances, and the extensive refinement of techniques in the field of ART has opened opportunities to find solutions to fertility problems for the wider population; however, ready access to these services also allows misuse, which needs to be regulated.

The regulatory perspective of in vitro fertilization in India

In 1982, the Indian Council of Medical Research (ICMR), a pioneering Indian organization in the field of biomedical sciences, recognizing the significance of infertility treatment, introduced a project (led by T. C. Anand Kumar and Indira Hinduja) at the Institute for Research in Reproduction (now ICMR-National Institute for Research in Reproductive Health) at Mumbai.
As a result, India's first fully scientifically documented test-tube baby, "Harsha," was born on August 6, 1986. Since then, the demand for infertility management has led to a mushrooming of IVF clinics in the country. ART in India faces quite a few regulatory concerns, and risks need evaluation at a larger scale, chiefly due to the absence of any regulations; the services offered by some ART clinics are questionable. To regulate these clinics, the ICMR developed the National Guidelines for Accreditation, Supervision, and Regulation of ART Clinics in India in 2005, which were transformed into the ART (Regulation) Bill, 2017 and the Surrogacy (Regulation) Bill, 2016. The bill is still under scrutiny.

Assisted Reproductive Technology Clinics in India as Per the National Registry

In India, the number of ART centers has been rising over the last decade. At the time of formulating this consensus, and based on the number of applications received, 490 ART clinics were enrolled under the National Registry of ART Clinics and Banks governed by the ICMR. It is a vast market, and hence many multinational companies have set up operations. Unfortunately, this health sector is still unorganized and unregulated. It is estimated that more than 5000 large and small centers offer ART services in India. India has active societies such as the Indian Society for Assisted Reproduction, the Indian Fertility Society, and the Association of Clinical Embryologists, which have a large number of members; unfortunately, they do not hold regulatory powers as far as ART practices in India are concerned.

Clinics involved in any one of the following activities should be regulated, registered, and supervised by the State Accreditation Authority/State Appropriate Authorities:

1. Any treatment involving the use of gametes that have been donated or collected or processed in vitro, except for artificial insemination (AIH) with husband's semen and for intrauterine insemination (IUI) by Level 1A clinics that do not process the gametes themselves
2. Any infertility treatment that involves the use and creation of embryos outside the body
3. The processing and/or storage of gametes or embryos
4. Research on human embryos.

The term ART clinic used in this document refers to a clinic involved in any one of the first three of the above activities.

Recommendations on assisted reproductive technology clinics

- Once the bill is passed, all ART centers/clinics should be registered with the National Registry of ART Clinics and Banks in India, ICMR
- There should be a provision for licensing of embryologists
- Registration of patients should be done with photographic identity and complete address (address proof is mandatory)
- ART professionals may be guided by the white papers/guidelines issued by the national ART bodies until the appropriate advisories are issued by the Government of India (GOI)
- There should be a grievance redressal forum for ART centers in the country.

Standards for in vitro fertilization clinics in India as per the Indian Council of Medical Research guidelines 2010

The ICMR recently finalized the National Guidelines for the Regulation of ART Clinics. According to the ICMR guidelines, infertility clinics have been categorized into three levels based on the availability and complexity of ART services. The guidelines provide minimum requirements regarding staff in infertility clinics as well as physical requirements for an ART clinic.
In this type of clinic, preliminary investigations are carried out and the type and cause of infertility are diagnosed. The primary infertility care unit or clinic could be a doctor's consulting room, such as a gynecologist's or a physician's consulting office, or even a general hospital. Depending on the severity of infertility, the couple could be treated at the Level 1A clinic or referred to a specialty (Level 1B, Level 2, or Level 3) clinic. The gynecologist or physician in charge of a Level 1A infertility care unit should have an appropriate postgraduate degree or diploma and be capable of taking care of the above responsibility. A Level 1A infertility care unit will not require accreditation under these guidelines.

Level 2 (secondary infertility care units)

Code of Practice

The code of practice deals with all aspects of the treatment provided and the research done at registered clinics. The areas that affect the doctors, scientists, and patients and form part of this code are summarized below. The aim is to provide more comprehensive coverage of key aspects of the IVF laboratory, to give continuous support to laboratory specialists, and consequently to contribute to improving IVF patient care.
Infrastructure

Embryology laboratories are an essential part of an ART clinic. Areas should be minimal but scalable. A small, positive-pressure laboratory with high-efficiency particulate air (HEPA) filters is recommended.

Recommendations on laboratory space and design

- The embryology laboratory should have adequate space to ensure safe and comfortable working conditions, and the design should be appropriate for the volume and scope of the procedures performed
- The location of storage areas and equipment should be planned for optimal efficiency in each working area
- Laboratory design should facilitate cleaning as per the required standards
- Floors, walls, and ceilings must have nonporous surfaces that can be cleaned easily
- Separate office space should be available for carrying out administrative/documentation work
- Access to the laboratory should take account of the need for environmental control and security
- An oxygen depletion monitor is only required for a cryostorage facility where liquid nitrogen is handled
- The laboratory and operation theater (OT) access should be independent of each other, but the two should be interconnected by a pass box/glass doors, etc.
- Adequate changing rooms, based on workload, should be located in the vicinity of the scrub area.

Recommendations on laboratory equipment

- The laboratory should contain all essential items required for IVF, in a number appropriate to the workload
- The incubator number is critical and should be based on the number of cycles and the embryo culture duration
- Gametes and embryos should be conveniently distributed across incubators to minimize door openings
- Equipment must be adequate for optimal laboratory work, easy to disinfect, and kept clean to avoid contamination
- We recommend not more than four patients at a given time per standard-sized 150 L box incubator; bench-top incubators can accommodate more, as their gas and temperature recovery rates are faster.
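The four-patients-per-incubator recommendation above reduces to simple capacity planning: the incubator count must be at least the number of concurrent patients divided by the per-incubator limit, rounded up. A minimal sketch of that arithmetic; the class and method names are illustrative and not from any published standard:

```java
// Capacity-planning sketch for the incubator recommendation above:
// at most four concurrent patients per standard 150 L box incubator.
// Names are illustrative only.
public final class IncubatorPlanning {

    /** Incubators needed so no incubator exceeds maxPerIncubator patients. */
    public static int incubatorsNeeded(int concurrentPatients, int maxPerIncubator) {
        if (maxPerIncubator <= 0) {
            throw new IllegalArgumentException("maxPerIncubator must be positive");
        }
        // Ceiling division without floating point.
        return (concurrentPatients + maxPerIncubator - 1) / maxPerIncubator;
    }

    public static void main(String[] args) {
        // 10 patients with cultures running concurrently, 4 per box incubator:
        System.out.println(incubatorsNeeded(10, 4)); // 3
    }
}
```

Bench-top incubators, with their faster gas and temperature recovery, would simply use a higher per-incubator limit in the same calculation.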
Recommendations and consensus on air quality

- Air quality monitoring should be used as a routine measure of quality assurance (for example, through particle counts or the use of settle plates, recording any cultures observed)
- An air handling unit (AHU) with heating, ventilation, and air conditioning is recommended, with 12-15 air changes per hour
- A separate system for filtration of volatile organic compounds (VOCs) and microbial decontamination is recommended
- Filters should be routinely changed depending on the workload of the laboratory
- Positive pressure modules may be used in lieu of an AHU.

Group consensus on air quality parameters (as per the ESHRE guidelines, 2015):
- Particle counts: Grade A environment with a background of at least GMP Grade D
- Microbial contamination
- VOC filtration
(VOC = volatile organic compounds; GMP = good manufacturing practices)

Recommendations on infrastructure

Group recommendation:
- Background knowledge of the infrastructural and architectural needs of an ART clinic is recommended before setting up an ART lab
- The minimum recommended laboratory space is 120 sq feet
- The minimum recommended cryobiology laboratory space is 100 sq feet
- It is preferable to have an IVF OT as per NABH norms
- The interiors, physical characteristics, and air quality values should adhere to the specifications mentioned above
- Power points should be sufficient in number and at regular distances, with UPS and generator backup
- The scrub and wash area should be designed in the vicinity of the IVF OT and laboratory, with well-concealed drainage
- There should be no water source inside the laboratory
- Separate, exclusive air conditioning (preferably attached to an AHU) is recommended to maintain the laboratory room temperature between 24°C and 26°C
- The laboratory should be adequately lit with warm, diffused, recessed lights.
Recommendations on laboratory safety

- It is the duty of all laboratory personnel to inform laboratory and/or center management of any circumstances in which the safety of laboratory personnel, and/or the safety and integrity of the gametes and/or embryos in their care, is compromised
- The laboratory design should allow all procedures to be carried out without compromising the safety of staff, patients, or patients' gametes or embryos
- Equipment should be placed such that there is sufficient and safe operating space
- Attention should be given to the ergonomics of the operator: bench height, adjustable chairs, microscope eye height, efficient use of space and surfaces, and sufficient air conditioning with controlled humidity and temperature
- Measures should be taken to minimize exposure of gametes and embryos to VOCs and other potentially toxic substances
- All staff should have appropriate equipment-handling training
- The storage room should have an oxygen depletion monitor linked to an external warning system
- All cryostorage vessels should have an alarm system to alert staff
- Only trained scientific, technical, medical, or nursing staff, or staff in training who are under supervision, should be allowed to enter the laboratory while procedures are taking place
- Visitors should never be left unsupervised in clinical laboratory areas
- Only authorized persons should enter the laboratory; unauthorized persons may enter only when accompanied by an authorized person
- Appropriate dress should be worn before entering the laboratory.

Staffing

Minimum standards for staffing

For the ART laboratory, several professional associations and laboratory organizations have already framed and published rules and guidelines; however, in India, there are no specific recommendations for staffing versus workload.
Staff requirement

- The number of staff should be based on the number of cycles performed in a year
- The type of services offered strongly influences the number of people required
- As an approximate guide, clinics that perform up to 150 retrievals and cryopreservation cycles per year should always have a minimum of two qualified clinical embryologists
- Appropriate human resources should provide an adequate climate to perform all laboratory tasks on time, to ensure patient safety and quality of care.

Laboratory director

The laboratory director, or another experienced person with the required level of ability, can train individuals joining the team. Training progress must be strictly followed and properly documented. On promotion to a higher level of ability, a team member may work under supervision, work without supervision, or train other persons, as appropriate to that level. The procedures must be documented and approved by the laboratory director. The number of cases per procedure that must be performed to move from one training level to the next is indicated; these numbers need to be adopted by the individual centers.

Clinical embryologist responsibilities

- Execution of standard operating procedures (SOPs)
- Participation in daily practice, communication, and organization
- Contribution to clinical laboratory decisions
- Imparting training to staff members and students.

Clinical embryologist qualification

A clinical embryologist must be either a medical graduate or have a postgraduate degree or a doctorate in an appropriate area of life sciences. If the clinic has been in existence for at least 1 year before the promulgation of these rules, a person with a BSc or BVSc degree but with at least 3 years of first-hand, validated, hands-on experience of the techniques and of discharging the responsibilities listed above would be acceptable for functioning as a clinical embryologist.
Staff management

The laboratory should establish documented procedures for staff management that ensure all staff have:

- An initial orientation and induction
- Basic training, and advanced training as required
- Ongoing competence assessment with audits
- An annual joint review (with the line managers)
- Continuous education and professional development
- Staff records
- Access to hands-on training.

There should be a system of short-term comparisons between team members on a regular basis (for example, monthly, but independent of the weekday and workload). Short-term quality control can involve one member comparing his or her results with another staff member using the same sample or patient. Quality control measures include:

- Grading of oocyte maturity and fertilization rates
- Classification of embryo quality
- Evaluation of basic sperm parameters of original and prepared semen samples.

Semen collection containers should be of IVF grade.

Quality Management

Definition and concept of quality

The ISO 9000:2000 standards define quality as "the degree to which a set of inherent characteristics fulfills requirements." The requirements in this definition could be specified by the supplier or by the customer, or may be legal. Quality of care is a multi-dimensional concept, encompassing treatment efficacy and impact on the health and welfare of both patients and offspring. Besides, the concept of quality includes the cost, in financial and human terms, of achieving the desired outcome.
General QMS requirements as per the ISO standards

The organization/clinic should:

- Identify the processes needed for the quality management system and their application throughout the clinic
- Determine the sequence and integration of these processes
- Have SOPs for all procedures
- Ensure the availability of resources and information necessary to support the operation of these procedures
- Implement actions necessary to achieve planned results and continual improvement of the system
- Define job roles and responsibilities
- Ensure full traceability
- Use quality-tested products
- Have annual maintenance contracts (AMCs) in place for critical equipment
- Carry out protocol verification and corrective actions
- Conduct performance reviews and internal/external audits
- Perform risk assessment and analysis
- Monitor key performance indicators (KPIs).

Identification and Patient Traceability

Guidelines for identification of gametes

- Before commencing a treatment cycle procedure, the embryologist should check that the patient has signed a valid informed consent form
- The identification system for patients' gametes, embryos, tissue, plastic ware, and culture plates should be followed diligently in all cases
- A minimum of three identification markers should be used for patient identification, of which one should be unique to the patient
- IVF centers must double-check the identification of samples, and of the patients or donors to whom they relate, at all critical points of the clinical and laboratory process
- Laboratories must have in place robust, effective processes to ensure that no mismatches of gametes or embryos or identification errors occur
- Verification of patients and witnessing protocols should be followed when any of the following clinical or laboratory procedures take place: ovum pick-up and oocyte collection; semen collection and sperm preparation; insemination through IUI/IVF/intracytoplasmic sperm injection (ICSI); embryo transfer; cryopreservation; disposal of gametes and embryos; and transport of gametes and embryos.
- Incubators should be organized to facilitate the identification of sperm, oocytes, zygotes, and embryos
- The identity of the laboratory person handling the samples at each point of the process, from receipt through final disposition, together with date and time, should be documented; this permits tracking of the sample throughout its period in the laboratory, including at later dates
- In cases where donor oocytes/sperm are used, traceability must be assured
- All cells and embryos for genetic investigation must be individually handled, carefully identified and labeled, and tracked during the whole procedure; during these steps, double identity checks are strongly recommended
- Electronic systems such as bar-coding and radio frequency identification (RFID) are appropriate, subject to a risk assessment to ensure that any system introduced will not harm gametes or embryos; the system must be deemed reliable, and any electronic devices employed must be safe. Integrated witness systems should be fully validated
- Double witnessing is required for entry points, exit points, and the mixing of sperm and oocytes
- A hard copy of electronic witnessing records should be retained.

Witnessing Systems in In vitro Fertilization

Background

Manual and electronic witnessing are used simultaneously to (i) ensure sufficient in-house validation, (ii) assess the exact mismatch rate (i.e., operator noncompliance), and (iii) analyze documented procedure timings. The introduction of the electronic witness system (EWS) into IVF clinical practice is a recent innovation. Although an EWS is recommended to improve traceability and reduce IVF mix-ups, only a few centers around the world have implemented the technology to this point.
Timing of errors in the in vitro fertilization process

The most important errors include: wrong identification of a couple or patient; improper specimen labeling; errors at egg collection or at sperm reception and preparation; mixing the wrong sperm and eggs, or injecting the wrong sperm into eggs; improper transfer of gametes or embryos between tubes/dishes; improper transfer of embryos into a woman; insemination of a woman with the wrong laboratory-prepared sperm; placing the wrong gametes or embryos into cryopreservation; removal of the wrong gametes or embryos from cryopreservation; disposal of the wrong gametes or embryos; and transporting the wrong gametes or embryos.

Error detection

Error detection during IVF can happen in the laboratory, after embryo transfer, after live birth, after IUI, after embryos are frozen, prior to or during embryo transfer, before IUI, and before treatment starts; most of the time, an error is detected at a later phase, beyond repair time.

In vitro fertilization: zero tolerance for error demands high integrity

Couples may desire a child of their own gene pool, but no couple would accept a child resulting from an unconsented, deliberate, or accidental mix-up of the gene pool. Integrity in IVF is crucial and of great value, and it demands ethical practices. Any mix-up must be considered a serious incident and should be followed up with an audit. Utmost care must be taken by the stakeholders to minimize such errors. Currently, most laboratories have introduced the couple's ID check and double witnessing of procedures to effectively avoid any form of error in the IVF process.

Manual witnessing system

Manual double witnessing (MDW) can be defined as the "double-checking performed on all clinical and laboratory procedures," with the expectation that if an "operator" makes an error, it will be caught by the other "witness." Although MDW is a safeguard requirement whose apparent value is self-evident, evidence suggests that it may not be as safe and effective as it should be.
Disadvantages of manual double witnessing

MDW may itself introduce errors. Embryologists could end up making IVF process-related mistakes and could face legal challenges and regulatory sanctions, while patients would have to cope with psychological damage and with a loss of confidence in the IVF process, impacting future cycles. Numerous problems with double checking have been identified previously, relating to independent redundancy, attentional blindness, and ambiguous accountability.

The introduction of the electronic witness system (EWS) into IVF clinical practice is a recent innovation. Without an EWS, the primary control measure used to reduce the risk of biological sample mix-up is a human double-checking approach. However, this mechanism of control is vulnerable to human errors, including check omission, incomplete checks, involuntary automaticity, and non-contemporaneous checking. For these reasons, several alternative options have been developed to replace the majority of human manual witnessing steps in IVF: (i) systems based on barcode labels, (ii) systems based on silicon barcodes injected directly into eggs or embryos, and (iii) systems based on RFID technology.
Advantages of electronic witness system

An EWS can prevent potential errors (including identification errors); is safe and secure to use; is cost-saving; monitors every instance; helps establish accountability, reduces ambiguity, and limits liability; minimizes stress and interruptions; enhances patient satisfaction and overall well-being; and protects and manages every aspect of the daily workload. RFID prevents embryologists from accidentally working on more than one patient's eggs or sperm at a time and, second, marks each step of the process, preventing embryologists from omitting critical tasks. Using RFID tags, the patient's identity is monitored at every stage of the treatment, while the system simultaneously captures information regarding cycle progress and operator actions.

Consensus on the integrity of samples, couple's/sample identity, and check-in for in vitro fertilization

There is a need to implement a compulsory identity check with a government-issued photographic ID for couples seeking IVF treatment. Generally, photographic identity in the form of an electronic file is recommended, and the couple's ID should be checked against the uploaded photo ID (electronic version) on every visit to the clinic. There must not be any deliberate act of mixing gametes or transferring an embryo of a third person, except for OD/ED/donor sperm, where written consent is required before planning the procedure.

Double witnessing of couples' identification check

Couples' ID checks need to be witnessed at different stages of the procedure in the clinic, namely: semen collection (photo ID, wife's full name, date of birth, and address); ovum pick-up (OPU) (husband's full name, date of birth, and address); embryo transfer (husband's full name, date of birth, and address, and cross-checking of the husband's name on the sample tube); and artificial insemination by donor (AID) (final cross-checking of her and her husband's full name, date of birth, address, and the sample ID by the treatment recipient woman).
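The RFID-based matching described above can be illustrated with a minimal sketch. This is a hedged illustration only: the function, tag registry, and field names are hypothetical, and real EWS products implement far richer workflows.

```python
# Minimal sketch of an electronic witnessing check: before any procedure,
# every RFID tag present in the work area must resolve to the same patient.
# The registry structure and field names here are illustrative assumptions.

def witness_check(work_area_tags, tag_registry):
    """Return (ok, patient_id) if all tags belong to one patient, else (False, None)."""
    patients = {tag_registry[tag]["patient_id"] for tag in work_area_tags}
    if len(patients) == 1:
        return True, patients.pop()
    # Mixed tags: block the procedure and flag a mismatch for follow-up audit
    return False, None

registry = {
    "TAG-001": {"patient_id": "P-123", "item": "sperm tube"},
    "TAG-002": {"patient_id": "P-123", "item": "oocyte dish"},
    "TAG-003": {"patient_id": "P-999", "item": "oocyte dish"},
}

ok, patient = witness_check(["TAG-001", "TAG-002"], registry)  # matching pair
bad, _ = witness_check(["TAG-001", "TAG-003"], registry)       # mix-up detected
```

In a real system the mismatch branch would also lock the workstation and log the event contemporaneously, rather than simply returning a flag.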
Double witnessing of samples and procedures

Standard IVF procedure at the time of insemination is crucial for better clinical outcomes. Various checkpoints need to be counter-checked, namely: the couple's names against the semen sample (dish with oocytes and embryology sheet); loading sperm and egg onto the ICSI dish (name on the sperm tube, dish with egg, and embryology sheet); freezing/thawing of embryos (agreement or approval by the person performing the procedure and another laboratory staff member on the correct identity of the gametes/embryos); and checking for the same identification on the straw, canister, and cane.

Recommendations on protective measures

Protective measures should be in place to ensure aseptic conditions for gametes and embryos. Every body fluid sample (semen, blood, follicular fluid) should be handled using universal precautions (i.e., as if it were contaminated). Laboratory clothing should be autoclaved, worn only in sterile areas, and removed upon leaving the laboratory, avoiding transmission of contaminants. Safety glasses or goggles are suggested where appropriate. Disposable, nontoxic (non-powdered) gloves and masks should be worn by all clinical and laboratory personnel during procedures. All procedures and manipulations of body fluids should be performed so as to minimize the creation of droplets and aerosols. Gloves should be removed and discarded when leaving the laboratory and should never be reused. Eye and face protection, cryogenic gloves, and an apron should be worn by laboratory staff when cryogenic materials are handled. Mechanical pipetting devices should be used for the manipulation of liquids in the laboratory.
Mouth pipetting is not recommended. Eating, drinking, smoking, application of makeup, and manipulation of contact lenses are not permitted in the laboratory. Disinfection and sterilization of potentially infected equipment should be performed when samples of seropositive or infected patients are handled. Incubators should be frequently cleaned and sterilized. Nitrogen tanks should be maintained as per the manufacturer's guidelines.

Recommendations on biomedical waste

The biomedical waste rules published by the GOI should be followed. Every IVF center must hold a license for biomedical waste management from the state pollution control board and pollution control committee.

Recommendations on spill management

All spillages must be dealt with as soon as possible. An embryo-safe disinfectant at the correct concentration must be used when handling spillage during a procedure. Suitable disposable wipes must be used for disinfecting and cleaning the spillage.

Recommendations on transportation of biological materials

Adhere to the rules and regulations issued by the ICMR. The export or import of human gametes to or from another country is permitted only with the permission of the National Registry of the Assisted Reproductive Technology Clinics and Banks in India of the ICMR. However, gametes can be transported within the country with proper documentation.
Consumables and Media

General guidelines for suppliers

Procurement of disposables, media, and oil must be from reliable sources. The supplier's manufacturing facility should hold relevant ISO certification and should provide evidence of using good manufacturing practices (GMP). Media, disposables, and oil should be of embryo-culture-grade quality.

Guidelines for the user (embryologist)

The embryologist must verify all QC documents before accepting a consignment. All disposable consumables should be sterile, single-use, and consumed within the expiry date. The embryologist should record the batch number, expiry date, entry date, and the number of times media/oil bottles are opened, along with the dates. Pharmaceutical medical-grade refrigeration facilities should be available for the storage of media and reagents. All media should be kept at the temperature recommended by the supplier/manufacturer.

Recommendations for oocyte retrieval

Oocyte retrieval is a fundamental step during IVF, as it requires constant stability of temperature and pH. It is a time-bound procedure, performed 34-36 h post trigger. Before the procedure, an identity check of the patient is mandatory. Culture media should be equilibrated for at least 4-6 h, and (HEPES/MOPS-buffered) media need to be prewarmed before the procedure. Necessary equipment such as test-tube warmers and dishes needs to be maintained at 37°C on the day of the procedure. The ovum aspiration pump should be strictly maintained at a pressure setting between 100 and 120 mmHg. The aspirated follicular fluid is screened under a stereomicroscope for the presence of oocytes. Prolonged oocyte exposure to follicular fluid is not recommended. The retrieved oocytes are washed in the flushing medium and immediately transferred to culture media; this should be achieved in minimal time. Exposure to light should be minimized.
Documentation of the duration of cumulus-oocyte complex retrieval, the number of collected oocytes, and the team involved in the entire procedure should be maintained.

Recommendations for sperm preparation

Sperm count, motility, and morphology play a pivotal role in human fertilization and therefore have to be carefully assessed before and on the day of the procedure. Clear instructions should be given to the patient before sample collection. The collection room should be in the nonsterile area of the clinic. Home collection should ideally not be allowed; however, in some circumstances it can be permitted. Consent regarding the identity of the sample should be documented. Properly labeled, IVF-tested plastic-ware that is nontoxic and has been tested by mouse embryo assay, limulus amebocyte lysate, and human sperm survival assays should be used to avoid interference with the semen. Identity should be checked before collection. Masturbation is the preferred collection method. Post collection, the sample should be sent to the laboratory as soon as possible, avoiding extreme temperatures (<20°C and >37°C). Sperm analysis and preparation should start within 1 h of collection; prolonged sperm exposure to seminal plasma is not recommended. Medical history, such as the use of medication, fever during the previous months, and completeness of the ejaculate collection, should be documented. Semen preparation is performed to maximize the chances of fertilization: it extracts motile spermatozoa, eliminates non-motile and dead spermatozoa, removes decapacitation factors from the seminal plasma, and capacitates the spermatozoa. There should never be preset parameters for preparation; rather, it should be designed according to the characteristics and origin of the individual sample. Swim-up and discontinuous density gradients are the two most preferred and widely accepted methods. A backup sample is important for patients who have difficulty producing a sample.
Proper counseling on cryopreservation of oocytes should also be provided as an alternative. Patients should be tested for serious transmissible infections such as hepatitis A, HIV, and hepatitis B surface antigen, and by the venereal disease research laboratory test. Standard precautions for handling biological material must be practiced in the laboratory. Extensive semen preparation by density-gradient centrifugation followed by swim-up is recommended.

Recommendations for insemination of oocytes

Insemination can be achieved by either IVF or ICSI. The most crucial factor in IVF is the number of progressively motile sperm used for insemination; they must be sufficient to optimize the chance of normal fertilization. A motile sperm concentration ranging between 0.1 and 0.5 × 10^6/mL is used. Motility and quality of sperm also play a pivotal role. A double-density gradient is the preferred method for semen preparation in the case of IVF. The final sperm suspension should be in a medium compatible with oocyte culture. A double check of the identity of the gametes at the time of the insemination procedure is mandatory, and records of the time of insemination should be kept. Co-incubation of cumulus-oocyte complexes and sperm is usually performed overnight. A short co-incubation protocol can also be performed; if signs of fertilization are not seen, early rescue ICSI comes into play. For ICSI, oocytes are injected 38-41 h post trigger. This procedure entails the deposition of a single spermatozoon directly into the cytoplasm of the oocyte, bypassing the zona pellucida (ZP) and the oolemma. Optimal and sterile conditions should be maintained during micromanipulation to avoid the detrimental effects of environmental variation, media, and altered air quality on the gametes under manipulation. Prior to micromanipulation, oocytes are exposed to 80 IU/mL of hyaluronidase for the removal of cumulus cells.
For final removal of corona cells, the oocytes are repeatedly aspirated in and out of flexipets with decreasing inner diameters of 300, 170, and 140 µm, respectively. ICSI dishes are prepared according to the embryologist performing the procedure, with polyvinylpyrrolidone, flushing media, and oil being the main components. Both micropipettes are aligned and bent to an angle of approximately 35°. Before the injection of gametes, a double check is mandatory. Exposure during sperm identification and immobilization, followed by injection, should be minimized. Normal, motile sperm are selected. In the case of only immotile sperm cells, a non-invasive vitality test can be used to select viable sperm for injection. In the case of TESA samples, motility enhancers such as theophylline could be used, with appropriate witnessing. Dishes should be well labeled and should not be exposed before a fertilization check. If M1 or abnormal oocytes are injected, they should ideally be kept in a separate dish, or marked if kept in the same dish.

Recommendations for scoring of fertilization

All inseminated or injected oocytes should be examined for the presence of pronuclei (PN) and polar bodies at 16-18 h post insemination. In the case of IVF, loosened residual cumulus cells must be removed by aspirating the oocytes in and out with denupets of varying diameters to access the fertilized oocytes. The zygotes are then transferred to pre-equilibrated culture media dishes. For a better assessment of pronuclear morphology, evaluation should be done at ×200 magnification. Embryos resulting from abnormal fertilization, such as 1PN or 3PN, should not be transferred or cryopreserved unless deemed euploid by PGT-A.

Recommendations for embryo culture

For optimal embryo growth, culture conditions should be consistent. There are two different approaches that can be used to optimize embryo development: sequential or single-step media.
In sequential culture, dishes are changed according to the stage of the embryo, whereas with single-step media the embryo grows in one type of media dish during its entire in vitro journey. Dishes can be made in accordance with the laboratory's SOPs. An optimal drop of culture media should be made, while dish preparation remains under sterile conditions. An oil overlay over the culture dishes minimizes changes in temperature, pH, and osmolality. For incubation of embryos, various kinds of incubators are available and are used according to the need and workload of the laboratory; regular maintenance and cleaning should be carried out. Scoring of the embryo should be performed at high magnification (at least ×200) using an inverted microscope. Evaluation of cleavage-stage embryos should include cell number, size and symmetry, and percentage of fragmentation. Blastocyst scoring should include expansion of the blastocoel cavity and the morphology of the inner cell mass and trophectoderm. Assessment should be performed at crucial developmental stages post insemination. Embryo development can also be assessed using time-lapse imaging, allowing uninterrupted evaluation of morphokinetics during growth. Embryo selection for transfer is primarily based on the synchrony between embryo and endometrium.
Other selection parameters, such as time-lapse kinetics, may be considered. Single embryo transfer is recommended to avoid multiple gestations; transfer strategies should be customized according to the patient profile. Embryo quality and stage of development, female age, ovarian response, and the treatment plan should be taken into consideration before transfer. It is advisable not to transfer more than two embryos in good-prognosis patients. Cryopreservation should be performed for supernumerary embryos, according to their quality, patient wishes, and national legislation, along with consents and records for the same. A checklist for the embryo transfer procedure should be maintained, including the identification number, names of the patient and partner, time of embryo transfer, catheter lot number, signatures of the doctor and embryologist along with witnesses, and any adverse events during the procedure. A double identity check of the patient, the patient file and necessary consents, and the culture dish is mandatory immediately before the transfer.

Recommendations for cryopreservation

Cryopreservation refers to the cooling of cells and tissues to sub-zero temperatures to stop all biologic activity and preserve them for future use. It can be performed for gametes and embryos. Along with the facilities, trained embryologists should be available in the laboratory to perform the necessary procedures. Different approaches, including slow freezing and vitrification, can be used according to the type of biological material to be cryopreserved. For sperm, rapid cooling is a practical and convenient method of cryopreservation. For oocytes, vitrification has been reported to be highly successful and is recommended. For cleavage-stage embryos and blastocysts, high success rates have been reported with vitrification. It is important to understand that when dealing with infectious biological material, cross-contamination via liquid nitrogen needs to be minimized.
Separate cryocans should be maintained for such samples and embryos. Disposables and dishes need to be discarded in accordance with biosafety regulations. At cryopreservation, documentation on the biological material should include: identification number, patient and partner names, device labeling, cryopreservation method, media used, date and time of cryopreservation, embryologist's name, embryo quality and stage of development, number of oocytes or embryos per device, number of devices stored per patient, and location of the stored samples. Cryodevices must be clearly and permanently labeled with reference to patient details, treatment number, and/or a unique identification code. At thawing of the same biological material, documentation should include: double witnessing of the patient's and partner's names, thawing method, thawing media, date and time of thawing, embryologist's name, and post-thaw sample/oocyte/embryo quality. A double check of patient identity is recommended at every step of cryopreservation and thawing. Accidental thawing should be avoided.

Consents and Checklists

The ART clinic should obtain written permission from the couple before conducting any ART procedure.

Risk Analysis and Mitigation in an In vitro Fertilization Laboratory

Background

IVF laboratory procedures involve handling male and female gametes. An error at any of the intermediate steps may have direct consequences, including a possible change of genetic filiation of a family. Unlike other laboratories, where in case of mishandling or mistake reports can be cancelled and tests repeated, this option is not available to an IVF laboratory. Once the baby is born, or even once the pregnancy is established, the issue becomes much more complicated emotionally, ethically, and legally.
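The documentation fields recommended for cryopreservation and thawing map naturally onto a structured record. The sketch below is purely illustrative: the class and field names are assumptions, not a prescribed schema, and any real system would need to follow local regulatory and QMS requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Sketch of a cryopreservation record capturing the fields recommended above.
# All names are illustrative assumptions, not a mandated data model.
@dataclass
class CryoRecord:
    identification_number: str
    patient_name: str
    partner_name: str
    device_label: str
    method: str               # e.g. "vitrification" or "slow freezing"
    media_used: str
    frozen_at: datetime
    embryologist: str
    quality_and_stage: str
    units_per_device: int     # number of oocytes or embryos per device
    devices_stored: int       # number of devices stored for this patient
    storage_location: str     # e.g. tank / canister / cane position
    thaw_log: list = field(default_factory=list)

    def record_thaw(self, when, embryologist, method, media,
                    post_thaw_quality, witness):
        # Double witnessing: a second staff member countersigns the thaw entry
        self.thaw_log.append({
            "when": when, "embryologist": embryologist, "method": method,
            "media": media, "post_thaw_quality": post_thaw_quality,
            "witness": witness,
        })
```

Keeping the thaw events inside the same record mirrors the recommendation that freezing and thawing documentation stay linked to one uniquely identified device.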
Clinical recommendations on risk mitigation

Strict adherence to the consensus guidelines is required to ensure patient identity and safety. A copy of the patient's consent should be kept in the laboratory records. From the statistical analysis and legal points of view, correct data entry is crucial to assess the center's performance and to safeguard against patient allegations related to misuse of genetic material. To avoid repetition of the same incidents in the future, a separate audit logbook must be maintained, citing all problems encountered and the measures taken to solve them.

Assisted conception is unlikely to be any less prone to adverse incidents; indeed, there have been several high-profile cases that have drawn attention to this problem. Because of the nature of the work undertaken in assisted conception, there is the potential to affect not only future generations but also many patients simultaneously, because of the storage of biological material. It is, therefore, essential to implement strategies to reduce the likelihood of patient safety incidents. Risk prediction and mitigation strategies are presented in Table 5.

Incident Reporting

An incident is an occurrence that is inconsistent with the routine care of the patient or the regular running of the organization.

Categorization of incidents

Adverse events can be further classified as: a near miss (an unplanned event that did not result in injury, illness, or damage, but had the potential to do so); a serious adverse event affecting gametes; and an adverse reaction affecting individuals. Examples of serious incidents include: a patient being implanted with an embryo that was intended for someone else; the death of a patient; an incident that affects several patients (e.g., a storage unit malfunction that may irretrievably damage the embryos, eggs, or sperm of several patients); and transmission of communicable diseases/illnesses/conditions leading to prolonged hospitalization and treatment, or even death.
Moderate incidents

Examples include: the loss of embryos for one patient; breaches of confidentiality, where sensitive personal data or data relating to more than one patient is sent to the wrong recipient, or where a piece of equipment malfunctions, affecting the quality of a patient's embryos; and eggs rendered unusable during processing (for example, while moving an egg between dishes).

Incidents can also be grouped by origin:

Clinical: OHSS; misplacement of an embryo during embryo transfer; ovarian abscesses following egg collection; vaginal bleeding and urinary tract infections, as well as allergic reactions to medications.

Administrative: patients starting a treatment cycle before all their screening results were returned and reviewed; screening results not being checked or being misinterpreted; donors being accepted and matched with a recipient without the screening results being available or checked, or screening results being misinterpreted.

Laboratory: infections found in embryo cultures that originated from the patient or their partner.

Laboratory Procedures, Documentation, and Data Management

Based on the existing literature, the group decided on certain consensus points, which are mentioned below.
All processes should be mapped using appropriate flow chart methodology. The process map then forms the basis of standard operating procedures (SOPs). The SOPs should be structured in a standardized format, and their distribution must be controlled. SOPs should be written based on documented scientific evidence, and authorized, signed, and updated SOPs should exist for all processes to optimize outcomes. The KPIs should be clearly defined, monitored, and documented in a computer database. Procedures should maximize the chance of success and minimize risk. Before the implementation of any new method, it needs to be validated and monitored in the current setting. Importantly, the clinical and laboratory staff members need to undergo training and prove competence for each procedure performed. Data on the performance of the clinic, as well as of individuals, should be collected and analyzed regularly. Data should be audited, assessed, and structured to discern the input quality, the process quality, and the output quality, as appropriate. The list of data required for collection and auditing is presented in Table 3. In addition, data on the functioning of equipment and technical systems, e.g., air quality and the level of microbial contamination, must be collected and regularly audited.

Key Performance Indicators and Benchmarking for India

KPIs are indicators deemed essential for evaluating the introduction of a technique or process; establishing minimum standards for proficiency; monitoring ongoing performance within a QMS (for internal quality control and external quality assurance); and benchmarking and quality improvement. In general, the results of a series of KPIs will provide an adequate overview of the most critical steps in the IVF laboratory process.
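As a hedged illustration of KPI monitoring, a laboratory database could compute a rate per cycle and compare it against a locally agreed benchmark. The fertilization-rate definition below (2PN zygotes per injected MII oocyte) and the 0.60 benchmark are placeholders for illustration, not recommended standards.

```python
# Illustrative KPI computation. The benchmark value is a placeholder;
# each laboratory should set its own thresholds per its QMS and consensus.

def fertilization_rate(two_pn_zygotes, mii_oocytes_injected):
    """Normal fertilization rate = 2PN zygotes / MII oocytes injected."""
    if mii_oocytes_injected == 0:
        raise ValueError("no injected oocytes recorded")
    return two_pn_zygotes / mii_oocytes_injected

def kpi_alert(rate, benchmark=0.60):
    """Flag the KPI for review when it falls below the agreed benchmark."""
    return rate < benchmark

rate = fertilization_rate(7, 10)   # 0.7 for this hypothetical cycle
needs_review = kpi_alert(rate)     # False against the 0.60 placeholder
```

Trending such values over rolling windows, rather than reacting to single cycles, is the usual way a QMS separates random variation from a genuine process drift.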
J. Walter Thompson in New York, part of the WPP Group, has acquired Interactive Marketing Concepts in Toronto, a strategic e-business marketing agency with services like digital advertising, promotions and e-commerce programs. Terms were not disclosed.
Interactive Marketing, which has 25 employees and billings estimated at $10 million from clients like the Dairy Farmers of Ontario and Parke Davis, will operate as an autonomous unit of the J. Walter Thompson Group Canada.
package com.networkedassets.autodoc.configuration;
import com.atlassian.activeobjects.external.ActiveObjects;
import com.atlassian.confluence.user.AuthenticatedUserThreadLocal;
import com.google.common.collect.ImmutableMap;
import com.mashape.unirest.http.exceptions.UnirestException;
import com.networkedassets.autodoc.TransformerClient;
import com.networkedassets.util.functional.Optionals;
import net.java.ao.Query;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
@Path("token")
@Consumes("application/json")
public class BundleAccessTokenService {
private TransformerClient transformerClient;
private ActiveObjects ao;
public BundleAccessTokenService(ActiveObjects ao, DocSettingsService docSettingsService) {
this.ao = ao;
this.transformerClient = new TransformerClient(docSettingsService, this);
}
public String getForUserKey(String userKey) throws TokenNotFoundException {
// Fetch the stored token for this user key; an empty result means no token exists
BundleAccessToken[] tokens = ao.executeInTransaction(() ->
ao.find(BundleAccessToken.class, Query.select().where("USER_KEY = ?", userKey))
);
if (tokens.length == 0) {
throw new TokenNotFoundException();
}
return tokens[0].getAccessToken();
}
public void setForUserKey(String userKey, String token) {
ao.executeInTransaction(() -> {
BundleAccessToken accessToken =
Optionals.fromArrayOfOne(
ao.find(BundleAccessToken.class, Query.select().where("USER_KEY = ?", userKey))
).orElse(
ao.create(BundleAccessToken.class, ImmutableMap.of("USER_KEY", userKey))
);
accessToken.setAccessToken(token);
accessToken.save();
return accessToken;
});
}
public void setForCurrentUser(String token) {
setForUserKey(AuthenticatedUserThreadLocal.get().getKey().getStringValue(), token);
}
@PUT
public void saveTokenForTransformerUser(Credentials credentials) throws UnirestException {
String token = transformerClient.getToken(credentials);
setForCurrentUser(token);
}
@GET
@Path("ask")
public boolean doesCurrentUserHaveToken() {
String userKey = AuthenticatedUserThreadLocal.get().getKey().getStringValue();
return ao.executeInTransaction(() ->
ao.find(BundleAccessToken.class, Query.select().where("USER_KEY = ?", userKey)).length == 1);
}
public static class Credentials {
public String username;
public String password;
}
}
The so-called thermite reaction traditionally involves the exothermic reduction of iron oxide with aluminum, in which the reaction produces molten iron with an aluminum oxide slag floating thereon, the reaction taking place either in a suitable mold so that the molten iron is fusion cast into a desired shape, or at a site where two metal parts are to be joined to produce a weld between such metal parts when the reaction is completed.
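For reference, the classical reaction described above can be summarized by the standard thermite equation; the enthalpy figure shown is the commonly cited approximate value per mole of iron oxide, not taken from this text.

```latex
\mathrm{Fe_2O_3} + 2\,\mathrm{Al} \;\longrightarrow\; 2\,\mathrm{Fe} + \mathrm{Al_2O_3},
\qquad \Delta H^{\circ} \approx -850\ \mathrm{kJ}\ \text{per mole of } \mathrm{Fe_2O_3}
```

The strongly exothermic enthalpy is what yields molten iron, with the less dense aluminum oxide separating as the slag layer described above.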
Although there are prior patents which involve the use of the thermite type reaction to produce borides, carbides, silicides and nitrides and the like, the product produced by the reaction is of at least two phases, one which is a layer of the boride, carbide, etc., and another which is a layer of the oxide of the reducing metal such as aluminum or magnesium. That is, the reducing metal oxide is present as a separate layer of slag, as in the classical thermite reaction. If special steps are taken to produce a composition which is a mixture of the boride, carbide, etc. and the reducing metal oxide, such composition is not a foamed product.
U.K. Pat. No. 1,497,025 teaches the production of cast refractory inorganic products by a thermite type reaction in which slag is formed and the product is a dense, sintered form. Thus, the teaching of this patent is directed to producing a composition which is not a mixture, homogeneous or otherwise, of all the reaction products, but of a composition which is a mixture of the reaction products less the oxide of the reducing metal and (to the extent possible) less the CO which is formed during the reaction. This patent is specifically directed to avoid "poorly sintered specimens" of the desired product and to avoid products which are characterized by "porosity and the presence of free carbon therein, which affects their strength". To this end, the patent teaches a method which is carried out at a centrifugal acceleration of from 100 to 1500 g and in a gaseous medium under pressure of 1 to 100 atm, using an inert gas such as argon. In this patent, the reaction mixture contains carbon and a reducing metal such as aluminum plus one or more metal oxides. The end product in each case is divided into two layers, a top layer of slag which is the reducing metal oxide and the bottom layer which is the desired material. Even if the constraints taught by this patent are not followed and porosity is present, it is not present in a composition which includes the reducing metal oxide.
Present techniques of producing refractory, monolithic shapes involve initial shape-forming steps such as hydraulic or isostatic pressing, slip-casting, extrusion, injection molding and the like prior to the firing step. Moreover, the firing step normally involves at least preheating the entire reaction mixture either to ignition temperature or to an elevated temperature at which local ignition and subsequent completion of the reaction occurs. |
/*
* Copyright (C) 2014-2021 <NAME> (www.helger.com)
* philip[at]helger[dot]com
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.helger.json.parser;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;
import javax.annotation.Nonnegative;
import javax.annotation.Nonnull;
import javax.annotation.concurrent.NotThreadSafe;
import com.helger.commons.string.ToStringGenerator;
/**
* A special StringBuilder implementation that supports conversion to numeric
* values in a more efficient way.
*
* @author <NAME>
*/
@NotThreadSafe
public class JsonStringBuilder
{
  protected char [] m_aBuf;
  protected int m_nLen;
  // Status vars
  private String m_sCache;

  public JsonStringBuilder ()
  {
    this (16);
  }

  public JsonStringBuilder (@Nonnegative final int nCapacity)
  {
    m_aBuf = new char [nCapacity];
    m_nLen = 0;
  }

  private void _expandCapacity (@Nonnegative final int nMinimumCapacity)
  {
    int nNewCapacity = (m_aBuf.length + 1) * 2;
    if (nNewCapacity < 0)
      nNewCapacity = Integer.MAX_VALUE;
    else
      if (nMinimumCapacity > nNewCapacity)
        nNewCapacity = nMinimumCapacity;
    m_aBuf = Arrays.copyOf (m_aBuf, nNewCapacity);
  }

  public void append (final char c)
  {
    m_sCache = null;
    final int nNewLen = m_nLen + 1;
    if (nNewLen > m_aBuf.length)
      _expandCapacity (nNewLen);
    m_aBuf[m_nLen++] = c;
  }

  public boolean hasContent ()
  {
    return m_nLen > 0;
  }

  @Nonnegative
  public int getLength ()
  {
    return m_nLen;
  }

  public char charAt (@Nonnegative final int nIndex)
  {
    if (nIndex >= m_nLen)
      throw new IllegalArgumentException ("Invalid index provided: " + nIndex);
    return m_aBuf[nIndex];
  }

  @Nonnull
  public JsonStringBuilder reset ()
  {
    m_nLen = 0;
    m_sCache = null;
    return this;
  }

  public void backup (final int n)
  {
    m_nLen -= n;
  }

  @Nonnull
  public BigDecimal getAsBigDecimal ()
  {
    return new BigDecimal (m_aBuf, 0, m_nLen);
  }

  @Nonnull
  public BigInteger getAsBigInteger ()
  {
    return new BigInteger (getAsString (), 10);
  }

  @Nonnull
  public Double getAsDouble ()
  {
    return Double.valueOf (Double.parseDouble (getAsString ()));
  }

  @Nonnull
  public String getAsString ()
  {
    String ret = m_sCache;
    if (ret == null)
    {
      ret = new String (m_aBuf, 0, m_nLen);
      m_sCache = ret;
    }
    return ret;
  }

  @Override
  public String toString ()
  {
    return new ToStringGenerator (this).append ("Len", m_nLen).append ("asString", getAsString ()).getToString ();
  }
}
|
// NewStore initializes a TRC/Certificate Chain cache/resolver backed by db.
// Parameter local must specify the AS in which the trust store resides (which
// is used during request forwarding decisions). When sending infra messages,
// the trust store will use IDs starting from startID, and increment by one for
// each message.
func NewStore(db *trustdb.DB, local addr.IA, startID uint64, logger log.Logger) (*Store, error) {
	store := &Store{
		trustdb: db,
		ia:      local,
		log:     logger,
		msgID:   startID,
	}
	return store, nil
} |
# -*- coding: utf-8 -*-
from app import config
from app.api import HTTPStatus, api, request
from app.controllers import BaseController
from app.schemas.color import ColorSchema
from app.schemas.dictionary import DicioSchema
from app.schemas.response import ResponseSchema
from app.schemas.twitch import TwitchSchema
from app.schemas.weather import WeatherSchema
from app.services.color import ColorService
from app.services.currency import CurrencyService
from app.services.dictionary import DicioService
from app.services.math import MathService
from app.services.translate import TranslatorService
from app.services.twitch import TwitchService
from app.services.weather import WeatherService
ns = api.namespace("tools", description="Tools", validate=True)
color = ns.model("Color", ColorSchema().as_model(), strict=True)
color_parser = ns.parser()
color_parser.add_argument("hex", type=str, help="HEX code, '#' is optional", required=True)
currency = ns.model("Currency", ResponseSchema().as_model(), strict=True)
currency_parser = ns.parser()
currency_parser.add_argument("base", type=str, help="Requested exchange rate base asset", required=True)
currency_parser.add_argument("quote", type=str, help="Requested exchange rate quote asset", required=True)
dicio = ns.model("Dictionary", DicioSchema().as_model(), strict=True)
dicio_parser = ns.parser()
dicio_parser.add_argument("word", type=str, help="Word to get the definition of", required=True)
math = ns.model("Math", ResponseSchema().as_model(), strict=True)
math_parser = ns.parser()
math_parser.add_argument("expression", type=str, help="Expression to be evaluated", required=True)
math_parser.add_argument("precision", type=int, help="Number of significant digits in formatted output", default=4)
translate = ns.model("Translate", ResponseSchema().as_model(), strict=True)
translate_parser = ns.parser()
translate_parser.add_argument("text", type=str, help="Desired text to translate", required=True)
translate_parser.add_argument("source", type=str, help="Source language to translate from", default="auto")
translate_parser.add_argument("target", type=str, help="Target language to translate to", default="pt")
twitch = ns.model("Twitch", TwitchSchema().as_model(), strict=True)
twitch_parser = ns.parser()
twitch_parser.add_argument("channel", type=str, help="The channel's Twitch username", required=True)
twitch_parser.add_argument("user", type=str, help="The user's Twitch username (only required for 'followed' info)")
twitch_parser.add_argument(
    "infos",
    type=str,
    help="Which pieces of information do you want, separated by commas?",
    choices=("account_age", "avatar", "creation", "follow_age", "followed", "follows", "game", "id", "title", "total_views", "uptime", "viewers"),
    required=True,
)
twitch_parser.add_argument("language", type=str, help="Output language", default="pt")
twitch_parser.add_argument("precision", type=str, help="How precise the timestamp should be", default="3")
twitch_parser.add_argument("format", type=str, help="Formatting of the returned date and time", default="d/m/Y \\à\\s H:i:s")
twitch_parser.add_argument("timezone", type=str, help="Timezone for displaying date and time other than UTC", default="America/Sao_Paulo")
weather = ns.model("Weather", WeatherSchema().as_model(), strict=True)
weather_parser = ns.parser()
weather_parser.add_argument("location", type=str, help="City name, state code and country code, separated by commas", required=True)
weather_parser.add_argument("language", type=str, help="Output language", default="pt")
weather_parser.add_argument("units", type=str, help="Units of measurement", choices=("standard", "metric", "imperial"), default="metric")
@ns.route("/color")
class ColorController(BaseController):
    @ns.doc(description="Get information about any color")
    @ns.marshal_with(color, envelope="data", code=HTTPStatus.OK.value, description="Color information")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Invalid color")
    @ns.expect(color_parser)
    def get(self):
        color = ColorService().by_hex(**request.args)
        color_json = ColorSchema(many=False).dump(color)
        return color_json, HTTPStatus.OK


@ns.route("/currency")
class CurrencyController(BaseController):
    @ns.doc(description="Get the exchange rate between pair of requested assets")
    @ns.marshal_with(currency, envelope="data", code=HTTPStatus.OK.value, description="Exchange rate")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Invalid asset")
    @ns.expect(currency_parser)
    def get(self):
        currency = CurrencyService(config.CURRENCY_API_KEY).rate(**request.args)
        currency_json = ResponseSchema(many=False).dump(currency)
        return currency_json, HTTPStatus.OK


@ns.route("/dictionary")
class DicioController(BaseController):
    @ns.doc(description="Get the dictionary definition of a word")
    @ns.marshal_with(dicio, envelope="data", code=HTTPStatus.OK.value, description="Word definition")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Word not found")
    @ns.expect(dicio_parser)
    def get(self):
        dicio = DicioService().definition(**request.args)
        dicio_json = DicioSchema(many=False).dump(dicio)
        return dicio_json, HTTPStatus.OK


@ns.route("/math")
class MathController(BaseController):
    @ns.doc(description="Get the result of a mathematical expression")
    @ns.marshal_with(math, envelope="data", code=HTTPStatus.OK.value, description="Expression result")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Invalid expression")
    @ns.expect(math_parser)
    def get(self):
        math = MathService().evaluate(**request.args)
        math_json = ResponseSchema(many=False).dump(math)
        return math_json, HTTPStatus.OK


@ns.route("/translate")
class TranslateController(BaseController):
    @ns.doc(description="Translate a text")
    @ns.marshal_with(translate, envelope="data", code=HTTPStatus.OK.value, description="Translated text")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Invalid language")
    @ns.expect(translate_parser)
    def get(self):
        translate = TranslatorService().translate(**request.args)
        translate_json = ResponseSchema(many=False).dump(translate)
        return translate_json, HTTPStatus.OK


@ns.route("/twitch")
class TwitchController(BaseController):
    @ns.doc(description="Get Twitch information about a channel")
    @ns.marshal_with(twitch, envelope="data", code=HTTPStatus.OK.value, description="Twitch channel information")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Invalid info requested or channel not found")
    @ns.expect(twitch_parser)
    def get(self):
        twitch = TwitchService().fetch(**request.args)
        twitch_json = TwitchSchema(many=False).dump(twitch)
        return twitch_json, HTTPStatus.OK


@ns.route("/weather")
class WeatherController(BaseController):
    @ns.doc(description="Get the current weather data for any location")
    @ns.marshal_with(weather, envelope="data", code=HTTPStatus.OK.value, description="Current weather")
    @ns.response(code=HTTPStatus.BAD_REQUEST.value, description="Location not found")
    @ns.expect(weather_parser)
    def get(self):
        weather = WeatherService(config.WEATHER_API_KEY).by_location(**request.args)
        weather_json = WeatherSchema(many=False).dump(weather)
        return weather_json, HTTPStatus.OK
|
/* ----------------------------------------------------------------------------
* This file was automatically generated by SWIG (http://www.swig.org).
* Version 4.0.0
*
* Do not make changes to this file unless you know what you are doing--modify
* the SWIG interface file instead.
* ----------------------------------------------------------------------------- */
package xyz.redtorch.gateway.ctp.x64v6v3v11v.api;
public class CThostFtdcExchangeOrderInsertErrorField {
  private transient long swigCPtr;
  protected transient boolean swigCMemOwn;

  protected CThostFtdcExchangeOrderInsertErrorField(long cPtr, boolean cMemoryOwn) {
    swigCMemOwn = cMemoryOwn;
    swigCPtr = cPtr;
  }

  protected static long getCPtr(CThostFtdcExchangeOrderInsertErrorField obj) {
    return (obj == null) ? 0 : obj.swigCPtr;
  }

  @SuppressWarnings("deprecation")
  protected void finalize() {
    delete();
  }

  public synchronized void delete() {
    if (swigCPtr != 0) {
      if (swigCMemOwn) {
        swigCMemOwn = false;
        jctpv6v3v11x64apiJNI.delete_CThostFtdcExchangeOrderInsertErrorField(swigCPtr);
      }
      swigCPtr = 0;
    }
  }

  public void setExchangeID(String value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ExchangeID_set(swigCPtr, this, value);
  }

  public String getExchangeID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ExchangeID_get(swigCPtr, this);
  }

  public void setParticipantID(String value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ParticipantID_set(swigCPtr, this, value);
  }

  public String getParticipantID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ParticipantID_get(swigCPtr, this);
  }

  public void setTraderID(String value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_TraderID_set(swigCPtr, this, value);
  }

  public String getTraderID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_TraderID_get(swigCPtr, this);
  }

  public void setInstallID(int value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_InstallID_set(swigCPtr, this, value);
  }

  public int getInstallID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_InstallID_get(swigCPtr, this);
  }

  public void setOrderLocalID(String value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_OrderLocalID_set(swigCPtr, this, value);
  }

  public String getOrderLocalID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_OrderLocalID_get(swigCPtr, this);
  }

  public void setErrorID(int value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ErrorID_set(swigCPtr, this, value);
  }

  public int getErrorID() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ErrorID_get(swigCPtr, this);
  }

  public void setErrorMsg(String value) {
    jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ErrorMsg_set(swigCPtr, this, value);
  }

  public String getErrorMsg() {
    return jctpv6v3v11x64apiJNI.CThostFtdcExchangeOrderInsertErrorField_ErrorMsg_get(swigCPtr, this);
  }

  public CThostFtdcExchangeOrderInsertErrorField() {
    this(jctpv6v3v11x64apiJNI.new_CThostFtdcExchangeOrderInsertErrorField(), true);
  }
}
|
Understanding Soliton Wave Propagation in Nonlinear Transmission Lines for Millimeter Wave Multiplication In this paper, soliton propagation in nonlinear transmission lines (NLTLs) periodically loaded with symmetric voltage-dependent capacitances is studied. From the lumped-element equivalent circuit of the line we have analyzed the influence of nonlinear shunt reactances on soliton propagation characteristics. It is shown that by increasing the nonlinearity of the C-V characteristic, a faster separation of the input signal into solitons is achieved. The fact that frequency multiplication in NLTLs is governed by soliton formation makes the results of this work relevant for understanding the influence of nonlinear loading devices on multiplier performance. Since a heterostructure barrier varactor (HBV)-like voltage-dependent capacitance has been considered for the nonlinear devices, this study can be of interest for the design of millimeter wave frequency multipliers loaded with HBVs. |
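The central claim of the abstract above, that a more strongly nonlinear C-V characteristic makes the input signal break up into solitons faster, can be illustrated with a toy lumped-element simulation. The sketch below is an illustrative assumption, not the authors' model: it integrates a normalized LC ladder whose shunt capacitance follows a symmetric, HBV-like law C(V) = C0/(1 + (V/V0)^2)^b, where the exponent `b` and all component values are invented normalized parameters standing in for the strength of the nonlinearity.

```python
import numpy as np

def c_of_v(v, c0=1.0, v0=1.0, b=1.0):
    """Symmetric, HBV-like voltage-dependent shunt capacitance (toy model)."""
    return c0 / (1.0 + (v / v0) ** 2) ** b

def simulate_nltl(n_cells=120, dt=0.01, steps=3000, b=1.0, L=1.0):
    """Semi-implicit Euler integration of a lumped LC ladder:
    L dI_k/dt   = V_{k-1} - V_k        (series inductors)
    C(V_k) dV_k/dt = I_k - I_{k+1}     (nonlinear shunt capacitors)"""
    x = np.arange(n_cells)
    v = 1.2 * np.exp(-((x - 20.0) / 4.0) ** 2)  # smooth input pulse
    i = np.zeros(n_cells + 1)                   # branch currents; ends open
    for _ in range(steps):
        i[1:-1] += dt * (v[:-1] - v[1:]) / L    # update branch currents
        v += dt * (i[:-1] - i[1:]) / c_of_v(v, b=b)  # update node voltages
    return v

v_weak = simulate_nltl(b=0.2)    # weakly nonlinear line
v_strong = simulate_nltl(b=2.0)  # strongly nonlinear line
```

Plotting `v_weak` against `v_strong` should show the strongly nonlinear line steepening and splitting the pulse into separate peaks much earlier, consistent with the trend the paper reports.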
The silent loss of cell physiology hampers marine biosciences An ongoing loss of experts in marine cellular biochemistry and physiology (CBP) is stagnating the generation of knowledge upon which rapidly growing omics approaches rely, ultimately hampering our ability to predict organismal responses to climate change. In the past few decades, the genomic revolution has enabled astonishing advances in marine ecological and evolutionary sciences. Today, genomic and transcriptomic techniques are increasingly being used to attempt to understand and predict the responses of marine species to ongoing climate change, clearly one of the most formidable scientific challenges of our generation. But while researchers were able to identify genomic targets of selection in some recent studies (for example, in coral populations inhabiting exceptionally warm waters and in mussels reared under simulated ocean acidification), the functions of the coded proteins are often poorly characterized or even completely unknown. The few studies in which researchers have successfully linked genomic targets of selection to specific cellular processes share a common feature: They benefited from decades of foundational research on cellular biochemistry and physiology (CBP) that grounded their analysis. Unfortunately, the genomic revolution has led to the displacement of many disciplines perceived to be less cutting edge, as the number of faculty positions could not keep pace with the explosive diversification of biological research. Similar to the widely recognized waning numbers of taxonomists, there is a silent and inadvertent loss of CBP experts in the marine science community. Inexorably, this loss in expertise will affect our ability to discover and characterize phenotypes and to predict species-specific resilience and vulnerability to climate change. 
Meanwhile, biomedical researchers are uncovering a surprising lack of correlation between gene expression and cell function even in clonal cell lines under controlled laboratory conditions. These findings highlight the urgent need for a comprehensive understanding of cellular biological processes that extends beyond genomics and transcriptomics. Cells contain a dense mix of ions, proteins, and other organic molecules (collectively referred to as "osmolytes") that create a gel-like fluid that determines protein and organelle function (Fig 1); the properties of this fluid can vary greatly in ways specific to organelles, cell types, and species. Most metabolic reactions are highly sensitive to carbonate chemistry and osmolyte concentrations, and virtually every enzyme is either directly impacted by pH or posttranslationally regulated via acid-base-dependent signaling pathways. Thus, knowledge of intracellular carbonate chemistry under both control and stress conditions is essential for our mechanistic understanding of the impacts of ocean acidification and other stressors. However, although invertebrates constitute approximately 75% of marine animal biodiversity, not one invertebrate species has had its intracellular osmolyte concentrations, acid-base parameters, and carbonate chemistry fully characterized. Indeed, the most comprehensive (yet incomplete) assessment of intracellular osmolyte budgets in invertebrates was published >60 years ago, and detailed information about intracellular acid-base parameters is available only for squid. This lack of knowledge has many implications, such as impairing our ability to mimic intracellular fluids during in vitro characterization of the proteins that are targets of selection (Fig 1). Numerous other critical processes are similarly understudied at the biochemical and cellular levels, including aerobic and anaerobic ATP production, antioxidant responses, biomineralization, and metabolic exchange between symbiotic partners. 
Not coincidentally, a large portion of genes from marine animal pan-genomes cannot be functionally annotated at present. Of course, "omics" approaches have an important role in marine sciences, both for evaluating known processes and for generating new hypotheses. But do we want to rely on transcriptomics to assess effects of environmental stressors on marine animals when we do not know the identity of the relevant genes and the functions of their coded proteins? How relevant are Gene Ontology or KEGG pathways for studies on marine animals, considering these databases are largely built using model organisms that do not possess the traits we are interested in evaluating? Can we accurately predict the responses of marine fish to climate change through studies of zebrafish, a freshwater species? Do rodents or Caenorhabditis elegans (or even noncalcifying anemones) hold the key to understanding how corals build massive calcium carbonate reef structures? Similarly, collaborations with traditional molecular biologists and biomedical scientists will certainly help us to gain insights about how marine organisms work; however, this does not eliminate the need for marine CBP scientists who can analyze Big Data in an environmentally relevant context, identify gaps, formulate the next set of hypotheses, and design and execute the ensuing experiments. The field of marine CBP, alas, has intrinsic limitations that conspire against its progress and impact. Chief among them, method development is very time-consuming and often results in protocols that can only be performed in specialized laboratories, leading to relatively modest publication and citation rates. In the current hypercompetitive academic environment, this combination can discourage junior researchers and be detrimental to early career faculty. 
In addition, the small and decreasing size of the scientific community in this field constitutes a major hurdle during the review process. Indeed, it is increasingly difficult to find peers who can fairly assess the quality of work while also avoiding conflicts of interest. As a result, work is often reviewed by colleagues and editors who lack the required expertise and have other scientific interests and thus tend to perceive research on CBP as "too specialized" (or, if reviewed by biomedical scientists, as "not sophisticated enough"). This generates an unfortunate cycle whereby studies are seldom published in high-profile multidisciplinary journals, affecting the ability of researchers in this field to influence the scientific community and secure funding and faculty positions. Furthermore, the retirement of each colleague results in the loss of an academic role model, completing a "vicious cycle." Retirement of colleagues also leads to loss of methodological know-how that, eventually, will have to be redeveloped when cellular phenotyping, in the broadest sense, inevitably becomes essential. In the short term, the theoretical and practical know-how can be preserved through detailed video and written methods and review papers. But in the medium and long term, marine CBP scientists will always be needed to identify original questions, develop new methods, interpret results, and promote and generate excitement to the next generations of scientists and keep the cycle going. We thus urge research institutions to once again value and promote the hiring of marine CBP faculty and researchers. Cluster faculty hires that "pair" CBP researchers with evolutionary ecologists seem a promising way forward, especially if these researchers work on similar taxa and environments. 
Only with a strong community of CBP scientists will we be able to understand how marine animal species "work," how cellular attributes differ from those of biomedical model organisms, and why some genotypes fare better than others in rapidly changing ocean environments. |
Sara Iredell Fleetwood
Early life
Sara Louise Iredell was born in April 1849 in St. Louis, Missouri, to Elizabeth Susan (née Webb) and Geoffrey George Iredell. Her father was originally from Edenton, North Carolina, and was the son of a slave who had been emancipated. At the time Sara and her sister Laura (1850–1909) were born, he was operating a barber shop in St. Louis. Her mother, originally from Philadelphia, Pennsylvania, was the sister of Frank J. Webb, and they were the children of abolitionists Louisa (née Burr), illegitimate daughter of Aaron Burr, and Francis Webb. During Iredell's childhood, the family moved to Philadelphia, making their home with their Webb cousins. Between 1856 and 1858, she attended Oberlin College as a pupil-teacher.
Career
After her graduation from Oberlin, Iredell moved back to Philadelphia and began her career teaching in public schools. In 1863, she became a founding member of the Ladies Union Association, serving as the organization's secretary. The Ladies Union was created to raise funds and provide assistance to African-American soldiers who were sick or wounded. In 1866, Iredell worked as a pupil-teacher at the Institute for Colored Youth, completing her training in 1867. She then taught from 1867 to 1868 at the Roberts Vaux School before moving to teach in the public school system of Frederick, Maryland. Because of the low pay and the treatment black teachers received, she left Maryland and began working as a teacher in Washington, D.C.
In Washington, Iredell became involved in the National Association for the Relief of Destitute Colored Women and Children. She met and married Medal of Honor recipient Christian Fleetwood in 1869, and the couple subsequently had a daughter, Edith. They were very involved with the prominent African-American professional community, hosting literary salons and entertaining their guests with theatrical and musical performances. In 1892, Fleetwood was one of the nine co-founders of the Colored Women's League of Washington, an organization which focused on issues faced by black women. She spoke at various functions, addressing issues like child care and parenting training, the establishment of nurseries for working women, and sanitation. In 1898, she and Anna Evans Murray attended the Congress of Mothers as representatives of the Colored Women's League.
In 1893, Fleetwood enrolled in the first class of nurses admitted to Howard University's Freedmen's Hospital School of Nursing, studying under Daniel Hale Williams. That same year, she and her cousin, Evelyn D. Shaw, organized relief efforts to feed and house those impacted by the Panic of 1893. She graduated from Freedmen's in 1896 and initially became a private nurse in Washington. In February 1901, when the previous nursing supervisor resigned, Fleetwood was appointed by Dr. Austin M. Curtis as the replacement supervisor for the training school. She took a national civil service examination to qualify for the post, outscoring applicants from throughout the country. Her appointment marked the first time a black supervisor held the post. In August of the same year, she was confirmed as supervisor by the chief surgeon, Dr. William A. Warfield, who reappointed her and gave her the title Directoress of Nurses. She remained the director until 1904, when she resigned from the post.
Fleetwood organized the Freedmen's Nurses Association and attended the national convention of the Nurses Association Alumni as the association's delegate in 1904. In 1907, when the examining board for graduate nurses was established in Washington, D.C., she was selected as the first black representative on the board by the Graduate Nurses' Association. When her term expired in June of that same year, she was not reappointed, and despite protests by the commissioners, no other African American representative was appointed to the board.
Death and legacy
Fleetwood died on February 1, 1908, in Washington, D.C., from complications of diabetes. Her papers and her husband's make up the Christian A. Fleetwood Papers, which were donated to the Library of Congress in 1947. The site of the house in which the couple resided, at 319 U Street NW, in the LeDroit Park Historic District of Washington, D.C., is part of the African American Heritage Trail in the capital city and is identified by a historic marker. |
Improved Control of DFIG Wind Turbines for Operation with Unbalanced Network Voltages Many wind turbine generators (WTGs) are installed in remote, rural areas, where the power grids are usually weak, characterized by unbalanced voltage conditions. If the voltage unbalance is not taken into account in the control system, it will cause poor power quality and poor operating performance of the WTG systems. This paper proposes a novel control scheme to improve the operating performance of a wind turbine equipped with a doubly fed induction generator (DFIG) under unbalanced network voltage conditions. The rotor-side converter (RSC) and grid-side converter (GSC) of the DFIG are controlled in a positive (dq)+ reference frame as well as in a negative (dq)- reference frame. Control of the RSC and GSC in the (dq)+ reference frame is the same as the regular control under balanced network conditions. The supplementary control of the RSC in the (dq)- reference frame minimizes the electromagnetic torque pulsations of the DFIG caused by the unbalanced network voltage; the supplementary control of the GSC in the (dq)- reference frame helps balance the total output currents of the DFIG. The proposed control scheme is implemented in PSCAD/EMTDC on a 3.6 MW DFIG wind turbine connected to a power network with unbalanced voltage. Results show that it improves the operating performance of the DFIG wind turbine system. |
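The dual-frame control described in this abstract rests on separating the unbalanced grid voltage into positive- and negative-sequence components. A minimal sketch of that separation for steady-state phasors, using the Fortescue transform, is shown below; note that the paper itself works with instantaneous dq quantities and filtering, so this is only the underlying algebra, with invented phasor values for illustration.

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # 120-degree rotation operator "a"

def sequence_components(va, vb, vc):
    """Fortescue decomposition of three phasors into zero-,
    positive-, and negative-sequence components."""
    v0 = (va + vb + vc) / 3
    vp = (va + A * vb + A**2 * vc) / 3
    vn = (va + A**2 * vb + A * vc) / 3
    return v0, vp, vn

# Build an unbalanced set from known +/- sequence phasors, then recover them.
vp_true = 1.0 + 0j                    # 1 p.u. positive sequence
vn_true = 0.1 * np.exp(1j * 0.5)      # 10% negative sequence (the "unbalance")
va = vp_true + vn_true
vb = A**2 * vp_true + A * vn_true
vc = A * vp_true + A**2 * vn_true
v0, vp, vn = sequence_components(va, vb, vc)
```

In a scheme like the paper's, `vp` would feed the regular (dq)+ controllers while `vn` drives the supplementary (dq)- compensation terms.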
import {message, Typography} from "antd";
import React from "react";
import getFirebase from '../utils/firebase';
import env from "../utils/vars";
import {navigate} from '@reach/router';
const Text = Typography.Text;
export function handleAPIResponse(response) {
    if (response.ok) {
        return Promise.resolve(response.json());
    }
    return Promise.resolve(response.json()).then((responseInJson) => {
        return Promise.reject(responseInJson.error);
    });
}

export function handleAPIError(error) {
    if (error && error.message) {
        // Network error (API down / offline)
        if (error.name === "TypeError") {
            message.error(<>Unable to connect to backend <Text type="secondary">- retry in a while ...</Text> </>, 10);
            //navigate('/503/');
        }
        // Authorization error
        else if (error.code === "401003") {
            message.error(<><Text>{`${error.message}`}</Text> <Text type="secondary">- {error.reason}</Text></>);
            // Log the user out
            const fb = getFirebase();
            fb.auth().signOut().then(function () {
            }).catch(function (error) {
                console.log(error);
            });
        }
        // API error
        else {
            message.error(<><Text>{`${error.message}`}</Text> <Text type="secondary">- {error.reason}</Text></>);
        }
    } else {
        message.error("Something went wrong");
    }
}
|
The Prominent Mandibular Angle: Preoperative Management, Operative Technique, and Results in 42 Patients A prominent mandibular angle is considered to be unattractive in the Orient because it gives the face a square and muscular appearance. While described infrequently in the United States, this entity is commonly encountered in the Orient owing to different facial characteristics and different aesthetic sensibilities. We present a retrospective study of 42 female patients who presented requesting the reduction of a prominent mandibular angle for cosmetic reasons. We describe our approach, which utilizes formal planimetry, cephalometric tracings, and Panorex mandibular radiographs. We utilize the intraoral approach and use an oscillating saw to resect the predetermined segment of bone. In 18 of the 42 patients, we resected muscle as well. We also describe using the preauricular incision in a patient undergoing a concomitant rhytidectomy. Our cosmetic results have been generally satisfactory, with only one inaccurate osteotomy. We had three infections, which resolved without sequelae. |
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define S string
#define mp make_pair
#define pb push_back
#define lb lower_bound
#define ub upper_bound
//<NAME> https://github.com/anubhawbhalotia
#define fi first
#define se second
#define f(i,s,n) for(long i=s;i<n;i++)
#define fe(i,s,n) for(long i=s;i<=n;i++)
#define fr(i,s,n) for(long i=s;i>n;i--)
#define fre(i,s,n) for(long i=s;i>=n;i--)
#define mod 998244353
typedef vector<int> vi;
typedef vector<long> vl;
typedef vector<ll> vll;
typedef pair<int,int> pii;
typedef pair<long,long> pll;
typedef pair<ll,ll> pllll;
typedef set<int> si;
typedef set<long> sl;
typedef multiset<int> msi;
typedef multiset<long> msl;
typedef multiset<ll> msll;
// Prints YES if any of the five cards shares a rank (first character)
// or a suit (second character) with card s, otherwise NO.
int main()
{
    string s, a;
    cin >> s;
    int flag = 0;
    f(i, 0, 5)
    {
        cin >> a;
        if (a[0] == s[0] || a[1] == s[1])
            flag = 1;
    }
    if (flag)
    {
        cout << "YES" << endl;
    }
    else
        cout << "NO" << endl;
} |
Moderate glucose control is associated with increased mortality compared with tight glucose control in critically ill patients without diabetes. BACKGROUND Optimal glucose management in the ICU remains unclear. In 2009, many clinicians at Intermountain Healthcare selected a moderate glucose control (90-140 mg/dL) instead of tight glucose control (80-110 mg/dL). We hypothesized that moderate glucose control would affect patients with and without preexisting diabetes differently. METHODS We performed a retrospective cohort analysis of all patients treated with eProtocol-insulin from November 2006 to March 2011, stratifying for diabetes. We performed multivariate logistic regression for 30-day mortality with covariates of age, modified APACHE (Acute Physiology and Chronic Health Evaluation) II score, Charlson Comorbidity score, and target glucose. RESULTS We studied 3,529 patients in 12 different ICUs in eight different hospitals. Patients with diabetes had higher mean glucose (132 mg/dL vs 124 mg/dL) and greater glycemic variability (SD = 41 mg/dL vs 29 mg/dL) than did patients without diabetes (P <.01 for both comparisons). Tight glucose control was associated with increased frequency of moderate and severe hypoglycemia (30.3% and 3.6%) compared with moderate glucose control (14.3% and 2.0%, P <.01 for both). Multivariate analysis demonstrated that the moderate glucose target was independently associated with increased risk of mortality in patients without diabetes (OR, 1.36; 95% CI, 1.01-1.84; P =.05) but decreased risk of mortality in patients with diabetes (OR, 0.65; 95% CI, 0.45-0.93; P =.01). CONCLUSIONS Moderate glucose control (90-140 mg/dL) may confer greater mortality in critically ill patients without diabetes compared with tight glucose control (80-110 mg/dL). A single glucose target does not appear optimal for all critically ill patients. 
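As a rough illustration of the kind of multivariate logistic model reported above (odds ratios with covariates), the sketch below fits a logistic regression by Newton-Raphson on entirely synthetic data; the covariate distributions and the 0.3 log-odds effect of the moderate target are invented, and the study's actual model also included the Charlson score and stratification by diabetes.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    exp(coefficient) gives the odds ratio for that covariate."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ beta, -30, 30)))
        grad = Xb.T @ (y - p)                        # score vector
        hess = Xb.T @ (Xb * (p * (1.0 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)                       # invented covariates
apache = rng.normal(15, 5, n)
moderate = rng.integers(0, 2, n).astype(float)    # 1 = moderate target
true_logit = -5.0 + 0.04 * age + 0.05 * apache + 0.3 * moderate
died = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

beta = fit_logistic(np.column_stack([age, apache, moderate]), died)
or_moderate = float(np.exp(beta[3]))  # odds ratio for the moderate target
```

With data generated this way, `or_moderate` recovers an odds ratio above 1, mirroring the direction of the reported association in patients without diabetes.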
These data have important implications for the design of future interventional trials as well as for the glycemic management of critically ill patients. |
Self-reproducing catalyst drives repeated phospholipid synthesis and membrane growth Significance We report on the design and synthesis of an artificial cell membrane that sustains continual growth. Lipid membranes are ubiquitous in all domains of life. Numerous studies have exploited the ability of lipids to self-assemble into bilayer vesicles with properties reminiscent of cellular membranes, but previous work has yet to mimic nature's ability to support persistent phospholipid membrane formation. In this work, we have developed an artificial cell membrane that continually synthesizes all of the components needed to form additional catalytic membranes. These results demonstrate that complex lipid membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks. Cell membranes are dynamic structures found in all living organisms. There have been numerous constructs that model phospholipid membranes. However, unlike natural membranes, these biomimetic systems cannot sustain growth owing to an inability to replenish phospholipid-synthesizing catalysts. Here we report on the design and synthesis of artificial membranes embedded with synthetic, self-reproducing catalysts capable of perpetuating phospholipid bilayer formation. Replacing the complex biochemical pathways used in nature with an autocatalyst that also drives lipid synthesis leads to the continual formation of triazole phospholipids and membrane-bound oligotriazole catalysts from simpler starting materials. In addition to continual phospholipid synthesis and vesicle growth, the synthetic membranes are capable of remodeling their physical composition in response to changes in the environment by preferentially incorporating specific precursors. These results demonstrate that complex membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks. |
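The self-reproducing catalyst at the heart of the system above follows autocatalytic kinetics. As a deliberately simplified caricature, ignoring the actual triazole chemistry and membrane physics, consider a single closed-system step P + C → 2C with invented rate constants: because [P] + [C] is conserved, catalyst growth is logistic, and the numerical solution can be checked against the exact one.

```python
import numpy as np

def autocatalysis(c0=0.01, a0=1.0, k=5.0, t_end=4.0, dt=1e-3):
    """Closed-system autocatalysis P + C -> 2C, i.e. dC/dt = k*C*(a0 - C)
    with a0 = [P] + [C] conserved. RK4 integration plus the exact logistic."""
    def f(c):
        return k * c * (a0 - c)

    steps = int(round(t_end / dt))
    ts = np.linspace(0.0, t_end, steps + 1)
    traj = np.empty_like(ts)
    c = c0
    for n in range(len(ts)):
        traj[n] = c
        # classic fourth-order Runge-Kutta step
        k1 = f(c)
        k2 = f(c + 0.5 * dt * k1)
        k3 = f(c + 0.5 * dt * k2)
        k4 = f(c + dt * k3)
        c += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    exact = a0 / (1.0 + (a0 - c0) / c0 * np.exp(-k * a0 * ts))
    return ts, traj, exact

ts, num, exact = autocatalysis()
```

The slow induction period followed by explosive growth and saturation is the qualitative signature of a self-reproducing catalyst; in the paper's system, saturation is avoided by continually feeding simpler precursors.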
// Schema implements the sql.Node interface
func (dtf *DiffTableFunction) Schema() sql.Schema {
if !dtf.Resolved() {
return nil
}
if dtf.sqlSch == nil {
panic("schema hasn't been generated yet")
}
return dtf.sqlSch
} |
n = int(input())
ans = 1
prevx, prevy = 0, 0
for _ in range(n):
    x, y = map(int, input().split())
    # Extend the answer by the overlap of the interval [max(prevx, prevy), min(x, y)].
    if prevx == prevy:
        ans += max(0, min(x, y) - max(prevx, prevy))
    else:
        ans += max(0, min(x, y) - max(prevx, prevy) + 1)
    prevx, prevy = x, y
print(ans)
|
Risk Factors for Postoperative Pulmonary Complications: An Update of the Literature Abstract Perioperative medicine is a growing area of research that brings together internists, anesthesiologists, surgeons, and hospitalists. A medical team approach to ensure the best possible patient outcomes has fostered collaborative strategies across disciplines. Perioperative pulmonary complications are common and can be associated with significant morbidity and mortality. Effective strategies to identify and reduce risks of pulmonary complications can improve patient outcomes. We review the new literature (2013 to early 2014) in the field of perioperative pulmonary medicine that reports new strategies to improve outcomes in the area of perioperative pulmonary care. |
Enhanced sampling reveals the main pathway of an organic cascade reaction Normal molecular dynamics simulations are usually unable to simulate chemical reactions due to the low probability of forming the transition state. Therefore, enhanced sampling methods are implemented to accelerate the occurrence of chemical reactions. In this investigation, we present an application of metadynamics to simulating an organic multi-step cascade reaction. The analysis of the reaction trajectory reveals the barrier heights of both the forward and reverse reactions. We also present a discussion of the advantages and disadvantages of generating the reactive pathway using molecular dynamics and the intrinsic reaction coordinate (IRC) algorithm. I. INTRODUCTION Molecular simulations are gaining importance in physics, chemistry, biology, and materials research. Due to the high computational costs, it is difficult to investigate a range of natural phenomena requiring rare events, such as those exhibited in phase transitions, chemical processes, and protein folding, using traditional molecular dynamics (MD). Individual states in these systems are separated by colossal free energy barriers. Thus, the transition between them takes aeons. A solution to this issue is to perform enhanced sampling simulations, which are often divided into two categories: collective variable (CV)-based and CV-free approaches. CVs characterise the most challenging modes to sample and are typically used to distinguish between metastable states. In order to accelerate the transition between metastable states, CV-based approaches such as umbrella sampling (US) 1 and metadynamics (MetaD) 2 can improve sampling over the CVs. CV-free methods, such as replica exchange MD (REMD) 3 and integrated tempering sampling (ITS) 4, can facilitate transitions between distinct metastable states with little a priori system knowledge. 
Recently, hybrid approaches, which mix the two categories of methods 5,6, can further improve the sampling capabilities over the required configuration or phase space 7,8. There are already studies that apply enhanced sampling simulations to organic reactions. However, most of them only concern single-step reactions (although a pair of stereoisomers can be formed). In this study, a metadynamics simulation is applied to a two-step reaction that includes two different types of pericyclic reactions and has significance in synthetic chemistry 12,13. The reaction scheme is shown in Figure 1. FIG. 1: A double-step organic cascade reaction We tried to determine the reaction pathway and obtain the free energy barrier height through the metadynamics simulation. A discussion follows to evaluate the efficiency of finding the pathway using enhanced sampling and IRC. Metadynamics (MetaD): The general idea behind MetaD is to discourage the system from revisiting configurations it has already sampled by adding a Gaussian bias potential $V_{\mathrm{bias}}$ to the system Hamiltonian 2,14. The bias potential is a function of collective variables (CVs) $\mathbf{s}$, and a collective variable is itself a function of the particle positions $\mathbf{r}$. The bias takes the form $V_{\mathrm{bias}}(\mathbf{s},t)=\sum_{t'<t}\omega\exp\left(-\sum_i\frac{(s_i-s_i(t'))^2}{2\sigma_i^2}\right)$, where $t$ is the simulation time, $\omega$ is the height of the Gaussian, and $\boldsymbol{\sigma}$ is the standard deviation vector. The goal of a MetaD run is to achieve the convergence of the free energy landscape. Theoretically speaking, at the end of a MetaD run, the value of the free energy $F(\mathbf{s})$ is equal to the negative value of the accumulated bias potential, i.e., $F(\mathbf{s})=-\lim_{t\to\infty}V_{\mathrm{bias}}(\mathbf{s},t)+C$. However, the constant addition of the repulsive potential actually prevents the convergence of the free energy surface, introducing a systematic error into it. The way to tackle this problem is to introduce a new method known as well-tempered metadynamics (WT-MetaD), which is used in this study 18. Well-tempered metadynamics (WT-MetaD): The biggest difference between MetaD and WT-MetaD is that the height of the Gaussian is now time-dependent. 
The height is updated as $\omega(t)=\omega\exp\left(-\frac{V_{\mathrm{bias}}(\mathbf{s},t)}{k_B\Delta T}\right)$, where $\omega$ is the height of the first Gaussian and $\gamma=(T+\Delta T)/T>1$ is known as the bias factor. The height $\omega(t)$ decreases as the bias potential accumulates. When $t\to\infty$, $\omega(t)\to 0$, and the bias potential converges as $V_{\mathrm{bias}}(\mathbf{s},t\to\infty)=-\left(1-\frac{1}{\gamma}\right)F(\mathbf{s})+C$. The free energy landscape $F(\mathbf{s})$ can then be calculated. III. SYSTEM AND METHOD The simulated reaction is shown in Figure 2. The QM/MM-based molecular dynamics simulation is performed with SANDER in AMBER18 19, and the enhanced sampling plugin used to perform WT-MetaD is the open-source, community-developed PLUMED library 20, version 2.5.1 21. In order to fit the experimental conditions, the simulation is in the NVT ensemble, the temperature is set to 200 K, and a Langevin thermostat with a friction coefficient of 1 ps −1 is enabled to control the temperature of the system. An implicit solvation model is used to model the solvent (THF). The conditions are chosen to fit the experimental conditions 12,13. The simulation length is 20 ns with four parallel trajectories. As there are several million configurations, the level of energy calculation is set to DFTB3 in order to reduce the time cost 22,23. The accuracy of the DFTB method for barrier heights of a variety of reactions has been verified 24. The two CVs are selected to be the two bonds that are broken and formed during the whole reaction, also shown in Figure 2. The reaction pathway is then sketched using the MEPplot package 25. The initial points are placed at the bottom of the free energy landscape of each compound. The free energy pathway is then elucidated after its convergence is reported by the package. IV. RESULTS AND DISCUSSION The effectiveness of MetaD simulations is examined by plotting the change in the magnitude of the CVs versus time. The relationship between $d_1$ and $t$, and that between $d_2$ and $t$, in a trajectory is shown in Figure 3. According to the graphs, both CVs have changed considerably through time, which implies multiple conversions between compounds. 
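As a rough sketch of the simulation setup described above, a PLUMED input for well-tempered metadynamics on two bond-distance CVs with restraining walls might look like the following (the atom indices and all numerical parameters here are hypothetical placeholders, not values taken from this study):

```
# Bond distances as CVs; atom indices are placeholders for the reacting atoms
d1: DISTANCE ATOMS=1,2
d2: DISTANCE ATOMS=3,4
# A bond not involved in the reaction, restrained so the bias cannot break it
d3: DISTANCE ATOMS=5,6

# Well-tempered metadynamics on the two reactive CVs
METAD ARG=d1,d2 SIGMA=0.02,0.02 HEIGHT=1.2 PACE=500 BIASFACTOR=8 TEMP=200 LABEL=metad

# Upper wall keeping the uninvolved bond intact
uwall: UPPER_WALLS ARG=d3 AT=0.18 KAPPA=2000.0

PRINT ARG=d1,d2,metad.bias FILE=COLVAR STRIDE=100
```

In a real run one such wall would be defined for every carbon-carbon and carbon-oxygen bond that must stay intact, as discussed later in the text.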
For the case of $d_1$, a distance around 0.15 nm corresponds to compound 1. All the other distances correspond to multiple isomers of the acyclic compound 2 or compound 3. For the case of $d_2$, a distance around 0.15 nm corresponds to compound 3. All the other distances correspond to multiple isomers of the acyclic compound 2 or compound 1. The part of the free energy landscape of this trajectory, with the reaction trajectory overlaid, is sketched in Figure 5. The free energy curve is shown in Figure 6. The objective of this study is to generate the reaction pathway via enhanced sampling methods, which is a relatively novel approach, as the traditional way of doing this is to perform the IRC algorithm 26,27. It is worthwhile to discuss the differences between them, and their strengths and weaknesses. IRC algorithms require the starting compound to be the transition state if a complete trajectory is required. This is because, in principle, the IRC-LQA algorithm operates as follows 28,29: around the starting geometry, with position vector $\mathbf{x}_0$, the potential energy surface (PES) is expanded. The energy is expressed as $E(\mathbf{x})=E(\mathbf{x}_0)+\mathbf{g}_0^{T}\Delta\mathbf{x}+\frac{1}{2}\Delta\mathbf{x}^{T}\mathbf{H}_0\Delta\mathbf{x}$, where $\Delta\mathbf{x}$ is the displacement vector, $\mathbf{g}_0$ is the energy gradient at $\mathbf{x}_0$, and $\mathbf{H}_0$ is the Hessian matrix at $\mathbf{x}_0$. Taking the first derivative of this equation with respect to $\mathbf{x}$ gives the energy gradient at $\mathbf{x}$ as $\mathbf{g}(\mathbf{x})=\mathbf{g}_0+\mathbf{H}_0\Delta\mathbf{x}$. Then the coordinate vector $\mathbf{x}$ can be updated using the steepest descent equation $\frac{d\mathbf{x}}{ds}=-\frac{\mathbf{g}(\mathbf{x})}{|\mathbf{g}(\mathbf{x})|}$, where $s$ is an arc length along the reaction path. The most important prerequisite of a successful IRC calculation is that the input structure must be the transition state (TS). This is because the IRC calculation is usually carried out on both sides, i.e., the position vector is updated in opposite directions. If the starting compound is not a TS, the updated energy value would rise, causing the IRC algorithm to halt. There are two problems in performing IRC calculations. 
The first problem with IRC is that finding the TS is far more non-trivial than finding the energy minima. In practice, a number of failed attempts usually accompany the search for the TS, especially when an implicit solvation model is added. As a TS is essential for performing an IRC calculation, being unable to find it will significantly hamper the progress of finding the reaction path. The second problem is that IRC, just like other TS-related algorithms, requires the Hessian matrix of a molecule 28. As a Hessian matrix is found by evaluating the second derivative of the molecular energy with respect to two atomic movements, the number of energy calculations is proportional to $N^2$, where $N$ is the number of atoms in the molecule. This makes IRC calculations very computationally expensive at high levels of theory. On the contrary, molecular dynamics simulations with enhanced sampling are more straightforward to carry out. In theory, one only needs to provide a molecular structure, which does not even need to be at the energy minimum, and (a) reaction coordinate(s) as the CV. After some time period (usually 20 to 30 ns), the FEL will converge, and the minimum energy pathway can be found using ready-made packages, e.g. MULE 30 or MEPplot 25. The benefit of a complete FEL, compared to a single path obtained via IRC, is that it may potentially reveal more reactive pathways than the most feasible one. However, obtaining the converged FEL is easier said than done. One of the main problems of enhanced sampling is that the molecule may collapse after the bias potential is added. Even if a bond is not set as a CV, the added bias will still significantly affect it. As a result, it may break after some time, making the whole simulation unable to proceed. The way of countering this is to add upper walls to the molecule to prevent the bonds from breaking. Such an issue is particularly serious in this study due to the high angle strain of the four-membered ring. 
Consequently, apart from the two bonds that form (break) during the reaction, upper walls are added to all other carbon-carbon bonds and carbon-oxygen bonds. Another issue of enhanced sampling is that there may be unexpected products. This, again, is due to the bias added to the system, which makes it chemically labile. In this study, an attempt was made to simulate the following reaction, shown in Figure 7, with CVs shown in the picture. Compared to compound 5', compound 6' not only contains no highly unsaturated ketene structures, but also has an aromatic furan ring. The high stability of compound 6' essentially prevents the simulation from proceeding. In practice, the FEL does not converge even after 200 ns of simulation, even when a lower wall is added to partially stop the formation of the expected carbon-oxygen bond. In general, the number of unexpected byproducts will increase significantly when the system gets larger, which creates huge difficulty in simulating them. Another problem lies in the level of theory: semiempirical and force-field-based energy algorithms are the only feasible ones for performing energy calculations on a colossal number of configurations (one million per nanosecond). Any attempt to refine the precision of the FEL, even using the 6-31(G) level of theory, may take days to complete even on a supercomputer. The final issue is the efficiency of enhanced sampling. Even though the transition between compounds is possible due to the addition of a bias potential, most of the simulation time is still wasted. This issue is also profound in this study, as the acyclic compound 2 has a large number of stereoisomers. Consequently, even though the simulation lasts for 20 ns, only 20 transitions between the reactants are observed, and it takes progressively longer for a new transition to occur. V. CONCLUSION AND FUTURE WORK In this study, an enhanced sampling simulation is performed on an organic cascade reaction. 
The FEL is then sketched, as well as the potential energy curve and the barrier heights. A comparison is then drawn to assess the feasibility of finding the reaction pathway via IRC and via enhanced sampling. Based on the current deficiencies of enhanced sampling, three future directions are suggested: add an option in the molecular dynamics package to prevent irrelevant bonds from breaking and unneeded side products from forming; design algorithms that can improve the efficiency of enhanced sampling simulations, especially when sampling the TS; and apply enhanced sampling molecular simulations to more organic cascade reactions to further demonstrate the benefits of this method. |
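To make the well-tempered bias-deposition rule described above concrete, here is a toy one-dimensional double-well sketch (all parameters and reduced units are invented for illustration; this is not the QM/MM setup used in the study):

```python
import math
import random

# Toy model: overdamped Langevin dynamics on the double well V(x) = (x^2 - 1)^2,
# with well-tempered metadynamics hills deposited along x. All units are reduced.
SIGMA, HEIGHT, GAMMA, KT = 0.2, 0.5, 5.0, 0.1
DT, PACE, STEPS = 1e-3, 100, 20000

def bias(x, hills):
    """Accumulated Gaussian bias at position x."""
    return sum(w * math.exp(-(x - xc) ** 2 / (2 * SIGMA ** 2)) for xc, w in hills)

def force(x, hills):
    """-dV/dx of the double well plus -dV_bias/dx of the deposited hills."""
    f = -4.0 * x * (x * x - 1.0)
    for xc, w in hills:
        f += w * (x - xc) / SIGMA ** 2 * math.exp(-(x - xc) ** 2 / (2 * SIGMA ** 2))
    return f

random.seed(0)
x, hills, crossings, side = -1.0, [], 0, -1
for step in range(1, STEPS + 1):
    x += force(x, hills) * DT + math.sqrt(2 * KT * DT) * random.gauss(0.0, 1.0)
    if step % PACE == 0:
        # Well-tempered rule: hill height shrinks where bias has already built up.
        hills.append((x, HEIGHT * math.exp(-bias(x, hills) / (KT * (GAMMA - 1)))))
    if x * side < 0:  # the barrier at x = 0 was crossed
        crossings += 1
        side = -side

print("hills deposited:", len(hills))
print("barrier crossings:", crossings)
```

Without the bias, crossings at this temperature would be rare; the accumulating hills flatten the occupied well until transitions occur, mirroring how the study's CV bias drives conversions between compounds.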
Jeff Foster was in college when he realized the type of player he needed to become to make it at the next level.
As a freshman at the little-known Southwest Texas State University (now named Texas State University), Foster was blown away by the ferocity that his teammate Elijah Hobley played with. Hobley had served in the military and was in his mid-20s at the time.
With a grown-man frame, Hobley played so relentlessly in practice that no one on the team — especially players still in their teenage years — could match him.
"He literally just beat the crap out of everybody on the team," explained Foster. "And I think I realized freshman year that if I didn't fight back, then I wasn't going to be able to play, and it really just kind of left a lasting impression on me that you just gotta go out there and fight."
Hobley finished his college career with averages of 6.2 points and 6.9 rebounds at Texas State, not unlike Foster's numbers in the pros, where he finished his 13-year Pacers career averaging 4.9 points and 6.9 rebounds.
The way Foster felt about playing against Hobley is the same way many NBA players now recount playing against Foster — a compliment to Foster's intensity. The statistical categories in which Foster is prominent on the franchise leaderboard are not typically the ones that send players to All-Star Games, but they are not without value. He ranks third in offensive rebounds (2,101), eighth in defensive rebounds (3,147), 10th in steals (507), and fourth in personal fouls (1,921). Foster never averaged double-digits in points or rebounds, yet still managed to stick around with one team for his entire 13-year NBA career, something just 26 other players have done in league history.
So how did Foster — playing at a small school in Texas — end up on the radar of the Pacers, who had just suffered a heartbreaking loss to the Knicks in the 1999 Eastern Conference Finals? It all started when the team president at the time, Donnie Walsh, took a trip to a college basketball tournament in Portsmouth, Virginia to scout prospects. The Portsmouth Invitational Tournament at the time was typically a way for second-round prospects to impress scouts and move up into the first round, which is exactly what Foster did when he hit his defender with a jaw-dropping move to the basket.
"I remember sitting close to the court and he was out there playing outside, he made a move where it was so quick for a 6-10 guy, faking and going in and dunking on somebody, and I said 'Whoa,'" Walsh recounted. "So then I started watching him, I really loved his motor, his aggression, he ran the floor, a great athlete. And I thought he was tough."
When it was time to host pre-draft workouts, Foster's name was high on the list of players that Walsh wanted to get in. Larry Bird, who was the head coach at the time, watched alongside Walsh at the workout as they both ended up surprised to see that not only was Foster the dirty-work type of player they had scouted, but could also shoot the ball with accuracy, something Walsh hadn't seen him do before.
"He was immediately a guy that you knew you wanted on your team because he was a good teammate," recounted Walsh. "There were things he could do that we really needed. He was good defensively, and he ran the floor."
When draft day came, the Pacers selected high-flying Jonathan Bender straight out of high school with the fifth overall pick in the draft. Then Indiana executed a draft-day trade, swapping the rights for Vonteego Cummings and a future first-round pick for the rights to Foster — who had been selected 21st overall after a four-year collegiate career.
Despite being more mature than most NBA rookies, Foster couldn't help but be wide-eyed when walking into a locker room that had a singular focus of breaking through the Eastern Conference and taking the franchise to its first NBA Finals. Predictably, Foster didn't play much in his rookie season, appearing in just 19 games as the Pacers followed through on their goal of making the NBA Finals, where they fell in six games to Kobe Bryant and the Lakers.
"I thought that was just sort of the first of many trips to come," Foster said. "And evidently it was not. But I realized that the way for me to get on the floor the quickest was just to kind of do all the things that everyone else didn't want to do."
As the 2000s Finals team changed in the years that followed, Foster saw his minutes ramp up with his comfort level in the NBA growing. In his third season, he played in every game and averaged 5.7 points and 6.8 rebounds per game.
"He kind of makes the difference between a good team and an All-Star team," Walsh said. "I don't believe that you can go out and get All-Stars and you're going to be good, because somebody has got to do all the little things that make you a complete team. He is the ultimate glue guy both on and off the court."
By his fifth season, Foster was already 27 years old — a veteran by NBA standards, especially since the league was still getting 18-year-olds straight out of the high school gym.
As a vet, Foster had a reputation for not just offering advice to his teammates, but sometimes his opponents as well (if he liked their game, that is).
"We always had mad respect for one another, and he was always just giving great advice to young guys to help 'em get through the process," explained Al Jefferson, who was a rookie on the Celtics when he and Foster first crossed paths. "Just teaching points, learning them veteran tricks. The tricks that only the veteran guys could get away with. Little things like that."
Foster's Pacers were matched up in the first round with Jefferson's Celtics during Big Al's rookie NBA season in 2005, and Indiana got the best of Boston, winning in seven games.
"He was one of the guys that went for my ball fake," laughed Jefferson. "And he told me one game, 'I'm not going for your ball fake ever again in life' and he didn't."
Jefferson, though, was one of the lucky ones. There is a trail of opponents who might not have quite as fond memories of doing battle in the paint with Foster, whose long limbs would ricochet around in search of rebounds, often meeting players' faces in the process.
"He played a very physical game in a legal way," said Walsh of Foster's playing style. "He wasn't going around hitting people. Because he wasn't trying to go over the rules, he wasn't trying to hit anybody, or elbow anybody, he just played physically so you felt him when you were playing against him. And that can wear guys down and it picks up the motor of the rest of the players."
Foster is in agreement that he never did anything to intentionally hurt an opposing player, but his willingness to get the most out of his allotted six fouls is one of the things that made his teammates breathe a sigh of relief that he was on their team, and not the other way around.
"If you're my teammate I'll do anything for you, I'll run through a wall for you," Foster said. "If you're on the other team, hey, sports are unlike anything else, there is a winner and loser."
One teammate who fondly recalls Foster's impact is the Pacers' franchise star, Paul George. Foster's final two seasons in the NBA were George's first two, and George credits Foster with building his confidence as a rookie.
"Jeff was probably one of my favorite teammates of all time, especially for myself being a rookie, it was like a perfect relationship between me and him," said George. "He would tell me about stuff off the court, how to be a man, how to be mature, how to handle pressure. And when I started getting minutes, he was the guy that was setting screens. I just remember every time he rebounded, he was looking for me, 'Rook where you at, where you at? Here rook,' then he'd come set the screen for me. So allowing me to play early on, he was the one really encouraging me to be special."
Like many parts of the franchise history, even in 2017, there is a direct line to Reggie Miller. When Foster was a rookie, Miller did similar things for Foster, getting him involved and active, even with a veteran team. Foster took Miller's obsessive work habits and routines, and helped pass them on, first to Danny Granger, then to Paul George, who now takes strides to make sure his younger players feel involved on the team as well.
"He would do his workouts everyday, even on off days he would come in, just because he needed to be prepared," George said. "Again, that's the professionalism side, that's when I was able to see it from that point of view. And he was just great in the locker room, talking with guys, with keeping a positive locker room, keeping the culture great. A lot of guys played into that, but Jeff was definitely a poster for it."
Foster's famed workout routine wasn't just to improve his on-court performance. Much of it, it turned out, was to get him on the court at all. He credits the Pacers training staff with giving him a regimen of workouts and treatments that helped alleviate back pain, but with his berserker style of play, Foster's back troubles began to severely hinder his ability to lace up.
It was a catch-22 — had he not dove for every loose ball, gotten involved in every tangle, crashed the paint for every offensive rebound, his back might have given him a few more years to chase a title. Yet, had he not done those things, he wouldn't have had a 13-year NBA career to begin with, and certainly wouldn't be remembered as the type of teammate that inspired others to work harder at their craft every day.
By the time the 2011-12 season rolled around, Foster was 35 years old and played in just 11 games, with his back sending a painfully clear signal that his time in the league was finished.
"When I couldn't feel anything below my knees, it was just cut and dried, I'm done. I can't play," Foster said. "If I keep playing, I'm going to be in a wheelchair."
Armed with a business degree from his time in college, Foster moved back to his home state of Texas and has become an entrepreneur, currently focusing on a cryotherapy company: Restore Cryotherapy.
It's a bit ironic, considering cryotherapy treatment might have helped during his time with the Pacers. Foster even joked about trying to sell a cryotherapy machine to Carl Eaton, the Pacers Associate Head Athletic Trainer/Physical Therapist, while he's in town for the Pacers 2000s "Decade Game" on Sunday.
But despite the lingering back pain, Foster speaks like someone who would run through the same brick walls for his teammates all over again. Someone who got a chance to live out his basketball dream and now gets a chance to live out his dream in the business world, which he's been passionate about since he sold baseball cards and mowed lawns in high school.
"I don't have any regrets," he said. "That's the deal, you know? I've got injuries that people don't deal with until their 60s or 70s; I literally just got back from L.A. seeing my doctor. I mean, whatever hand you're dealt you play it and I'm playing it. I deal with it, I get therapy, I stretch, I keep my core strong. Hey, life is what you make of it, you're either happy or you're sad, and I'm pretty content with how things worked out." |
Changes in Intestinal Morphology and Permeability in the BioBreeding Rat Before the Onset of Type 1 Diabetes Objective: Type 1 diabetes is an autoimmune disorder that occurs in genetically susceptible individuals. It has been hypothesized that the disease could be triggered by environmental agents that gain entry into the body through small intestinal absorption. Increased intestinal permeability has been reported both in spontaneous animal models of type 1 diabetes and human type 1 diabetes. In these studies, we examined both the physical and functional permeability characteristics of the small intestine in diabetes-prone and control rats. Methods: In a series of studies, BioBreeding diabetes-prone (n = 31), BioBreeding diabetes-resistant (n = 20) and control Wistar (n = 25) rats were examined at intervals from 21 to 125 days of age. Results: The percentage of goblet cells and the mucosal crypt depth were significantly greater in BioBreeding diabetes-prone than BioBreeding diabetes-resistant rats (P < 0.001 and P = 0.01, respectively). BioBreeding diabetes-prone and BioBreeding diabetes-resistant rats expressed less of the tight junction protein claudin (P < 0.05) and exhibited greater intestinal permeability (P < 0.001) than did Wistar rats. Intestinal permeability measured both in vivo and ex vivo decreased in all rat strains as age increased (P < 0.001). Conclusions: In a genetically susceptible rodent model of diabetes, early increased intestinal permeability might allow unregulated passage of environmental antigens that could potentially trigger the autoimmune response leading to type 1 diabetes. |
There is a boycott underway against Target from those who disagree with their decision to allow transgender people to choose whatever bathroom they want.
One guy decided to test out their new policy at his local Target. But here’s the thing – he didn’t dress up like a woman, and didn’t identify as a woman. He just waltzed in there lookin’ like a normal guy!
Here’s what happened:
The guy doesn’t identify as a woman, he just says his name is “Andy” – so, they let AndyGenders go where ever!!
Mediaite got a comment from Target after the video was released – they pretty much supported the decision of the manager:
Thanks for reaching out. We certainly respect that there are a wide variety of perspectives and opinions. As a company that firmly stands behind what it means to offer our team an inclusive place to work — and our guests an inclusive place to shop – we continue to believe that this is the right thing for Target. Thanks!
So what’s the point in even labelling bathrooms? It’s just a suggestion and you can use any one you want!! |
# Copyright (c) 2020 Club Raiders Project
# https://github.com/HausReport/ClubRaiders
#
# SPDX-License-Identifier: BSD-3-Clause
import logging
import psutil
def printmem(msg: str):
    process = psutil.Process()
    # Log the resident set size in bytes, with thousands separators.
    logging.info("Memory tracker: %s: %s", msg, '{:,}'.format(process.memory_info().rss))
|
Observing Organizational Environments: A Systematic Approach for Information Analysts Information Analysts observe the elements of an organization in order to gain information unavailable through interviewing and the investigation of hard data. In the past the process of observation has been intuitive at best. This article describes and develops a systematic methodology for analyzing the internal organizational environment. The approach is based on a framework used in film criticism called mise-en-scene analysis. Seven major concrete and abstract elements which influence organizational decisions are identified: office lighting and color; office design, space, and location; clothing of decision makers; individual and group decision making; abilities of decision makers; attention to multiple objectives; and cognitive maps of decision makers. The systematic framework for observation developed in this article is an alternative to the common sense approach to observation. The major advantage of the mise-en-scene approach is that it allows the Information Analyst to classify, document, and interpret important factors which usually remain at the subconscious level. |
An Oligopoly Spectrum Pricing with Behavior of Primary Users for Cognitive Radio Networks Dynamic spectrum sharing is a key technology to improve spectrum utilization in wireless networks. The elastic spectrum management provides a new opportunity for licensed primary users and unlicensed secondary users to efficiently utilize the scarce wireless resource. In this paper, we present a game-theoretic framework for dynamic spectrum allocation where the primary users rent the unutilized spectrum to the secondary users for a monetary profit. In reality, due to the ON-OFF behavior of the primary user, the quantity of spectrum that can be opportunistically shared by the secondary users is limited. We model this situation with the renewal theory and formulate the spectrum pricing scheme with the Bertrand game, taking into account the scarcity of the spectrum. By the Nash-equilibrium pricing scheme, each player in the game continually converges to a strategy that maximizes its own profit. We also investigate the impact of several properties, including channel quality and spectrum substitutability. Based on the equilibrium analysis, we finally propose a decentralized algorithm that leads the primary users to the Nash-equilibrium, called DST. The stability of the proposed algorithm in terms of convergence to the Nash equilibrium is also studied. |
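The convergence to Nash-equilibrium pricing described in the abstract can be illustrated with a generic two-seller Bertrand game with substitutable goods (a toy sketch: the linear demand form and all parameters below are invented for illustration and are not the paper's model):

```python
# Linear demand for seller i: q_i = a - b*p_i + c*p_j, with c < b (partial substitutes).
a, b, c = 10.0, 2.0, 1.0

def best_response(p_other):
    """Price maximizing profit p * (a - b*p + c*p_other), assuming zero marginal cost."""
    return (a + c * p_other) / (2 * b)

# Iterated best responses: each seller repeatedly re-prices against the other,
# analogous to the decentralized price updates converging to the Nash equilibrium.
p1, p2 = 0.0, 5.0
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)

analytic = a / (2 * b - c)  # symmetric Nash equilibrium price
print(round(p1, 6), round(p2, 6), round(analytic, 6))  # → 3.333333 3.333333 3.333333
```

Because the best-response map is a contraction here (c / 2b < 1), the iteration converges from any starting prices, which is the kind of stability property the paper studies for its DST algorithm.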
/// createNominalType - Create a new nominal type.
llvm::StructType *IRGenModule::createNominalType(CanType type) {
assert(type.getNominalOrBoundGenericNominal());
if (type->hasArchetype())
type = type.getNominalOrBoundGenericNominal()->getDeclaredType()
->getCanonicalType();
IRGenMangler Mangler;
std::string typeName = Mangler.mangleTypeForLLVMTypeName(type);
return llvm::StructType::create(getLLVMContext(), StringRef(typeName));
} |
// Add methods directly and solely to the Door object
impl Door {
fn new(is_open: bool) -> Door {
Door { is_open: is_open }
}
} |
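A brief usage sketch of the constructor pattern above (the type is re-declared here so the example compiles on its own):

```rust
// Re-declared so this example is self-contained.
struct Door {
    is_open: bool,
}

impl Door {
    fn new(is_open: bool) -> Door {
        Door { is_open }
    }
}

fn main() {
    // `new` is an ordinary associated function acting as a constructor.
    let door = Door::new(true);
    println!("door open: {}", door.is_open); // prints "door open: true"
}
```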
def _tokenize(self, controlled):
def build_controlled_task(tup):
return 'activate metaphors from {}: {}'.format(tup[0], tup[1])
def build_free_task(tup):
return 'activate metaphors: {}'.format(tup[1])
task_func = build_controlled_task if controlled else build_free_task
input_sents = (pseq(zip(self.metaphors_frame.source_domain,
self.metaphors_frame.in_sent))
.map(task_func)
.list())
logging.info('begin tokenizing input IDs for T5 Metaphor set.')
self.input_ids = self.tokenizer(
input_sents,
padding=True,
return_tensors='pt').input_ids
logging.info('finished tokenizing input IDs for T5 Metaphor set.')
logging.info('begin tokenizing output IDs for T5 Metaphor set.')
self.output_ids = self.tokenizer(
list(self.metaphors_frame.out_sent),
padding=True,
return_tensors='pt').input_ids
logging.info('finished tokenizing output IDs for T5 Metaphor set.') |
// Data class to explicitly indicate that these bytes are raw audio data
public class AudioData
{
public AudioData(byte[] bytes)
{
this.bytes = bytes;
}
public byte[] bytes;
} |