Error Estimates for Variational Models with Non-Gaussian Noise Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding types of solutions that are particularly favoured. In the case of linear problems, i.e. variational methods with quadratic fidelity and quadratic regularization, the error estimation is well-understood under so-called source conditions. Significant progress for nonquadratic regularization functionals has been made recently after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely for nonquadratic fidelities such as those arising from Bayesian models with non-Gaussian noise, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important noise models in imaging (Poisson, Laplacian, Multiplicative) and the corresponding Bayesian MAP estimation.
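For orientation, the Bregman distance referred to above is, for a convex regularization functional $J$ and a subgradient $p \in \partial J(v)$, commonly defined as follows (this is the standard definition, not notation taken from the paper itself):

$$D_J^{p}(u, v) = J(u) - J(v) - \langle p,\, u - v \rangle, \qquad p \in \partial J(v).$$

For the quadratic case $J(u) = \tfrac{1}{2}\|u\|^2$ it reduces to $D_J^{p}(u,v) = \tfrac{1}{2}\|u - v\|^2$, which is why the classical quadratic error estimates appear as a special case of Bregman-distance estimates.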
A new clustering approach based on Glowworm Swarm Optimization High-quality clustering techniques are required for the effective analysis of growing volumes of data. Clustering is a common data mining technique used to group homogeneous data instances based on their attributes. Nature-inspired optimization algorithms have received much attention for clustering, as they have the ability to find better solutions to cluster analysis problems. Glowworm Swarm Optimization (GSO) is a recent nature-inspired optimization algorithm that simulates the flashing behavior of glowworms. The GSO algorithm is useful for a simultaneous search of multiple solutions, having different or equal objective function values. In this paper, a clustering-based GSO (CGSO) is proposed, where GSO is adjusted to solve the data clustering problem by locating multiple optimal centroids based on the multimodal search capability of GSO. The CGSO process ensures that the similarity between cluster members is maximized and the similarity among members from different clusters is minimized. Furthermore, three special fitness functions are proposed to evaluate the goodness of GSO individuals in achieving high-quality clusters. The proposed algorithm is tested on artificial and real-world data sets. On most data sets, it outperforms four popular clustering algorithms. The results reveal that CGSO can efficiently be used for data clustering.
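To make the mechanism concrete, here is a minimal Python sketch of the two core GSO update rules the abstract alludes to (luciferin update, then probabilistic movement toward brighter neighbors). Parameter names and values are illustrative assumptions, not taken from the paper:

import numpy as np

def gso_step(positions, luciferin, fitness_fn, rho=0.4, gamma=0.6,
             step=0.03, radius=1.0):
    """One GSO iteration: luciferin update, then movement toward brighter neighbors."""
    # Luciferin update: decay of the old value plus fitness-proportional gain.
    luciferin = (1.0 - rho) * luciferin + gamma * fitness_fn(positions)
    new_positions = positions.copy()
    for i, x in enumerate(positions):
        # Neighbors are glowworms within the sensing radius that glow brighter.
        dist = np.linalg.norm(positions - x, axis=1)
        nbrs = np.where((dist < radius) & (luciferin > luciferin[i]))[0]
        if nbrs.size == 0:
            continue
        # The probability of moving toward a neighbor grows with its luciferin excess.
        p = luciferin[nbrs] - luciferin[i]
        j = np.random.choice(nbrs, p=p / p.sum())
        direction = positions[j] - x
        new_positions[i] = x + step * direction / (np.linalg.norm(direction) + 1e-12)
    return new_positions, luciferin

In a CGSO-style setting, each glowworm would encode a candidate cluster centroid and fitness_fn would be one of the paper's cluster-quality fitness functions; groups of glowworms that converge to different bright locations then yield the multiple centroids.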
My oldest child "Caroline" is a junior in high school. She wants to get a job after school and/or on the weekends. Her mom and I are 100% behind her goal to start working part-time. What's the best way for us to support her? Thanks Liz! You are our expert source for every career topic.

A lot of kids get their first jobs at retail stores or supermarkets. Let's assume that Caroline wants to do the same thing. Here's a fantastic way for a kid to get their first job (it works with people of all ages, too!).

Help Caroline fill out an online application form for the retail store or restaurant she's interested in working for. She may need to fill out ten or twelve applications, because not all stores are hiring at any given time. I'm not a fan of online application forms, but a kid going after their first job cannot skip that step. Caroline may say, "What should I put on the application form, Dad? I've never had a job before!" Caroline should list everything remotely job-like on the application form. She should list babysitting, raking leaves, volunteer projects in the community, and anything else that required her to show up and do something, whether she got paid or not.

Once she has completed the online application form, Caroline will go into the store and walk around. Her goal is to acquaint herself with the store, even if she's been in it a thousand times already. She's going to watch the employees and see how they do their jobs. Now Caroline will ask for a manager, or walk up to someone with an ID badge who's dressed like a manager.

Caroline: Hi, excuse me, are you the manager?
Manager: I'm one of them. How can I help you?
Caroline: I just wanted to introduce myself. I'm Caroline Vargas, a junior at Ridge Crest High School. I just filled out a job application to work here.
Manager: Awesome, Caroline! I'll look for your application.

Because retail stores are all about presence -- showing up, doing your job, interacting appropriately with other people and generally being reliable -- retail managers love to have a sharp applicant walk up and introduce themselves. They put a face with a name. They see that Caroline is sharp and personable. If they don't forget, they're going to look for her application when they get back to their office.

Now Caroline has to wait to get a call or email message inviting her to an interview. That's why she can't wait to see what happens with one store before applying at two or three more. She has to keep her job search engine running!

A few days later, Caroline gets an email message from the store's HR person. She's been invited to an interview for a Cart Attendant job after school on Tuesday. Caroline's going to dress up a bit (neat skirt or trousers, top, sweater or blazer) and go to the interview. The person interviewing her will probably show her into a small conference room or someone's office. You and Caroline's mom can walk her through the interview process so she's prepared. The Guest Services Manager "Leo" will interview Caroline. Here's how Caroline will nail the interview and get the job!

Leo, a Manager: So Caroline, tell me about yourself.
Caroline: I'm a junior at Ridge Crest High. I play baritone in the marching band.
Manager: Baritone? I haven't heard of that one.
Caroline: It's like a trombone, but we can't march with trombones because we'd hit our fellow marchers with our slides. I play trombone in Concert Band. Can I ask you a quick question about the job? I know you're looking for a Cart Attendant.
I want to make sure I understand what that job is all about. Of course, I've been in the store many times with my parents and friends, so I think I have some idea of what a Cart Attendant does. They round up the carts from the parking lot. They help people out to their cars with big packages. They probably back up the cashiers when somebody goes to lunch. Maybe they do some things with merchandise in the back? Is there anything else I'm missing?
Manager: That's a pretty good rundown! Cart Attendants also clean the bathrooms. Are you good with that?
Caroline: I clean the bathroom in my house, so that's no problem.
Manager: When are you available to work, Caroline?

What is this manager going to think about Caroline when she alleviates his greatest concern about hiring any high schooler -- that is, that the kid is just looking for video game money and won't have a clue -- within the first three minutes of the interview? The manager is going to be thrilled to meet Caroline. This method gets kids (and their parents) great jobs every day!

Caroline is not going to say or convey the message "I'm ready to go, I don't need any training," because that's ridiculous -- it's a new job, and her first job, and she's going to need all the usual training. She's conveying the message "I'm awake and aware. I pay attention. I know that if I tell you all about marching band you still won't have a great sense of how on-the-ball I am, but if I tell you that I've thought about the role and have a good idea of what you need to have done, you're going to have more confidence in me." Leo hired Caroline on the spot.

In my opinion, every high school student should get a part-time job, and every college student too. I think it's a disservice to let a kid graduate from high school, much less college, without having held a job. Getting a job and doing the job are huge learning experiences that no classroom instruction can replace. I think we should teach kids about work, employment, entrepreneurship, and life goals starting in the first grade. We should teach kids that they have passions that they will discover as little kids and throughout their lives. We can teach them that their passions are important, and that they don't have to give up their passions to make a living. Why keep the grown-up world a secret from kids? Why not teach them to look for things they love to do and are good at, and make a career out of doing those things?
The creator of Ripple and original founder of Mt. Gox, Jed McCaleb, recently made a donation worth roughly $500,000 in XRP to artificial intelligence researchers at the Machine Intelligence Research Institute (MIRI). Going by the XRP-USD exchange rate at the time of the donation, McCaleb's gift became the largest single contribution in the Institute's history. Though values have fluctuated and MIRI will not convert the donation into half a million US dollars straight away, it is still a startling amount and could be worth even more if the XRP value rises in future. At the time of writing, 1 XRP, the currency of the Ripple Network, was worth around $0.02.

MIRI

The Machine Intelligence Research Institute (MIRI) is a tax-exempt, non-profit organization focusing on safety issues related to the development of 'Strong AI', or smarter-than-human artificial intelligence. Known as the Singularity Institute until January 2013, MIRI has a mission to ensure the creation of such intelligence has a positive impact on humanity. Among its advisory board are names often associated with futures studies, Transhumanism and the technological Singularity: Nick Bostrom, Aubrey de Grey, PayPal co-founder Peter Thiel and Foresight Nanotech Institute co-founder Christine Peterson. Ray Kurzweil was also a director from 2007-10. MIRI's executive director Luke Muehlhauser said: "In late 2013 I decided MIRI should begin accepting XRP donations in addition to BTC donations. So I contacted Jed for advice, and he said 'Actually, I was thinking of making an XRP donation to you guys…' I didn't know at the time he was a fan of MIRI's research, but it turns out that he is." A donation in XRP leaves MIRI with a large amount of money it cannot easily spend, so the Institute has a strategy, Muehlhauser said: "We are converting some of the XRP to US dollars and to BTC through the Ripple network, we are keeping a significant chunk in XRP as an investment, and we are also selling some XRP to fans (so as to improve our current asset allocation and grow the Ripple network simultaneously)."

Researching 'Friendly AI'

Muehlhauser said MIRI would use the donation to help fund the hiring of mathematical researchers to work full-time on sub-problems in its 'Friendly AI' theory. The term 'Friendly AI' comes from research fellow Eliezer Yudkowsky. It is a sub-field of 'machine ethics' or 'artificial morality', which focuses on constraining the behavior of 'narrow AI' systems. Research into Friendly AI aims to ensure any future, self-improving and general artificial intelligence is 'friendly' to humanity. Machine ethics itself is not necessarily a far-flung or even Singularitarian idea: the US Department of Defense is already working on autonomous battlefield robots and airborne drones, and Congress has declared that a third of the US military's ground forces must be robotic by 2025. There are a few problems more future-oriented Friendly AI research must bear in mind. For example, future machine 'superintelligences' will be far more powerful than the humans who caused them to exist, and thus they and humans may no longer be able to comprehend each other. Such a superintelligence would also act precisely and literally based on the mechanisms put there by its designers, and would not understand the complexity or subtlety of its human designers' values (e.g. the meaning of 'happiness').
Ripple, XRP and Ripple Labs

Ripple itself, still "in beta", is intended to be a distributed payment and currency exchange network, and XRP is its native currency unit through which trades are made. While users can use Ripple/XRP to trade any currency (digital or fiat) they choose, it is necessary to hold some XRP to participate. Ripple has also been known to polarize the bitcoin community, with some seeing it as an evolution and others viewing its central ownership as a form of control that could threaten more distributed protocols. At November's US Senate Committee hearing, Ripple was the only digital payment network other than bitcoin to have representatives scheduled to testify on its behalf; however, they were absent on the day. All XRP units already exist and do not require mining. The for-profit company that oversees it, Ripple Labs, does not sell any product but aims to distribute 55% of the currency and keep the rest, in the hope its value increases as the network gains popularity. It either is or isn't a bitcoin competitor, depending on your perspective.

Jed McCaleb

Jed McCaleb left his main role at Ripple Labs in July last year, but is still a director there. He is known to have an interest in AI research and development, and the technological Singularity. He founded the current Mt. Gox in 2009, converting an old exchange for Magic: The Gathering Online trading cards into what became the world's largest bitcoin-USD exchange, based in Tokyo. He sold the site to its current owners, Tibanne Ltd., in 2011. McCaleb was also the creator of the eDonkey2000 P2P file-sharing network. Thanks to this history he is sometimes mentioned as possibly one of the real identities behind bitcoin creator Satoshi Nakamoto.
package com.kp.swasthik;

import org.apache.camel.CamelContext;
import org.apache.camel.component.cxf.CxfEndpoint;
import org.apache.camel.component.cxf.DataFormat;
import org.apache.cxf.Bus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CamelConfig {

    // Exposes a CXF proxy endpoint at /mathproxy that forwards SOAP messages
    // to the math service described by the WSDL below.
    @Bean
    public CxfEndpoint mathServicePoxy(Bus bus, CamelContext context) {
        CxfEndpoint ep = new CxfEndpoint();
        ep.setAddress("/mathproxy");
        ep.setBus(bus);
        ep.setCamelContext(context);
        ep.setLoggingFeatureEnabled(true);
        // PAYLOAD mode routes the SOAP payload as-is, without binding it to POJOs.
        ep.setDataFormat(DataFormat.PAYLOAD);
        ep.setWsdlURL("http://localhost:8080/services/math?wsdl");
        return ep;
    }
}
//TODO Writing and Reading tests.
mod tests {
    use crate::{SnowBinError, SnowBinErrorTypes, SnowBinInfo};

    #[test]
    fn info_test() -> Result<(), SnowBinError> {
        SnowBinInfo::default();

        // Valid header/data size combinations should construct without error.
        SnowBinInfo::new(8, 8)?;
        SnowBinInfo::new(8, 16)?;
        SnowBinInfo::new(8, 32)?;
        SnowBinInfo::new(8, 64)?;
        SnowBinInfo::new(34785382, 8)?;
        SnowBinInfo::new(755463454, 16)?;
        SnowBinInfo::new(7864263463, 32)?;
        SnowBinInfo::new(45662346234, 64)?;
        SnowBinInfo::new(u64::MAX, 8)?;
        SnowBinInfo::new(u64::MAX, 16)?;
        SnowBinInfo::new(u64::MAX, 32)?;
        SnowBinInfo::new(u64::MAX, 64)?;

        // Header sizes below the minimum are rejected.
        assert_eq!(SnowBinInfo::new(1, 8).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new(1, 16).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new(1, 32).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new(1, 64).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new(1, 1).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);

        // Data sizes other than 8/16/32/64 are rejected.
        assert_eq!(SnowBinInfo::new(8, 1).unwrap_err().error_type(), SnowBinErrorTypes::DataSizeNotAllowed);
        assert_eq!(SnowBinInfo::new(8, u8::MAX).unwrap_err().error_type(), SnowBinErrorTypes::DataSizeNotAllowed);

        Ok(())
    }
}

#[cfg(feature = "v_hash")]
mod v_hash_tests {
    use crate::{SnowBinError, SnowBinErrorTypes, SnowBinInfo};

    #[test]
    fn info_test() -> Result<(), SnowBinError> {
        SnowBinInfo::default_with_v_hash();

        // Same matrix of valid sizes, via the v_hash constructors.
        SnowBinInfo::new_with_v_hash(8, 8)?;
        SnowBinInfo::new_with_v_hash(8, 16)?;
        SnowBinInfo::new_with_v_hash(8, 32)?;
        SnowBinInfo::new_with_v_hash(8, 64)?;
        SnowBinInfo::new_with_v_hash(34785382, 8)?;
        SnowBinInfo::new_with_v_hash(755463454, 16)?;
        SnowBinInfo::new_with_v_hash(7864263463, 32)?;
        SnowBinInfo::new_with_v_hash(45662346234, 64)?;
        SnowBinInfo::new_with_v_hash(u64::MAX, 8)?;
        SnowBinInfo::new_with_v_hash(u64::MAX, 16)?;
        SnowBinInfo::new_with_v_hash(u64::MAX, 32)?;
        SnowBinInfo::new_with_v_hash(u64::MAX, 64)?;

        // Header sizes below the minimum are rejected.
        assert_eq!(SnowBinInfo::new_with_v_hash(1, 8).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new_with_v_hash(1, 16).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new_with_v_hash(1, 32).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new_with_v_hash(1, 64).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);
        assert_eq!(SnowBinInfo::new_with_v_hash(1, 1).unwrap_err().error_type(), SnowBinErrorTypes::HeaderSizeTooSmall);

        // Data sizes other than 8/16/32/64 are rejected.
        assert_eq!(SnowBinInfo::new_with_v_hash(8, 1).unwrap_err().error_type(), SnowBinErrorTypes::DataSizeNotAllowed);
        assert_eq!(SnowBinInfo::new_with_v_hash(8, u8::MAX).unwrap_err().error_type(), SnowBinErrorTypes::DataSizeNotAllowed);

        Ok(())
    }
}
package dnscrypt

import (
	"bytes"
	"encoding/binary"
	"math/rand"
	"time"

	"github.com/ameshkov/dnscrypt/v2/xsecretbox"
	"golang.org/x/crypto/nacl/secretbox"
)

// EncryptedQuery - a structure for encrypting and decrypting client queries
//
// <dnscrypt-query> ::= <client-magic> <client-pk> <client-nonce> <encrypted-query>
// <encrypted-query> ::= AE(<shared-key> <client-nonce> <client-nonce-pad>, <client-query> <client-query-pad>)
type EncryptedQuery struct {
	// EsVersion - encryption to use
	EsVersion CryptoConstruction

	// ClientMagic - an 8 byte identifier for the resolver certificate
	// chosen by the client.
	ClientMagic [clientMagicSize]byte

	// ClientPk - the client's public key
	ClientPk [keySize]byte

	// With a 24 bytes nonce, a question sent by a DNSCrypt client must be
	// encrypted using the shared secret, and a nonce constructed as follows:
	// 12 bytes chosen by the client followed by 12 NUL (0) bytes.
	//
	// The client's half of the nonce can include a timestamp in addition to a
	// counter or to random bytes, so that when a response is received, the
	// client can use this timestamp to immediately discard responses to
	// queries that have been sent too long ago, or dated in the future.
	Nonce [nonceSize]byte
}

// Encrypt - encrypts the specified DNS query, returns encrypted data ready to be sent.
//
// Note that this method will generate a random nonce automatically.
//
// The following fields must be set before calling this method:
// * EsVersion -- to encrypt the query
// * ClientMagic -- to send it with the query
// * ClientPk -- to send it with the query
func (q *EncryptedQuery) Encrypt(packet []byte, sharedKey [sharedKeySize]byte) ([]byte, error) {
	var query []byte

	// Step 1: generate nonce
	binary.BigEndian.PutUint64(q.Nonce[:8], uint64(time.Now().UnixNano()))
	rand.Read(q.Nonce[8:12])

	// Unencrypted part of the query:
	// <client-magic> <client-pk> <client-nonce>
	query = append(query, q.ClientMagic[:]...)
	query = append(query, q.ClientPk[:]...)
	query = append(query, q.Nonce[:nonceSize/2]...)

	// <client-query> <client-query-pad>
	padded := pad(packet)

	// <encrypted-query>
	nonce := q.Nonce
	if q.EsVersion == XChacha20Poly1305 {
		query = xsecretbox.Seal(query, nonce[:], padded, sharedKey[:])
	} else if q.EsVersion == XSalsa20Poly1305 {
		var xsalsaNonce [nonceSize]byte
		copy(xsalsaNonce[:], nonce[:])
		query = secretbox.Seal(query, padded, &xsalsaNonce, &sharedKey)
	} else {
		return nil, ErrEsVersion
	}

	if len(query) > maxQueryLen {
		return nil, ErrQueryTooLarge
	}

	return query, nil
}

// Decrypt - decrypts the client query, returns decrypted DNS packet.
//
// Please note, that before calling this method the following fields must be set:
// * ClientMagic -- to verify the query
// * EsVersion -- to decrypt
func (q *EncryptedQuery) Decrypt(query []byte, serverSecretKey [keySize]byte) ([]byte, error) {
	headerLength := clientMagicSize + keySize + nonceSize/2
	if len(query) < headerLength+xsecretbox.TagSize+minDNSPacketSize {
		return nil, ErrInvalidQuery
	}

	// read and verify <client-magic>
	clientMagic := [clientMagicSize]byte{}
	copy(clientMagic[:], query[:clientMagicSize])
	if !bytes.Equal(clientMagic[:], q.ClientMagic[:]) {
		return nil, ErrInvalidClientMagic
	}

	// read <client-pk>
	idx := clientMagicSize
	copy(q.ClientPk[:keySize], query[idx:idx+keySize])

	// generate server shared key
	sharedKey, err := computeSharedKey(q.EsVersion, &serverSecretKey, &q.ClientPk)
	if err != nil {
		return nil, err
	}

	// read <client-nonce>
	idx = idx + keySize
	copy(q.Nonce[:nonceSize/2], query[idx:idx+nonceSize/2])

	// read and decrypt <encrypted-query>
	idx = idx + nonceSize/2
	encryptedQuery := query[idx:]
	var packet []byte
	if q.EsVersion == XChacha20Poly1305 {
		packet, err = xsecretbox.Open(nil, q.Nonce[:], encryptedQuery, sharedKey[:])
		if err != nil {
			return nil, ErrInvalidQuery
		}
	} else if q.EsVersion == XSalsa20Poly1305 {
		var xsalsaServerNonce [24]byte
		copy(xsalsaServerNonce[:], q.Nonce[:])
		var ok bool
		packet, ok = secretbox.Open(nil, encryptedQuery, &xsalsaServerNonce, &sharedKey)
		if !ok {
			return nil, ErrInvalidQuery
		}
	} else {
		return nil, ErrEsVersion
	}

	packet, err = unpad(packet)
	if err != nil {
		return nil, ErrInvalidPadding
	}

	return packet, nil
}
package com.java24hours;

import java.util.Scanner;

class ArrayInputSum {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int[] number = new int[5];
        int sum = 0;
        System.out.print("Enter your numbers: ");
        // Read five integers, accumulating the running total.
        for (int i = 0; i < number.length; i++) {
            number[i] = input.nextInt();
            sum += number[i];
        }
        System.out.println("Sum is: " + sum);
    }
}
Predictors of indocyanine green visualization during fluorescence imaging for segmental plane formation in thoracoscopic anatomical segmentectomy. BACKGROUND To determine factors predicting indocyanine green (ICG) visualization during fluorescence imaging for segmental plane formation in thoracoscopic anatomical segmentectomy. METHODS Intraoperatively, an intravenous ICG fluorescence imaging system was used during thoracoscopic anatomical segmentectomy to obtain fluorescence images of the lung surface during segmental plane formation after the administration of ICG at 5 mg/body weight. The difference in the regularized scale used to calculate the ICG excitation peaks between the segments planned for resection and those planned to remain was defined as intensity (I). Candidate predictor variables were the ratio of forced expiratory volume in 1 s to forced vital capacity (%FEV1.0), smoking index (SI), body mass index (BMI), and low attenuation area (LAA) on computed tomography (CT). RESULTS The formation of the segmental plane was successfully accomplished in 98.6% of segments and/or subsegments. SI and LAA significantly affected I levels. The area under the receiver operating characteristic curve for %FEV1.0, SI, and LAA was 0.56, 0.70, and 0.74, respectively. SI >800 and LAA >1.0% were strong predictors of unfavorable ICG visibility (P=0.04 and 0.01, respectively). CONCLUSIONS Fluorescence imaging with ICG was a safe and effective method for segmental plane formation during thoracoscopic anatomical segmentectomy. Despite the high success rate, unfavorable visibility may occur in patients who are heavy smokers or those with a LAA (>1.0%) on CT.
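As context for the AUC values quoted above, the area under the ROC curve for any single predictor can be computed with a standard routine; a minimal sketch with purely hypothetical numbers (not the study's data):

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical example: smoking index per patient and a binary flag for
# unfavorable ICG visibility (1 = poor visualization). Values are illustrative.
smoking_index = np.array([0, 200, 850, 400, 1200, 950, 900, 100])
poor_visibility = np.array([0, 0, 1, 0, 1, 0, 1, 0])

# AUC near 1 means the predictor separates the outcomes well; 0.5 is chance level.
print(roc_auc_score(poor_visibility, smoking_index))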
Los Angeles Mayor Eric Garcetti has nominated Seleta Reynolds to be the new general manager for the city's Transportation Department (LADOT). From preliminary research on Reynolds's background, this looks like great news. Reynolds currently works for the San Francisco Municipal Transportation Agency (SFMTA), where her focus has been livable streets, with an emphasis on building more equitable streets. Reynolds' Twitter feed @seletajewel celebrates great bike and walk facilities. Reynolds is featured in Streetsblog San Francisco articles explaining Bay Area Bike Share, pushing Caltrans on standards for protected bicycle lanes, and arguing for better motorist education for bicyclist safety. Updated: Read our follow-up post, including a brief interview with the nominee, here. Mayor Garcetti's full press release follows after the jump.

MAYOR GARCETTI NOMINATES SELETA REYNOLDS AS GENERAL MANAGER OF THE DEPARTMENT OF TRANSPORTATION

LOS ANGELES — Mayor Eric Garcetti has nominated Seleta Reynolds of the San Francisco Municipal Transportation Agency as the next General Manager of the Los Angeles Department of Transportation (LADOT). "Los Angeles is changing the way it looks at transportation," said Mayor Garcetti. "Seleta is the ideal field marshal in our war against traffic who will bring to bear all the tools at our disposal, from better road design to transit to technology to bicycle and pedestrian improvements. She is also a big believer in our Great Streets Initiative and has committed to applying her passion and expertise to revitalizing key community corridors across our city to improve neighborhood gathering places and generate economic activity." "Los Angeles is a world-class city that deserves excellent transportation choices," said Reynolds. "I'm excited to partner with the agencies and policymakers who deliver great projects on the streets." Seleta Reynolds has over 16 years of experience planning, funding, and implementing transportation projects throughout the United States. She presently works for the San Francisco Municipal Transportation Agency, where she leads three teams in the Livable Streets sub-division responsible for innovation, policy, and coordination for complete streets projects citywide. Her teams' current projects include the launch of a pilot bikesharing system and construction of safety projects to help the city meet Vision Zero, a goal to reach zero traffic deaths by 2024. "Seleta is the right person at the right time. L.A. is poised to expand transportation choices, improve mobility and design safer, more vibrant streets, and Seleta brings the innovative vision and strategies needed to lead LADOT at this critical moment," said Janette Sadik-Khan, principal at Bloomberg Associates and former NYC transportation commissioner. Sadik-Khan helped support the search for a general manager, advising and assisting Mayor Garcetti and L.A. officials throughout the extensive selection process. "L.A.'s streets are its most valuable resource, and Mayor Garcetti's selection is a key step toward making them great streets for walking, biking, living, and business." Reynolds' nomination is subject to confirmation by the City Council. "Seleta Reynolds is the perfect choice to transform LADOT and get Los Angeles moving again," said Councilmember Mike Bonin, chair of the Transportation Committee and a member of the interview and selection panel. "Seleta has deep experience creating change, building projects, and removing the roadblocks to mobility.
I'm eager to work with her as she applies her skills and abilities to the implementation of the Mayor's progressive transportation agenda." "With the selection of Seleta Reynolds, the city is bringing on proven expertise that will allow LADOT to explore both innovation in design and equity in implementation," said Transportation Commissioner Tafarai Bayne. "Multi-modal transportation options are fundamental to improving the quality of life for all Angelenos, where residents in South L.A. can walk, take a bus or train, or bike to work, the grocery store, or school as easily as they could drive. I look forward to working closely with LADOT and our new general manager as we continue to make Los Angeles one of the greatest cities in the world." Seleta currently serves on the Transportation Research Board Pedestrian and Bicycle Committees and the WalkScore Advisory Board. She is a past president of the Association of Pedestrian and Bicycle Professionals. Prior to joining the SFMTA, Seleta managed the San Francisco and Seattle offices of Fehr & Peers and worked for the City of Oakland Public Works Agency. She graduated from Brown University. "Seleta understands how cities work and she possesses an ability to get people both inside and outside of transportation to look at their streets differently," said Ed Reiskin, Executive Director of SFMTA and President of the National Association of City Transportation Officials. "San Francisco is losing a tremendously talented innovator but LA is gaining a skilled manager and planner who knows where the city needs to go in the 21st century." "Transportation is a regional issue, and Metro looks forward to working with Seleta Reynolds and the LADOT to find solutions to keep people moving throughout the L.A. region," said Metro CEO Arthur T. Leahy. The Department of Transportation leads the planning, design, construction, and operations for transportation systems in the city of Los Angeles and partners with sister agencies to improve multi-modal service and infrastructure in the city and the region. The Department currently has an annual budget of approximately $131 million and an authorized staff of 1,278 full-time employees and 272 part-time employees. The Department is also responsible for extensive federal funding in transportation-related grants and special funds. For more information, visit ladot.lacity.org
Fire detection using infrared images for UAV-based forest fire surveillance Unmanned aerial vehicle (UAV)-based computer vision systems are an increasingly promising and widely employed option for forest fire surveillance and detection. In this paper, an image processing method for UAV applications is presented for the automatic detection of forest fires in infrared (IR) images. The presented algorithm makes use of brightness and motion cues, along with image processing techniques based on histogram-based segmentation and an optical flow approach, for fire pixel detection. First, histogram-based segmentation is used to extract hot objects as fire candidate regions. Then, the optical flow method is adopted to calculate motion vectors of the candidate regions. The motion vectors are further analyzed to distinguish fires from other fire analogues. Finally, by performing morphological operations and a blob counting method, a fire can be tracked in each IR image. Experimental results verified that the designed method can effectively extract and track fire pixels in IR video sequences.
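A minimal sketch of the two-stage pipeline described above, under the assumption of 8-bit grayscale IR frames; the fixed threshold and flow parameters are illustrative stand-ins for the paper's histogram analysis and motion-vector rules:

import cv2
import numpy as np

def detect_fire_candidates(prev_ir, curr_ir, hot_thresh=200):
    # Stage 1: intensity-based segmentation of hot objects.
    # A fixed threshold stands in for the paper's histogram-based segmentation.
    _, mask = cv2.threshold(curr_ir, hot_thresh, 255, cv2.THRESH_BINARY)

    # Stage 2: dense optical flow between consecutive IR frames.
    flow = cv2.calcOpticalFlowFarneback(prev_ir, curr_ir, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Fire candidates are hot AND moving; a simple magnitude test stands in
    # for the paper's fuller motion-vector analysis of fire analogues.
    moving_hot = (mask > 0) & (magnitude > 1.0)

    # Morphological cleanup, then connected components as "blobs" to track.
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(moving_hot.astype(np.uint8) * 255,
                               cv2.MORPH_OPEN, kernel)
    n, labels = cv2.connectedComponents(cleaned)
    return n - 1, labels  # number of candidate fire blobs and their label map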
import assert from "assert";
import synthetics from "Synthetics";

export const handler = async () => {
  assert(process.env.MONITORING_URL);
  assert(process.env.TIMEOUT);
  const { MONITORING_URL, TIMEOUT } = process.env;

  // Load the monitored page in the Synthetics-managed browser and capture a screenshot.
  const page = await synthetics.getPage();
  await page.goto(MONITORING_URL, { timeout: parseInt(TIMEOUT) });
  await page.screenshot({ path: "/tmp/screenshot.png" });
  return;
};
City leaders have cited many reasons for deciding to opt out. Most of them say it's a bad message for youth or something related to school safety. Some have opted out because of concern for regulations and how things will eventually be handled, with the option to opt back in after things are sorted out. Recreational marijuana became legal in Michigan on Dec. 6, but sales have not yet started and aren't likely to start until late next year at the earliest. Municipalities that do decide to opt out are encouraged to notify LARA, but are not required to do so. Cities like Troy, Pontiac and Birmingham have opted out but are not on the list. Since the last update, more than 40 communities have been added to the list.
A low cost fluorescence lifetime measurement system based on SPAD detectors and FPGA processing This work presents a low cost fluorescence lifetime measurement system, aimed at carrying out fast diagnostic tests through label detection in a portable system, so that it can be used in a medical consultation within a short time span. The system uses Time Correlated Single Photon Counting (TCSPC), measuring the arrival time of individual photons and building a histogram of those times, showing the fluorescence decay of the label, which is characteristic of each fluorescent substance. The system is implemented using a Xilinx FPGA, which controls the experiment and includes a Time to Digital Converter (TDC) to perform measurements with a resolution in the order of tens of picoseconds. Also included are a laser diode and the driving electronics to generate short pulses, as well as an HV-CMOS Single Photon Avalanche Diode (SPAD) as a high-gain sensor. The system is entirely configurable, so it can easily be adapted to the target label molecule and measurement needs. The histogram is constructed within the FPGA and can then be read out as convenient. Various performance parameters are reported, as well as experimental measurements of a quantum dot fluorescence decay as a proof of concept.
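At its core, the TCSPC measurement described above amounts to histogramming photon arrival delays and extracting the exponential decay constant; a minimal Python sketch with simulated data (the lifetime, bin width, and fitting method are illustrative assumptions, not the paper's):

import numpy as np

rng = np.random.default_rng(0)

# Simulate photon arrival delays (ns) from a fluorophore with a 4 ns lifetime.
lifetime_ns = 4.0
delays = rng.exponential(lifetime_ns, size=100_000)

# TCSPC: histogram the arrival times; each bin is one TDC time slot.
bin_width_ns = 0.05  # ~50 ps time slots
bins = np.arange(0, 25, bin_width_ns)
counts, edges = np.histogram(delays, bins=bins)

# Estimate the lifetime from the log-linear slope of the decay curve.
centers = edges[:-1] + bin_width_ns / 2
valid = counts > 0
slope, _ = np.polyfit(centers[valid], np.log(counts[valid]), 1)
print(f"estimated lifetime: {-1 / slope:.2f} ns")  # ~4 ns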
MEXICO CITY, Oct 10 (Reuters) - Mexico's central bank is expected to hike interest rates before the end of the year, Refinitiv Eikon data showed on Wednesday, amid concerns about persistently high inflation. Yields on Mexican interest rate swaps pointed to a just over 50 percent chance of a 25 basis point hike at the central bank's next meeting on Nov. 15 and forecast a near certain likelihood of a hike by its Dec. 20 decision. Mexico's central bank last week made a divided decision to hold its benchmark rate at 7.75 percent, with one member calling for a hike. The bank warned it may need to hike again. Data this week showed Mexico's annual inflation rate rose in September to 5.02 percent, well above the bank's 3 percent target. Rafael Camarena, an analyst at Santander in Mexico City, said another hike would be considered even more likely if the incoming government raises spending plans more than expected or the country sees significant wage hikes. "I think there could be a hike," said Camarena. "From now until November, we have some inflation reports, but I think the more pressured decision could be in December, when we have the budget and potential salary increases," he said. Leftist President-elect Andres Manuel Lopez Obrador takes office on Dec. 1 and will detail spending plans for next year. He has promised to increase social spending on programs for the young and elderly without increasing debt, but some are skeptical austerity and anti-corruption measures will be enough to pay for his campaign promises.
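For context, probabilities of this kind are conventionally backed out of swap-implied rates; a minimal sketch of the standard calculation, with illustrative numbers rather than the article's underlying data:

# Market-implied probability of a 25 bp hike, inferred from the rate implied
# by swaps for the meeting date. All numbers here are illustrative.
current_rate = 7.75      # % current policy rate
implied_rate = 7.88      # % hypothetical swap-implied rate after the Nov. 15 meeting
hike_size = 0.25         # % assumed hike increment (25 basis points)

prob_hike = (implied_rate - current_rate) / hike_size
print(f"implied probability of a 25 bp hike: {prob_hike:.0%}")  # ~52%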
Prior Abdominal Surgery Is Associated With an Increased Risk of Postoperative Complications After Anterior Lumbar Interbody Fusion Study Design. Retrospective medical record review. Objective. The purpose of this study was to determine whether a history of abdominal/pelvic surgery confers an increased risk of retroperitoneal anterior approach-related complications when undergoing anterior lumbar interbody fusion. Summary of Background Data. As anterior lumbar interbody fusion gains popularity, the anterior retroperitoneal approach has become increasingly used. Methods. The records of 263 patients who underwent an infraumbilical retroperitoneal approach to the anterior aspect of the lower lumbar spine for a degenerative spine condition between 2007 and 2011 were retrospectively reviewed. Patient demographics, risk factors, preoperative diagnosis, surgical history, level of the anterior fusion, and perioperative complications were collected. The anterior retroperitoneal approach to the spine was carried out by a single general surgeon. Results. Ninety-seven patients (37%) developed at least 1 complication. Forty-nine percent of patients with a history of abdominal surgery developed a postoperative complication compared with 28% of patients without such history (RR = 1.747, P ≤ 0.001). After controlling for other factors such as age, sex, body mass index, diagnostic groups, and preoperative comorbidities (hypertension, diabetes, and smoking status), these differences remained statistically significant. When each type of complication was considered separately, there was a statistically significant difference in the incidence of general complications (RR = 2.384, P = 0.007), instrumentation-related complications (RR = 2.954, P = 0.010), and complications related to the anterior approach (RR = 1.797, P = 0.021). Conclusion. Anterior lumbar interbody fusion via a midline incision and a retroperitoneal approach was associated with a 37% overall rate of complication. Patients with a history of abdominal or pelvic surgery are at a higher risk of developing general, instrumentation-related, and anterior approach-related complications. Level of Evidence: 4
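As a quick check, the headline relative risk follows directly from the two complication rates reported above:

# Relative risk of postoperative complications, from the rates reported in the abstract.
rate_prior_surgery = 0.49    # complication rate with a history of abdominal surgery
rate_no_prior = 0.28         # complication rate without such history (rounded values)

rr = rate_prior_surgery / rate_no_prior
print(f"RR = {rr:.2f}")  # ~1.75, matching the reported RR = 1.747 before rounding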
package com.youlai.admin;

import cn.hutool.core.lang.Assert;
import com.youlai.admin.service.ISysMenuService;
import com.youlai.admin.service.ISysResourceService;
import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.List;

@RunWith(SpringRunner.class)
@SpringBootTest
@Slf4j
public class AdminApplicationTests {

    @Autowired
    private ISysMenuService iSysMenuService;

    @Autowired
    private ISysResourceService iSysResourceService;

    @Test
    public void testListForRouter() {
        List list = iSysMenuService.listForRouter();
        Assert.isTrue(list.size() > 0);
    }

    @Test
    public void testListForResourceRoles() {
        List list = iSysResourceService.listForResourceRoles();
        log.info(list.toString());
        Assert.isTrue(list.size() > 0);
    }
}
Q: Profile setting options very confusing on Teams

The profile settings and display names for Teams are pretty confusing. When I first signed up for Teams, it used my private information (my full name) only within Teams. I rechecked my public Stack Overflow profile and all was good; that one still had a display name. But then I thought I would check the settings for my "team" to see what user name it is using. I could not find the field, but I noticed the first tab, Profile, has a drop down allowing me to select which profile I want to view: This was nice because I could flip between my SO Profile and my Teams profile. What I then tried to do was edit my profile. But when I click this tab, the drop down does not show up. So I am thinking the user interface (the user experience) would be a bit more clear were it to have this drop down in this tab as well. Then allow me to change stuff from here: I don't know what I am editing on this page, as the top line is orange, telling me I am on Stack Overflow. I guess what I am asking is that the display name should be available in both Stack Overflow and Teams - it's OK if you want to use the full name, but it should be a separate field and may default to your full name.

A: I agree that the profile is confusing and needs a lot of work. Here's some context about how we got here and what we're going to do about it. Over the years, as Stack Overflow's offering has grown, the profile and the settings to control your profile data have been adapted to accommodate this. These changes have been incremental and made on an ad hoc basis. However, we've never had the opportunity to step back and consider what the Stack Overflow profile should be, given everything we've learned about our users and how they use it. The net result is a bloated product with a large amount of design and technical debt. Specifically, we have 3 profiles (public profile, Developer Story and now Team profiles) for 3 audiences that share a large amount of data. This makes it very hard to know who can see your profile data and how you can control it. We didn't add a drop down to the settings tab because the majority of settings don't make sense to be specific to a Team. For example, logins, job preferences and site preferences do not apply. However, there is some profile data, such as job title or profile image, that would be nice to have specific per Team. We chose not to allow customisation per Team to prevent further duplication of profile data that would be hard for users to manage - as we've seen happen on Developer Story. For Teams, we needed the ability for users to be identifiable within a Team to their colleagues. Knowing who wrote an answer and their role in your organisation is important for you to be able to trust that answer. Given display names are not identifiable, we decided to reuse the real name field that is used on Developer Story and shared with employers (if you choose both of these to happen). Your real name takes the place of the display name wherever it is viewed within a Team. Only you and other members of the Team can see your Team profile. You can update this at the bottom of the Edit Profile page under Edit Profile & Settings: Over the next couple of months, now that Teams has launched and we're learning a lot about how users use the product, we're going to start planning profile improvements.
These will likely include:

- Making it easier for users to know who can see their profile data and how they can control this
- Removing duplication of data and consolidating the number of profiles
- Allowing customisation of how you appear to your colleagues within a Team

If you have any feedback about possible changes, please do post it here!
# Copyright 2020 <NAME> (David). All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import datetime
import itertools

import flask_testing
from absl.testing import parameterized

from core import models

ERROR_400 = {"success": False, "error": 400, "message": "bad request"}
ERROR_404 = {"success": False, "error": 404, "message": "resource not found"}
ERROR_422 = {"success": False, "error": 422, "message": "unprocessable"}
ERROR_500 = {"success": False, "error": 500, "message": "server error"}


class RestfulRouteTestBase(flask_testing.TestCase):
  def setUp(self):
    # Seed the test database with one actor and one movie.
    models.db.create_all()
    test_actor = models.Actor(id=1, name="<NAME>", age=30, gender="F")
    test_movie = models.Movie(id=1,
                              title="Test Movie",
                              release_date=datetime.date(1970, 1, 1),
                              actors=[test_actor])
    models.db.session.add(test_actor)
    models.db.session.add(test_movie)
    models.db.session.commit()
    models.db.session.close()

  def tearDown(self):
    models.db.session.remove()
    models.db.drop_all()

  def compare_json(self, res, code, expected):
    self.assertEqual(res.status_code, code)
    self.assertTrue(res.is_json)
    self.assertDictEqual(res.json, expected)


def generate(**kwargs):
  def decorator(test_method):
    keys = sorted(kwargs)
    # Build the cross-product of all keyword value lists, one dict per combination.
    combinations = [dict(zip(keys, p))
                    for p in itertools.product(*[kwargs[k] for k in keys])]
    return parameterized.parameters(*combinations)(test_method)
  return decorator
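A hypothetical usage sketch of the generate helper above: it expands the keyword lists into the cross-product of parameter dictionaries consumed by absl's parameterized runner. The test class, route names, and expected codes below are invented for illustration (and the create_app definition required by flask_testing is omitted):

# Hypothetical test case using the helpers above; endpoint names are invented.
class ExampleRouteTest(RestfulRouteTestBase, parameterized.TestCase):

  @generate(route=["/actors", "/movies"], method=["get"])
  def test_missing_resource_returns_404(self, route, method):
    # Each (route, method) combination becomes its own test case.
    res = getattr(self.client, method)(route + "/999")
    self.compare_json(res, 404, ERROR_404)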
Managing Threats in the Global Era: The Impact and Response to SARS

Wei-Jiat Tan ■ Peter Enderwick

Abstract

In early 2003, the SARS virus brought disruption of public and business activities in many areas of the world, particularly Asia. As a result of its impact, SARS quickly established itself as a new kind of global uncertainty and posed challenges for traditional methods of risk management. This article examines the impact that SARS has had through means of a case study and builds on this to provide recommendations for how uncertainty may be managed in an increasingly globalized world. Reconsideration of strategic and risk-management approaches has become necessary. Supply-chain management and corporate strategy require a fundamental rethink to balance the pursuit of efficiency with increased responsiveness and flexibility. Unpredictability and turbulence in the international business environment suggest that traditional planning approaches that assume linear growth may give way to more scenario-based planning. This will encourage firms to contemplate a variety of possible futures and better prepare them for unanticipated events. Similarly, contingent-based continuity plans help businesses continue running even during a crisis. © 2006 Wiley Periodicals, Inc.

SARS first emerged in China's Guangdong province in November 2002. Largely due to the failure of the Chinese authorities to recognize the seriousness of the problem or provide international notification, SARS quickly spread throughout China and, in February 2003, reached Hong Kong, which was to provide the global accelerant from which SARS quickly spread, particularly to neighboring Asian countries, including Vietnam, Singapore, and Taiwan. The high incidence of travel between Toronto and Asia also saw the outbreak of SARS in Canada. Over the course of the outbreak, SARS infected more than 8,000 people and left more than 900 dead in 32 countries, with 349 of those fatalities recorded in Mainland China. While infectious epidemics are by no means a new phenomenon, there is little doubt that SARS had a greater impact on the international business environment than its predecessors. This is largely due to the fact that countries and economies are now more interconnected than before, allowing for easy transmission of a virus like SARS.
While literature does exist on the management of risk, SARS is indicative of a new kind of uncertainty, the impact and management of which must be analyzed in the context of a world that has become increasingly globalized. This article examines the impact of SARS on the international business environment and considers how managers can incorporate events such as SARS into an ongoing risk-management framework. The discussion comprises four substantive sections. The first section provides a contextual background of the international business environment at the time of the SARS outbreak. The second section provides a case-study discussion of the impact of SARS on international business operations. Drawing on this case study, the third section examines some strategic implications for firms seeking to cope with the new types of uncertainty such as that created by SARS. Concluding thoughts are provided in the final section.

CONTEXTUAL BACKGROUND

Globalization

There is little doubt that businesses and firms operate in an increasingly globalized and integrated environment. Globalization manifests itself as an increase in cross-border movements of goods and services, capital and technology flows, tourist and business travel, and the migration of people (Craig & Douglas, 1999). This integration has been made possible by declining trade and investment barriers, the growth of free trade agreements, and regional integration. Another driver of globalization has been technological advancements in communications and transportation. The use of satellite links, company intranets, and the Internet has improved communication networks and linkages across borders, thereby lowering the costs of coordinating and controlling a global organization. Modern communication systems have also enabled the rapid dissemination of information, leading to some convergence of consumer tastes and preferences. In addition, developments in transportation have allowed the rapid supply of people, goods, and services from distant locations. Globalization has provided significant opportunities for firms to reconfigure their supply chains and globalize their production processes, thereby reaping economies of scale and taking advantage of national differences in the cost and quality of factors of production. However, globalization also presents some very real challenges. The interconnectedness that is characteristic of globalization also means that local conditions are no longer the result of purely domestic influence. Indeed, crises in one country now have the ability to affect other countries around the world. This was evident from the 1997 Asian financial crisis, September 11, and SARS, where such crises had a "ripple effect" so that their direct and residual impacts were felt far from their epicenters.

Flexibility and Responsiveness

The international business environment has never been predictable or certain. However, the scale of investments in today's globalized world, coupled with rapid technological change, shortening product life cycles, and the increasing aggressiveness of competitors, has increased the uncertainty and complexity of operating in such an environment. Indeed, it has been stated that:

Globalisation and technology are sweeping away the market and industry structures that have historically defined the nature of competition. The variables that can profoundly influence success and failure are too numerous to count.
That makes it impossible to predict, with any confidence, which markets a company will be serving or how its industry will be structured, even a few years hence. (Bryan, 2002, p. 18)

Accordingly, unlike past decades that exhibited long, stable periods in which firms could achieve sustainable competitive advantage, competition is increasingly characterized by short periods of advantage, marked by frequent disruptions. In such hypercompetitive environments, risk is not so much predicted as it is responded to. Accordingly, strategies that focus solely on efficiency and pay close attention to cost structures must now be reassessed in light of the inflexibilities they entail in a changing and uncertain environment. Further, exploitation of core competencies that was once seen as a precondition to success is now viewed as presenting the risk of core rigidities. Accordingly, a higher premium is now being placed on considerations such as flexibility, responsiveness, and adaptiveness. One area in which this need for flexibility has been recently espoused is that of supply-chain management. Until recently, companies focused on developing tightly controlled supply chains, with the emphasis on efficiencies in operations and distribution. While tightly controlled supply chains work well in stable environments with minimal disruptions, we are now experiencing an environment of increasing unpredictability, where disruptions are more common. Accordingly, the ability to respond to the resulting fluctuations in demand is paramount, and considerations such as flexibility and responsiveness are now considered as important as efficiency.

The Management of Risk

Globalization has also seen the emergence of a new type of risk, with a nature quite different to what was traditionally regarded as risk in international business. In the 1970s, 1980s, and even 1990s, risk was generally equated to financial, exchange rate, and inflationary risks and, in particular, "political risk," which was reflective of host-government hostility toward foreign investment during much of this time. Political risk was country-specific and could be summed up as the likelihood that a multinational enterprise's foreign operations could be constrained by host-government policies through measures such as forced divestment, unwelcome regulation, and interference with operations. Accordingly, risk management was also country-specific and involved assessing the riskiness of a particular country through a variety of predictive approaches. Where a country was deemed too risky, the firm would avoid investing or withdraw its current investment. Other risk-management devices also involved responding to risk that largely emanated from host governments. Indeed, defensive political risk-management strategies involved locating crucial aspects of the company's operations beyond the reach of the host, while integrative strategies aimed to make the firm an integral part of the host society, thereby minimizing the risk of government intervention. However, as the world economy has become increasingly global, political risk, while still present, is arguably not as pressing as before. This is largely because of a change in attitude toward trade and investment, with most countries now encouraging foreign direct investment (FDI).
Indeed, between 1991 and 2003, more than 165 countries made 1,885 changes in legislation governing FDI, with 95 percent of these changes involving the liberalization of FDI regulations. This has also been supported by a dramatic increase in the number of bilateral investment treaties, as well as regional and global free-trade agreements (United Nations Conference on Trade and Development, 2003). At the same time, we have witnessed the emergence of a new type of environmental business threat that has manifested itself in incidents such as global terrorism, SARS, financial crises, and computer viruses, all of which have the ability to disrupt a firm's operations. Enderwick describes such threats as being sudden, unexpected, and unpredictable, with the ability to spread quickly through global processes and forces, thus having a widespread impact but with a disproportionate impact on regions, sectors, and industries. Clearly, risk is no longer country-specific, nor is it limited to threats from host-government actions. Instead, it is global and systemic, and capable of being perpetrated by individuals or small groups. Further, such threats do not simply affect a firm's operating conditions, but also its overall viability, as they can cause severe disruptions, threatening the very survival of the firm. Accordingly, new strategies for managing this type of threat are required, and they cannot be avoided by simply deciding not to invest in a particular country, or by using strategies centered on host governments. However, while in the past risk was largely seen as negative, it should be noted that these environmental uncertainties provide both challenges as well as opportunities for those businesses that have the ability to respond quickly and effectively. It is useful to clarify the exact nature of a disruption such as SARS in terms of risk and uncertainty. While these terms are often used interchangeably, they have distinct meanings. According to Knight's analysis, risk is considered as the variation in potential outcomes to which an associated probability can be assigned. In statistical terms, while the distribution of the variable is known, the particular value that will be realized is not. Uncertainty exists when there is no understanding of even the distribution of the variable. For decision making, uncertainty is a greater problem than risk. Because probabilities can be attached to risk, options to mitigate risks through insurance or hedging are possible. Because probability cannot be assigned to uncertainty, instruments to reduce uncertainty are not available. We suggest that SARS (and similar recent environmental disruptions such as global terrorism, computer viruses, and avian bird flu) are uncertainties, not risks. These types of disruptions share a number of characteristics. First, they can be considered as "jolts" that occur randomly. No one anticipated the emergence of SARS, or any similar virus. Because such events are not continuous or even regular, it is not possible to assign probabilities to them. Second, the nature of these jolts is such that they evolve, changing their forms, and do not simply recur. For example, viruses such as SARS and avian bird flu are capable of mutating and assuming different forms with differing impacts.
We suggest that SARS (and similar recent environmental disruptions such as global terrorism, computer viruses, and avian flu) are uncertainties, not risks. These types of disruptions share a number of characteristics. First, they can be considered "jolts" that occur randomly. No one anticipated the emergence of SARS, or any similar virus. Because such events are not continuous or even regular, it is not possible to assign probabilities to them. Second, these jolts evolve, changing their forms; they do not simply recur. For example, viruses such as SARS and avian flu are capable of mutating and assuming different forms with differing impacts. In the case of avian flu, there have been recent reports of the first full case of human-to-human transmission, and a recurring fear is that it could mutate into a human pandemic with devastating effects. Similarly, global terrorism assumes a variety of forms, including car bombs, suicide bombers, aircraft as weapons of destruction, and chemical attacks. This makes it difficult to use historical experience as a predictor of future occurrences and impacts. Third, the impact of these uncertainties tends to be concentrated, either by sector or by geographical location. As the next section makes clear, the primary effects of SARS were experienced in Asia and disproportionately affected the transport, tourism, and medical industries. The impacts of natural disasters such as extreme weather events, or of financial and political problems, appear to be more widely and randomly distributed. This is not to suggest that SARS did not become a global issue; however, its global spread was clearly traceable to well-established patterns of personal and business contact.

SARS: A CASE STUDY

To understand the impact that SARS had, it is useful to employ a "concentric band" framework, which sees a crisis like SARS as having a "ripple effect," as illustrated in Figure 1. The band closest to the center represents the primary or immediate impacts of SARS. Moving outward, the next band represents secondary impacts that are likely to develop over the short to medium term, followed by those impacts that result from the various responses to SARS. Finally, the outermost band represents the longer-term issues that are likely to arise out of the SARS crisis.

Primary Impacts

The sectors that were immediately and significantly affected by SARS were those involving regular human contact. Accordingly, Asian tourism and transport were hit especially hard. Flights to Asia were cancelled, with SARS hot zones like Singapore and Hong Kong suffering the most (Lemonick & Park, 2003). The hotel business in Asia also suffered, plummeting 25% between February and March (Lemonick & Park, 2003), with Hong Kong five-star hotels at occupancy rates of 10% and Singapore occupancies falling from a norm of 70% to 20-30%. As fewer tourists arrived and locals chose to stay home to avoid public places, stores and restaurants in Singapore and Hong Kong were almost empty at peak hours (Engardio, Shari, Weintraub, Arnst, & Roberts, 2003). SARS also had a significant impact on medical facilities and staff. Rapid increases in the number of cases quickly exposed inadequate surge capacities in hospitals and public health systems and a lack of protective gear, with the problem exacerbated by health workers falling victim to the disease (World Health Organization, 2003). In Beijing, a shortage of bed space in hospitals meant suspected SARS cases could not be hospitalized and quarantined quickly, contributing to the spread of the illness. To reduce this heavy burden on existing hospitals, governments invested substantially. Indeed, Hong Kong spent HK$400 million to create 1,280 hospital beds and a further HK$100 million to train medical staff (Fowler & Dolven, 2003).

Secondary Impacts

Food Industry. SARS also led to secondary impacts in the food industry.
Food prices in Asia plummeted as restaurants cut down on purchase orders, thereby affecting the region's farmers and fishing fleets (Carmichael, McGinn, & Theil, 2003). Supermarket sales in key markets such as Singapore, Taiwan, Japan, and China also fell due to a loss in consumer confidence, although increased food preparation at home (and, in some cases, panic buying) had a positive impact on supermarkets.

Manufacturing. There was widespread belief that a major disruption like SARS could paralyze just-in-time supply chains by holding up production and the flow of goods and services between countries due to port closures, travel restrictions, and forced closures of manufacturing plants if employees became infected. Despite such media hysteria, the impact on the manufacturing sector was not that pronounced. This was largely because Asian companies took preemptive steps as soon as the epidemic became known, increasing production in anticipation of problems and building up buffer inventory and safety stock. The result was very few plant shutdowns in the Far East.

Investment. Investment in Asia was also affected, as international firms postponed plans to begin or expand operations in Asia. Real estate sales fell drastically as buyers refused to travel to Hong Kong or China to look at building sites. Similarly, the cancellation of trade fairs affected manufacturers, particularly in China, who rely on such fairs to sell their goods. The capital markets did not emerge unscathed, and it is estimated that overall fundraising in Asia fell 10-20% in 2003 due to SARS (Hamlin, Smith, Meyer, Kirk, & Horn, 2003). Stock prices of companies with extensive operations in Asia also fell.

Unemployment. Given that the tourism and hospitality industries hit hardest by SARS are labor-intensive, there was also a corresponding rise in unemployment in SARS-affected countries, concentrated mainly in these industries. In the worst-hit countries (China, Hong Kong, Singapore, Taiwan, and Vietnam), the tourism industry faced the loss of 30% of travel and tourism employment. The global impact of SARS was also expected to bring a 15% loss in the tourism workforce in Indonesia and Oceania, and 5% in the rest of the world.

Economy and Growth. The impact of SARS on regional economies and projected economic growth was also substantial. Indeed, economists estimated that China and South Korea each suffered $2 billion in losses in tourism, retail sales, and productivity due to SARS, with Japan, Hong Kong, Taiwan, and Singapore estimated to lose approximately $1 billion each. Toronto, severely affected by SARS, was losing $30 million a day at the height of the crisis. The global cost of SARS is estimated to have reached $30 billion.

Positive Effects. While SARS negatively affected a number of industries, others were able to capitalize on the opportunities it provided. The outbreak saw a worldwide surge in demand for facemasks, given that SARS is largely transmitted by coughing and sneezing. With demand outstripping supply, large manufacturers like 3M were forced to switch to 24-hour production. Video conferencing was another industry to benefit, as Asian employers sent their workers home and cancelled overseas conferences, meetings, and visits. While no industrywide traffic figures are available, many video-conferencing services reported spikes in usage in Asia after the SARS epidemic began.
Indeed, InterCall, a Hong Kong teleconferencing company, doubled its business in March and April 2003, with a 200% increase in users signing up for the service in Hong Kong in April and a 30% rise in new customers worldwide.

Response-Generated Impacts

Individual Firms. In response to the SARS crisis, businesses undertook a number of measures to minimize its impacts. Business travel bans to SARS-affected areas were a common risk-management device, as were temporary quarantine measures for those who had recently traveled to such areas (e.g., working from home, or segregation from other employees). Less common was the repatriation of employees: according to one survey, fewer than 7% of firms had brought employees home from SARS-affected regions or placed them in another country. Many firms implemented business continuity plans, with some setting up operations at parallel sites or shifting operations altogether to other office complexes. IT played a major role in all continuity plans, with firms issuing notebooks capable of accessing the firm's intranet so employees could work from home, and employing technology such as video conferencing to ensure business continued as usual.

Government Spending. In response to SARS, governments in Asia also took action and increased spending on SARS-affected industries. In China, the Central Government launched the largest-ever tax relief package to help the aviation, tourism, and retail sectors recover from the SARS epidemic, estimated to cost several billion yuan. The Hong Kong government similarly offered a $1.5 billion relief package for local businesses and invested $80 million to revitalize the tourism industry, with part of this money to be spent on a worldwide campaign to reassure visitors.

Domestic Measures. The outbreak of SARS prompted governments to take decisive and often drastic action to curb its spread. The Singapore government authorized measures such as the closure of schools and universities, twice-daily temperature checks (at home and in the workplace), home quarantine for those exposed to SARS, and triage centers at the entrances of hospitals to identify and separate SARS patients. In Taiwan, remote video monitors were installed in quarantined households to guard against quarantine violations (Chinese Government Information Office, 2003). In China, far more draconian measures were taken, arguably to compensate for the government's previous lack of responsiveness and reluctance to report the seriousness of the crisis. Public entertainment spots in Beijing were closed down, as were public schools and universities (Kaufman & Chen, 2003).

Stricter Border Control. Governments also responded to the rapid spread of SARS by implementing stricter border controls and collecting detailed health and contact information. At one extreme, several countries, such as Taiwan, banned individuals who had traveled to SARS-affected regions (Kaufman & Chen, 2003). Other strict measures included requirements that travelers from SARS-affected areas wear facemasks for two weeks after arrival, and special quarantine powers.

Greater International Cooperation. In recognition of the fact that SARS is a global problem, governments have also been more willing to cooperate to prevent its further spread.
One such example was the "Special ASEAN-China Leaders Meeting on SARS" held on April 29, 2003, in Bangkok, where the ten Association of Southeast Asian Nations (ASEAN) leaders and the Chinese premier held crisis talks on how to fight the virus. This was consistent with the way the international community rallied together to understand and treat the SARS virus. Indeed, the global response was unprecedented, with 11 laboratories around the world that were previously strong competitors sharing information freely.

Longer-Term Issues

Importance of the State. As illustrated by the response-generated impacts, the role of the state in international business remains important, as the SARS crisis saw the state take on a major crisis-management role. Governments were responsible for mobilizing resources such as hospitals and other medical facilities, as well as coordinating public health care. Quarantine measures had to be introduced, monitored, and enforced, coupled with surveillance capacity to monitor and report quickly on disease outbreaks and their progress. Finally, governments were responsible for handling the economic slowdown caused by SARS and providing assistance to severely affected industry sectors through increased public spending. These tasks are not amenable to market forces and highlight the unlikelihood of globalization leading to the elimination of the nation-state.

Open and Transparent Government. Connected to the growing importance of the state is the need for open and transparent government. The reasons for this are twofold. First, transparency is paramount if a crisis is to be contained. China's initial understating of the number of confirmed cases, refusal to give daily reports, and blocking of WHO specialists from visiting Guangdong (the origin of SARS) allowed SARS to spread rapidly. It was only after China began reporting the true seriousness of the situation and allowed WHO officials to investigate that the SARS crisis slowly came under control. In contrast, Vietnam was able to contain the virus relatively quickly through prompt and open reporting, an early request for WHO assistance, and rapid case detection, isolation, infection control, and vigorous contact tracing. Accordingly, any attempt to conceal a crisis such as SARS for fear of the social and economic consequences can only be regarded as a short-term measure that ultimately risks the situation spiraling out of control. Second, it is in the interests of government to be open and transparent because, in today's turbulent and unpredictable environment, investors place a premium on governments that can be trusted. Indeed, China's behavior during the SARS crisis resulted in a loss of credibility in the international community and created fears among foreign investors about doing business in China.

Fear. Another long-term issue is how to handle the fear and panic that accompanied SARS, given that it was fear that spread faster and had a greater impact than the disease itself. Indeed, as far as infectious diseases go, SARS is relatively mild; it is harder to catch than the flu, with a fatality rate of only 6% to 8%. Despite this, SARS had a devastating effect on the tourism industry, as people became unwilling to fly to SARS-affected regions. Business was also affected, as foreigners cancelled conferences and meetings in Asian countries, such as South Korea, that had not reported a single SARS case.
This was largely because Asia is viewed as "one place," and therefore the crisis in one part of Asia was extrapolated to the whole.

Personal Behavior. SARS is also likely to have a longer-term impact on personal behavior and culture. In Singapore, massive public-education efforts to promote public-health practices have produced a noticeable change in personal habits. People wash their hands more, and public and restaurant toilets are much cleaner. Similarly, people are using serving spoons for shared dishes when eating, and people are more likely to see a doctor when they become ill rather than do nothing (Borsuk & Prystay, 2003). SARS could also change the way business is done in Asia. As mentioned earlier, many businesses turned to video conferencing at the height of the outbreak. Video conferencing proved an effective tool for maintaining communication during the crisis, without the associated travel and hotel costs or jet lag. However, tensions remain, in that Asian businesspeople place a high value on personal contact and prefer to meet clients and customers face-to-face.

The Reality of Globalization: The Need for Openness and Trust. The SARS crisis has illustrated the consequences of living in an interconnected world and has further clarified the nature of globalization. Technology, such as e-mail, instant telephone communication, and the Internet, has united people and enormously increased the number of contacts that people have. These contacts are eventually pursued through personal visits or through business meetings, conferences, and plant tours. Further, advancements in air travel mean any place in the world is accessible within 24 hours, and coupled with the movement of commerce, this has brought China and other developing nations out of relative isolation. The result has been a global network within which an infectious disease like SARS can spread, and while diseases in the past have taken weeks or months to spread, SARS was transmitted literally within days, setting a record for the speed of continent-to-continent transmission. Accordingly, while globalization has provided the world with many benefits, it also brings risks, and increased connectedness means that threats have a greater global impact. Countries must understand that they can no longer insulate themselves from threats such as SARS given the open borders of a globalized world, and there must be an increasing recognition that crises like SARS are not simply a regional problem but a global one.

STRATEGIC IMPLICATIONS

The case study illustrates that SARS represents a new kind of threat and has implications for the way uncertainty is managed in the future. Risk-management strategies that were largely country-focused are no longer adequate in themselves, given that this new type of threat is global and systemic. Despite the high levels of uncertainty associated with events such as SARS, such uncertainty should be incorporated into decision making. A lack of precise knowledge does not preclude decision makers from gathering further information or from forming judgments about the likelihood of events. As has been recognized, traditional strategic-management approaches encourage perceptions of uncertainty in a binary fashion (Courtney, Kirkland, & Viguerie, 1997).
The world is seen either as sufficiently certain that precise, usually single, predictions of the future can be made, or as so uncertain that such an approach is rendered totally ineffective. In the latter case, there may be a temptation to abandon analytical approaches and rely wholly on gut instinct. Courtney et al. argue that in many cases uncertainty can be significantly reduced through a careful search for additional information; in effect, much that is unknown can be made knowable. The uncertainty that remains after the most thorough analysis they term residual uncertainty. There are a number of approaches that offer insights into how to manage uncertainty. The simplest is to ignore it. This can be done by developing a "most likely prediction," often based on "expert input," or by assigning a margin of error to key variables. Each of these approaches yields a single unequivocal strategic option by either ignoring uncertainty or assigning it a probability. Neither is satisfactory: ignoring an uncertain environmental event is clearly dangerous, and assigning probabilities to unique events is invalid, since even subjective probability derived from expert analysis is untestable and arbitrary.

Miller highlights a useful distinction between financial risk-management and firm-strategy approaches to managing environmental uncertainties. Financial risk-management techniques such as insurance and futures contracts reduce the firm's exposure to specific risks without changing the underlying strategy. But, as noted earlier, such techniques apply only to risks, not uncertainties. In the case of an event such as SARS, strategic responses, which attempt to mitigate the firm's exposure to uncertainties, are likely to be more useful. Miller identifies five generic strategic responses to environmental uncertainties: avoidance, control, cooperation, imitation, and flexibility. Avoiding an event such as SARS through divestment, delayed entry, or a focus on low-uncertainty markets is difficult. The irregular occurrence and variable impact of such events are unlikely to justify divestment, and their unpredictable and evolving nature makes postponement or niching very difficult. Uncertainty-control strategies based on political lobbying, vertical integration, or enhanced market power are not an effective counter to SARS. Likewise, a cooperative strategy, which deals primarily with behavioral risk, is not likely to be effective, and neither is an imitative strategy, which addresses competitive rivalry. Of more value is the management of uncertainty through organizational flexibility. Flexibility focuses on the ability of the organization to respond and adapt to significant environmental changes. High levels of flexibility imply lower costs of organizational adaptation to uncertainty. In contrast to approaches that try to increase the predictability of uncertain events, flexibility strategies emphasize internal responsiveness, irrespective of the predictability of contingencies. A widely used strategy for increasing flexibility is diversification, whether of products, markets, or sources of supply. With regard to SARS, the key strategic responses are likely to occur in the areas of supply-chain management, diversification, scenario planning, and ensuring business continuity. We consider these in more detail.
Supply-Chain Management

The need for flexibility and responsiveness is nowhere more evident than in supply-chain management. While the manufacturing sector did not suffer severe disruptions, given the relatively quick manner in which SARS was contained, had the crisis persisted and impeded the flow of goods and services and/or caused plant shutdowns, major disruptions to manufacturing and distribution would have occurred. Indeed, potential disruptions quickly became apparent as firms contemplated the possible effects of travel bans: problems could arise if a factory needed repair help to continue manufacturing but engineers could not be sent due to travel bans (Wonacott, Chang, & Dolven, 2003). In combination, these issues highlight the need for flexible supply chains that can respond quickly to changes in demand and cope with major disruptions. To develop this responsiveness, firms can do a number of things. First, in handling a crisis like SARS, every moment of delay is critical, and the earlier the supply-chain network responds, the easier the crisis is to manage. Accordingly, to ensure prompt action, firms must ensure quicker access to, and action on, information, preferably at the source, that may provide timely warnings. This reiterates the importance of management basics such as environmental scanning and monitoring, and the need for these to be ongoing activities. However, such environmental scanning will no longer simply involve monitoring the local political environment, as often happened in the past; it will need to encompass the larger regional and global environment. Further, while host-country managers previously played a vital role in conveying information about the political environment back to higher management, what will become increasingly important is the ability to channel this information to the firm's affiliates in other parts of the world and to share any lessons learned from the crisis so that these affiliates may benefit from them. This further reinforces the value of establishing an integrated global network and facilitating intracompany learning.

The need to be responsive also has implications for choosing manufacturing locations. China's initially unresponsive and surreptitious approach to the SARS crisis illustrates that while cost of production and a low-cost labor force have been, and will remain, dominant considerations in the investment decision, stability, reliability, and predictability are likely to be given a higher premium. Given the unexpected and sudden nature of threats such as SARS, management is also likely to add to its investment criteria how well various parts of the world are equipped to deal with crises. Firms may also opt to switch from large production sites in a single location like China to multiple smaller facilities around the world, thereby creating a global network of manufacturing facilities. This allows increased flexibility: if disruptions to manufacturing or the supply chain occur in one country, the firm can vary plant loadings and increase production elsewhere (MacCormack, Newman, & Rosenfield, 1994). Such a manufacturing network will considerably increase the complexity of coordinating the global supply chain.
However, this may simply be a necessary trade-off for firms wishing to balance cost-efficiency and responsiveness in managing their global supply network. Alternatively, firms may find that establishing their own manufacturing operations is too risky an investment and may instead choose to outsource, thereby continuing a trend that has been under way for the last ten to fifteen years. SARS has highlighted the value of diversifying the supply base and sourcing from multiple locations, thereby reducing a firm's dependence on a single supply location. Indeed, outsourcing offers the flexibility to switch sourcing to another country if a crisis like SARS should disrupt supply-chain operations in a particular country. Accordingly, many global firms are considering back-up suppliers outside of Asia, with Latin America and Eastern Europe likely locations. The way in which outsourcing is conducted may also change, as events such as SARS have increased the reluctance and inability to travel. Rather than establishing their own network of suppliers, firms may increasingly turn to third-party logistics providers such as BChinaB, who have offices on U.S. soil but manage a sprawling network of 1,500 factories, tool and die shops, materials suppliers, and plastic molders in China. By using such a provider to manage their logistics operations abroad, firms can reduce the need to travel overseas to negotiate price quotations and samples, or to deal with Chinese manufacturers directly.

The move toward responsiveness may also necessitate less of a focus on cost-efficiency and a loosening of the tight control currently held over supply chains. After SARS, firms may have to reexamine their supply chains to identify potential problems and bottlenecks, and allow enough slack to accommodate the delays that can arise. Such readjustments may include keeping buffer inventory and safety stock to hedge against uncertainties, as sketched below. While such measures incur costs, the costs of disruptions to an unresponsive supply chain may prove more severe: extended lead times, lost service contracts, and higher emergency logistics costs.
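As a rough illustration of what "enough slack" might mean in practice, a common textbook formulation sizes safety stock from demand variability and lead time. The service levels, demand figures, and lead times below are hypothetical, not taken from this article; the point is simply that a disruption-prone environment argues for a higher service level and for pricing in longer lead times.

```python
from math import sqrt
from statistics import NormalDist

# Textbook safety-stock sizing under demand uncertainty
# (illustrative numbers only).
def safety_stock(service_level: float, daily_demand_std: float,
                 lead_time_days: float) -> float:
    z = NormalDist().inv_cdf(service_level)  # z-score for the target service level
    return z * daily_demand_std * sqrt(lead_time_days)

# Stable environment: 95% service level, 7-day lead time.
print(round(safety_stock(0.95, daily_demand_std=40, lead_time_days=7)))   # ~174 units

# Post-SARS posture: 99% service level, 21-day lead time (delays priced in).
print(round(safety_stock(0.99, daily_demand_std=40, lead_time_days=21)))  # ~426 units
```

The cost of the extra buffer (here, roughly 2.5 times the units held) is the premium paid for responsiveness, to be weighed against the disruption costs the article lists.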
Diversification

Another lesson from the SARS crisis is the risk of having too focused a corporate strategy, and the potential benefits of diversification. In the same way that financiers diversify their investment portfolios to decrease the variability of their rate of return, a portfolio approach to corporate strategy ensures that even if some of the firm's initiatives fail, the success of others achieves an overall favorable outcome for the firm. This is especially so where the impacts of events like SARS and terrorism are disproportionately borne by certain sectors or locations. Accordingly, corporate strategies may now require this "portfolio approach" so that a firm is not overly focused on one sector or location. For example, the SARS crisis, coupled with a more global world market, is likely to see exporters increase diversification in both products and geographical markets. On a larger scale, economies may also look to become more diversified, as SARS revealed that many Asian countries were heavily dependent on the services sector. For businesses, related diversification appears to be superior to unrelated diversification.

Scenario Planning

As noted earlier, the nature of environmental threats is changing, and such threats are increasingly difficult to anticipate. Indeed, SARS illustrated the difficulty of trying to predict where the next threat will come from and has called into question traditional linear planning and forecasting. Such planning techniques assume that the environment in the future will be very much like today's, and that extrapolation is meaningful. In today's turbulent and disruptive environment, this assumption is no longer valid; what are needed are plans flexible enough to adapt to circumstances. Accordingly, the SARS crisis is likely to accelerate the current trend toward the adoption of scenario planning. Rather than forecasting a specific future or "most likely outcome," scenario planning builds on existing knowledge to develop several plausible future scenarios and then requires constructing robust strategies that will provide competitive advantage no matter what specific events unfold. As such, it encourages firms to think about "worst-case" scenarios, which may include technological, economic, political, or environmental calamities. Schnaars discusses a number of different approaches that can be adopted when designing strategy for multiple scenarios; one simple robustness criterion is sketched below. Scenario planning forces firms to pay closer attention to the internal, external, and broader global environmental factors that may influence the firm's future. This process challenges firms to avoid complacency in their strategy formulation and encourages managers to think more broadly and unconventionally and to view events with a new perspective, an essential requirement in trying to prepare for unknowable shocks and crises (Kennedy, Perrottet, & Thomas, 2003). Arguably, SARS will "shake things up" and encourage strategists to consider still more diverse and unexpected scenarios, as prior to SARS many strategic-planning scenarios had not been constructed with a disease in mind.
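To make the idea of a "robust" strategy concrete, the sketch below scores a few candidate strategies against several imagined scenarios and selects the one with the smallest worst-case regret (minimax regret), one common way of operationalizing "competitive advantage no matter what unfolds." The strategies, scenarios, and payoffs are invented for illustration and are not taken from the article or from Schnaars.

```python
# Minimax-regret strategy selection across scenarios (illustrative payoffs).
# Rows: candidate strategies; columns line up with the scenarios list.
scenarios = ["status quo", "regional epidemic", "global recession"]
payoffs = {
    "concentrate production in one low-cost site": [100, 10, 40],
    "multi-site network with backup suppliers":    [80, 60, 50],
    "outsource via third-party logistics":         [90, 40, 25],
}

# Best achievable payoff in each scenario, across all strategies.
best_per_scenario = [max(col) for col in zip(*payoffs.values())]
print(dict(zip(scenarios, best_per_scenario)))  # what a clairvoyant firm could earn

# Regret = shortfall versus that best, in each scenario; keep each
# strategy's worst case, then pick the strategy whose worst case is smallest.
worst_regret = {
    strategy: max(best - got for best, got in zip(best_per_scenario, row))
    for strategy, row in payoffs.items()
}
print(worst_regret)  # concentrate: 50, multi-site: 20, outsource: 25

robust = min(worst_regret, key=worst_regret.get)
print("Most robust strategy:", robust)  # -> multi-site network with backup suppliers
```

Note how the focused, efficiency-maximizing strategy wins only in the benign scenario; the diversified one never wins outright but is never far from the best, which is exactly the trade-off the article describes.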
Business Continuity

The Federal Emergency Management Agency has estimated that the costs of disasters are 15 times greater than the costs of preparing for them. Indeed, events such as September 11 and SARS illustrated the value of business continuity planning, which involves the strategies, services, and technologies that enable firms to prevent, cope with, and recover from disasters while ensuring the continued running of the business. While the SARS crisis certainly reinforced the need for such planning, it also carried implications for the content of continuity plans in the future. Some continuity plans actioned during the SARS crisis saw the establishment of parallel operations or the shifting of work to safer regions or locations so as to create back-up sites, but cost and time factors can militate against this being a feasible option for many firms. What SARS did demonstrate, and what has been suggested by business-continuity writers, is that technology may be the key, and that "telecommuting" or "teleworking" should be part of any company's business continuity plan. Accordingly, firms need to ensure they have the technological infrastructure to support working from home or from remote locations, in case an event like SARS forces offices to close. At a basic level, this requires employees to have access to the firm's data and intranet from home, and such access must be secure. To reduce reliance on a single data source, firms are also beginning to employ "network storage" or "data mirroring" technologies so that key transactional data is copied in almost real time to other locations, thus creating a back-up.

The SARS crisis also highlighted the usefulness of video-conferencing and teleconferencing technology, particularly given that higher bandwidth speeds now make such conferencing a more viable option. While travel bans and the reluctance to travel persisted, conferencing technology allowed continued contact with clients and overseas partners, and allowed important meetings to take place. While such technologies may not immediately become the industry norm, given that personal contact in Asian countries is highly valued, their introduction as a risk-management device may secure their gradual acceptance as their long-term benefits become more obvious. Indeed, anecdotal evidence suggests that firms who invested in such technology during the SARS crisis will continue to use it in the future.

While technology is important, it alone may not be sufficient; the human element in continuity plans also matters. Key workers must be identified and must have access to the right IT equipment and training to enable them to carry on working if the office has to be shut down. Key staff should also be spread among different sites, as one organization learned the hard way when it lost its entire IT recovery team, located in the World Trade Center, on September 11. Finally, firms must realize that telecommuting has a human dimension: workers stuck at home often experience feelings of isolation, anxiety, and depression, and planning must incorporate how such psychological problems can be addressed.

Fear Management

As mentioned in the case study, the impact of the fear of SARS was greater than that of SARS itself, which has implications for how a crisis such as SARS is managed in the future. What is glaringly obvious is the need for full disclosure of information, given that panic about SARS was fueled when information was concealed or only partially disclosed, leading to rumors and exaggeration. Employees need facts from, and questions answered by, reliable and credible sources. Responses that proved effective include establishing 24-hour hotlines to communicate with staff and directing staff to other information sources, such as the WHO Web site.

CONCLUSIONS

The SARS crisis occurred against the backdrop of a highly interconnected and integrated world economy and has established itself as a new kind of global threat, along with other unpredictable events such as the Asian financial crisis and global terrorism. Rather than having a localized impact, the impact of SARS has been far-reaching, even if this stemmed largely from fear of the virus rather than the virus itself. In Table 1, we summarize some of the key differences between the traditional and new forms of risk. For governments, the message is clear: even in a world without borders, the state will still have a role, given that unsupported market processes are insufficient by themselves to solve the problems created by SARS. However, with this responsibility comes the requirement that governments act in an open and transparent manner, something that is arguably a precondition for the effective handling of a crisis such as SARS.
Global phenomena such as SARS also emphasize the need for a collective response and for more openness and cooperation among nations. For businesses, the ability of SARS to significantly disrupt international business, and the speed with which the disease was transmitted, suggest that this new kind of event is global and systemic, and accordingly warrants a broad and encompassing risk-management approach. The implication is that firms must put a higher premium on strategies that emphasize flexibility and responsiveness. Indeed, firms will find value in increasing diversification, whether in sourcing or in corporate strategy. Planning must also become less linear and more contingency-based; by considering a range of possible future scenarios, firms will be in a better position to handle disruptions that increasingly cannot be predicted. Technology also appears to offer a possible solution as a risk-management device, and we are likely to see technologies such as video conferencing become a commonplace feature in offices of the future. Further research on the role that strategies, structures, and resources play in anticipating, responding to, and adjusting to environmental disruptions is necessary. In sum, while an event like SARS produces considerable challenges, it also offers insights into how firms can better equip themselves to manage within an increasingly turbulent and unpredictable environment.
package com.denizenscript.denizen2sponge.tags.handlers; import com.denizenscript.denizen2sponge.tags.objects.LocationTag; import com.denizenscript.denizen2core.tags.AbstractTagBase; import com.denizenscript.denizen2core.tags.AbstractTagObject; import com.denizenscript.denizen2core.tags.TagData; public class LocationTagBase extends AbstractTagBase { // <--[tagbase] // @Since 0.3.0 // @Base location[<LocationTag>] // @Group Sponge Base Types // @ReturnType LocationTag // @Returns the input as a LocationTag. // --> @Override public String getName() { return "location"; } @Override public AbstractTagObject handle(TagData data) { if (!data.hasNextModifier()) { data.error.run("Invalid location tag-base: expected a modifier! See documentation for this tag!"); return null; } return LocationTag.getFor(data.error, data.getNextModifier()).handle(data.shrink()); } }
declare class PWABadge { private readonly navigator: Navigator; private readonly window: Window; /** * Check the Browser Badge feature supports */ isSupported(): boolean; /** * Sets the PWA App's badge. * * If a value is provided, set the badge to the provided value otherwise, display a plain white dot (or other flag as reprobate to the platform) * Don't assume anything about how the user agent displays the badge. * Some user agents may take a number like "4000" and rewrite it as "99+". * If you saturate the badge yourself (for example by setting it to "99") then the "+" won't appear. * No matter the actual number, just call setAppBadge(unreadCount) and let the user agent deal with displaying it accordingly. * * Setting number to `0` is the same as calling {@link syncClearBadge|this.syncClearBadge()} or {@link syncClearBadge|this.asyncClearBadge()}. * * @template Unread Badge count */ syncSetBadge(unreadCount: number): void; /** * Sets the PWA App's badge. * * If a value is provided, set the badge to the provided value otherwise, display a plain white dot (or other flag as reprobate to the platform) * Don't assume anything about how the user agent displays the badge. * Some user agents may take a number like "4000" and rewrite it as "99+". * If you saturate the badge yourself (for example by setting it to "99") then the "+" won't appear. * No matter the actual number, just call setAppBadge(unreadCount) and let the user agent deal with displaying it accordingly. * * Setting number to `0` is the same as calling {@link syncClearBadge|this.syncClearBadge()} or {@link syncClearBadge|this.asyncClearBadge()}. * * @template Unread Badge count */ asyncSetBadge(unreadCount: number): Promise<void>; /** * Removes app's badge. */ syncClearBadge(): void; /** * Removes app's badge. */ asyncClearBadge(): Promise<void>; } export default PWABadge;
/** * This action bans the user * @author didacus */ public class BanAction implements Action { private Date banForever; private Date endBanDate; private final UserManager userManager; private final Log banActionLogger = LogFactory.getLog(BanAction.class); /** * Create the action * @param ds the datasource object */ public BanAction(DataSource ds) { super(); this.userManager = new UserModelManager(ds); } @Override public String execute(HttpServletRequest req, HttpServletResponse res) { try { Calendar todayDate = Calendar.getInstance(); todayDate.add(Calendar.MONTH, 1); this.endBanDate = new Date(todayDate.getTimeInMillis()); todayDate.add(Calendar.YEAR, 100); this.banForever = new Date(todayDate.getTimeInMillis()); String emailUser = req.getParameter("email"); boolean typeBan = Boolean.parseBoolean(req.getParameter("typeBan")); if (typeBan) { this.userManager.banUser(banForever, emailUser); } else { this.userManager.banUser(endBanDate, emailUser); } return "/admin/AdminController?action=showUsersAction"; } catch (SQLException e) { this.banActionLogger.error("Errore Interno", e); return "/error500.jsp"; } } }
package org.ff4j.store; import static org.ff4j.store.JdbcStoreConstants.COL_FEAT_GROUPNAME; import static org.ff4j.store.JdbcStoreConstants.COL_ROLE_FEATID; import static org.ff4j.store.JdbcStoreConstants.COL_ROLE_ROLENAME; import static org.ff4j.utils.JdbcUtils.buildStatement; /* * #%L * ff4j-core * %% * Copyright (C) 2013 - 2015 FF4J * %% * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * #L% */ import static org.ff4j.utils.JdbcUtils.closeConnection; import static org.ff4j.utils.JdbcUtils.closeResultSet; import static org.ff4j.utils.JdbcUtils.closeStatement; import static org.ff4j.utils.JdbcUtils.executeUpdate; import static org.ff4j.utils.JdbcUtils.isTableExist; import static org.ff4j.utils.JdbcUtils.rollback; import static org.ff4j.utils.Util.assertHasLength; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.Collection; import java.util.HashSet; import java.util.LinkedHashMap; import java.util.Map; import java.util.Set; import javax.sql.DataSource; import org.ff4j.core.Feature; import org.ff4j.core.FeatureStore; import org.ff4j.exception.FeatureAccessException; import org.ff4j.exception.FeatureAlreadyExistException; import org.ff4j.property.Property; import org.ff4j.property.store.JdbcPropertyMapper; import org.ff4j.utils.JdbcUtils; import org.ff4j.utils.MappingUtil; import org.ff4j.utils.Util; /** * Implementation of {@link FeatureStore} to work with RDBMS through JDBC. * * @author <NAME> (@clunven) */ public class JdbcFeatureStore extends AbstractFeatureStore { /** Error message 1. */ public static final String CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE = "Cannot check feature existence, error related to database"; /** Error message 2. */ public static final String CANNOT_UPDATE_FEATURES_DATABASE_SQL_ERROR = "Cannot update features database, SQL ERROR"; /** Access to storage. */ private DataSource dataSource; /** Query builder. */ private JdbcQueryBuilder queryBuilder; /** Mapper. */ private JdbcPropertyMapper JDBC_PROPERTY_MAPPER = new JdbcPropertyMapper(); /** Mapper. */ private JdbcFeatureMapper JDBC_FEATURE_MAPPER = new JdbcFeatureMapper(); /** Default Constructor. */ public JdbcFeatureStore() {} /** * Constructor from DataSource. * * @param jdbcDS * native jdbc datasource */ public JdbcFeatureStore(DataSource jdbcDS) { this.dataSource = jdbcDS; } /** * Constructor from DataSource. 
* * @param jdbcDS * native jdbc datasource */ public JdbcFeatureStore(DataSource jdbcDS, String xmlConfFile) { this(jdbcDS); importFeaturesFromXmlFile(xmlConfFile); } /** {@inheritDoc} */ @Override public void createSchema() { DataSource ds = getDataSource(); JdbcQueryBuilder qb = getQueryBuilder(); String dbSchema = queryBuilder.getDbSchema(); if (!isTableExist(ds, qb.getTableNameFeatures(), dbSchema)) { executeUpdate(ds, qb.sqlCreateTableFeatures()); } if (!isTableExist(ds, qb.getTableNameCustomProperties(), dbSchema)) { executeUpdate(ds, qb.sqlCreateTableCustomProperties()); } if (!isTableExist(ds, qb.getTableNameRoles(), dbSchema)) { executeUpdate(ds, qb.sqlCreateTableRoles()); } } /** {@inheritDoc} */ @Override public void enable(String uid) { assertFeatureExist(uid); update(getQueryBuilder().enableFeature(), uid); } /** {@inheritDoc} */ @Override public void disable(String uid) { assertFeatureExist(uid); update(getQueryBuilder().disableFeature(), uid); } /** {@inheritDoc} */ @Override public boolean exist(String uid) { assertHasLength(uid); Connection sqlConn = null; PreparedStatement ps = null; ResultSet rs = null; try { sqlConn = getDataSource().getConnection(); ps = JdbcUtils.buildStatement(sqlConn, getQueryBuilder().existFeature(), uid); rs = ps.executeQuery(); rs.next(); return 1 == rs.getInt(1); } catch (SQLException sqlEX) { throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX); } finally { closeResultSet(rs); closeStatement(ps); closeConnection(sqlConn); } } /** {@inheritDoc} */ @Override public Feature read(String uid) { assertFeatureExist(uid); Connection sqlConn = null; PreparedStatement ps = null; ResultSet rs = null; try { sqlConn = getDataSource().getConnection(); ps = sqlConn.prepareStatement(getQueryBuilder().getFeature()); ps.setString(1, uid); rs = ps.executeQuery(); // Existence is tested before rs.next(); Feature f = JDBC_FEATURE_MAPPER.mapFeature(rs); closeResultSet(rs); rs = null; closeStatement(ps); ps = null; // Enrich to get roles 2nd request ps = sqlConn.prepareStatement(getQueryBuilder().getRoles()); ps.setString(1, uid); rs = ps.executeQuery(); while (rs.next()) { f.getPermissions().add(rs.getString("ROLE_NAME")); } closeResultSet(rs); rs = null; closeStatement(ps); ps = null; // Enrich with properties 3d request to get custom properties by uid ps = sqlConn.prepareStatement(getQueryBuilder().getFeatureProperties()); ps.setString(1, uid); rs = ps.executeQuery(); while (rs.next()) { f.addProperty(JDBC_PROPERTY_MAPPER.map(rs)); } return f; } catch (SQLException sqlEX) { throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX); } finally { closeResultSet(rs); closeStatement(ps); closeConnection(sqlConn); } } /** {@inheritDoc} */ @Override public void create(Feature fp) { assertFeatureNotNull(fp); Connection sqlConn = null; PreparedStatement ps = null; Boolean previousAutoCommit = null; try { // Create connection sqlConn = getDataSource().getConnection(); if (exist(fp.getUid())) { throw new FeatureAlreadyExistException(fp.getUid()); } // Begin TX previousAutoCommit = sqlConn.getAutoCommit(); sqlConn.setAutoCommit(false); // Create feature ps = sqlConn.prepareStatement(getQueryBuilder().createFeature()); ps.setString(1, fp.getUid()); ps.setInt(2, fp.isEnable() ? 
1 : 0); ps.setString(3, fp.getDescription()); String strategyColumn = null; String expressionColumn = null; if (fp.getFlippingStrategy() != null) { strategyColumn = fp.getFlippingStrategy().getClass().getName(); expressionColumn = MappingUtil.fromMap(fp.getFlippingStrategy().getInitParams()); } ps.setString(4, strategyColumn); ps.setString(5, expressionColumn); ps.setString(6, fp.getGroup()); ps.executeUpdate(); closeStatement(ps); ps = null; // Create roles for (String role : fp.getPermissions()) { ps = sqlConn.prepareStatement(getQueryBuilder().addRoleToFeature()); ps.setString(1, fp.getUid()); ps.setString(2, role); ps.executeUpdate(); closeStatement(ps); ps = null; } // Create customproperties if (fp.getCustomProperties() != null && !fp.getCustomProperties().isEmpty()) { for (Property<?> pp : fp.getCustomProperties().values()) { ps = createCustomProperty(sqlConn, fp.getUid(), pp); closeStatement(ps); ps = null; } } // Commit sqlConn.commit(); } catch (SQLException sqlEX) { rollback(sqlConn); throw new FeatureAccessException(CANNOT_UPDATE_FEATURES_DATABASE_SQL_ERROR, sqlEX); } finally { closeStatement(ps); closeConnection(sqlConn, previousAutoCommit); } } /** {@inheritDoc} */ @Override public void delete(String uid) { assertFeatureExist(uid); Connection sqlConn = null; PreparedStatement ps = null; Boolean previousAutoCommit = null; try { // Create connection sqlConn = getDataSource().getConnection(); previousAutoCommit = sqlConn.getAutoCommit(); sqlConn.setAutoCommit(false); Feature fp = read(uid); // Delete Properties if (fp.getCustomProperties() != null) { for (String property : fp.getCustomProperties().keySet()) { ps = sqlConn.prepareStatement(getQueryBuilder().deleteFeatureProperty()); ps.setString(1, property); ps.setString(2, fp.getUid()); ps.executeUpdate(); closeStatement(ps); ps = null; } } // Delete Roles if (fp.getPermissions() != null) { for (String role : fp.getPermissions()) { ps = sqlConn.prepareStatement(getQueryBuilder().deleteFeatureRole()); ps.setString(1, fp.getUid()); ps.setString(2, role); ps.executeUpdate(); closeStatement(ps); ps = null; } } // Delete Feature ps = sqlConn.prepareStatement(getQueryBuilder().deleteFeature()); ps.setString(1, fp.getUid()); ps.executeUpdate(); closeStatement(ps); ps = null; // Commit sqlConn.commit(); } catch (SQLException sqlEX) { rollback(sqlConn); throw new FeatureAccessException(CANNOT_UPDATE_FEATURES_DATABASE_SQL_ERROR, sqlEX); } finally { closeStatement(ps); closeConnection(sqlConn, previousAutoCommit); } } /** {@inheritDoc} */ @Override public void grantRoleOnFeature(String uid, String roleName) { assertFeatureExist(uid); assertHasLength(roleName); update(getQueryBuilder().addRoleToFeature(), uid, roleName); } /** {@inheritDoc} */ @Override public void removeRoleFromFeature(String uid, String roleName) { assertFeatureExist(uid); assertHasLength(roleName); update(getQueryBuilder().deleteFeatureRole(), uid, roleName); } /** {@inheritDoc} */ @Override public Map<String, Feature> readAll() { LinkedHashMap<String, Feature> mapFP = new LinkedHashMap<String, Feature>(); Connection sqlConn = null; PreparedStatement ps = null; ResultSet rs = null; try { // Returns features sqlConn = dataSource.getConnection(); ps = sqlConn.prepareStatement(getQueryBuilder().getAllFeatures()); rs = ps.executeQuery(); while (rs.next()) { Feature f = JDBC_FEATURE_MAPPER.mapFeature(rs); mapFP.put(f.getUid(), f); } closeResultSet(rs); rs = null; closeStatement(ps); ps = null; // Returns Roles ps = sqlConn.prepareStatement(getQueryBuilder().getAllRoles()); 
        rs = ps.executeQuery();
        while (rs.next()) {
            String uid = rs.getString(COL_ROLE_FEATID);
            mapFP.get(uid).getPermissions().add(rs.getString(COL_ROLE_ROLENAME));
        }
        closeResultSet(rs);
        rs = null;
        closeStatement(ps);
        ps = null;

        // Read custom properties for each feature
        for (Feature f : mapFP.values()) {
            ps = sqlConn.prepareStatement(getQueryBuilder().getFeatureProperties());
            ps.setString(1, f.getUid());
            rs = ps.executeQuery();
            while (rs.next()) {
                f.addProperty(JDBC_PROPERTY_MAPPER.map(rs));
            }
            closeResultSet(rs);
            rs = null;
            closeStatement(ps);
            ps = null;
        }
        return mapFP;
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeResultSet(rs);
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/** {@inheritDoc} */
@Override
public Set<String> readAllGroups() {
    Set<String> setOfGroups = new HashSet<String>();
    Connection sqlConn = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        // Returns features
        sqlConn = dataSource.getConnection();
        ps = sqlConn.prepareStatement(getQueryBuilder().getAllGroups());
        rs = ps.executeQuery();
        while (rs.next()) {
            String groupName = rs.getString(COL_FEAT_GROUPNAME);
            if (Util.hasLength(groupName)) {
                setOfGroups.add(groupName);
            }
        }
        return setOfGroups;
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException("Cannot list groups, error related to database", sqlEX);
    } finally {
        closeResultSet(rs);
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/** {@inheritDoc} */
@Override
public void update(Feature fp) {
    assertFeatureNotNull(fp);
    Connection sqlConn = null;
    PreparedStatement ps = null;
    try {
        sqlConn = dataSource.getConnection();
        Feature fpExist = read(fp.getUid());
        int enable = 0;
        if (fp.isEnable()) {
            enable = 1;
        }
        String fStrategy = null;
        String fExpression = null;
        if (fp.getFlippingStrategy() != null) {
            fStrategy = fp.getFlippingStrategy().getClass().getName();
            fExpression = MappingUtil.fromMap(fp.getFlippingStrategy().getInitParams());
        }
        update(getQueryBuilder().updateFeature(), enable, fp.getDescription(), fStrategy, fExpression, fp.getGroup(), fp.getUid());

        // ROLES
        // To be deleted (not in new value but was at first)
        Set<String> toBeDeleted = new HashSet<String>();
        toBeDeleted.addAll(fpExist.getPermissions());
        toBeDeleted.removeAll(fp.getPermissions());
        for (String roleToBeDelete : toBeDeleted) {
            removeRoleFromFeature(fpExist.getUid(), roleToBeDelete);
        }

        // To be created : in second but not in first
        Set<String> toBeAdded = new HashSet<String>();
        toBeAdded.addAll(fp.getPermissions());
        toBeAdded.removeAll(fpExist.getPermissions());
        for (String addee : toBeAdded) {
            grantRoleOnFeature(fpExist.getUid(), addee);
        }

        // REMOVE EXISTING CUSTOM PROPERTIES
        ps = sqlConn.prepareStatement(getQueryBuilder().deleteAllFeatureCustomProperties());
        ps.setString(1, fpExist.getUid());
        ps.executeUpdate();
        closeStatement(ps);
        ps = null;

        // CREATE CUSTOM PROPERTIES
        for (Property<?> property : fp.getCustomProperties().values()) {
            ps = createCustomProperty(sqlConn, fp.getUid(), property);
            closeStatement(ps);
            ps = null;
        }
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/** {@inheritDoc} */
@Override
public void clear() {
    Connection sqlConn = null;
    PreparedStatement ps = null;
    try {
        sqlConn = dataSource.getConnection();
        ps = sqlConn.prepareStatement(getQueryBuilder().deleteAllCustomProperties());
        ps.executeUpdate();
        closeStatement(ps);
        ps = null;
        ps = sqlConn.prepareStatement(getQueryBuilder().deleteAllRoles());
        ps.executeUpdate();
        closeStatement(ps);
        ps = null;
        ps = sqlConn.prepareStatement(getQueryBuilder().deleteAllFeatures());
        ps.executeUpdate();
        closeStatement(ps);
        ps = null;
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/**
 * Ease creation of properties in database.
 *
 * @param uid
 *            target unique identifier
 * @param props
 *            target properties
 */
public void createCustomProperties(String uid, Collection<Property<?>> props) {
    Util.assertNotNull(uid);
    if (props == null) return;
    Connection sqlConn = null;
    PreparedStatement ps = null;
    Boolean previousAutoCommit = null;
    try {
        sqlConn = dataSource.getConnection();
        // Begin TX
        previousAutoCommit = sqlConn.getAutoCommit();
        sqlConn.setAutoCommit(false);
        // Queries
        for (Property<?> pp : props) {
            ps = createCustomProperty(sqlConn, uid, pp);
            closeStatement(ps);
            ps = null;
        }
        // End TX
        sqlConn.commit();
    } catch (SQLException sqlEX) {
        rollback(sqlConn);
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeStatement(ps);
        closeConnection(sqlConn, previousAutoCommit);
    }
}

/**
 * Create SQL statement to create a property.
 *
 * @param sqlConn
 *            current sql connection
 * @param featureId
 *            current unique feature identifier
 * @param pp
 *            pojo property
 * @return
 *            sql statement that was executed
 * @throws SQLException
 *            error during sql operation
 */
private PreparedStatement createCustomProperty(Connection sqlConn, String featureId, Property<?> pp) throws SQLException {
    PreparedStatement ps = sqlConn.prepareStatement(getQueryBuilder().createFeatureProperty());
    ps.setString(1, pp.getName());
    ps.setString(2, pp.getType());
    ps.setString(3, pp.asString());
    ps.setString(4, pp.getDescription());
    if (pp.getFixedValues() != null && !pp.getFixedValues().isEmpty()) {
        String fixedValues = pp.getFixedValues().toString();
        ps.setString(5, fixedValues.substring(1, fixedValues.length() - 1));
    } else {
        ps.setString(5, null);
    }
    ps.setString(6, featureId);
    ps.executeUpdate();
    return ps;
}

/** {@inheritDoc} */
@Override
public boolean existGroup(String groupName) {
    assertHasLength(groupName);
    Connection sqlConn = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        sqlConn = dataSource.getConnection();
        ps = sqlConn.prepareStatement(getQueryBuilder().existGroup());
        ps.setString(1, groupName);
        rs = ps.executeQuery();
        rs.next();
        return rs.getInt(1) > 0;
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeResultSet(rs);
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/** {@inheritDoc} */
@Override
public void enableGroup(String groupName) {
    assertGroupExist(groupName);
    update(getQueryBuilder().enableGroup(), groupName);
}

/** {@inheritDoc} */
@Override
public void disableGroup(String groupName) {
    assertGroupExist(groupName);
    update(getQueryBuilder().disableGroup(), groupName);
}

/** {@inheritDoc} */
@Override
public Map<String, Feature> readGroup(String groupName) {
    assertGroupExist(groupName);
    LinkedHashMap<String, Feature> mapFP = new LinkedHashMap<String, Feature>();
    Connection sqlConn = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        // Returns features
        sqlConn = dataSource.getConnection();
        ps = sqlConn.prepareStatement(getQueryBuilder().getFeatureOfGroup());
        ps.setString(1, groupName);
        rs = ps.executeQuery();
        while (rs.next()) {
            Feature f = JDBC_FEATURE_MAPPER.mapFeature(rs);
            mapFP.put(f.getUid(), f);
        }
        closeResultSet(rs);
        rs = null;
        closeStatement(ps);
        ps = null;

        // Returns Roles
        ps = sqlConn.prepareStatement(getQueryBuilder().getAllRoles());
        rs = ps.executeQuery();
        while (rs.next()) {
            String uid = rs.getString(COL_ROLE_FEATID);
            // only features in the group must be processed
            if (mapFP.containsKey(uid)) {
                mapFP.get(uid).getPermissions().add(rs.getString(COL_ROLE_ROLENAME));
            }
        }
        closeResultSet(rs);
        rs = null;
        closeStatement(ps);
        ps = null;

        // Read custom properties for each feature
        for (Feature f : mapFP.values()) {
            ps = sqlConn.prepareStatement(getQueryBuilder().getFeatureProperties());
            ps.setString(1, f.getUid());
            rs = ps.executeQuery();
            while (rs.next()) {
                f.addProperty(JDBC_PROPERTY_MAPPER.map(rs));
            }
            closeResultSet(rs);
            rs = null;
            closeStatement(ps);
            ps = null;
        }
        return mapFP;
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_CHECK_FEATURE_EXISTENCE_ERROR_RELATED_TO_DATABASE, sqlEX);
    } finally {
        closeResultSet(rs);
        closeStatement(ps);
        closeConnection(sqlConn);
    }
}

/** {@inheritDoc} */
@Override
public void addToGroup(String uid, String groupName) {
    assertFeatureExist(uid);
    assertHasLength(groupName);
    update(getQueryBuilder().addFeatureToGroup(), groupName, uid);
}

/** {@inheritDoc} */
@Override
public void removeFromGroup(String uid, String groupName) {
    assertFeatureExist(uid);
    assertGroupExist(groupName);
    Feature feat = read(uid);
    if (feat.getGroup() != null && !feat.getGroup().equals(groupName)) {
        throw new IllegalArgumentException("'" + uid + "' is not in group '" + groupName + "'");
    }
    update(getQueryBuilder().addFeatureToGroup(), "", uid);
}

/**
 * Utility method to perform UPDATE and DELETE operations.
 *
 * @param query
 *            target query
 * @param params
 *            sql query params
 */
public void update(String query, Object... params) {
    Connection sqlConnection = null;
    PreparedStatement ps = null;
    try {
        sqlConnection = dataSource.getConnection();
        ps = buildStatement(sqlConnection, query, params);
        ps.executeUpdate();
        if (!sqlConnection.getAutoCommit()) {
            sqlConnection.commit();
        }
    } catch (SQLException sqlEX) {
        throw new FeatureAccessException(CANNOT_UPDATE_FEATURES_DATABASE_SQL_ERROR, sqlEX);
    } finally {
        closeStatement(ps);
        closeConnection(sqlConnection);
    }
}

/**
 * Getter accessor for attribute 'dataSource'.
 *
 * @return current value of 'dataSource'
 */
public DataSource getDataSource() {
    if (dataSource == null) {
        throw new IllegalStateException("DataSource has not been initialized");
    }
    return dataSource;
}

/**
 * Setter accessor for attribute 'dataSource'.
 *
 * @param dataSource
 *            new value for 'dataSource'
 */
public void setDataSource(DataSource dataSource) {
    this.dataSource = dataSource;
}

/**
 * @return the queryBuilder
 */
public JdbcQueryBuilder getQueryBuilder() {
    if (queryBuilder == null) {
        queryBuilder = new JdbcQueryBuilder();
    }
    return queryBuilder;
}

/**
 * @param queryBuilder the queryBuilder to set
 */
public void setQueryBuilder(JdbcQueryBuilder queryBuilder) {
    this.queryBuilder = queryBuilder;
}
}
package com.netflix.hystrix.contrib.javanica.test.common.command;

import com.netflix.hystrix.HystrixEventType;
import com.netflix.hystrix.HystrixInvokableInfo;
import com.netflix.hystrix.HystrixRequestLog;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.command.AsyncResult;
import com.netflix.hystrix.contrib.javanica.test.common.BasicHystrixTest;
import com.netflix.hystrix.contrib.javanica.test.common.domain.User;
import org.junit.Before;
import org.junit.Test;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public abstract class BasicCommandTest extends BasicHystrixTest {

    private UserService userService;
    private AdvancedUserService advancedUserService;
    private GenericService<String, Long, User> genericUserService;

    @Before
    public void setUp() throws Exception {
        super.setUp();
        userService = createUserService();
        advancedUserService = createAdvancedUserServiceService();
        genericUserService = createGenericUserService();
    }

    @Test
    public void testGetUserAsync() throws ExecutionException, InterruptedException {
        Future<User> f1 = userService.getUserAsync("1", "name: ");

        assertEquals("name: 1", f1.get().getName());
        assertEquals(1, HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().size());
        com.netflix.hystrix.HystrixInvokableInfo<?> command = getCommand();
        // assert the command key name is the one we're expecting
        assertEquals("GetUserCommand", command.getCommandKey().name());
        // assert the command group key name is the one we're expecting
        assertEquals("UserService", command.getCommandGroup().name());
        // assert the command thread pool key name is the one we're expecting
        assertEquals("CommandTestAsync", command.getThreadPoolKey().name());
        // it was successful
        assertTrue(command.getExecutionEvents().contains(HystrixEventType.SUCCESS));
    }

    @Test
    public void testGetUserSync() {
        User u1 = userService.getUserSync("1", "name: ");
        assertGetUserSyncCommandExecuted(u1);
    }

    @Test
    public void shouldWorkWithInheritedMethod() {
        User u1 = advancedUserService.getUserSync("1", "name: ");
        assertGetUserSyncCommandExecuted(u1);
    }

    @Test
    public void should_work_with_parameterized_method() throws Exception {
        assertEquals(Integer.valueOf(1), userService.echo(1));
        assertEquals(1, HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().size());
        assertTrue(getCommand().getExecutionEvents().contains(HystrixEventType.SUCCESS));
    }

    @Test
    public void should_work_with_parameterized_asyncMethod() throws Exception {
        assertEquals(Integer.valueOf(1), userService.echoAsync(1).get());
        assertEquals(1, HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().size());
        assertTrue(getCommand().getExecutionEvents().contains(HystrixEventType.SUCCESS));
    }

    @Test
    public void should_work_with_genericClass_fallback() {
        User user = genericUserService.getByKeyForceFail("1", 2L);
        assertEquals("name: 2", user.getName());
        assertEquals(1, HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().size());
        HystrixInvokableInfo<?> command = HystrixRequestLog.getCurrentRequest()
                .getAllExecutedCommands().iterator().next();
        assertEquals("getByKeyForceFail", command.getCommandKey().name());
        // confirm that command has failed
        assertTrue(command.getExecutionEvents().contains(HystrixEventType.FAILURE));
        // and that fallback was successful
        assertTrue(command.getExecutionEvents().contains(HystrixEventType.FALLBACK_SUCCESS));
    }

    private void assertGetUserSyncCommandExecuted(User u1) {
        assertEquals("name: 1", u1.getName());
        assertEquals(1, HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().size());
        com.netflix.hystrix.HystrixInvokableInfo<?> command = getCommand();
        assertEquals("getUserSync", command.getCommandKey().name());
        assertEquals("UserGroup", command.getCommandGroup().name());
        assertEquals("UserGroup", command.getThreadPoolKey().name());
        assertTrue(command.getExecutionEvents().contains(HystrixEventType.SUCCESS));
    }

    private com.netflix.hystrix.HystrixInvokableInfo<?> getCommand() {
        return HystrixRequestLog.getCurrentRequest().getAllExecutedCommands().iterator().next();
    }

    protected abstract UserService createUserService();

    protected abstract AdvancedUserService createAdvancedUserServiceService();

    protected abstract GenericService<String, Long, User> createGenericUserService();

    public interface GenericService<K1, K2, V> {
        V getByKey(K1 key1, K2 key2);

        V getByKeyForceFail(K1 key, K2 key2);

        V fallback(K1 key, K2 key2);
    }

    public static class GenericUserService implements GenericService<String, Long, User> {

        @HystrixCommand(fallbackMethod = "fallback")
        @Override
        public User getByKey(String sKey, Long lKey) {
            return new User(sKey, "name: " + lKey); // it should be network call
        }

        @HystrixCommand(fallbackMethod = "fallback")
        @Override
        public User getByKeyForceFail(String sKey, Long lKey) {
            throw new RuntimeException("force fail");
        }

        @Override
        public User fallback(String sKey, Long lKey) {
            return new User(sKey, "name: " + lKey);
        }
    }

    public static class UserService {

        @HystrixCommand(commandKey = "GetUserCommand", threadPoolKey = "CommandTestAsync")
        public Future<User> getUserAsync(final String id, final String name) {
            return new AsyncResult<User>() {
                @Override
                public User invoke() {
                    return new User(id, name + id); // it should be network call
                }
            };
        }

        @HystrixCommand(groupKey = "UserGroup")
        public User getUserSync(String id, String name) {
            return new User(id, name + id); // it should be network call
        }

        @HystrixCommand
        public <T> T echo(T value) {
            return value;
        }

        @HystrixCommand
        public <T> Future<T> echoAsync(final T value) {
            return new AsyncResult<T>() {
                @Override
                public T invoke() {
                    return value;
                }
            };
        }
    }

    public static class AdvancedUserService extends UserService {
    }
}
/**
 * Prepares the *.inp file from a vector of segments.
 * Appends a guess constructed from monomer orbitals if supplied (not implemented yet).
 */
bool Orca::WriteInputFile(std::vector<ctp::Segment*> segments, Orbitals* orbitals_guess,
                          std::vector<ctp::PolarSeg*> PolarSegments) {

    std::string temp_suffix = "/id";
    std::string scratch_dir_backup = _scratch_dir;
    std::ofstream _com_file;
    std::string _com_file_name_full = _run_dir + "/" + _input_file_name;
    _com_file.open(_com_file_name_full.c_str());
    _com_file << "* xyz " << _charge << " " << _spin << endl;

    std::vector<ctp::QMAtom*> qmatoms;
    if (_write_charges) {
        qmatoms = orbitals_guess->QMAtoms();
    } else {
        QMMInterface qmmface;
        qmatoms = qmmface.Convert(segments);
    }
    WriteCoordinates(_com_file, qmatoms);

    _com_file << "%pal\n " << "nprocs " << _threads << "\nend" << "\n" << endl;

    if (_write_basis_set) {
        std::string _el_file_name = _run_dir + "/" + "system.bas";
        WriteBasisset(qmatoms, _basisset_name, _el_file_name);
        _com_file << "%basis\n " << endl;
        _com_file << "GTOName" << " " << "=" << "\"system.bas\";" << endl;
        if (_auxbasisset_name != "") {
            std::string _aux_file_name = _run_dir + "/" + "system.aux";
            WriteBasisset(qmatoms, _auxbasisset_name, _aux_file_name);
            _com_file << "GTOAuxName" << " " << "=" << "\"system.aux\";" << endl;
        }
    }

    if (_write_pseudopotentials) {
        WriteECP(_com_file, qmatoms);
    }
    _com_file << "end\n " << "\n" << endl;

    if (_write_charges) {
        WriteBackgroundCharges(PolarSegments);
    }

    _com_file << _options << "\n";
    if (_write_guess) {
        throw std::runtime_error("Not implemented in orca");
    }
    _com_file << endl;
    _com_file.close();

    CTP_LOG(ctp::logDEBUG, *_pLog) << "Setting the scratch dir to "
                                   << _scratch_dir + temp_suffix << flush;
    _scratch_dir = scratch_dir_backup + temp_suffix;
    WriteShellScript();
    _scratch_dir = scratch_dir_backup;

    return true;
}
Calvin Harris pours his emotions on the dance floor in the impassioned breakup anthem, “My Way.” “I made my move and it was all about you,” the Scottish DJ sings. “Now I feel so far removed.” The breezy, predictable melody rises and falls in waves driven forward by an unwavering kick drum. Only the string section draws out the song’s inherent melancholy. Harris’ big pop singles usually feature collaborations with singers like Rihanna, Haim, Ellie Goulding, or rappers like Big Sean. But “My Way” is a rare exception that finds Harris stepping beyond the booth for the first time in years. The last time he sang on one of his own tracks was for his 2014 hit “Summer,” and before that, “Feel So Close,” a single that originally came out in 2011. Both were hit songs on American radio. This year, Harris has focused on collaborations: “This Is What You Came For,” with Rihanna and co-written by Taylor Swift, “Hype,” which featured Dizzee Rascal and “Ole,” which reunited Harris with John Newman. “This Is What You Came For” is currently at No. 7 on the Hot 100. It climbed all the way to No. 3 earlier this summer.
# -*- coding: utf-8 -*-

# Scrapy settings for actor_spider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'actor_spider'

SPIDER_MODULES = ['actor_spider.spiders']
NEWSPIDER_MODULE = 'actor_spider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:56.0) Gecko/20100101 Firefox/56.0'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = True

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    # 'actor_spider.middlewares.ActorSpiderSpiderMiddleware': 543,
}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    # 'actor_spider.middlewares.MyCustomDownloaderMiddleware': 543,
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# ITEM_PIPELINES = {
#     'actor_spider.my_imagepipelines.MyImagesPipeline': 300,
#     'scrapy.pipelines.images.ImagesPipeline': 310
# }

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# my settings
SPLASH_URL = 'http://192.168.99.100:8050'
# docker run -p 8050:8050 scrapinghub/splash
# docker run -p 8050:8050 scrapinghub/splash --max-timeout 3600
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# IMAGES_STORE = r'D:\\picture_Sheldon_baidu'
IMAGES_STORE = r'D:\\Johnny_Galecki'

# Days before a cached image expires
# IMAGES_EXPIRES = 90

# Thumbnails
# IMAGES_THUMBS = {
#     'small': (64, 64),
#     'big': (1024, 1024),
# }

# Filter out small images
# IMAGES_MIN_HEIGHT = 110
# IMAGES_MIN_WIDTH = 110
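For context, here is a minimal spider sketch showing how the Splash settings above are consumed. The spider name, start URL, and extracted field are hypothetical; SplashRequest is the real entry point provided by the scrapy_splash package configured above.

import scrapy
from scrapy_splash import SplashRequest


class ActorSpider(scrapy.Spider):
    # Hypothetical spider living in actor_spider/spiders/
    name = "actor"

    def start_requests(self):
        # Requests are routed through the Splash instance at SPLASH_URL,
        # so JavaScript on the page is rendered before parsing
        yield SplashRequest("http://example.com", self.parse, args={"wait": 1.0})

    def parse(self, response):
        # response.body now holds the rendered HTML
        yield {"title": response.css("title::text").get()}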
Phenotypic and functional abnormalities in the peripheral blood T-cells of patients with primary Sjogren's syndrome. Changes in the balance of regulatory T-cell subsets (including the recently described CD4 helper-inducer and suppressor-inducer subsets) in the peripheral blood may play a role in the pathogenesis of primary Sjogren's syndrome (SS). Direct immunofluorescence and flow cytometry were used to quantitate and analyse peripheral blood lymphocytes in 15 patients with primary SS and 15 control subjects. A reduction in the percentage of circulating CD4 lymphocytes was observed in patients with SS. There was no quantitative abnormality in the percentage of circulating CD4+ 2H4+ (suppressor inducer), CD4+ 4B4+ (helper inducer), CD2, CD3, CD8, CD8+ 2H4+, CD8+ 4B4+, CD25 (IL-2R), CD19, CD16, or CD57 lymphocytes in the patients. Circulating CD8 lymphocytes expressing the activation marker HLA-DR were increased in the patients. The functional status of peripheral blood lymphocytes was assessed by PHA (phytohaemagglutinin) stimulation, followed by monitoring the proliferative response by radiolabelled thymidine uptake and expression of CD25 (interleukin-2 receptor). A reduction in the proliferative response to PHA of total, CD4-depleted, and CD8-depleted lymphocyte suspensions was demonstrated. The level of expression of CD25 (IL-2 receptor) was similar in patients and controls before and after 24 h of stimulation with PHA. We conclude that there is a disturbance in the functional properties of peripheral blood T cells that can contribute to the immunopathogenesis of SS. Meanwhile, a quantitative reduction of suppressor/inducer lymphocytes, as defined by the CD4+ 2H4+ phenotype, can be precluded from a role in the development of this autoimmune condition.
Finite element analysis of a single conductor with a Stockbridge damper under Aeolian vibration. A finite element model is developed to predict the vibrational response of a single conductor with a Stockbridge damper. The mathematical model accounts for the two-way coupling between the conductor and the damper. A two-part numerical analysis using MATLAB is presented to simulate the response of the system. The first part deals with the vibration of the conductor without a damper. The results indicate that longer-span conductors without dampers are susceptible to fatigue failure. In the second part, a damper is attached to the conductor and the effects of the excitation frequency, the damper mass, and the damper location are investigated. This investigation shows that the presence of a properly positioned damper on the conductor significantly reduces fatigue failure.
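For background, the excitation frequency driving Aeolian vibration is commonly estimated from the Strouhal relation f = S·V/d, where S ≈ 0.2 is the Strouhal number, V the wind speed, and d the conductor diameter; this is standard vortex-shedding theory rather than a result of the paper.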
import numpy as np

# `util` and `N_DECIMALS` are assumed to be provided by the enclosing module
# (the function mirrors mir_eval-style transcription matching utilities)


def match_note_offsets(ref_intervals, est_intervals, offset_ratio=0.2,
                       offset_min_tolerance=0.05, strict=False):
    """Match reference and estimated notes solely by their offset times.

    An estimated offset is a hit if it lies within
    max(offset_ratio * reference_duration, offset_min_tolerance) of the
    reference offset; the result is a maximum bipartite matching over hits.
    """
    # With strict=True a hit requires a distance strictly below the tolerance
    if strict:
        cmp_func = np.less
    else:
        cmp_func = np.less_equal

    # Pairwise |ref_offset - est_offset| distances, rounded to avoid
    # spurious mismatches from floating point noise
    offset_distances = np.abs(np.subtract.outer(ref_intervals[:, 1],
                                                est_intervals[:, 1]))
    offset_distances = np.around(offset_distances, decimals=N_DECIMALS)

    # Per-reference tolerance: a fraction of the note duration, with a floor
    ref_durations = util.intervals_to_durations(ref_intervals)
    offset_tolerances = np.maximum(offset_ratio * ref_durations,
                                   offset_min_tolerance)

    # Boolean matrix of candidate (ref, est) offset hits
    offset_hit_matrix = cmp_func(offset_distances,
                                 offset_tolerances.reshape(-1, 1))

    # Build the bipartite graph: estimate index -> candidate reference indices
    hits = np.where(offset_hit_matrix)
    G = {}
    for ref_i, est_i in zip(*hits):
        if est_i not in G:
            G[est_i] = []
        G[est_i].append(ref_i)

    # Maximum matching, returned as sorted (reference_index, estimate_index) pairs
    matching = sorted(util._bipartite_match(G).items())
    return matching
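A minimal usage sketch (assuming the function above and its module helpers, `util` and `N_DECIMALS`, are importable; the interval values are made up):

import numpy as np

# Each row is one note as [onset_time, offset_time] in seconds (made-up data)
ref_intervals = np.array([[0.0, 1.0], [1.5, 2.5], [3.0, 4.0]])
est_intervals = np.array([[0.0, 1.02], [1.5, 2.9], [3.0, 3.99]])

# Tolerance per reference note: max(0.2 * duration, 0.05) = 0.2 s here,
# so notes 0 and 2 match on offset while note 1 (0.4 s away) does not
matching = match_note_offsets(ref_intervals, est_intervals)
print(matching)  # [(0, 0), (2, 2)] as (reference_index, estimate_index) pairs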
The point of me writing this post isn't to create a sob story; the aim is to raise awareness. When I write about my childhood experiences in particular, my hope is to illustrate what other children with ADHD may be experiencing due to their ADHD. I certainly didn't have the words to explain my difficulties back then, nor did I have the courage to express them. Perhaps what I write today will help some, whether it be a child, a parent or a teacher. Through my work supporting those affected by ADHD, it has become evident that many children with ADHD experience the same vulnerabilities, exclusion and issues. I think it's imperative we try to bring understanding to the phenomenon of ADHD. I've been thinking about vulnerability and how it relates to those with ADHD. Having ADHD myself, I have no clear memories of feeling vulnerable as a child. I probably felt invincible rather than vulnerable. I was always quick with my tongue and I used it as a strong defence to protect myself. If someone said something smart, I was always able to fire a smart-assed comment back just as quick. I would definitely have felt isolated and secluded, but not really vulnerable. Because my birthday is in July, I went to school a year too soon, and because of this I was very small compared to my classmates and possibly a year less mature. I cannot say for sure, but I would imagine my ADHD traits may have irritated my peers, adding to the reasons why I may have been Billy No Mates. I was never invited to a birthday party in primary school, and although I felt that rejection throughout, I somehow learned to deal with it. I never really enjoyed playing with big groups anyway, and I was happy enough to run around on my own pretending I was Superman. During my primary school years, Barry McGuigan became the world featherweight boxing champion. Barry is from Clones, Co. Monaghan, only 7 miles from the village I'm from. I remember my Uncle Paddy used to get me posters of Barry, sponsored by Champion Milk (lol), and I had them all over my room. (Nostalgia!) In my mind I was his no. 1 fan. I became obsessed with boxing; I had boxing gloves, a punch bag and an older brother who was only too glad to get punching the head of me when we sparred. I'm not saying I was like Barry McGuigan, far from it actually, but I did get to the stage that, if I needed to, I could physically defend myself despite my scrawny build. So again, I didn't feel vulnerable, yet in many ways I was. As I got older and entered secondary school, I learned that bad behaviour (by this stage I was a specialist) meant instant access into the cool club. All of a sudden I was accepted and had "friends". Unfortunately, this was when I became much more vulnerable. One of the vulnerabilities for people with ADHD lies in the underdevelopment of effective self-discipline or self-control. My insecurities were easily tapped into and I found myself doing things that were suggested by others. "Niall, I dare you to tell Mrs to fuck off." In my mind I had to maintain my new "friends", even if it meant detention for a week. I was one of the first of my class to start smoking, because I was now hanging out with the older kids; one must keep up appearances. I even got in fights with people for no other reason than somebody saying, "Hit him." I was a child who was easily influenced. Many young people with ADHD end up in the Criminal Justice System due to this vulnerability.
In the company of the wrong people, ADHD children, teenagers and adults can be very susceptible to having their thoughts, emotions and actions manipulated and controlled without even realising it's happening. The school experience as a whole wasn't a very positive environment for me. I was always in trouble, but at least I was now getting rewarded for my poor behaviour by having people who said I was their "friend". Don't get me wrong, I was no angel, and I loved an audience. I had a natural ability to act like an edjit and make people laugh, both of which I have tuned to a fine art to this day. Education became of no interest to me whatsoever, as long as I had people that I could call my "friends". As I got older, I had other vulnerabilities to contend with, addiction for one. For me my escape was alcohol, and towards the end I was battling with drugs as well. It's a very frightening thing when a substance has so much power over you that you're willing to do almost anything to get more. Approximately 60% of those with ADHD will also have drug and alcohol issues. That is more than one in two. I will expand further on addiction and ADHD another time; I could probably write a book on that subject alone. Children with ADHD are much more vulnerable to accidents, such as falling off bicycles or skateboards, falling out of trees and running out on roads without looking, due to impulsivity and failing to recognise risks. As adults, the risk-taking vulnerability manifests as drug, alcohol and gambling addictions, or driving motorbikes or cars at 150 mph with a feeling of invincibility. A Danish study that came out last month showed that people with ADHD are at higher risk of dying due to some of what I've just described. Today, as an adult with ADHD, I have learned to manage life much more successfully. I keep my circles small and choose friends carefully. I can still be like the wee boy with the big dreams, and my Barry McGuigan obsession has transferred to a Conor McGregor obsession; again, in my mind, I am his no. 1 fan. I'm getting distracted here. My point is, adequate support for and understanding of this condition is needed, because the majority of people with ADHD remain highly vulnerable to substance abuse, depression, anxiety, accidents and manipulation by some. If you enjoyed this article, please consider sharing it, like us on Facebook (Adult ADHD NI) and follow us on Twitter @Niallgreene01 & @AdultADHDNI. Niall now offers one-to-one support through Skype for people affected by ADHD. If you wish to avail of this support service, please contact Adult ADHD NI by email: Niaadhd@gmail.com
#pragma ident "$Id: load_config.c,v 1.2 2006/02/10 02:05:34 dechavez Exp $"
/*======================================================================
 *
 *  Load the current configuration into the appropriate output location
 *
 * - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 *  Copyright (c) 1997 Regents of the University of California.
 *  All rights reserved.
 *====================================================================*/
#include "nrts.h"
#include "xfer.h"
#include "edes.h"

int load_config(struct edes_req *request, struct xfer_cnf *cnf, struct edes_params *edes)
{
    switch (cnf->format = request->cnf_form) {
      case XFER_CNFGEN1:
        return load_cnfgen1(&cnf->type.gen1, edes);
      case XFER_CNFNRTS:
        return load_cnfnrts(&cnf->type.nrts, edes);
      default:
        xfer_errno = XFER_EFORMAT;
        return -1;
    }
}

/* Revision History
 *
 * $Log: load_config.c,v $
 * Revision 1.2  2006/02/10 02:05:34  dechavez
 * mods for libida 4.0.0, libisida 1.0.0 and neighors (dbio support)
 *
 * Revision 1.1.1.1  2000/02/08 20:20:11  dec
 * import existing IDA/NRTS sources
 *
 */
Josh van der Flier is set to miss the rest of the season after he underwent surgery on his groin last week. The Leinster and Ireland flanker has been ruled out for 12 weeks as he looks to recover in time for the World Cup. Van der Flier suffered the injury in the Six Nations win over France earlier this month, in what is the latest cruel blow for the 25-year-old. Leinster head coach Leo Cullen also confirmed this morning that Robbie Henshaw saw a specialist in the UK as he looks to get to the bottom of his dead-leg issue. The centre is "less likely than likely" to feature in Saturday's Champions Cup quarter-final against Ulster. "Whatever way the tear in the muscle was... the dead leg was initially what the cause of concern was, but there was a little bit more damage a bit deeper in," Cullen revealed. Sean Cronin and Noel Reid are both following return-to-play protocols after suffering head knocks in last week's defeat to Edinburgh. Ross Byrne, who was a late withdrawal from the trip to Scotland due to "tightness in his foot", will be assessed as the week progresses. In more positive news, however, Dan Leavy, Luke McGrath, Joe Tomane and Nick McCarthy all came through their first game back unscathed.
Altogether 89 patients were examined using scintigraphy with 67Ga-citrate. Radiometric, scintigraphic, radioautographic and histological investigation of the operative material of 15 patients was conducted. The diagnostic accuracy was 92.1%, sensitivity 98.6%, and specificity 75%. The investigation of the operative material showed that accumulation of the radiopharmaceutical (RP) in the peripheral part of the tumor was 4-4.5 times higher, and in its central part 2-3 times higher, than in the unaffected pulmonary tissue. Elevated RP accumulation was found in the capsular wall in inflammatory processes, as well as in lymph nodes with anthracosis. Developed silver grains on histoautoradiograms of tumor tissue were localised mainly around isolated cells and groups of cells in the intercellular space.
import { Dispatch } from "redux";
import api, { apiV1 } from "../../api";
import { GlobalTime } from "./global";
import { ActionTypes } from "./types";
import { toUTCEpoch } from "../../utils/timeUtils";

export interface servicesListItem {
  serviceName: string;
  p99: number;
  avgDuration: number;
  numCalls: number;
  callRate: number;
  numErrors: number;
  errorRate: number;
}

export interface metricItem {
  timestamp: number;
  p50: number;
  p95: number;
  p99: number;
  numCalls: number;
  callRate: number;
  numErrors: number;
  errorRate: number;
}

export interface externalMetricsAvgDurationItem {
  avgDuration: number;
  timestamp: number;
}

export interface externalErrCodeMetricsItem {
  errorRate: number;
  externalHttpUrl: string;
  numErrors: number;
  timestamp: number;
}

export interface topEndpointListItem {
  p50: number;
  p90: number;
  p99: number;
  numCalls: number;
  name: string;
}

export interface externalMetricsItem {
  avgDuration: number;
  callRate: number;
  externalHttpUrl: string;
  numCalls: number;
  timestamp: number;
}

export interface dbOverviewMetricsItem {
  avgDuration: number;
  callRate: number;
  dbSystem: string;
  numCalls: number;
  timestamp: number;
}

export interface customMetricsItem {
  timestamp: number;
  value: number;
}

export interface getServicesListAction {
  type: ActionTypes.getServicesList;
  payload: servicesListItem[];
}

export interface externalErrCodeMetricsActions {
  type: ActionTypes.getErrCodeMetrics;
  payload: externalErrCodeMetricsItem[];
}

export interface externalMetricsAvgDurationAction {
  type: ActionTypes.getAvgDurationMetrics;
  payload: externalMetricsAvgDurationItem[];
}

export interface getServiceMetricsAction {
  type: ActionTypes.getServiceMetrics;
  payload: metricItem[];
}

export interface getExternalMetricsAction {
  type: ActionTypes.getExternalMetrics;
  payload: externalMetricsItem[];
}

export interface getDbOverViewMetricsAction {
  type: ActionTypes.getDbOverviewMetrics;
  payload: dbOverviewMetricsItem[];
}

export interface getTopEndpointsAction {
  type: ActionTypes.getTopEndpoints;
  payload: topEndpointListItem[];
}

export interface getFilteredTraceMetricsAction {
  type: ActionTypes.getFilteredTraceMetrics;
  payload: customMetricsItem[];
}

export const getServicesList = (globalTime: GlobalTime) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/services?start=" + globalTime.minTime + "&end=" + globalTime.maxTime;
    const response = await api.get<servicesListItem[]>(apiV1 + request_string);

    dispatch<getServicesListAction>({
      type: ActionTypes.getServicesList,
      payload: response.data,
      //PNOTE - response.data in the axios response has the actual API response
    });
  };
};

export const getDbOverViewMetrics = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/dbOverview?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime +
      "&step=60";
    const response = await api.get<dbOverviewMetricsItem[]>(
      apiV1 + request_string,
    );

    dispatch<getDbOverViewMetricsAction>({
      type: ActionTypes.getDbOverviewMetrics,
      payload: response.data,
    });
  };
};

export const getExternalMetrics = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/external?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime +
      "&step=60";
    const response = await api.get<externalMetricsItem[]>(apiV1 + request_string);

    dispatch<getExternalMetricsAction>({
      type: ActionTypes.getExternalMetrics,
      payload: response.data,
    });
  };
};

export const getExternalAvgDurationMetrics = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/externalAvgDuration?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime +
      "&step=60";
    const response = await api.get<externalMetricsAvgDurationItem[]>(
      apiV1 + request_string,
    );

    dispatch<externalMetricsAvgDurationAction>({
      type: ActionTypes.getAvgDurationMetrics,
      payload: response.data,
    });
  };
};

export const getExternalErrCodeMetrics = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/externalErrors?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime +
      "&step=60";
    const response = await api.get<externalErrCodeMetricsItem[]>(
      apiV1 + request_string,
    );

    dispatch<externalErrCodeMetricsActions>({
      type: ActionTypes.getErrCodeMetrics,
      payload: response.data,
    });
  };
};

export const getServicesMetrics = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/overview?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime +
      "&step=60";
    const response = await api.get<metricItem[]>(apiV1 + request_string);

    dispatch<getServiceMetricsAction>({
      type: ActionTypes.getServiceMetrics,
      payload: response.data,
      //PNOTE - response.data in the axios response has the actual API response
    });
  };
};

export const getTopEndpoints = (
  serviceName: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/service/top_endpoints?service=" +
      serviceName +
      "&start=" +
      globalTime.minTime +
      "&end=" +
      globalTime.maxTime;
    const response = await api.get<topEndpointListItem[]>(apiV1 + request_string);

    dispatch<getTopEndpointsAction>({
      type: ActionTypes.getTopEndpoints,
      payload: response.data,
      //PNOTE - response.data in the axios response has the actual API response
    });
  };
};

export const getFilteredTraceMetrics = (
  filter_params: string,
  globalTime: GlobalTime,
) => {
  return async (dispatch: Dispatch) => {
    let request_string =
      "/spans/aggregates?start=" +
      toUTCEpoch(globalTime.minTime) +
      "&end=" +
      toUTCEpoch(globalTime.maxTime) +
      "&" +
      filter_params;
    const response = await api.get<customMetricsItem[]>(apiV1 + request_string);

    dispatch<getFilteredTraceMetricsAction>({
      type: ActionTypes.getFilteredTraceMetrics,
      payload: response.data,
      //PNOTE - response.data in the axios response has the actual API response
    });
  };
};
#include "J2ObjC_header.h" #pragma push_macro("INCLUDE_ALL_LeSessionController") #ifdef RESTRICT_LeSessionController #define INCLUDE_ALL_LeSessionController 0 #else #define INCLUDE_ALL_LeSessionController 1 #endif #undef RESTRICT_LeSessionController #if __has_feature(nullability) #pragma clang diagnostic push #pragma GCC diagnostic ignored "-Wnullability" #pragma GCC diagnostic ignored "-Wnullability-completeness" #endif #if !defined (LeSessionController_) && (INCLUDE_ALL_LeSessionController || defined(INCLUDE_LeSessionController)) #define LeSessionController_ #define RESTRICT_LeMockController 1 #define INCLUDE_LeMockController 1 #include "LeMockController.h" @class Event; @class IOSByteArray; @class IOSObjectArray; @class JavaLangBoolean; @class JavaLangException; @class JavaUtilConcurrentLocksReentrantLock; @class JavaUtilUUID; @class LeDeviceMock; @class LeEventType; @class LeFormat; @class LeGattCharacteristicMock; @class LeGattServiceMock; @class LeRemoteDeviceMock; @class LeSessionController_SourceType; @protocol JavaLangRunnable; @protocol JavaUtilConcurrentLocksCondition; @protocol JavaUtilList; @protocol JavaUtilMap; @protocol LeCharacteristicListener; @protocol LeCharacteristicWriteListener; @protocol LeDeviceListener; @protocol LeGattCharacteristic; @protocol LeRemoteDeviceListener; @protocol Session; @interface LeSessionController : NSObject < LeMockController > { @public jint counter_; jboolean strict_; JavaUtilConcurrentLocksReentrantLock *lock_; id<JavaUtilConcurrentLocksCondition> condition_; jint source_; IOSObjectArray *values_; Event *currentEvent_; jboolean waitingForEvent_; NSString *sessionName_; id<Session> session_; volatile_id mockedEvents_; volatile_id stackedEvent_; jboolean sessionIsRunning_; jboolean stopSession_; jlong executeNextEventAfter_; id<JavaUtilMap> characteristicsValues_; JavaLangException *sessionException_; id<JavaUtilMap> characteristicListeners_; id<JavaUtilMap> characteristicWriteListeners_; id<JavaUtilMap> devices_; id<JavaUtilMap> deviceKeys_; id<JavaUtilMap> remoteDevices_; id<JavaUtilMap> remoteDeviceKeys_; id<JavaUtilMap> gattServices_; id<JavaUtilMap> gattServicesKeys_; id<JavaUtilMap> deviceListeners_; id<JavaUtilMap> deviceListenerKeys_; id<JavaUtilMap> characteristics_; id<JavaUtilMap> characteristicsKeys_; id<JavaUtilMap> remoteDeviceListeners_; } @property (readonly, copy, class) NSString *TAG NS_SWIFT_NAME(TAG); + (NSString *)TAG; #pragma mark Public - (instancetype __nonnull)initWithSession:(id<Session>)session; - (instancetype __nonnull)initWithSession:(id<Session>)session withBoolean:(jboolean)strict; - (void)addDeviceWithInt:(jint)key withLeDeviceMock:(LeDeviceMock *)mock; - (jint)characteristicGetIntValueWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)leGattCharacteristicMock withLeFormat:(LeFormat *)format withInt:(jint)index; - (IOSByteArray *)characteristicGetValueWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)leGattCharacteristicMock; - (void)characteristicReadWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)leGattCharacteristicMock; - (void)characteristicSetValueWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)leGattCharacteristicMock withByteArray:(IOSByteArray *)value; - (void)characteristicSetValueWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)leGattCharacteristicMock withByteArray:(IOSByteArray *)value withJavaLangBoolean:(JavaLangBoolean *)withResponse; - (jboolean)checkEventWithLeEventType:(LeEventType *)event withLeDeviceMock:(LeDeviceMock *)source 
withNSStringArray:(IOSObjectArray *)arguments; - (jboolean)checkEventWithLeEventType:(LeEventType *)event withLeGattCharacteristicMock:(LeGattCharacteristicMock *)source withNSStringArray:(IOSObjectArray *)arguments; - (jboolean)checkEventWithLeEventType:(LeEventType *)event withLeGattServiceMock:(LeGattServiceMock *)source withNSStringArray:(IOSObjectArray *)arguments; - (jboolean)checkEventWithLeEventType:(LeEventType *)event withLeRemoteDeviceMock:(LeRemoteDeviceMock *)source withNSStringArray:(IOSObjectArray *)arguments; - (jboolean)checkEventWithSourceIdWithLeEventType:(LeEventType *)eventType withLeSessionController_SourceType:(LeSessionController_SourceType *)sourceType withInt:(jint)source withNSStringArray:(IOSObjectArray *)arguments; - (void)deviceAddListenerWithLeDeviceMock:(LeDeviceMock *)leDeviceMock withLeDeviceListener:(id<LeDeviceListener>)listener; - (jboolean)deviceCheckBleHardwareAvailableWithLeDeviceMock:(LeDeviceMock *)leDeviceMock; - (jboolean)deviceIsBtEnabledWithLeDeviceMock:(LeDeviceMock *)leDeviceMock; - (void)deviceRemoveListenerWithLeDeviceMock:(LeDeviceMock *)leDeviceMock withLeDeviceListener:(id<LeDeviceListener>)listener; - (void)deviceStartScanningWithLeDeviceMock:(LeDeviceMock *)leDeviceMock; - (void)deviceStartScanningWithLeDeviceMock:(LeDeviceMock *)leDeviceMock withJavaUtilList:(id<JavaUtilList>)filters; - (void)deviceStartScanningWithLeDeviceMock:(LeDeviceMock *)leDeviceMock withJavaUtilUUIDArray:(IOSObjectArray *)uuids; - (void)deviceStopScanningWithLeDeviceMock:(LeDeviceMock *)leDeviceMock; - (id<LeCharacteristicListener>)getCharacteristicListenerWithInt:(jint)key; - (id<LeCharacteristicWriteListener>)getCharacteristicWriteListenerWithInt:(jint)key; - (id<LeDeviceListener>)getDeviceListenerWithInt:(jint)key; - (id<LeRemoteDeviceListener>)getRemoteDeviceListenerWithInt:(jint)key; - (id<Session>)getSession; - (JavaLangException *)getSessionException; - (void)pointReachedWithNSString:(NSString *)point; - (void)remoteDeviceAddListenerWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock withLeRemoteDeviceListener:(id<LeRemoteDeviceListener>)listener; - (void)remoteDeviceCloseWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (void)remoteDeviceConnectWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (void)remoteDeviceDisconnectWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (NSString *)remoteDeviceGetAddressWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (NSString *)remoteDeviceGetNameWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (void)remoteDeviceReadRssiWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (void)remoteDeviceRemoveListenerWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock withLeRemoteDeviceListener:(id<LeRemoteDeviceListener>)listener; - (void)remoteDeviceSetCharacteristicListenerWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock withLeCharacteristicListener:(id<LeCharacteristicListener>)listener withJavaUtilUUIDArray:(IOSObjectArray *)uuids; - (void)remoteDeviceSetCharacteristicWriteListenerWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock withLeCharacteristicWriteListener:(id<LeCharacteristicWriteListener>)listener withJavaUtilUUIDArray:(IOSObjectArray *)uuids; - (void)remoteDeviceStartServiceDiscoveryWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (void)remoteDeviceStartServiceDiscoveryWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock 
withJavaUtilUUIDArray:(IOSObjectArray *)uuids; - (jboolean)serviceEnableCharacteristicNotificationWithLeGattServiceMock:(LeGattServiceMock *)leGattServiceMock withJavaUtilUUID:(JavaUtilUUID *)characteristic; - (id<LeGattCharacteristic>)serviceGetCharacteristicWithLeGattServiceMock:(LeGattServiceMock *)leGattServiceMock withJavaUtilUUID:(JavaUtilUUID *)uuid; - (JavaUtilUUID *)serviceGetUuidWithLeGattServiceMock:(LeGattServiceMock *)leGattServiceMock; - (void)startDefaultSession; - (void)startSessionWithNSString:(NSString *)sessionName; - (void)stopSession; - (void)waitForEventWithEvent:(Event *)event; - (void)waitForFinishedSession; - (void)waitForPointWithNSString:(NSString *)point; - (jboolean)waitTillSessionStarted; #pragma mark Protected - (void)addDeviceListenerWithInt:(jint)key withLeDeviceListener:(id<LeDeviceListener>)listener; - (void)checkPause; - (LeGattServiceMock *)createGattServiceWithInt:(jint)key; - (LeGattServiceMock *)createGattServiceWithNSString:(NSString *)key; - (LeGattCharacteristicMock *)createOrReturnCharacteristicWithInt:(jint)key; - (LeGattCharacteristicMock *)createOrReturnCharacteristicWithNSString:(NSString *)key; - (LeRemoteDeviceMock *)createOrReturnRemoteDeviceWithInt:(jint)key withLeDeviceMock:(LeDeviceMock *)deviceMock; - (LeRemoteDeviceMock *)createRemoteDeviceWithInt:(jint)key withLeDeviceMock:(LeDeviceMock *)deviceMock; - (jboolean)eventBooleanValue; - (jboolean)eventBooleanValueWithInt:(jint)seq; - (jint)eventIntValue; - (NSString *)eventValue; - (NSString *)eventValueWithInt:(jint)seq; - (LeGattCharacteristicMock *)getCharacteristicWithInt:(jint)key; - (LeGattCharacteristicMock *)getCharacteristicWithNSString:(NSString *)key; - (jint)getCharacteristicKeyWithLeGattCharacteristicMock:(LeGattCharacteristicMock *)characteristic; - (id<LeCharacteristicListener>)getCharacteristicListenerWithNSString:(NSString *)key; - (id<LeCharacteristicWriteListener>)getCharacteristicWriteListenerWithNSString:(NSString *)key; - (LeDeviceMock *)getDeviceWithInt:(jint)key; - (LeDeviceMock *)getDeviceWithNSString:(NSString *)key; - (jint)getDeviceKeyWithLeDeviceMock:(LeDeviceMock *)device; - (id<LeDeviceListener>)getDeviceListenerWithNSString:(NSString *)key; - (jint)getDeviceListenerKeyWithLeDeviceListener:(id<LeDeviceListener>)deviceListener; - (jint)getGattServiceKeyWithLeGattServiceMock:(LeGattServiceMock *)LeGattServiceMock; - (LeRemoteDeviceMock *)getRemoteDeviceWithInt:(jint)key; - (LeRemoteDeviceMock *)getRemoteDeviceWithNSString:(NSString *)key; - (jint)getRemoteDeviceKeyWithLeRemoteDeviceMock:(LeRemoteDeviceMock *)leRemoteDeviceMock; - (id<LeRemoteDeviceListener>)getRemoteDeviceListenerWithNSString:(NSString *)key; - (void)startSessionInThread; - (void)updateCurrentEventWithEvent:(Event *)newCurrentEvent; - (void)waitForPointOrEventWithNSString:(NSString *)point; - (void)workOnEventWithEvent:(Event *)event; #pragma mark Package-Private - (void)runCurrentEventOnUiThreadWithJavaLangRunnable:(id<JavaLangRunnable>)runnable; - (jboolean)shouldLog; // Disallowed inherited constructors, do not use. 
- (instancetype __nonnull)init NS_UNAVAILABLE; @end J2OBJC_EMPTY_STATIC_INIT(LeSessionController) J2OBJC_FIELD_SETTER(LeSessionController, lock_, JavaUtilConcurrentLocksReentrantLock *) J2OBJC_FIELD_SETTER(LeSessionController, condition_, id<JavaUtilConcurrentLocksCondition>) J2OBJC_FIELD_SETTER(LeSessionController, values_, IOSObjectArray *) J2OBJC_FIELD_SETTER(LeSessionController, currentEvent_, Event *) J2OBJC_FIELD_SETTER(LeSessionController, sessionName_, NSString *) J2OBJC_FIELD_SETTER(LeSessionController, session_, id<Session>) J2OBJC_VOLATILE_FIELD_SETTER(LeSessionController, mockedEvents_, id<JavaUtilList>) J2OBJC_VOLATILE_FIELD_SETTER(LeSessionController, stackedEvent_, Event *) J2OBJC_FIELD_SETTER(LeSessionController, characteristicsValues_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, sessionException_, JavaLangException *) J2OBJC_FIELD_SETTER(LeSessionController, characteristicListeners_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, characteristicWriteListeners_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, devices_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, deviceKeys_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, remoteDevices_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, remoteDeviceKeys_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, gattServices_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, gattServicesKeys_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, deviceListeners_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, deviceListenerKeys_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, characteristics_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, characteristicsKeys_, id<JavaUtilMap>) J2OBJC_FIELD_SETTER(LeSessionController, remoteDeviceListeners_, id<JavaUtilMap>) inline NSString *LeSessionController_get_TAG(void); /*! INTERNAL ONLY - Use accessor function from above. 
*/ FOUNDATION_EXPORT NSString *LeSessionController_TAG; J2OBJC_STATIC_FIELD_OBJ_FINAL(LeSessionController, TAG, NSString *) FOUNDATION_EXPORT void LeSessionController_initWithSession_(LeSessionController *self, id<Session> session); FOUNDATION_EXPORT LeSessionController *new_LeSessionController_initWithSession_(id<Session> session) NS_RETURNS_RETAINED; FOUNDATION_EXPORT LeSessionController *create_LeSessionController_initWithSession_(id<Session> session); FOUNDATION_EXPORT void LeSessionController_initWithSession_withBoolean_(LeSessionController *self, id<Session> session, jboolean strict); FOUNDATION_EXPORT LeSessionController *new_LeSessionController_initWithSession_withBoolean_(id<Session> session, jboolean strict) NS_RETURNS_RETAINED; FOUNDATION_EXPORT LeSessionController *create_LeSessionController_initWithSession_withBoolean_(id<Session> session, jboolean strict); J2OBJC_TYPE_LITERAL_HEADER(LeSessionController) @compatibility_alias HoutbeckeRsLeMockLeSessionController LeSessionController; #endif #if !defined (LeSessionController_SourceType_) && (INCLUDE_ALL_LeSessionController || defined(INCLUDE_LeSessionController_SourceType)) #define LeSessionController_SourceType_ #define RESTRICT_JavaLangEnum 1 #define INCLUDE_JavaLangEnum 1 #include "java/lang/Enum.h" @class IOSObjectArray; typedef NS_ENUM(NSUInteger, LeSessionController_SourceType_Enum) { LeSessionController_SourceType_Enum_device = 0, LeSessionController_SourceType_Enum_remoteDevice = 1, LeSessionController_SourceType_Enum_gattService = 2, LeSessionController_SourceType_Enum_gattCharacteristic = 3, }; @interface LeSessionController_SourceType : JavaLangEnum @property (readonly, class, nonnull) LeSessionController_SourceType *device NS_SWIFT_NAME(device); @property (readonly, class, nonnull) LeSessionController_SourceType *remoteDevice NS_SWIFT_NAME(remoteDevice); @property (readonly, class, nonnull) LeSessionController_SourceType *gattService NS_SWIFT_NAME(gattService); @property (readonly, class, nonnull) LeSessionController_SourceType *gattCharacteristic NS_SWIFT_NAME(gattCharacteristic); + (LeSessionController_SourceType * __nonnull)device; + (LeSessionController_SourceType * __nonnull)remoteDevice; + (LeSessionController_SourceType * __nonnull)gattService; + (LeSessionController_SourceType * __nonnull)gattCharacteristic; #pragma mark Public + (LeSessionController_SourceType *)valueOfWithNSString:(NSString *)name; + (IOSObjectArray *)values; #pragma mark Package-Private - (LeSessionController_SourceType_Enum)toNSEnum; @end J2OBJC_STATIC_INIT(LeSessionController_SourceType) /*! INTERNAL ONLY - Use enum accessors declared below. 
*/ FOUNDATION_EXPORT LeSessionController_SourceType *LeSessionController_SourceType_values_[]; inline LeSessionController_SourceType *LeSessionController_SourceType_get_device(void); J2OBJC_ENUM_CONSTANT(LeSessionController_SourceType, device) inline LeSessionController_SourceType *LeSessionController_SourceType_get_remoteDevice(void); J2OBJC_ENUM_CONSTANT(LeSessionController_SourceType, remoteDevice) inline LeSessionController_SourceType *LeSessionController_SourceType_get_gattService(void); J2OBJC_ENUM_CONSTANT(LeSessionController_SourceType, gattService) inline LeSessionController_SourceType *LeSessionController_SourceType_get_gattCharacteristic(void); J2OBJC_ENUM_CONSTANT(LeSessionController_SourceType, gattCharacteristic) FOUNDATION_EXPORT IOSObjectArray *LeSessionController_SourceType_values(void); FOUNDATION_EXPORT LeSessionController_SourceType *LeSessionController_SourceType_valueOfWithNSString_(NSString *name); FOUNDATION_EXPORT LeSessionController_SourceType *LeSessionController_SourceType_fromOrdinal(NSUInteger ordinal); J2OBJC_TYPE_LITERAL_HEADER(LeSessionController_SourceType) #endif #if __has_feature(nullability) #pragma clang diagnostic pop #endif #pragma pop_macro("INCLUDE_ALL_LeSessionController")
Capillary Haemangioma Of Lower Lip In An African Patient. Haemangioma is one of the commonest benign vascular tumours, affecting 10-12% of infants. Approximately 50% of haemangiomas resolve by the age of 5 years and 90% resolve by 9 years of age. Rarely, haemangiomas persist and require treatment. 1 A 26-year-old African male presented with a painful swelling of the lower lip of 1 year's duration. The swelling was initially small and gradually reached its present size. The past medical history and family history of the patient were non-contributory to the presenting symptom. Clinical examination of the patient revealed a reddish to purplish, fluctuant swelling involving the whole lower lip, measuring about 3 x 2 cm. (Figure-1) On palpation it was soft to firm; no signs of discharge or ulceration were found. Blanching was positive on diascopy. Fine needle aspiration cytology was performed, and repeated aspirations yielded fresh blood. Microscopic examination of the aspirate revealed a collection of numerous RBCs. (Figure-2) 2 ml of 5% ethamolin was injected into the lesion and an incisional biopsy was taken under local anaesthesia. The specimen was submitted to the Department of Oral and Maxillofacial Pathology for microscopic evaluation. Microscopic examination of the soft tissue section revealed a highly vascular connective tissue stroma composed of numerous dilated blood vessels containing RBCs and lined by endothelial cells. (Figure 3 & 4) The stroma was sparse, with a minimal inflammatory component. An area of haemorrhage was noted, with a collection of abundant haemosiderin pigment. The overlying epithelium was normal stratified squamous epithelium without dysplasia. Based on these features, a final diagnosis of capillary haemangioma was given. Pigmented lesions are commonly found in the mouth. Such lesions represent a variety of clinical entities, ranging from physiologic changes to manifestations of systemic illness and malignant neoplasms. 1 Haemangiomas are considered common in the head and neck but rare in the oral cavity and lips. 2 The differential diagnoses of haemangioma include lymphangioma, intravascular papillary endothelial hyperplasia and pyogenic granuloma. 3 A colour Doppler ultrasound is required to confirm the presence of a feeder vessel. Sclerosing agents may be used for diagnostic and therapeutic purposes; in the present case, ethamolin was also used before the incisional biopsy. Growing haemangiomas can be treated effectively by systemic drug therapy, sclerotherapy, laser therapy or combined therapy.
President Trump is expected to announce that he will not certify the Iran deal in a speech on Friday, October 13. As Elena Chachko and Suzanne Maloney have already explained, doing so will not necessarily lead to the reimposition of sanctions. It will, however, trigger a period of intense congressional debate and geopolitical uncertainty. According to the deal's architects, this is enough to unleash disastrous foreign policy repercussions. Perhaps these critics are correct. However, an honest reckoning with the president's decision must include not just projections about its effects, but also an evaluation of its causes. Foremost among these is the framework created by decades of American sanctions legislation. Under the Iran Nuclear Agreement Review Act (INARA), the president must periodically make two determinations: a factual one, that Iran is complying with the deal, and a policy one, that continued sanctions relief is vital to the national security interests of the United States. The existence of these twin requirements places the president's expected certification decision in a different, even reasonable, light. Rather than claiming Iran is in material breach when, according to most judgements, Iran is largely complying, the president can claim that his failure to certify is based on his evaluation of American interests. This claim has the virtue of truth. For better or worse, President Trump really does seem to believe that the U.S. would be better off taking a harder line with Iran. So, the president's failure to certify is best understood as a policy disagreement rather than a politically motivated lie about Iranian actions. Just as Ben has encouraged readers not to become accustomed to presidential lying, recognizing this difference is crucial. Of course, President Trump did not draft the INARA. Congress did. If members of Congress dislike that certification requires a policy judgement, in addition to a factual one about compliance, they have only themselves to blame. In fact, the legislative origins of the dual determinations framework lie even deeper than the INARA. It's crucial to remember that the American sanctions regime did not begin with Iran's nuclear program. It accrued over decades, frequently referencing Iran's nuclear program, but just as frequently citing Iran's heinous human rights record, support for terrorism, and destabilizing regional role. During the nuclear negotiations, American officials made a show of only bargaining away "nuclear-related" sanctions. Yet the troublesome truth is that there is no such category. The vast majority of sanctions punished Iran both for its nuclear behavior and for other activities in conflict with American interests. It is precisely because these sanctions are complex, multipurpose policy tools that Congress sought to provide presidents with maximum flexibility. Therefore, the vast majority of sanctions contained provisions permitting a president to waive or suspend sanctions when he determined it was "essential to the national interest" to do so. Because sanctions were never just about nuclear weapons, presidents would be required to make wider judgements as they adjusted sanctions to control Iranian behavior. This suspension power was precisely what President Obama used so effectively in reaching a deal without congressional support. In doing so, he operated within the legal framework that already existed. In accordance with underlying sanctions legislation, he affirmed, publicly and repeatedly, that these suspensions were essential to the national interest. In the lead-up to the nuclear deal, opponents lacked the votes in Congress necessary to block it. But a strong bipartisan majority did agree that the pre-existing certification requirements ought to continue to limit future presidents.
Consequently, the INARA’s demand for recertification that sanctions relief is “vital to the national interest” is a retention of a standard at the heart of decades of American sanctions policy. None of this means that President Trump’s evaluation of the national interest is particularly wise. But it does mean that criticism based entirely on the claim that “Iran is in compliance” is inadequate. Indeed, decades of law dictate that the president must look beyond compliance alone when evaluating whether to continue the suspension of sanctions. Of course, there is still some rhetorical slipperiness afoot. For weeks, the White House has signaled that a central rationale for decertification will be Iran’s violation of “the spirit of the deal.” This phrase is frustratingly ambiguous. But here too the president has a point. In recent months, Iran has repeatedly conducted ballistic missile tests in defiance of the very same U.N. Security Council (UNSC) resolution that implemented the Joint Comprehensive Plan of Action (JCPOA). Iran is specifically “called upon not to undertake any activity related to ballistic missiles designed to be capable of delivering nuclear weapons, including launches using such ballistic missile technology.” Crucially, the same resolution explicitly states that the parties’ “participation in the JCPOA is contingent upon the United Nations Security Council… requir[ing] States to comply with the provisions in this statement.” So in a sense, restrictions on Iranian ballistic missile activity are an element of the deal. Yet, the same resolution noticeably (and inexcusably) avoids binding legal language for these restrictions. In contrast to other provisions which are “decided” by the Security Council under Article 41, the missile restrictions are simply issued as an exhortation. Moreover, Footnote 3 of the JCPOA’s Annex V explicitly warns that “[t]he provisions of this [Security Council] Resolution do not constitute provisions of this JCPOA.” So as a technical matter, the missile launches also seem to lie outside of the deal itself. Hence the strange conclusion, shared by the White House (and interestingly, the French): Iran may be “technically compliant,” but is not abiding by the agreement’s “spirit.” This also follows President Trump’s corollary: ongoing sanctions relief is no longer in the U.S. national interest. As should be clear, the administration’s verbal gymnastics are rooted in the diplomatic dance that went into the deal itself. The U.S. and its allies wished to retain ongoing restrictions on Iran’s ballistic missile activity as part of the deal. Iran adamantly refused. Where Western negotiators saw such activities as an illegal (under prior UNSC resolutions) and aggressive supplement to Iran’s nuclear program, Iran claimed that any such restrictions were themselves a form of punishment. Rather than resolving the issue, diplomats fudged it. The result gives the appearance of restraining Iran’s missile program, but without legal teeth. It does so in a way that ensures that both a bellicose Iran and compliance hawks can self-righteously condemn the other. When combined with the “national interests” certification requirement—and of course, a singularly unpredictable president—this sort of ambiguity becomes radioactive. Sunday’s deadline also reveals a sad, if unsurprising irony: artful diplomacy may have facilitated the deal’s formation, but overly clever drafting may also lead to its unraveling. 
The president will make a choice on Sunday, and he should bear full responsibility for it. But decades of prior choices, particularly legislative and drafting choices, created the framework for Sunday's decision. Prior Congresses, presidents, and the public ought to recognize this. Blame, or credit, will be due.
Closeness Centrality Based Cluster Head Selection Algorithm for Large Scale WSNs Low-energy adaptive clustering hierarchy (LEACH) is an adaptive clustering routing protocol proposed to efficiently manage energy consumption in Wireless Sensor Networks (WSNs). In this protocol, sensor nodes are organized into clusters and elect cluster heads randomly. Sensor nodes in each cluster transmit their data directly to the cluster head. Cluster heads gather the data and transmit it to the base station. Here, the random selection of cluster heads helps to distribute the energy dissipation evenly among all sensor nodes. However, this mechanism consumes relatively more energy in large-scale WSNs, as the distance between a newly elected cluster head and the sensor nodes may not be optimal. Intuitively, a LEACH protocol with deterministic selection of the cluster head based on the minimum distance to all sensor nodes in the cluster consumes less energy than random selection. Although this method increases the lifetime of the WSN, it is computationally inefficient for large-scale WSNs. In our work, we select the cluster head based on the closeness centrality measure. We observe a significant reduction of energy consumption over the LEACH protocol with less computational complexity. We also prove that deterministic selection of the cluster head based on the closeness centrality measure improves the lifetime of the WSN significantly over random selection.
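A minimal sketch of the selection rule described above, assuming cluster members communicate directly with the head (as in LEACH) so that closeness can be computed from pairwise Euclidean distances; the coordinates and function names are illustrative, not taken from the paper.

import math

def select_cluster_head(positions):
    """Return the index of the node with the highest closeness centrality,
    i.e. the smallest total distance to all other cluster members."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best_idx, best_score = None, -1.0
    for i, p in enumerate(positions):
        total = sum(dist(p, q) for j, q in enumerate(positions) if j != i)
        score = (len(positions) - 1) / total if total else 0.0
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# toy cluster: (x, y) coordinates of sensor nodes
cluster = [(0, 0), (2, 1), (1, 1), (3, 4), (1, 2)]
print(select_cluster_head(cluster))  # prints the index of the most central node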
Design and Development of Optimized Scheduling Algorithm for Software As A Service Based Applications in Secure Cloud Environment In layman's terms, cloud computing can be described as the sharing of computing services over the internet. In this paper, a brief introduction to cloud computing, its features, applications, and limitations is presented. Because cloud computing involves the execution of multiple activities simultaneously, management is required for the smooth and secure functioning of applications. Different load balancing techniques for the efficient resource provisioning of applications are also discussed. The paper concludes with a description of a framework for the development of an optimized scheduling algorithm for Software as a Service based applications in a secure cloud environment.
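The abstract does not name the load balancing techniques it surveys; as a generic point of reference, a least-loaded dispatch policy, one of the simplest balancing strategies, can be sketched as follows (the task costs and server count are invented for illustration).

import heapq

def dispatch(task_costs, n_servers):
    """Assign each task to the currently least-loaded server."""
    heap = [(0.0, s) for s in range(n_servers)]  # (accumulated load, server id)
    heapq.heapify(heap)
    assignment = []
    for cost in task_costs:
        load, server = heapq.heappop(heap)   # server with the lowest load so far
        assignment.append(server)
        heapq.heappush(heap, (load + cost, server))
    return assignment

print(dispatch([5, 3, 8, 1, 9, 2], n_servers=3))  # -> [0, 1, 2, 1, 1, 0]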
// GetUserByNameOrId retrieves a user within an admin organization
// by either name or ID.
// Returns a valid user if it exists. If it doesn't, returns nil and ErrorEntityNotFound.
// If argument refresh is true, the AdminOrg will be refreshed before searching.
// This is usually done after creating, modifying, or deleting users.
// If it is false, it will search within the data already in memory (useful when
// looping through the users and we know that no changes have occurred in the meantime).
func (adminOrg *AdminOrg) GetUserByNameOrId(identifier string, refresh bool) (*OrgUser, error) {
	getByName := func(name string, refresh bool) (interface{}, error) {
		return adminOrg.GetUserByName(name, refresh)
	}
	getById := func(id string, refresh bool) (interface{}, error) {
		return adminOrg.GetUserById(id, refresh)
	}

	entity, err := getEntityByNameOrId(getByName, getById, identifier, refresh)
	if entity == nil {
		return nil, err
	}
	return entity.(*OrgUser), err
}
/**
 * Checks for collisions between spaceships. When two spaceships collide,
 * the collidedWithAnotherShip() method of each spaceship is called so that
 * each ship can adjust its speed accordingly.
 */
private void checkCollisions() {
    for (int i = 0; i < this.ships.length; i++) {
        for (int j = i + 1; j < this.ships.length; j++) {
            if (this.ships[i].getPhysics().testCollisionWith(this.ships[j].getPhysics())) {
                this.ships[i].collidedWithAnotherShip();
                this.ships[j].collidedWithAnotherShip();
            }
        }
    }
}
package compat

import (
	"context"

	"github.com/golang/protobuf/ptypes/empty"
	"github.com/pkg/errors"
	"google.golang.org/grpc"

	"github.com/networkservicemesh/networkservicemesh/controlplane/api/connection"
	local "github.com/networkservicemesh/networkservicemesh/controlplane/api/local/connection"
	remote "github.com/networkservicemesh/networkservicemesh/controlplane/api/remote/connection"
)

type localMonitorAdapter struct {
	connection.MonitorConnectionClient
}

func (l localMonitorAdapter) MonitorConnections(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (local.MonitorConnection_MonitorConnectionsClient, error) {
	value, err := l.MonitorConnectionClient.MonitorConnections(ctx, &connection.MonitorScopeSelector{NetworkServiceManagers: make([]string, 1)}, opts...)
	return newLocalMonitorConnection_MonitorConnectionsClientAdapter(value), err
}

func NewLocalMonitorAdapter(conn *grpc.ClientConn) local.MonitorConnectionClient {
	return &localMonitorAdapter{
		MonitorConnectionClient: connection.NewMonitorConnectionClient(conn),
	}
}

type localMonitorConnection_MonitorConnectionsClientAdapter struct {
	connection.MonitorConnection_MonitorConnectionsClient
}

func (l localMonitorConnection_MonitorConnectionsClientAdapter) Recv() (*local.ConnectionEvent, error) {
	if l.MonitorConnection_MonitorConnectionsClient != nil {
		m, err := l.MonitorConnection_MonitorConnectionsClient.Recv()
		if err != nil {
			return nil, err
		}
		return ConnectionEventUnifiedToLocal(m), nil
	}
	return nil, errors.New("localMonitorConnection_MonitorConnectionsClientAdapter.MonitorConnection_MonitorConnectionsClient == nil")
}

func newLocalMonitorConnection_MonitorConnectionsClientAdapter(adapted connection.MonitorConnection_MonitorConnectionsClient) *localMonitorConnection_MonitorConnectionsClientAdapter {
	return &localMonitorConnection_MonitorConnectionsClientAdapter{MonitorConnection_MonitorConnectionsClient: adapted}
}

type remoteMonitorConnection_MonitorConnectionsClientAdapter struct {
	connection.MonitorConnection_MonitorConnectionsClient
}

func (r remoteMonitorConnection_MonitorConnectionsClientAdapter) Recv() (*remote.ConnectionEvent, error) {
	if r.MonitorConnection_MonitorConnectionsClient != nil {
		m, err := r.MonitorConnection_MonitorConnectionsClient.Recv()
		if err != nil {
			return nil, err
		}
		return ConnectionEventUnifiedToRemote(m), nil
	}
	return nil, errors.New("remoteMonitorConnection_MonitorConnectionsClientAdapter.MonitorConnection_MonitorConnectionsClient == nil")
}

func newRemoteMonitorConnection_MonitorConnectionsClientAdapter(adapted connection.MonitorConnection_MonitorConnectionsClient) *remoteMonitorConnection_MonitorConnectionsClientAdapter {
	return &remoteMonitorConnection_MonitorConnectionsClientAdapter{MonitorConnection_MonitorConnectionsClient: adapted}
}

type remoteMonitorAdapter struct {
	connection.MonitorConnectionClient
}

func NewRemoteMonitorAdapter(conn *grpc.ClientConn) remote.MonitorConnectionClient {
	return &remoteMonitorAdapter{
		MonitorConnectionClient: connection.NewMonitorConnectionClient(conn),
	}
}

func (r remoteMonitorAdapter) MonitorConnections(ctx context.Context, selector *remote.MonitorScopeSelector, opts ...grpc.CallOption) (remote.MonitorConnection_MonitorConnectionsClient, error) {
	value, err := r.MonitorConnectionClient.MonitorConnections(ctx, MonitorScopeSelectorRemoteToUnified(selector), opts...)
	return newRemoteMonitorConnection_MonitorConnectionsClientAdapter(value), err
}
The World Health Organization recommends exclusive breastfeeding up to the age of six months and continuation of partial breastfeeding up to the age of two years, in addition to nutritionally adequate and safe food. In Mauritania, despite some progress, most mothers do not comply with these recommendations. The aim of this study, conducted in Nouakchott, was to evaluate breastfeeding and feeding practices, and to measure factors associated with achievement of the optimal duration of exclusive breastfeeding. The methodology combined quantitative and qualitative approaches. A descriptive cross-sectional study was conducted by questionnaire among 330 mothers from different departments of the capital. Twenty semi-structured interviews were then conducted with Mauritanian grandmothers in order to understand their roles and perceptions regarding infant feeding. Before the age of 6 months, the exclusive breastfeeding rate was 18.4%, the predominant breastfeeding rate was 44.3%, and the partial breastfeeding with milk rate was 28.1%. In addition, 9.2% of infants received supplementary feeding. We found that 50.5% of mothers were aware of the optimal duration of exclusive breastfeeding, but only 14.2% complied with this recommendation. The factors significantly associated with compliance with the optimal duration of exclusive breastfeeding were maternal age over 35 years and multiparity. Interviews revealed that grandmothers knew about some of the nutritional recommendations, but denied their relevance based on their experience. Their advice contradicted certain medical recommendations. Our study revealed inadequacies concerning the mothers' knowledge and, more frequently, their practices in terms of infant feeding. The gap between knowledge and practice can essentially be explained by the relative importance attributed to the recommendations by the mothers, as well as the confrontation between medical recommendations and grandmothers' traditional knowledge.
Gambling Behavior Severity and Psychological, Family, and Contextual Variables: A Comparative Analysis This study compares 3 groups consisting of individuals with no gambling problem, those with some problem, and pathological gamblers, according to the following 4 levels of analysis: social context (i.e., accessibility and social acceptance), family context (i.e., family of origin issues, family functioning, and family quality of life), marital issues (i.e., marital satisfaction and adjustment), and individual issues (i.e., congruence, differentiation of self, and psychopathological symptoms). The study protocol of 8 standardized scales, a sociodemographic questionnaire, and 6 independent questions was administered to 331 adults. The main results indicate that although the 2 groups of nonpathological gamblers exhibited differing levels of gambling severity, they did not differ statistically, suggesting that gambling-related problems were only evident when a pathological level was attained. The pathological gamblers exhibited a greater number of family, marital, and individual difficulties compared to the other 2 groups.
/**
 * Finds the element with the given id.
 * @param id the id of the element to find.
 * @return the Item with the given id, or null if no such element exists.
 */
protected Item findById(String id) {
    Item result = null;
    List<Item> itemList = getItems(this.prepStatFindById, id);
    if (!itemList.isEmpty()) {
        result = itemList.get(0);
    }
    return result;
}
/*
 * Copyright 2016 Centro, Inc.
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package net.centro.rtb.monitoringcenter.metrics.system.jvm;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricSet;
import net.centro.rtb.monitoringcenter.util.MetricNamingUtil;

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class JvmMemoryMetricSet implements MetricSet {
    private final MemoryMXBean memoryMXBean;
    private final List<MemoryPoolMXBean> memoryPoolMXBeans;

    private MemoryUsageStatus totalMemoryUsageStatus;
    private MemoryUsageStatus heapMemoryUsageStatus;
    private MemoryUsageStatus nonHeapMemoryUsageStatus;
    private List<MemoryPoolStatus> memoryPoolStatuses;

    private Map<String, Metric> metricsByNames;

    JvmMemoryMetricSet() {
        this.memoryMXBean = ManagementFactory.getMemoryMXBean();
        this.memoryPoolMXBeans = ManagementFactory.getMemoryPoolMXBeans();

        Map<String, Metric> metricsByNames = new HashMap<>();

        // Total
        String totalNamespace = "total";

        final Gauge<Long> totalUsedMemoryInBytesGauge = new Gauge<Long>() {
            @Override
            public Long getValue() {
                return memoryMXBean.getHeapMemoryUsage().getUsed() + memoryMXBean.getNonHeapMemoryUsage().getUsed();
            }
        };
        metricsByNames.put(MetricNamingUtil.join(totalNamespace, "usedInBytes"), totalUsedMemoryInBytesGauge);

        final Gauge<Long> totalCommittedMemoryInBytesGauge = new Gauge<Long>() {
            @Override
            public Long getValue() {
                return memoryMXBean.getHeapMemoryUsage().getCommitted() + memoryMXBean.getNonHeapMemoryUsage().getCommitted();
            }
        };
        metricsByNames.put(MetricNamingUtil.join(totalNamespace, "committedInBytes"), totalCommittedMemoryInBytesGauge);

        this.totalMemoryUsageStatus = new MemoryUsageStatus() {
            @Override
            public Gauge<Long> getInitialSizeInBytesGauge() {
                return null;
            }

            @Override
            public Gauge<Long> getUsedMemoryInBytesGauge() {
                return totalUsedMemoryInBytesGauge;
            }

            @Override
            public Gauge<Long> getMaxAvailableMemoryInBytesGauge() {
                return null;
            }

            @Override
            public Gauge<Long> getCommittedMemoryInBytesGauge() {
                return totalCommittedMemoryInBytesGauge;
            }

            @Override
            public Gauge<Double> getUsedMemoryPercentageGauge() {
                return null;
            }
        };

        // Heap
        JvmMemoryUsageMetricSet heapMemoryUsageMetricSet = new JvmMemoryUsageMetricSet(new MemoryUsageProvider() {
            @Override
            public MemoryUsage get() {
                return memoryMXBean.getHeapMemoryUsage();
            }
        });
        metricsByNames.put("heap", heapMemoryUsageMetricSet);
        this.heapMemoryUsageStatus = heapMemoryUsageMetricSet;

        // Non-heap
        JvmMemoryUsageMetricSet nonHeapMemoryUsageMetricSet = new JvmMemoryUsageMetricSet(new MemoryUsageProvider() {
            @Override
            public MemoryUsage get() {
                return memoryMXBean.getNonHeapMemoryUsage();
            }
        });
        metricsByNames.put("nonHeap", nonHeapMemoryUsageMetricSet);
        this.nonHeapMemoryUsageStatus = nonHeapMemoryUsageMetricSet;

        // Memory pools
        List<MemoryPoolStatus> memoryPoolStatuses = new ArrayList<>();
        for (final MemoryPoolMXBean pool : memoryPoolMXBeans) {
            final String memoryPoolName = pool.getName();
            final String poolNamespace = MetricNamingUtil.join("pools", MetricNamingUtil.sanitize(memoryPoolName));

            final JvmMemoryUsageMetricSet memoryUsageMetricSet = new JvmMemoryUsageMetricSet(new MemoryUsageProvider() {
                @Override
                public MemoryUsage get() {
                    return pool.getUsage();
                }
            });
            metricsByNames.put(poolNamespace, memoryUsageMetricSet);

            final Gauge<Long> usedAfterGcInBytesGauge;
            if (pool.getCollectionUsage() != null) {
                usedAfterGcInBytesGauge = new Gauge<Long>() {
                    @Override
                    public Long getValue() {
                        return pool.getCollectionUsage().getUsed();
                    }
                };
                metricsByNames.put(MetricNamingUtil.join(poolNamespace, "usedAfterGcInBytes"), usedAfterGcInBytesGauge);
            } else {
                usedAfterGcInBytesGauge = null;
            }

            memoryPoolStatuses.add(new MemoryPoolStatus() {
                @Override
                public String getName() {
                    return memoryPoolName;
                }

                @Override
                public MemoryUsageStatus getMemoryUsageStatus() {
                    return memoryUsageMetricSet;
                }

                @Override
                public Gauge<Long> getUsedAfterGcInBytesGauge() {
                    return usedAfterGcInBytesGauge;
                }
            });
        }
        this.memoryPoolStatuses = memoryPoolStatuses;

        this.metricsByNames = metricsByNames;
    }

    @Override
    public Map<String, Metric> getMetrics() {
        return Collections.unmodifiableMap(metricsByNames);
    }

    public MemoryUsageStatus getTotalMemoryUsageStatus() {
        return totalMemoryUsageStatus;
    }

    public MemoryUsageStatus getHeapMemoryUsageStatus() {
        return heapMemoryUsageStatus;
    }

    public MemoryUsageStatus getNonHeapMemoryUsageStatus() {
        return nonHeapMemoryUsageStatus;
    }

    public List<MemoryPoolStatus> getMemoryPoolStatuses() {
        return Collections.unmodifiableList(memoryPoolStatuses);
    }
}
For a movie star on the red carpet, it's hard to go wrong with a classic Gucci suit. Well, almost. Though luxury Italian craftsmanship is about as close to suit nirvana as you can get, it's still important to pay attention to the details. Miss any one of several key tailoring points, and it won't make a difference whether your suit is from Gucci or Rocco's Discount Clothing Shed. This past week, both Jake Gyllenhaal and Ben Affleck attended events wearing a navy Marseille suit from Gucci. And while one of them looked about as crisp and polished as you can get, the other looked more like he was attending his first job interview. With the perfect tailoring on his Gucci Made to Order version of the suit, Jake Gyllenhaal could give '60s Frank Sinatra a run for his money. Nothing is off or out of place. For starters, the sleeves are a perfect length, revealing, with almost mathematical precision, about a quarter inch of his shirt cuff. Affleck, on the other hand, didn't go made-to-order. He also, it seems, skipped the trip to the tailor. As a result, he's showing way too much shirt cuff. The reason is simple: the shirt sleeve is too long. His cuff goes all the way down to the first knuckle of his thumb. Look back to master Gyllenhaal and you can see that it should hit just where your hand meets your wrist. Now we move southward and see that JG's trousers are tailored without a break, so the hems just barely kiss the tops of his cap-toe oxfords. Over on team Affleck, however, we see that the bottoms of his trousers are stacked up like traffic on an L.A. freeway. That look is fine for jeans and a pair of high tops, but for a suit it feels sloppy. Affleck's suit also appears to be a little too tight in the middle—hence the shirt reveal below his top button. (Again, though, that could be because of the way he's standing.) It's also a little too loose everywhere else, most notably the sleeves, which are an easy fix for most tailors. Jake's jacket is tailored into one perfect fluid line, while Ben's looks like the EKG reading of a heart arrhythmia. These are two good-looking actors who were genetically engineered to wear suits. And yet, with just a few subtle tweaks, one looks like Don Draper redux while the other looks like he borrowed something from his dad's closet. The small things matter, and they can drastically alter the look of your clothing. Especially when it comes to suits. Never forget that.
use crate::physics::{
    colliders::ray_collider::{RayCollider, RayCollision},
    ray::Ray,
};

use super::aabb::BoundingBox;

pub trait Shape {
    fn collide_ray(&self, ray: &Ray, t_min: f64, t_max: f64) -> Option<RayCollision>;
    fn get_bounding_box(&self, frame_start_time: f64, frame_end_time: f64) -> super::aabb::AABB;
}

impl<T> RayCollider for T
where
    T: Shape + Send + Sync,
{
    fn collide_ray(&self, ray: &Ray, t_min: f64, t_max: f64) -> Option<RayCollision> {
        // Qualified call: both traits declare collide_ray, so delegate
        // explicitly to the Shape implementation to avoid ambiguity.
        Shape::collide_ray(self, ray, t_min, t_max)
    }
}

impl<T> BoundingBox for T
where
    T: Shape + Send + Sync,
{
    fn get_bounding_box(&self, frame_start_time: f64, frame_end_time: f64) -> super::aabb::AABB {
        Shape::get_bounding_box(self, frame_start_time, frame_end_time)
    }
}
package cn.zhuoqianmingyue.aop.annotationConfiguration;

import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class UserTest {

    @Test
    public void run() {
        ApplicationContext context = new ClassPathXmlApplicationContext("ioc-aop-annotationConfiguration.xml");
        User user = (User) context.getBean("user");
        user.run();
    }

    @Test
    public void speak() {
        ApplicationContext context = new ClassPathXmlApplicationContext("ioc-aop-annotationConfiguration.xml");
        User user = (User) context.getBean("user");
        user.speak("hello !");
    }

    @Test
    public void add() {
        ApplicationContext context = new ClassPathXmlApplicationContext("ioc-aop-annotationConfiguration.xml");
        User user = (User) context.getBean("user");
        user.add(3, 4);
    }
}
Differential association between obesity and coronary artery disease according to the presence of diabetes in a Korean population Background Coronary artery disease (CAD) is a major cardiovascular complication in diabetic patients. Despite the significant association between obesity and diabetes, the majority of diabetic subjects are not obese in Asian populations. This study evaluated the association between obesity and CAD according to diabetes status in a Korean population. Methods The association between obesity and CAD, using the parameters of any plaque, obstructive plaque, and coronary artery calcium score (CACS) >100 according to the presence of diabetes, was evaluated in 7,234 Korean adults who underwent multi-detector computed tomography for general health evaluations. Obesity was defined as a body mass index (BMI) ≥25 kg/m². Results The prevalence of obesity was significantly higher in diabetic subjects than in non-diabetic subjects, but the majority of the diabetic subjects were non-obese (48% vs. 37%, p <0.001). The incidence of any plaque (58% vs. 29%), obstructive plaque (20% vs. 6%), and CACS >100 (20% vs. 6%) was significantly higher in diabetic patients than in non-diabetic subjects (p <0.001, respectively). The incidence of any plaque (33% vs. 26%, p <0.001), obstructive plaque (7% vs. 6%, p=0.014), and CACS >100 (8% vs. 6%, p=0.002) was significantly higher in non-diabetic subjects with obesity than in those without obesity, but the incidence of the coronary parameters did not differ in diabetic subjects according to obesity status. After adjusting for confounding risk factors including age, gender, hypertension, dyslipidemia, current smoking, and mild renal dysfunction, obesity was independently associated with increased risks of any plaque (OR 1.14) and CACS >100 (OR 1.31) only in non-diabetic subjects (p <0.05, respectively). Multiple logistic regression models revealed that diabetes was independently associated with all coronary parameters. Conclusion Despite a significantly higher prevalence of obesity in diabetic subjects than in non-diabetic subjects, obesity is associated with the presence of any plaque and severe coronary calcification only in subjects without established diabetes in the Korean population. Background Diabetes is significantly associated with an increased risk of coronary artery disease (CAD). Although the pathogenesis of diabetes is complicated by multiple metabolism-related problems, a deterioration of insulin secretion and an aggravation of insulin resistance are the 2 central defects in its pathogenesis. Obesity is clearly one of the major factors underlying insulin resistance. However, the criterion for obesity depends on ethnicity, and the prevalence of obesity differs accordingly. In addition, despite the substantial increases in the prevalence of obesity and diabetes in Asia, the clinical features of the development of diabetes in Asia are explicitly different from those in other parts of the world, with diabetes developing in a much shorter time, at a younger age, and in subjects with a much lower body mass index (BMI). The majority of individuals with diabetes are not obese, even with obesity defined as a BMI of more than 25 kg/m², and significant weight loss is observed during the course of the development of diabetes in the Korean population.
Furthermore, several studies on the pathogenesis of type 2 diabetes reported that impaired insulin secretion is more prominent than insulin resistance, even in the status of impaired glucose tolerance. Accordingly, whether obesity is an independent predictor for CAD in Asian diabetic subjects may be an important issue, but data are scarce in Asian populations. The coronary artery calcium (CAC) score (CACS), which was developed to quantify the extent of CAC, is a good marker of coronary atherosclerosis. CACS is closely correlated with the volume of coronary plaque measured at autopsy and is considered a surrogate marker for the overall coronary plaque burden. A few previous studies investigated the relationship between obesity and CAC, but the results were inconsistent. Some reported a positive and independent association, while others reported a null, or even an inverse, association. Furthermore, most relevant studies were conducted in Western populations, in which obesity and CAD are more prevalent compared with other populations such as East Asians. In addition, they evaluated the association between obesity and CAC without considering the status of diabetes. Recently, coronary computed tomographic angiography (CCTA) was introduced as a novel noninvasive imaging approach for evaluating coronary atherosclerosis, and it has high diagnostic accuracy in detecting CAD. Therefore, we investigated the association between obesity and coronary atherosclerosis according to diabetes status using noninvasive CCTA in Korean subjects with near-normal kidney function. Subjects This cross-sectional study consisted of 8,648 consecutive subjects who had undergone CCTA evaluation with 64-slice multi-detector computed tomography (MDCT) from January 2004 to April 2009 at Severance Cardiovascular Hospital. All subjects were referred for general health evaluations with the following indications: symptoms such as chest discomfort, dyspnea, or fatigue; or no symptoms but an abnormal electrocardiographic test, a previous history of peripheral artery disease or cerebrovascular disease, or the presence of multiple cardiovascular risk factors. Subjects were excluded for any one of the following criteria: (a) age <30 years (n = 69); (b) established chronic kidney disease or glomerular filtration rate (GFR) <60 ml/min/1.73 m², estimated by the Modification of Diet in Renal Disease formula (n = 959); and (c) insufficient medical records (n = 386). As a result, 7,234 subjects were included in this study. The study protocol was approved by the local ethics committee of our institution. Protocol of MDCT Data acquisition and image post-processing were performed in accordance with the Society of Cardiovascular Computed Tomography guidelines on CCTA acquisition. Briefly, subjects with an initial heart rate ≥65 beats/min before MDCT received a single oral dose of 50 mg of metoprolol (Betaloc; Yuhan, Seoul, Korea) 1-2 h before the CT examination unless β-adrenergic blocking agents were contraindicated (overt heart failure, atrioventricular conduction abnormalities, or bronchial asthma). Subjects were scanned with a 64-slice CT scanner (Sensation 64; Siemens Medical Solutions, Forchheim, Germany). Initially, a non-enhanced prospective electrocardiogram (ECG)-gated scan to evaluate CACS was performed with the following parameters: rotation time of 330 ms, slice collimation of 0.6 mm, slice width of 3.0 mm, tube voltage of 100-120 kV, tube current of 50 mA, and table feed/scan of 18 mm.
CCTA was then performed using retrospective ECG-gating with the following scan parameters: rotation time of 330 ms, slice collimation of 64 × 0.6 mm, tube voltage of 100-120 kV, tube current of 400-800 mA depending on patient size, table feed/scan of 3.8 mm, and pitch factor of 0.2. ECG-based tube current modulation was applied at 65% of the R-R interval. A real-time bolus-tracking technique was applied to trigger scan initiation. The total estimated average radiation dose for the multi-slice CT protocol was 8.7 ± 1.5 mSv. Contrast enhancement was achieved using 60 mL of iopamidol (370 mg iodine/mL, Iopamiro; Bracco, Milan, Italy) injected at 5 mL/s, followed by an injection of 30 mL of diluted contrast (the ratio of saline to contrast agent was 7:3) and then 30 mL of saline at 5 mL/s with a power injector (Envision CT; Medrad, Indianola, PA) via an antecubital vein. Image reconstruction was carried out on the scanner workstation using commercially available software (Wizard; Siemens Medical Solutions, Forchheim, Germany). Axial images were reconstructed retrospectively at 65% of the R-R interval for each cardiac cycle. If artifacts were present, additional data sets were obtained at various points of the cardiac cycle, and the data set with the smallest artifact was selected for further analysis. The reconstructed image data sets were transferred to an offline workstation (Aquarius Workstation; TeraRecon, Inc., San Mateo, CA). Each lesion identified was examined using maximum-intensity projection and multiplanar reconstruction techniques on a short axis and along multiple longitudinal axes. Lesions were classified by the maximal stenosis of the luminal diameter observed on any plane. Measurement of CT variables CCTA data were evaluated by 2 experienced cardiac radiologists (Y.J.K. and B.W.C., who have 6 and 9 years of experience in cardiac CT, respectively). This study primarily evaluated the presence of any plaque, obstructive plaque, and CACS >100. Both any plaque and obstructive plaque were divided into 2 subtypes according to the presence of coronary calcification: calcified or mixed plaque, and non-calcified plaque. CACS was measured using a previously described method. Because the frequency of CACS >100 in the Asian population is known to be lower than that in Caucasians, African-Americans, and Hispanics, we used CACS >100 as the parameter for identifying severe coronary calcification. Plaque was defined as structures >1 mm² within and/or adjacent to the vessel lumen that were clearly distinguished from the lumen and surrounding pericardial tissue. Obstructive plaque was defined as plaque with ≥50% luminal diameter stenosis. Calcified plaque was defined as plaque in which calcified tissue occupied ≥50% of the plaque area (density >130 Hounsfield units in native scans). Mixed plaque was defined as plaque in which calcified tissue occupied <50% of the area. Plaque without any calcium was defined as non-calcified plaque. CAD was defined as the presence of any coronary plaque and calcium identified by CCTA. Measurement of clinical variables Medical histories of hypertension, dyslipidemia, diabetes, and smoking status were systematically acquired for all subjects. Height, weight, and blood pressure were measured during visits.
All blood samples, including those for triglycerides, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, and glucose, were obtained after a 12-h fast on the day of the CT scan as part of the clinical work-up. BMI was calculated as weight (kg) divided by the square of height (m²), and obesity was defined as a BMI ≥25 kg/m². Hypertension was defined as systolic blood pressure ≥140 mmHg and/or diastolic blood pressure ≥90 mmHg or the use of antihypertensive treatment. Dyslipidemia was defined as total cholesterol ≥240 mg/dL, LDL ≥130 mg/dL, HDL ≤40 mg/dL, TG ≥150 mg/dL, or treatment with lipid-lowering agents. A current smoking history was considered present if subjects smoked consistently or had smoked within 1 month before the study. Mild renal dysfunction was defined as a GFR of 60-89 ml/min/1.73 m². Diabetes was defined as fasting glucose ≥126 mg/dL, receipt of antidiabetic treatment, or a referral diagnosis of diabetes. Statistical analysis Continuous variables are expressed as the mean ± SD or as medians and interquartile ranges according to the distribution. Categorical variables are presented as n (%). Continuous variables were compared using an independent t-test or the Mann-Whitney U-test, and categorical variables were compared using the χ² test or Fisher's exact test, as appropriate. Univariate and multivariate logistic regression analyses evaluating the impact of obesity and BMI on the presence of any plaque, obstructive plaque, and CACS >100 were performed separately for subjects with and without diabetes. Multivariate logistic regression analysis was adjusted for confounding risk factors including age, gender, hypertension, dyslipidemia, current smoking, and mild renal dysfunction. In addition, multivariate logistic models were analyzed to identify the impact of diabetes on the presence of any plaque, obstructive plaque, and CACS >100. The covariate-adjusted odds ratio (OR) and 95% confidence intervals (CI) for each parameter were calculated. SPSS version 18 (SPSS Inc., Chicago, IL) was used for all statistical analyses, and p <0.05 was considered significant. Results The clinical characteristics of the 7,234 subjects (52 ± 10 years; 57% men) are listed in Table 1. There were 6,345 non-diabetic (88%) and 889 diabetic subjects (12%). The overall prevalence of obesity in the present study was 38%. The prevalence of obesity was significantly higher in diabetic subjects than in non-diabetic subjects (48% vs. 37%, p <0.001), but the majority of diabetic subjects were non-obese (Figure 1). The prevalence of hypertension and dyslipidemia was significantly higher in diabetic subjects than in non-diabetic subjects (p <0.001, respectively). The incidence of any plaque (58% vs. 29%), obstructive plaque (20% vs. 6%), and CACS >100 (20% vs. 6%) was significantly higher in diabetic patients than in non-diabetic subjects (p <0.001, respectively) (Figure 2). Table 1 legend: Data are expressed as n (%) or mean ± SD. BMI, body mass index; CACS, coronary artery calcium score; FBS, fasting blood sugar; GFR, glomerular filtration rate; HDL, high-density lipoprotein; LDL, low-density lipoprotein. Figure 1: Comparison of the prevalence of obesity according to the diabetes status. Multivariate logistic models identifying the impact of diabetes on the presence of any plaque, obstructive plaque, and CACS >100 were analyzed after consecutively adjusting for age, gender, hypertension, BMI, dyslipidemia, current smoking, and mild renal dysfunction.
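A minimal sketch of the covariate-adjusted odds ratio computation described above, assuming the data sit in a pandas DataFrame with binary outcome and covariate columns; the column names and library calls illustrate the general approach and are not taken from the study (which used SPSS).

import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df, outcome, predictors):
    """Fit a multivariate logistic model and return covariate-adjusted
    odds ratios with 95% confidence intervals."""
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=False)
    or_ci = np.exp(fit.conf_int())        # CI bounds on the odds scale
    or_ci["OR"] = np.exp(fit.params)
    or_ci.columns = ["2.5%", "97.5%", "OR"]
    return or_ci.drop(index="const")

# hypothetical usage mirroring the adjustment set above:
# adjusted_odds_ratios(data, "any_plaque",
#     ["obesity", "age", "male", "hypertension", "dyslipidemia",
#      "current_smoking", "mild_renal_dysfunction"])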
All models illustrated that diabetes had a strong impact on the presence of any plaque, obstructive plaque, and CACS >100 (Table 4). Discussion To the best of our knowledge, the present study is the first to provide information on the differential association between obesity and coronary atherosclerosis according to the presence of diabetes in an Asian population. Diabetes was strongly associated with all coronary parameters, including any plaque, obstructive plaque, and CACS >100. Despite a significantly higher prevalence of obesity in diabetic subjects, obesity was independently associated with the presence of CAD and severe coronary calcification only in subjects without established diabetes in a Korean population. Obesity, a major factor for insulin resistance, is significantly associated with diabetes and CAD. However, the criteria for and prevalence of obesity depend on ethnicity. Despite the increased prevalence of obesity and diabetes in Asia, the clinical features of the development of diabetes in Asia differ from those in other parts of the world. In Korea, previous studies reported that approximately 65% of subjects with diabetes were non-obese, and that impaired insulin secretion was more prominent than insulin resistance in diabetic subjects. Furthermore, a recent study revealed that diabetes had an incremental impact on subclinical atherosclerosis independent of the metabolic syndrome, of which insulin resistance is a major characteristic, in the Korean population. Accordingly, it is important to identify whether obesity is independently associated with CAD in an Asian population with established diabetes. CAC is a traditional surrogate marker for coronary atherosclerosis because it is significantly related to the coronary plaque burden. The severity of coronary calcification may be somewhat different in Asian populations compared to Western populations. However, most studies evaluated the association between obesity and CAC in Western populations, with conflicting results. CCTA has recently been used for evaluating coronary atherosclerosis because of its high diagnostic accuracy in detecting CAD, and only limited studies have investigated the relationship of BMI with coronary atherosclerosis using CCTA. Dores et al. reported that BMI was an independent predictor of CAD, but that it was not correlated with the severity of CAD in subjects with suspected CAD. Labounty et al. reported that an increased BMI was associated with a greater prevalence, extent, and severity of CAD and was independently associated with an increased risk of myocardial infarction. However, most participants in these studies were from Western populations, and these studies did not evaluate the differential association of BMI with coronary atherosclerosis according to the presence of diabetes. Furthermore, it had not been investigated whether obesity, defined as a BMI ≥25 kg/m², is an independent predictor of CAD in Asian patients with diabetes. We evaluated the association between obesity and coronary atherosclerosis using the parameters of any plaque, obstructive plaque, and CACS >100 according to the presence of diabetes. The prevalence of obesity was significantly higher in diabetic subjects than in non-diabetic subjects, but the majority of diabetic subjects were non-obese in our Korean population. We identified that diabetes was strongly associated with all coronary atherosclerotic parameters.
Despite a significantly higher prevalence of obesity in diabetic subjects than in non-diabetic subjects, obesity was independently associated with the presence of CAD and severe coronary calcification only in non-diabetic subjects. These results suggest that obesity is not a useful predictor of CAD in subjects with established diabetes, although it was significantly associated with the development of diabetes in a Korean population. They might also imply that identifying newly developing diabetes is important in non-diabetic subjects with obesity; however, considering the incremental impact of diabetes on coronary atherosclerosis, rigorous risk stratification for CAD is necessary in subjects with established diabetes irrespective of obesity status in the Korean population. Although obesity is associated with an increased risk of cardiovascular disease, several studies have suggested an interesting phenomenon, the obesity paradox, namely a protective effect of obesity against adverse clinical outcomes in patients with obstructive CAD. However, these studies evaluated the impact of obesity on prognosis only in patients with established CAD. In the present study, diabetes was independently associated with the presence and severity of CAD, but obesity was not independently associated with CAD in subjects with established diabetes. Although the relationship between obesity and prognosis in subjects with diabetes remains uncertain in the Asian population, it may be more important to predict the development of CAD in non-obese subjects with diabetes than in their obese counterparts, considering the protective effect of obesity in subjects with obstructive CAD. Further prospective studies with larger sample sizes are necessary to address this issue. Limitations Several limitations should be acknowledged in the present study. First, we could not eliminate the possible effects of underlying medication for hypertension, dyslipidemia, and diabetes on coronary atherosclerosis because of the observational design of this study. Second, we used only the criterion of BMI ≥25 kg/m² for defining obesity status. Although it is well known that BMI is significantly associated with abdominal fat and waist circumference in the Korean population, further evaluation of the association between other anthropometric indices and coronary atherosclerosis according to the presence of diabetes may be necessary in Asian populations. Despite these limitations, the present study is novel in that it evaluated only participants of Asian ethnicity, in contrast to other studies performed in Western populations. In addition, the clinical usefulness of obesity for predicting CAD according to diabetes status was evaluated for the first time in approximately 7,300 Asian subjects with near-normal kidney function using large-scale CT data. The findings of this study may be helpful for identifying the differential association between obesity and CAD according to diabetes status in the Asian population. Conclusions Despite a significantly higher prevalence of obesity in diabetic subjects than in non-diabetic subjects, the majority of the diabetic subjects were non-obese in our Korean population. Obesity has independent predictive value for the presence of CAD and severe coronary calcification only in subjects without established diabetes.
Considering the incremental impact of diabetes, rigorous risk stratification for CAD might be necessary in diabetic subjects irrespective of obesity status in the Asian population.
# -*- coding: utf-8 -*-
#------------------------------------------

"AdminCog: Cog to hold dev-only commands for discord bot Ritsu#6975"

__author__ = "supershadoe"
__license__ = "Apache v2.0"

#------------------------------------------
# Imports #
import datetime
import inspect
import platform

from discord import Embed, version_info
from discord.ext import commands
from psutil import Process

#------------------------------------------

class AdminCog(commands.Cog, name="Admin Commands", command_attrs=dict(hidden=True)):
    "Has dev-only (owner-only) commands like eval and shutdown."

    def __init__(self, bot):
        self.bot = bot

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _load(self, ctx, cog_name):
        "To load a cog while debugging"
        await ctx.send(f"Loading cog **{cog_name}**...")
        return self.bot.load_extension(f"ritsu.cogs.{cog_name}")

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _unload(self, ctx, cog_name):
        "To unload a cog while debugging"
        await ctx.send(f"Unloading cog **{cog_name}**...")
        return self.bot.unload_extension(f"ritsu.cogs.{cog_name}")

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _reload(self, ctx, cog_name):
        "To reload a cog while debugging"
        await ctx.send(f"Reloading cog **{cog_name}**...")
        return self.bot.reload_extension(f"ritsu.cogs.{cog_name}")

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _ping(self, ctx):
        "For just pinging the bot"
        await ctx.send(embed=Embed(
            title="Ping response",
            description=f"Pong!\nBot latency is **{int(self.bot.latency*1000)} ms**.",
            color=0xFF00FF))

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _stats(self, ctx):
        "Provide stats about the host PC"
        embed = Embed(title="Stats", description=f"Bot name: {self.bot.user}", color=0xFF00FF)
        embed.add_field(
            name="Python version",
            value=f"`{'.'.join(map(str, platform.sys.version_info[0:3]))}`")
        embed.add_field(
            name="discord.py version",
            value=f"`{'.'.join(map(str, version_info[0:3]))}`")
        embed.add_field(
            name="Linux kernel version",
            value=f"`{platform.release()}`", inline=False)
        embed.add_field(
            name="Bot uptime",
            value=f"`{str(datetime.datetime.now() - datetime.datetime.fromtimestamp(Process(platform.os.getpid()).create_time()))}`")
        embed.add_field(
            name="Bot latency",
            value=f"`{str(int(self.bot.latency*1000))} ms`")
        return await ctx.send(embed=embed)

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _eval(self, ctx, *args):
        "Eval command: DANGER ZONE!"
        res = eval(' '.join(args))
        if inspect.isawaitable(res):
            output = await res
        else:
            output = res
        return await ctx.send(embed=Embed(
            title="Python: eval result",
            description=f"```{output}```",
            color=0xFF00FF))

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _shutdown(self, ctx):
        "Shutdown command (owner-only)"
        await ctx.send(embed=Embed(
            title="Shutting down",
            description="⏻ Shutting down after command from owner...",
            color=0xFF0000))
        return await self.bot.logout()

    #------------------------------------------
    @commands.command()
    @commands.is_owner()
    async def _todo(self, ctx):
        "To do command for myself"
        return await ctx.send("https://discord.com/channels/801170087688011828/801174287427174410/805653802308206602")

#------------------------------------------

def setup(bot):
    "Function to setup cog to add to bot"
    bot.add_cog(AdminCog(bot))
Metabolic reprogramming enables hepatocarcinoma cells to efficiently adapt to and survive in a nutrient-restricted microenvironment ABSTRACT Hepatocellular carcinoma (HCC) is a metabolically heterogeneous cancer, and the use of glucose by HCC cells could impact their tumorigenicity. Dt81Hepa1-6 cells display enhanced tumorigenicity compared to parental Hepa1-6 cells. This increased tumorigenicity could be explained by a metabolic adaptation to more restrictive microenvironments. When cultured at high glucose concentrations, Dt81Hepa1-6 cells displayed an increased ability to take up glucose (P<0.001), increased expression of 9 glycolytic genes, greater GTP and ATP content (P<0.001), increased expression of 7 fatty acid synthesis-related genes (P<0.01), and higher levels of acetyl-CoA, citrate, and malonyl-CoA (P<0.05). Under glucose-restricted conditions, Dt81Hepa1-6 cells used their stored fatty acids, with increased expression of fatty acid oxidation-related genes (P<0.01), decreased triglyceride content (P<0.05), and higher levels of GTP and ATP (P<0.01), leading to improved proliferation (P<0.05). Inhibition of lactate dehydrogenase and aerobic glycolysis with sodium oxamate led to decreased expression of glycolytic genes, reduced lactate, GTP, and ATP levels (P<0.01), increased cell doubling time (P<0.001), and reduced fatty acid synthesis. When combined with cisplatin, this inhibition led to lower cell viability and proliferation (P<0.05). This metabolically driven tumorigenicity was also reflected in human Huh7 cells, which showed higher glucose uptake and proliferative capacity than HepG2 cells (P<0.05). In HCC patients, increased tumoral expression of Glut-1, hexokinase II, and lactate dehydrogenase correlated with poor survival (P = 2.47E−5, P = 0.016, and P = 6.58E−5). In conclusion, HCC tumorigenicity can stem from a metabolic plasticity that allows these cells to thrive in a broader range of glucose concentrations. In HCC, combining glycolytic inhibitors with conventional chemotherapy could lead to improved treatment efficacy.
Influenza Outbreaks in Long-Term-Care Facilities: How Can We Do Better? Despite the availability of influenza vaccines for several decades, infection caused by influenza viruses continues to cause considerable morbidity and mortality. Epidemics of influenza occur each year during the winter months in the northern hemisphere, accounting for excess mortality and increased hospitalization rates.1-3 Most severe disease occurs in the elderly, with approximately 90% of the deaths associated with influenza occurring in those older than 65 years.2 Residents of long-term-care facilities (LTCFs) are especially vulnerable because of their increased age and frailty and the presence of multiple comorbidities. Moreover, they live in a closed environment in proximity to other residents and have frequent contact with staff, volunteers, and visitors who may introduce influenza to the facility from the community. When influenza occurs in a nursing home, attack rates among residents may be as high as 25% to 60%, with case-fatality rates of 10% to 20%.4-7 Although most disease is caused by influenza A virus, influenza type B has also been associated with considerable morbidity and mortality.8-10 Several guidelines and recommendations for influenza prevention and control in LTCFs for the elderly have been published in the past 10 years, including two that appeared in Infection Control and Hospital Epidemiology.11-13 These guidelines have all emphasized the importance of annual influenza vaccination of residents and staff, surveillance for respiratory tract infections, access to rapid influenza diagnostic testing, and policies and procedures for outbreak management, including the use of postexposure chemoprophylaxis with antiviral agents. Annual vaccination of residents continues to be the main priority of preventive strategies. Resident vaccination in LTCFs has reduced influenza-associated pneumonia, hospitalization, and mortality rates.14-17 Nevertheless, influenza immunization rates remain suboptimal in many LTCFs.10,18,19 More recently, vaccinating nursing home staff, and not just residents, has been recognized as an important factor in reducing the risk of influenza and its complications among residents.20,21 Nursing homes and other LTCFs must be prepared in advance to deal with the possible occurrence of an influenza outbreak. There must be policies in place regarding influenza surveillance and diagnosis, appropriate isolation measures, and the potential use of postexposure chemoprophylaxis with antiviral agents.13,19 Unfortunately, few studies have been conducted to provide data on which to base such policy development. For example, the strength and quality of the evidence for almost all of the specific recommendations made by the Long-Term-Care Committee of the Society for Healthcare Epidemiology of America for influenza surveillance and outbreak management in LTCFs was classified as IIIB, indicating only moderate evidence to support the recommendation based on evidence from opinions of respected authorities, clinical experience, descriptive studies, or reports of expert committees.13 Two articles in this issue of Infection Control and Hospital Epidemiology contribute to our knowledge of influenza management in LTCFs. The study by Drinka et al. addresses the issue of how to define the presence of an influenza outbreak in an LTCF in order to be able to effectively intervene with postexposure antiviral prophylaxis.22 Hirji et al.
describe their experience with zanamivir, a neuraminidase inhibitor, for treatment and prophylaxis during concomitant outbreaks due to influenza A and influenza B in an LTCF.23 Surveillance definitions for the identification of influenza-like illness in LTCF residents have been proposed.11,12,24 However, none of these definitions have been
Nailfold fluconazole fluid injection for fingernail onychomycosis Olanzapine, a new atypical antipsychotic drug with antihistaminic and anticholinergic properties, was released by the US Food and Drug Administration in 1996 for the treatment of psychosis, particularly schizophrenia. The mechanism of action of olanzapine in prurigo remains unclear. It has been shown to have high affinity for various receptors including D1, D2, D4, 5-HT2A, 5-HT2C, α1-adrenergic, histamine H1, and five muscarinic receptors. Blockade of these receptors may at least partly explain its antipruritic effects in SP. Compared with classical antipsychotic drugs such as haloperidol or pimozide, olanzapine is practically free of extrapyramidal symptoms and cardiotoxicity. Side-effects are rare but include dizziness or fainting when getting up suddenly from a lying or sitting position, drowsiness, mouth dryness, headache, vision problems, weakness and constipation. The most common side-effects appear to be somnolence and weight gain, as observed in our patient population. Our findings are in accordance with a recent report on olanzapine in neurotic excoriation, a condition resembling SP in patients' skin-picking behaviour. Olanzapine might be an effective alternative in the treatment of SP resistant to conventional therapy, especially in patients with overlapping psychological and emotional affections. However, future blinded, randomized, placebo-controlled studies are now warranted before olanzapine can be recommended for SP.
There has also been little progress in identifying and preventing PH in those at risk for the disease. A new study published in EMBO Molecular Medicine by researchers from Brigham and Women's Hospital (BWH) sheds light on the disease's surprising cause, with crucial implications for the diagnosis, treatment, and prevention of PH in persons at risk. In studies spanning nearly eight years of work, Chan and his colleagues found that specific disruptions in mitochondria, the powerhouses of cells, alter the energy production, or metabolism, of blood vessels in the lungs. ISCU is a critical human protein that generates factors called iron-sulfur clusters, which are essential for normal mitochondrial function. In a mouse model of the disease, the team found that microRNAs – small loops of genetic material – tamp down ISCU. This can incite catastrophic molecular consequences culminating in PH. Based on their findings in animal models, Chan's team went on to study a 29-year-old woman with two defective copies of the ISCU gene. At rest, the woman's blood pressure and heart function appeared relatively normal, but exercise testing revealed unambiguous dysfunction of the blood vessels in her lungs. Importantly, the patient improved when placed on a PH drug that relaxes and widens the smooth muscle lining her lung blood vessels. Identifying patients with mutations in other genes that contribute to iron-sulfur deficiencies early could give clinicians the ability to intervene before a patient develops PH. The new study also highlights the possibility of designing new drugs to treat PH that focus on ISCU proteins, the microRNA that targets them, and other genes known to be directly linked to iron biology in general but yet to be studied in PH. Finally, Chan's team predicts that a number of additional medical conditions, including heart disease, cancer, and neurologic disease, may share a dependence on iron-sulfur generation and could be treated similarly. This work was supported by the NIH (K08HL096834; U54-CA151884, R01-DE016516-06, and EB000244); the McArthur-Radovsky, Lerner, Harris, and Watkins Funds; Gilead Research Scholars Fund; the Pulmonary Hypertension Association; and the Intramural Research Program of the National Human Genome Research Institute.
Fracture Toughness of Epoxy Resins Containing Blends of Monomers with Different Molecular Weights The effect of adding a high molecular weight epoxy monomer (Epikote 1001) to a low molecular weight one (Epikote 828) on fracture toughness was investigated with respect to the crosslinking degree and density heterogeneity. To characterize the crosslinking degree and density heterogeneity, the glass transition temperature, Tg, and the fragility, m, were deduced from thermo-viscoelastic properties. The characterization of Tg and m revealed that the blends can be divided into two groups: one with w < 10 wt% and another with w > 10 wt%, where w is the weight ratio of Epikote 1001 to Epikote 828. The first group had the same average crosslinking degree (the same Tg) but different density heterogeneities (m decreased). The other group had a lower crosslinking degree (Tg decreased) and even more density heterogeneity (m decreased further). The fracture toughness results showed that KIC of the blends in the first group was approximately constant because the increase in density heterogeneity was still too weak (ineffective m), whereas KIC of the blends in the second group was higher due to the simultaneous decrease in average crosslinking degree and increase in density heterogeneity. Therefore, the lower the crosslinking degree (lower Tg) and the more heterogeneous the blend (lower m) due to the addition of the high molecular weight monomer, the higher KIC becomes.
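For reference, the fragility m quoted above is conventionally defined (Angell's kinetic fragility) as the steepness of the relaxation-time curve at the glass transition; this standard definition is supplied for the reader and is not quoted from the paper itself:

m = \left. \frac{d \log_{10} \tau}{d (T_g / T)} \right|_{T = T_g}

where \tau is the structural relaxation time. The paper uses a decreasing m as a marker of increasing density heterogeneity of the network.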
Synthesis and Preparation of Hydrophobic CNTs-Coated Melamine Formaldehyde Foam by Green and Simple Method for Efficient Oil/Water Separation Hydrophobic porous polymeric materials have attracted great interest recently as potential candidates for oil-water separation due to their high selectivity and sorption capacity. Herein, we present a green, simple, and cost-effective method to convert hydrophilic melamine formaldehyde (MF) foam into hydrophobic carbon nanotube (CNT) coated MF foam through an immersion process. The MF foam was produced from MF resin synthesized in the laboratory by a condensation reaction between melamine and formaldehyde under alkaline conditions with a molar ratio of melamine to formaldehyde of 1:3. The MF foam has an open-cell structure with an average pore diameter of 350 μm, a density of 25 kg/m³, and a porosity of 98%. The as-prepared CNT-coated MF foam exhibits high sorption capacity (23-66 g/g) for oils and organic solvents, good recyclability, and high selectivity.
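The sorption capacity quoted above (in g/g) is normally computed from the foam's mass gain after soaking; a small sketch of that arithmetic, assuming the standard gravimetric definition, with made-up sample masses:

def sorption_capacity(dry_mass_g, saturated_mass_g):
    """Grams of absorbed liquid per gram of dry sorbent (g/g)."""
    return (saturated_mass_g - dry_mass_g) / dry_mass_g

# hypothetical sample: 0.10 g of foam weighs 5.2 g after soaking in oil
print(sorption_capacity(0.10, 5.2))  # -> 51.0 g/g, within the reported 23-66 g/g range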
BATAVIA, Ohio — A public visitation for Detective Bill Brewer is scheduled for Thursday. The 20-year veteran of the Clermont County Sheriff's Office was fatally wounded during a standoff in Pierce Township on Feb. 2. The visitation is from 4 p.m. until 8 p.m. Thursday, Feb. 7, at Mt. Carmel Christian Church at 4110 Bach Buxton Rd. Brewer's funeral is at 11 a.m. Friday at the church. He will be buried at the Pierce Township Cemetery on Locust Corner Road. Details of the services were released Monday by E.C. Nurre Funeral Homes. Residents wishing to pay tribute to the fallen deputy can sign an online tribute wall on the funeral home's obituary page. Brewer graduated from Williamsburg High School in 1996, according to his obituary. He is survived by his wife of 13 years and his 5-year-old son. Lt. Nick DeRose was also wounded in the Saturday standoff. He was treated and released from University of Cincinnati Medical Center.
Legitimate firms or hackers - who is winning the global cyber war? In order to enable reliable service products, the internet of things (IoT) technology infrastructure must withstand global hacking events. How is the IoT faring in the cyber security war? Is it winning or is it losing? If the IoT is losing, there is cause for concern. If the IoT is winning, then there is cause for optimism. Yet there is a third possibility - stasis. What would this look like? Why would it occur? We seek to add to the growing body of literature on the IoT by applying learning curves to major global cyber-attacks to investigate whether the predators (hackers) or the prey (firms) have developed a decided advantage in this arena. We provide findings from a variety of industrial ecosystems. Our theoretical contributions include the integration of this literature with the literature on technology innovation sequencing, business cycles, and the nature of service products.
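The abstract does not specify its learning-curve model; a common functional form (assumed here purely for illustration) is Wright's power law,

C_n = C_1 \, n^{-b}

where C_n is the cost or time of the n-th repetition of a task and the progress ratio 2^{-b} is the cost reduction per doubling of cumulative experience. Fitting b separately for attackers and defenders across successive incidents would indicate which side is learning faster, with stasis corresponding to comparable exponents on both sides.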
How Businesses Can Benefit from Collaborating with the Arts In today's hypercompetitive, digital-first, knowledge-based economy, organizational creativity has never been more important as a potential source of competitive advantage. The foundation stone of every innovation is an idea, and all ideas are born of creativity. The innovation process thus starts with creativity, and the new ideas it yields are ideally based on insights that will ultimately lead to novel outcomes (such as new products, services, experiences, or business models) and thereby to a sustainable competitive advantage. In established businesses, until relatively recently, creativity was called on only for specific, often high-profile occasions, for hackathons or major innovation jams, but today it is an essential, everyday necessity of routine work. However, attaining the right level of creativity from within is a challenge for many organizations, so they need to establish an appropriate and effective way to import it into their teams, projects, and, ultimately, culture. The arts are a pure, unadulterated form of creativity. Mindsets, processes, and practices from the arts can give organizational creativity a significant boost and can potentially offset the creative deficit in an organization. Here, illustrative cases and practices that demonstrate how the arts can have a positive impact on business are examined.
Former Parramatta captain Nathan Hindmarsh says he wants the Eels to leave the door open for Semi Radradra to return after his stint in French rugby. Radradra has signed a one-year deal with Toulon and will leave the NRL after the 2017 season. It is understood the Eels offered the winger a sizeable pay raise, but the money forked out by cashed-up Toulon blew it out of the water. Despite the turbulent season Radradra had at the club, Hindmarsh said he'd like to see him back in the blue and gold if he decided to leave rugby union after the first year of his contract. "You'd like to say yes," Hindmarsh told Fox Sports News when asked if the Eels should go after Radradra again. "For what he's done for the club, he's a try-scoring machine. He's exciting. He puts bums on seats. "If he does the one year and goes 'this isn't right for me,' I'd like to see Parramatta offer him another deal back there because he does win games. "He scores tries and that's what you want your wingers to do, so why not?" With 330 first grade caps to his name, Hindmarsh played his whole career at the Eels but doesn't begrudge Radradra for seeking the big bucks elsewhere. "He's getting upwards of $750,000 a year to play rugby union compared to the $500,000 the Eels offered him, which is very good money for a winger," he said. "You can't knock him for that. "He's got his family back there in Fiji, but it could also be the fact that he just wants to try something new and you can't knock a bloke for doing that. "We've seen Haynesy go and try different things before. Izzy Folau's made a name for himself in rugby union, so you can't knock a bloke for trying and you also can't knock a bloke for chasing a bit of money."
Hereditary hypophosphatemia in Norway: a retrospective population-based study of genotypes, phenotypes, and treatment complications Objective Hereditary hypophosphatemias (HH) are rare monogenic conditions characterized by decreased renal tubular phosphate reabsorption. The aim of this study was to explore the prevalence, genotypes, phenotypic spectrum, treatment response, and complications of treatment in the Norwegian population of children with HH. Design Retrospective national cohort study. Methods Sanger sequencing and multiplex ligation-dependent probe amplification (MLPA) analysis of PHEX and Sanger sequencing of FGF23, DMP1, ENPP1, KL, and FAM20C were performed to assess genotype in patients with HH with or without rickets in all pediatric hospital departments across Norway. Patients with hypercalciuria were screened for SLC34A3 mutations. In one family, exome sequencing was performed. Information from the patients' medical records was collected for the evaluation of phenotype. Results Twenty-eight patients with HH (18 females and ten males) from 19 different families were identified. X-linked dominant hypophosphatemic rickets (XLHR) was confirmed in 21 children from 13 families. The total number of inhabitants in Norway aged 18 or below on 1st January 2010 was 1 109 156, giving an XLHR prevalence of ∼1 in 60 000 Norwegian children. FAM20C mutations were found in two brothers and SLC34A3 mutations in one patient. In XLHR, growth was compromised in spite of treatment with oral phosphate and active vitamin D compounds, with males tending to be more affected than females. Nephrocalcinosis tended to be slightly more common in patients starting treatment before 1 year of age, and was associated with higher average treatment doses of phosphate. However, none of these differences reached statistical significance. Conclusions We present the first national cohort of HH in children. The prevalence of XLHR seems to be lower in Norwegian children than reported earlier. Introduction Hereditary hypophosphatemia (HH) is a group of rare diseases with disordered phosphate metabolism and decreased renal tubular phosphate reabsorption. In hypophosphatemic rickets (HR), the hypophosphatemia is associated with rickets and osteomalacia, whereas syndromes combining hypophosphatemia with osteosclerosis and ectopic calcifications, rather than rickets or osteomalacia, are also recognized. HR can be classified as either dependent on or independent of the bone-derived fibroblast growth factor 23 (FGF23). FGF23 is a phosphate-regulating hormone acting on kidney tubular cells to decrease the expression of the sodium-phosphate co-transporters types IIa and IIc (NaPi-IIa and NaPi-IIc), encoded by SLC34A1 and SLC34A3 respectively. Elevated levels of serum phosphate increase the expression of FGF23, thereby decreasing the reabsorption of phosphate in the renal proximal tubule, while hypophosphatemia normally downregulates the expression of FGF23. FGF23 also downregulates the 1α-hydroxylase (encoded by CYP27B1), thus inhibiting the activation of 25OH vitamin D (25OHD) to 1,25(OH)2 vitamin D (1,25(OH)2D), and upregulates the 24-hydroxylase (encoded by CYP24A1), which inactivates 1,25(OH)2D by conversion to 24,25(OH)2 vitamin D. In FGF23-dependent HR, the physiological increase in serum 1,25(OH)2D in response to hypophosphatemia is blunted, and the result is a serum level of 1,25(OH)2D that is low, or inappropriately normal for the degree of hypophosphatemia.
FGF23-dependent HR is caused by mutations in genes involved in the FGF23 bone-kidney axis, with levels of intact FGF23 (iFGF23) being elevated or inappropriately normal in the setting of hypophosphatemia, where suppressed FGF23 is to be expected. FGF23-dependent HR includes X-linked dominant HR (XLHR), caused by loss-of-function mutations in the phosphate-regulating endopeptidase homolog, X-linked (PHEX) gene, autosomal dominant HR, caused by gain-of-function mutations in the FGF23 gene, and three types of autosomal recessive HR. ARHR1 is caused by mutations in the DMP1 gene, encoding the dentin matrix protein 1, and ARHR2 is caused by mutations in the ENPP1 gene, encoding the ectonucleotide pyrophosphatase/phosphodiesterase 1, whereas we have recently shown an association between biallelic mutations in FAM20C and FGF23-dependent ARHR3 in a Norwegian family. FAM20C encodes a protein kinase important in many phosphorylation processes. Phosphorylation of FGF23 by FAM20C makes FGF23 less stable by inhibiting O-glycosylation by GalNAc-T3, and inactivating mutations in FAM20C thus lead to increased levels of iFGF23. There is also one report of FGF23-dependent HR caused by an activating translocation leading to up-regulation of the expression of the KL gene, encoding the anti-aging protein α-klotho. In FGF23-independent HR, as seen in hereditary HR with hypercalciuria (HHRH) caused by mutations in the SLC34A3 gene, the level of iFGF23 is appropriately down-regulated. Treatment of HR includes oral phosphate replacement several times daily, combined with calcitriol to counteract the secondary hyperparathyroidism (HPT) elicited by the serum phosphate peak and the transient decrease in serum ionized calcium upon phosphate dosing. Treatment is balanced to improve linear growth and reduce skeletal deformities while simultaneously minimizing the risk of complications of treatment such as secondary and tertiary HPT, nephrocalcinosis, hypertension, and renal failure. We have conducted the first complete national study of HH in children, to explore the prevalence, genotypes, phenotypic spectrum, and response to and complications of treatment. Patient population During 2009 all pediatric hospital departments in Norway were contacted to identify children with HH. The number of patients identified was compared to the number of patients younger than 18 years registered in the Norwegian Patient Registry (NPR) with the diagnosis code 'E83.3 Disorders of phosphorus metabolism and phosphatases' in the World Health Organization's International Classification of Diseases version 10 (WHO ICD-10). Patients were continuously recruited through the years 2009-2014. The inclusion criteria for HH were serum phosphate below the age-dependent reference range in repeated samples, combined with reduced tubular maximum reabsorption rate of phosphate per glomerular filtration rate (TmP/GFR), not due to primary HPT, HPT secondary to renal failure or malabsorption, Fanconi syndrome or other tubulopathy, vitamin D dependent rickets, vitamin D deficiency, or hypophosphatemia secondary to acute metabolic derangements. A family history or genetic diagnosis was supportive, but not required for inclusion. All exons and intron-exon boundaries of FGF23, DMP1, ENPP1, KL, and FAM20C were sequenced, in successive order, in subjects without pathogenic PHEX mutations. In short, DNA targets were first amplified by PCR (list of primers available upon request) using the AmpliTaq Gold DNA Polymerase System (Applied Biosystems).
PCR amplicons were purified with 2 μl of ExoSAP-IT. Using the BigDye Terminator chemistry, sequencing was performed on the 3730 DNA Analyzer (Applied Biosystems) and analyzed using the SeqScape software (Applied Biosystems). Review of medical history Information on age at diagnosis, clinical and biochemical findings at diagnosis, treatment, and complications was collected by review of the medical records of included patients. Height was converted to z-scores according to Norwegian growth charts. Delta z-score was calculated as the difference between the z-score at the last registered consultation and the z-score at diagnosis. Laboratory data from each visit from the time of diagnosis to the time of inclusion in the study, including serum levels of calcium, phosphate, alkaline phosphatase, creatinine, parathyroid hormone (PTH), 25OHD, and 1,25(OH)2D, were also recorded, as well as results from kidney ultrasound and skeletal X-ray examinations. TmP/GFR was calculated according to the formula provided by Barth et al. (a worked sketch of this calculation is given below). Blood tests were analyzed according to each hospital laboratory's current standard methods. Genotype-phenotype associations in XLHR patients The PHEX mutations were classified as either deleterious or plausible according to earlier studies. Deleterious mutations comprise those leading to a premature stop codon, including nonsense mutations, splice-site mutations, and insertions and deletions affecting the reading frame. Mutations classified as plausible were missense mutations and deletions that did not affect the reading frame. The phenotypic features compared were age at diagnosis and at the last registered consultation, height z-score at diagnosis and at the last registered consultation, serum levels of phosphate, ALP, and PTH at diagnosis, skeletal manifestations (clinical or radiological signs of rickets or bowing) at diagnosis, and information on dental involvement, nephrocalcinosis, and persistent bowing at the last registered consultation. Statistical analysis The prevalences of HH and XLHR were calculated based on the number of patients aged 0-18 years registered with these diagnoses in 2009 and the total number of people in Norway aged 0-18 years on 1st January 2010, obtained from the official Statistics Norway database. The data were analyzed with SPSS version 22. Between-group comparisons were performed using nonparametric tests; medians were compared using the Mann-Whitney U test, and frequencies were compared with Fisher's exact test. Ethics and approvals Written informed consent was obtained from all study participants. The study was approved by the Regional Committee for Medical and Health Research Ethics, Region West, Norway (REK number 2009/1140). HH patient cohort By 31st December 2009 we had identified a total of 23 children aged 0-18 years with HH in Norway, and all except one were included in this study. Two additional patients with HH, one with confirmed XLHR, were born before 2009 but diagnosed after 2010. By the end of 2009 the National Patient Registry reported 32 children with the ICD-10 diagnosis 'E83.3 Disorders of phosphorus metabolism and phosphatases', but four of these patients had hypophosphatasia, and five had transient hypophosphatemia in the course of malignancy, premature birth, or another underlying condition. On 1st January 2010, the number of inhabitants aged below 18 years was 1 109 156, which gives a prevalence of HH of ~1 in 45 000 children. XLHR was confirmed in 18 children, giving a prevalence of ~1 in 60 000.
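For readers who want to reproduce the TmP/GFR calculation referenced above, the Barth et al. algorithm is commonly given in the following form; the sketch below is an illustration under that assumption, the function name and example values are hypothetical, and all concentrations are taken to be in mmol/L:

def tmp_gfr(s_phos, s_creat, u_phos, u_creat):
    """Tubular maximum phosphate reabsorption per GFR (mmol/L)."""
    # tubular reabsorption of phosphate (dimensionless fraction)
    trp = 1.0 - (u_phos * s_creat) / (u_creat * s_phos)
    if trp <= 0.86:
        # linear region of the algorithm
        return trp * s_phos
    # above TRP = 0.86 the relationship becomes non-linear
    alpha = 0.3 * trp / (1.0 - 0.8 * trp)
    return alpha * s_phos

# hypothetical spot sample from a hypophosphatemic child:
# tmp_gfr(s_phos=0.9, s_creat=0.045, u_phos=12.0, u_creat=6.0) -> ~0.87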
During the period from 1st January 2010 to 31st December 2014, we included another four patients, two of whom immigrated to Norway in 2014 and two born to XLHR mothers after 2010. The total of 28 patients included comprised 18 females and ten males from 19 different families (Supplementary Figure 1, see section on supplementary data given at the end of this article). XLHR was confirmed in 21 children. Twenty-two patients had a family history of HR, while six were sporadic cases. Genotypes in HH We identified the likely pathogenic mutation in 15 of the 19 HH pedigrees (79%). PHEX mutations were found in 21 subjects from 13 different pedigrees (Supplementary Table 1, see section on supplementary data given at the end of this article), and three of the XLHR probands were sporadic. Of the 13 different PHEX mutations detected, nine had not been previously reported in the SNP or PHEX databases (see section 'Materials and methods'). The nine novel mutations comprised one large duplication, two single nucleotide deletions leading to frameshifts and premature stop codons, two triplet deletions leading to the loss of one or more codons, two missense mutations, one nonsense mutation, and one splice-site mutation. One male patient with HHRH was found to be compound heterozygous for a splicing mutation, c.757-1G>A, and an intronic deletion mutation, c.925+20_926-48del, in the SLC34A3 gene. The c.757-1G>A mutation affects the conserved splice acceptor site of intron 7, and is predicted to cause aberrant splicing. The c.925+20_926-48del mutation has been reported previously. Two patients with combined heterozygous mutations in FAM20C are described elsewhere. In four patients, two sporadic cases in females and two males with affected mothers, we were not able to identify a pathogenic mutation by standard Sanger sequencing of PHEX, FGF23, DMP1, ENPP1, or KL, or by PHEX MLPA. Phenotypes in HH The median age at diagnosis was 2.1 years (range 0.1-15.5 years), and 26 of the 28 subjects were diagnosed before the age of 7 years (Table 1; detailed information for each subject is given in Supplementary Table 2, see section on supplementary data given at the end of this article). Median age at the last registered consultation was 12.1 years (range 1.3-18.3). Phenotype in XLHR The 21 XLHR children comprised 16 females and five males. Their median age was 0.9 years (range 0.1-15.5) at diagnosis, and 10.8 years (range 1..0) at the last registered consultation. Growth was compromised, and Fig. 1 illustrates the height z-scores for 19 of the 21 XLHR patients in relation to age at diagnosis and at the last registered consultation. Males tended to have a lower height z-score than females (Table 2), both at diagnosis and at the last registered consultation, whereas the delta z-score did not differ between the sexes. In accordance with an earlier study, we analyzed the XLHR patients' data depending on initiation of treatment before or after 1 year of age. There was no significant improvement in height z-score in either treatment group. One patient was treated with growth hormone (GH) from the age of 11 years 10 months. His height z-score improved from −2.9 at the last consultation before initiation of GH to a final height of −1.9 SD at age 17 years (data not shown). Clinical or radiological evidence of skeletal involvement was found in 13 of 20 children (four out of five males and nine out of 15 females) at diagnosis.
The seven patients without skeletal manifestations at diagnosis were all familial cases, diagnosed before the age of 8 months (median 4 months), and comprised six females and one male. During the years after diagnosis, all of these had episodes of rickets identified on clinical or radiological examination, and one male and two of the females had persisting skeletal axis deviations at the last registered consultation. Overall, nine females and four males had persisting axis deviation at the last registered consultation, and correcting osteotomy had been performed in one female and two males. The prevalence of dental involvement was higher in male than in female XLHR patients, and in children who started treatment after the age of 1 year (Table 2). Genotype-phenotype associations in XLHR There were no differences between the mutation status groups in growth, dental involvement, persistent bowing, or development of nephrocalcinosis (results not shown). Treatment and complications in HH The median age at the start of treatment was 2.1 years. Twenty-six of the 28 patients were treated with oral phosphate and vitamin D (alfacalcidol) supplements (Table 1). Two patients were diagnosed at the time of inclusion, and had not started any treatment at that point. Treatment and complications in XLHR patients In this group, the median age at the start of treatment with oral phosphate and alfacalcidol was 1.0 year (range 0.2-15.6), and ten of 19 children started treatment before the age of 1 year. Information concerning the development of nephrocalcinosis was available for 20 of 21 XLHR patients, and nephrocalcinosis was diagnosed in nine of 20 (45%), at a median age of 4 years 6 months (range 1 year-5 years 5 months), after a median time on treatment of 1 year 5 months (range 8 months-4 years 5 months). The median time on treatment for patients without registered nephrocalcinosis was 7 years 2 months (range 0-14 years 7 months). All nine XLHR patients who developed nephrocalcinosis did so within 5 years of treatment. Of the 11 patients without nephrocalcinosis, only four had been treated for 5 years or more, and were included in further comparisons. The prevalence of nephrocalcinosis in this subgroup was nine of 13 (69%). There was a trend toward a higher average daily dose of phosphate (given as mg/kg per day of elemental phosphorus) during the years before the diagnosis of nephrocalcinosis compared to the daily phosphate dose during the first 5 treatment years in patients without nephrocalcinosis (Fig. 2A) (median 61.0 mg/kg per day (range 12.1-79.0) and median 44.8 mg/kg per day (range 13.8-64.7) respectively). Moreover, there was a tendency toward earlier start of treatment in children who developed nephrocalcinosis compared with children who did not (median 0.5 year vs 1 year; range 0.2-4.4 vs 0.6-3.6), and seven of nine children with nephrocalcinosis and two of four children without nephrocalcinosis had started treatment before 1 year of age. There were no differences in the starting doses of phosphate and alfacalcidol, the average daily dose of alfacalcidol, the serum level of PTH at diagnosis, the maximum registered serum PTH, or the maximum registered urine calcium/creatinine ratio (U-Ca/creatinine; results not shown). Furthermore, the groups did not differ with respect to the occurrence of skeletal symptoms at diagnosis, dental involvement at diagnosis, persistent bowing at the last registered consultation, or delta height z-score (not shown).
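For illustration, the nonparametric comparisons reported above (Mann-Whitney U for dose medians, Fisher's exact test for the 2x2 treatment-start table) could be run along these lines; the dose arrays are placeholders, since the patient-level values are not published, while the 2x2 counts (7/9 vs 2/4 starting before 1 year of age) are taken from the text:

import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact

# hypothetical average daily doses of elemental phosphorus (mg/kg per day)
dose_nc = np.array([61.0, 55.2, 79.0, 48.3, 66.1])   # nephrocalcinosis group
dose_no_nc = np.array([44.8, 13.8, 38.5, 64.7])      # no nephrocalcinosis
u_stat, p_dose = mannwhitneyu(dose_nc, dose_no_nc, alternative='two-sided')

# counts from the text: started treatment before vs after 1 year of age
table = [[7, 2],   # nephrocalcinosis: 7 of 9 started before 1 year
         [2, 2]]   # no nephrocalcinosis: 2 of 4 started before 1 year
odds_ratio, p_start = fisher_exact(table)
print(p_dose, p_start)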
Information concerning parathyroid state was available in 18 patients, of whom 16 had elevated levels of total intact PTH at the time of diagnosis (Table 1 and Supplementary Table 2a). All patients developed transient HPT during treatment in the face of normocalcemia. As seen in Fig. 2B, there was a positive association between the maximum measured serum PTH and the daily dose of phosphate (given as mg/kg per day of elemental phosphorus). Tertiary HPT was diagnosed in one female XLHR patient at 13 years of age. She had been treated with phosphate and alfacalcidol from the age of 5 months, and during the 12.5 years of treatment, the average phosphate dose was 83.0 mg/kg per day (range 47.0-127.0 mg/kg per day) and the alfacalcidol dose 18.5 ng/kg per day (range 11.4-44.0 ng/kg per day). Treatment with calcimimetics was started, and she has avoided the need for parathyroidectomy. Treatment and complications in non-X-linked HH Nephrocalcinosis was diagnosed in one female patient with no detected mutation in any of the known genes at age 8 years 2 months, after 6 years 4 months of treatment with phosphate and alfacalcidol. Nephrocalcinosis was also demonstrated in the male patient with HHRH, before the start of treatment. Tertiary HPT was found in one female patient with no established mutations in any of the known genes. She had been treated for 14 years, with an average dose of elemental phosphorus of 45.9 mg/kg per day (range 38-80 mg/kg per day) and alfacalcidol 34.2 ng/kg per day (range 22-49.6 ng/kg per day) during the last 7 years before the development of permanently elevated PTH combined with hypercalcemia. The patient has responded well to treatment with a calcimimetic, and has so far not needed parathyroidectomy. Discussion We have presented the first national cohort of HH and XLHR in children, describing the prevalence, genotypes, phenotypic spectrum, and response to and complications of treatment in the Norwegian pediatric population. The prevalence of XLHR in Norwegian children was one in 60 000. Earlier reports from regional cohorts, with a risk of selection bias, have found the prevalence of XLHR to be ~1 in 20 000. Studies of large pedigrees of XLHR patients have reported a low penetrance of skeletal manifestations in hypophosphatemic female family members, whereas all hypophosphatemic males had skeletal manifestations of disease. Hence, there is a possibility of undiagnosed XLHR in Norwegian females from pedigrees without affected males. However, the ratio of female to male patients in our cohort was 16:5, compared to the expected ratio of 2:1 for X-linked dominant disorders; a large proportion of undiagnosed females thus seems unlikely. Since our study included only children already in contact with health care, and asymptomatic members of the pedigrees were not tested for hypophosphatemia, we cannot rule out hypophosphatemic second-degree relatives. It is therefore possible that the prevalence of HH and XLHR in the Norwegian pediatric population may be higher than one in 45 000 and one in 60 000 respectively. We identified the genotype responsible for HH in 79% of pedigrees in this population-based cohort, and PHEX mutations comprised 87% of the verified mutations. This supports what has been found by others, and confirms that XLHR is the most common variant of HR. Of the 13 PHEX mutations, nine (69%) had not been reported earlier (ExAC Browser, accessed 21.05.15, http://exac.broadinstitute.org/gene/ENSG00000102174), demonstrating that most mutations in this gene are private.
We have previously reported two male siblings with the first identified association between compound heterozygous mutations in FAM20C and FGF23-dependent hypophosphatemia in humans. None of the patients had mutations in FGF23, DMP1, ENPP1, or KL, confirming that mutations in these genes rarely seem to be the cause of HH. In four patients we did not find the likely disease-causing mutation. However, as illustrated by our finding of FAM20C mutations, there are possibilities of mutations in other genes associated with pathways involving FGF23, phosphate reabsorption, and tissue mineralization. One adolescent male was compound heterozygous for mutations in the SLC34A3 gene. He had no manifestations of rickets, normal growth and bone mineral density, and came to medical attention because of recurrent kidney stones, accompanied by hypercalciuria, hypophosphatemia, suppressed PTH, and high 1,25(OH)2D. He had a novel splicing mutation, c.757-1G>A, affecting the conserved splice acceptor site of intron 7 and predicted to cause aberrant splicing, and a previously reported intronic deletion mutation, c.925+20_926-48del. Earlier studies have shown that about 10% of homozygous and 16% of compound heterozygous carriers of mutations in SLC34A3 presented with renal calcifications without evidence of skeletal involvement. Thus, our case is consistent with the phenotypic and genotypic heterogeneity in SLC34A3-related conditions, including HHRH. When comparing nonsense PHEX mutations with missense PHEX mutations likely to reduce protein function, we did not find differences in growth, severity of skeletal or dental disease, or prevalence of treatment complications based on the type of mutation. Our findings confirm the results of another recent study, whereas other studies have suggested an association between truncating mutations and a more severe skeletal phenotype. However, even in subjects with the same genotype, the skeletal phenotype seems to be very variable and individual. This might reflect influence from other genetic variants in mineralization and phosphate metabolism. Interestingly, it was recently reported that patients homozygous or heterozygous for the FGF23 sequence variant c.C716T (p.T239M, rs7955866) had significantly lower levels of serum phosphate and lower renal TmP/GFR than patients homozygous for the WT allele. Another research group has reported a weak, but significant, association between the c.C716T variant of FGF23 and lower TmP/GFR and lower plasma intact PTH in healthy children and adults. In neither study was it possible to show significantly higher levels of serum iFGF23 in subjects carrying the c.C716T variant. Evaluation of phenotype in XLHR showed that growth was compromised, and there was a tendency for lower height z-scores in males than in females. Also, we found a trend for males having a higher proportion of skeletal and dental manifestations than females. As discussed above, some studies point to a milder phenotype in females, with slight hypophosphatemia and mild or no overt skeletal disease. There are also reports of slightly lower serum levels of phosphate and more severe skeletal disease in male than in female XLHR patients. Other studies have reported no gender differences in skeletal phenotype, but a more severe dental phenotype in postpubertal males than females. Thus, our findings support the notion of a more severe mineralization defect in males than in females.
Dental involvement seemed to be less common in the patients who started treatment before 1 year of age, suggesting the importance of proper mineralization of dentin prior to the eruption of teeth. On the other hand, starting treatment before the age of 1 year did not lead to an improved height z-score at the last registered consultation. Some earlier studies have concluded that an early start of treatment had a positive effect on linear growth. In one study, however, the height z-score was generally higher in those who started treatment before the age of 1 year compared with those who started later, but it declined over time for those who started treatment early and improved in those who started treatment later. We found that treatment with phosphate and vitamin D improved mineral homeostasis and rickets, but did not fully correct skeletal axis deviation and corrected the growth deficiency in HR only to a lesser extent. This adds support to the theory that FGF23 may play a role in the normal physiology of mineralized tissues independently of phosphate regulation. Treatment with phosphate will lead to transient increases in serum phosphate, which trigger production and release of FGF23 and PTH, further aggravating the skeletal phenotype. Novel therapy with FGF23-neutralizing antibodies has shown that inhibition of excess FGF23 activity corrects growth deficiency in mice, and anti-FGF23 antibodies are currently being tested in human XLHR. It is possible that longitudinal growth in HH patients reflects the individual severity of, and response to, a disturbed FGF23 homeostasis, rather than the severity of the hypophosphatemia itself. The patients who developed nephrocalcinosis had started treatment earlier and had received higher daily doses of phosphate than patients without nephrocalcinosis, but did not have better growth outcomes. Renal function remained normal in all patients, except for transient low-grade renal failure seen in the XLHR patient with tertiary HPT. Our results strengthen the association between higher phosphate doses and the development of nephrocalcinosis found in earlier studies. Early start of treatment as a risk factor for nephrocalcinosis has been found by some, but not by others. The prevalence of nephrocalcinosis in patients receiving combination therapy with phosphate and calcitriol is reported to be from 33 to 80% (median 59%), but long-term follow-up of mild nephrocalcinosis in XLHR does not seem to show an effect on renal function. As discussed above, treatment with phosphate and calcitriol has a certain positive effect on growth, but only phosphate-treated patients develop nephrocalcinosis. This again probably reflects that current treatment options are suboptimal, both when considering skeletal outcome and the rate of complications. Elevated serum levels of PTH were found in ten of 15 XLHR patients before the start of treatment, and all patients developed HPT during the course of treatment. Our findings add to other reports of high-normal or slightly elevated levels of PTH in hypophosphatemic untreated XLHR patients. In normal subjects, hypophosphatemia will, through an increase in 1,25(OH)2D, reduce PTH levels. Evidence also suggests an inhibitory effect of FGF23 on PTH production. The explanation for the inappropriate PTH response in untreated HR, and the details of the interactions between phosphate, FGF23, and PTH, still need further clarification.
Secondary HPT caused by oral phosphate supplements can be counteracted by increasing the doses of calcitriol, with the risk of developing hypercalciuria and nephrocalcinosis, or by reducing the phosphate dose, with the risk of worsening rickets. However difficult, successful management of HPT in XLHR is important, as HPT has been associated with the development of hypertension and renal failure, cardiac failure, and also brown tumor of the mandible. Two patients, one with XLHR, developed tertiary HPT after long-term use of phosphate supplements. The XLHR patient had received relatively high doses of phosphate and relatively low doses of alfacalcidol for more than 10 years. Tertiary HPT has been reported in 36 cases of HR, and prolonged treatment with high doses of phosphate supplements seems to be a risk factor. There are reports of successful treatment of tertiary HPT with cinacalcet in children and adults, but safety concerns have stopped further clinical trials investigating the effects of cinacalcet in children. A recent report suggests that the vitamin D analog paricalcitol can suppress PTH elevated secondary to treatment in XLHR. However, careful monitoring of treatment to ensure the lowest effective phosphate dose is very important, in order to heal rickets while at the same time reducing the risk of tertiary HPT. The observations from this study support recently published guidelines on the treatment and monitoring of HR in children. We recommend that combined treatment with oral phosphate and activated vitamin D (calcitriol or alfacalcidol) is started once the diagnosis has been made. Most children respond well to a calcitriol dose of 20-30 ng/kg per day (divided in two doses) or alfacalcidol 30-50 ng/kg per day (single dose) and an elemental phosphorus dose of 20-40 mg/kg per day (divided in four doses), with reduced signs of rickets and skeletal deformities. The starting doses of phosphate should be kept low to reduce gastrointestinal side effects, and to avoid complications clinical and biochemical controls should be performed at least every 3 months, supplemented with skeletal X-rays every 2 years and renal ultrasonography every 2-5 years. To avoid HPT, the aim should not be normalization of serum phosphate, but the lowest effective dose that promotes growth and heals rickets. To minimize the risk of nephrocalcinosis, hypercalciuria, defined as a U-Ca/creatinine ratio >0.87 mmol/mmol, should be avoided. One strength of our study is that the combined data from the NPR and all pediatric centers in Norway allowed us to collect a complete national material of childhood HH. This allowed for the estimation of a national prevalence, and adds information to the literature on the epidemiology of hereditary HR. Moreover, we have identified new mutations in known and novel genes, expanding the genetic diversity of HH with and without rickets. On the other hand, the study is limited by the size of the cohort and the retrospective design, implying that we could not ensure uniform collection of information from the clinical, biochemical, and radiological examinations. Furthermore, we did not do genetic testing on normophosphatemic, asymptomatic siblings, as predictive genetic testing of children is not allowed in Norway according to the Biotechnology Act. This means there is a possibility of undiagnosed subclinical cases. In conclusion, we have presented the first complete national cohort of HH in children. The prevalence of XLHR seems to be lower in Norwegian children than reported earlier.
Supplementary data This is linked to the online version of the paper at http://dx.doi.org/10.1530/EJE-15-0515. Declaration of interest The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. Funding This research was supported by a PhD grant from the University of Bergen. Author contribution statement S Rafaelsen, H Raeder, S Johansson, and R Bjerknes designed the study; S Rafaelsen collected the data; and S Rafaelsen, H Raeder, S Johansson, and R Bjerknes contributed to data analysis and interpretation. S Rafaelsen and R Bjerknes drafted the manuscript, whereas all authors contributed to the revision and approved the final version of the manuscript.
Oklahoma GOP Rep. Jim Bridenstine was facing a tough fight to win confirmation as NASA chief. Democrat Doug Jones's arrival in the Senate makes it tougher. WASHINGTON — President Trump has renominated Oklahoma GOP Congressman Jim Bridenstine to be NASA's next administrator, even as the already murky prospect of his confirmation grows dimmer now that Democrat Doug Jones of Alabama has joined the Senate. Jones's arrival leaves only 51 Republicans in the 100-seat chamber. One of those Republicans — Marco Rubio of Florida — is expected to oppose the nominee. And with Democrats aligned against Bridenstine, a tie-breaking vote by Vice President Mike Pence might be the only chance to save the nomination. Complicating matters is the failing health of two GOP senators — Thad Cochran of Mississippi and John McCain of Arizona — whose attendance now appears crucial to Bridenstine's successful confirmation. So what are the odds the three-term lawmaker from Tulsa wins the job? "Not great," said John Logsdon, former director of the Space Policy Institute at George Washington University. "But I doubt the White House would have resubmitted the nomination if they didn't calculate that there's at least a chance," he said. "It depends on how much the White House wants the nomination and is willing to use what leverage it may have to get the votes." Don't expect any support from Democratic senators. They have unified behind Florida Sen. Bill Nelson's call to block the confirmation due to Bridenstine's past skepticism of climate change and the senator's concern that having a partisan lawmaker at the helm could complicate key space missions. Bridenstine squeaked through the committee's confirmation process in November, approved 14-13 on a party-line vote with every Democrat opposed. NASA employs about 18,000 workers and has an annual budget of roughly $19 billion. It is primarily charged with conducting space exploration missions, developing supersonic aircraft, and launching satellites that measure changes in the Earth's climate and ocean temperatures. Rubio has stopped short of an absolute 'no,' but he has expressed deep misgivings about having an elected politician run the space agency. It hasn't helped that Bridenstine appeared in ads on behalf of Texas GOP Sen. Ted Cruz during the 2016 presidential campaign that suggested Rubio, then a candidate for the White House, was soft on terror because the Florida senator had supported immigration reform. Bridenstine was one of more than 70 nominations the White House announced Monday evening. Many of the nominees were put forward last year, but their names had to be resubmitted because the Senate failed to act on them before the end of 2017. No Trump nominee for an administration post has been on the losing end of a confirmation vote by the full Senate, and it's doubtful the president would want to test that record. "It is unlikely that a nomination would be brought to the floor for a vote if Senate leadership was not certain it would pass," Marcia S. Smith writes in SpacePolicyOnline.com. The agency has been run for the past year by acting administrator Robert M. Lightfoot Jr. Logsdon credits Lightfoot, who has spent nearly 30 years at the agency, with providing "very good leadership" during uncertain times. "But every day that goes by, they say: 'What would change if Bridenstine were here?'" Logsdon said. "Leaving an organization in this kind of leadership suspense cannot be healthy."
"""
Supporting file to create and set parameters for scikit-learn classifiers
and some preprocessing functions that support classification

Copyright (C) 2014-2018 <NAME> <<EMAIL>>
"""

import os
import pickle
import logging
import random
import collections
import traceback
import itertools
from functools import partial

import numpy as np
import pandas as pd
# import matplotlib.pyplot as plt
from scipy import interp
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_random
from sklearn import grid_search, metrics
from sklearn import preprocessing, feature_selection, decomposition
from sklearn import cluster
from sklearn import ensemble, neighbors, svm, tree
from sklearn import pipeline, linear_model, neural_network
from sklearn import model_selection

import imsegm.labeling as seg_lbs
import imsegm.utils.experiments as tl_expt

# NAME_FILE_RESULTS = 'results.csv'
TEMPLATE_NAME_CLF = 'classifier_{}.pkl'
DEFAULT_CLASSIF_NAME = 'RandForest'
DEFAULT_CLUSTERING = 'kMeans'
# DEFAULT_MIN_NB_SPL = 25
NB_JOBS_CLASSIF_SEARCH = 5
NB_CLASSIF_SEARCH_ITER = 250
NAME_CSV_FEATURES_SELECT = 'feature_selection.csv'
NAME_CSV_CLASSIF_CV_SCORES = 'classif_{}_cross-val_scores-{}.csv'
NAME_CSV_CLASSIF_CV_ROC = 'classif_{}_cross-val_ROC-{}.csv'
NAME_TXT_CLASSIF_CV_AUC = 'classif_{}_cross-val_AUC-{}.txt'
METRIC_AVERAGES = ('macro', 'weighted')
METRIC_SCORING = ('f1_macro', 'accuracy', 'precision_macro', 'recall_macro')
# rounding unique features, in case to detail precision
ROUND_UNIQUE_FTS_DIGITS = 3

DICT_SCORING = {
    'f1': metrics.f1_score,
    'accuracy': metrics.accuracy_score,
    'precision': metrics.precision_score,
    'recall': metrics.recall_score,
}


def create_classifiers(nb_jobs=-1):
    """ create all classifiers with default parameters

    :param int nb_jobs: number of parallel jobs if possible
    :return {str: clf}:

    >>> classifs = create_classifiers()
    >>> classifs  # doctest: +ELLIPSIS
    {...}
    >>> sum([isinstance(create_clf_param_search_grid(k), dict)
    ...      for k in classifs.keys()])
    7
    >>> sum([isinstance(create_clf_param_search_distrib(k), dict)
    ...      for k in classifs.keys()])
    7
    """
    clfs = {
        'RandForest': ensemble.RandomForestClassifier(n_estimators=20,
                                                      # oob_score=True,
                                                      min_samples_leaf=2,
                                                      min_samples_split=3,
                                                      n_jobs=nb_jobs),
        'GradBoost': ensemble.GradientBoostingClassifier(subsample=0.25,
                                                         warm_start=False,
                                                         max_depth=6,
                                                         min_samples_leaf=6,
                                                         n_estimators=200,
                                                         min_samples_split=7),
        'LogistRegr': linear_model.LogisticRegression(solver='sag',
                                                      n_jobs=nb_jobs),
        'KNN': neighbors.KNeighborsClassifier(n_jobs=nb_jobs),
        'SVM': svm.SVC(kernel='rbf', probability=True,
                       tol=2e-3, max_iter=5000),
        'DecTree': tree.DecisionTreeClassifier(),
        # 'RBM': create_pipeline_neuron_net(),
        'AdaBoost': ensemble.AdaBoostClassifier(n_estimators=5),
        # 'NuSVM-rbf': svm.NuSVC(kernel='rbf', probability=True),
    }
    return clfs
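# Usage sketch (illustrative, not part of the module's API): training one of
# the default classifiers on synthetic data; only standard scikit-learn calls
# are used and the dataset parameters below are arbitrary.
#
# >>> from sklearn.datasets import make_classification
# >>> fts, lbs = make_classification(n_samples=120, n_features=6,
# ...                                n_informative=4, random_state=0)
# >>> clf = create_classifiers(nb_jobs=1)['RandForest']
# >>> clf.fit(fts, lbs).predict(fts[:3])  # doctest: +SKIP
# array([...])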
""" # create the pipeline components = [('scaler', preprocessing.StandardScaler())] if not pca_coef is None: components += [('reduce_dim', decomposition.PCA(pca_coef))] components += [('classif', create_classifiers()[name_classif])] clf_pipeline = pipeline.Pipeline(components) return clf_pipeline def create_clf_param_search_grid(name_classif=DEFAULT_CLASSIF_NAME): """ create parameter grid for search :param str name_classif: key name of selected classifier :return: {str: ...} >>> create_clf_param_search_grid('RandForest') # doctest: +ELLIPSIS {'classif__...': ...} >>> dict_classif = create_classifiers() >>> all(len(create_clf_param_search_grid(k)) > 0 for k in dict_classif) True """ def _log_space(b, e, n): return np.unique(np.logspace(b, e, n).astype(int)).tolist() clf_params = { 'RandForest': { 'classif__n_estimators': _log_space(0, 2, 40), 'classif__min_samples_split': [2, 3, 5, 7, 9], 'classif__min_samples_leaf': [1, 2, 4, 6, 9], 'classif__criterion': ('gini', 'entropy'), }, 'KNN': { 'classif__n_neighbors': _log_space(0, 2, 20), 'classif__algorithm': ('ball_tree', 'kd_tree'), # , 'brute' 'classif__weights': ('uniform', 'distance'), 'classif__leaf_size': _log_space(0, 1.5, 10), }, 'SVM': { 'classif__C': np.linspace(0.2, 1., 8).tolist(), 'classif__kernel': ('poly', 'rbf', 'sigmoid'), 'classif__degree': [1, 2, 4, 6, 9], }, 'DecTree': { 'classif__criterion': ('gini', 'entropy'), 'classif__min_samples_split': [2, 3, 5, 7, 9], 'classif__min_samples_leaf': range(1, 7, 2), }, 'GradBoost': { # 'clf__loss': ('deviance', 'exponential'), # only for 2 cls 'classif__n_estimators': _log_space(0, 2, 25), 'classif__max_depth': range(1, 7, 2), 'classif__min_samples_split': [2, 3, 5, 7, 9], 'classif__min_samples_leaf': range(1, 7, 2), }, 'LogistRegr': { 'classif__C': np.linspace(0., 1., 5).tolist(), # 'classif__penalty': ('l1', 'l2'), # 'classif__dual': (False, True), 'classif__solver': ('lbfgs', 'sag'), # 'classif__loss': ('deviance', 'exponential'), # only for 2 cls }, 'AdaBoost': { 'classif__n_estimators': _log_space(0, 2, 20), } } if name_classif not in clf_params.keys(): clf_params[name_classif] = {} logging.warning('not defined classifier name "%s"', name_classif) return clf_params[name_classif] def create_clf_param_search_distrib(name_classif=DEFAULT_CLASSIF_NAME): """ create parameter distribution for random search :param name_classif: str, key name of classif :return: {str: ...} >>> create_clf_param_search_distrib() # doctest: +ELLIPSIS {...} >>> dict_classif = create_classifiers() >>> all(len(create_clf_param_search_distrib(k)) > 0 for k in dict_classif) True """ clf_params = { 'RandForest': { 'classif__n_estimators': sp_randint(2, 25), 'classif__min_samples_split': sp_randint(2, 9), 'classif__min_samples_leaf': sp_randint(1, 7), }, 'KNN': { 'classif__n_neighbors': sp_randint(5, 25), 'classif__algorithm': ('ball_tree', 'kd_tree'), # , 'brute' 'classif__weights': ('uniform', 'distance'), # 'clf__p': [1, 2], }, 'SVM': { 'classif__C': sp_random(0., 1.), 'classif__kernel': ('poly', 'rbf', 'sigmoid'), 'classif__degree': sp_randint(2, 9), }, 'DecTree': { 'classif__criterion': ('gini', 'entropy'), 'classif__min_samples_split': sp_randint(2, 9), 'classif__min_samples_leaf': sp_randint(1, 7), }, 'GradBoost': { # 'clf__loss': ('deviance', 'exponential'), # only for 2 cls 'classif__n_estimators': sp_randint(10, 200), 'classif__max_depth': sp_randint(1, 7), 'classif__min_samples_split': sp_randint(2, 9), 'classif__min_samples_leaf': sp_randint(1, 7), }, 'LogistRegr': { 'classif__C': sp_random(0., 1.), # 
            # 'classif__penalty': ('l1', 'l2'),
            # 'classif__dual': (False, True),
            'classif__solver': ('newton-cg', 'lbfgs', 'sag'),
            # 'classif__loss': ('deviance', 'exponential'),  # only for 2 cls
        },
        'AdaBoost': {
            'classif__n_estimators': sp_randint(2, 100),
        }
    }
    # if this classif is not set use no params
    if name_classif not in clf_params.keys():
        clf_params[name_classif] = {}
    return clf_params[name_classif]


def create_pipeline_neuron_net():
    """ create classifier for a simple neural network

    :return clf: sklearn pipeline with RBM and logistic regression

    >>> create_pipeline_neuron_net()  # doctest: +ELLIPSIS
    Pipeline(...)
    """
    # Models we will use
    logistic = linear_model.LogisticRegression()
    rbm = neural_network.BernoulliRBM(learning_rate=0.05, n_components=35,
                                      n_iter=299, verbose=False)
    clf = pipeline.Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])
    return clf


# def append_matrix_vertical(old, new):
#     """ append a matrix after another one in vertical direction
#
#     :param old: np.matrix<total*l>
#     :param new: np.matrix<total*k>
#     :return: np.matrix<total*(k+l)>
#
#     >>> a, b = np.zeros((10, 5)), np.zeros((10, 4))
#     >>> append_matrix_vertical(a, b).shape
#     (10, 9)
#     """
#     if old is None:
#         old = new.copy()
#     else:
#         # logging.debug('append V:{} <- {}'.format(old.shape, new.shape))
#         old = np.hstack((old, new))
#     return old


def compute_classif_metrics(y_true, y_pred, metric_averages=METRIC_AVERAGES):
    """ compute standard metrics for multi-class classification

    :param [int] y_true: ground-truth labels
    :param [int] y_pred: predicted labels
    :return {str: float}:

    >>> np.random.seed(0)
    >>> y_true = np.random.randint(0, 3, 25) * 2
    >>> y_pred = np.random.randint(0, 2, 25) * 2
    >>> d = compute_classif_metrics(y_true, y_true)
    >>> d['accuracy']
    1.0
    >>> d['confusion']
    [[10, 0, 0], [0, 10, 0], [0, 0, 5]]
    >>> d = compute_classif_metrics(y_true, y_pred)
    >>> d['accuracy']  # doctest: +ELLIPSIS
    0.32...
    >>> d['confusion']
    [[3, 7, 0], [5, 5, 0], [1, 4, 0]]
    >>> d = compute_classif_metrics(y_pred, y_pred)
    >>> d['accuracy']
    1.0
    """
    y_true = np.array(y_true)
    y_pred = np.array(y_pred)
    assert y_true.shape == y_pred.shape, \
        'prediction (%i) and annotation (%i) should be equal' \
        % (len(y_true), len(y_pred))
    logging.debug('unique lbs true: %s, predict %s',
                  repr(np.unique(y_true)), repr(np.unique(y_pred)))
    uq_labels = np.unique(np.hstack((y_true, y_pred)))
    # in case there are just two classes, relabel them as [0, 1];
    # sklearn error: "ValueError: pos_label=1 is not a valid label:
    # array([ 0, 255])"
    if len(uq_labels) <= 2:
        # NOTE, this is temporal just for purposes of computing statistic
        y_true = relabel_sequential(y_true, uq_labels)
        y_pred = relabel_sequential(y_pred, uq_labels)

    # http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
    EVAL_STR = 'EVALUATION: {:<2} PRE: {:.3f} REC: {:.3f} F1: {:.3f} S: {:>6}'
    try:
        p, r, f, s = metrics.precision_recall_fscore_support(y_true, y_pred)
        for l, _ in enumerate(p):
            logging.debug(EVAL_STR.format(l, p[l], r[l], f[l], s[l]))
    except Exception:
        logging.error(traceback.format_exc())

    dict_metrics = {
        'ARS': metrics.adjusted_rand_score(y_true, y_pred),
        # 'F1': metrics.f1_score(y_true, y_pred),
        'accuracy': metrics.accuracy_score(y_true, y_pred),
        # 'precision': metrics.precision_score(y_true, y_pred),
        'confusion': metrics.confusion_matrix(y_true, y_pred).tolist(),
        # 'report': metrics.classification_report(labels, predicted),
    }
    # compute aggregated precision, recall, f-score, support
    names = ['precision', 'recall', 'f1', 'support']
    for avg in metric_averages:
        try:
            mtr = metrics.precision_recall_fscore_support(y_true, y_pred,
                                                          average=avg)
            res = dict(zip(['{}_{}'.format(n, avg) for n in names], mtr))
        except Exception:
            logging.error(traceback.format_exc())
            res = dict(zip(['{}_{}'.format(n, avg) for n in names], [-1] * 4))
        dict_metrics.update(res)
    return dict_metrics


def compute_classif_stat_segm_annot(annot_segm_name, drop_labels=None,
                                    relabel=False):
    """ compute classification statistic between annotation and segmentation

    :param (ndarray, ndarray, str) annot_segm_name: annotation, segmentation
        and an image name
    :param [int] drop_labels: labels to be ignored
    :param bool relabel: whether to relabel the segmentation for max. overlap
    :return {str: ...}:

    >>> np.random.seed(0)
    >>> annot = np.random.randint(0, 2, (5, 10))
    >>> segm = np.random.randint(0, 2, (5, 10))
    >>> d = compute_classif_stat_segm_annot((annot, annot, 'ttt'), relabel=True,
    ...                                     drop_labels=[5])
    >>> d['(FP+FN)/(TP+FN)']  # doctest: +ELLIPSIS
    0.0
    >>> d['(TP+FP)/(TP+FN)']  # doctest: +ELLIPSIS
    1.0
    >>> d = compute_classif_stat_segm_annot((annot, segm, 'ttt'), relabel=True,
    ...                                     drop_labels=[5])
    >>> d['(FP+FN)/(TP+FN)']  # doctest: +ELLIPSIS
    0.846...
    >>> d['(TP+FP)/(TP+FN)']  # doctest: +ELLIPSIS
    1.153...
    >>> d = compute_classif_stat_segm_annot((annot, segm + 1, 'ttt'),
    ...                                     relabel=False, drop_labels=[0])
    >>> d['confusion']
    [[13, 17], [0, 0]]
    """
    annot, segm, name = annot_segm_name
    assert segm.shape == annot.shape, 'dimensions do not match for ' \
                                      'segm: %s - annot: %s' \
                                      % (repr(segm.shape), repr(annot.shape))
    y_true, y_pred = annot.ravel(), segm.ravel()
    # filter particular labels
    if drop_labels is not None:
        mask = np.ones(y_true.shape, dtype=bool)
        for lb in drop_labels:
            mask[y_true == lb] = 0
            mask[y_pred == lb] = 0
        y_true = y_true[mask]
        y_pred = y_pred[mask]
    # relabel such that the classes maximally match
    if relabel:
        y_pred = seg_lbs.relabel_max_overlap_unique(y_true, y_pred,
                                                    keep_bg=False)
    dict_stat = compute_classif_metrics(y_true, y_pred,
                                        metric_averages=['macro'])
    # add binary metric
    if len(np.unique(y_pred)) == 2:
        dict_stat['(FP+FN)/(TP+FN)'] = compute_metric_fpfn_tpfn(y_true, y_pred)
        dict_stat['(TP+FP)/(TP+FN)'] = compute_metric_tpfp_tpfn(y_true, y_pred)
    # set the image name
    dict_stat['name'] = name
    return dict_stat


def compute_stat_per_image(segms, annots, names=None, nb_jobs=1,
                           drop_labels=None, relabel=False):
    """ compute statistic over multiple segmentations with annotation

    :param [ndarray] segms: segmentations
    :param [ndarray] annots: annotations
    :param [str] names: list of image names
    :param int nb_jobs: number of parallel jobs
    :return DF: pandas dataframe with statistic per image

    >>> np.random.seed(0)
    >>> img_true = np.random.randint(0, 3, (50, 100))
    >>> img_pred = np.random.randint(0, 2, (50, 100))
    >>> df = compute_stat_per_image([img_true], [img_true], nb_jobs=2,
    ...                             relabel=True)
    >>> df.iloc[0]  # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
    ARS                                                      1
    accuracy                                                 1
    confusion      [[1672, 0, 0], [0, 1682, 0], [0, 0, 1646]]
    f1_macro                                                 1
    precision_macro                                          1
    recall_macro                                             1
    support_macro                                         None
    Name: 0, dtype: object
    >>> df = compute_stat_per_image([img_true], [img_pred], drop_labels=[-1])
    >>> df.iloc[0]  # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
    ARS                                                     0.0...
    accuracy                                                0.3384
    confusion    [[836, 826, 770], [836, 856, 876], [0, 0, 0]]
    f1_macro                                              0.270077
    precision_macro                                       0.336306
    recall_macro                                          0.225694
    support_macro                                             None
    Name: 0, dtype: object
    """
    assert len(segms) == len(annots), \
        'size of segment. (%i) and annot. (%i) should be equal' \
        % (len(segms), len(annots))
    if names is None:
        names = map(str, range(len(segms)))
    _compute_stat = partial(compute_classif_stat_segm_annot,
                            drop_labels=drop_labels, relabel=relabel)
    iterate = tl_expt.WrapExecuteSequence(_compute_stat,
                                          zip(annots, segms, names),
                                          nb_jobs=nb_jobs,
                                          desc='statistic per image')
    list_stat = list(iterate)
    df_stat = pd.DataFrame(list_stat)
    df_stat.set_index('name', inplace=True)
    return df_stat


def feature_scoring_selection(features, labels, names=None, path_out=''):
    """ find the best features and return the indexes
    http://scikit-learn.org/stable/auto_examples/linear_model/plot_sparse_recovery.html
    http://scikit-learn.org/stable/auto_examples/feature_selection/plot_feature_selection.html

    :param features: np.array<nb_spl, nb_fts>
    :param labels: np.array<nb_spl, 1>
    :param names: [str]
    :param path_out: str
    :return tuple: feature indices ordered by importance and the scoring table

    >>> from sklearn.datasets import make_classification
    >>> features, labels = make_classification(n_samples=250, n_features=5,
    ...                                        n_informative=3, n_redundant=0,
    ...                                        n_repeated=0,
    ...                                        n_classes=2, random_state=0,
    ...                                        shuffle=False)
    >>> indices, df_scoring = feature_scoring_selection(features, labels)
    >>> indices
    array([1, 0, 2, 3, 4])
    >>> df_scoring  # doctest: +NORMALIZE_WHITESPACE
              ExtTree     F-test     k-Best  variance
    feature
    1        0.248465   0.755881   0.755881  2.495970
    2        0.330818  58.944450  58.944450  1.851036
    3        0.221636   2.242583   2.242583  1.541042
    4        0.106441   4.022076   4.022076  0.965971
    5        0.092639   0.022651   0.022651  1.016170
    >>> features[:, 2] = 1
    >>> indices, df_scoring = feature_scoring_selection(features, labels)
    >>> indices
    array([1, 0, 3, 4, 2])
    """
    logging.info('Feature selection for %s', repr(names))
    logging.debug('Features: %s and labels: %s',
                  repr(features.shape), repr(labels.shape))
    if not isinstance(features, np.ndarray):
        features = np.array(features)
    # Build a forest and compute the feature importances
    forest = ensemble.ExtraTreesClassifier(n_estimators=125, random_state=0)
    forest.fit(features, labels)
    f_test, _ = feature_selection.f_regression(features, labels)
    k_best = feature_selection.SelectKBest(feature_selection.f_classif, k='all')
    k_best.fit(features, labels)
    variances = feature_selection.VarianceThreshold().fit(features, labels)
    imp = {
        'ExtTree': forest.feature_importances_,
        # 'Lasso': np.abs(lars_cv.coef_),
        'k-Best': k_best.scores_,
        'variance': variances.variances_,
        'F-test': f_test,
    }
    # std = np.std([t.feature_importances_ for t in forest.estimators_], axis=0)
    indices = np.argsort(forest.feature_importances_)[::-1]
    if names is None or len(names) < features.shape[1]:
        names = map(str, range(1, features.shape[1] + 1))
    df_scoring = pd.DataFrame()
    for i, n in enumerate(names):
        dict_scores = {k: imp[k][i] for k in imp}
        dict_scores['feature'] = n
        df_scoring = df_scoring.append(dict_scores, ignore_index=True)
    df_scoring.set_index(['feature'], inplace=True)
    logging.debug(df_scoring)
    if os.path.exists(path_out):
        path_csv = os.path.join(path_out, NAME_CSV_FEATURES_SELECT)
        logging.debug('export Feature scoring to "%s"', path_csv)
        df_scoring.to_csv(path_csv)
    return indices, df_scoring
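# Usage sketch (illustrative only): a typical end-to-end flow combining the
# helpers in this module -- score features first, then train and export a
# classifier; the dataset is synthetic and the parameters are arbitrary.
#
# >>> from sklearn.datasets import make_classification
# >>> fts, lbs = make_classification(n_samples=200, n_features=6,
# ...                                n_informative=4, random_state=0)
# >>> idxs, df_scores = feature_scoring_selection(fts, lbs)  # doctest: +SKIP
# >>> clf, path_clf = create_classif_train_export(
# ...     'RandForest', fts, lbs, path_out='.')  # doctest: +SKIP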
of %s to "%s"', dict_classif, path_clf) with open(path_clf, 'wb') as f: pickle.dump(dict_classif, f) logging.debug('export finished') return path_clf def load_classifier(path_classif): """ load a previously exported classifier :param str path_classif: path to the exported classifier :return {str: ...}: """ assert os.path.exists(path_classif), 'missing: "%s"' % path_classif logging.info('import classif from "%s"', path_classif) if not os.path.exists(path_classif): logging.debug('classif does not exist') return None with open(path_classif, 'rb') as f: dict_clf = pickle.load(f) # dict_clf['name'] = classif_name logging.debug('load classif: %s', repr(dict_clf.keys())) return dict_clf def export_results_clf_search(path_out, clf_name, clf_search): """ export the results of the hyper-parameter search :param str path_out: path to directory for exporting classifier :param str clf_name: name of selected classifier :param object clf_search: """ assert os.path.isdir(path_out), 'missing folder: %s' % repr(path_out) fn_path_out = lambda s: os.path.join(path_out, 'classif_%s_%s.txt' % (clf_name, s)) with open(fn_path_out('search_params_scores'), 'w') as f: f.write('\n'.join([repr(gs) for gs in clf_search.grid_scores_])) with open(fn_path_out('search_params_best'), 'w') as f: params = clf_search.best_params_ rows = ['{:30s} {}'.format('"{}":'.format(k), params[k]) for k in params] f.write('\n'.join(rows)) def relabel_sequential(labels, uq_lbs=None): """ relabel a sequential vector starting from 0 :param [] labels: :return []: >>> relabel_sequential([0, 0, 0, 5, 5, 5, 0, 5]) [0, 0, 0, 1, 1, 1, 0, 1] """ labels = np.asarray(labels) if uq_lbs is None: uq_lbs = np.unique(labels) lut = np.zeros(np.max(uq_lbs) + 1) logging.debug('relabeling original %s to %s', repr(uq_lbs), range(len(uq_lbs))) for i, lb in enumerate(uq_lbs): lut[lb] = i labels_new = lut[labels].astype(labels.dtype).tolist() return labels_new def create_classif_train_export(clf_name, features, labels, cross_val=10, nb_search_iter=1, search_type='random', eval_metric='f1', nb_jobs=NB_JOBS_CLASSIF_SEARCH, path_out=None, params=None, pca_coef=0.98, feature_names=None, label_names=None): """ create classifier and train it once or find the best parameters; when the output path is given, export it for later use :param str clf_name: name of selected classifier :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :param cross_val: :param int nb_search_iter: number of search iterations for hyper-parameters :param str path_out: path to directory for exporting classifier :param {str: ...} params: dictionary of parameters :param [str] feature_names: list of extracted features - names :param [str] label_names: list of label names :return: (obj, str): classifier, path to the exported classifier >>> np.random.seed(0) >>> lbs = np.random.randint(0, 3, 150) >>> fts = np.random.random((150, 5)) + np.tile(lbs, (5, 1)).T >>> clf, p_clf = create_classif_train_export('AdaBoost', fts, lbs, ... path_out='', search_type='grid') # doctest: +ELLIPSIS Fitting ... >>> clf # doctest: +ELLIPSIS Pipeline(...) >>> clf, p_clf = create_classif_train_export('RandForest', fts, lbs, ... path_out='.', nb_search_iter=2) # doctest: +ELLIPSIS Fitting ... >>> clf # doctest: +ELLIPSIS Pipeline(...) 
>>> p_clf './classifier_RandForest.pkl' >>> os.remove(p_clf) >>> import glob >>> files = glob.glob(os.path.join('.', 'classif_*.txt')) >>> sorted(files) # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS ['./classif_RandForest_search_params_best.txt', './classif_RandForest_search_params_scores.txt'] >>> for p in files: os.remove(p) """ assert len(labels) > 0, 'some labels has to be given' features = np.nan_to_num(features) assert len(features) == len(labels), \ 'features (%i) and labels (%i) should have equal length' \ % (len(features), len(labels)) assert features.ndim == 2 and features.shape[1] > 0, \ 'at least one feature is required' logging.debug('training data: %s, labels (%i): %s', repr(features.shape), len(labels), repr(collections.Counter(labels))) # gc.collect(), time.sleep(1) logging.info('create Classifier: %s', clf_name) clf_pipeline = create_clf_pipeline(clf_name, pca_coef) logging.debug('pipeline: %s', repr(clf_pipeline.steps)) if nb_search_iter > 1 or search_type == 'grid': # find the best params for the classif. logging.debug('Performing param search...') nb_labels = len(np.unique(labels)) clf_search = create_classif_search(clf_name, clf_pipeline, nb_labels=nb_labels, search_type=search_type, cross_val=cross_val, eval_scoring=eval_metric, nb_iter=nb_search_iter, nb_jobs=nb_jobs) # NOTE, this is temporal just for purposes of computing statistic clf_search.fit(features, relabel_sequential(labels)) logging.info('Best score: %s', repr(clf_search.best_score_)) clf_pipeline = clf_search.best_estimator_ best_parameters = clf_pipeline.get_params() logging.info('Best parameters set: \n %s', repr(best_parameters)) if path_out is not None and os.path.isdir(path_out): export_results_clf_search(path_out, clf_name, clf_search) # while there is no search, just train the best one clf_pipeline.fit(features, labels) if path_out is not None and os.path.isdir(path_out): path_classif = save_classifier(path_out, clf_pipeline, clf_name, params, feature_names, label_names) else: path_classif = path_out return clf_pipeline, path_classif def eval_classif_cross_val_scores(clf_name, classif, features, labels, cross_val=10, path_out=None, scorings=METRIC_SCORING): """ compute statistic on cross-validation schema http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html :param str clf_name: name of selected classifier :param obj classif: sklearn classifier :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :param object cross_val: :param str path_out: path for exporting statistic :param [str] scorings: list of used scorings :return DF: >>> labels = np.array([0] * 150 + [1] * 100 + [2] * 50) >>> data = np.tile(labels, (6, 1)).T.astype(float) >>> data += 0.5 - np.random.random(data.shape) >>> data.shape (300, 6) >>> from sklearn.cross_validation import StratifiedKFold >>> cv = StratifiedKFold(labels, n_folds=5, random_state=0) >>> classif = create_classifiers()[DEFAULT_CLASSIF_NAME] >>> eval_classif_cross_val_scores(DEFAULT_CLASSIF_NAME, classif, ... data, labels, cv) f1_macro accuracy precision_macro recall_macro 0 1.0 1.0 1.0 1.0 1 1.0 1.0 1.0 1.0 2 1.0 1.0 1.0 1.0 3 1.0 1.0 1.0 1.0 4 1.0 1.0 1.0 1.0 >>> labels[labels == 1] = 2 >>> cv = StratifiedKFold(labels, n_folds=3, random_state=0) >>> eval_classif_cross_val_scores(DEFAULT_CLASSIF_NAME, classif, ... 
data, labels, cv, path_out='.') f1_macro accuracy precision_macro recall_macro 0 1.0 1.0 1.0 1.0 1 1.0 1.0 1.0 1.0 2 1.0 1.0 1.0 1.0 >>> import glob >>> p_files = glob.glob(NAME_CSV_CLASSIF_CV_SCORES.replace('{}', '*')) >>> sorted(p_files) # doctest: +NORMALIZE_WHITESPACE ['classif_RandForest_cross-val_scores-all-folds.csv', 'classif_RandForest_cross-val_scores-statistic.csv'] >>> [os.remove(p) for p in p_files] # doctest: +ELLIPSIS [...] """ df_scoring = pd.DataFrame() for scoring in scorings: try: uq_labels = np.unique(labels) # ValueError: pos_label=1 is not a valid label: array([0, 2]) if len(uq_labels) <= 2: # NOTE, this is temporal just for purposes of computing stat. labels = relabel_sequential(labels, uq_labels) scores = model_selection.cross_val_score(classif, features, labels, cv=cross_val, scoring=scoring) logging.info('Cross-Val score (%s = %f):\n %s', scoring, np.mean(scores), repr(scores)) df_scoring[scoring] = scores except Exception: logging.error(traceback.format_exc()) if path_out is not None: assert os.path.exists(path_out), 'missing: "%s"' % path_out name_csv = NAME_CSV_CLASSIF_CV_SCORES.format(clf_name, 'all-folds') path_csv = os.path.join(path_out, name_csv) df_scoring.to_csv(path_csv) if len(df_scoring) > 1: df_stat = df_scoring.describe() logging.info('cross_val scores: \n %s', repr(df_stat)) if path_out is not None: assert os.path.exists(path_out), 'missing: "%s"' % path_out name_csv = NAME_CSV_CLASSIF_CV_SCORES.format(clf_name, 'statistic') path_csv = os.path.join(path_out, name_csv) df_stat.to_csv(path_csv) else: logging.warning('no statistic collected') return df_scoring def eval_classif_cross_val_roc(clf_name, classif, features, labels, cross_val, path_out=None, nb_thr=100): """ compute mean ROC curve on cross-validation schema http://scikit-learn.org/0.15/auto_examples/plot_roc_crossval.html :param str clf_name: name of selected classifier :param obj classif: sklearn classifier :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :param object cross_val: :param str path_out: path for exporting statistic :param int nb_thr: number of thresholds :return: >>> np.random.seed(0) >>> labels = np.array([0] * 150 + [1] * 100 + [3] * 50) >>> data = np.tile(labels, (6, 1)).T.astype(float) >>> data += np.random.random(data.shape) >>> data.shape (300, 6) >>> from sklearn.cross_validation import StratifiedKFold >>> cv = StratifiedKFold(labels, n_folds=5, random_state=0) >>> classif = create_classifiers()[DEFAULT_CLASSIF_NAME] >>> fp_tp, auc = eval_classif_cross_val_roc(DEFAULT_CLASSIF_NAME, classif, ... data, labels, cv, nb_thr=10) >>> fp_tp FP TP 0 0.000000 0.0 1 0.111111 1.0 2 0.222222 1.0 3 0.333333 1.0 4 0.444444 1.0 5 0.555556 1.0 6 0.666667 1.0 7 0.777778 1.0 8 0.888889 1.0 9 1.000000 1.0 >>> auc 0.94444444444444442 >>> labels[-50:] -= 1 >>> data[-50:, :] -= 1 >>> fp_tp, auc = eval_classif_cross_val_roc(DEFAULT_CLASSIF_NAME, classif, ... 
data, labels, cv, nb_thr=5) >>> fp_tp FP TP 0 0.00 0.0 1 0.25 1.0 2 0.50 1.0 3 0.75 1.0 4 1.00 1.0 >>> auc 0.875 """ mean_tpr = 0.0 mean_fpr = np.linspace(0, 1, nb_thr) labels_bin = np.zeros((len(labels), np.max(labels) + 1)) unique_labels = np.unique(labels) assert all(unique_labels >= 0), \ 'some labels are negative: %s' % repr(unique_labels) for lb in unique_labels: labels_bin[:, lb] = (labels == lb) count = 0 for train, test in cross_val: features_train = np.copy(features[train], order='C') labels_train = np.copy(labels[train], order='C') features_test = np.copy(features[test], order='C') classif.fit(features_train, labels_train) proba = classif.predict_proba(features_test) # Compute ROC curve and area the curve for i, lb in enumerate(unique_labels): fpr, tpr, _ = metrics.roc_curve(labels_bin[test, lb], proba[:, i]) fpr = [0.] + fpr.tolist() + [1.] tpr = [0.] + tpr.tolist() + [1.] mean_tpr += interp(mean_fpr, fpr, tpr) mean_tpr[0] = 0.0 count += 1. # roc_auc = metrics.auc(fpr, tpr) mean_tpr /= count mean_tpr[-1] = 1.0 # mean_auc = metrics.auc(mean_fpr, mean_tpr) df_roc = pd.DataFrame(np.array([mean_fpr, mean_tpr]).T, columns=['FP', 'TP']) auc = metrics.auc(mean_fpr, mean_tpr) if path_out is not None: assert os.path.exists(path_out), 'missing: "%s"' % path_out name_csv = NAME_CSV_CLASSIF_CV_ROC.format(clf_name, 'mean') path_csv = os.path.join(path_out, name_csv) df_roc.to_csv(path_csv) name_txt = NAME_TXT_CLASSIF_CV_AUC.format(clf_name, 'mean') with open(os.path.join(path_out, name_txt), 'w') as fp: fp.write(str(auc)) logging.debug('cross_val ROC: \n %s', repr(df_roc)) return df_roc, auc def search_params_cut_down_max_nb_iter(clf_parameters, nb_iter): """ create parameters list and count number of possible combination in case they are they are limited :param clf_parameters: {str: ...} :param nb_iter: int, nb of random tryes :return: int """ param_list = grid_search.ParameterSampler(clf_parameters, n_iter=nb_iter) param_grid = grid_search.ParameterGrid(param_list.param_distributions) try: # this works only in case the set of params is finite, otherwise crash if len(param_grid) < nb_iter: nb_iter = len(param_grid.param_grid) logging.debug('nb iter: -> %i', nb_iter) except Exception: logging.debug('something went wrong in cutting down nb iter') return nb_iter def create_classif_search(name_clf, clf_pipeline, nb_labels, search_type='random', cross_val=10, eval_scoring='f1', nb_iter=NB_CLASSIF_SEARCH_ITER, nb_jobs=NB_JOBS_CLASSIF_SEARCH): """ create sklearn search depending on spec. random or grid :param nb_iter: int, for random number of tries :param name_clf: str, name of classif. 
:param clf_pipeline: object :param cross_val: obj specific CV for fix train-test :param nb_jobs: int, nb jobs running in parallel :return: """ score_weight = 'weighted' if nb_labels > 2 else 'binary' scoring = metrics.make_scorer(DICT_SCORING[eval_scoring.lower()], average=score_weight) if search_type == 'grid': clf_parameters = create_clf_param_search_grid(name_clf) logging.info('init Grid search...') clf_search = grid_search.GridSearchCV( clf_pipeline, clf_parameters, scoring=scoring, cv=cross_val, n_jobs=nb_jobs, verbose=1, refit=True) else: clf_parameters = create_clf_param_search_distrib(name_clf) nb_iter = search_params_cut_down_max_nb_iter(clf_parameters, nb_iter) logging.info('init Randomized search...') clf_search = grid_search.RandomizedSearchCV( clf_pipeline, clf_parameters, scoring=scoring, cv=cross_val, n_jobs=nb_jobs, n_iter=nb_iter, verbose=1, refit=True) return clf_search def shuffle_features_labels(features, labels): """ take the set of features and labels and shuffle them together while keeping link between feature and its label :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :return: np.array<nb_samples, nb_features>, np.array,<nb_samples> >>> np.random.seed(0) >>> fts = np.random.random((5, 2)) >>> lbs = np.random.randint(0, 2, 5) >>> fts_new, lbs_new = shuffle_features_labels(fts, lbs) >>> np.array_equal(fts, fts_new) False >>> np.array_equal(lbs, lbs_new) False """ assert len(features) == len(labels), \ 'features (%i) and labels (%i) should have equal length' \ % (len(features), len(labels)) idx = list(range(len(labels))) logging.debug('shuffle indexes - %i', len(labels)) np.random.shuffle(idx) features = features[idx, :] labels = np.asarray(labels)[idx] return features, labels def convert_dict_label_features_2_vectors(dict_features): """ convert dictionary of features where key is the labels to vector of all features and related labels :param dict_features: {int: [[float] * nb_features] * nb_samples} :return: np.array<nb_samples, nb_features>, [int] """ features, labels = [], [] for k in dict_features: features += dict_features[k].tolist() labels += [k] * len(dict_features[k]) return np.array(features), labels def compose_dict_label_features(features, labels): """ convert vector of features and related labels to a dictionary of features where key is the lables :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :return: {int: np.array<nb, nb_features>} """ dict_features = dict() features = np.array(features) for lb in np.unique(labels): dict_features[lb] = features[labels == lb, :] return dict_features def down_sample_dict_features_random(dict_features, nb_samples): """ browse all label features and take random subset of features to have given nb_samples per class :param {} dict_features: {int: [[float] * nb_features] * nb} :param int nb_samples: :return {}: {int: [[float] * nb_features] * nb_samples} >>> np.random.seed(0) >>> d_fts = {'a': np.random.random((100, 3))} >>> d_fts = down_sample_dict_features_random(d_fts, 5) >>> d_fts['a'].shape (5, 3) """ dict_features_new = dict() for label in dict_features: features = dict_features[label] if len(features) <= nb_samples: dict_features_new[label] = features.copy() continue idx = list(range(len(features))) random.shuffle(idx) idx_select = idx[:nb_samples] dict_features_new[label] = np.array(features)[idx_select, :] return dict_features_new def down_sample_dict_features_kmean(dict_features, 
nb_samples): """ cluster the features with k-means into nb_samples clusters and return the features closest to each cluster center :param {} dict_features: {int: [[float] * nb_features] * nb} :param int nb_samples: :return {}: {int: [[float] * nb_features] * nb_samples} >>> np.random.seed(0) >>> d_fts = {'a': np.random.random((100, 3))} >>> d_fts = down_sample_dict_features_kmean(d_fts, 5) >>> d_fts['a'].shape (5, 3) """ dict_features_new = dict() for label in dict_features: features = dict_features[label] if len(features) <= nb_samples: dict_features_new[label] = features.copy() continue kmeans = cluster.KMeans(n_clusters=nb_samples, init='random', n_init=3, max_iter=5, n_jobs=-1) dist = kmeans.fit_transform(features) find_min = np.argmin(dist, axis=0) dict_features_new[label] = features[find_min, :] return dict_features_new # def unique_rows(matrix): # matrix = np.ascontiguousarray(matrix) # unique_matrix = np.unique(matrix.view([('', matrix.dtype)] # * matrix.shape[1])) # unique_shape = (unique_matrix.shape[0], matrix.shape[1]) # unique_matrix = unique_matrix.view(matrix.dtype).reshape(unique_shape) # return unique_matrix def unique_rows(data): """ detect the unique rows of a matrix and return only them :param data: np.array :return: np.array """ # preventing: ValueError: new type not compatible with array. # https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html data = data.copy() uniq = np.unique(data.view(data.dtype.descr * data.shape[1])) return uniq.view(data.dtype).reshape(-1, data.shape[1]) def down_sample_dict_features_unique(dict_features): """ browse all label features and take unique features :param {} dict_features: {int: [[float] * nb_features] * nb_samples} :return {}: {int: [[float] * nb_features] * nb} >>> np.random.seed(0) >>> d_fts = {'a': np.random.random((100, 3))} >>> d_fts = down_sample_dict_features_unique(d_fts) >>> d_fts['a'].shape (100, 3) """ dict_features_new = dict() for label in dict_features: features = np.round(dict_features[label], ROUND_UNIQUE_FTS_DIGITS) unique_fts = np.array(unique_rows(features)) assert features.ndim == unique_fts.ndim, 'feature dimensions should match' assert features.shape[1] == unique_fts.shape[1], \ 'features: %i <> %i' % (features.shape[1], unique_fts.shape[1]) dict_features_new[label] = unique_fts return dict_features_new def balance_dataset_by_(features, labels, balance_type='random', min_samples=None): """ balance the number of training examples per class by one of several methods :param ndarray features: features in dimension nb_samples x nb_features :param [int] labels: annotation for samples :param str balance_type: method used for balancing the dataset :param min_samples: int or None, if None take the smallest class :return: >>> np.random.seed(0) >>> fts, lbs = balance_dataset_by_(np.random.random((25, 3)), ... 
np.random.randint(0, 2, 25)) >>> fts.shape (24, 3) >>> lbs [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] """ logging.debug('balance dataset using "%s"', balance_type) hist_labels = collections.Counter(labels) if min_samples is None: min_samples = min(hist_labels.values()) dict_features = compose_dict_label_features(features, labels) if balance_type.lower() == 'random': dict_features = down_sample_dict_features_random(dict_features, min_samples) elif balance_type.lower() == 'kmeans': dict_features = down_sample_dict_features_kmean(dict_features, min_samples) elif balance_type.lower() == 'unique': dict_features = down_sample_dict_features_unique(dict_features) else: logging.warning('not defined balancing method "%s"', balance_type) features, labels = convert_dict_label_features_2_vectors(dict_features) # features, labels = shuffle_features_labels(features, labels) return features, labels def convert_set_features_labels_2_dataset(imgs_features, imgs_labels, drop_labels=None, balance_type=None): """ with dictionary for each image we concentrate all features over images and labels into simple form :param {str: ndarray} imgs_features: dictionary of name and features :param {str: ndarray} imgs_labels: dictionary of name and labels :param balance: bool, wether balance_type number of sampler per class :return: >>> np.random.seed(0) >>> d_fts = {'a': np.random.random((25, 3)), ... 'b': np.random.random((30, 3)), } >>> d_lbs = {'a': np.random.randint(0, 2, 25), ... 'b': np.random.randint(0, 2, 30)} >>> fts, lbs, sizes = convert_set_features_labels_2_dataset(d_fts, d_lbs) >>> fts.shape (55, 3) >>> lbs.shape (55,) >>> sizes [25, 30] """ logging.debug('convert set of features and labels to single one') assert all(k in imgs_labels.keys() for k in imgs_features.keys()), \ 'missing some items of %s' % repr(list(imgs_labels.keys())) features_all, labels_all, sizes = list(), list(), list() for name in sorted(imgs_features.keys()): features = np.array(imgs_features[name]) labels = np.array(imgs_labels[name].astype(int)) drop_labels = [] if drop_labels is None else drop_labels for lb in drop_labels: features = features[labels != lb] labels = labels[labels != lb] if balance_type is not None: # balance_type dataset to have comparable nb of samples features, labels = balance_dataset_by_(features, labels, balance_type=balance_type) features_all += features.tolist() labels_all += np.asarray(labels).tolist() sizes.append(len(labels)) return np.array(features_all), np.array(labels_all, dtype=int), sizes def compute_tp_tn_fp_fn(annot, segm, label_positive=None): """ compute measure TruePositive, TrueNegative, FalsePositive, FalseNegative :param ndarray annot: :param ndarray segm: :param int label_positive: :return float: >>> np.random.seed(0) >>> annot = np.random.randint(0, 2, (5, 7)) * 9 >>> segm = np.random.randint(0, 2, (5, 7)) * 9 >>> annot - segm array([[-9, 9, 0, -9, 9, 9, 0], [ 9, 0, 0, 0, -9, -9, 9], [-9, 0, -9, -9, -9, 0, 0], [ 0, 9, 0, -9, 0, 9, 0], [ 9, -9, 9, 0, 9, 0, 9]]) >>> compute_tp_tn_fp_fn(annot, annot) (20, 15, 0, 0) >>> compute_tp_tn_fp_fn(annot, segm) (9, 5, 11, 10) >>> compute_tp_tn_fp_fn(annot, np.ones((5, 7))) (nan, nan, nan, nan) >>> compute_tp_tn_fp_fn(np.zeros((5, 7)), np.zeros((5, 7))) (35, 0, 0, 0) """ y_true = np.asarray(annot).ravel() y_pred = np.asarray(segm).ravel() uq_labels = np.unique([y_true, y_pred]).tolist() if len(uq_labels) > 2: logging.debug('too many labels: %s', repr(uq_labels)) return np.nan, np.nan, np.nan, np.nan elif len(uq_labels) < 2: 
logging.debug('only one label: %s', repr(uq_labels)) return len(y_true), 0, 0, 0 if label_positive is None or label_positive not in uq_labels: label_positive = uq_labels[-1] uq_labels.remove(label_positive) label_negative = uq_labels[0] tp = np.sum( np.logical_and(y_true == label_positive, y_pred == label_positive)) tn = np.sum( np.logical_and(y_true == label_negative, y_pred == label_negative)) fp = np.sum( np.logical_and(y_true == label_positive, y_pred == label_negative)) fn = np.sum( np.logical_and(y_true == label_negative, y_pred == label_positive)) return tp, tn, fp, fn def compute_metric_fpfn_tpfn(annot, segm, label_positive=None): """ compute measure (FP + FN) / (TP + FN) :param ndarray annot: :param ndarray segm: :param int label_positive: :return float: >>> np.random.seed(0) >>> annot = np.random.randint(0, 2, (50, 75)) * 3 >>> segm = np.random.randint(0, 2, (50, 75)) * 3 >>> compute_metric_fpfn_tpfn(annot, segm) # doctest: +ELLIPSIS 1.02... >>> compute_metric_fpfn_tpfn(annot, annot) 0.0 >>> compute_metric_fpfn_tpfn(annot, np.ones((50, 75))) nan """ tp, _, fp, fn = compute_tp_tn_fp_fn(annot, segm, label_positive) if np.isnan(tp): # note: `tp == np.nan` is always False, np.isnan is required return np.nan elif (fp + fn) == 0: return 0. measure = float(fp + fn) / float(tp + fn) return measure def compute_metric_tpfp_tpfn(annot, segm, label_positive=None): """ compute measure (TP + FP) / (TP + FN) :param ndarray annot: :param ndarray segm: :param int label_positive: :return float: >>> np.random.seed(0) >>> annot = np.random.randint(0, 2, (50, 75)) * 3 >>> segm = np.random.randint(0, 2, (50, 75)) * 3 >>> compute_metric_tpfp_tpfn(annot, segm) # doctest: +ELLIPSIS 1.03... >>> compute_metric_tpfp_tpfn(annot, annot) 1.0 >>> compute_metric_tpfp_tpfn(annot, np.ones((50, 75))) nan """ tp, _, fp, fn = compute_tp_tn_fp_fn(annot, segm, label_positive) if np.isnan(tp): return np.nan elif (tp + fn) == 0: return 0. measure = float(tp + fp) / float(tp + fn) return measure # def stat_weight_by_support(dict_vals, id_val, id_sup): # val = [v * s for v, s in zip(dict_vals[id_val], dict_vals[id_sup])] # n = np.sum(val) / np.sum(dict_vals[id_sup]) # return n # # # def format_classif_stat(y_true, y_pred): # """ format classification statistic # # :param [int] y_true: annotation # :param [int] y_pred: predictions # :return: # # >>> np.random.seed(0) # >>> y_true = np.random.randint(0, 2, 25) # >>> y_pred = np.random.randint(0, 2, 25) # >>> stat = format_classif_stat(y_true, y_pred) # >>> pd.Series(stat) # f1_score 0.586667 # precision 0.605882 # recall 0.600000 # support 25.000000 # dtype: float64 # """ # vals = metrics.precision_recall_fscore_support(y_true, y_pred) # stat = {'precision': stat_weight_by_support(vals, 0, 3), # 'recall': stat_weight_by_support(vals, 1, 3), # 'f1_score': stat_weight_by_support(vals, 2, 3), # 'support': np.sum(vals[3])} # return stat class HoldOut: """ Hold-out cross-validator generator. In the hold-out, the data is split only once into a train set and a test set. Unlike in other cross-validation schemes, the hold-out consists of only one iteration. Parameters ---------- nb : total number of samples hold_idx : int index where the test starts random_state : Seed for the random number generator. Example ------- >>> ho = HoldOut(10, 7) >>> len(ho) 1 >>> list(ho) [([0, 1, 2, 3, 4, 5, 6], [7, 8, 9])] """ def __init__(self, nb, hold_idx, random_state=0): """ :param int nb: total number of samples :param int hold_idx: index where the test starts :param obj random_state: Seed for the random number generator. 
""" self.total = nb self.hold_idx = hold_idx self.random_state = random_state assert self.total > self.hold_idx, \ 'total %i should be higher than hold Idx %i' % (self.total, self.hold_idx) def __iter__(self): """ iterate the folds :return ([int], [int]): """ ind_train = list(range(self.hold_idx)) ind_test = list(range(self.hold_idx, self.total)) yield ind_train, ind_test def __len__(self): """ number of folds :return int: """ return 1 class CrossValidatePOut: """ Hold-out cross-validator generator. In the hold-out, the data is split only once into a train set and a test set. Unlike in other cross-validation schemes, the hold-out consists of only one iteration. Parameters ---------- Example 1 --------- >>> cv = CrossValidatePOut(6, 3, rand_seed=False) >>> cv.indexes [0, 1, 2, 3, 4, 5] >>> len(cv) 2 >>> list(cv) # doctest: +NORMALIZE_WHITESPACE [([3, 4, 5], [0, 1, 2]), \ ([0, 1, 2], [3, 4, 5])] Example 2 --------- >>> cv = CrossValidatePOut(7, 3, rand_seed=0) >>> list(cv) # doctest: +NORMALIZE_WHITESPACE [([3, 0, 5, 4], [6, 2, 1]), \ ([6, 2, 1, 4], [3, 0, 5]), \ ([6, 2, 1, 3, 0, 5], [4])] >>> len(list(cv)) 3 >>> cv.indexes [6, 2, 1, 3, 0, 5, 4] """ def __init__(self, nb_samples, nb_hold_out, rand_seed=None): """ :param [int] nb_samples: list of sizes :param int nb_hold_out: how much hold out :param obj rand_seed: int or None """ assert nb_samples > nb_hold_out, \ 'number of holdout has to be smaller then total size' self.nb_samples = nb_samples self.nb_hold_out = nb_hold_out self.indexes = list(range(self.nb_samples)) if rand_seed is not False: np.random.seed(rand_seed) np.random.shuffle(self.indexes) logging.debug('sets ordering: %s', repr(self.indexes)) self.iter = 0 def __iter__(self): """ iterate the folds :return ([int], [int]): """ for i in range(0, self.nb_samples, self.nb_hold_out): inds_test = self.indexes[i:i + self.nb_hold_out] inds_train = [i for i in self.indexes if i not in inds_test] yield inds_train, inds_test def __len__(self): """ number of folds :return int: """ return int(np.ceil(self.nb_samples / float(self.nb_hold_out))) class CrossValidatePSetsOut: """ Hold-out cross-validator generator. In the hold-out, the data is split only once into a train set and a test set. Unlike in other cross-validation schemes, the hold-out consists of only one iteration. 
Parameters ---------- Example 1 --------- >>> cv = CrossValidatePSetsOut([2, 3, 2, 3], 2, rand_seed=False) >>> cv.set_indexes [[0, 1], [2, 3, 4], [5, 6], [7, 8, 9]] >>> len(cv) 2 >>> list(cv) # doctest: +NORMALIZE_WHITESPACE [([5, 6, 7, 8, 9], [0, 1, 2, 3, 4]), \ ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9])] Example 2 --------- >>> cv = CrossValidatePSetsOut([2, 2, 1, 2, 1], 2, rand_seed=0) >>> cv.set_indexes [[0, 1], [2, 3], [4], [5, 6], [7]] >>> list(cv) # doctest: +NORMALIZE_WHITESPACE [([2, 3, 5, 6, 7], [4, 0, 1]), \ ([4, 0, 1, 7], [2, 3, 5, 6]), \ ([4, 0, 1, 2, 3, 5, 6], [7])] >>> len(cv) 3 >>> cv.sets_order [2, 0, 1, 3, 4] """ def __init__(self, set_sizes, nb_hold_out, rand_seed=None): """ :param [int] set_sizes: list of sizes :param int nb_hold_out: how much hold out :param obj rand_seed: int or None """ assert len(set_sizes) > nb_hold_out, \ 'nb of hold out (%i) has to be smaller then total size %i' \ % (nb_hold_out, len(set_sizes)) self.set_sizes = list(set_sizes) self.total = np.sum(self.set_sizes) self.nb_hold_out = nb_hold_out self.set_indexes = [] for i, size in enumerate(self.set_sizes): start = int(np.sum(self.set_sizes[:i])) inds = range(start, start + size) self.set_indexes.append(list(inds)) assert np.sum(len(i) for i in self.set_indexes) == self.total, \ 'all indexes should sum to total count %i' % self.total self.sets_order = list(range(len(self.set_sizes))) if rand_seed is not False: np.random.seed(rand_seed) np.random.shuffle(self.sets_order) logging.debug('sets ordering: %s', repr(self.sets_order)) self.iter = 0 def __iter__(self): """ iterate the folds :return ([int], [int]): """ for i in range(0, len(self.set_sizes), self.nb_hold_out): test = self.sets_order[i:i + self.nb_hold_out] inds_train = list(itertools.chain.from_iterable( self.set_indexes[i] for i in self.sets_order if i not in test)) inds_test = list(itertools.chain.from_iterable( self.set_indexes[i] for i in self.sets_order if i in test)) yield inds_train, inds_test def __len__(self): """ number of folds :return int: """ nb = len(self.set_sizes) / float(self.nb_hold_out) return int(np.ceil(nb)) # DEPRECATED # ========== # def check_exist_labels_dataset(dataset, lut): # u_lbs = np.unique(lut.values()) # for l in u_lbs: # if not l in dataset: # dataset[l] = [] # return dataset # def extend_dataset(dataset, fts, lut): # logger.info('adding new features to training dataset.') # dataset = check_exist_labels_dataset(dataset, lut) # for k in lut: # dataset[lut[k]].append(fts[k]) # logger.debug(str_dataset_stat(dataset, 'EXTENDED')) # return dataset # def cluster_dataset_label_samples(data, nb_clts, method): # data = np.array(data) # if method == 'AggCl': # clt = cluster.AgglomerativeClustering(nb_clts, linkage='ward', # memory='tmpMemoryDump') # clt.fit(data) # words = [np.mean(data[clt.labels_ == l], axis=0) # for l in np.unique(clt.labels_)] # if method == 'AffPr': # clt = cluster.AffinityPropagation(convergence_iter=7) # clt.fit(data) # words = clt.cluster_centers_ # elif method == 'Birch': # clt = cluster.Birch(n_clusters=nb_clts) # clt.fit(data) # words = clt.subcluster_centers_ # elif method == 'kMeans': # clt = cluster.KMeans(init='random', n_clusters=nb_clts, n_init=3, # max_iter=35, n_jobs=5) # # clt = cluster.KMeans(init='k-means++', n_clusters=nb_clts, # n_init=7, n_jobs=-1) # clt.fit(data) # words = clt.cluster_centers_ # else: # random # words = data[np.random.choice(data.shape[0], nb_clts)] # return words # def segm_features_classif_general(dataset_dict, ft, clf, prob=False): # X, y = 
convert_standard_dataset(dataset_dict) # assert len(X) == len(y) # clf.fit(X, y) # lbs = clf.predict(ft) # if prob: # probs = clf.predict_proba(ft) # else: # probs=None # return lbs, probs # def get_labeling_probability(ft, datasetDict, lbs): # probs = np.zeros((len(ft),len(datasetDict))) # probs[range(len(lbs)), lbs] = 1. # return probs
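# ----------------------------------------------------------------------
# Minimal end-to-end usage sketch (an illustrative addition, not part of
# the original module). It exercises the helpers defined above on
# synthetic data, mirroring the doctests; it assumes the module-level
# imports (numpy as np) and the earlier-defined `balance_dataset_by_`,
# `create_classif_train_export` and `eval_classif_cross_val_scores`.
if __name__ == '__main__':
    np.random.seed(0)
    labels = np.random.randint(0, 3, 150)
    features = np.random.random((150, 5)) + np.tile(labels, (5, 1)).T
    # balance the number of samples per class before training
    features, labels = balance_dataset_by_(features, labels,
                                           balance_type='random')
    # train a Random-Forest pipeline without any hyper-parameter search
    clf, _ = create_classif_train_export('RandForest', features, labels)
    # report cross-validated scores (plain 5-fold split)
    print(eval_classif_cross_val_scores('RandForest', clf,
                                        features, labels, cross_val=5))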
Chapter One of the Book of Tebow will reach its climax next Sunday when the Broncos play the Patriots in Denver. It's America's most polarizing athlete vs. America's most polarizing team. The guy who's subverting our expectations of what a quarterback should be vs. a guy who embodies those expectations. There are story lines galore, and if you care about sports at all, you'll be watching at 4:15 on Sunday. No matter which team wins, this game is a pivotal juncture for where this whole Tebow thing will go. If the Broncos lose (as expected), the Tebow experiment will continue to be filed under "gimmick." People will still watch Tebow in awe, and love every minute of it. But there's a sense out there that Tebow's success is fleeting — that this crazy, super-fun ride will abruptly come to an end sometime soon. And that sense will only intensify if the Broncos fall to the Pats. But if the Broncos somehow win, the Tebow experiment finds something that no one ever really thought it'd get: legitimacy. Yeah, the Patriots have their flaws, but they're still among the NFL's elite. If the Broncos can eke one out over Belichick and Brady, why couldn't they stand up to Roethlisberger or Flacco and the mighty TJ Yates? The game has huge playoff implications. But it has even bigger Tebowmania implications, and that'll be the big story a week from now.
The Influence of Palatal Harvesting Technique on the Donor Site Vascular Injury: A Split-Mouth Comparative Cadaver Study. AIM The aim of this study was to evaluate the influence of two harvesting approaches on the donor site vascular injury. MATERIALS AND METHODS A split-mouth cadaver study was designed on 21 fresh donor heads. Every hemi-palate was assigned to receive the trap-door (TDT) or the epithelialized free gingival graft harvesting technique (FGGT). A soft tissue graft was harvested from each side for histology analyses. Betadine solution was injected into the external carotid artery and a collagen sponge was positioned over the harvested area to compare the amount of "leakage". RESULTS The mean leakage observed was 16.56 ± 3.01 µL in the FGGT-harvested sites, and 69.21 ± 7.08 µL for the TDT group, a ratio of 4.18 (p < 0.01). Regression analyses demonstrated a trend for more leakage at thinner palatal sites for the FGGT group (p = 0.09), and a statistically significant correlation for the TDT-harvested sites (p = 0.02). Additionally, a shallow palatal vault height (PVH) was associated with a higher leakage in both harvesting groups (p = 0.02). The histomorphometric analyses revealed that grafts harvested with TDT exhibited a significantly higher mean number of medium (Ø = 0.1-0.5 mm, p = 0.03) and large vessels (Ø ≥ 0.5 mm, p = 0.02). CONCLUSIONS Within the limitations of the present research, the TDT resulted in a significantly higher leakage than the FGGT, which was also corroborated by the histology analyses, where a greater number of medium and large vessels were observed in the harvested grafts.
package appssoschema import ( "github.com/onelogin/onelogin-go-sdk/pkg/services/apps" ) // FlattenOIDC takes an AppSso instance and creates a map func FlattenOIDC(sso apps.AppSso) map[string]interface{} { return map[string]interface{}{ "client_id": sso.ClientID, "client_secret": sso.ClientSecret, } } // FlattenSAMLCert takes an AppSso instance and uses the Certificate node to create the map func FlattenSAMLCert(sso apps.AppSso) map[string]interface{} { return map[string]interface{}{ "name": sso.Certificate.Name, "value": sso.Certificate.Value, } } // FlattenSAML takes an AppSso instance and creates a map func FlattenSAML(sso apps.AppSso) map[string]interface{} { return map[string]interface{}{ "metadata_url": sso.MetadataURL, "acs_url": sso.AcsURL, "sls_url": sso.SlsURL, "issuer": sso.Issuer, } }
Are You Going to SMX West or SphinnCon Israel? Rand Fishkin wrote a blog post on the reasons why you must attend SMX West. Among his reasons: you can meet potential employees and clients, you can perform competitive analysis, you can brainstorm with speakers, you can test your elevator pitch, you can set goals for yourself, and applications of the right tip may increase your ROI. This works for any conference, really, and it should be part of any Search Marketer's agenda. But why SMX West? Well, for one, I'll be there. Rand mentions that great people will be going, and you bet I'll be blogging from the front row as always. Rand argues that in comparison to other conferences, there's also great food (just make sure you prepare Kosher food this time, Danny!). He adds that the timing is great, the sessions are new (the lineup is revamped), three days is a perfect length (I like it better than two, especially since I'm flying cross country), it's in Silicon Valley, there's free wifi, there are great after-parties (really?!), and there's breakthrough content. So if you haven't signed up for SMX West yet, do it. The deadline for a discount registration is two days from now, so do it right away! You can register at the official SMX West website. Also, as you know, Barry (you know, the guy who runs this blog) is arranging SphinnCon Israel next month (February 5th). If you're a local or near Israel, you're encouraged to attend. There are representatives from many search agencies in the country in addition to Google Israel representation. It should be a blast. Postscript Barry: Danny wrote a post on SphinnCon Israel at Search Engine Land, please make sure to check it out. SphinnCon has a limited number of seats available, and the full agenda has been posted. I won't be there, so it won't be as fun as if I were there, but you're encouraged to attend. If I could make it, you bet I would!
The hybrid inflation waterfall and the primordial curvature perturbation Without demanding a specific form for the inflaton potential, we obtain an estimate of the contribution to the curvature perturbation generated during the linear era of the hybrid inflation waterfall. The spectrum of this contribution peaks at some wavenumber $k=k_*$, and goes like $k^3$ for $k\ll k_*$, making it typically negligible on cosmological scales. The scale $k_*$ can be outside the horizon at the end of inflation, in which case $\zeta=- (g^2 - \vev{g^2})$ with $g$ gaussian. Taking this into account, the cosmological bound on the abundance of black holes is likely to be satisfied if the curvaton mass $m$ much bigger than the Hubble parameter $H$, but is likely to be violated if $m\lsim H$. Coming to the contribution to $\zeta$ from the rest of the waterfall, we are led to consider the use of the `end-of-inflation' formula, giving the contribution to $\zeta$ generated during a sufficiently sharp transition from nearly-exponential inflation to non-inflation, and we state for the first time the criterion for the transition to be sufficiently sharp. Our formulas are applied to supersymmetric GUT inflation and to supernatural/running-mass inflation Hybrid inflation Hybrid inflation ends with a phase transition known as the waterfall, which up to now has been studied only in special cases. This paper, which is a continuation of, provides a rather general treatment. We begin by defining the setup. Scales leaving the horizon An inflation model starts to make contact with observation only around the time that the observable universe leaves the horizon. The following description hybrid inflation is intended to apply to the subsequent era. Within the standard cosmology, the number N obs of e-folds of inflation after the observable universe leaves the horizon satisfies 63 − 1 2 ln 10 −5 M P H N obs 49 − 1 3 ln 10 −5 M P H. (1.1) (The time-dependence of H is ignored in this expression, which is usually a good approximation.) The upper bound corresponds to matter domination from the end of inflation to the epoch T = 1 MeV, with radiation domination thereafter until the observed matter dominated era, while the lower bound replaces the former era by one of radiation domination. The scales probed by observation of large scale structure (cosmological scales) leave the horizon during the first 15 or so e-folds after the observable universe. On these scales, the curvature perturbation is nearly gaussian with a a nearly scaleinvariant spectrum P (k) ∼ (5 10 −5 ) 2. Hybrid inflation potential Our analysis applies to a wide class of hybrid inflation models. The essential features of the potential are captured by the following expression; To have a perturbative quantum theory we demand g ≪ 1 and ≪ 1. The inflaton is supposed to have zero vev and V () is set to zero at the vev. We require V () > 0 during inflation so that moves towards its vev. The era of inflation with < c is called the waterfall. The requirements that V and ∂V /∂ vanish in the vacuum determine V 0 and the vev of the waterfall field : It is necessary for our analysis that the waterfall field has the canonical kinetic term. For simplicity we will pretend that is a single real field. At least within the Standard Scenario defined below, that cannot really be the case because it would lead to the formation of domain walls, located along surfaces at which (x, t) is trapped at the origin, which would be fatal to the cosmology. 
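A representative form of the potential with the stated features, consistent with the interaction $g^2\phi^2\chi^2$, the quartic term $\lambda\chi^4$, and the critical value $\phi_c = m/g$ used later in the text (writing $\phi$ for the inflaton and $\chi$ for the waterfall field), is \[ V(\phi,\chi) = V_0 + V(\phi) + \frac{1}{2}\,g^2\phi^2\chi^2 - \frac{1}{2}\,m^2\chi^2 + \frac{1}{4}\,\lambda\chi^4 , \] so that the waterfall field has effective mass-squared $m_\chi^2(\phi) = g^2\phi^2 - m^2$, turning negative when $\phi$ falls below $\phi_c = m/g$. Requiring $V$ and $\partial V/\partial\chi$ to vanish in the vacuum then gives $\chi_{\rm vev} = m/\sqrt{\lambda}$ and $V_0 = m^4/4\lambda = \frac{1}{4} m^2 \chi_{\rm vev}^2$.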
In reality, will be replaced by a function of two or more real fields. So that there is only one effective degree of freedom in the direction, we will demand that the function is invariant under some symmetry group of the action. Then the only change in our analysis for the realistic case would be the introduction of some numerical factors into the equations. In the realistic case the domain walls might be replaced by cosmic strings or monopoles, but in general the trapping of will not occur and (x, t) will everywhere approach its vev. The inflaton may also be replaced by a function of two or more real fields. If there is still only one effective degree of freedom the only change is again the introduction of numerical factors. In the opposite case of multi-field inflation, corresponding to a family of inflationary trajectories that are curved in field space, most of our analysis still applies if, by the onset of the waterfall, the family has collapsed to a single effective trajectory which has negligible curvature during the waterfall. To obtain powerful results we take the inflaton to have the canonical kinetic term, though much of our analysis would apply to, for instance, k-inflation. Hybrid inflation was first discovered in in the context of single-field inflation. It was given its name in, where the form (1.2) was invoked for V (, ) with V () = m 2 2 /2. With parameters chosen to give the Standard Scenario, and demanding also that is responsible for the observed curvature perturbation, this gives spectral index n > 1 in contradiction with observation. Many forms of V () have been proposed, which allow to generate the curvature perturbation within the single-field inflation scenario. In our calculations we employ Eq. (1.2) for V (, ), without specifying the inflaton potential V (). Minor variants of Eq. (1.2) would make little difference. The interaction g 2 2 2 might be replaced by 2 2+n / n where is a uv cutoff, or the term 4 might be replaced by 4+n / n. For our purpose, these variants are equivalent to allowing (respectively) g and to be many orders of magnitude below unity. More drastic modifications of Eq. (1.2) have been proposed, including inverted hybrid inflation where is increasing during inflation, as well as mutated and smooth hybrid inflation where the waterfall field varies during inflation. Also, the waterfall potential might have a local minimum at the origin so that the waterfall proceeds by bubble formation. Our analysis does not apply to those cases. Standard Scenario By varying the parameters in the potential (1.2), one can have a wide range of scenarios that is still not fully explored. Most discussions of hybrid inflation make some assumptions, corresponding to what might be called the Standard Scenario. In this section we state those assumptions, which are made in the rest of the paper. Until approaches its vev at the end of the waterfall, inflation is supposed to be nearly exponential ( H ≡ ||/H 2 ≪ 1) with V 0 dominating the potential: We take H to be constant during the waterfall, which is usually a good approximation. Nearly exponential inflation require Eqs. (1.4) and (1.5) give It is usually supposed that vev ≪ M P corresponding to m ≫ H. (In particular, GUT inflation takes to be a GUT Higgs field with vev ∼ 10 −2 M P.) One sometimes considers vev roughly of order M P corresponding to m roughly of order H (supernatural and running mass The upper part of the range is favoured, especially because we deal with hybrid inflation. 
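For orientation, the quoted correspondence between $\chi_{\rm vev}$ and $m/H$ follows in one step from the Friedmann equation, assuming the vacuum energy of the reconstruction above, $V_0 = \frac{1}{4} m^2 \chi_{\rm vev}^2$: \[ 3 M_P^2 H^2 \simeq V_0 = \frac{1}{4}\, m^2 \chi_{\rm vev}^2 \quad\Longrightarrow\quad \Big(\frac{m}{H}\Big)^2 = 12\, \Big(\frac{M_P}{\chi_{\rm vev}}\Big)^2 , \] so $\chi_{\rm vev} \ll M_P$ indeed corresponds to $m \gg H$, and $\chi_{\rm vev}$ of order $M_P$ to $m$ of order $H$.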
One usually requires ≪ M P but we will just invoke the weaker requirement #1 If is big enough we have m 2 () H 2. Then we assume that vanishes up to a vacuum fluctuation which is set to zero. If is small enough, m 2 () −H 2. Hence there is a 'transition' regime with |m 2 ()| ≪ H 2. If the transition takes several Hubble times, the quantum fluctuation of will be converted to a classical perturbation, with spectrum ∼ (H/2) 2 on all scales leaving the horizon during the #1 This is also invoked in our earlier paper but note that Eq. of has a typo. transition. To avoid this the transition should take less than a Hubble time or so (fast transition). The waterfall starts at m 2 () = 0 which is in the middle of the transition. During the waterfall the vacuum fluctuation of is converted to a classical field (x, t), with 2 moving towards 2 vev. The waterfall ends when 2 (x, t) ≃ 2 vev, and inflation is supposed to end then because V () is not supposed to support inflation without the additional term V 0. Regarding, we require that it decreases monotonically before the waterfall, and afterward for as long as it affects the evolution of. This assumption is not at all trivial, because V () may steepen as decreases, causing to oscillate about the origin. The evolution of has yet to be studied for that case, which occurs in part of the parameter space for some well-motivated forms of the potential, including GUT inflation and running-mass inflation. The waterfall During the waterfall we need to consider both and. Taking both fields to live in unperturbed spacetime (ie. ignoring back-reaction) the evolution equations during the waterfall are We assume that the waterfall starts with an era during which Eq. (1.12) can be replaced by #2 k + 3H k + (k/a) 2 + m 2 (t)) k = 0, (1.13) with m 2 (t) is independent of. We call this the linear era, and it will be our main focus. Regarding k (t) as an operator, its mode function k (t) also satisfies Eq. (1.13). We will see how k (t) grows exponentially for suitably small k, generating a classical quantity k (t). Keeping only the classical modes, we arrive at a classical field. During at least the first part of the linear era, m 2 () depends significantly on. Then the right hand side of Eq. (1.11) has to be negligible so that m 2 (t) can be independent of. With that condition in place we just have to worry about the perturbation that is generated from the vacuum fluctuation. If the linear era of the waterfall takes no more than a Hubble time or so, can be completely eliminated by taking the spacetime slicing to be one of uniform. But if the waterfall takes much more than a Hubble time, new contributions to are generated as each scale leaves the horizon. To avoid this quantum effect on #2 This assumption implies some lower bound on || but it is not clear how to calculate the bound. the evolution, we have to assume that the new contributions to are negligible. Then we can again choose the slicing of uniform. #3 For the threading of spacetime, we choose the comoving worldlines (those moving with the fluid, so that a fluid element has zero momentum density). The gradients of both and (as we shall see) the classical field are small compared with their time derivatives. If they vanished, the comoving worldlines would be free-falling and orthogonal to the slicing, and we could choose the time coordinate labeling the slicing to be proper time along each thread. We assume that the gradients are small enough to make that choice possible to an acceptable approximation. 
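Written out explicitly (with $\chi_k$ the waterfall mode function, and $m_\chi^2(t) = g^2\phi^2(t) - m^2$ its effective mass-squared as implied by the surrounding discussion), the mode equation (1.13) reads \[ \ddot\chi_k + 3H\dot\chi_k + \Big[ (k/a)^2 + m_\chi^2(t) \Big]\,\chi_k = 0 . \]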
This completes the definition of the gauge in which the classical field (x, t) is defined. We also need to justify the use of Eq. (1.13) for the mode function k, before the classical quantity k (t) is generated. As we will see, Eq. (1.13) is needed for that purpose only for modes that are well inside the horizon during this time and (at least with the approximation of Section 5) only for much less than a Hubble time. That being the case, we can ignore the second term of Eq. (1.13) and set a equal to a constant so that Eq. (1.13) becomes a flat spacetime equation in which back-reaction is negligible. Waterfall field during the linear era 2.1 Evolution of With H constant, we can use conformal time = −1/aH to write (1.13) as For sufficiently small k, we can set 2 k ≃ 2 k=0 = a 2m2. Then 2 k switches from positive to negative before = c (but much less than a Hubble time before, by virtue of our fast transition assumption). For k 2 > 0 the switch is later. For the scales that we need to consider, we assume that there are eras both before and after the switch when 2 k satisfies the adiabaticity condition d| k |/d ≪ | 2 k |. #3 Since we neglect the new contributions to during the linear era (while m 2 () depends significantly on ), we neglect also their effect on the spectrum P (k). That is presumably a good approximation if P given by Eq. (5.12) is much less than the contribution P lin (k) that we are going to calculate. That will probably be the case if P lin (k) is big enough to form black holes, which is our main concern. It will not be the case if we deal with one of those exceptional inflation models where P (k), on the scales leaving the horizon during the linear era of the waterfall, is itself big enough to form black holes. Then we have the opposite situation: Eq. (5.12) will be valid if P lin is not big enough to form black holes. Taking k to be an operator, its mode function k satisfies Eq. (2.1). During the adiabaticity era before the switch we take the mode function to be which defines the vacuum state. During the adiabaticity era after the switch where the subscript 1 denotes the beginning of the adiabatic era. The displayed prefactor is exact only if m 2 (t) ∝ t and H(t − t 1 ) ≪ 1. As we are about to see, k grows during this era and we call it the growth era. During the growth era, the adiabaticity condition is equivalent to the two conditions (2.6) The growth era begins when both conditions are first satisfied. The first condition implies |m(t)| ≫ H so that |m(t)| ≃ |m(t)|, and it can hold only if m ≫ H. For k ≪ a(t)|m(t)| we have | k | ≃ | k=0 | ≃ a(t)|m(t)|. With Eq. (2.4) this gives k ≃ | k | k /a. At k = 0 Eq. (2.4) becomes (2.7) Ignoring the relatively slow time-dependence of the prefactor, we have as a rough approximation In the regime k ≪ a|m(t)| we have giving where (2.11) By virtue of Eq. (2.6), the change in |m(t)| in time |m(t)| −1 is small and so is the change in a. Defining To avoid the divergence in k * (t) at t = t 1, we will regard t start as the start of the growth era rather than t 1. After t start, k * (t) decreases while a|m(t)| increases. We assume that k 2 * (t) ≪ a 2 |m 2 (t)|, except for a brief era near the beginning of the growth era that can be ignored. #4 Then k (t) at fixed t falls exponentially in the regime k * (t) k < a|m(t)| and significant modes have k k * (t). The number of e-folds of growth is N(t) ≡ H(t − t start ). We denote the end of the linear era by a subscript 'end'. If N(t end ) 1, k * (t) falls continuously. 
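The growth law of this passage can be summarized as follows (a sketch, valid under the adiabaticity conditions (2.5)-(2.6) for modes $k \lesssim k_*(t) \ll a|m_\chi(t)|$): the WKB solution grows as \[ \chi_k(t) \propto \exp\Big( \int^t |m_\chi(t')|\, dt' \Big) , \qquad \dot\chi_k \simeq |m_\chi(t)|\,\chi_k , \] and correspondingly for the classical field, $\dot\chi(\mathbf{x},t) \simeq |m_\chi(t)|\,\chi(\mathbf{x},t)$, which is Eq. (2.16).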
If instead N(t end ) ≫ 1, the exponential increase of a causes k * (t) to level off after N(t) ∼ 1. Using H ≪ |m(t)| < m, we learn that in any case This tells us that the scale k * (t) is shorter than the scale leaving the horizon at the beginning of the waterfall. Since we assume that cosmological scales are outside the horizon at this stage, k * (t) is shorter than any cosmological scale. Dividing both sides by exp(2N(t end )) we have This tells us that scale k * (t) (and in particular, its final value k * (t end )) can be far outside the horizon at the end of the linear era. Classical field (x, t) During the growth era the mode function k has constant phase (zero with our convention), which means that k (t) ∝ k (t) can be be regarded as a classical field. The significant modes have k k * (t) ≪ a(t)|m(t)|. Because when for each mode. This means that the continuous creation of new classical modes, occuring for each mode when 2 k becomes negative at k ∼ a(t)m(t), can be ignored. For the significant modes, k / k ≃ a(t)|m(t)|. The classical field has approximately the same behaviour; #5 (x, t) ≃ |m(t)|(x, t). (2.16) #4 This assumption holds within the approximation of Section 5. #5 This behaviour breaks down near any locations with (x, t) = 0. To discuss them we would have to extend the discussion to a multi-component as mentioned at the end of Section 1.2. We assume that if they exist, they are rare enough to be ignored. Since k * (t) ≪ a|m(t)|, the gradient of is small compared with its time derivative. The spectrum of is Using Eq. (2.10) the mean square (spatial average) of 2 is We denote the perturbation in 2 by 2 : The convolution theorem gives for (2.20) which falls exponentially at fixed t. End of the linear era At each location, the linear equation (2.16) ceases to be valid around the time when 2 (x, t) achieves some value 2 nl. This time is given by We will take the linear epoch to end at a time t end, such that the fraction y > of space with 2 (x, t) > 2 nl is small. We will see that the probability distribution of (x, t) is gaussian, and using the approximation erfc(x) ∼ exp(−x 2 ) we have According to this equation t end is not very sensitive to the choice of y >, and for estimates we will take ln(1/y > ) ∼ 1. If the linear era lasts for long enough, we will have m 2 (t end ) ≃ −m 2 (ie. (t end ) ≪ c = m/g). In that case the right hand side of Eq. (1.11) is irrelevant, and the linear era ends only when the right hand side of Eq. (1.12) becomes significant. This gives. Then the linear era will end when the right hand side of Eq. (1.11) becomes significant, provided that the right hand side of Eq. (1.12) is then still negligible which we are about to show will be the case. If is still slowly rolling so that 3H(t) = −V, the right hand side of Eq. (1.11) becomes significant. when 2 ∼ 3H||/gm. But it may be that g 2 2 has become of order H 2 first, causing to oscillate about the origin. Including both possibilities we have #6 It follows from Eqs. (1.10) and (1.6) that the right hand side of Eq. (2.26) is much less than 2 vev, making the right hand side of Eq. (1.12) insignificant as advertised. Energy density and pressure of We have seen that the gradient of is negligible compared with its time-derivative, and in our adopted gauge the gradient of vanishes. Ignoring the gradient, the energy density and pressure are = + and In an unperturbed universe the energy continuity equation holds; To the extent that spatial gradients are negligible it holds at each location. 
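The standard definitions behind this stretch of the argument, in the forms consistent with the surrounding text (the waterfall potential approximated by $\frac{1}{2} m_\chi^2(t)\chi^2$ during the linear era), are \[ \mathcal{P}_\chi(k,t) = \frac{k^3}{2\pi^2}\, |\chi_k(t)|^2 , \qquad \langle \chi^2 \rangle = \int_0^\infty \frac{dk}{k}\, \mathcal{P}_\chi(k,t) \] for the spectrum and mean square, and, ignoring gradients, \[ \rho_\chi = \tfrac{1}{2}\dot\chi^2 + \tfrac{1}{2} m_\chi^2(t)\,\chi^2 , \qquad p_\chi = \tfrac{1}{2}\dot\chi^2 - \tfrac{1}{2} m_\chi^2(t)\,\chi^2 \] for the energy density and pressure.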
With a generic choice of the slicing, denoted by a subscript g, we hav where a g (x, t) is the locally defined scale factor. We are working in the gauge defined in Section 1.4, which means that t is proper time and we can choose a(x, t) = a(t), the unperturbed scale factor. We therefore hav With the potential V () = 1 2 m 2 2 the right hand side is min{m 2, H 2 } = m 2 which means that only the first possibility exists. The existence of the second possibility for a more general potential was missed in. We are dealing with the linear era, which means that the right hand sides of Eqs. (1.11) and (1.12) are negligible. With its right hand side negligible, Eq. (1.11) describes a free field which means that it satisfies the energy continuity equation by itself; (2.36) The same must therefore be true for ; At each location we have #7 Using this equation to differentiate Eq. (2.29) we fin The second term of the right hand side violates the energy continuity equation. This apparent inconsistency between the field equations and the energy continuity equation occurs because the effect of the interaction term g 2 2 2 is dropped in Eq. (1.11) (ie. the right hand side is set to zero) but kept in Eq. (1.12). If we demand approximate consistency between the field equations and the energy continuity equation, we need the second term of Eq. (2.39) to be much smaller than the first. That condition is equivalent to which is stronger than the adiabaticity conditions (2.5) and (2.6) (with k ≪ a|m(t)|). But there is no need to impose this stronger condition, because the near cancellation of the two terms of makes it unreasonable to expect the approximate evolution (2.38) of to give even an approximate estimate of. By contrast, the right hand side of the energy continuity equation has no cancellation so that it can be used to evaluate. Invoking Eq. (2.16), we find (2.41) Justifying the neglect of It has been essential for our discussion that the spatial average of (x, t) is negligible. That is the case in a sufficiently large volume, because (x, t) is constructed entirely #7 In the present context Eq. (2.38) can be replaced by Eq. (2.16) by virtue of the adiabaticity condition. But in Section 6 we drop the adiabaticity condition. from the Fourier modes. But to make contact with cosmological observations we should consider a finite box, whose (coordinate) size L is not too many orders of magnitude bigger than the size of the presently observable universe. Denote the average within the box by we have where the Fourier modes of > satisfy kL > 1 so that The average within the box comes from modes with kL 1, and for a random location of the box the expectation value of 2 is To justify the neglect of we need 2 ≪ 2 >. In our scenario, where P (k) peaks at a value k * (t), this is equivalent to Lk * (t) ≫ 1. That is satisfied because the scale k * (t) is supposed to be much smaller than the observable universe. To have 2 2 > we would presumably have to allow the transition from m 2 (t) = −H 2 to m 2 (t) = H 2 to take at least several e-folds so that it can generate a contribution to that has a flat spectrum. Then, if the flat spectrum generated during the transition dominates, one would have 45) where N before (N after ) is the number of e-folds of transition before (after) the observable universe leaves the horizon. 
Contribution to the curvature perturbation We write the contribution to the curvature perturbation that is generated during the waterfall as = lin + nl, where the first term is generated during the linear era, and the second is generated afterward up to some epoch just after inflation has ended. The curvature perturbation is (x, t) ≡ ln a(x, t), where a(x, t) is the locally defined scale factor on the spacetime slicing of uniform. As in Eq. (2.35), the spatial gradient is supposed to be negligible, which in general requires smoothing on a super-horizon scale. Using that equation, we see that the change in between any two times t 1 and t 2 is where N is the number of e-folds between slices of uniform. Working to first order in, #8 we will use this result to calculate lin, and then see how it might be used to calculate nl. We note in passing that an equivalent procedure is to integrate the expressio The contribution lin During the linear era, the gradient of is negligible without any smoothing. We are working in a gauge where = 0 so that ( Ignoring the inhomogeneity of the locally defined Hubble parameter, and. (3.5) Using Eq. (2.41), Using Eq. (2.21), we have for k ≪ k * (t) (3.8) We assume that |t(t end )| ≫ |t(t start )|, which will be justified within the approximation of Section 5. Then we have. (3.10) #8 A second-order calculation of is needed only to treat very small non-gaussianity corresponding to reduced bispectrum |f NL | 1. On cosmological scales, such non-gaussianity will eventually be measurable (and is expected if comes from a curvaton-type mechanism ). But there is no hope of detecting such non-gaussianity on much smaller scales. The inhomogeneity of H is indeed negligible because it generates a contribution This is much less than Ht(t end ) in magnitude, because |m(t)| ≫ H and || ≪ H. The contribution nl Let us estimate the number of e-folds N nl after the end of the linear era. At t end, 2 is increasing exponentially. Soon afterward it starts to affect, driving it towards zero. We therefore expect m 2 (t) to quickly approach −m 2 after t end (if it is not there already), restoring at least approximately the linear evolution of 2. Then Eq. (2.16) will hold with |m(t)| ∼ m giving If the linear era ends only when the right hand side of Eq. (1.12) becomes important we have ln( vev / end ) ∼ 1, giving N nl ∼ H/m ≪ 1. But if it ends when the right hand side of Eq. (1.11) becomes important we may have ln( vev / end ) ≫ 1 which allows N nl 1. Now we consider the contribution nl, that is generated between t end and some time t 2 just after inflation has everywhere ended. To calculate it we need to smooth on a super-horizon scale. Then we can use the N formula which gives nl (x, t) = Ht 12 (x), (3.17) where the initial and final slices both have uniform and t 12 (x) is the proper time interval between them. At each location, the linear era ends at the epoch t nl (x) given by Eq. (2.23). At this epoch there is nearly-exponential inflation, and inflation ends at some later time t noninf (x). If ∆t(x) ≡ t noninf (x) − t nl (x) is sufficiently small it can be taken to correspond to a spacetime slice of negligible thickness. Then t 12 is given by the 'end-of-inflation' formula where (x) is defined on the slice. The addition of nl to lin corresponds to taking the final slice of the N formula be the transition slice, instead of a slice of uniform. This equation is valid to first order in. 
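For reference, the δN relation invoked above can be written (with ζ the curvature perturbation) as

\[
\zeta(\mathbf{x},t_2) - \zeta(\mathbf{x},t_1) = \delta N(\mathbf{x};\, t_1, t_2)\,,
\]

where δN is the perturbation in the number of e-folds of expansion between the slice of uniform energy density at \(t_1\) and that at \(t_2\).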
To derive it we take the separation between the initial and final slice to be not much bigger than is needed for them to enclose the transition slice. Then we can take the unperturbed quantity(t) to have a constant value both during inflation and non-inflation. This gives Since | inf | is evaluated during nearly-exponential inflation, it is much smaller than | noninf | leading to Eq. (3.18). We are defining on a slice of uniform and is defined on a slice of uniform. The time displacement from the first slice to the second slice is is − /, which means that = (t end )/ (t end ). Putting this into Eq. (3.18) we get (3.20) We are only interested in the case that this ratio is 1. Then the inclusion of nl corresponds to omitting the middle term of Eq. (3.10). From Eq. (3.14), this case can occur only if |m(t end )| ≪ m. Now comes a crucial point. From the derivation of Eq. (3.18), it is clear that the criterion for its validity is ∆t(x) ≪ |t 12 (x)|, at a typical location. (In words, the thickness of the transition slice is negligible compared compared with its warping.) This simple remark has not been made before, and consequently it has not been checked whether the criterion is satisfied. In our case, H∆t(x) is given by Eq. (3.16) with 2 end replaced by 2 nl (x). Since that quantity appears only in the log the change will not have much effect, and we will have H∆t(x) ∼ N nl at a typical location. On the other hand, the typical value of | nl (x)| = H|t 12 (x)| is P 1/2 nl (k) where k aH is the smoothing scale used to define nl. The criterion for Eq. (3.18) to be valid is therefore N nl ≪ P 1/2 nl (k). In the regime of interest 2 (t end )/ 2 (t end ) ≫ 1, this criterion becomes Whenever the criterion (3.21) is not satisfied, the calculation of end that we have described does not apply. Other uses of the 'end of inflation' formula Our use of Eq. (3.18) to evaluate nl is quite different from its usual applications. In those applications, the field causing (x) has a nearly flat spectrum, leading to a nearly flat P nl (k) that can give a significant (even dominant) contribution to P (k) on cosmological scales. Since P 1/2 (k) ∼ 5 10 −5 on these scales, Eq. (3.21) on cosmological scales becomes 22) where N tran now refers to the duration of the transition slice in the scenario under consideration, and 12 = Ht 12 is the contribution to. Most of the other applications consider hybrid inflation, with the transition slice the entire hybrid inflation waterfall. Of course their setup is different from ours because they introduce a third field, the one that generates in Eq. (3.18). In these cases, N tran in Eq. (3.22) becomes the total duration of the waterfall. We see from Eq. (3.16) that it cannot be much less than H/m, which means that Eq. (3.22) needs H/m ≪ 5 10 −5. Since we need (H/m) 2 ≫ H/M P (corresponding to ≪ 1), this requires a fairly low inflation scale H ≪ 10 −9 M P. An alternative possibility is for the transition slice to be at the end of thermal inflation. (Thermal inflation is is a few e-folds of inflation occurring typically long after the usual inflation, which is ended by a thermal phase transition.) Then we expect roughly ∆N ∼ H/m, where m is the tachyonic mass of the field causing the end of thermal inflation. This criterion (3.22) is satisfied by the usual realizations of thermal inflation. Further possibilities for the transition slice are considered in. Cosmological black hole bound on P The most dramatic effect of would be the formation of black holes. 
This places an upper bound on P, which we now discuss taking on board for the first time the non-gaussianity of. The bound that we are going to consider rests on the validity of the following statement: if, at any epoch after inflation, there are roughly spherical and horizonsized regions with significantly bigger than 1, a significant fraction of them will collapse to form roughly horizon-sized black holes. #9 The validity is suggested by the following argument: the overdensity at horizon entry is / ∼, and if it is of order 1 then ∼ = 3M 2 P H 2. The excess energy within the Hubble distance H −1 is then M ∼ H −3 ∼ M 2 P /H, which means that the Hubble distance corresponds roughly to the Schwarzchild radius of a black hole with mass M. The validity is confirmed by detailed calculation using several different approaches, as summarized for instance in. Before continuing we mention the following caveat. Practically all of the literature, as well as the simple argument just given, assumes that within the region is not very much bigger than 1. Then the spatial geometry within the region is not too strongly distorted and the size of the black hole is indeed roughly that of the horizon. In the opposite case, the background geometry is strongly distorted and the wavenumber k defined in the background no longer specifies the physical size of the region at the epoch aH = k of horizon entry. An entirely different discussion would then be necessary, which has not been given in the literature. As the opposite case does not arise in typical early-universe scenarios we ignore it. We are interested in the case that P (k) has a peak at some value k peak, and we assume that the width of the peak in ln k is roughly of order 1 so that (4.1) Regions with 1 that might form black holes will be rare if P (k peak ) is not too big. Observation demands that the regions must indeed be rare, because it places a strong upper bound on the fraction of of space that can collapse to form horizon-sized black holes, on the assumption that the collapse takes place at a single epoch as is the case in our scenario. A recent investigation of the bound is given in, with extensive references to the literature. The bound depends on the epoch of collapse. Denoting it by it lies in the range 10 −20 10 −5. (4.2) To bound P (k peak ), we shall require y < where y is the fraction of space with > c, and c is roughly of order 1. The fraction y can be calculated from 2 if we know the probability distribution of (x). The standard assumption is that it is gaussian. Then 3) #9 We are choosing the background scale factor a(t) so that the perturbation = (ln a(x, t)) has zero spatial average. and using the large-x approximation erfc (x) ≃ e −x 2 / √ x ∼ e −x 2 we find For the range (4.2) this gives (with c ≃ 1) P (k peak ) 0.01 to 0.04. But lin given by Eqs. (2.19) and (3.10) is actually non-gaussian, of the form With this form, there is no region of space where > g 2, and y ≪ 1 now implies some bound g 2 − c ≪ c which is practically equivalent to g 2 < c. This corresponds to P (k peak ) 2 For completeness, we see what happens if = + (g 2 − g 2 ) with g gaussian. (This might be the case if is generated after inflation by a curvaton-type mechanism.) The we have (4.7) which gives P (k peak ) 2 10 −4 to 2 10 −3. In all three cases, the bound on P (k peak ) is very insensitive to f which means that it depends only weakly on the value of. 
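Making the gaussian estimate explicit (with \(\zeta_c\) the collapse threshold and β the observational bound on the collapsed fraction), the chain of inequalities used above is

\[
y \simeq \operatorname{erfc}\!\left(\frac{\zeta_c}{\sqrt{2\langle\zeta^2\rangle}}\right)
\sim e^{-\zeta_c^2/2\langle\zeta^2\rangle} \lesssim \beta
\quad\Longrightarrow\quad
\mathcal{P}_\zeta(k_{\rm peak}) \simeq \langle\zeta^2\rangle \lesssim \frac{\zeta_c^2}{2\ln(1/\beta)}\,.
\]

With \(\zeta_c \simeq 1\) and β between \(10^{-20}\) and \(10^{-5}\), the logarithm lies between about 46 and 12, reproducing the quoted range \(\mathcal{P}_\zeta(k_{\rm peak}) \lesssim 0.01\) to \(0.04\) and making plain why the bound is so insensitive to β.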
Turning that around though, the black hole abundance is very sensitive to P (k peak ) which suggests that fine-tuning of parameters will be needed to get an eventually observable (yet presently allowed) abundance. If the peak has width ∆ ln k different from 1, 2 ≃ P (k peak )∆ ln k. If ∆ ln k ≪ 1 this weakens the bound on P (k peak ) by a factor (∆ ln k) −1, but such a narrow peak is not generated in typical scenarios. If instead ∆ ln k ≫ 1, one might think that the bound on P (k peak ) is strengthened by a factor (∆ ln k) −1, but that conclusion is too hasty because the observational bound (4.2) refers to the formation of horizon sized black holes at a more or less definite epoch whereas the broad peak will lead to the formation of such black holes over ∆ ln k Hubble times. The value of 2 in that case is not directly related to the black hole abundance, and the black hole bound on P (k peak ) is unlikely to be strengthened very much. For instance, if the observational bound on black hole abundance applies separately to the black holes formed within each unit interval of ln k, the effective value of y for a given value of P (k peak ) is just multiplied by that factor, which has a negligible effect on the bound on P (k peak ). The effect of lin Now we discuss the effect of lin, assuming that it is at least not canceled by nl. By virtue of Eq. (2.5), the first term of Eq. (3.12) is ≪ 1, and the second term is ≤ 1. If k * (t end ) is super-horizon, is of the form Eq. (4.5) with the minus sign, and the black hole bound is P (k * (t end )) 2. This is likely to be well satisfied. If instead k * (t end ) is sub-horizon, we have to remember that the black hole bound refers to horizon-sized regions. To apply it, we must drop sub-horizon modes of lin. Estimating the bispectrum, trispectrum as in, one sees that this makes lin nearly gaussian. Then P lin peaks at k ∼ k end ≡ a(t end )H, and the black hole bound is roughly P lin (k end ) 10 −2. This too will be satisfied if k * (t end ) is well within the horizon. We emphasize that these bounds refers to the formation of horizon-sized black holes. If k * (t end ) is sub-horizon, smaller black holes may also be formed. A discussion of their abundance would require assumptions about the evolution of the perturbations during the transition from inflation to non-inflation, and would be much more difficult than the corresponding discussion for the formation of black holes from. Although P lin (k) is probably too small to form black holes, it may still be quite large. If reheating after inflation is long delayed this may lead to copious structure formation with a variety of possible cosmological effects. Finally, let us see whether P lin (k) can be significant on cosmological scales; ie. whether it can be comparable with the observed quantity P ≃ 10 −9. It follows from Eq. (2.15) that the scale k * (t end ) is shorter than the scale leaving the horizon at the beginning of the waterfall. Therefore, the inequality (3.12) implies that P lin will give a negligible contribution to the observed quantity P ∼ 10 −9 if the shortest cosmological scale leaves the horizon more than 3 ln ≃ 7 e-folds before the start of the waterfall, ie. if the observable universe leaves the horizon more than ≃ 22 e-folds before the start of the waterfall. We will see that this is assured within the approximation of Section 5. Estimates using a simple approximation In this section we make a simple approximation for m 2 (t). 
This will allow us to verify some of the assumptions that we have been making, especially if we assume that satisfies the slow-roll approximation. The approximation for m 2 (t) The approximation is The cross-over between the two expressions is at t = t = ≡ m 2 / 3. The second expressions corresponds to setting = 0. If the linear era ends at t < t = only the first approximation is invoked. The first approximation is exact at t = 0, and it ignores the time-dependence of d( 2 )/dt = 2. The constancy of is a good approximation at t ≪ t =, and so is the constancy of if (5.8) is sufficiently well satisfied. The fast transition requirement described in Section 1.3 is H. To simplify some of the estimates we will usually take the requirement to be H ≪ (fast transition). (5.4) With this approximation for |m 2 (t)|, the linear era is completely described by the four parameters g, H, m, and. Let us define N(t) ≡ Ht. Then the epoch t = t = corresponds to (5.5) Slow-roll approximation To obtain the strongest possible results, we assume that the evolution of satisfies the slow-roll approximation, at least during some era that begins before the waterfall and ends when ceases to to affect the evolution of. Then unperturbed inflaton field satisfies The basic slow-roll approximation is or equivalently H|/| ≪ 1. As a scale leaves the horizon, the vacuum fluctuation of is converted to a classical perturbation with spectrum ≃ (H/2) 2. At a given epoch, the vacuum fluctuation is set to zero on sub-horizon scales. These results hold both before and during the waterfall. Focusing on the former era we have more results, because is the only field. First, we have a couple more relations: Second, we have the crucial result that generates nearly gaussian curvature perturbation to the curvature perturbation, with spectrum given by For a given k the spectrum is generated at the epoch of horizon exit k = aH, and is constant thereafter until at least the beginning of the waterfall. Trading for f For single-field inflation, we can use Eq. (5.12) to obtain more powerful results by trading for f ≡ 5 10 −5 −1 H 2 /2 = 5 10 −5 −1 P 1/2 (k beg ), (5.13) where k beg is the horizon scale at the beginning of the waterfall. Inflation models are usually constructed so that P accounts for the observed P on cosmological scales. Then, if P is nearly scale-independent we will have f ∼ 1. More generally there is an upper bound f 2 10 3 (black hole constraint) (5.14) corresponding to the black hole bound P 10 −2 on the spectrum of the nearly gaussian = that exists at the beginning of the waterfall. There is also a lower bound corresponding to Eq. (1.6): f ≫ 10 −2 H/(10 −5 M P ) (nearly exponential inflation). (5.15) The relation between f and is given by We are demanding g ≪ 1, but the fast transition requirement H ≪ can always be satisfied because f < 2 10 −3 and H ≪ m. In this paper we are not specifying the potential V (). Most previous work considers the potential V () = 1 2 m 2 2. Then slow-roll requires m ≪ H and 3H = −m 2. The fast transition requirement H becomes and f is given by In this case we need f ≪ 1, to avoid a positive spectral tilt for P which would conflict with observation. The requirement that V () does not support inflation (so that inflation ends with the end of the waterfall) is c 10M P, which is guaranteed by Eq. (1.10). The case N(t end ) ≪ 1 is considered in. We then have k 2 * (t end ) = a 2 (t start ) 2 /2(t end ) 1/2. 
Assuming instead N(t end ) 1, Our implicit assumption that the growth era starts well before t end corresponds to t end ≫ 1. This is equivalent to nl ≫ or min H mgf 1/5 The requirement t end < t = corresponds to. The case t end > t = In this case m 2 (t end ) ≃ −m 2. As we discussed in Section 3.1, 2 nl is given by Eq. (2.25), and P lin by Eq. (3.15). Growth starts, at the latest, at t = + m −1. #13 Our our approximation makes d|m(t)|/dt discontinuous at t = t = in violation of the adiabaticity condition (2.5). In reality |m(t)| will be smooth around t = t =. To avoid specifying a definite form for |m(t)|, we confine ourselves to the case N(t end ) 1. Then k 2 * (t end ) ≃ a 2 (t start )mH. (5.27) Assume first that the growth era starts before t =. Then Eqs. (2.18) and (2.8) give #14 We also have where the inequality holds because we are assuming that growth starts before t =. Using the first equality we get The final approximation is t = ≪ 3t end, which should be adequate because we are in the regime t = < t end. In this approximation, the growth before t = has a negligible effect. Using it we find This gives again the bound (5.26). Now suppose that growth does not start before t =. Then the inequality in Eq. (5.29) is reversed leading to N = ≪ 1. We therefore arrive again at Eq. (5.31) leading to Eq. (5.32). In this case Ht start = N = + H/m which is ≪ 1 as before. Also, from Eqs. (2.13) and (5.27), we have k * (t start )/k * (t end ) ≃ H/m. Using Eq. (3.8), this ensures that the typical value of |t(x, t end )/t(x, t start )| is m/H ≫ 1. This is much less than M 2 P /H 2, which means that Eq. (3.16) gives N nl ≪ ln(M P /H). This is the same bound that we obtained for N(t end ). It therefore applies to the total number of e-folds of the waterfall, N water ≡ N(t end ) + N nl. Duration of the non-linear era As seen in Section 4.2, we need the waterfall to begin more than 22 e-folds after the observable universe leaves the horizon, if we are to be sure that P lin (k) has a negligible effect on cosmological scales. Equivalently we need N obs −22 > N wat. From Eq. (1.1) the left hand side of this inequality is bigger than 47−/2 and we have seen that the right hand side is ≪ ln(M P /H). The inequality will therefore hold if 47 ≫ /2 ie. if H/M P ≫ 10 −41. This is hardly stronger than the BBN bound (1.9), which means that P lin (k) is almost certainly negligible on cosmological scales. Two inflation models To illustrate the power of our results we apply them to two inflation models based on supersymmetry. #15 We ignore a factor H/m within the log, which is permissible since H/m is also the prefactor. Supersymmetric GUT hybrid inflation Supersymmetric GUT hybrid inflation takes to be a GUT Higgs field so that vev ≃ 10 −2 M P corresponding to (H/m) 2 ≃ 10 −5. This is not small enough for the 'end of inflation' formula to yield the entire waterfall contribution to (Eq. (3.22)). Supersymmetry gives g 2 = 2 leading to g 2 = 10 9 (H/M P ) 2. This leaves for our discussion two independent parameters which we take as g and f. The potential V () may depend on several parameters. It typically steepens, and our discussion applies only if the parameters are such that steepening does not end slow-roll before t end. Requiring the inflaton perturbation to generate on cosmological scales, the steepening implies 10 −1.5 g f 1, the lower bound coming from Eq. (5.15). Using Eq. (5.16) we have /H ≃ 10 2 (g/f ) 1/3. The fast transition requirement (5.16) is certainly satisfied if g 2 ≫ 10 −12 (ie. 
H/M P ≫ 10 −10 which usually holds. The parameter space allows t end < t = (with either of the possibilities in Eq. (2.26)) as well as t end > t =. Provided that H/ is well below 1, the duration of the waterfall is quite short, and the 'end of inflation' formula can give nl in part of the parameter space ((3.21)). Supernatural/running-mass inflation Supernatural inflation and running-mass inflation take vev roughly of order M P corresponding to m roughly of order H. This can be motivated by supposing that is a string modulus, with gravity-mediated or anomaly-mediated supersymmetry breaking. The former case, V 1/4 0 ∼ 10 10 GeV or H ∼ 10 −15 M P is usually invoked and the latter would give H ∼ 10 −13 M P or so. This low inflation scale and m ∼ H are distinguishing features of the paradigm. The potential for supernatural inflation is V () = m 2 2 /2 which does not allow = on cosmological scales. Running-mass inflation takes V () to be the renormalization group improved potential allowing = on cosmological scales which is assumed. In a suitable regime of parameter space, (k) on small scales can be big enough to exceed the cosmological bound on black hole formation, providing a constraint on the parameter space; in other words we can have f ∼ 10 3. This is another distinguishing feature of the paradigm. Since m is roughly of order H our criterion m 2 ≫ H 2 cannot be very well satisfied and the analysis of the next Section is really more appropriate. To proceed we assume that m/H is a bit above 1, and take f ∼ 1. Then the fast transition requirement H ≪ is satsfied for g 2 ≫ 10 −8 which is expected. Since m is roughly of order H and we deal with the case t end > t =, corresponding to m 2 (t end ) ≃ −m 2. This gives N(t end ) ∼ ln(M P /H) ∼ 33, and k * (t end ) is outside the horizon, with P lin (k * (t end )) ≃ (H/2m) 2. This is is not far below 1, and the black hole bound might be violated. The duration of the non-linear era is N nl ∼ 1. Since Eq. (3.21) is not satisfied, the contribution nl is not given by the 'end of inflation' formula. The case m ∼ H Now we consider the case that m/H is 1 but not extremely small. To arrive at estimates we assume that P (k) in this regime continues to peak at some value k * (t) ≪ a(t)|m(t)|. Since |m(t)| ≤ m ∼ H this means that k * (t) is always outside the horizon. We assume that the gradient of is negligible, checking the self-consistency of that assumption later. Then Eq. (2.38) holds. Considering either of the two independent solutions we define s(t) b giving We assume that the right hand side of Eq. (6.2) is negligible, checking later for self-consistency. Keeping only the growing mode this gives The gradient of is indeed negligible compared with, which means that and p are given by Eqs. ( Since we haven't calculated (x, t) from the vacuum fluctuation we don't know the precise value of 2 (x, H −1 ) but it presumably lies roughly between and m ∼ H since these are the relevant mass scales. As we saw earlier this would make the log at most of order 10 2 or so. Therefore, since we are imposing H ≪, Eq. (6.5) is hardly compatible with Ht ≫ 1. The conclusion is that Eq. (2.40) probably requires the regime (5.2), m 2 (t) ≃ −m 2, which we assume from now on. That in turn implies 2 (t end ) ≫ 2 (t end ). To calculate lin we use Eq. (3.4), and assume |t(t end )/t(t start )| ≫ 1. Using Eqs. (2.29), (6.3), and (6.1), we find / 2 = 3/2s(t) ≃ H 2 /m 2, (6.6) to be compared with / 2 = 3H/2|m(t)| in the case m 2 ≫ H 2. 
We therefore have Since P (k) peaks at k *, we expect that the final equality of Eq. (2.18) will be roughly correct. Also we expect that P 2 (k) will be given roughly by Eq. (2.22) at k k * and will fall off at bigger k. Then, using Eq. (6.4) we see that P lin (k) peaks at k ∼ k * (t end ) with a value We conclude that the black hole bound is likely to be violated if m is significantly below H. A crucial feature of our setup is the condition (2.40), which is necessary for consistency if the gradient of is negligible and there is no cancellation between the two terms of. We found that the solution of Eq. (2.1) then indeed makes the gradient of negligible, with no cancellation. But the solution of Eq. (2.1) may also make the gradient negligible with no cancellation, in a part of parameter space where with the condition (2.40) violated. In such a regime, we would have to conclude that the linear approximation leading to Eq. (2.1) is invalid. Comparison with other calculations Nineteen other papers have considered the contribution of the waterfall to. in the fast transition regime. Some of them also consider the issue of black hole formation, concluding that the black hole constraint is satisfied for m ≫ H but not for m ∼ H. That is roughly our conclusion though we are less sure. In view of this, one may wonder whether the present paper and its companion are needed. They are, for several reasons. First, all of the previous papers take to be canonically normalized, and nearly all of them go much further by assuming V () = m 2 2 /2. Second, none of the previous papers specifies all of the assumptions that are made, as we do here and in. Third, none of them except considers the non-gaussian black hole bound as we have done in the present paper. Fourth, most of them present a calculation which is much more complicated than ours. Finally, all of the other papers except perhaps have errors. The last point was considered in our earlier paper. The problem for many of the papers is that the waterfall is treated as two-field inflation, without imposing the requirement 2 ≫ 2 > that would be needed to justify such a treatment. #16 As we have seen, this is not the case. It is only within the slow transition regime, considered in, that one can expect to find a regime of parameter space that allows the waterfall to be treated as two-field inflation. We will not repeat the analysis of the problems of the other papers appearing before, that was given in the latter paper. After three papers appeared. The paper is a continuation of. Its main focus is on the case c 10M P, where inflation continues after the waterfall, but that does not affect the contribution to generated during the waterfall, and Eq. (6.15)) of reproduces the expression for given in Eq. A of. The papers consider the case m H, and they calculate k by numerical integration with the potential V () = m 2 2 /2. #17 Then they evaluate lin by integrating Eq. (3.2), finding that m < H is definitely forbidden by the black hole bound. The calculation assumes that the gradient of can be ignored when evaluation and p but they don't investigate the compatibility of the evolution equation Eq. (2.1) with the energy continuity equation. However, their results for the case m = H (with the other parameters fixed at particular values) shown in their Figure 2 is in excellent agreement with our Eq. (6.1), assuming |m(t)| = m. 
Their result for P lin (k * (t end ), t end ) with the same parameter choice, shown in their Figure 4, is also in agreement with ours, assuming in addition 2 (t end ) ≫ 2 (t end ). It therefore seems that for at least this parameter choice, their assumption that the gradient of is negligible is justified, and that moreover the consistency condition (2.40) is satisfied. But there is no reason to think that the same is true in the entire parameter space, considered in their Figure 5. Regarding the black hole bound, they note that lin has the non-gaussian form (4.5). In they use 2 lin 1 instead of our P lin 1. As the width of the peak in P lin (k) is rather broad, this will somewhat overestimate the region of parameter space forbidden by the cosmological bound on P lin as we noted in Section 4.1. In a more sophisticated procedure is used to obtain the black hole bound, but they don't estimate the theoretical error and it is unclear to us whether it represents an improvement on our rough estimate P 1. Conclusion We have considered the contribution lin to, that is generated during the linear era of the waterfall within the Standard Scenario. We gave a rather complete calculation #16 The papers set 2 = 2 while the others regard it as a free parameter. #17 With m H, the fast transition requirement (5.17) conflicts with the slow-roll requirement m ≪ H, but the two are roughly compatible with the choice (m /H) 2 = 10 −1 of. for the case that the waterfall mass m is much bigger than H, and arrived at estimates for m ∼ H. Taking on board our discussion of the non-gaussian black hole bound, we concluded that the black hole bound will be satisfied for m ≫ H, but that it may well be violated for m H. The latter case will be further investigated in a future publication, by numerically integrating Eq. (2.1). A lot more will have to be done before we have a complete understanding of contribution to generated during the waterfall. A fundamental problem is to handle the ultra-violet cutoff, that is needed to obtain finite values for the fields and for the energy density and pressure. Our procedure of keeping only the classical field modes is approximate, and it violates at some level the energy continuity equation. This and related issues are discussed for instance in. A precise procedure is advocated in, but its relation to our approximate procedure is unclear. An understanding of the ultra-violet cutoff will allow one to decide on the minimum value of that allows an initial linear era. With that in place one would hopefully verify that the value invoked in the present calculation is big enough. But it will still be unclear how to evaluate the contribution to that is generated during inflation after the linear era ends, when it is not given by the 'end of inflation' contribution. A numerical simulation, even with reasonable simplifications, might well require one to consider a patch of the universe that is too big to handle. Acknowledgments The author acknowledges support from the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/J00418/1, and from UNILHC23792, European Research and Training Network (RTN) grant.
MUMBAI, India — For decades, luxury hotels have been oases for travelers in developing countries, places to mingle with the local elite, enjoy a lavish meal or a dip in the pool and sleep in a clean, safe room. But last week’s lethal attacks on two of India’s most famous hotels — coming just two months after a huge truck bomb devastated the Marriott in Islamabad, Pakistan — have underlined the extent to which these hotels are becoming magnets for terrorists. Worse, hotel executives and security experts say that little can be done to stop extensively trained gunmen with military assault rifles and grenades who launch attacks like the ones that left this city’s Oberoi and Taj Mahal Palace & Tower strewn with bodies. P.R.S. Oberoi, the chairman of the Oberoi Group, said at a news conference over the weekend that he had directed his company’s hotels to step up security after the Islamabad bombing. The Oberoi banned anyone from parking in front of its hotel here for fear that a car bomb could destroy the glass wall at the front of the lobby, a risk at many hotels. But those protections did not deter the attackers, who entered the Oberoi on foot. Mr. Oberoi questioned whether any hotel could defend against such an assault. “The authorities have to help us,” he said, by preventing attacks from occurring at all. The Taj, it turns out, had warning, according to both an Indian government official, who spoke on the condition of anonymity, and Ratan Tata, the chairman of the company that owns the hotel. In an interview on CNN, Mr. Tata said the hotel had temporarily increased security after being warned of a possible terrorist attack. But he said those measures were eased shortly before last week’s attacks and could not have prevented gunmen from entering the hotel. American hotel chains have policies against discussing security precautions, but watched the Mumbai hotel sieges closely. “We never talk about security measures in our hotels because to talk about what we do would compromise them, but I think it’s fair to say what happened in Mumbai is going to re-energize them,” said Vivian Deuschl, the spokeswoman for the Ritz Carlton Hotel Company, a Marriott subsidiary. Some hotels in Asia already take elaborate precautions, particularly in countries with histories of attacks on Western luxury hotels. At the Grand Hyatt in Jakarta, Indonesia, for example, guards check the trunks of all vehicles and even use mirrors to check cars’ underbodies for explosives before letting them drive to the entrance. Guests’ baggage is opened and checked by hand for suspicious objects, and everyone must go through a metal detector before entering the building. In Pakistan’s major cities, where hotels have been targets before, already-tight security at some hotels has become even more intrusive since the Marriott bombing. Guests have to pass through at least one, and often, several security checkpoints on their way into the hotels; some are staffed by paramilitaries. At the luxury Serena Hotel in Islamabad, those who wish to enter are grilled about where they are going and whom they are meeting. But security experts say such measures — and even some lesser ones — will be difficult to implement outside of war zones or countries where hotels have already been made targets, even after the attacks in Mumbai. Hotels have some built-in design problems for those seeking to protect them from terrorists. Long hallways can turn into dangerous mazes during the type of attacks that occurred in Mumbai. 
And the Oberoi and the old wing of the Taj hotel, where most of the fighting took place, both have high, central atriums, as many hotels do. This proved to be a vulnerability. After throwing grenades and directing automatic weapons fire at staff and diners in ground-floor lobbies and restaurants, the attackers at each hotel ascended the atriums. This allowed them to hunt down guests while dropping grenades and shooting at commandos below. The Oberoi Group employs many plainclothes security officers in its hotels, but they are unarmed, Mr. Oberoi said. J. K. Dutt, the director general of India’s National Security Guards, the commando force that took the lead in the fighting, said Sunday in a televised news conference that the most difficult gunman to attack in the Taj hotel was one who ascended a spiral staircase and took up a position behind an extremely thick pillar that was part of the 105-year-old building’s original structure. Particularly at the Taj, the attackers seemed to have detailed knowledge of the building’s layout, Mr. Dutt said. They kept moving among large halls with multiple entrances, not allowing themselves to be cornered in small rooms without other exits. By contrast, the commandos and the police had old blueprints of the massive, labyrinthine hotel that did not clearly show which passageways were connected and which were blocked by walls, and did not show recent construction, Mr. Dutt said. The police and first-response agencies should be working with the hotel industry to devise crisis action plans that would include computer programs detailing all internal and external aspects of hotel building structure, said Michael Coldrick, a London-based security professional and a former explosives specialist with Scotland Yard. For example, a prerecorded DVD walk-through of a hotel could be used to brief special forces assault teams to make sure that they know what to expect. In the end, several security experts say, no system is foolproof. The Marriott in Islamabad, which had been struck in the past, had layers of security in place on the night the truck bomber approached. The truck was stopped by security guards who check vehicles before allowing them through a hydraulic barrier. Those precautions are credited with saving lives; the truck never made it past the barrier and closer to the hotel, where the blast would have been more devastating. Still, more than 50 people died and more than 250 were wounded. Heather Timmons contributed reporting from New Delhi, Salman Masood from Islamabad, Pakistan, and Barry Meier from New York.
// src/filters/transform/MPCVideoDec/memcpy_sse.h
/*
 *
 * Copyright (C) 2011 <NAME>
 * http://www.1f0.de
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Taken from the QuickSync decoder by <NAME>
 *
 * Adaptation for MPC-BE (C) 2012 Sergey "Exodus8" (<EMAIL>)
 *
 */

#pragma once

#include <intrin.h>
#include <emmintrin.h>

// SSE2 copy with software prefetch and non-temporal (streaming) stores.
// 32-bit builds only: the MSVC inline assembler is unavailable on x64,
// so there the function falls back to the CRT memcpy. Copies smaller
// than 512 bytes use a simple byte-wise loop.
inline void* memcpy_sse2(void* dst, const void* src, size_t nBytes)
{
#ifndef _WIN64
    __asm {
        // optimized on Intel Core 2 Duo
        mov ecx, nBytes
        mov edi, dst
        mov esi, src
        add ecx, edi

        prefetchnta [esi]
        prefetchnta [esi+32]
        prefetchnta [esi+64]
        prefetchnta [esi+96]

        // use the byte-wise loop for copies smaller than 512 bytes
        cmp nBytes, 512
        jge fast

slow:
        mov bl, [esi]
        mov [edi], bl
        inc edi
        inc esi
        cmp ecx, edi
        jnz slow
        jmp end

fast:
        // align dstEnd to 128 bytes
        and ecx, 0xFFFFFF80

        // get srcEnd aligned to dstEnd aligned to 128 bytes
        mov ebx, esi
        sub ebx, edi
        add ebx, ecx

        // skip unaligned copy if dst is aligned
        mov eax, edi
        and edi, 0xFFFFFF80
        cmp eax, edi
        jne first
        jmp more

first:
        // copy the first 128 bytes unaligned
        movdqu xmm0, [esi]
        movdqu xmm1, [esi+16]
        movdqu xmm2, [esi+32]
        movdqu xmm3, [esi+48]
        movdqu xmm4, [esi+64]
        movdqu xmm5, [esi+80]
        movdqu xmm6, [esi+96]
        movdqu xmm7, [esi+112]

        movdqu [eax], xmm0
        movdqu [eax+16], xmm1
        movdqu [eax+32], xmm2
        movdqu [eax+48], xmm3
        movdqu [eax+64], xmm4
        movdqu [eax+80], xmm5
        movdqu [eax+96], xmm6
        movdqu [eax+112], xmm7

        // add 128 bytes to edi aligned earlier
        add edi, 128

        // offset esi by the same value
        sub eax, edi
        sub esi, eax

        // last bytes if dst at dstEnd
        cmp ecx, edi
        jnz more
        jmp last

more:
        // handle equally aligned arrays
        mov eax, esi
        and eax, 0xFFFFFF80
        cmp eax, esi
        jne unaligned4k

aligned4k:
        mov eax, esi
        add eax, 4096
        cmp eax, ebx
        jle aligned4kin

        cmp ecx, edi
        jne alignedlast
        jmp last

aligned4kin:
        prefetchnta [esi]
        prefetchnta [esi+32]
        prefetchnta [esi+64]
        prefetchnta [esi+96]

        add esi, 128
        cmp eax, esi
        jne aligned4kin

        sub esi, 4096

aligned4kout:
        movdqa xmm0, [esi]
        movdqa xmm1, [esi+16]
        movdqa xmm2, [esi+32]
        movdqa xmm3, [esi+48]
        movdqa xmm4, [esi+64]
        movdqa xmm5, [esi+80]
        movdqa xmm6, [esi+96]
        movdqa xmm7, [esi+112]

        movntdq [edi], xmm0
        movntdq [edi+16], xmm1
        movntdq [edi+32], xmm2
        movntdq [edi+48], xmm3
        movntdq [edi+64], xmm4
        movntdq [edi+80], xmm5
        movntdq [edi+96], xmm6
        movntdq [edi+112], xmm7

        add esi, 128
        add edi, 128
        cmp eax, esi
        jne aligned4kout
        jmp aligned4k

alignedlast:
        mov eax, esi

alignedlastin:
        prefetchnta [esi]
        prefetchnta [esi+32]
        prefetchnta [esi+64]
        prefetchnta [esi+96]

        add esi, 128
        cmp ebx, esi
        jne alignedlastin

        mov esi, eax

alignedlastout:
        movdqa xmm0, [esi]
        movdqa xmm1, [esi+16]
        movdqa xmm2, [esi+32]
        movdqa xmm3, [esi+48]
        movdqa xmm4, [esi+64]
        movdqa xmm5, [esi+80]
        movdqa xmm6, [esi+96]
        movdqa xmm7, [esi+112]

        movntdq [edi], xmm0
        movntdq [edi+16], xmm1
        movntdq [edi+32], xmm2
        movntdq [edi+48], xmm3
        movntdq [edi+64], xmm4
        movntdq [edi+80], xmm5
        movntdq [edi+96], xmm6
        movntdq [edi+112], xmm7

        add esi, 128
        add edi, 128
        cmp ecx, edi
        jne alignedlastout
        jmp last

unaligned4k:
        mov eax, esi
        add eax, 4096
        cmp eax, ebx
        jle unaligned4kin

        cmp ecx, edi
        jne unalignedlast
        jmp last

unaligned4kin:
        prefetchnta [esi]
        prefetchnta [esi+32]
        prefetchnta [esi+64]
        prefetchnta [esi+96]

        add esi, 128
        cmp eax, esi
        jne unaligned4kin

        sub esi, 4096

unaligned4kout:
        movdqu xmm0, [esi]
        movdqu xmm1, [esi+16]
        movdqu xmm2, [esi+32]
        movdqu xmm3, [esi+48]
        movdqu xmm4, [esi+64]
        movdqu xmm5, [esi+80]
        movdqu xmm6, [esi+96]
        movdqu xmm7, [esi+112]

        movntdq [edi], xmm0
        movntdq [edi+16], xmm1
        movntdq [edi+32], xmm2
        movntdq [edi+48], xmm3
        movntdq [edi+64], xmm4
        movntdq [edi+80], xmm5
        movntdq [edi+96], xmm6
        movntdq [edi+112], xmm7

        add esi, 128
        add edi, 128
        cmp eax, esi
        jne unaligned4kout
        jmp unaligned4k

unalignedlast:
        mov eax, esi

unalignedlastin:
        prefetchnta [esi]
        prefetchnta [esi+32]
        prefetchnta [esi+64]
        prefetchnta [esi+96]

        add esi, 128
        cmp ebx, esi
        jne unalignedlastin

        mov esi, eax

unalignedlastout:
        movdqu xmm0, [esi]
        movdqu xmm1, [esi+16]
        movdqu xmm2, [esi+32]
        movdqu xmm3, [esi+48]
        movdqu xmm4, [esi+64]
        movdqu xmm5, [esi+80]
        movdqu xmm6, [esi+96]
        movdqu xmm7, [esi+112]

        movntdq [edi], xmm0
        movntdq [edi+16], xmm1
        movntdq [edi+32], xmm2
        movntdq [edi+48], xmm3
        movntdq [edi+64], xmm4
        movntdq [edi+80], xmm5
        movntdq [edi+96], xmm6
        movntdq [edi+112], xmm7

        add esi, 128
        add edi, 128
        cmp ecx, edi
        jne unalignedlastout
        jmp last

last:
        // get the last 128 bytes
        mov ecx, nBytes
        mov edi, dst
        mov esi, src
        add edi, ecx
        add esi, ecx
        sub edi, 128
        sub esi, 128

        // copy the last 128 bytes unaligned
        movdqu xmm0, [esi]
        movdqu xmm1, [esi+16]
        movdqu xmm2, [esi+32]
        movdqu xmm3, [esi+48]
        movdqu xmm4, [esi+64]
        movdqu xmm5, [esi+80]
        movdqu xmm6, [esi+96]
        movdqu xmm7, [esi+112]

        movdqu [edi], xmm0
        movdqu [edi+16], xmm1
        movdqu [edi+32], xmm2
        movdqu [edi+48], xmm3
        movdqu [edi+64], xmm4
        movdqu [edi+80], xmm5
        movdqu [edi+96], xmm6
        movdqu [edi+112], xmm7

end:
    }
    return dst;
#else
    return memcpy(dst, src, nBytes);
#endif
}

static bool SSE = false, SSE2 = false, SSE41 = false;

inline void check_sse()
{
    if (!SSE && !SSE2 && !SSE41) {
        int info[4];
        __cpuid(info, 0);

        // Detect Instruction Set
        if (info[0] >= 1) {
            __cpuid(info, 0x00000001);
            SSE   = (info[3] & ((int)1 << 25)) != 0;
            SSE2  = (info[3] & ((int)1 << 26)) != 0;
            SSE41 = (info[2] & ((int)1 << 19)) != 0;
        }
    }
}

// Dispatching wrapper: uses SSE4.1 streaming loads (MOVNTDQA) when available,
// otherwise the SSE2 routine above, otherwise the CRT memcpy. Both pointers
// must be 16-byte aligned to take a SIMD path; unaligned input falls back to memcpy.
inline void* memcpy_sse(void* d, const void* s, size_t size)
{
    if (d == NULL || s == NULL) {
        return NULL;
    }

    // If memory is not aligned, use memcpy
    bool isAligned = (((size_t)(s) | (size_t)(d)) & 0xF) == 0;
    if (!isAligned) {
        return memcpy(d, s, size);
    }

    check_sse();

    if (!SSE41) {
        if (SSE2) {
            return memcpy_sse2(d, s, size);
        }
        return memcpy(d, s, size);
    }

    static const size_t regsInLoop = sizeof(size_t) * 2; // 8 or 16

    __m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7;
#ifdef _M_X64
    __m128i xmm8, xmm9, xmm10, xmm11, xmm12, xmm13, xmm14, xmm15;
#endif

    size_t remainder = size & (regsInLoop * sizeof(xmm0) - 1); // Copy 128 or 256 bytes every loop
    size_t end = 0;

    __m128i* pTrg    = (__m128i*)d;
    __m128i* pTrgEnd = pTrg + ((size - remainder) >> 4);
    __m128i* pSrc    = (__m128i*)s;

    // Make sure source is synced - doesn't hurt if not needed.
    _mm_sfence();

    while (pTrg < pTrgEnd) {
        // _mm_stream_load_si128 emits the Streaming SIMD Extensions 4 (SSE4.1) instruction MOVNTDQA
        // Fastest method for copying GPU RAM. Available since Penryn (45nm Core 2 Duo/Quad)
        xmm0 = _mm_stream_load_si128(pSrc);
        xmm1 = _mm_stream_load_si128(pSrc + 1);
        xmm2 = _mm_stream_load_si128(pSrc + 2);
        xmm3 = _mm_stream_load_si128(pSrc + 3);
        xmm4 = _mm_stream_load_si128(pSrc + 4);
        xmm5 = _mm_stream_load_si128(pSrc + 5);
        xmm6 = _mm_stream_load_si128(pSrc + 6);
        xmm7 = _mm_stream_load_si128(pSrc + 7);
#ifdef _M_X64
        // Use all 16 xmm registers
        xmm8  = _mm_stream_load_si128(pSrc + 8);
        xmm9  = _mm_stream_load_si128(pSrc + 9);
        xmm10 = _mm_stream_load_si128(pSrc + 10);
        xmm11 = _mm_stream_load_si128(pSrc + 11);
        xmm12 = _mm_stream_load_si128(pSrc + 12);
        xmm13 = _mm_stream_load_si128(pSrc + 13);
        xmm14 = _mm_stream_load_si128(pSrc + 14);
        xmm15 = _mm_stream_load_si128(pSrc + 15);
#endif
        pSrc += regsInLoop;

        // _mm_store_si128 emits the SSE2 instruction MOVDQA (aligned store)
        _mm_store_si128(pTrg    , xmm0);
        _mm_store_si128(pTrg + 1, xmm1);
        _mm_store_si128(pTrg + 2, xmm2);
        _mm_store_si128(pTrg + 3, xmm3);
        _mm_store_si128(pTrg + 4, xmm4);
        _mm_store_si128(pTrg + 5, xmm5);
        _mm_store_si128(pTrg + 6, xmm6);
        _mm_store_si128(pTrg + 7, xmm7);
#ifdef _M_X64
        // Use all 16 xmm registers
        _mm_store_si128(pTrg + 8,  xmm8);
        _mm_store_si128(pTrg + 9,  xmm9);
        _mm_store_si128(pTrg + 10, xmm10);
        _mm_store_si128(pTrg + 11, xmm11);
        _mm_store_si128(pTrg + 12, xmm12);
        _mm_store_si128(pTrg + 13, xmm13);
        _mm_store_si128(pTrg + 14, xmm14);
        _mm_store_si128(pTrg + 15, xmm15);
#endif
        pTrg += regsInLoop;
    }

    // Copy in 16 byte steps
    if (remainder >= 16) {
        size = remainder;
        remainder = size & 15;
        end = size >> 4;
        for (size_t i = 0; i < end; ++i) {
            pTrg[i] = _mm_stream_load_si128(pSrc + i);
        }
    }

    // Copy last bytes - shouldn't happen as strides are modulo 16
    if (remainder) {
        __m128i temp = _mm_stream_load_si128(pSrc + end);

        char* ps = (char*)(&temp);
        char* pt = (char*)(pTrg + end);

        for (size_t i = 0; i < remainder; ++i) {
            pt[i] = ps[i];
        }
    }

    return d;
}
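For orientation, here is a minimal usage sketch (not part of the original header; the buffer size and the function name copy_frame_example are invented for illustration). memcpy_sse takes the SIMD path only when both pointers are 16-byte aligned, so a caller would typically allocate aligned buffers, for example with MSVC's _aligned_malloc:

#include <string.h>   // memset
#include <malloc.h>   // _aligned_malloc / _aligned_free (MSVC)
// #include "memcpy_sse.h"  // the header above

void copy_frame_example()
{
    const size_t frameBytes = 1920 * 1080 * 4;    // hypothetical video frame
    void* src = _aligned_malloc(frameBytes, 16);  // 16-byte alignment enables the SIMD path
    void* dst = _aligned_malloc(frameBytes, 16);
    if (src && dst) {
        memset(src, 0x42, frameBytes);            // fill with dummy data
        memcpy_sse(dst, src, frameBytes);         // streams via MOVNTDQA on SSE4.1 CPUs
    }
    _aligned_free(src);
    _aligned_free(dst);
}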
// Creates a GradeBook object using a two-dimensional array
// of grades, then invokes method processGrades to analyze them.
public class GradeBookTest {
    // main method begins program execution
    public static void main(String[] args) {
        // two-dimensional array of student grades
        int[][] gradesArray = {{87, 96, 70}, {68, 87, 90}, {94, 100, 90},
                               {100, 81, 82}, {83, 65, 85}, {78, 87, 65},
                               {85, 75, 83}, {91, 94, 100}, {76, 72, 84},
                               {87, 93, 73}};

        GradeBook myGradeBook = new GradeBook(
            "CS101 Introduction to Java Programming", gradesArray);
        System.out.printf("Welcome to the grade book for%n%s%n%n",
            myGradeBook.getCourseName());
        myGradeBook.processGrades();
    }
}
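The driver above depends on a GradeBook class that is not reproduced here. The following is a minimal companion sketch, assuming only the members this driver actually calls (the two-argument constructor, getCourseName, and processGrades); the textbook version of processGrades reports additional statistics:

public class GradeBook {
    private final String courseName;
    private final int[][] grades; // rows = students, columns = exams

    public GradeBook(String courseName, int[][] grades) {
        this.courseName = courseName;
        this.grades = grades;
    }

    public String getCourseName() {
        return courseName;
    }

    // prints each student's grades, then the overall class average
    public void processGrades() {
        int total = 0;
        int count = 0;
        for (int student = 0; student < grades.length; student++) {
            System.out.printf("Student %2d:", student + 1);
            for (int grade : grades[student]) {
                System.out.printf(" %3d", grade);
                total += grade;
                count++;
            }
            System.out.println();
        }
        System.out.printf("Class average: %.2f%n", (double) total / count);
    }
}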
Diseases of the Aorta and Kidney Disease: conclusions from a Kidney Disease: Improving Global Outcomes (KDIGO) Controversies Conference Abstract Chronic kidney disease (CKD) is an independent risk factor for the development of abdominal aortic aneurysm (AAA), as well as for cardiovascular and renal events and all-cause mortality following surgery for AAA or thoracic aortic dissection. In addition, the incidence of acute kidney injury (AKI) after any aortic surgery is particularly high, and this AKI per se is independently associated with future cardiovascular events and mortality. On the other hand, both development of AKI after surgery and the long-term evolution of kidney function differ significantly depending on the type of AAA intervention (open surgery vs. the various subtypes of endovascular repair). Current knowledge regarding AAA in the general population may not be always applicable to CKD patients, as they have a high prevalence of co-morbid conditions and an elevated risk for periprocedural complications. This summary of a Kidney Disease: Improving Global Outcomes Controversies Conference group discussion reviews the epidemiology, pathophysiology, diagnosis, and treatment of Diseases of the Aorta in CKD and identifies knowledge gaps, areas of controversy, and priorities for future research. Introduction In February 2020, Kidney Disease: Improving Global Outcomes (KDIGO) held the 4th of a series of Controversies Conferences on cardiovascular diseases in patients with chronic kidney disease (CKD) focusing on Central & Peripheral Arterial Diseases in CKD in Dublin, Ireland. The conference covered four large topics: Cerebrovascular Disease, Central Aortic Disease, Renovascular Disease, and Peripheral Arterial Diseases. A summary report 1 provided an overview of the conference but was not able to provide in-depth context and full review of these diverse areas. This report focuses exclusively on Central Aortic Disease, including the associations of CKD with abdominal aortic aneurysm (AAA) and the impact of CKD on post-surgery outcomes; the association of AAA with renal artery stenosis (RAS); the potential impact of CKD on the pathobiology and prognosis of AAA; the approach to the initial diagnostic evaluation of AAA in CKD; the management of AAA in patients with CKD; the incidence, impact on outcomes and prevention strategies of acute kidney injury (AKI) following AAA repair; the post-procedure evaluation of AAA in CKD; the long-term course of kidney function following AAA repair; differences in AAA management in special populations (e.g. acute rupture, elderly, and women); the epidemiology, and management of thoracic aortic dissection in the presence of CKD and the incidence and impact of AKI following thoracic aortic surgery. 2. CKD and abdominal aortic aneurysm: associations and impact on outcomes 2.1 CKD and the risk of incident abdominal aortic aneurysm AAA is a progressive disease leading to dilatation of the aortic lumen and is defined as an abdominal aortic diameter of >3.0 cm. The prevalence of AAA in the Western World is estimated at 1-4.5% of men and 0.5% of women at 65-70 years of age. The prevalence of AAA within CKD patient populations has not been specifically studied. 
Preliminary data from cross-sectional studies suggest that the prevalence of AAA can be up to 30% higher in individuals with CKD; 7,8 however, the cross-sectional nature of such studies did not allow them to determine whether CKD is associated with future risk for AAA development or whether the association is an epidemiologic co-existence driven by shared underlying risk factors. A recent analysis of 10 724 participants in the Atherosclerosis Risk in Communities Study (aged 53-75 years during 1996-1998), a large communitybased cohort, evaluated the associations of estimated glomerular filtration rate (eGFR) and urine albumin-to-creatinine ratio (ACR) with incident AAA (diagnosis in outpatient, hospitalization discharge, or death records) over a median follow-up of 13.9 years (Figure 1). 9 The demographically adjusted hazard ratios for AAA development were progressively increasing either with descending groups of eGFR (starting from 60-74 mL/min/1.73 m 2 compared to the group of > _90 mL/min/1.73 m 2 ) or with increasing levels of ACR (starting from ACR as low as 10-29 mg/g compared with ACR < 10 mg/g). Associations of pre-surgery kidney function with long-term cardiovascular events and mortality Over the past decade, the effects of pre-surgery kidney function on post-surgery outcomes were also studied. In a prospective cohort study of 383 patients with infrarenal AAA that underwent endovascular aortic aneurysm repair (EVAR), cumulative freedom from the composite end point (death, myocardial infarction, stroke, and peripheral vascular complications), and cumulative survival were progressively lower for declining eGFR groups over 36 months of follow-up. In adjusted Coxregression analysis, every 1 mL/min/1.73 m 2 higher baseline eGFR was associated with a 5% lower likelihood of the composite end-point and a 6% lower likelihood of death. 10 Similarly, in a retrospective cohort study of 47 715 patients who underwent AAA repair (of whom 25.7% open repair and 74.3% EVAR), those with moderately (eGFR 30-59 mL/min/ 1.73 m 2 ) or severely (eGFR <30 mL/min/1.73 m 2 ) impaired kidney function had significantly higher 30-day mortality, a longer length of hospital stay, higher treatment-related costs, and lower 3-year survival compared with individuals without CKD ( Figure 2). 11 Associations of pre-surgery kidney function with long-term kidney outcomes With regard to the association of pre-surgery kidney function with postsurgery renal outcomes, in most relevant studies this was studied in relation to the type of treatment, which is discussed to a greater extent below. In 12 In a cohort of 275 patients with AAA who underwent EVAR, the presence of CKD of G3 or higher was associated with two-fold higher odds for eGFR loss >20% over 9 years of follow-up. 13 A three-fold higher risk of patients with pre-existing CKD for eGFR loss >20% or kidney failure was also noted in another cohort of 268 patients with AAA undergoing different types of repair. 14 3. Pathophysiology, natural history, and risk of rupture of abdominal aortic aneurysm in patients with and without CKD AAA is a multifactorial disease. Traditional risk factors for AAA include age, smoking, hypertension, and family history, with a higher prevalence in men across all ages. 6 The disease is characterized by infiltration of inflammatory cells including macrophages and lymphocytes into the aortic wall, and associated atherosclerotic processes may also be present. 
In addition, there is progressive loss of vascular smooth muscle cells (VSMCs) from the aortic wall and degradation of the extracellular matrix due to the production of matrix-degrading enzymes linked to inflammatory cell infiltration and VSMC phenotype change ( Figure 3). Resultant structural weakening of the vessel wall renders the aorta more susceptible to rupture. 15 However, the exact aetiology of the disease and the factors that precipitate rupture are poorly understood and involve complex interactions between pathological and biomechanical processes. 16 CKD and AAA share a number of risk factors including age, hypertension, and smoking. On the other hand, risk factors that are prominent in patients with CKD, such as alterations in calcium-phosphate metabolism, arterial stiffness, oxidative stress, and others may also contribute in AAA development. Hypertension, in particular, plays a central role in the pathogenesis of both diseases. Elevated blood pressure is the most common modifiable risk factor for CKD progression and leads to kidney damage through multiple mechanisms, from glomerular hyperfiltration and proteinuria to hyalinosis of the pre-glomerular vessels causing ischaemia and to direct podocyte injury. 19,20 Furthermore, hypertension can promote AAA formation through various pathways, including increased expression of matrix metalloproteinases (MMPs), upregulation of inflammatory responses such as nuclear factor kappaB signalling and others. 21,22 Currently, no studies address differences in pathophysiologic mechanisms, natural history, and risk of rupture of AAA between patients with and without CKD or among CKD stages. As discussed above, in the Atherosclerosis Risk in Communities eGFR and albuminuria were independently associated with greater risk of AAA and of greater abdominal aortic diameter. 9 Although eGFR and albuminuria may be useful in patient stratification, this study was unable to disentangle mechanisms through which CKD may promote AAA development. 23 Therefore, there is a need for research addressing whether features that characterize patients with advanced CKD or on dialysis (e.g. alterations in calciumphosphate metabolism, arterial stiffness, insulin resistance, oxidative stress, and inflammation) are involved in the development and progression of AAA. It will also be important to compare and contrast the pathological features of the vessel wall in AAA between patients with and without CKD. The association between vascular stiffening and calcification and the risk of AAA development and rupture is an area of particular interest. These pathologies are common and widespread in ageing and are accelerated across all ages in CKD. 24 At the cellular level, calcification is linked to accelerated senescence and death of VSMCs, promoting their conversion to an osteogenic and pro-inflammatory phenotype. 25 Premature VSMC ageing is also a feature of AAA. 15,26 Studies in non-CKD populations have demonstrated that there is an association between calcification and cardiovascular mortality, all-cause mortality, and rupture in patients with AAA, but more definitive work is required. Imaging of active calcification using 18 F-sodium fluoride was an additive predictor of aneurysm growth and future clinical events, and this or other imaging modalities may be useful in future studies of patients with CKD. 
18,30 Other studies report associations between arterial stiffness and AAA, 31 and it has been postulated that increased MMP activity in both the vasculature and kidney in CKD patients may be particularly involved in AAA development. 32 These multi-functional enzymes drive vessel wall remodelling, vascular stiffening, and calcification, as well as kidney fibrosis and were suggested as potential markers to refine risk stratification in CKD; thus, a possible role of MMPs in the acceleration of aortic abnormalities in these patients should be further investigated. Renal artery stenosis in patients with abdominal aortic aneurysm The majority of observational studies and clinical trials of AAA or AAA repair do not report rates of renal artery stenosis (RAS). The prevalence of RAS in the angiographic studies of patients with AAA that do report it varies significantly, ranging from 2.6% to 39% (averaging 30%), depending on the criteria used for case definitions. Relevant parameters include RAS severity (>50-70%; unilateral or bilateral); AAA location (infrarenal vs. suprarenal); presence or absence of CKD; the indication for angiography (angiography for other diseases, angiography for suspected RAS); and the angiographic technique used (arteriography; CT angiography; MR angiography). In older studies with small sample sizes, the prevalence of RAS in patients with AAA was estimated to be around 30%. 36 In a recent study with 933 participants, the prevalence of RAS ranged from 5.2% (infrarenal AAA) to 20.3% (suprarenal AAA). 37 In another recent study of patients undergoing repair for infrarenal AAA, only 2.6% of patients had RAS, which was defined by stenosis of 70% or more. 14 Future studies should assess the prevalence of RAS with angiographic criteria, its functional impact (hypertension control, kidney function), and its the prognostic significance for renal outcomes in patients with AAA. Initial diagnostic evaluation of abdominal aortic aneurysm in patients with CKD Duplex ultrasonography, computed tomographic angiography (CTA), and magnetic resonance angiography (MRA) are commonly used for the diagnosis of AAA. Duplex ultrasonography refers to B-mode ('brightness' mode) grey-scale imaging with pulse-wave Doppler spectral and colour flow analysis. Usually a grey-scale (B-mode ultrasound is sufficient for the initial evaluation and follow-up of an AAA (systolic size measurement of aneurysm extent from outer wall to outer wall in anterior-posterior direction). Additional information can be obtained by colour Doppler ultrasound, which is routinely performed in several countries. 38,39 A high-quality examination is dependent on the skill of the technologist and the use of appropriate ultrasound probe with adequate depth of penetration (MHz), fasting patient status, as well as adequate gain and wall filter settings to distinguish true findings from artefact or noise. The main considerations in evaluating AAA include accurate anatomic assessment to identify patients meeting criteria for revascularization (Table 1). 40,41 To achieve these goals, B-mode imaging alone is the gold standard, although additional information may be obtained with pulse-wave Doppler spectral and colour flow analysis (e.g. presence and extent of mural thrombus). 38,39,42 It is important to note, however, that ultrasound alone is not sufficient for procedural planning, and additional imaging, preferably with CTA, is almost always necessary given the complexity of these procedures. 
Recent data and consensus statements indicate that the risk of contrast-induced nephropathy (CIN) may have been overstated historically and should not deter proper diagnosis and treatment of AAA in patients with CKD. 43 The amount of contrast used with modern CTA is considerably less than in prior years. It is also important to note that CIN risk-prediction tools are available to help predict which patients undergoing EVAR procedures may experience adverse events from iodinated contrast. 44 The Mehran risk-prediction model, which is the most widely adopted for CIN in coronary intervention patients, seems to have the best discriminative ability among EVAR patients. 45

MRA is less useful in the initial or pre-procedural evaluation of AAA due to motion artefact, resolution, and cost, although gadolinium-based contrast agent (GBCA) imaging may improve the quality of the examination and is no longer a major risk for patients with CKD, as long as group II agents are used. 40,41,46 According to the American College of Radiology, GBCAs classified in group II (gadobenate dimeglumine, gadobutrol, gadoterate meglumine, and gadoteridol) have higher stability against dissociation of gadolinium than the group I agents. 47,48 A meta-analysis on the risk of nephrogenic systemic fibrosis (NSF) concluded that the risk of NSF in CKD G4 or G5 after receiving a group II GBCA is less than 0.07%. 49 Thus, the potential diagnostic harms of withholding group II GBCAs for indicated MRI examinations may outweigh the risk of NSF.

6. Management of abdominal aortic aneurysm in patients with CKD

6.1 Indications for treatment and available modalities for AAA repair

There is no proven medical therapy that reduces the risk of AAA rupture. Open surgical and endovascular repair are the only treatments shown to decrease AAA-specific mortality. 50 However, both have a number of complications, including acute and chronic kidney dysfunction. The indications for AAA treatment in the elective and emergency settings do not differ for patients who have established CKD. 50 When an AAA measures less than 55 mm in maximal antero-posterior diameter, the rupture risk is less than the risk of surgery. 51 Men with an AAA diameter >55 mm are considered for elective AAA surgery; in women, the threshold for considering elective AAA repair may be around 50 mm. Patients with symptoms secondary to the AAA (e.g. pain) and those who present with a rupture are considered for emergency repair. 50 Those with an increase in diameter of >10 mm in 1 year should be referred to a surgeon. The presence of CKD should be taken into account during preoperative risk assessment, since CKD is associated with a higher risk of post-surgery AKI, long-term eGFR decline, cardiovascular events, and mortality, as discussed extensively above. Endovascular aneurysm repair (EVAR) has superior short-term outcomes compared with open surgery and has become the treatment of choice for many patients. 50 However, there are no specific data to support offering EVAR over open surgery in individuals with CKD; anatomical and other patient-related parameters should be taken into account when making that decision.
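The elective thresholds above amount to a simple triage rule. As a hedged illustration only, here is a minimal Python sketch of that logic (a teaching aid, not a clinical tool; the function name and inputs are invented for this example):

def aaa_repair_triage(diameter_mm, growth_mm_per_year, male, symptomatic_or_ruptured):
    """Illustrative encoding of the referral criteria described above."""
    if symptomatic_or_ruptured:
        return "consider emergency repair"
    threshold_mm = 55 if male else 50          # >55 mm in men, around 50 mm in women
    if diameter_mm > threshold_mm or growth_mm_per_year > 10:
        return "refer to a surgeon for consideration of elective repair"
    return "continue surveillance"

print(aaa_repair_triage(58, 2, male=True, symptomatic_or_ruptured=False))
# -> refer to a surgeon for consideration of elective repair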
Typical infrarenal AAAs have a proximal aortic neck that provides an adequate landing zone for the endovascular device; juxtarenal aneurysms do not have this zone, and the aneurysm involves the infrarenal abdominal aorta adjacent to or including the lower margin of the renal artery origins; suprarenal and thoraco-abdominal aneurysms extend above and beyond the orifice of the renal arteries (Figure 4). 52 Based on the exact anatomy of the aneurysm, there are several potential modes of open or endovascular AAA reconstruction. Open AAA repair can be performed with a suprarenal or an infrarenal aortic clamp based on the anatomy of the proximal AAA neck. Suprarenal clamping is associated with higher morbidity and AKI rates, 53,54 which is expected, given the ischaemic insult to the kidneys. Endovascular repair for a typical infrarenal AAA (i.e. where the proximal aortic neck of the AAA provides an adequate landing zone) can be performed using an infrarenal device (i.e. standard EVAR), which may or may not have suprarenal fixation modalities (e.g. bare stents or hooks). 53 Suprarenal fixation is meant to decrease the chance of device migration and potential endoleak over the long term. Aneurysms with a 'hostile' proximal neck (where a standard infrarenal EVAR device would not provide an adequate seal), as well as juxtarenal, suprarenal, or thoraco-abdominal aneurysms, cannot be treated with standard 'off the shelf' EVAR devices. More complex forms of EVAR have been devised to address these anatomies, such as fenestrated EVAR (fEVAR) or branched EVAR (bEVAR). 53 Use of these complex endovascular procedures usually requires more contrast than standard infrarenal EVAR and involves a high risk of renal artery occlusion (estimated at 2.3% for fEVAR and 9.6% for bEVAR) or stenosis, given that covered stents are deployed in the actual renal vasculature. 55

7. Acute kidney injury after abdominal aortic aneurysm surgery: incidence, risk factors, impact on outcomes, and prevention strategies

7.1 Incidence of AKI following interventions for AAA

AKI after elective AAA surgery is an important complication. 56 In the original studies exploring this association, reported AKI incidence ranged widely owing to the variety of criteria used (change in serum creatinine levels, decrease in creatinine clearance or eGFR, and others). 57 In recent years, the use of contemporary criteria for AKI definition, such as the Risk-Injury-Failure-Loss-End-stage (RIFLE), Acute Kidney Injury Network (AKIN), and KDIGO criteria, has enabled better comparison among studies. 58 The majority of observational studies using contemporary criteria examined AKI incidence after elective EVAR for infrarenal AAA and reported an incidence of around 15-20%, most of which was Stage 1 AKI (Table 2). 53 In the only study using the AKIN and KDIGO criteria and also including urine output measurements, Saratzis et al. reported a postoperative AKI incidence of 18.8% in 149 patients undergoing EVAR. 62 In 947 patients undergoing elective EVARs for infrarenal AAA, AKI incidence was 18% using the AKIN and KDIGO criteria. 63 Studies examining AKI incidence between different modes of AAA treatment are discussed in section 7.4.

7.2 Risk factors for AKI following interventions for AAA

Epidemiologic studies examining risk factors for AKI following AAA interventions are scarce. The lack of uniform reporting of AKI, the complex pathophysiology of the factors involved, and the absence of information on the perioperative volume status of patients largely prohibit valid analyses. 57
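For readers less familiar with the contemporary AKI definitions cited above, the following is a minimal sketch of the KDIGO serum-creatinine staging (urine-output criteria omitted); the thresholds come from the KDIGO guideline itself, not from this review, and the code is illustrative only:

def kdigo_stage(baseline_scr_mg_dl, current_scr_mg_dl, on_rrt=False):
    """KDIGO AKI staging by serum creatinine only (urine output omitted)."""
    ratio = current_scr_mg_dl / baseline_scr_mg_dl
    rise = current_scr_mg_dl - baseline_scr_mg_dl
    if on_rrt or ratio >= 3.0 or current_scr_mg_dl >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise >= 0.3:
        return 1
    return 0  # no AKI by creatinine criteria

print(kdigo_stage(1.0, 1.4))  # -> 1 (an absolute rise of 0.4 mg/dL)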
In the aforementioned cohort of 947 patients undergoing elective EVARs, pre-operative eGFR and CKD >G2 were the only independent predictors of AKI among a wide set of factors studied, including age, sex, major co-morbidities, aneurysm diameter, the volume of contrast medium used, and others. 63 In a recent prospective study of 300 patients undergoing different types of AAA repair, older age, baseline eGFR, and ischaemic heart disease were the main predictors of AKI after infrarenal EVAR and open repair. 53

7.3 AKI and impact on long-term kidney function, cardiovascular outcomes, and mortality

Existing data suggest that AKI development following AAA surgery is an independent risk factor for eGFR decline, as well as for cardiovascular events and mortality. 14,62,63 In a recent study of 266 individuals undergoing AAA repair with either EVAR or open surgery, AKI was independently associated with eGFR decline >20% and/or kidney failure during follow-up for both types of repair. 14 In the aforementioned study of 149 elective EVARs by Saratzis et al., 62 patients who developed AKI were more likely to die or develop cardiovascular complications over 33 months of follow-up in univariate analyses, and AKI was independently associated with death and cardiovascular morbidity in exploratory adjusted survival analyses. In another cohort of 1068 individuals, of whom 947 underwent EVAR and 121 open repair for AAA, AKI following intervention was independently associated with a 1.7-fold higher risk of cardiovascular events during a median follow-up of 62 months (Figure 5). 63 Evidence from other clinical areas suggests that AKI contributes to long-term kidney function loss through multiple structural changes, including glomerulosclerosis and tubulointerstitial fibrosis. 77 However, it is not known whether AKI is pathophysiologically involved in the acceleration of cardiovascular disease or is simply a marker of occult cardiovascular burden in these individuals. 57

7.4 Effects of AAA treatment modality on AKI incidence

As discussed above, AKI develops in 15-20% of patients having elective EVAR for infrarenal AAA. 53 In two studies using contemporary criteria to define AKI, patients undergoing EVAR with suprarenal fixation had similar AKI rates to those undergoing infrarenal fixation. 79 Finally, small studies suggest that the risk of AKI with fEVAR or bEVAR procedures is typically higher than with standard EVAR, usually at around 25-30% of patients (Table 2). 53

7.5 Peri-procedural management for AKI prevention in patients with AAA

There is currently no evidence-based strategy in the context of open AAA surgery or EVAR that has been proven to reduce the risk of AKI or subsequent longer-term renal decline. 57,80 No randomized controlled trial (RCT) examining prevention of renal complications after AAA surgery has focused on CKD patients. 57 Current guidance for patients at high risk of AKI, based on RCTs from relevant clinical areas, suggests perioperative intravenous fluid administration using crystalloid solutions for those with an eGFR <40 mL/min/1.73 m2 or those with a history of kidney transplantation or a solitary kidney. 81 So far, various interventions for AKI reduction have been tested, including N-acetylcysteine; 82 ischaemic pre-conditioning; 83 high-dose intrarenal artery infusions of fenoldopam delivered via a left brachial access; 84 intravenous fluids with bicarbonate; 85 and administration of anti-oxidants (e.g. vitamin C). 86
The majority of these interventions were assessed in under-powered exploratory studies, often using inconsistent AKI reporting criteria. A consensus group on AKI following EVAR in the UK reported a pilot RCT investigating a large bolus dose of bicarbonate to alkalinize patients' urine prior to commencing EVAR, together with a standardized regimen of aggressive intravenous volume expansion with crystalloid solutions. 67 Participants who received this two-step intervention (even those with heart failure or advanced CKD) did not experience adverse events, and the strategy was easy to implement; a larger RCT to test this intervention is currently under development.

8. Post-procedure evaluation of abdominal aortic aneurysm in patients with CKD

Following revascularization, the major considerations for follow-up include repeat contrast imaging to monitor for complications in patients with EVAR as compared to open repair. As stated previously, the benefit of improved in-hospital mortality after EVAR is offset by higher rates of long-term complications, including endoleak (Figure 6), device migration, and continued aneurysm expansion requiring repeat intervention. Traditionally, the surveillance protocol following EVAR included CTA at 1 month, 6 months, and 1 year, and then annually, paired with duplex ultrasound imaging. More recent data and US guidelines suggest that the 6-month imaging exams may be dropped if the 1-month evaluation is without complications. 41,87 Alternatively, the European guidelines advocate the use of duplex ultrasound with non-contrast CT and abdominal radiographs at any time after EVAR in patients with CKD. 40 CTA is reserved for suspected endoleak in this algorithm. In some centres, contrast-enhanced duplex ultrasonography (CEUS) is available, which shows promise of excellent sensitivity and specificity for the detection of endoleaks 88 and may replace CTA for endoleak detection in the future.

9. Long-term kidney function after abdominal aortic aneurysm surgery

With regard to the effects of the type of repair on mid- and long-term kidney function, early observational studies suggested that patients with infrarenal AAA who underwent EVAR with suprarenal fixation (i.e. using an infrarenal 'off-the-shelf' EVAR device with suprarenal fixation modalities) experienced a greater eGFR decline over the first and second year after surgery than those having EVAR with no suprarenal fixation. 89 Similar findings have been reported in comparison with open repair (Figure 7). 12 Another retrospective study, including 317 patients with open repair and 358 with EVAR, showed that long-term eGFR decline was almost two times greater in patients undergoing EVAR. 78 A recent meta-analysis reporting on eGFR changes at 1 and 5 years suggested that EVAR with suprarenal fixation does not lead to a significantly greater drop in kidney function compared with infrarenal fixation at 1 year; however, there is a greater loss of eGFR over 5 years. 79

10. Thoracic aortic dissection and kidney disease

10.1 CKD and thoracic aortic dissection: associations and impact on outcomes

Thoracic aortic dissection is a rare but serious cardiovascular disease. According to the Stanford classification, which is commonly used, dissections involving the ascending aorta are classified as Type A and those without ascending aorta involvement as Type B. 91 There are few data on the epidemiology, natural course, and complications of aortic dissection in patients with CKD. In previous reports, the prevalence of CKD was noted in 8.5-10% of patients with acute aortic dissection. 92,94
Currently, no longitudinal study has specifically evaluated whether CKD is a risk factor for the development of aortic dissection. In the German Registry study, pre-existing CKD was not associated with mortality in patients with Type A dissection. In patients with Type B dissection, however, the prevalence of CKD was higher in non-survivors (23.9% vs. 20% in survivors, P = 0.039). 94 Pre-existing CKD was also an independent predictor of mortality in the study by Hoogmoed et al., 92 but not in a report from the International Registry of Acute Aortic Dissections (IRAD) that included 1034 patients. 93 A recent retrospective study of all patients with kidney failure on dialysis in the USA who underwent open proximal aortic repair with a diagnosis of non-ruptured thoracic aortic aneurysm (n = 325) or Type A aortic dissection (n = 461) during the years 1987-2015 showed perioperative mortality (in-hospital or 30-day mortality) of 12.6% and 24.3%, and 10-year mortality of 81% and 87.9%, respectively. 95 In patients with Type A aortic dissection, age ≥65 years, heart failure, and diabetes were independently associated with worse 10-year mortality. This study affirmed the feasibility of emergency surgery for acute Type A dissections but also highlighted the need for careful patient selection in the elective repair of proximal thoracic aneurysm for dialysis-dependent patients.

Figure 6 The types of endoleaks after endovascular aortic repair: Type I, leak at the proximal or distal landing of the graft; Type II, leak via branches (e.g. lumbar artery) into the aneurysm sac; Type III, modular defect or tearing of the graft material; Type IV, graft porosity.

10.2 Management of thoracic aortic dissection in patients with CKD

Currently, there are no data supporting differences in treatment practices of aortic dissection based on the presence of CKD. Although Type A dissections are almost always treated surgically, Type B dissections are mostly treated interventionally, especially if visceral or renal arteries are compromised. 96 This was also the case in the aforementioned German registry, where the majority of Type A patients underwent open surgery, whereas most Type B patients had endovascular procedures. 94 Early intervention is the preferred treatment to minimize the ischaemic time of the visceral organs and the kidneys, and it is often the only option to provide a reasonable chance of survival in patients with acute aortic dissection. However, in the case of advanced age or extensive pre-existing comorbidities, the risk of complications may justify non-intervention, because the prognosis in these patients is dismal. Medical treatment with aggressive blood pressure lowering is recommended for Type B dissections in uncomplicated cases or in patients with a prohibitive risk profile. 96

10.3 Acute kidney injury after thoracic aortic dissection: incidence, risk factors, and impact on outcomes

In the German Registry study, the need for postoperative kidney replacement therapy was extremely high (24.2%) in Type A and high (8.2%) in Type B patients, 94 due to the multiplicity of risk factors, the emergent setting, and the complexity of the operations. In another retrospective study, Hoogmoed et al. reviewed 478 patients with acute Type B aortic dissection; patients on dialysis were excluded. Overall, 52.7% of patients experienced AKI (27.2% Stage I, 14.9% Stage II, and 10.7% Stage III). Independent predictors of AKI were CKD, renal malperfusion, congestive heart failure, hypertension, visceral malperfusion, and limb malperfusion.
AKI was associated with a longer hospital stay and, in Stages II and III only, with reduced late survival, but not with late aortic events during follow-up. 92 Other studies also suggest that renal dysfunction on admission and renal artery involvement contribute to AKI development, while AKI per se is associated with a higher risk of in-hospital complications. 97 AKI was also identified as an independent predictor of mortality in the IRAD study. 93 Finally, a recent observational study in 129 individuals who received endovascular repair for acute Type B aortic dissection reported that 16.3% of the patients had RAS; these individuals had a higher incidence of AKI and a lower eGFR both pre-operatively and 1 month postoperatively than individuals without RAS (81.7 ± 23.8 vs. 96.0 ± 20.0 mL/min, P = 0.017). 98

11. Differences in management in special conditions and populations

Management of patients with aortic pathology and CKD can be broken down into those with elective conditions and those with acute aortic syndromes (dissections, intramural haematomas, penetrating aortic ulcers, and symptomatic or ruptured aortic aneurysms). For the latter patients, the presence of CKD is an important factor in counselling the patient and family as to peri-procedural risk and mortality, as well as the potential worsening of kidney function towards kidney failure. As discussed above, however, there is currently no evidence restricting the use of open or endovascular treatments for the above conditions in patients with CKD. In the absence of specific evidence, one can extrapolate from the literature that overall short-term perioperative morbidity is generally lower with endovascular repair in emergency situations. 50 This is most clear for ruptured infrarenal AAAs, where there is a clear shift towards the use of EVAR techniques worldwide. 99 The complexity of the procedure is also important for treatment decisions; a patient with severe CKD and a penetrating aortic ulcer with contained rupture who could be treated with EVAR is very different from one needing an urgent open thoraco-abdominal aneurysm repair. Furthermore, in patients of advanced age (i.e. >75-80 years), decisions on the treatment and treatment modality of both thoracic and abdominal aortic diseases should be individualized on a risk-benefit basis, as the very elderly are often excluded from relevant studies, and surgery may not increase overall life expectancy. 100,101 For younger and relatively fit patients with AAA, open surgical repair may be preferred, as it is associated with slightly better long-term survival 101 and better long-term preservation of kidney function compared with suprarenal EVAR. 12 Finally, women tend to be older, have smaller aneurysms, and have a higher prevalence of CKD compared with men. 102,103 When undergoing AAA repair, women are less likely to undergo EVAR and have higher rates of procedural complications and, in some studies, in-hospital mortality. These factors need to be taken into account when deciding about treatment.

12. Limitations of existing evidence and issues for further research

In recent years, the literature on the associations of aortic diseases with CKD or AKI has been growing. However, the majority of data come from observational studies, several limitations prohibit drawing definite conclusions, and several aspects are open to future research.
With regard to the long-term evolution of kidney function or the incidence of AKI after surgery for AAA or thoracic dissection, outcomes may be affected by factors not adequately assessed by existing studies, such as anatomic complexity, clamp time, clamp location in AAA (suprarenal vs. infrarenal), intentional occlusion of accessory renal arteries, and others. Studies on long-term kidney function after EVAR may reflect treatment practices in the early era of the procedure (15-20 years ago) with regard to choice of fixation, procedural time, increased contrast volume, and increased complication rates; such practices may have changed substantially over time. Furthermore, techniques and experience for AAA repair may vary substantially in different parts of the world. With regard to the exact treatment types, the precise pathophysiology of eGFR loss following EVAR (especially with suprarenal fixation) remains to be fully established: bare metal material covering the orifice of the renal arteries, wire placement during the procedure, inflammation of the aortic sac, and micro-emboli may be important. In addition, most of the relevant evidence refers to outcomes after correction of infrarenal AAA and not of AAA involving the orifice of the renal arteries or of thoraco-abdominal aneurysms with complicated anatomy, such as those with a false channel affecting the renal arteries. The use of fenestrated or physician-modified grafts is increasing in several countries for simple or complex aneurysm types. The associations of EVAR with such grafts with long-term kidney function are poorly studied.

Other gaps in knowledge requiring further investigation include the pathophysiologic mechanisms through which CKD may affect the development of AAA and thoracic aortic dissection (including the role of the severity of atherosclerotic lesions above the AAA), the prevalence of RAS and its impact on outcomes in patients with AAA, the mechanisms through which AKI after surgery for AAA or thoracic dissection affects cardiovascular events and all-cause mortality, and the long-term evolution of kidney function following surgery for aortic dissection. Randomized trials are also needed to assess several questions, such as the effects of different post-surgery diagnostic protocols on long-term kidney function, the effects of different treatment modalities for AAA on long-term kidney and cardiovascular outcomes, and the effects of AKI prevention strategies in CKD patients.

13. Conclusions

Chronic kidney disease and aortic diseases are tightly linked, with bidirectional associations. The presence of CKD is an independent risk factor for the development of AAA and is associated with adverse cardiovascular and kidney outcomes and all-cause mortality following surgery for AAA or thoracic aortic disease. In parallel, the incidence of AKI after any type of aortic intervention is particularly high, and this AKI is independently associated with future cardiovascular events and mortality. On the other hand, the type of AAA surgery (open vs. the various subtypes of endovascular repair) directly affects the rates of post-procedural AKI and the long-term course of kidney function. In contrast to the emerging epidemiological data, available evidence on the pathophysiology, proper diagnosis, and treatment of AAA or thoracic aortic dissection specifically in the context of CKD is limited. This is also the case for the association of AAA with RAS.
As the prevalence rates of CKD and aortic diseases are continuously increasing, observational studies and clinical trials in all the above fields are urgently needed to delineate the complex associations between these entities for the benefit of our patients.
// api/vo/RedirectVO.go
package vo

// RedirectVO describes a redirect entry from a source path to a destination.
type RedirectVO struct {
	BaseEntityVO `create:"ignore"`

	Source       string `json:"source"`
	Destination  string `json:"destination"`
	Type         string `json:"type"`
	MatchingType string `json:"matchingType"`
	ExpertMode   bool   `json:"expertMode" create:"ignore"`
}

// RowListing is kept for reference but is currently disabled.
// func (vo RedirectVO) RowListing(cmd *command.Command) []*listing.Row {
// 	lr := listing.NewRow(vo.Source + " => " + vo.Destination)
// 	lr.Readable = true
//
// 	lr.Cols = []interface{}{
// 		*vo.BaseEntityVO.ID,
// 		vo.BaseEntityVO.Modified.ToUnixDate(),
// 	}
//
// 	return []*listing.Row{lr}
// }

// ResetDatabaseState clears the entity's database-managed fields (ID and
// timestamps) on a copy of the value and returns it, e.g. before re-insertion.
func (vo RedirectVO) ResetDatabaseState() interface{} {
	vo.BaseEntityVO.ID = nil
	vo.BaseEntityVO.Created = nil
	vo.BaseEntityVO.Modified = nil

	return vo
}
Nexrutine Inhibits Survival and Induces G1 Cell Cycle Arrest, Which Is Associated with Apoptosis or Autophagy Depending on the Breast Cancer Cell Line

Breast cancers that are estrogen receptor (ER) negative, or are ER negative with ErbB2/HER-2 overexpression, have a poor prognosis, which emphasizes the importance of developing compounds for preventing breast cancer. Nexrutine, an herbal extract from the plant Phellodendron amurense, has been used for centuries in Asian medicine to treat inflammation, gastroenteritis, abdominal pain, and diarrhea. In this study, we investigated the anticancer effects of Nexrutine on ER-negative breast cancer cell lines that are positive or negative for HER-2. Nexrutine decreased the activities of two potential targets in breast cancer, cyclooxygenase (COX)-2 and peroxisome proliferator-activated receptor gamma (PPARγ). The anti-inflammatory effects of Nexrutine were evident as decreased prostaglandin (PG)E2 production and decreased protein expression of microsomal PGE2 synthase (mPGES) and PPARγ. Nexrutine decreased cell survival and induced a G1 cell cycle arrest in SkBr3 and MDA-MB-231 cells, effects that were associated with reduced protein expression of Cyclin D1 and cdk2 along with increased protein expression of p21 and p27. The growth-inhibitory effect of Nexrutine was associated with apoptosis in SkBr3 cells and autophagy in MDA-MB-231 cells. Based on these findings, we propose that Nexrutine may provide a novel approach for protection against breast cancer.
import sys

# Decode a simple run-length encoding: "@<digit><char>" expands to <digit>
# copies of <char>; every other character is copied through unchanged.
for line in sys.stdin:
    out, n = '', 1
    for c in line[:-1]:          # drop the trailing newline
        if c == '@':
            n = 0                # the next character is a repeat count
        elif n < 1:
            n = int(c)           # read the single-digit repeat count
        else:
            out += c * n         # emit the character n times
            n = 1
    print(out)
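For clarity, here is my reading of the encoding the loop above implements, factored into a testable function; this interpretation is inferred from the code itself, not from a published problem statement: '@' announces that the next character is a single-digit repeat count for the character that follows it.

def decode(line):
    # Same logic as the stdin loop above.
    out, n = '', 1
    for c in line:
        if c == '@':
            n = 0
        elif n < 1:
            n = int(c)
        else:
            out += c * n
            n = 1
    return out

assert decode('he@5lo') == 'helllllo'
assert decode('abc') == 'abc'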
package page7.q637;

import java.math.BigInteger;
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        int n = new Scanner(System.in).nextInt();
        // Print 2^(n*n) - 1, which requires arbitrary precision for large n.
        System.out.println(BigInteger.valueOf(2)
                .pow(n * n)
                .subtract(BigInteger.ONE)
                .toString());
    }
}
/**
 * @brief Set URI location for the default manifest.
 * @details The default manifest is polled regularly and generates a
 *          notification upon change. The URI struct and the content it points
 *          to must be valid throughout the lifetime of the application.
 *
 * @param uri URI struct with manifest location.
 * @return Error code.
 */
arm_uc_error_t ARM_UCS_SetDefaultManifestURL(arm_uc_uri_t* uri)
{
    UC_SRCE_TRACE("ARM_UCS_SetDefaultManifestURL");

    /* Default to an error; overwritten on success below. */
    arm_uc_error_t result = (arm_uc_error_t){ SRCE_ERR_INVALID_PARAMETER };

    if (uri != NULL)
    {
        /* Only HTTP manifest locations are accepted. */
        if (uri->scheme == URI_SCHEME_HTTP)
        {
            default_config.manifest = *uri;

            result = (arm_uc_error_t){ SRCE_ERR_NONE };
        }
    }

    return result;
}
#!/usr/bin/env python3

from mosq_test_helper import *
import json
import shutil

def write_config(filename, port):
    with open(filename, 'w') as f:
        f.write("listener %d\n" % (port))
        f.write("allow_anonymous true\n")
        f.write("plugin ../../plugins/dynamic-security/mosquitto_dynamic_security.so\n")
        f.write("plugin_opt_config_file %d/dynamic-security.json\n" % (port))

def command_check(sock, command_payload, expected_response):
    command_packet = mosq_test.gen_publish(topic="$CONTROL/dynamic-security/v1", qos=0, payload=json.dumps(command_payload))
    sock.send(command_packet)
    response = json.loads(mosq_test.read_publish(sock))
    if response != expected_response:
        print(expected_response)
        print(response)
        raise ValueError(response)

port = mosq_test.get_port()
conf_file = os.path.basename(__file__).replace('.py', '.conf')
write_config(conf_file, port)

add_client_command = { "commands": [{
    "command": "createClient", "username": "user_one",
    "password": "password", "clientid": "cid",
    "textname": "Name", "textdescription": "Description",
    "rolename": "", "correlationData": "2" }]
}
add_client_response = {'responses': [{'command': 'createClient', 'correlationData': '2'}]}
add_client_repeat_response = {'responses':[{"command":"createClient","error":"Client already exists", "correlationData":"2"}]}

list_clients_command = { "commands": [{
    "command": "listClients", "verbose": False, "correlationData": "10"}]
}
list_clients_response = {'responses': [{"command": "listClients", "data":{"totalCount":2, "clients":["admin", "user_one"]},"correlationData":"10"}]}

list_clients_verbose_command = { "commands": [{
    "command": "listClients", "verbose": True, "correlationData": "20"}]
}
list_clients_verbose_response = {'responses':[{"command": "listClients", "data":{"totalCount":2, "clients":[
    {'username': 'admin', 'textname': 'Dynsec admin user', 'roles': [{'rolename': 'admin'}], 'groups': []},
    {"username":"user_one", "clientid":"cid", "textname":"Name", "textdescription":"Description",
     "roles":[], "groups":[]}]}, "correlationData":"20"}]}

get_client_command = { "commands": [{ "command": "getClient", "username": "user_one", "correlationData": "42"}]}
get_client_response = {'responses':[{'command': 'getClient', 'data': {'client': {'username': 'user_one',
    'clientid': 'cid', 'textname': 'Name', 'textdescription': 'Description', 'groups': [], 'roles': []}},
    "correlationData":"42"}]}

set_client_password_command = {"commands": [{ "command": "setClientPassword", "username": "user_one", "password": "password"}]}
set_client_password_response = {"responses": [{"command":"setClientPassword"}]}

delete_client_command = { "commands": [{ "command": "deleteClient", "username": "user_one"}]}
delete_client_response = {'responses':[{'command': 'deleteClient'}]}

rc = 1
keepalive = 10
connect_packet = mosq_test.gen_connect("ctrl-test", keepalive=keepalive, username="admin", password="<PASSWORD>")
connack_packet = mosq_test.gen_connack(rc=0)

mid = 2
subscribe_packet = mosq_test.gen_subscribe(mid, "$CONTROL/dynamic-security/#", 1)
suback_packet = mosq_test.gen_suback(mid, 1)

try:
    os.mkdir(str(port))
    shutil.copyfile("dynamic-security-init.json", "%d/dynamic-security.json" % (port))
except FileExistsError:
    pass

broker = mosq_test.start_broker(filename=os.path.basename(__file__), use_conf=True, port=port)

try:
    sock = mosq_test.do_client_connect(connect_packet, connack_packet, timeout=5, port=port)
    mosq_test.do_send_receive(sock, subscribe_packet, suback_packet, "suback")

    # Add client
    command_check(sock, add_client_command, add_client_response)

    # List clients non-verbose
    command_check(sock, list_clients_command, list_clients_response)

    # List clients verbose
    command_check(sock, list_clients_verbose_command, list_clients_verbose_response)

    # Kill broker and restart, checking whether our changes were saved.
    broker.terminate()
    broker.wait()
    broker = mosq_test.start_broker(filename=os.path.basename(__file__), use_conf=True, port=port)

    sock = mosq_test.do_client_connect(connect_packet, connack_packet, timeout=5, port=port)
    mosq_test.do_send_receive(sock, subscribe_packet, suback_packet, "suback")

    # Get client
    command_check(sock, get_client_command, get_client_response)

    # List clients non-verbose
    command_check(sock, list_clients_command, list_clients_response)

    # List clients verbose
    command_check(sock, list_clients_verbose_command, list_clients_verbose_response)

    # Add duplicate client
    command_check(sock, add_client_command, add_client_repeat_response)

    # Set client password
    command_check(sock, set_client_password_command, set_client_password_response)

    # Delete client
    command_check(sock, delete_client_command, delete_client_response)

    rc = 0
    sock.close()
except mosq_test.TestError:
    pass
finally:
    os.remove(conf_file)
    try:
        os.remove(f"{port}/dynamic-security.json")
    except FileNotFoundError:
        pass
    os.rmdir(f"{port}")
    broker.terminate()
    broker.wait()
    (stdo, stde) = broker.communicate()
    if rc:
        print(stde.decode('utf-8'))

exit(rc)
#pragma once

#include "il2cpp.h"

void NexAssets_DataStore_Upload_CHANGEMETA_ARG___ctor(NexAssets_DataStore_Upload_CHANGEMETA_ARG_o* __this, const MethodInfo* method_info);
While at this year's Sundance Film Festival, I saw one film that absolutely floored me in every possible way and is still my number one film of the year: director Luca Guadagnino's Call Me by Your Name. Featuring a fantastic screenplay by Guadagnino and James Ivory, powerful performances from the entire cast, amazing cinematography by Sayombhu Mukdeeprom, and brilliance from every other department, Call Me by Your Name is one of those rare films where everything is just perfect and you walk out of the theater remembering why you love movies.

If you haven't yet heard of the film, based on the novel by André Aciman, the coming-of-age drama stars Timothée Chalamet (Interstellar) as a precocious 17-year-old American-Italian boy who's on summer vacation with his family at their Italian villa. When a charming American scholar (Armie Hammer) comes to work with the boy's father (Michael Stuhlbarg), a summer romance sparks that awakens feelings of first love, brilliantly and sensually captured by Guadagnino. Trust me when I say you need to see this film when it's released in North America on November 24th. For more, read Adam Chitwood's glowing review or watch the first trailer.

While at this year's Toronto International Film Festival, I got to sit down with Armie Hammer, Timothée Chalamet and Luca Guadagnino for an exclusive video interview. They talked about whether they had any idea the reviews would be so positive and enthusiastic, how they managed to make the movie feel authentic and real, and that even though Guadagnino normally takes a long time to edit his movies, this one was done in record time. However, the big surprise of the interview was Guadagnino revealing he had a version of the film that was four hours long! After you see the movie you'll understand why I was so excited to hear about the existence of an extended cut.

Topics covered in the interview:

- Did they have any idea while they were making the movie that the reaction would be so positive and enthusiastic?
- The film feels authentic and real. Was it all in the script? Was it found during the rehearsal process?
- What did Guadagnino learn from early screenings that impacted the finished film?
- Guadagnino talks about how he usually spends a long time editing his films but cut this one in record time.
- He reveals he originally had a 4-hour cut of the film!

CALL ME BY YOUR NAME, the new film by Luca Guadagnino, is a sensual and transcendent tale of first love, based on the acclaimed novel by André Aciman. It's the summer of 1983 in the north of Italy, and Elio Perlman (Timothée Chalamet), a precocious 17-year-old American-Italian boy, spends his days in his family's 17th century villa transcribing and playing classical music, reading, and flirting with his friend Marzia (Esther Garrel). Elio enjoys a close relationship with his father (Michael Stuhlbarg), an eminent professor specializing in Greco-Roman culture, and his mother Annella (Amira Casar), a translator, who favor him with the fruits of high culture in a setting that overflows with natural delights. While Elio's sophistication and intellectual gifts suggest he is already a fully-fledged adult, there is much that yet remains innocent and unformed about him, particularly about matters of the heart. One day, Oliver (Armie Hammer), a charming American scholar working on his doctorate, arrives as the annual summer intern tasked with helping Elio's father. Amid the sun-drenched splendor of the setting, Elio and Oliver discover the heady beauty of awakening desire over the course of a summer that will alter their lives forever.
import ea.*;

import java.util.ArrayList;

/**
 * The game world consists of 4x3 {@link Karte} maps through which the
 * {@link Lunk} player character can wander. It contains everything that is
 * visible and can be interacted with in the game.
 *
 * It checks whether a figure's movement is allowed and, if so, moves the
 * figure to a new position (possibly onto a new map).
 *
 * As an example, the prototype contains one map with little content and a
 * few random elements.
 */
public class Welt extends Knoten {

    // Index of the currently visible map in the map array
    private int karteX, karteY;

    // Storage for the 12 maps
    private Karte[][] karten;

    // Reference to the player character
    private Lunk lunk;

    // Current position of the player character as an index on the current map
    // (index of the tile Lunk is standing on).
    private int lunkX, lunkY;

    public Welt(Lunk pLunk) {
        lunk = pLunk;

        // Initialize the maps of the world
        karten = new Karte[Zulda.WORLD_WIDTH][Zulda.WORLD_HEIGHT];
        for (int i = 0; i < karten.length; i++) {
            for (int j = 0; j < karten[0].length; j++) {
                if (i == 2 && j == 2) {
                    karten[i][j] = new Karte_0(i, j, this);
                } else if (i == 0 || j == 0) {
                    karten[i][j] = new Karte_Random(i, j, this);
                } else {
                    karten[i][j] = new Karte(i, j, this);
                }
            }
        }

        // Show the first map that should be displayed.
        karteX = 2;
        karteY = 2;
        add(karten[karteX][karteY]);
        karten[karteX][karteY].karteAnzeigen();

        // Set Lunk's starting position on the current map.
        lunkX = 10;
        lunkY = 8;
        karten[karteX][karteY].verschiebeZuFeldAnIndex(lunk, lunkX, lunkY);
        add(lunk);
    }

    /**
     * Returns the player character.
     *
     * @return The player character.
     */
    public Lunk getSpieler() {
        return lunk;
    }

    /**
     * Moves the {@link Lunk} player character one tile to the left. If the
     * figure crosses the edge of the map, the current map is switched unless
     * the edge of the world has been reached. If the target tile is not
     * passable or the edge of the world has been reached, nothing happens.
     */
    public void bewegeLinks() {
        lunk.zustandSetzen("run_left");
        if (lunk.aktuelleFigur().getX() < Zulda.TILE_SIZE) {
            if (karteX > 0) {
                wechseleKarte(karteX - 1, karteY);
                int newX = (Zulda.MAP_WIDTH - 1) * Zulda.TILE_SIZE;
                karten[karteX][karteY].verschiebeZuFeldAnKoordinate(lunk, newX, lunk.getY());
            }
        } else {
            Feld feld = karten[karteX][karteY].feldAnKoordinate(lunk.getX() - Zulda.TILE_SIZE, lunk.getY());
            // feld is not null here, otherwise this else branch would not be reached
            if (feld.istPassierbar()) {
                karten[karteX][karteY].verschiebeZuFeld(lunk, feld);
                // Collect the items on the new tile
                for (Gegenstand g : karten[karteX][karteY].getGegenstaendeAufFeld(feld)) {
                    g.einsammeln(lunk);
                }
            }
        }
        lunk.zustandSetzen("idle_left");
    }

    /**
     * Moves the {@link Lunk} player character one tile to the right. If the
     * figure crosses the edge of the map, the current map is switched unless
     * the edge of the world has been reached. If the target tile is not
     * passable or the edge of the world has been reached, nothing happens.
     */
    public void bewegeRechts() {
        lunk.zustandSetzen("run_right");
        if (lunk.aktuelleFigur().getX() >= (Zulda.MAP_WIDTH - 1) * Zulda.TILE_SIZE) {
            if (karteX < 3) {
                wechseleKarte(karteX + 1, karteY);
                karten[karteX][karteY].verschiebeZuFeldAnKoordinate(lunk, 0, lunk.getY());
            }
        } else {
            Feld feld = karten[karteX][karteY].feldAnKoordinate(lunk.getX() + Zulda.TILE_SIZE, lunk.getY());
            // feld is not null here, otherwise this else branch would not be reached
            if (feld.istPassierbar()) {
                karten[karteX][karteY].verschiebeZuFeld(lunk, feld);
                // Collect the items on the new tile
                for (Gegenstand g : karten[karteX][karteY].getGegenstaendeAufFeld(feld)) {
                    g.einsammeln(lunk);
                }
            }
        }
        lunk.zustandSetzen("idle_right");
    }

    /**
     * Moves the {@link Lunk} player character one tile up. If the figure
     * crosses the edge of the map, the current map is switched unless the
     * edge of the world has been reached. If the target tile is not passable
     * or the edge of the world has been reached, nothing happens.
     */
    public void bewegeHoch() {
        // There is no dedicated upward animation; the right-facing states are reused.
        lunk.zustandSetzen("run_right");
        if (lunk.aktuelleFigur().getY() < Zulda.TILE_SIZE) {
            if (karteY > 0) {
                wechseleKarte(karteX, karteY - 1);
                int newY = (Zulda.MAP_HEIGHT - 1) * Zulda.TILE_SIZE;
                karten[karteX][karteY].verschiebeZuFeldAnKoordinate(lunk, lunk.getX(), newY);
            }
        } else {
            Feld feld = karten[karteX][karteY].feldAnKoordinate(lunk.getX(), lunk.getY() - Zulda.TILE_SIZE);
            // feld is not null here, otherwise this else branch would not be reached
            if (feld.istPassierbar()) {
                karten[karteX][karteY].verschiebeZuFeld(lunk, feld);
                // Collect the items on the new tile
                for (Gegenstand g : karten[karteX][karteY].getGegenstaendeAufFeld(feld)) {
                    g.einsammeln(lunk);
                }
            }
        }
        lunk.zustandSetzen("idle_right");
    }

    /**
     * Moves the {@link Lunk} player character one tile down. If the figure
     * crosses the edge of the map, the current map is switched unless the
     * edge of the world has been reached. If the target tile is not passable
     * or the edge of the world has been reached, nothing happens.
     */
    public void bewegeRunter() {
        // There is no dedicated downward animation; the left-facing states are reused.
        lunk.zustandSetzen("run_left");
        if (lunk.aktuelleFigur().getY() >= (Zulda.MAP_HEIGHT - 1) * Zulda.TILE_SIZE) {
            if (karteY < 2) {
                wechseleKarte(karteX, karteY + 1);
                karten[karteX][karteY].verschiebeZuFeldAnKoordinate(lunk, lunk.getX(), 0);
            }
        } else {
            Feld feld = karten[karteX][karteY].feldAnKoordinate(lunk.getX(), lunk.getY() + Zulda.TILE_SIZE);
            // feld is not null here, otherwise this else branch would not be reached
            if (feld.istPassierbar()) {
                karten[karteX][karteY].verschiebeZuFeld(lunk, feld);
                // Collect the items on the new tile
                for (Gegenstand g : karten[karteX][karteY].getGegenstaendeAufFeld(feld)) {
                    g.einsammeln(lunk);
                }
            }
        }
        lunk.zustandSetzen("idle_left");
    }

    /**
     * Makes the {@link Lunk} player character attack all enemies on the tile
     * to his right. Enemies whose hit points drop to zero are removed from
     * the map.
     */
    public void attackeRechts() {
        Karte aktuelleKarte = karten[karteX][karteY];
        Feld feldRechts = aktuelleKarte.feldAnKoordinate(lunk.zentrum().x + Zulda.TILE_SIZE, lunk.zentrum().y);
        ArrayList<Gegner> gegnerRechts = aktuelleKarte.getGegnerAufFeld(feldRechts);
        for (Gegner g : gegnerRechts) {
            // TODO: Decide how damage should be calculated ...
            g.addHitpoints((int) ((lunk.getAttack() - g.getDefense()) * -0.5));
            if (g.getHitpoints() <= 0) {
                aktuelleKarte.entferneGegner(g);
            }
        }
    }

    /**
     * Makes the {@link Lunk} player character attack all enemies on the tile
     * to his left. Enemies whose hit points drop to zero are removed from
     * the map.
     */
    public void attackeLinks() {
        Karte aktuelleKarte = karten[karteX][karteY];
        Feld feldLinks = aktuelleKarte.feldAnKoordinate(lunk.zentrum().x - Zulda.TILE_SIZE, lunk.zentrum().y);
        ArrayList<Gegner> gegnerLinks = aktuelleKarte.getGegnerAufFeld(feldLinks);
        for (Gegner g : gegnerLinks) {
            // TODO: Decide how damage should be calculated ...
            g.addHitpoints((int) ((lunk.getAttack() - g.getDefense()) * -0.5));
            if (g.getHitpoints() <= 0) {
                aktuelleKarte.entferneGegner(g);
            }
        }
    }

    /**
     * Swaps the current map for a new one at index (i|j) in the map array.
     * If there is no map at this index, nothing happens.
     *
     * @param i Horizontal index of the new map
     * @param j Vertical index of the new map
     */
    private void wechseleKarte(int i, int j) {
        // Only switch if the target indices lie inside the world.
        if (i >= 0 && i < karten.length && j >= 0 && j < karten[0].length) {
            karten[karteX][karteY].karteVerstecken();
            entfernen(karten[karteX][karteY]);
            karteX = i;
            karteY = j;
            add(karten[karteX][karteY]);
            karten[karteX][karteY].karteAnzeigen();
            // Remove Lunk and add him again so that the figure is not
            // hidden behind the newly added map.
            entfernen(lunk);
            add(lunk);
        }
    }
}
It is known from the literature that nitrosamines are formed during the removal of CO2 from the flue gases of fossil-fuel-fired power plants. These nitrosamines result from reactions between the NOx components present in the flue gas and the amines present in the solvents used for CO2 scrubbing. According to the information available so far, these reactions preferentially take place with secondary amines, which are always present in industrially synthesized amines. In addition, primary amines react with the oxygen present in the flue gas to form decomposition products, which also include secondary amines. It may therefore be considered certain that nitrosamines are formed in the aqueous amine solution and also accumulate there, regardless of whether primary, secondary or tertiary amines are used for the CO2 scrubbing. Nitrosamines are considered to be carcinogenic substances. It is therefore necessary to ensure that the nitrosamine content of the amine solution is limited to the extent that nitrosamine compounds are not discharged with the flue gas from the CO2 scrubbing. As long as the nitrosamine concentration in the amine solution is low, a secondary scrubbing stage installed in the CO2 absorption column ensures that the nitrosamines are always backwashed and remain in the solution. However, if the nitrosamine concentration continues to increase, there is the risk that nitrosamines might enter the off-gas of the power plant. It is known that nitrosamines decompose in the atmosphere, which reduces the environmental threat, but it is questionable whether they decompose rapidly enough. In comparison with off-gas consisting essentially of nitrogen, nitrosamines have a significantly higher molecular weight, with a correspondingly high boiling point, and therefore tend to settle to the ground because they are heavier than air. For health and safety reasons, and also for environmental protection reasons, it is therefore necessary to ensure that no nitrosamines enter the environment from the solvent circulation. This can be achieved, as mentioned above, if the concentration of nitrosamines in the solvent circulation is limited and their unchecked accumulation in the amine solution is prevented.
from twisted.internet import defer

def instantiateShootCallback():
    # Build a Deferred that has already been fired with the result 1.
    d = defer.Deferred()
    d.callback(1)
    return d
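A minimal usage sketch (assuming Twisted is installed): Twisted runs callbacks added to an already-fired Deferred immediately, which is what makes a pre-fired Deferred like the one above handy in tests.

from twisted.internet import defer

d = defer.Deferred()
d.callback(1)

# The Deferred has already fired, so this callback runs synchronously.
d.addCallback(lambda result: print("got", result))  # prints: got 1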
The determination and analysis of factors affecting student learning by artificial intelligence in higher education

At the present time, much effort in the field of education is aimed at improving educational work and making its organization easier, and many methods are used to accomplish this goal. Artificial intelligence is one of these methods, offering solutions to problems encountered in educational life. In this study, the factors affecting student learning are determined and analyzed by an artificial-intelligence-based method. An optimization method for determining the factors affecting the learning process is proposed, because many factors influence student learning and these factors must therefore be optimized. An optimization process using fuzzy logic and a genetic algorithm is proposed to perform this operation. Then, the classification of these factors is performed using the K-means algorithm. To analyze the factors affecting student learning, the factors that affect the learning process are divided into four main groups: students, teachers, the curriculum, and social life. An artificial intelligence method was used to analyze the impact of these factors on the learning process. In this way, the factors that affect student learning can be determined and analyzed more easily. Moreover, using the results obtained from this method, the problems that occur in education can be identified more accurately and quickly, enabling faster and more effective solutions.
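The abstract names K-means as its classification step. Below is a minimal, hypothetical Python sketch of that step; the synthetic scores, the four factor groups, and the cluster count are assumptions for illustration, not data from the study:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical per-student scores for the four factor groups named in the
# abstract: student, teacher, curriculum, and social life.
X = rng.random((200, 4))

# Group students by their factor profiles; k=4 is an assumption.
km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

print(km.cluster_centers_)  # mean factor profile of each cluster
print(km.labels_[:10])      # cluster assignments of the first ten students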
from simuvex.s_errors import SimError

class ExplorationTechnique(object):
    """
    An otiegnqwvk is a set of hooks for path groups that assists in the implementation
    of new techniques in symbolic exploration.

    TODO: choose actual name for the functionality (techniques? something?)

    Any number of these methods may be overridden by a subclass.
    To use an exploration technique, call ``pg.use_technique``.
    """
    # pylint: disable=unused-argument, no-self-use

    def __init__(self):
        # this attribute will be set from above by the path group
        self.project = None

    def setup(self, pg):
        """
        Perform any initialization on this path group you might need to do.
        """
        pass

    def step_path(self, path):
        """
        Perform the process of stepping a path forward.

        If the stepping fails, return None to fall back to a default stepping procedure.
        Otherwise, return a tuple of lists: successors, unconstrained, unsat, pruned, errored
        """
        return None

    def step(self, pg, stash, **kwargs):
        """
        Step this stash of this path group forward. Return the stepped path group.
        """
        return pg.step(stash=stash, **kwargs)

    def filter(self, path):
        """
        Perform filtering on a path.

        If the path should not be filtered, return None.
        If the path should be filtered, return the name of the stash to move the path to.
        If you want to modify the path before filtering it, return a tuple of the stash
        to move the path to and the modified path.
        """
        return None

    def complete(self, pg):
        """
        Return whether or not this path group has reached a "completed" state,
        i.e. ``pathgroup.run()`` should halt.
        """
        return False

    def _condition_to_lambda(self, condition, default=False):
        """
        Translates an integer, set or list into a lambda that checks a path address
        against the given addresses, and the other ones from the same basic block.

        :param condition: An integer, set, or list to convert to a lambda.
        :param default:   The default return value of the lambda (in case condition is None).
                          Default: false.

        :returns: A lambda that takes a path and returns the set of addresses that it
                  matched from the condition.
        """
        if condition is None:
            condition_function = lambda p: default
        elif isinstance(condition, (int, long)):
            return self._condition_to_lambda((condition,))
        elif isinstance(condition, (tuple, set, list)):
            addrs = set(condition)

            def condition_function(p):
                if p.addr in addrs:
                    # returning {p.addr} instead of True to properly handle find/avoid conflicts
                    return {p.addr}

                try:
                    # If the address is not in the set (which could mean it is
                    # not at the top of a block), check directly in the blocks
                    # (Blocks are repeatedly created for every check, but with
                    # the IRSB cache in angr lifter it should be OK.)
                    return addrs.intersection(set(self.project.factory.block(p.addr).instruction_addrs))
                except (AngrError, SimError):
                    return False
        elif hasattr(condition, '__call__'):
            condition_function = condition
        else:
            raise AngrExplorationTechniqueError("ExplorationTechnique is unable to convert given type (%s) to a callable condition function."
                                                % condition.__class__)

        return condition_function

#registered_actions = {}
#registered_surveyors = {}
#
#def register_action(name, strat):
#    registered_actions[name] = strat
#
#def register_surveyor(name, strat):
#    registered_surveyors[name] = strat

from .explorer import Explorer
from .threading import Threading
from .dfs import DFS
from .looplimiter import LoopLimiter
from .lengthlimiter import LengthLimiter
from .veritesting import Veritesting
from .oppologist import Oppologist
from .director import Director, ExecuteAddressGoal, CallFunctionGoal
from .spiller import Spiller

from ..errors import AngrError, AngrExplorationTechniqueError
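A minimal sketch of how a subclass of the class above might be used, assuming the simuvex/angr-era path-group API of the time; the class name, the target address, and the 'found' stash handling are hypothetical:

class StopAtAddress(ExplorationTechnique):
    def __init__(self, addr):
        super(StopAtAddress, self).__init__()
        self.addr = addr

    def filter(self, path):
        # Move any path that reached the target address into the 'found' stash.
        if path.addr == self.addr:
            return 'found'
        return None

    def complete(self, pg):
        # Halt pg.run() once something has been found (assumes pg.stashes is a dict).
        return len(pg.stashes.get('found', [])) > 0

# Hypothetical usage:
# pg = project.factory.path_group()
# pg.use_technique(StopAtAddress(0x400123))
# pg.run()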
Preoperative Prognostic Nutritional Index Value is Related to Postoperative Delirium in Elderly Patients After Noncardiac Surgery: A Retrospective Cohort Study

Purpose

Malnutrition has been considered a risk factor for postoperative delirium (POD). The Prognostic Nutritional Index (PNI) is a validated tool for assessing nutritional status. This study aimed to investigate the association between preoperative PNI values and the occurrence of POD in elderly surgical patients.

Methods

The retrospective cohort study included 361 elderly individuals who underwent noncardiac surgery between 2018 and 2019. Perioperative data were collected from the patients' medical records. PNI was used to evaluate preoperative nutritional status. The primary outcome was the occurrence of POD. Univariate and multivariate logistic regression analyses were used to identify key factors associated with POD and to assess the relationship between PNI values and the occurrence of POD. Receiver operating characteristic (ROC) curve analysis was used to assess the predictive value of PNI for POD.

Results

Seventy-two (19.9%) individuals developed postoperative delirium after surgery. Compared with patients of normal nutritional status (PNI ≥ 50), mild malnutrition (PNI 45-50) did not increase the risk of POD, while patients with moderate to severe malnutrition (PNI 40-45) (odds ratio [OR], 2.92; 95% confidence interval [CI], 1.31-6.50) and serious malnutrition (PNI < 40) (OR, 3.15; 95% CI, 1.12-8.83) were more likely to develop POD. The cut-off value of PNI was 46.05 by ROC curve analysis; the area under the curve (AUC) was 0.69 (95% CI 0.62-0.77).

Conclusion

Preoperative PNI value is related to postoperative delirium in elderly patients after noncardiac surgery.

Introduction

Postoperative delirium (POD) is one of the most common neurological complications among elderly patients after surgery. Acute and fluctuating disturbance of consciousness, inattention, disorganized thinking, and altered consciousness are characteristics of POD. 1 POD is associated with several negative outcomes, such as longer length of stay in hospital, impaired functional abilities, increased long-term care requirements, and mortality. 2,3 It is generally believed that not only postoperative status but also pre-existing factors, such as reduced functional status, advanced age, malnutrition, and decreased cognitive levels, may be closely related to POD. 4 Among the elderly, malnutrition commonly occurs, especially in those who are chronically ill or hospitalized. 5 It has been reported that the incidence of malnutrition ranges from 50% to 80%, depending on the type of disease. 6,7 Poor nutritional status in surgical patients is associated with various adverse outcomes, including functional and cognitive impairment and an increased risk of depression. In clinical settings, several measures can reflect nutritional status, for instance, the Malnutrition Universal Screening Tool (MUST), 11 Controlling Nutritional Status (CONUT), 12 Short Form Mini Nutritional Assessment (MNA-SF), 13 Geriatric Nutritional Risk Index (GNRI), 14 and Prognostic Nutritional Index (PNI). 15 Among these tools, the PNI is a convenient and accurate way to quantify nutritional status. Although two studies have shown that a low preoperative PNI is related to an increased risk of POD among patients undergoing orthopedic surgeries, 16,17 the relationship between PNI and POD in elderly patients undergoing noncardiac surgeries is unclear. Thus, we conducted the present retrospective study.
The study may provide further evidence to support the idea of reducing the incidence of POD by strengthening patients' preoperative nutritional status, especially for elderly patients undergoing elective surgery.

Study Design

We conducted the retrospective study at the Affiliated Hospital of Xuzhou Medical University (Xuzhou, China) between December 2018 and August 2019. The study was approved by the clinical research ethics committee of the Affiliated Hospital of Xuzhou Medical University (Certification No. XYFY2019-KL198-01, approval date: October 28, 2019), and registered at the Chinese Clinical Trial Registry (ChiCTR2000029657, February 9, 2020). The ethics committee agreed that informed patient consent was not required given the retrospective and observational nature of this analysis. The study complied with the 1964 Declaration of Helsinki. The information obtained from patients' recorded data was kept confidential, and codes instead of names were used to identify the study populations.

Inclusion criteria were as follows: age 60 years or older; American Society of Anesthesiologists (ASA) physical status I-III; and noncardiac surgery under general anesthesia. Exclusion criteria were as follows: Mini Mental State Examination (MMSE) score less than 15, duration of surgery less than 90 minutes, length of stay after surgery less than 3 days, preoperative albumin infusion, pre-existing neurological diseases (Parkinson's disease, Alzheimer's disease, or preoperative delirium), and missing data or loss to follow-up.

Data Collection

The baseline characteristics and demographic details were obtained from the hospital medical information system. Preoperative, perioperative, and postoperative data were collected, including age, gender, years of education, body mass index (BMI), smoking and drinking history, ASA physical status, MMSE score, 18 pre-existing comorbidities, Charlson Comorbidity Index (CCI), 19 and laboratory results, such as serum potassium levels, blood glucose, alanine transaminase (ALT), aspartate transaminase (AST), blood urea nitrogen (BUN), creatinine (Cr), hemoglobin, albumin, and lymphocyte count.

Delirium Assessment

Delirium was assessed using rigorous methodologies, including the Confusion Assessment Method (CAM) 21 applied to assess POD in wards, and the CAM for the intensive care unit (CAM-ICU) 22 used in the PACU or the ICU. Patients were assessed for delirium at least 2 h after the end of surgery and twice daily on the first 3 postoperative days, at a minimum 6-hour interval. 23 Additionally, investigators collected evidence of delirium from nurses, caregivers, and medical records, including confusion, agitation, sedation, hallucinations, and delusions.

Primary Outcomes

The primary outcome was the presence of POD during the first 3 days after surgery. The primary objective of this study was to evaluate the association between preoperative PNI value and the presence of POD.

Statistical Analysis

The data were analyzed using SPSS software (version 19). Continuous data were expressed as mean and standard deviation (SD) or as median and interquartile range (IQR), and analyzed by Mann-Whitney U-test or t-test, while categorical data were presented as number (n, %) and analyzed by Chi-square test or Fisher's exact test. Univariable and multivariable logistic regression models were estimated to identify independent risk factors for POD. Variables with P < 0.1 (2-sided) in the univariable analysis were included in the multivariable regression model using a backward selection algorithm.
Multicollinearity diagnostics were performed between the variables to evaluate the validity of the regression model by calculating tolerance values and the variance inflation factor (1/tolerance). Receiver operating characteristic (ROC) curve analysis was used to evaluate the predictive value and cut-off value of PNI for POD. All P values given are based on 2-tailed tests, and a P-value < 0.05 was considered statistically significant. Results Enrollment A total of 650 consecutive patients were screened between December 2018 and August 2019; 250 patients did not meet the inclusion criteria, and 39 patients had incomplete data. These patients were excluded from the study. Thus, 361 patients were enrolled, and their records were available for the final analysis. The details are shown in the flow chart (Figure 1). Patient Characteristics A total of 72 cases (19.9%) developed delirium in the first 3 days after surgery. Table 1 shows the demographic and perioperative data of the patients with and without POD; no statistical difference was found in smoking and drinking habits, history of diabetes and hypertension, CCI, type of operation, operative blood loss, volume of perioperative blood and fluid transfusion, or duration of operation and anesthesia between the patients with and without POD (P > 0.05) (Table 1). The age of the patients in the POD group was significantly higher (P = 0.001), while the BMI (P = 0.003) and the MMSE scores (P = 0.002) were significantly lower than in the non-POD group. Furthermore, compared with patients of ASA II, the incidence of POD was higher in patients of ASA III (P = 0.022). In addition, for the preoperative laboratory tests, the levels of blood lymphocyte count, hemoglobin and albumin were significantly lower in the POD group than in the non-POD group (P < 0.001), while potassium, blood glucose, ALT, AST, BUN and Cr levels did not differ significantly between patients with and without POD (P > 0.05). Compared Outcomes Univariable logistic regression analyses were performed to evaluate potential risk factors for POD; the results are shown in Table 2. Cut-Off Values for PNI As shown in the ROC curve for the incidence of POD, the cut-off value for PNI according to the Youden index was 46.05. The area under the curve (AUC) was 0.692 (95% CI 0.62-0.77). The sensitivity and specificity were 0.779 and 0.556, respectively (Figure 2). Discussion This retrospective study was performed in a population of elderly patients undergoing elective noncardiac surgery under general anesthesia. We found that a lower preoperative PNI value is significantly associated with the presence of POD. Furthermore, after adjusting for age, ASA physical status, BMI, CCI, hemoglobin and MMSE in a multivariable logistic regression model, a low preoperative PNI value was an independent risk factor for the development of POD. The risk of POD for individuals with moderate to severe malnutrition and serious malnutrition increased by 2.9 times and 3.1 times, respectively, compared with those with normal nutritional status. Nutrition risk screening tools can be classified as subjective or objective assessment indicators. 24 The subjective ones do not require special laboratory tests and are easy to perform, but their accuracy is relatively low. Some subjective indicators, such as the MNA-SF, were developed for white populations and may not be suitable for other races. 13
The objective indicators consist mainly of laboratory measures and are therefore more accurate but also more expensive; examples include the PNI, CONUT and GNRI. A study comparing the efficiency of the GNRI, PNI and CONUT in predicting delirium in coronary ICU patients found no significant difference between PNI and CONUT. 25 However, PNI is superior in terms of convenience (CONUT is calculated from serum albumin, serum total cholesterol and total lymphocyte count). Preoperative malnutrition is now considered a predisposing factor for delirium. 4,17,26 A study in patients undergoing spinal deformity surgery reported that PNI < 49.7 predicted the incidence of POD. 17 Another study of 163 elderly patients suggested that a low PNI value is a predictor of POD in elderly patients after hip fracture surgery. 16 In the present study, PNI was found to be a risk factor for POD after noncardiac surgery, and the PNI cut-off value for POD was 46.05, which may enable medical caregivers to identify patients with malnutrition early. Our study is in line with these studies, though with the following differences. Firstly, the participants of the present study were noncardiac surgery patients. Secondly, previous studies focused on the relationship between the absolute value of PNI and POD, while this study assessed nutritional status with the four-class stratification of the PNI tool and explored the correlation between different degrees of nutritional status and the risk of POD. Compared with patients with normal nutritional status, the risk of POD increased by 2.9- and 3.1-fold in patients with moderate to severe malnutrition and serious malnutrition, respectively. Thus, a more accurate and rapid assessment of the risk of POD could be made from PNI values before surgery. Malnutrition has been reported to be associated with several adverse effects, and numerous studies consistently show that nutritional supplementation helps reduce postoperative complications. 27,28 Improving nutritional status during the perioperative period may therefore be of great significance for the treatment and prevention of POD. Nutrient deficiency is thought to contribute to the development of delirium and to impair cognitive performance. 29 A study in elderly hip-fracture patients demonstrated that metabolic abnormalities before surgery could increase the vulnerability of the brain and result in POD, including a lack of ω-3 and ω-6 fatty acids, dysfunction of energy metabolism and glutamate-glutamine cycle dysfunction. 30 However, another study, in Irish older adults, found that nutritional supplements of omega-3 polyunsaturated fatty acids and vitamin D did not improve overall cognitive function. 31 In addition, a review reported that high-protein nutritional supplements significantly improved clinical outcomes, reduced hospital readmissions, and reduced surgical complications. 28 Although there is no direct evidence demonstrating the benefit of nutritional supplements in preventing POD, it is hoped that this hypothesis can be verified in future studies. Furthermore, the finding that lower preoperative hemoglobin levels and lower MMSE scores are also risk predictors of POD is in line with the results of previous studies. 32
The potential explanation is that the oxygen-carrying capacity of the blood decreases at low hemoglobin levels, limiting the supply of energy and oxygen to brain tissue; this results in disordered cerebral metabolism and the onset of delirium symptoms such as disorientation and altered consciousness. The MMSE was used to screen for potential dementia before surgery. Similar to a previous study, 33 the current study found that individuals with lower MMSE scores before surgery had a greater risk of POD, indicating that pre-existing cognitive impairment is associated with the development of POD. The strength of this study is that the sample size is relatively large and the study population covers most types of major noncardiac surgery, which strengthens the conclusions. In addition, the seven scheduled postoperative delirium assessments minimized the chance of a missed diagnosis of POD. However, this study has several possible limitations. Firstly, we only measured the incidence of POD during the first 3 days after surgery. Although POD usually occurs during this period, it can occur later, so some patients with POD may have been missed. Secondly, PNI may be affected by external factors and is not the most sensitive indicator for assessing the nutritional status of patients; this study only analyzed the relationship between preoperative PNI and POD, and did not analyze postoperative changes in PNI or their relationship with POD. Thirdly, patients with dementia and pre-existing delirium were excluded, which may reduce the accuracy of PNI in predicting the occurrence of POD. In addition, the study was a single-center retrospective cohort study and the AUC of PNI (0.692) indicates only moderate accuracy; future prospective studies should be performed to validate these results, and a nomogram based on logistic analysis would help quantify each factor's contribution to the occurrence of POD. Conclusion In summary, for elderly patients undergoing noncardiac surgery, preoperative PNI value is associated with the development of POD. In addition, the risk of POD increases as the degree of nutritional inadequacy becomes more severe. Low hemoglobin and low MMSE scores are two other independent risk factors for POD. Data Sharing Statement The raw data set used/analyzed during the current study is available from Mingsheng Dai on reasonable request, and the data will be made available once the original research is published. Author Contributions All authors made a significant contribution to the work reported, whether in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
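A note on how the index itself is computed, since the excerpt above never restates the formula: the PNI is conventionally calculated from serum albumin and total lymphocyte count (the Onodera formula). The Python sketch below illustrates that calculation, the four-class stratification used in this study, and the Youden-index logic behind the 46.05 cut-off. The formula is an assumption based on standard usage rather than a quotation from this paper, and the example patient values are hypothetical.

def pni(albumin_g_per_l, lymphocytes_per_mm3):
    # Onodera PNI = 10 x serum albumin (g/dL) + 0.005 x total lymphocyte count (/mm^3)
    # (assumed conventional definition; not restated in the excerpt above)
    return 10.0 * (albumin_g_per_l / 10.0) + 0.005 * lymphocytes_per_mm3

def nutrition_class(value):
    # Strata as reported in the abstract: >= 50 normal, 45-50 mild,
    # 40-45 moderate to severe, < 40 serious malnutrition.
    if value >= 50:
        return "normal"
    if value >= 45:
        return "mild malnutrition"
    if value >= 40:
        return "moderate to severe malnutrition"
    return "serious malnutrition"

def youden_cutoff(thresholds, sensitivities, specificities):
    # Youden's J = sensitivity + specificity - 1; the ROC cut-off is the
    # threshold that maximizes J (46.05 here, with J = 0.779 + 0.556 - 1).
    scores = [se + sp - 1.0 for se, sp in zip(sensitivities, specificities)]
    return thresholds[scores.index(max(scores))]

print(nutrition_class(pni(38.0, 1500)))  # hypothetical patient -> "mild malnutrition"

For the hypothetical patient above, the PNI of 45.5 also falls below the 46.05 ROC cut-off, so both ways of reading the index point the same direction.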
// src/classes/post.ts
// eslint-disable-next-line no-unused-vars
import axios, { AxiosResponse } from 'axios';
import config from '../config';

// Minimal shapes for the listing payloads; the real declarations presumably
// live in ambient type files elsewhere in the repo, so these are assumptions.
interface CommentData { parent?: string; [key: string]: unknown }
interface PostListing { data?: { children: unknown[] } }
interface CommentListing { data?: { children: CommentData[] } }

// Tag each top-level comment with the name of the post it belongs to.
function addParentToComments(parent: string, comments: CommentData[]) {
  return comments.map((comment) => ({ ...comment, parent }));
}

export default class Post {
  private readonly POST_JSON_URL: string;

  private name: string;

  constructor(name: string) {
    this.name = name;
    this.POST_JSON_URL = `${config.BASE_URL}/${name}.json`;
  }

  // Fetch the raw [post, comments] tuple for this post.
  private async getPostJSON() {
    let response: AxiosResponse<[PostListing, CommentListing]>;
    try {
      response = await axios.get(this.POST_JSON_URL);
    } catch (error) {
      throw new Error(error.message);
    }
    if (response.status !== 200) {
      throw new Error(response.data.toString());
    }
    return response.data;
  }

  public async getComments() {
    const [, commentListing] = await this.getPostJSON();
    // Guard against a missing children array instead of mapping over undefined.
    const topLevelComments = commentListing.data?.children ?? [];
    return addParentToComments(this.name, topLevelComments);
  }
}
package cfg

// IDGetter is implemented by any node that can be identified by a string ID.
type IDGetter interface {
	GetID() string
}

// TransitionInterface describes a directed transition between sets of places.
type TransitionInterface interface {
	GetFrom() []IDGetter
	GetTo() []IDGetter
}

// TransitionRegistryInterface provides lookup over the configured transitions.
type TransitionRegistryInterface interface {
	GetAsMap() map[string]TransitionInterface
	GetByID(transitionID IDGetter) (TransitionInterface, error)
}

// Interface is the root configuration contract: a start place, a finish
// place, the full set of places, and the registry of transitions between them.
type Interface interface {
	GetStart() IDGetter
	GetFinish() IDGetter
	GetPlaces() []IDGetter
	GetTransitions() TransitionRegistryInterface
}
Evidence of inhibitory effect of Pseudomonas fluorescens CHA0 and aqueous extracts on tomato plants infected with Meloidogyne javanica (Tylenchida: Heteroderidae) The effects of a Pseudomonas fluorescens CHA0 (Pf) isolate and two plant extracts, Datura stramonium L. (jimsonweed) and Myrtus communis L. (myrtle), were investigated on hatching and juvenile (J2s) mortality of Meloidogyne javanica (Tylenchida: Heteroderidae) under laboratory conditions. After determining the LC30, LC50, and LC70 values of each extract, four-leaf-stage tomato seedlings were treated with 20 ml of Pf suspension at a concentration of 10⁸ CFU/ml, using a soil-drenching method. After 1 week, the tested plants were inoculated with 4000 eggs and J2s of M. javanica and simultaneously treated with 100 ml of the selected concentrations of D. stramonium (1.1, 1.4, and 1.8%) or M. communis (1.8, 3 and 5.2%) as a soil drench. Results showed that the combination of Pf with the leaf extract of D. stramonium at 1.8% or M. communis at 5.2%, respectively, reduced the number of eggs per root system and the reproduction factor by 68 and 45%, the number of galls by 64 and 33%, and the number of egg masses by 65 and 43%, compared to the control. In conclusion, the combination of Pf with D. stramonium at 1.8% or M. communis at 5.2% can significantly reduce the damage of M. javanica on tomato under greenhouse conditions. Background Root-knot nematodes (Meloidogyne spp.) are among the most dangerous plant parasites, causing annual losses of 8.8 to 14.6% in agricultural products. These nematodes have a short life cycle, a wide host range, and a high reproduction rate; their management is therefore very difficult (Trudgill and Blok 2001). Moreover, the application of chemical nematicides is harmful to the environment, and their use is not economically feasible. Because of their health consequences for human beings, the use of chemical pesticides is limited, and researchers are searching for safe, environmentally friendly methods that make economic sense. Examples of such approaches are plant extracts and herbal products such as root exudates, herbal meal, and medicinal plant wastes. Biological control agents and their products are used in integrated pest management programs. Among the bio-agents, plant growth-promoting rhizobacteria (PGPR), such as Pseudomonas fluorescens CHA0, are highly efficient in controlling plant pathogens like root-knot nematodes (Tavakol). An outstanding feature of P. fluorescens is its high capacity for solubilizing soil phosphorus. Results have shown that a mixture of P. fluorescens and Azospirillum brasilense had a positive influence on the yield of three potato varieties. Over the last seven decades, plant extracts and other phytochemicals have been surveyed for their effects on plant-parasitic nematodes. Aqueous extracts of different parts of neem, chinaberry, and marigold have been used successfully against root-knot nematodes (Siddiqui and Shakeel 2007). Chemical analyses of plant tissues have detected several active compounds: marigold with alpha-terthienyl, and neem with azadirachtin, nimbin, salannin, nimbidin, and thionemone (Ferraz and de Freitas 2004), showed good effects on root-knot nematodes. Considering the advantages of biological control, this study aimed to identify aqueous extracts of selected plants that affect juvenile mortality and hatching of M.
javanica under laboratory conditions. The other aim was to evaluate the efficiency of the combination of plant extracts and P. fluorescens CHA0 in reducing nematode damage to greenhouse tomatoes. Materials and methods The effects of different aqueous extract concentrations of Datura stramonium L. (jimsonweed), Myrtus communis L. (myrtle), Fumaria officinalis L., and Vitex agnus-castus L. (chaste tree) on hatching and juvenile (J2s) mortality of M. javanica were analyzed under laboratory conditions. After identifying the suitable plant extracts, the concentrations necessary to cause 30, 50, and 70% J2s mortality were used in combination with an isolate of P. fluorescens CHA0 on tomato plants. Preparation of root-knot nematode culture The roots of nematode-infected tomatoes were collected from greenhouses in Boyer-Ahmad County, Iran, and a single egg mass of the root-knot nematode, M. javanica, was cultured on tomato seedlings (cv. Early-Urbana) in the greenhouse. The root-knot nematode species was identified based on the study of perineal patterns, as described by Taylor and Netscher. To prepare the suspension of nematode eggs, the method of Hussey and Barker was used. By storing the egg suspension in an incubator adjusted to 27°C, J2s were hatched and collected over a period of 4 days (Baghaee Ravari and Mahdikhani Moghaddam 2015). Preparation of plant extracts The aerial parts of the plants D. stramonium, M. communis, F. officinalis, and V. agnus-castus were collected from Boyer-Ahmad County, Iran, dried in the shade and finely ground using an electric grinder, and a stock solution (10% w/v) was prepared (Ferris and Zheng 1999). Preparation of bacterial isolate The isolate of P. fluorescens CHA0 was obtained from the Department of Plant Protection, Faculty of Agriculture, Tehran University, Iran. To obtain a pure and fresh bacterial culture, the bacterial suspension was grown on Nutrient Agar (NA) medium. The grown bacteria were harvested and mixed with distilled water, and the concentration was adjusted to 10⁸ CFU/ml. Laboratory assay Two laboratory experiments were conducted to examine the inhibitory effects of the plant extracts and the bacterial suspension on J2s hatching and mortality in M. javanica. One milliliter of egg suspension containing 100 ± 10 nematode eggs was poured into Petri dishes (with a diameter of 8 cm); then 9 ml of bacterial suspension at a concentration of 10⁸ CFU/ml, or aqueous plant extracts at rates of 0.5, 1, 2, 4, 6 and 8% (w/v), were added, and the dishes were kept under controlled conditions at 27°C. After periods of 72 and 120 h, the hatched juveniles were counted using a stereo microscope (Gökhan and Sevilhan 2014). In another experiment, J2s mortality was investigated. Nine milliliters of bacterial suspension at a concentration of 10⁸ CFU/ml, or aqueous plant extracts at rates of 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 5, 6, and 8%, were added to 1 ml of nematode suspension containing 100 ± 10 J2s of M. javanica in Petri dishes and kept at 27°C under controlled conditions. Dead J2s were counted with the help of a stereo microscope. The experiments were carried out in completely randomized designs with four replications. The lethal concentration values (LC30, LC50, and LC70) necessary to cause 30, 50, and 70% J2s mortality were estimated by probit analysis in order to find the suitable concentrations.
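To make the probit step above concrete, the sketch below fits a probit dose-response line to concentration-mortality data by maximum likelihood and inverts it to obtain LC30/LC50/LC70. The concentrations and kill counts are hypothetical, and the hand-rolled fit merely stands in for whatever statistical package the authors used; only the method (probit analysis of grouped mortality data) is taken from the text.

# Hedged sketch of probit estimation of LC30/LC50/LC70 from dose-mortality data.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # % w/v extract (hypothetical)
dead = np.array([12, 25, 48, 72, 90])        # J2s dead out of 100 (hypothetical)
total = np.full_like(dead, 100)

x = np.log10(conc)

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * x)                  # probit link: P(death) = Phi(a + b*log10(c))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(dead * np.log(p) + (total - dead) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = res.x

for level in (0.30, 0.50, 0.70):
    # Invert the fitted line: LC = 10 ** ((Phi^-1(level) - a) / b)
    lc = 10 ** ((norm.ppf(level) - a) / b)
    print(f"LC{int(level * 100)} ~= {lc:.2f}% (w/v)")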
Greenhouse experiment Two greenhouse experiments were conducted in Boyer-Ahmad County, Iran, in 2017 and 2018. The lethal concentrations (LC30, LC50, and LC70) of D. stramonium and M. communis were chosen for the greenhouse test. Seeds of a susceptible tomato (cv. Early-Urbana) were sown in plastic pots containing 1000 g of a steam-sterilized mixture of farm soil (sandy loam soil with electrical conductivity (EC) = 0.671 dS/m, pH = 7.45, containing 76% calcium carbonate, 52.9 mg/kg phosphorus, 0.170 mg/kg organic matter and 0.0987 mg/kg organic carbon), cow manure and sand at a ratio of 1:1:2, respectively. The pots were kept under controlled greenhouse conditions with a 16:8 h light:dark photoperiod at 27 ± 5°C. At the four-leaf stage, each tomato seedling was treated with 20 ml of a 10⁸ CFU/ml suspension of P. fluorescens CHA0 as a soil drench. After 7 days, treated seedlings were inoculated simultaneously with 4000 eggs and J2s of M. javanica and soil-drenched with 100 ml/pot of the selected concentrations of D. stramonium, viz. 1.1, 1.4, and 1.8% (w/v), and M. communis, viz. 1.8, 3 and 5.2% (w/v). Sixty days after nematode inoculation, the plants were harvested and shoot length, fresh and dry shoot weight, and fresh root weight were recorded. The numbers of eggs, galls, and egg masses per root system and the number of J2s per pot were counted, and the reproductive factor of the nematode was calculated as described by Sasser and Taylor. Approximately 2 months after the first trial, the same procedure was repeated as a second trial. The experiments were carried out in a completely randomized design with five replications. Statistical analysis For the greenhouse experiment, data on plant growth parameters were subjected to a 4 × 2 × 2 (plant extracts × bacterial isolate × nematode) factorial analysis of variance (ANOVA), and data on nematode population indices were subjected to a 4 × 2 (plant extracts × bacterial isolate) factorial ANOVA in a completely randomized design, using SAS statistical software (SAS Institute, Cary, NC). For the assays of hatching inhibition and J2s mortality, data were subjected to one-way ANOVA. To normalize the data sets prior to ANOVA, data expressed as percentages were arcsine-transformed (arcsin√x); only untransformed arithmetic means are presented. Where the F-test showed significant differences at p < 0.01, treatment means were compared using least significant differences (LSDs). Results and discussion Laboratory assay Application of P. fluorescens CHA0 significantly reduced the hatching percentage of M. javanica and increased J2s mortality compared with treatments without bacteria (Fig. 1). The results for D. stramonium and M. communis extracts on hatching (Fig. 2) and J2s mortality (Fig. 3) indicated that as extract concentrations increased, the number of hatched eggs decreased, while the number of dead J2s increased. The effects of V. agnus-castus and F. officinalis extracts were low; therefore, they were not chosen for the greenhouse experiments. Toxicity lines were established for both D. stramonium and M. communis, and then LC30, LC50, LC70, and LC90 were calculated (Table 1). The inhibitory effects of the plant extracts against the nematodes may be related to the presence of certain metabolites in the plants.
These chemicals can affect the growth of J2s, kill them inside the egg, or dissolve the egg masses of root-knot nematodes (Adegbite and Adesiyan 2005). Some plant extracts may affect the behavior of nematodes, such as their host-finding ability, and ultimately kill them. This variation in the response of juveniles and nematode eggs to plant extracts can be due to differences in the types of plant metabolites (Zuckerman and Esnard 1994). Because of the variation in the antimicrobial compounds of plant extracts and essential oils, their mechanisms of activity differ. Acting in combination, these compounds destroy the cell wall and membrane and increase cellular permeability and ionic leakage. Following the breakdown of the phospholipid molecules of the cell wall, mitochondria, and membrane proteins, as well as decomposition of the cytoplasm, cells are severely damaged and die (Burt 2004). Greenhouse experiment Plants treated with the extract of M. communis at 5.2% (w/v) had the longest shoots and the highest shoot fresh weight. For shoot length, no significant differences were observed between these treatments and nematode non-inoculated plants treated with the plant extract at 3% (w/v). The highest shoot dry weight and root fresh weight were observed in nematode non-inoculated plants treated with the plant extract at 5.2% (w/v) together with bacterial inoculation; the shoot dry weights did not differ significantly from those of non-inoculated plants with myrtle extract at 5.2% (w/v) (Table 2). The numbers of eggs, galls, and egg masses per root system and the reproductive factor were significantly reduced in bacterial-treated and non-treated tomatoes receiving 5.2% (w/v) myrtle extract; these differed significantly from all other treatments, except bacterial-treated and non-treated plants receiving 3% (w/v) myrtle extract. The lowest numbers of J2s were observed in the soil of bacterial-treated and non-treated tomatoes receiving 5.2% (w/v) myrtle extract, which did not differ significantly from bacterial-treated plants with 3% myrtle extract (Table 3). The results of the application of D. stramonium extract indicated that the longest shoots were observed in nematode non-inoculated plants treated with the plant extract at 1.8% (w/v), with or without P. fluorescens CHA0 inoculation. These did not differ significantly from nematode-inoculated, bacterial-treated plants with the plant extract at 1.8% (w/v), or from nematode non-inoculated plants treated with the plant extract at 1.4% (w/v) without P. fluorescens CHA0 inoculation. The highest shoot fresh weight was observed in healthy plants treated with 1.8% (w/v) plant extract, with or without P. fluorescens CHA0 inoculation. These did not differ significantly from bacterial-treated, nematode-infected plants with the plant extract at 1.8% (w/v), or from treated healthy plants with the plant extract at 1.4% (w/v), with or without bacterial inoculation. The highest shoot dry weights were found in healthy plants treated with the 1.8% (w/v) concentration, with and without bacterial inoculation; these did not differ significantly from bacterial-treated, nematode-infected plants treated with the plant extract at 1.8% (w/v). The highest root fresh weights were observed in healthy, bacterial-treated plants with 1.8% (w/v) plant extract (Table 4).
The numbers of eggs per root system and the reproductive factor of the nematode in bacterial-treated and non-treated tomato roots receiving 1.8% (w/v) D. stramonium extract were significantly lower than in the other treatments. The lowest numbers of J2s in soil and egg masses per root system were observed in the roots of bacterial-treated and non-treated plants with 1.8% (w/v) plant extract; these did not differ significantly from bacterial-treated plants receiving 1.4% (w/v) plant extract. The lowest numbers of galls per root system were observed on the roots of bacterial-treated plants treated with 1.8% (w/v) plant extract; these did not differ significantly from bacterial non-treated plants with 1.8% (w/v) plant extract (Table 5). The positive effects of D. stramonium and M. communis extracts in reducing the damage caused by root-knot nematodes have been shown in several studies. The powder of D. stramonium at 75 g/kg of soil improved okra growth indices and decreased M. incognita infection. It has been shown that D. stramonium extract contains large quantities of saponins and flavonoids and small quantities of tannins and alkaloids; it caused mortality of J2s of M. javanica under laboratory conditions, and also caused a significant decrease in nematode indices and a significant growth improvement in infected melon (Umar and Ngwamdai 2015). Phytochemicals such as saponins, tannins, flavonoids, alkaloids, phenols, and steroids cause a significant decrease in the reproductive factor as well as in the numbers of galls and eggs of M. incognita, and they increase yield and plant growth indices. In a study of ethanolic and aqueous extracts of D. stramonium, D. innoxia, and D. tatula, the ethanolic extracts had greater effects on egg hatch inhibition and increased J2s mortality in M. incognita; moreover, they reduced nematode infection but had no effect on plant growth indices. It has also been shown that increasing the concentration of aqueous extracts of D. stramonium increases J2s mortality, which is consistent with the results of the present study. Previous studies have analyzed the antibacterial effects of myrtle extracts, but the nematicidal properties of these antibacterial compounds had not been proven before. In the study of Oka et al., the effects of the aqueous extract and powder of M. communis on J2s mortality of M. javanica in soil and on the number of eggs per root system and the gall index were investigated, and the results demonstrated the nematicidal activity of the myrtle plant. Induced systemic resistance (ISR) in response to rhizospheric bacteria is one of the mechanisms of resistance against plant-pathogenic nematodes such as root-knot nematodes. The systemic resistance induced by P. fluorescens CHA0 is attributed to the secondary metabolite 2,4-diacetylphloroglucinol (Siddiqui and Shaukat 2003); this kind of resistance has been reported in other studies as well. Rhizobacterial isolates have different mechanisms for inhibiting the life cycle of plant-parasitic nematodes (Siddiqui and Mahmood 1999). The production of hydrogen cyanide, ammonium, hydrogen sulfide, antibiotics, and volatile fatty acids are other inhibitory mechanisms of P. fluorescens CHA0. These toxic metabolites affect the reproductive rate of nematodes, thereby decreasing their numbers.
Furthermore, the antagonistic fluorescent bacteria used against plant-parasitic nematodes are compatible with the rhizosphere, inexpensive, and have no adverse consequences for the environment. Conclusion The study indicated that aqueous extracts of D. stramonium and M. communis in combination with P. fluorescens CHA0 have the potential to decrease the reproduction of the root-knot nematode, M. javanica, and can improve the plant growth indices of infected tomatoes under greenhouse conditions.
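The reproduction factor used throughout the greenhouse results is, by the Sasser and Taylor convention the authors cite, the ratio of final nematode population to initial inoculum (RF = Pf/Pi). The snippet below shows that calculation and the percent-reduction arithmetic behind figures like the 68% reduction quoted in the abstract; the population counts are hypothetical and the RF definition is an assumption about the cited convention.

# Reproduction factor (assumed Sasser and Taylor convention): RF = Pf / Pi,
# final population over the initial inoculum (4000 eggs + J2s per pot here).
def reproduction_factor(final_population, initial_inoculum=4000):
    return final_population / initial_inoculum

def percent_reduction(treated, control):
    # e.g. the abstract reports ~68% and ~45% reductions vs. the control
    return 100.0 * (control - treated) / control

rf_control = reproduction_factor(52000)   # hypothetical untreated control count
rf_treated = reproduction_factor(16640)   # hypothetical Pf + D. stramonium 1.8%
print(round(percent_reduction(rf_treated, rf_control)))  # -> 68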
The Role of Objective and Subjective Experiences, Direct and Media Exposure, Social and Organizational Support, and Educational and Gender Effects in the Prediction of Children's Posttraumatic Stress Reactions One Year after Calamity. Eighty-three students and one teacher died when a powerful earthquake hit eastern Turkey at 3:27 in the morning. The boarding school at which they were residents was destroyed. The purpose of this study was to test the effects of direct, indirect, objective, and subjective exposure on the development of Post-Traumatic Stress Disorder (PTSD). The impact of social and organizational support, as well as age and gender factors, was examined in relation to the development of PTSD in this group. Participants were 270 elementary and secondary school students who had survived the disaster. One year after the disaster, each participant completed the Children's Post-Traumatic Stress Disorder Reaction Index (CPTSD) and scales of trauma exposure, trauma experiencing, social support, and organizational support. Contributing factors were identified with a stepwise regression analysis. A combination of direct, indirect, and objective exposure scores, subjective exposure scores, gender, age, and organizational and social support variables accounted for 17% of the variance in PTSD scores. Direct exposure accounted for 6%, subjective exposure 5.4%, age 2%, food shortage 1%, and having a friend move away after the disaster 2.6% of the total variance. Subjective exposure (fear) and direct exposure appeared to be the most significant predictors. However, inconsistent with previous research, media exposure, gender, and physical exposure were especially poor contributors. Neither school nor home damage, nor the death of relatives or friends, nor gas, water, and electricity shortages contributed significantly to the results. In contrast, fear experienced during the disaster, food shortages, and the loss of a friend who moved away after the earthquake were all powerful predictors. Protective factors, which can strengthen or modify an individual's ability to cope, include healthy family functioning, support from peers and family members, an organized social network, and the utilization of civil organizations. Researchers and practitioners should therefore pay attention to these predictive and protective factors.
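The variance figures above are consistent with a stepwise model in which each retained predictor's incremental R-squared sums to the reported total; the short check below simply reproduces that arithmetic from the reported numbers.

# The reported incremental contributions sum to the reported total R^2 of 17%.
delta_r2 = {
    "direct exposure": 0.060,
    "subjective exposure (fear)": 0.054,
    "age": 0.020,
    "food shortage": 0.010,
    "friend moved away": 0.026,
}
print(round(sum(delta_r2.values()), 3))  # -> 0.17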
NEW YORK, Jan 19 (Reuters) - Some traders at the largest Wall Street banks are about to get big, fat zeroes for bonuses while they watch markets thrive. Trading revenue was down significantly across the industry during the fourth quarter, wrapping up a year in which clients around the globe sat idle as market volatility hovered near historic lows. The big five Wall Street banks – JPMorgan Chase & Co , Citigroup Inc, Bank of America Corp, Goldman Sachs Group Inc and Morgan Stanley – reported an average revenue decline of 32 percent for the fourth quarter, and 12 percent for the full year. Even though stock markets hit new highs and bond markets moved little, executives said it was hard to generate income from inactive customers. As a result, bonuses could be 10 percent to 20 percent lower than the prior year, and traders who sit on desks that posted losses could get nothing at all, consultants and recruiters said in interviews. “Getting zero bonuses was unheard of a couple years ago, but it happens today,” said Alan Johnson, head of compensation consulting firm Johnson Associates. “I expect that there are people who will get no bonus” this season, he added. Traders have been feeling the crunch for several years, as trading revenue has been on a near-steady march downward and banks have embarked on aggressive cost-cutting campaigns. It has also become harder for traders to leave banks for attractive opportunities on the buy-side because active managers have been facing their own difficulties with performance and fund-raising. Commodities traders may have it the worst. Muted client activity and wild fluctuations in power and natural gas markets resulted in one of the worst years on record for many trading firms. Big names in energy trading, including hedge fund manager Andy Hall and Texas tycoon T. Boone Pickens, simply closed up shop. After posting one of the worst years on record, managers in Goldman Sachs’ commodities trading unit have told some staff to expect little to no bonus for 2017 performance, three people familiar with the matter told Reuters. They were not authorized to speak on the record. Spokeswoman Tiffany Galvin declined to comment. While $0 bonus checks are still relatively rare, Wall Street banks are trying hard to keep a lid on compensation costs more broadly. Goldman cut its compensation costs 12 percent last year, even as it hired 2,200 more workers. Its average employee received $323,852 in compensation during 2017. That represented 37 cents for every dollar in revenue they produced, down from 38 cents the year before. Compensation costs in Morgan Stanley’s institutional business declined only slightly more than its revenue declined. The investment bankers and traders in that unit received 34 cents in compensation for every dollar in revenue they brought into the bank, down from 35 cents-per-dollar in 2016. “We pay for performance,” Chief Financial Officer Jon Pruzan said in an interview. Historically during bonus season, traders have expected to take home some percent of either the revenue they generated during the year, or the value of their book of assets. That structure offered enormous upside for strong performance, but because it also encouraged risk-taking, banks have shifted to a model that adjusts for risk and is more discretionary, recruiters and consultants said. Ross Gregory, a director at the talent firm Proco Commodities, said he expects bonuses to be much lower this year because of those factors, as well as weak performance.
/*
 * Licensed to the OpenAirInterface (OAI) Software Alliance under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The OpenAirInterface Software Alliance licenses this file to You under
 * the OAI Public License, Version 1.0 (the "License"); you may not use this file
 * except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.openairinterface.org/?page_id=698
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *-------------------------------------------------------------------------------
 * For more information about the OpenAirInterface (OAI) Software Alliance:
 *      <EMAIL>
 */

/*! \file s1ap_common.c
 * \brief s1ap procedures for both eNB and MME
 * \author <NAME> and <NAME>
 * \email <EMAIL>
 * \date 2012-2015
 * \version 0.1
 */

#include <stdint.h>

#include "s1ap_common.h"
#include "S1AP-PDU.h"

int asn_debug = 1;
int asn1_xer_print = 2;

#if defined(EMIT_ASN_DEBUG_EXTERN)
inline void ASN_DEBUG(const char *fmt, ...)
{
  if (asn_debug) {
    int adi = asn_debug_indent;
    va_list ap;
    va_start(ap, fmt);
    fprintf(stderr, "[ASN1]");
    while (adi--) fprintf(stderr, " ");
    vfprintf(stderr, fmt, ap);
    fprintf(stderr, "\n");
    va_end(ap);
  }
}
#endif

/* Encode an initiatingMessage PDU into a freshly allocated buffer. */
ssize_t s1ap_generate_initiating_message(
  uint8_t               **buffer,
  uint32_t               *length,
  e_S1ap_ProcedureCode    procedureCode,
  S1ap_Criticality_t      criticality,
  asn_TYPE_descriptor_t  *td,
  void                   *sptr)
{
  S1AP_PDU_t pdu;
  ssize_t    encoded;

  memset(&pdu, 0, sizeof(S1AP_PDU_t));

  pdu.present = S1AP_PDU_PR_initiatingMessage;
  pdu.choice.initiatingMessage.procedureCode = procedureCode;
  pdu.choice.initiatingMessage.criticality   = criticality;
  ANY_fromType_aper(&pdu.choice.initiatingMessage.value, td, sptr);

  if (asn1_xer_print) {
    xer_fprint(stdout, &asn_DEF_S1AP_PDU, (void *)&pdu);
  }

  /* We can safely free list of IE from sptr */
  ASN_STRUCT_FREE_CONTENTS_ONLY(*td, sptr);

  if ((encoded = aper_encode_to_new_buffer(&asn_DEF_S1AP_PDU, 0, &pdu,
                                           (void **)buffer)) < 0) {
    return -1;
  }

  *length = encoded;
  return encoded;
}

/* Encode a successfulOutcome PDU into a freshly allocated buffer. */
ssize_t s1ap_generate_successfull_outcome(
  uint8_t               **buffer,
  uint32_t               *length,
  e_S1ap_ProcedureCode    procedureCode,
  S1ap_Criticality_t      criticality,
  asn_TYPE_descriptor_t  *td,
  void                   *sptr)
{
  S1AP_PDU_t pdu;
  ssize_t    encoded;

  memset(&pdu, 0, sizeof(S1AP_PDU_t));

  pdu.present = S1AP_PDU_PR_successfulOutcome;
  pdu.choice.successfulOutcome.procedureCode = procedureCode;
  pdu.choice.successfulOutcome.criticality   = criticality;
  ANY_fromType_aper(&pdu.choice.successfulOutcome.value, td, sptr);

  if (asn1_xer_print) {
    xer_fprint(stdout, &asn_DEF_S1AP_PDU, (void *)&pdu);
  }

  /* We can safely free list of IE from sptr */
  ASN_STRUCT_FREE_CONTENTS_ONLY(*td, sptr);

  if ((encoded = aper_encode_to_new_buffer(&asn_DEF_S1AP_PDU, 0, &pdu,
                                           (void **)buffer)) < 0) {
    return -1;
  }

  *length = encoded;
  return encoded;
}

/* Encode an unsuccessfulOutcome PDU into a freshly allocated buffer. */
ssize_t s1ap_generate_unsuccessfull_outcome(
  uint8_t               **buffer,
  uint32_t               *length,
  e_S1ap_ProcedureCode    procedureCode,
  S1ap_Criticality_t      criticality,
  asn_TYPE_descriptor_t  *td,
  void                   *sptr)
{
  S1AP_PDU_t pdu;
  ssize_t    encoded;

  memset(&pdu, 0, sizeof(S1AP_PDU_t));

  pdu.present = S1AP_PDU_PR_unsuccessfulOutcome;
  pdu.choice.unsuccessfulOutcome.procedureCode = procedureCode;
  pdu.choice.unsuccessfulOutcome.criticality   = criticality;
  ANY_fromType_aper(&pdu.choice.unsuccessfulOutcome.value, td, sptr);

  if (asn1_xer_print) {
    xer_fprint(stdout, &asn_DEF_S1AP_PDU, (void *)&pdu);
  }

  /* We can safely free list of IE from sptr */
  ASN_STRUCT_FREE_CONTENTS_ONLY(*td, sptr);

  if ((encoded = aper_encode_to_new_buffer(&asn_DEF_S1AP_PDU, 0, &pdu,
                                           (void **)buffer)) < 0) {
    return -1;
  }

  *length = encoded;
  return encoded;
}

/* Allocate and encode a single information element. */
S1ap_IE_t *s1ap_new_ie(
  S1ap_ProtocolIE_ID_t   id,
  S1ap_Criticality_t     criticality,
  asn_TYPE_descriptor_t *type,
  void                  *sptr)
{
  S1ap_IE_t *buff;

  if ((buff = malloc(sizeof(S1ap_IE_t))) == NULL) {
    // Possible error on malloc
    return NULL;
  }

  memset((void *)buff, 0, sizeof(S1ap_IE_t));

  buff->id = id;
  buff->criticality = criticality;

  if (ANY_fromType_aper(&buff->value, type, sptr) < 0) {
    fprintf(stderr, "Encoding of %s failed\n", type->name);
    free(buff);
    return NULL;
  }

  if (asn1_xer_print)
    if (xer_fprint(stdout, &asn_DEF_S1ap_IE, buff) < 0) {
      free(buff);
      return NULL;
    }

  return buff;
}

void s1ap_handle_criticality(S1ap_Criticality_t criticality)
{
}
// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.

import {Utils} from './utils'

test('ensureProtocol', () => {
    expect(Utils.ensureProtocol('https://focalboard.com')).toBe('https://focalboard.com')

    // long protocol
    expect(Utils.ensureProtocol('somecustomprotocol://focalboard.com')).toBe('somecustomprotocol://focalboard.com')

    // short protocol
    expect(Utils.ensureProtocol('x://focalboard.com')).toBe('x://focalboard.com')

    // no protocol
    expect(Utils.ensureProtocol('focalboard.com')).toBe('https://focalboard.com')
})
CAREGIVER-SPECIFIC QUALITY MEASURES FOR HCBS: STAKEHOLDER PRIORITIES AND ENVIRONMENTAL SCAN Abstract Although informal family caregivers are increasingly recognized for their essential role in helping older and/or medically complex adults live in the community for as long as possible, their priorities and perspectives have not been well integrated into assessments of home- and community-based services (HCBS). Our aim was to identify measurement gaps to guide quality monitoring and improve HCBS. Caregiver concerns and quality measurement priorities were identified during a multi-level stakeholder engagement process (34 Veterans, 24 caregivers, and 39 facility leaders, clinicians, and staff) across four VA healthcare systems. We conducted an environmental scan and scoping review of national quality measure sets for HCBS, comparing caregiver-specific items against stakeholder-identified concerns and priorities. Only five of eleven non-VA measure sets and three of four VA measure sets included caregiver-specific items; these did not encompass the full range of stakeholder-identified concerns and priorities. Measures that emphasize caregivers can help healthcare systems monitor and improve HCBS quality.
// cli/config/auth.go
package config

import "github.com/go-ini/ini"

// BasicAuthList holds the credentials read from the [basic_auth] section.
type BasicAuthList struct {
	UserName  string
	Passwords string
}

var (
	BasicAuth          BasicAuthList
	SimultaneousAccess int
)

// LoadInit reads authentication settings from an INI file laid out as:
//
//	[basic_auth]
//	username = ...
//	passwords = ...
//
//	[simultaneous_access]
//	num = ...
func LoadInit(file string) error {
	cfg, err := ini.Load(file)
	if err != nil {
		return err
	}
	BasicAuth = BasicAuthList{
		UserName:  cfg.Section("basic_auth").Key("username").String(),
		Passwords: cfg.Section("basic_auth").Key("passwords").String(),
	}
	SimultaneousAccess, err = cfg.Section("simultaneous_access").Key("num").Int()
	if err != nil {
		return err
	}
	return nil
}
In Vivo Detection of Chronic Kidney Disease Using Tissue Deformation Fields From Dynamic MR Imaging Objective: Chronic kidney disease (CKD) is a serious medical condition characterized by gradual loss of kidney function. Early detection and diagnosis are mandatory for adequate therapy and improved prognosis. Hence, in the current pilot study we explore the use of image registration methods for detecting renal morphologic changes in patients with CKD. Methods: Ten healthy volunteers and nine patients with presumed CKD underwent dynamic T1-weighted imaging without contrast agent. From real and simulated dynamic time series, kidney deformation fields were estimated using a poroelastic deformation model. From the deformation fields, several quantitative parameters reflecting pressure gradients and volumetric and shear deformations were computed. Eight of the patients also underwent a kidney biopsy as a gold standard. Results: We found that the absolute deformation, normalized volume changes, and pressure gradients correlated significantly with arteriosclerosis from biopsy assessments. Furthermore, our results indicate that current image registration methodologies lack the sensitivity to recover mild changes in tissue stiffness. Conclusion: Image registration applied to dynamic time series correlated with structural renal changes and should be further explored as a tool for non-invasive measurement of arteriosclerosis. Significance: Provided the proposed framework can be further developed in terms of sensitivity and specificity, it can offer clinicians a non-invasive tool with high spatial coverage for the characterization of arteriosclerosis and potentially other pathological changes observed in chronic kidney disease.
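For background, poroelastic deformation models of the kind referenced above are typically variants of quasi-static linear (Biot) poroelasticity; since the abstract does not spell out its model, the equations below are the standard form such models build on, not notation taken from the paper. Here u is the tissue displacement field, p the pore (fluid) pressure, ε(u) the small-strain tensor, μ and λ the Lamé parameters, α the Biot coefficient, c₀ a storage coefficient, κ the hydraulic conductivity, and q a fluid source term:

\nabla \cdot \bigl( 2\mu\,\varepsilon(u) + \lambda\,(\nabla \cdot u)\,I \bigr) - \alpha \nabla p = 0,
\qquad
\frac{\partial}{\partial t}\bigl( c_0\, p + \alpha\, \nabla \cdot u \bigr) - \nabla \cdot (\kappa \nabla p) = q,
\qquad
\varepsilon(u) = \tfrac{1}{2}\bigl(\nabla u + \nabla u^{\top}\bigr).

The first equation is momentum balance with an effective-stress pressure coupling, the second is mass balance for the pore fluid; fitting such a model to registered image time series is what yields the pressure-gradient and volumetric-deformation parameters the study reports.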
// Copyright: see the paragon464-server repository for license information.
package com.paragon464.gameserver.io.database.pool.impl;

import com.paragon464.gameserver.io.database.DatabaseType;

/**
 * Immutable description of a single database connection: host address, port,
 * schema name and credentials, plus the database type, consumed by the
 * connection pool implementation in this package.
 */
final class Database {

    private final int port;

    private final DatabaseType type;

    private final String address;

    private final String database;

    private final String username;

    private final String password;

    public Database(int port, DatabaseType type, String address, String database, String username, String password) {
        this.port = port;
        this.type = type;
        this.address = address;
        this.database = database;
        this.username = username;
        this.password = password;
    }

    public int getPort() {
        return port;
    }

    public DatabaseType getType() {
        return type;
    }

    public String getAddress() {
        return address;
    }

    public String getDatabase() {
        return database;
    }

    public String getUsername() {
        return username;
    }

    public String getPassword() {
        return password;
    }
}