Superradiance from few emitters with non-radiative decay

A description of superradiance (SR) of a few quantum emitters with non-radiative decay is presented in terms of quantum states. Quantum efficiencies (QE) of SR of two and three emitters are calculated and compared with the case of two and three independent emitters. The maximum increase in QE is about 8% for two emitters and about 16% for three emitters; it is reached at certain ratios between the non-radiative and radiative rates. The approach can be generalized to include incoherent pumping, dephasing, and delay in the emitter-emitter interaction.

1. Introduction

In order to consider superradiance (SR) in a dissipative environment, in particular in a laser cavity with incoherent pumping, one has to understand better how decoherence influences SR. The usual way to do this is with the density matrix formalism. However, it is often much easier to work with quantum states. The description of dissipation in quantum systems in terms of quantum states is not as common as the density matrix formalism, but it has a long history: one can mention the well-known Weisskopf-Wigner approach to spontaneous emission and its generalization to the superradiance of a few atoms in free space. Here we present a way of describing superradiant quantum emitters in terms of quantum states in an arbitrary dissipative environment: with non-radiative decay, incoherent pumping, and dephasing. One important source of decoherence is the non-radiative decay of emitters, so we give a detailed derivation of our method for the particular case of superradiance with non-radiative decay. The same approach, however, can be applied to any other dissipation channel or to incoherent pumping, as we show below.

For simplicity we restrict ourselves to emitters separated by distances much smaller than the radiation wavelength, so that we can neglect the delay in the emitter-emitter interaction; such a delay can be added to the treatment straightforwardly. We suppose a symmetric arrangement of emitters, in which each emitter interacts equally with the others, so that SR is not affected by differences in the emitter transition frequencies. We suppose that all emitters are excited at the initial time and then describe their radiative and non-radiative decay. We treat SR as a cascaded (step) process, with radiative or non-radiative decay of only one excitation at each step.

In the next two Sections, 2 and 3, we describe the first and second steps for two emitters and find the radiated power and the relative quantum efficiency of SR of two emitters. The generalization of the method to many emitters becomes clear after the description of three emitters in Sections 4-7: the first step for three emitters is in Section 4, the second step describing non-radiative decay is in Section 5, and radiative decay is in Section 6. The rate equations for the populations of the three-emitter states are derived and solved in Section 7, where we calculate the radiated power and the relative quantum efficiency of three superradiant emitters and compare them with the two-emitter case. Section 8 shows how to include incoherent pumping. Conclusions are presented in Section 9.

2. Two emitters: the first step

We first consider two two-level emitters with transition frequency ω₀. They decay by photon emission and by non-radiative decay; each emitter decays non-radiatively into its own bath. There are various mechanisms of non-radiative decay, such as interaction with impurities, defects, quenching, etc.
Here we do not describe these mechanisms in detail, but introduce an effective broadband non-radiative decay bath; a particle of this bath we call a "phonon". The state of the two emitters is written |n₁n₂⟩. There are four such states: the doubly excited state |11⟩, the singly excited states |10⟩ and |01⟩, and the ground state |00⟩. States of the full system are products of emitter states and bath states. The bath states are |f⟩, one photon in the mode f with a given wave vector and polarization, and |p_i⟩, a phonon in the non-radiative decay bath of the i-th emitter; |0⟩ is the bath vacuum (no photons, no phonons), and in the states |f⟩ and |p_i⟩ all other baths are empty. Once an emitter emits a particle into a bath, the particle never returns to the emitter. The probability that both emitters decay simultaneously is negligibly small, so we can treat the decay as a step (cascade) process: only one photon or phonon is emitted at each step.

We describe the state of emitters and baths by wave functions. In the first step one of the emitters reaches its ground state by radiative or non-radiative decay; in the second step the other emitter loses its excitation. The wave function of emitters and baths after the first step shows that the two emitters can emit the same photon, but cannot emit the same phonon. Radiative decay therefore produces the entangled symmetric state of the two emitters with one excitation and one photon,

|+⟩ = (|10⟩ + |01⟩)/√2,

while non-radiative decay produces the product states |10⟩|p₂⟩ and |01⟩|p₁⟩. Because each phonon belongs to a different bath, these states and the one-photon states are mutually orthogonal. The Hamiltonian contains emitter-photon and emitter-phonon interaction terms, and one finds that the state |11⟩|0⟩ decays into three mutually orthogonal manifolds: the symmetric one-photon manifold, at the collective radiative rate 2γ_r, and the two one-phonon manifolds, at the non-radiative rate γ each.

3. The second step

The second step does not depend on the first one, so we may take the initial states of the second step to contain no photons or phonons. We now consider the radiative and non-radiative decay of the symmetric state |+⟩. A factor 1/2 appears in the adiabatic elimination of the bath variables: it is the square of the normalization factor 1/√2 in the wave function, in other words the statistical weight of the single-emitter state within the entangled state. Besides |+⟩, the second step has the initial states |10⟩|0⟩ and |01⟩|0⟩; a single emitter decays from each of these with the usual single-emitter radiative and non-radiative rates γ_r and γ. The "ground state" manifolds {|00⟩|f⟩} and {|00⟩|p_i⟩} can be joined into one ground-state manifold. Fig. 1 shows all the manifolds for the first and second steps and the transitions between them.

To find the radiation rate of the two emitters we sum the rates of all radiative transitions weighted by the populations; with the superscript (2) denoting two emitters,

P⁽²⁾(t) = 2γ_r W₂(t) + 2γ_r W₊(t) + γ_r W₁(t),

with the initial condition W₂(0) = 1, where W₂, W₊ and W₁ are the populations of |11⟩, of |+⟩, and of the pair of product states |10⟩, |01⟩. One can compare P⁽²⁾ with the radiation rate of two independent emitters, which contains γ_r W₁(t), where W₁(t) is the probability that one emitter is excited while the other is not; the difference is the superradiant "addition" to the radiation. Fig. 2a shows both.

Integrating the radiated power over time gives the total number of emitted photons (the photon yield): Q with SR and Q₀ without SR. Without non-radiative decay, Q = Q₀ = 2, but this is no longer so with non-radiative decay. Fig. 3 shows the relative quantum efficiency (RQE) R = Q/Q₀ of the photon yield as a function of γ/γ_r; a short numerical sketch of this calculation follows below.
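To make the cascade concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) of the photon-yield calculation, assuming the rate assignments reconstructed above: |11⟩ decays at 2γ_r (radiative) + 2γ (non-radiative), |+⟩ at 2γ_r + γ, and each singly excited product state at γ_r + γ. The function names are ours.

```python
import numpy as np

# Two-emitter photon yield under the step (cascade) rate structure above.
def yield_superradiant(x, g_r=1.0):
    g = x * g_r                            # non-radiative rate gamma = x * gamma_r
    I11 = 1.0 / (2*g_r + 2*g)              # time integral of W2, with W2(0) = 1
    Ip = 2*g_r * I11 / (2*g_r + g)         # time integral of W+ (fed radiatively by W2)
    I1 = 2*g * I11 / (g_r + g)             # time integral of W1 (fed non-radiatively)
    return 2*g_r*I11 + 2*g_r*Ip + g_r*I1   # photons = integral of P(t) over time

def yield_independent(x, g_r=1.0):
    return 2.0 * g_r / (g_r + x*g_r)       # two independent emitters

x = np.linspace(0.01, 10.0, 2000)
R = np.array([yield_superradiant(v) / yield_independent(v) for v in x])
print(f"max RQE = {R.max():.3f} at gamma/gamma_r = {x[R.argmax()]:.2f}")
# -> max RQE = 1.086 at gamma/gamma_r = 1.41 (i.e. sqrt(2)), the ~8-9%
#    maximum increase quoted in the text.
```

In closed form this model gives R(x) = [1 + 2/(2+x) + x/(1+x)]/2 with x = γ/γ_r, whose maximum (5 − 2√2)/2 ≈ 1.086 agrees with the value quoted in the text.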
Fig. 3. Relative quantum efficiency for two superradiant versus two independent emitters as a function of the relative non-radiative decay rate γ/γ_r.

As one can see, R > 1: in the presence of non-radiative decay, SR always increases the number of emitted photons relative to the case without SR. Interestingly, at a certain γ/γ_r the increase of the photon yield has a maximum, R_max ≈ 1.086. Thus there is only a small, roughly 8-9%, maximum increase of efficiency for two emitters due to SR; the acceleration of the emission of two emitters by SR is also modest (see Fig. 2). One can expect a larger acceleration of emission and a larger increase of the photon yield for more than two SR emitters, as we shall see for three emitters. More than two emitters are described in the same way as two: by considering decays into state manifolds that include both emitter and bath states.

4. Three emitters: the first step

The case of three emitters is more general than that of two: as we shall see, it contains non-radiative relaxation transitions between symmetric Dicke states. More than three emitters can be described in the same way as three. At the first step, one photon or phonon is emitted from the state |111⟩|0⟩, with all three emitters excited and no photons or phonons in the baths. The energy states and transitions for the first step are shown in Fig. 4.

5. Next steps: manifolds originating from non-radiative decay

In the next steps one considers the manifolds shown in Fig. 4. Radiative and non-radiative transitions occur in the same way from the |110⟩, |101⟩ and |011⟩ states.

6. Next steps: relaxation from the |3,2⟩ₛᶠ state

Here we drop the index f in the state notation. The state |3,2⟩ₛ decays radiatively, at the rate 4γ_r, to the symmetric Dicke state |3,1⟩ₛ of three emitters (as in the usual Dicke model without non-radiative decay). In addition, |3,2⟩ₛ decays with emission of a phonon into three symmetric Dicke states of two emitters, |2,1⟩ₛⁱ, with the third emitter (marked by the index i) in the ground state. Adiabatic elimination of the phonon variables shows that the non-radiative relaxation rate of |3,2⟩ₛ into any one of the states |2,1⟩ₛⁱ is 2γ/3. The factor 2/3 comes from the formal procedure and is a statistical weight: the non-radiative relaxation of the two excitations present in |3,2⟩ₛ is distributed equally among the three symmetric states |2,1⟩ₛⁱ with one emitter excited. Note that each state |2,1⟩ₛⁱ reached by non-radiative relaxation from |3,2⟩ₛ comes with a phonon, while a |2,1⟩ₛⁱ state reached by radiative relaxation from states such as |101⟩ comes with a photon, so these symmetric two-emitter states are orthogonal to each other. The state |3,1⟩ₛ decays to the ground state radiatively at the rate 3γ_r and non-radiatively at the rate γ. The scheme of transitions for the first, second and third steps, starting from |3,2⟩ₛᶠ, is shown in Fig. 6.

7. Rate equations for the populations of the three-emitter states and the RQE

We denote by W₃ the population of the |111⟩ state with all three emitters excited, and similarly for the other manifolds. The relaxation rates contain "more" γ_r's than γ's (for example, the collective radiative rates 3γ_r and 4γ_r against the non-radiative rates γ and 2γ). Fig. 7a shows the radiated power together with that of three independent emitters; Fig. 7b shows the RQE R for three superradiant emitters relative to three independent ones. A numerical sketch for the three-emitter network appears below.
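The same bookkeeping extends to three emitters. The sketch below is again our own reconstruction, not the paper's code: the feeding of the |2,1⟩ₛ states by both the phonon channel from |3,2⟩ₛ and the radiative decay of the |110⟩-type states follows the transitions described above, and should be checked against the paper's Fig. 6.

```python
import numpy as np

# Three-emitter photon yield, same cascade bookkeeping as above (g_r = 1).
# Manifolds: A = |111>, B = |3,2>_s, C = three |110>-type one-phonon states,
# D = |3,1>_s, E = the |2,1>_s^i states, F = single-excitation product states.
def yield_sr3(x):
    IA = 1.0 / (3 + 3*x)             # |111>: 3 g_r radiative + 3 g non-radiative
    IB = 3 * IA / (4 + 2*x)          # fed radiatively from A; decays 4 g_r + 2 g
    IC = 3*x * IA / (2 + 2*x)        # fed non-radiatively from A; decays 2 g_r + 2 g
    ID = 4 * IB / (3 + x)            # |3,1>_s: 3 g_r + g
    IE = (2*x*IB + 2*IC) / (2 + x)   # |2,1>_s: fed by phonon and photon channels
    IF = 2*x * IC / (1 + x)          # single emitters: g_r + g
    return 3*IA + 4*IB + 2*IC + 3*ID + 2*IE + IF   # sum of radiative channels

def yield_ind3(x):
    return 3.0 / (1 + x)             # three independent emitters

x = np.linspace(0.01, 10.0, 2000)
R3 = np.array([yield_sr3(v) / yield_ind3(v) for v in x])
print(f"max RQE = {R3.max():.3f} at gamma/gamma_r = {x[R3.argmax()]:.2f}")
# -> max RQE ~ 1.17 near gamma/gamma_r ~ 1.8, i.e. the ~16% quoted in the text.
```

At γ = 0 this network reproduces the pure Dicke cascade (yield 3), and its maximum relative gain of about 16-17% is consistent with the value stated for three emitters.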
8. Incoherent pump

In the same way as non-radiative decay, one can take into account incoherent pumping of the SR emitters, with its own pump bath for each emitter; see Fig. 8. Transitions due to the incoherent pump for the case of two emitters are marked in Fig. 8 by red arrows; each such transition has the rate γ_p. There is no incoherent-pump transition from the ground state to the excited symmetric state with population W₊: the incoherent pump does not create the symmetric state |+⟩. However, once the state |+⟩ appears through collective spontaneous emission from |11⟩, the incoherent pump can also excite |11⟩ from |+⟩.

9. Conclusion

We have described the superradiance of a few emitters in terms of quantum states, taking into account the non-radiative decay of the emitters. Orthogonality between different states is ensured by including the states of the photon and phonon relaxation baths. The dynamics of the emitters follows population-balance rate equations; these equations can be solved analytically or, for a large number of emitters, numerically by a simple iteration procedure. We considered the radiation from two and three superradiant emitters and compared it with the radiation from two and three independent emitters; the three-emitter case generalizes in an obvious way to N > 3 emitters. The radiated power and the relative quantum efficiency (RQE) were calculated. The quantum efficiency of radiation from SR emitters is always greater than that from independent emitters. The maximum RQE gain is about 8% for two emitters and about 16% for three, so the maximum gain grows with the number of emitters; it is reached at a certain ratio of the radiative and non-radiative relaxation rates. Incoherent pumping and dephasing can be taken into account in the same way as non-radiative decay, and delay in the emitter-emitter interaction can be included in future studies. The results can be used for modeling SR in realistic systems with dissipation and, more generally, for better understanding and modeling of the dynamics of dissipative quantum systems.
#include <lcom/lcf.h>
#include <vbe.h>
#include <stdlib.h>

#include "util.h"

vbe_mode_info_t vbe_mode_info;
static uint8_t *mapped_mem;
static uint8_t *backbuffer = NULL;
static uint32_t buffer_size = 0;
static uint8_t bytes_per_pixel = 0;

/* Retry lm_alloc() a few times before giving up. */
void *retry_lm_alloc(size_t size, mmap_t *mmap) {
  void *result = NULL;
  for (unsigned i = 0; i < 5; i++) {
    result = lm_alloc(size, mmap);
    if (result != NULL)
      break;
    sleep(1);
  }
  return result;
}

int vbe_get_mode_info_2(uint16_t mode, vbe_mode_info_t *vmi_p) {
  struct reg86u r;
  mmap_t mmap;

  /* Reset the struct values */
  memset(&r, 0, sizeof(r));

  /* Allocate memory block in low memory area */
  if (retry_lm_alloc(sizeof(vbe_mode_info_t), &mmap) == NULL) {
    printf("(%s): lm_alloc() failed\n", __func__);
    return VBE_LM_ALLOC_FAILED;
  }

  /* Build the struct */
  r.u.b.ah = VBE_FUNC;
  r.u.b.al = RETURN_VBE_MODE_INFO;
  r.u.w.cx = mode;
  r.u.w.es = PB2BASE(mmap.phys);
  r.u.w.di = PB2OFF(mmap.phys);
  r.u.b.intno = VIDEO_CARD_SRV;

  /* BIOS call */
  if (sys_int86(&r) != FUNC_SUCCESS) {
    lm_free(&mmap);
    printf("(%s): sys_int86() failed\n", __func__);
    return VBE_SYS_INT86_FAILED;
  }

  /* Verify the return for errors */
  if (r.u.w.ax != FUNC_RETURN_OK) {
    lm_free(&mmap);
    printf("(%s): sys_int86() return in ax was different from OK\n", __func__);
    return VBE_INVALID_RETURN;
  }

  /* Copy the requested info to vbe_mode_info */
  memcpy(vmi_p, mmap.virt, sizeof(vbe_mode_info_t));

  /* Free allocated memory */
  lm_free(&mmap);
  return VBE_OK;
}

int set_video_mode(uint16_t mode) {
  struct reg86u r;

  /* Reset the struct values */
  memset(&r, 0, sizeof(r));

  /* Build the struct */
  r.u.b.ah = VBE_FUNC;
  r.u.b.al = SET_VBE_MODE;
  r.u.w.bx = LINEAR_FRAME_BUFFER | mode;
  r.u.b.intno = VIDEO_CARD_SRV;

  /* BIOS call */
  if (sys_int86(&r) != FUNC_SUCCESS) {
    printf("(%s): sys_int86() failed\n", __func__);
    return VBE_SYS_INT86_FAILED;
  }
  return VBE_OK;
}

void *(vg_init)(uint16_t mode) {
  /* Initialize lower memory region */
  if (lm_init(true) == NULL) {
    printf("(%s) Couldn't init lm\n", __func__);
    return NULL;
  }

  int res = 0;
  if ((res = vbe_get_mode_info_2(mode, &vbe_mode_info)) != OK) {
    printf("(%s) Couldn't get mode info\n", __func__);
    return NULL;
  }

  bytes_per_pixel = calculate_size_in_bytes(get_bits_per_pixel());
  buffer_size = get_x_res() * get_y_res() * bytes_per_pixel;
  backbuffer = malloc(buffer_size);
  if (backbuffer == NULL) { /* malloc can fail; check before writing */
    printf("(%s) Couldn't allocate backbuffer\n", __func__);
    return NULL;
  }
  memset(backbuffer, 0, buffer_size);

  struct minix_mem_range mr;                          /* physical memory range */
  unsigned int vram_base = vbe_mode_info.PhysBasePtr; /* VRAM's physical address */
  /* VRAM size in bytes; using BitsPerPixel here would overstate it 8x */
  unsigned int vram_size = vbe_mode_info.XResolution * vbe_mode_info.YResolution * bytes_per_pixel;
  void *video_mem; /* frame-buffer VM address */

  /* Allow memory mapping */
  mr.mr_base = (phys_bytes) vram_base;
  mr.mr_limit = mr.mr_base + vram_size;
  if (OK != (res = sys_privctl(SELF, SYS_PRIV_ADD_MEM, &mr)))
    panic("sys_privctl (ADD_MEM) failed: %d\n", res);

  /* Map memory */
  video_mem = vm_map_phys(SELF, (void *) mr.mr_base, vram_size);
  if (video_mem == MAP_FAILED)
    panic("couldn't map video memory");

  /* Store the mapped memory pointer in mapped_mem */
  mapped_mem = video_mem;

  /* Set video mode */
  if (set_video_mode(mode) != OK)
    return NULL;

  return video_mem;
}

int (pj_draw_hline)(uint16_t x, uint16_t y, uint16_t len, uint32_t color) {
  /* Check if out of bounds */
  if (x >= get_x_res() || y >= get_y_res()) {
    printf("(%s) Invalid coordinates: x=%d, y=%d\n", __func__, x, y);
    return VBE_INVALID_COORDS;
  }

  for (uint32_t i = 0; i < len; i++) {
    /* Stop if x leaves the screen */
    if (x + i >= get_x_res())
      return VBE_OK;
    /* Color the pixel */
    uint32_t y_coord = y * get_x_res() * bytes_per_pixel;
    uint32_t x_coord = (x + i) * bytes_per_pixel;
    memcpy(backbuffer + y_coord + x_coord, &color, bytes_per_pixel);
  }
  return VBE_OK;
}

int (pj_draw_vline)(uint16_t x, uint16_t y, uint16_t len, uint32_t color) {
  /* Check if out of bounds */
  if (x >= get_x_res() || y >= get_y_res()) {
    printf("(%s) Invalid coordinates: x=%d, y=%d\n", __func__, x, y);
    return VBE_INVALID_COORDS;
  }

  for (uint32_t i = 0; i < len; i++) {
    /* Stop if y leaves the screen */
    if (y + i >= get_y_res())
      return VBE_OK;
    /* Color the pixel */
    uint32_t y_coord = (y + i) * get_x_res() * bytes_per_pixel;
    uint32_t x_coord = x * bytes_per_pixel;
    memcpy(backbuffer + y_coord + x_coord, &color, bytes_per_pixel);
  }
  return VBE_OK;
}

int (pj_draw_rectangle)(int16_t x, int16_t y, uint16_t width, uint16_t height, uint32_t color) {
  /* Check if completely out of bounds.
   * Note: the int16_t casts are only safe while resolutions stay far below 32767. */
  if (x > (int16_t) get_x_res() || y > (int16_t) get_y_res())
    return 1;
  if ((x < 0 && (x + (int16_t) width) < 0) || (y < 0 && (y + (int16_t) height) < 0))
    return 2;

  /* Clip against the left/top edges */
  if (x < 0) { width += x; x = 0; }
  if (y < 0) { height += y; y = 0; }

  /* Write one horizontal line per row */
  for (uint32_t i = 0; i < height; i++) {
    /* Check if out of screen range */
    if (y + i >= get_y_res())
      break;
    if (pj_draw_hline((uint16_t) x, (uint16_t) y + i, width, color) != OK)
      return VBE_DRAW_LINE_FAILED;
  }
  return VBE_OK;
}

/* Draws a pixmap into an arbitrary buffer (assumes 1 byte per pixel). */
void draw_pixmap_on(const char *pixmap, uint16_t x, uint16_t y, int width, int height, uint8_t *buffer) {
  /* Iterate lines */
  for (int i = 0; i < height; i++) {
    if ((i + y) >= get_y_res()) /* Y is out of bounds */
      break;
    /* Iterate columns */
    for (int j = 0; j < width; j++) {
      if ((j + x) >= get_x_res()) /* X is out of bounds */
        break;
      /* Draw the pixmap pixel */
      buffer[(y + i) * get_x_res() + x + j] = pixmap[i * width + j];
    }
  }
}

void (draw_pixmap)(const char *pixmap, uint16_t x, uint16_t y, int width, int height) {
  draw_pixmap_on(pixmap, x, y, width, height, mapped_mem);
}

void (clear_buffer)(uint8_t color) {
  memset(backbuffer, color, buffer_size);
}

/* Clears the backbuffer with a 32-bit color (only valid in 32bpp modes). */
void clear_buffer_four(uint32_t color) {
  uint32_t *fixed_buffer = (uint32_t *) backbuffer;
  for (uint32_t i = 0; i < get_x_res() * get_y_res(); i++)
    fixed_buffer[i] = color;
}

void swap_buffers() {
  memcpy(mapped_mem, backbuffer, buffer_size);
}

uint8_t get_bits_per_pixel() { return vbe_mode_info.BitsPerPixel; }
uint16_t get_x_res() { return vbe_mode_info.XResolution; }
uint16_t get_y_res() { return vbe_mode_info.YResolution; }
uint8_t get_memory_model() { return vbe_mode_info.MemoryModel; }
uint8_t get_red_mask_size() { return vbe_mode_info.RedMaskSize; }
uint8_t get_red_field_position() { return vbe_mode_info.RedFieldPosition; }
uint8_t get_blue_mask_size() { return vbe_mode_info.BlueMaskSize; }
uint8_t get_blue_field_position() { return vbe_mode_info.BlueFieldPosition; }
uint8_t get_green_mask_size() { return vbe_mode_info.GreenMaskSize; }
uint8_t get_green_field_position() { return vbe_mode_info.GreenFieldPosition; }
uint8_t get_rsvd_mask_size() { return vbe_mode_info.RsvdMaskSize; }
uint8_t get_rsvd_field_position() { return vbe_mode_info.RsvdFieldPosition; }

void draw_pixmap_direct_mode(uint8_t *symbol, uint16_t x, uint16_t y, int width, int height, uint32_t color, bool fixedColor) {
  /* Iterate lines */
  for (int i = 0; i < height; i++) {
    if ((i + y) >= get_y_res()) /* Y is out of bounds */
      break;
    /* Iterate columns */
    for (int j = 0; j < width; j++) {
      if ((j + x) >= get_x_res()) /* X is out of bounds */
        break;
      /* Get symbol color; zero first so unused high bytes stay defined
       * when bytes_per_pixel < 4 */
      uint32_t temp = 0;
      memcpy(&temp, symbol + (i * width + j) * bytes_per_pixel, bytes_per_pixel);
      /* If transparent position, do not draw anything */
      if (temp == TRANSPARENCY_COLOR_8_8_8_8)
        continue;
      /* Draw with specified color */
      if (fixedColor)
        temp = color;
      memcpy(backbuffer + ((y + i) * get_x_res() + x + j) * bytes_per_pixel, &temp, bytes_per_pixel);
    }
  }
}

void draw_background(uint8_t *bckg, int width, int height) {
  /* Iterate lines */
  for (int i = 0; i < height; i++) {
    if (i >= get_y_res()) /* Y is out of bounds */
      break;
    /* Copy entire line */
    memcpy(backbuffer + i * get_x_res() * bytes_per_pixel, bckg + i * width * bytes_per_pixel, width * bytes_per_pixel);
  }
}

uint8_t get_bytes_per_pixel() {
  return bytes_per_pixel;
}
This photo shows the positions of the past, present and predicted future appearances of the Refsdal supernova. The circle at top left marks the position of the supernova as it was in 1995 (though it wasn't actually observed). The circle at bottom right shows the galaxy which lensed the Refsdal supernova to produce four images, a discovery made in late 2014. The middle circle shows the predicted position of the reappearing supernova in late 2015 or early 2016. The Visible Infrared Imaging Radiometer Suite instrument aboard NASA-NOAA's Suomi NPP satellite captured these phytoplankton communities between the Falkland Islands to the west and South Georgia Island to the east. In the very center of this photo is VY Canis Majoris, an incredibly bright red hypergiant star. It is one of the largest known stars in the Milky Way and is surrounded by clouds of glowing red hydrogen gas and dust; the bright star cluster around the bright star Tau Canis Majoris appears towards the upper right. A close-up shot of VY Canis Majoris, a red hypergiant star with almost 40 times the mass of the Sun and 300,000 times its luminosity. This close-up view was captured by the SPHERE instrument on the VLT, revealing how the brilliant light of the star illuminates the clouds of material surrounding it. In this photo, the star itself is hidden behind an obscuring disc, and the crosses are artifacts due to features in the instrument. This galaxy, 2MASX J16270254+4328340, has merged with another galaxy. A fine mist, made of millions of stars, spews from it in long trails. The galaxy is growing old and its star-forming days are coming to an end. The clash caused an eruption of star formation and exhausted the vast majority of the galactic gas, leaving the galaxy sterile and unable to produce new stars. Eventually the stars will redden with age and dim one by one, leaving the galaxy to slowly fade. This photo is an expanded view of comet C/2006 W3 (Christensen). NASA's NEOWISE spacecraft observed 163 comets, including C/2006 W3, in 2009. Data from the survey are now giving new insights into the dust, comet nucleus sizes, and production rates for difficult-to-observe gases like carbon dioxide and carbon monoxide. This is a photo of Dione, one of Saturn's 53 named moons. In the background are Saturn's rings. It was captured during a Cassini flyby that was testing Dione's gravity field, but Cassini also got some up-close photos of the moon's icy surface.
/*
 Copyright (C) 2016-2017 <NAME>

 This library is free software; you can redistribute it and/or
 modify it under the terms of the GNU Lesser General Public
 License as published by the Free Software Foundation; either
 version 2.1 of the License, or (at your option) any later version.

 This library is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 Lesser General Public License for more details.

 You should have received a copy of the GNU Lesser General Public
 License along with this library; if not, write to the Free Software
 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 Author: <EMAIL> (<NAME>)
*/

#include "mycss/selectors/list.h"

mycss_selectors_list_t * mycss_selectors_list_create(mycss_selectors_t* selectors)
{
    mycss_selectors_list_t* selectors_list = mcobject_malloc(selectors->mcobject_list_entries, NULL);
    mycss_selectors_list_clean(selectors_list);
    return selectors_list;
}

void mycss_selectors_list_clean(mycss_selectors_list_t* selectors_list)
{
    memset(selectors_list, 0, sizeof(mycss_selectors_list_t));
}

mycss_selectors_list_t * mycss_selectors_list_destroy(mycss_selectors_t* selectors, mycss_selectors_list_t* selectors_list, bool self_destroy)
{
    if(selectors_list == NULL)
        return NULL;

    mycss_entry_t *entry = selectors->ref_entry;

    if(selectors_list->entries_list) {
        for(size_t i = 0; i < selectors_list->entries_list_length; i++) {
            mycss_selectors_entry_t *sel_entry = selectors_list->entries_list[i].entry;

            /* destroy the whole chain hanging off this entry */
            while(sel_entry) {
                mycss_selectors_entry_t *sel_entry_next = sel_entry->next;
                mycss_selectors_entry_destroy(entry->selectors, sel_entry, true);
                sel_entry = sel_entry_next;
            }
        }

        mycss_selectors_entries_list_destroy(entry->selectors, selectors_list->entries_list);
    }

    if(self_destroy) {
        mcobject_free(selectors->mcobject_list_entries, selectors_list);
        return NULL;
    }

    return selectors_list;
}

mycss_selectors_list_t * mycss_selectors_list_append_selector(mycss_selectors_t* selectors, mycss_selectors_list_t* current_list, mycss_selectors_entry_t* selector)
{
    if(current_list->entries_list == NULL) {
        current_list->entries_list = mycss_selectors_entries_list_create(selectors);
    }
    else {
        current_list->entries_list = mycss_selectors_entries_list_add_one(selectors, current_list->entries_list, current_list->entries_list_length);
    }

    mycss_selectors_entries_list_t *entries_list = &current_list->entries_list[current_list->entries_list_length];
    memset(entries_list, 0, sizeof(mycss_selectors_entries_list_t));

    selectors->specificity = &entries_list->specificity;

    entries_list->entry = selector;
    current_list->entries_list_length++;

    return current_list;
}

mycss_selectors_entry_t * mycss_selectors_list_last_entry(mycss_selectors_list_t* list)
{
    size_t i = list->entries_list_length;
    while(i) {
        i--;

        mycss_selectors_entry_t *entry = list->entries_list[i].entry;
        while(entry) {
            if(entry->next == NULL)
                return entry;
            entry = entry->next;
        }
    }
    return NULL;
}

void mycss_selectors_list_append_to_current(mycss_selectors_t* selectors, mycss_selectors_list_t* current_list)
{
    if(selectors->list_last) {
        selectors->list_last->next = current_list;
        current_list->prev = selectors->list_last;
    }
    else {
        (*selectors->list) = current_list;
    }

    selectors->list_last = current_list;
}

mycss_selectors_entry_t ** mycss_selectors_list_current_chain(mycss_selectors_list_t* list)
{
    /* an empty list has no current chain; the previous (inverted) condition
       returned NULL for non-empty lists and indexed entries_list[-1] for
       empty ones */
    if(list->entries_list_length == 0)
        return NULL;

    return &list->entries_list[ list->entries_list_length - 1 ].entry;
}

bool mycss_selectors_list_destroy_last_empty_selector(mycss_selectors_t* selectors, mycss_selectors_list_t* list, bool destroy_found)
{
    if(list->entries_list_length == 0)
        return false;

    size_t idx = list->entries_list_length - 1;
    mycss_selectors_entry_t *entry = list->entries_list[idx].entry;

    if(entry == NULL) {
        mycss_selectors_entry_destroy(selectors, entry, destroy_found);
        list->entries_list_length--;
        return true;
    }

    while(entry) {
        if(entry->next == NULL) {
            if(entry->key == NULL) {
                if(entry->prev)
                    entry->prev->next = NULL;
                else {
                    list->entries_list[idx].entry = NULL;
                    list->entries_list_length--;
                }

                mycss_selectors_entry_destroy(selectors, entry, destroy_found);
                return true;
            }
            return false;
        }
        entry = entry->next;
    }

    return false;
}
Observations of the reactivation rate and subsequent growth, in addition to the luminescence, of cells subjected to supercooling and freezing at −15 degrees C suggested that the effect of supercooling is more significant than that of freezing. In supercooled suspensions, cells are injured more than in frozen ones, and the number of injured cells increases more rapidly. Unlike the supercooled cells, live cells from frozen suspensions maintain a higher growth activity for a longer time.
import React, { useState } from 'react';
import { useSelector } from 'react-redux';
import { faToggleOn, faToggleOff } from '@fortawesome/free-solid-svg-icons';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import PerformanceChart from './PerformanceChart';
import { selectTotalEquityTimeframe } from '../../selectors/performance';
import { ToggleButton, StateText } from '../../styled/ToggleButton';

export const PerformanceTotalValueChart = () => {
  const totalEquityData = useSelector(selectTotalEquityTimeframe);
  const [chartStartsAt0, setChartMin] = useState(true);

  // When zoomed, let the y-axis scale to the data instead of starting at 0.
  let chartMin: number | undefined = 0;
  if (!chartStartsAt0) {
    chartMin = undefined;
  }

  const data = React.useMemo(
    () => [
      {
        label: 'Total Equity',
        data: totalEquityData?.map((a) => {
          const date = new Date(Date.parse(a.date));
          return [
            new Date(date.getFullYear(), date.getMonth(), date.getDate()),
            a.value,
          ];
        }),
        color: '#04A286',
      },
    ],
    [totalEquityData],
  );

  const series = React.useMemo(() => ({ type: 'line' }), []);

  const axes = React.useMemo(
    () => [
      { primary: true, type: 'time', position: 'bottom' },
      { type: 'linear', position: 'left', hardMin: chartMin, showGrid: false },
    ],
    [chartMin],
  );

  return (
    <div>
      <ToggleButton
        onClick={() => {
          setChartMin(!chartStartsAt0);
        }}
      >
        Zoom Scale &nbsp;
        {!chartStartsAt0 ? (
          <React.Fragment>
            <FontAwesomeIcon icon={faToggleOn} />
            <StateText>on</StateText>
          </React.Fragment>
        ) : (
          <React.Fragment>
            <FontAwesomeIcon icon={faToggleOff} />
            <StateText>off</StateText>
          </React.Fragment>
        )}
      </ToggleButton>
      <PerformanceChart
        className="equity"
        data={data}
        axes={axes}
        series={series}
      />
    </div>
  );
};

export default PerformanceTotalValueChart;
// Copy a wave to a new wave
Wave WaveCopy(Wave wave)
{
    Wave newWave = { 0 };

    // Total data size in bytes: frames * channels * (bits per sample)/8
    newWave.data = RL_MALLOC(wave.frameCount*wave.channels*wave.sampleSize/8);

    if (newWave.data != NULL)
    {
        memcpy(newWave.data, wave.data, wave.frameCount*wave.channels*wave.sampleSize/8);

        newWave.frameCount = wave.frameCount;
        newWave.sampleRate = wave.sampleRate;
        newWave.sampleSize = wave.sampleSize;
        newWave.channels = wave.channels;
    }

    return newWave;
}
import {mount, ReactWrapper} from 'enzyme';
import * as React from 'react';
import {Provider} from 'react-redux';
import {Store} from 'redux';
import {IReactVaporState} from '../../../ReactVapor';
import {clearState} from '../../../utils/ReduxUtils';
import {TestUtils} from '../../../utils/tests/TestUtils';
import {UUID} from '../../../utils/UUID';
import {FlatSelect, IFlatSelectProps} from '../FlatSelect';
import {selectFlatSelect} from '../FlatSelectActions';
import {FlatSelectConnected} from '../FlatSelectConnected';
import {IFlatSelectOptionProps} from '../FlatSelectOption';

describe('FlatSelect', () => {
    describe('<FlatSelectConnected />', () => {
        let wrapper: ReactWrapper<any, any>;
        let flatSelect: ReactWrapper<IFlatSelectProps, void>;
        let store: Store<IReactVaporState>;

        const id: string = 'flatSelect';
        const anOptionId: string = 'flatOption';
        const defaultOptions: IFlatSelectOptionProps[] = [
            {
                id: UUID.generate(),
                option: {
                    content: 'test',
                },
            },
            {
                id: anOptionId,
                option: {
                    content: 'test 1',
                },
            },
        ];

        const renderDropdownSearchConnected = () => {
            wrapper = mount(
                <Provider store={store}>
                    <FlatSelectConnected id={id} options={defaultOptions} />
                </Provider>,
                {attachTo: document.getElementById('App')}
            );
            flatSelect = wrapper.find(FlatSelect).first();
        };

        beforeEach(() => {
            store = TestUtils.buildStore();
        });

        afterEach(() => {
            store.dispatch(clearState());
            wrapper.detach();
        });

        describe('mount and unmount', () => {
            beforeEach(() => {
                renderDropdownSearchConnected();
            });

            it('should call onMount prop when mounted', () => {
                wrapper.unmount();
                store.dispatch(clearState());
                expect(store.getState().flatSelect.length).toBe(0);

                wrapper.mount();
                expect(store.getState().flatSelect.length).toBe(1);
            });

            it('should set the first selected option for selectedOptionId in the state on mount', () => {
                wrapper.unmount();
                store.dispatch(clearState());
                expect(store.getState().flatSelect.length).toBe(0);

                const newFlatSelect = (
                    <FlatSelectConnected id={id} options={defaultOptions} defaultSelectedOptionId={anOptionId} />
                );
                wrapper.setProps({children: newFlatSelect});
                wrapper.mount();

                expect(store.getState().flatSelect.length).toBe(1);
                expect(store.getState().flatSelect[0].selectedOptionId).toBe(anOptionId);
            });

            it('should call onDestroy prop when will unmount', () => {
                wrapper.unmount();
                expect(store.getState().flatSelect.length).toBe(0);
            });
        });

        describe('mapStateToProps', () => {
            beforeEach(() => {
                renderDropdownSearchConnected();
            });

            it('should get an id as a prop', () => {
                const idProp = flatSelect.props().id;

                expect(idProp).toBeDefined();
                expect(idProp).toBe(id);
            });

            it('should get the options as a prop', () => {
                const isOpenedProp = flatSelect.props().options;

                expect(isOpenedProp).toBeDefined();
                expect(isOpenedProp.length).toBe(2);
            });

            it('should get the first option for selectedOption if the selectedOption is undefined as a prop', () => {
                const optionsPropId = flatSelect.props().selectedOptionId;

                expect(optionsPropId).toBeDefined();
                expect(optionsPropId).toBe(defaultOptions[0].id);
            });

            it('should get the current selectedOption as a prop', () => {
                store.dispatch(selectFlatSelect(id, defaultOptions[1].id));
                wrapper.update();
                flatSelect = wrapper.find(FlatSelect).first();

                const optionsPropId = flatSelect.props().selectedOptionId;

                expect(optionsPropId).toBeDefined();
                expect(optionsPropId).toBe(defaultOptions[1].id);
            });
        });

        describe('mapDispatchToProps', () => {
            beforeEach(() => {
                renderDropdownSearchConnected();
            });

            it('should get what to do on destroy as a prop', () => {
                const onDestroyProp = flatSelect.props().onDestroy;
                expect(onDestroyProp).toBeDefined();
            });

            it('should get what to do on onMount as a prop', () => {
                const onMountProp = flatSelect.props().onRender;
                expect(onMountProp).toBeDefined();
            });

            it('should add the first option as optionSelection on onMount', () => {
                expect(store.getState().flatSelect[0].selectedOptionId).toBe(defaultOptions[0].id);
            });

            it('should get what to do on onOptionClick as a prop', () => {
                const onOptionClick = flatSelect.props().onOptionClick;
                expect(onOptionClick).toBeDefined();
            });

            it('should add the optionSelected in the state on onOptionClick', () => {
                expect(store.getState().flatSelect[0].selectedOptionId).toBe(defaultOptions[0].id);

                flatSelect.props().onOptionClick(defaultOptions[1]);

                expect(store.getState().flatSelect[0].selectedOptionId).toBe(defaultOptions[1].id);
            });
        });
    });
});
The invention relates to wavelength-tuning, phase-shifting interferometry. Interferometric optical techniques are widely used to measure optical thickness, flatness, and other geometric and refractive index properties of precision optical components such as glass substrates used in lithographic photomasks. For example, to measure the surface profile of a measurement surface, one can use an interferometer to combine a measurement wavefront reflected from the measurement surface with a reference wavefront reflected from a reference surface to form an optical interference pattern. Spatial variations in the intensity profile of the optical interference pattern correspond to phase differences between the combined measurement and reference wavefronts caused by variations in the profile of the measurement surface relative to the reference surface.

Phase-shifting interferometry (PSI) can be used to accurately determine the phase differences and the corresponding profile of the measurement surface. With PSI, the optical interference pattern is recorded for each of multiple phase-shifts between the reference and measurement wavefronts to produce a series of optical interference patterns that span a full cycle of optical interference (e.g., from constructive, to destructive, and back to constructive interference). The optical interference patterns define a series of intensity values for each spatial location of the pattern, wherein each series of intensity values has a sinusoidal dependence on the phase-shifts with a phase-offset equal to the phase difference between the combined measurement and reference wavefronts for that spatial location. Using numerical techniques known in the art, the phase-offset for each spatial location is extracted from the sinusoidal dependence of the intensity values to provide a profile of the measurement surface relative to the reference surface. Such numerical techniques are generally referred to as phase-shifting algorithms.

The phase-shifts in PSI can be produced by changing the optical path length from the measurement surface to the interferometer relative to the optical path length from the reference surface to the interferometer. For example, the reference surface can be moved relative to the measurement surface. Alternatively, the phase-shifts can be introduced for a constant, non-zero optical path difference by changing the wavelength of the measurement and reference wavefronts. The latter application is known as wavelength-tuning PSI and is described, e.g., in U.S. Pat. No. 4,594,003 to G. E. Sommargren. Unfortunately, PSI measurements can be complicated by spurious reflections from other surfaces of the measurement object because they too contribute to the optical interference. In such cases, the net optical interference image is a superposition of multiple interference patterns produced by pairs of wavefronts reflected from the multiple surfaces of the measurement object and the reference surface.

The invention features a method for extracting selected interference data from overlapping optical interference patterns arising from spurious reflections. The method takes advantage of the fact that for each interference pattern, a change in optical wavelength induces a phase-shift that is substantially linearly proportional to the optical path difference (OPD) corresponding to the two wavefronts giving rise to the interference pattern.
In other words, the intensity profile of each interference pattern has a sinusoidal dependence on wavelength, and that sinusoidal dependence has a phase-shifting frequency proportional to the OPD for that interference pattern. To extract the phase information of the interference pattern of interest, the method employs a phase-shifting algorithm that is more sensitive to the phase-shifting frequency of the selected interference pattern than to those of the other interference patterns in the image.

In general, in one aspect, the invention features a method for interferometrically profiling a measurement object having multiple reflective surfaces. The method includes: positioning the measurement object within an unequal path length interferometer (e.g., a Fizeau interferometer) employing a tunable coherent light source; recording an optical interference image for each of multiple wavelengths of the light source, each image including a superposition of multiple interference patterns produced by pairs of wavefronts reflected from the multiple surfaces of the measurement object and a reference surface; and extracting phases of a selected one of the interference patterns from the recorded images by using a phase-shifting algorithm that is more sensitive (e.g., at least ten times more sensitive) to a wavelength-dependent variation in the recorded images caused by the selected interference pattern than to wavelength-dependent variations in the recorded images caused by the other interference patterns.

Embodiments of the profiling method can include any of the following features. The phase-shifting algorithm can include a phase calculation equal to an arctangent of a ratio, the numerator and denominator of the ratio being weighted sums of intensity values of the recorded images at each spatial coordinate. More specifically, the phase-shifting algorithm can be a Fourier-series phase-shifting algorithm. The multiple wavelengths can be spaced from one another to impart substantially equal phase-shifts between the selected interference patterns in consecutive images. Furthermore, the multiple wavelengths can be spaced from one another to impart an absolute phase-shift of less than 2π between the selected interference patterns in consecutive images. Alternatively, the multiple wavelengths can be spaced from one another to impart an absolute phase-shift of greater than 2π between the selected interference patterns in consecutive images (i.e., sub-Nyquist sampling).

In general, in another aspect, the invention features a method for interferometrically profiling a measurement object having multiple reflective surfaces. The method includes: positioning the measurement object within an unequal path length interferometer employing a tunable coherent light source; recording an optical interference image for each of multiple wavelengths of the light source, each image including a superposition of multiple interference patterns produced by pairs of wavefronts reflected from the multiple surfaces of the measurement object and a reference surface; and extracting phases of a selected one of the interference patterns from the recorded images by using a Fourier-series phase-shifting algorithm. The Fourier-series phase-shifting algorithm can include a phase calculation equal to an arctangent of a ratio, the numerator and denominator of the ratio being weighted sums of intensity values of the recorded optical interference patterns at each spatial coordinate.
For example, the phase calculation can correspond to:

    tan(θ) = [−3(g0 − g12) − 4(g1 − g11) + 12(g3 − g9) + 21(g4 − g8) + 16(g5 − g7)]
             / [−4(g1 + g11) − 12(g2 + g3 + g9 + g10) + 16(g5 + g7) + 24 g6]

where, for each spatial coordinate, θ is the phase extracted by the algorithm and gj is the intensity value of the "jth" image, and where the wavelength shift Δλ between consecutive patterns corresponds to a phase shift substantially equal to π/4 for the selected interference pattern.

In general, in a further aspect, the invention features a system for profiling a measurement object having multiple reflective surfaces including a tunable coherent light source, an unequal length interferometer (e.g., a Fizeau interferometer), a detector, and a system controller. The light source is configured to generate light at any one of multiple wavelengths. The interferometer includes a mount configured to position a selected one of the reflective surfaces of the measurement object at a non-zero distance Z from the zero optical path difference (OPD) position of the interferometer. The distance Z is less than about nT/2, nT being the smallest optical distance between two of the multiple reflective surfaces of the measurement object. The interferometer is also configured to receive the light from the light source and generate an optical interference image including a superposition of multiple interference patterns produced by pairs of wavefronts reflected from the multiple surfaces of the measurement object and a reference surface. The detector is configured to record the optical interference image generated by the interferometer. The system controller is connected to the light source and the detector. During operation it causes the light source to generate light at each of the multiple wavelengths, causes the detector to record the image for each of the multiple wavelengths of the light source, and implements a phase-shifting algorithm to determine phases of a selected one of the interference patterns from the recorded images.

Embodiments of the profiling system can have any of the following features. The distance Z can satisfy the expression nT/2 ≥ Z ≥ nT/5; for example, Z can be equal to about nT/3. The phase-shifting algorithm implemented by the controller can be more sensitive to a wavelength-dependent variation in the recorded images caused by the selected interference pattern than to wavelength-dependent variations in the recorded images caused by the other interference patterns. The phase-shifting algorithm implemented by the controller can be a Fourier-series phase-shifting algorithm.

In general, in a further aspect, the invention features a system for profiling a measurement object having multiple reflective surfaces, including a tunable coherent light source, an unequal length interferometer (e.g., a Fizeau interferometer), a detector, and a system controller. The tunable coherent light source is configured to generate light at any one of multiple wavelengths spanning a range greater than or equal to about λ²/nT, where λ is an intermediate one of the multiple wavelengths and nT is the smallest optical distance between two of the multiple reflective surfaces of the measurement object.
The interferometer is configured to support the measurement object, receive the light from the light source, and generate an optical interference image including a superposition of multiple interference patterns produced by pairs of wavefronts reflected from the multiple surfaces of the measurement object and a reference surface. The detector is configured to record the optical interference image generated by the interferometer. The system controller is connected to the light source and the detector. During operation it causes the light source to generate light at each of the multiple wavelengths, causes the detector to record the optical interference image for each of the multiple wavelengths of the light source, and implements a phase-shifting algorithm to determine phases of a selected one of the interference patterns from the recorded images.

Embodiments of the profiling system can include any of the following features. The wavelengths can span a range greater than or equal to about 3λ²/(2nT), e.g., about 5λ²/(2nT). The tunable coherent source can include a laser diode and a driver, which during operation adjusts the current to the laser diode to vary the wavelength output of the laser diode. The phase-shifting algorithm implemented by the controller can be more sensitive to a wavelength-dependent variation in the recorded images caused by the selected interference pattern than to wavelength-dependent variations in the recorded images caused by the other interference patterns. The phase-shifting algorithm implemented by the controller can be a Fourier-series phase-shifting algorithm.

Embodiments of the invention have many advantages. For example, a selected reflective surface of a measurement object can be profiled even though other surfaces produce spurious reflections that complicate the optical interference image. As a result, precision optical substrates such as glass flats can be profiled without coating any of their surfaces. Moreover, the profiling measurements can be as fast and robust as in measurements involving no such spurious reflections. In addition to measuring the topography of a selected surface of the measurement object, the optical profile of the measurement object, including refractive index inhomogeneities, can also be measured. Other aspects, advantages, and features will be apparent from the following detailed description and from the claims.
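Per pixel, the quoted 13-frame algorithm is just a weighted arctangent. The following sketch (our own illustration in Python with NumPy; the weights are copied from the formula above, while the function name and synthetic test signal are hypothetical) shows the computation and its insensitivity to a spurious pattern phase-shifting at twice the selected rate:

```python
import numpy as np

def extract_phase(g):
    """Wrapped phase from 13 frames g[0..12] recorded at a pi/4 phase step
    of the selected interference pattern, using the Fourier-series weights
    quoted above. g has shape (13, ...) of per-pixel intensities."""
    num = (-3*(g[0] - g[12]) - 4*(g[1] - g[11]) + 12*(g[3] - g[9])
           + 21*(g[4] - g[8]) + 16*(g[5] - g[7]))
    den = (-4*(g[1] + g[11]) - 12*(g[2] + g[3] + g[9] + g[10])
           + 16*(g[5] + g[7]) + 24*g[6])
    return np.arctan2(num, den)

# Demo for one pixel: selected pattern has phase 0.7 rad (referenced to the
# central frame), plus a spurious pattern at twice the OPD, i.e. phase step
# pi/2 per frame, plus a constant background.
j = np.arange(13)
sel = np.cos(0.7 + (j - 6) * np.pi / 4)        # selected interference pattern
spur = 0.4 * np.cos(1.9 + (j - 6) * np.pi / 2) # spurious, 2x phase-shift rate
print(extract_phase(1.0 + sel + spur))          # ~0.7
```

With these weights both the constant background and the double-rate spurious term cancel exactly in the numerator and denominator, which is the sense in which the algorithm is "more sensitive" to the selected pattern's phase-shifting frequency.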
package de.worldiety.autocd.util;

public enum Environment {
    // Populated by the CI if environment is set in .gitlab-ci.yml
    CI_REGISTRY_USER,
    CI_REGISTRY_EMAIL,
    CI_REGISTRY,
    CI_PROJECT_NAME,
    CI_PROJECT_NAMESPACE,
    CI_REGISTRY_PASSWORD,
    // Set for the wdy namespace
    K8S_REGISTRY_USER_TOKEN,
    K8S_REGISTRY_USER_NAME
}
package loc.abondarev.sweater.util;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RedirectInterceptor extends HandlerInterceptorAdapter {

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        if (modelAndView != null) {
            // Tell Turbolinks which URL the server actually rendered
            final String arguments = request.getQueryString() != null ? request.getQueryString() : "";
            final String url = request.getRequestURI() + "?" + arguments;
            response.setHeader("Turbolinks-location", url);
        }
    }
}
import Vue from 'nativescript-vue'
import RadSideDrawer from 'nativescript-ui-sidedrawer/vue'

Vue.use(RadSideDrawer)

import Home from './components/Home.vue'
import HomeFlix from './components/HomeFlix.vue'
import App from './components/App.vue'
import GettingStarted from './components/GettingStarted.vue'

declare let __DEV__: boolean;

// Prints Vue logs when --env.production is *NOT* set while building
Vue.config.silent = !__DEV__

new Vue({
  render: h => h(App)
  // render: (h) => h('frame', [h(App)]),
}).$start()

// render: (h) => h('frame', [h(Home)]),
/**
 * GovernedAssetHandler retrieves governed assets and their governance classifications from the metadata
 * repositories. It runs server-side in the OMAS and retrieves entities through the OMRSRepositoryConnector.
 */
public class GovernedAssetHandler {

    private final String serviceName;
    private final String serverName;
    private final RepositoryHandler repositoryHandler;
    private final OMRSRepositoryHelper repositoryHelper;
    private final InvalidParameterHandler invalidParameterHandler;
    private final RepositoryErrorHandler errorHandler;
    private OpenMetadataServerSecurityVerifier securityVerifier = new OpenMetadataServerSecurityVerifier();
    private List<String> supportedZones;
    private ContextBuilder contextBuilder;

    /**
     * Construct the handler information needed to interact with the repository services
     *
     * @param serviceName             name of this service
     * @param serverName              name of the local server
     * @param invalidParameterHandler handler for managing parameter errors
     * @param repositoryHandler       manages calls to the repository services
     * @param repositoryHelper        provides utilities for manipulating the repository services objects
     * @param errorHandler            provides utilities for handling repository services errors
     * @param supportedZones          setting of the supported zones for the handler
     **/
    public GovernedAssetHandler(String serviceName, String serverName, InvalidParameterHandler invalidParameterHandler,
                                RepositoryHandler repositoryHandler, OMRSRepositoryHelper repositoryHelper,
                                RepositoryErrorHandler errorHandler, List<String> supportedZones) {
        this.serviceName = serviceName;
        this.serverName = serverName;
        this.invalidParameterHandler = invalidParameterHandler;
        this.repositoryHelper = repositoryHelper;
        this.repositoryHandler = repositoryHandler;
        this.errorHandler = errorHandler;
        this.supportedZones = supportedZones;

        contextBuilder = new ContextBuilder(serverName, repositoryHandler, repositoryHelper);
    }

    public void setSecurityVerifier(OpenMetadataServerSecurityVerifier securityVerifier) {
        this.securityVerifier = securityVerifier;
    }

    /**
     * Returns the list of governed assets with associated tags
     *
     * @param userId      userId of the user making the request
     * @param entityTypes entity types used to narrow the query
     * @param offset      query offset
     * @param pageSize    maximum number of results per page
     * @return list of governed assets
     */
    public List<GovernedAsset> getGovernedAssets(String userId, List<String> entityTypes, Integer offset, Integer pageSize)
            throws UserNotAuthorizedException, PropertyServerException, InvalidParameterException {
        String methodName = "getGovernedAssets";
        invalidParameterHandler.validateUserId(userId, methodName);

        List<EntityDetail> response = new ArrayList<>();

        if (CollectionUtils.isEmpty(entityTypes)) {
            response = repositoryHandler.getEntitiesForClassificationType(userId, null, SECURITY_TAG, offset, pageSize, methodName);
        } else {
            for (String typeName : entityTypes) {
                TypeDef typeDefByName = repositoryHelper.getTypeDefByName(userId, typeName);
                if (typeDefByName != null && typeDefByName.getGUID() != null) {
                    response.addAll(repositoryHandler.getEntitiesForClassificationType(userId, typeDefByName.getGUID(),
                            SECURITY_TAG, offset, pageSize, methodName));
                }
            }
        }

        return convertGovernedAssets(userId, response);
    }

    public GovernedAsset getGovernedAsset(String userId, String assetGUID)
            throws InvalidParameterException, PropertyServerException, UserNotAuthorizedException {
        String methodName = "getGovernedAsset";
        invalidParameterHandler.validateUserId(userId, methodName);

        EntityDetail entityDetailsByGUID = getEntityDetailsByGUID(userId, assetGUID, null);
        if (containsGovernedClassification(entityDetailsByGUID)) {
            return convertGovernedAsset(userId, entityDetailsByGUID);
        }

        return null;
    }

    public boolean containsGovernedClassification(EntityDetail entityDetail) {
        if (CollectionUtils.isEmpty(entityDetail.getClassifications())) {
            return false;
        }

        for (Classification classification : entityDetail.getClassifications()) {
            if (classification.getType() != null
                    && classification.getType().getTypeDefName() != null
                    && isGovernedClassification(classification.getType().getTypeDefName())) {
                return true;
            }
        }

        return false;
    }

    public boolean isSchemaElement(InstanceType entityType) {
        if (entityType == null) {
            return false;
        }
        return repositoryHelper.isTypeOf(serverName, entityType.getTypeDefName(), SCHEMA_ATTRIBUTE);
    }

    public String createSoftwareServerCapability(String userId, SoftwareServerCapability softwareServerCapability)
            throws UserNotAuthorizedException, PropertyServerException,
            org.odpi.openmetadata.commonservices.ffdc.exceptions.InvalidParameterException {
        String methodName = "createSoftwareServerCapability";
        invalidParameterHandler.validateUserId(userId, methodName);

        InstanceProperties initialProperties = getSoftwareServerCapabilityProperties(softwareServerCapability);

        return repositoryHandler.createEntity(userId, SOFTWARE_SERVER_CAPABILITY_GUID, SOFTWARE_SERVER_CAPABILITY,
                initialProperties, Collections.emptyList(), InstanceStatus.ACTIVE, methodName);
    }

    public SoftwareServerCapability getSoftwareServerCapabilityByGUID(String userId, String guid)
            throws InvalidParameterException, PropertyServerException, UserNotAuthorizedException {
        String methodName = "getSoftwareServerCapabilityByGUID";
        invalidParameterHandler.validateUserId(userId, methodName);

        EntityDetail entityDetailsByGUID = getEntityDetailsByGUID(userId, guid, SOFTWARE_SERVER_CAPABILITY);
        if (entityDetailsByGUID == null) {
            return null;
        }

        return convertSoftwareServerCapability(entityDetailsByGUID);
    }

    public GovernedAsset convertGovernedAsset(String userID, EntityDetail entity)
            throws InvalidParameterException, PropertyServerException, UserNotAuthorizedException {
        String methodName = "convertGovernedAsset";
        GovernedAsset governedAsset = new GovernedAsset();

        governedAsset.setGuid(entity.getGUID());
        governedAsset.setType(entity.getType().getTypeDefName());
        governedAsset.setFullQualifiedName(repositoryHelper.getStringProperty(serverName, QUALIFIED_NAME, entity.getProperties(), methodName));
        governedAsset.setName(repositoryHelper.getStringProperty(serverName, DISPLAY_NAME, entity.getProperties(), methodName));
        governedAsset.setContext(buildContext(userID, entity));

        if (entity.getClassifications() != null && !entity.getClassifications().isEmpty()) {
            governedAsset.setAssignedGovernanceClassification(getGovernanceClassification(entity.getClassifications()));
        }

        return governedAsset;
    }

    private GovernanceClassification getGovernanceClassification(List<Classification> allClassifications) {
        Optional<Classification> classification = filterGovernedClassification(allClassifications);
        return classification.map(this::getGovernanceClassification).orElse(null);
    }

    private GovernanceClassification getGovernanceClassification(Classification classification) {
        String methodName = "getInstanceProperties";
        GovernanceClassification governanceClassification = new GovernanceClassification();

        governanceClassification.setName(classification.getName());

        InstanceProperties properties = classification.getProperties();
        if (properties != null) {
            governanceClassification.setSecurityLabels(
                    repositoryHelper.getStringArrayProperty(serverName, SECURITY_LABELS, properties, methodName));
            governanceClassification.setSecurityProperties(
                    repositoryHelper.getStringMapFromProperty(serverName, SECURITY_PROPERTIES, properties, methodName));
        }

        return governanceClassification;
    }

    private Optional<Classification> filterGovernedClassification(List<Classification> classifications) {
        return classifications.stream().filter(c -> isGovernedClassification(c.getType().getTypeDefName())).findAny();
    }

    private boolean isGovernedClassification(String classificationName) {
        return SECURITY_TAG.equals(classificationName);
    }

    private List<GovernedAsset> convertGovernedAssets(String userID, List<EntityDetail> entityDetails)
            throws InvalidParameterException, PropertyServerException, UserNotAuthorizedException {
        if (CollectionUtils.isEmpty(entityDetails)) {
            return Collections.emptyList();
        }

        List<GovernedAsset> result = new ArrayList<>();
        for (EntityDetail entityDetail : entityDetails) {
            result.add(convertGovernedAsset(userID, entityDetail));
        }

        return result;
    }

    private Context buildContext(String userID, EntityDetail entity)
            throws InvalidParameterException, PropertyServerException, UserNotAuthorizedException {
        switch (entity.getType().getTypeDefName()) {
            case RELATIONAL_COLUMN:
                return contextBuilder.buildContextForColumn(userID, entity.getGUID());
            case RELATIONAL_TABLE:
                return contextBuilder.buildContextForTable(userID, entity.getGUID());
            default:
                return null;
        }
    }

    private EntityDetail getEntityDetailsByGUID(String userId, String guid, String entityType)
            throws PropertyServerException, UserNotAuthorizedException, InvalidParameterException {
        String methodName = "getEntityDetailsByGUID";
        return repositoryHandler.getEntityByGUID(userId, guid, "guid", entityType, methodName);
    }

    private SoftwareServerCapability convertSoftwareServerCapability(EntityDetail entityDetail) {
        InstanceProperties properties = entityDetail.getProperties();

        SoftwareServerCapability softwareServerCapability = new SoftwareServerCapability();
        softwareServerCapability.setGUID(entityDetail.getGUID());
        softwareServerCapability.setOpenTypeGUID(entityDetail.getType().getTypeDefName());
        softwareServerCapability.setName(getStringProperty(properties, NAME, repositoryHelper));
        softwareServerCapability.setDescription(getStringProperty(properties, DESCRIPTION, repositoryHelper));
        softwareServerCapability.setType(getStringProperty(properties, TYPE, repositoryHelper));
        softwareServerCapability.setPatchLevel(getStringProperty(properties, PATCH_LEVEL, repositoryHelper));
        softwareServerCapability.setVersion(getStringProperty(properties, VERSION, repositoryHelper));
        softwareServerCapability.setSource(getStringProperty(properties, SOURCE, repositoryHelper));

        return softwareServerCapability;
    }

    private InstanceProperties getSoftwareServerCapabilityProperties(SoftwareServerCapability softwareServerCapability) {
        InstanceProperties properties = new InstanceProperties();

        addStringProperty(softwareServerCapability.getName(), NAME, properties, repositoryHelper);
        addStringProperty(softwareServerCapability.getDescription(), DESCRIPTION, properties, repositoryHelper);
        addStringProperty(softwareServerCapability.getType(), TYPE, properties, repositoryHelper);
        addStringProperty(softwareServerCapability.getVersion(), VERSION, properties, repositoryHelper);
        addStringProperty(softwareServerCapability.getPatchLevel(), PATCH_LEVEL, properties, repositoryHelper);
        addStringProperty(softwareServerCapability.getSource(), SOURCE, properties, repositoryHelper);

        return properties;
    }

    private void addStringProperty(String propertyValue, String propertyName, InstanceProperties properties, OMRSRepositoryHelper repositoryHelper) {
        String methodName = "addStringProperty";
        if (propertyValue != null) {
            repositoryHelper.addStringPropertyToInstance(serverName, properties, propertyName, propertyValue, methodName);
        }
    }

    private String getStringProperty(InstanceProperties properties, String propertyName, OMRSRepositoryHelper repositoryHelper) {
        return repositoryHelper.getStringProperty(serverName, propertyName, properties, "getStringProperty");
    }
}
If you missed out on any of the earth-shattering revelations on the first season of CBS’ locally filmed summer series “Under the Dome,” don’t panic! They’re still under the dome. But to make sure you are all caught up for the new season, currently in production in the region, CBS has announced that it will air an hour-long recap special, titled “Under the Dome: Inside Chester’s Mill,” at 10 p.m. Monday, June 23. The special will feature a rehash of all the first season’s big moments, as well as new interviews with the cast and crew. And just to dangle one last tease in front of fans, those who tune into the special will get a sneak peek at footage from the new season, which will premiere the following week with an episode written by Stephen King. To stay up to date on all of the news coming out of the local production, check back with the “Under the Dome” page over at the StarNews.
// Listens for leadership changes. private class InternalLeadershipListener implements LeadershipEventListener { @Override public void event(LeadershipEvent event) { if (event.subject().topic().equals(CLUSTER_IP)) { processLeadershipChange(event.subject().leader()); } } }
package com.meyling.zeubor.core.biology.net;

import com.meyling.zeubor.core.biology.nerve.Axon;
import com.meyling.zeubor.core.biology.nerve.Dendrite;
import com.meyling.zeubor.core.biology.nerve.Glia;
import com.meyling.zeubor.core.biology.nerve.Neuron;

import java.util.ArrayList;
import java.util.List;

public final class NeuronalNet {

    private final List<Glia> glias;
    private Neuron neu11;
    private Neuron neu12;
    private Neuron neu13;
    private Neuron neu14;

    public static void main(String[] argv) {
        NeuronalNet net = new NeuronalNet();
        net.init();
        net.neu11.setFire(true);
        for (int i = 0; i < 50; i++) {
            if (i % 5 == 0) {
                net.neu11.setFire(true);
                net.neu12.setFire(true);
            }
            net.iterate();
        }
        net.printNeurons();
    }

    public NeuronalNet() {
        glias = new ArrayList<Glia>();
    }

    public void init() {
        System.out.println("init");
        // create input layer neurons
        // 01 02
        // 03 04
        neu11 = createNeuron();
        neu12 = createNeuron();
        neu13 = createNeuron();
        neu14 = createNeuron();

        // create application layer neurons
        Neuron neu21 = createNeuron();
        Neuron neu22 = createNeuron();
        Neuron neu23 = createNeuron();
        Neuron neu24 = createNeuron();

        // create output layer neurons
        Neuron neu31 = createNeuron();

        // add initial connections
        neu21.addDentrite(neu11, 100);
        // neu21.addDentrite(neu21, 100);  // recursive trigger ! FIXME
        // neu21.addDentrite(neu21, -900); // recursive trigger ! FIXME
        neu21.addDentrite(neu12, 10);
        neu21.addDentrite(neu13, 10);
        neu21.addDentrite(neu14, 5);
        neu22.addDentrite(neu11, 10);
        neu22.addDentrite(neu12, 100);
        neu22.addDentrite(neu13, 5);
        neu22.addDentrite(neu14, 10);
        neu23.addDentrite(neu11, 10);
        neu23.addDentrite(neu12, 5);
        neu23.addDentrite(neu13, 100);
        neu23.addDentrite(neu14, 5);
        neu24.addDentrite(neu11, 5);
        neu24.addDentrite(neu12, 10);
        neu24.addDentrite(neu13, 10);
        neu24.addDentrite(neu14, 100);
        neu31.addDentrite(neu21, 100);
        neu31.addDentrite(neu22, 100);
        neu31.addDentrite(neu23, 100);
        neu31.addDentrite(neu24, 100);
    }

    public Neuron createNeuron() {
        final Neuron neuron = new Neuron();
        final Glia glia = new Glia(neuron);
        glias.add(glia);
        return neuron;
    }

    public void iterate() {
        printNeurons();
        // route neuron fire to axons
        for (Glia glia : glias) {
            Neuron neuron = glia.getNeuron();
            glia.addFireEvent(neuron.getFire());
            for (Axon axon : neuron.getAxons()) {
                axon.setFire(neuron.getFire());
            }
            neuron.setFire(false);
        }
        // firing axon triggers dendrite fire
        for (Glia glia : glias) {
            Neuron neuron = glia.getNeuron();
            for (Dendrite dendrite : neuron.getDendrites()) {
                dendrite.setFire(dendrite.getAxon().getFire());
            }
        }
        // accumulate action potential for firing dendrites
        for (Glia glia : glias) {
            Neuron neuron = glia.getNeuron();
            for (Dendrite dendrite : neuron.getDendrites()) {
                if (dendrite.getFire()) {
                    final Glia g = dendrite.getAxon().getNeuron().getGlia();
                    List<Boolean> fireHistory = g.getFireHistory();
                    int c = 0;
                    int d = 0;
                    for (boolean fire : fireHistory) {
                        c++;
                        if (fire) {
                            d++;
                        }
                    }
                    // short term memory? boost the weight when the source neuron fired often recently
                    if (d > 5 && c > 0) {
                        c = dendrite.getWeight() * (100 + (d + c) * 100 / 2 / c) / 100;
                    } else {
                        c = dendrite.getWeight();
                    }
                    neuron.setPotential(neuron.getPotential() + c);
                }
            }
            neuron.setFire(neuron.getPotential() >= neuron.getLowerThreshold()
                    && neuron.getPotential() < neuron.getHigherThreshold());
            if (neuron.getPotential() >= neuron.getLowerThreshold()) {
                neuron.setPotential(neuron.getPotential() - neuron.getLowerThreshold());
            }
        }
    }

    public void printNeurons() {
        // print the firing state of every neuron
        for (Glia glia : glias) {
            Neuron neuron = glia.getNeuron();
            System.out.print(" " + (neuron.getFire() ? 'X' : '-'));
        }
        System.out.println();
    }
}
typedef unsigned int uint;
typedef unsigned short ushort;
typedef unsigned char uchar;
typedef uint pde_t;

// NEW
#define MAX_TOTAL_PAGES 32  // total pages a process may use
#define MAX_PSYC_PAGES 16   // pages in the physical memory
#define MAX_SWAP_PAGES 17   // pages in the swap file
#define AL_PAGES 32         // pages of process
#define OCCUPIED 1
#define UNOCCUPIED 0
#define COW 1               // copy-on-write
#define DEBUG 0
#define KDEBUG 0
// #define SCFIFO 0
// #define NFUA 1
// #define LAPA 2
// #define AQ 3
// #define NONE 4
Utilization of emergency psychiatry service in a tertiary care centre in north eastern India: A retrospective study

Background: In a developing country like India, with a lot of psychosocial stressors and ample stigma toward psychiatry, we studied the sociodemographic pattern of the patients coming to a tertiary care center for emergency psychiatry services and also evaluated the types and pattern of emergency services provided to them. We also assessed the predominant presenting complaints with which patients presented at the emergency department, the reasons for referral in an emergency by other departments, and the types of psychiatric diagnoses in the patients. Subjects and Methods: Data were extracted retrospectively from the general emergency and psychiatry emergency registers of Silchar Medical College and Hospital for 1 year and analyzed. Results: Out of 41,040 patients attending the hospital seeking emergency care, the referral rate to the psychiatric emergency was only 2.8%. The commonest presenting complaint of referred subjects was medically unexplained somatic complaints (47.70%). The main reason for referral from other departments was that no physical illness was detected in the patient (38.59%). About 78.8% of the subjects were diagnosed as having a proper psychiatric illness, with the majority presenting with stress-related and somatoform disorders (F40–49) (43.45%). Conclusion: This study highlights various important parameters regarding the emergency services being provided and their utilization by the patients attending a psychiatric emergency, which could be helpful for future policies and resource allocation for providing superior-quality and cost-effective mental health care to the patients.

INTRODUCTION

Emergency psychiatry is the service provided with the intention of providing immediate therapeutic interventions for "any disturbance in thoughts, feelings, or actions." The role of a psychiatrist in the emergency setup of a tertiary care center in consultation-liaison is manifold. He/She not only needs to address a person suffering from psychiatric illness but also needs to assess the associated bio-psycho-social problems and provide an appropriate opinion or management for immediate redressal. In a country like India, where psychiatric consultation is associated with a lot of social stigma, the study of psychiatric emergency services is an interesting and comprehensive way to recognize the subset of people who utilize psychiatric services, and it provides a gross idea about the prevalence of various psychiatric illnesses in the community. It also gives us information about how practitioners from other disciplines handle patients in need of emergency psychiatric help. A study like this also provides information about the common presenting complaints of the patients attending a psychiatric emergency, which may vary depending on the sociocultural characteristics of the area. Numerous such studies have been carried out in various countries, namely by Ang et al. in Singapore, Salkovskis et al. in England, and Stebbins and Hardman in Boston, United States of America. Newer works conducted in this decade include studies done by Shakya et al. in Nepal, Chaput et al. in Quebec, Canada, and Shahid et al. at Karachi, Pakistan. A few such studies have been conducted in India, like research by Kelkar et al. in Chandigarh, Bhatia et al. in Delhi, and Keertish et al. in Tumkur.
These kinds of studies provide the much-needed information required for better preparedness and to formulate strategies for emergency psychiatric and liaison-consultation services. However, most of the Indian data on this topic are from pre-1990s and with small sample size. Thus, with this background, we conducted this study to evaluate the specific important demographic variables and the predominant presenting complaints of the patients attending the emergency psychiatry department, to determine the various reasons for referral of these patients by other departments, and to gain knowledge about the primary psychiatric diagnosis established and the measures or steps taken after diagnosis of the patient. The present study had a large sample size and was done over a period of 1 year in a tertiary care center in the northeastern part of India. SUBJECTS AND METHODS This study was carried out in a tertiary care teaching hospital providing health services to most southern part of Assam, along with the neighboring states of Tripura, Meghalaya, Mizoram, and Manipur. This hospital provides a 24-h walk-in general emergency service in most of the medical disciplines including psychiatry. At first, the patient is attended by a postgraduate resident doctor on duty at emergency, where he/she evaluates the patient, provides the initial basic treatment, maintains a record of the workup, and, if required, refers the patient to appropriate specialty departments for further evaluation and treatment. Thus, when the patient comes to the psychiatry department, he/she is further evaluated by the resident doctor and the postgraduate resident of psychiatry on duty. Initial workup and evaluation of the patient are done, after which appropriate treatment or opinion is provided, and a record is kept in the departmental register. The psychiatry department emergency register contains data which include patients' hospital number, basic sociodemographic information, date and time of emergency visit, patients' complaint, the reason for referral, department from which the patient was referred, provisional diagnosis, medication prescribed, and, if required, department to which the patient is referred. This was a retrospective chart review study conducted after obtaining hospital ethics committee approval. Data were extracted from the general emergency and psychiatry department emergency register for 1 year from 1 November 2014 to 31 October 2015. RESULTS A total of 41,040 patients, including the patients asking for psychiatric interventions, attended the general emergency of Silchar Medical College and Hospital during the study period. The data were tabulated in Microsoft Excel spreadsheet under appropriate columns. Pivot charts were created in Microsoft Excel, and the data were grouped accordingly. The psychiatric diagnosis made provisionally was categorized according to the International Classification of Diseases version 10. The chief complaints of the patients were grouped appropriately. SPSS version 22 was used to evaluate the basic descriptive statistics. The total number of patients referred to psychiatry emergency -either directly from the emergency department or from various other departments -was 1153. Referral rate to psychiatry emergency was found to be 2.8%. The distribution of the specific important demographic variables of the patients is tabulated in Table 1. About 52.21% of the subjects were females while 47.78% were males. 
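The headline figures above follow directly from the reported counts. As a quick illustration, a few lines of pandas (standing in for the Excel pivot-chart and SPSS workflow the authors describe) reproduce the referral rate; the per-category counts are reconstructed from the reported percentages and are illustrative only.

# Recomputing the headline figures from the reported counts. Illustrative only:
# the authors used Excel pivot charts and SPSS, not this code.
import pandas as pd

total_emergency_visits = 41_040
referred_to_psychiatry = 1_153
print(f"Referral rate: {referred_to_psychiatry / total_emergency_visits:.1%}")  # 2.8%

# The kind of grouping used for chief complaints (counts back-computed from
# the reported 47.70% and 13.79% shares; "other" absorbs the remainder).
complaints = pd.Series(["somatic"] * 550 + ["abnormal behavior"] * 159 + ["other"] * 444)
print(complaints.value_counts(normalize=True).round(3))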
Table 2.1 shows the distribution of the total number of referrals from various departments. The predominant complaints with which the patients presented in the general emergency department are grouped and shown in Table 2.2. It shows that almost 47.70% of the patients presented with some sort of somatic complaint (any physical symptom that could not be explained by any detectable physical disorder, excluding headache). The next most common presentation was abnormal behavior (13.79%). The reasons for which the first-responder physician referred the patient from the general emergency to the psychiatry emergency are tabulated in Table 2.3, which shows that the maximum number of referrals were for cases where "no physical illness was detected" in the patient (38.59%). Table 2.4 shows the diagnostic evaluations of the total sample. Out of the total 1153 cases referred, a provisional diagnosis of proper psychiatric illness could be made in 909 cases (78.8%), whereas in 182 patients the diagnosis was deferred (15.78%), and in 62 patients (5.3%) a provisional diagnosis other than a psychiatric diagnosis was made. The outcome of those referrals is tabulated in Table 2.5. The provisional diagnoses according to ICD-10 categories across both genders, made by the attending psychiatrist or psychiatry resident at the psychiatry emergency department, are tabulated in Table 3. Table 4 shows the distribution of the individual psychiatric diagnoses according to gender as per ICD-10 criteria.

DISCUSSION

We found that the maximum number of patients attending the emergency in need of psychiatric consultation are in their third decade of life (34.61%), and the mean age of the subjects was 30.88 ± 13.38 years. The majority of cases (78.75%) having an ICD-10 psychiatric diagnosis were from the age range of 1–40 years, as compared to 41–80 years (21.24%). The majority of the cases in the category F10–19 were from the age range of 1–40 years (74.46%). Most people are likely to begin abusing drugs, including tobacco, alcohol, and illegal and prescription drugs, during adolescence and young adulthood; various studies suggest that by the time they are seniors in school, almost 70% of high school students will have tried alcohol, half will have taken an illegal drug, nearly 40% will have smoked a cigarette, and more than 20% will have used a prescription drug for a nonmedical purpose. Of the category F40–49, we found that 83.63% of the cases were from the age range of 1–40 years. In this study, anxiety disorders emerged as the most prevalent mental disorders in the general population. Martin observed that anxiety disorders are more prevalent in the younger age groups due to the presence of high stress during this period, which is similar to our study. Gender-wise, we found that the maximum number of cases with an ICD-10 psychiatric diagnosis were females (52.21%). Various national and international studies suggest that stress-related neurotic and anxiety disorders are more prevalent in women. A total of 41,040 patients attended the general emergency of the hospital in the given 1-year period, and 1153 patients were referred to the psychiatry emergency. The psychiatry referral rate from the emergency department was found to be 2.8%, and the result is comparable to that of other studies from the subcontinent.
Various factors, like the number of tertiary care centers available, the number of specialized psychiatry service centers present in the area, and sociocultural factors, affect the pattern of utilization of the emergency psychiatry services of a particular center. The doctor at emergency referred cases mostly when "no physical illness was detected" in the patient, followed by cases where "predominant psychiatric symptoms" were present. Most patients presented to the psychiatry emergency with some sort of somatic complaint (47.7%). The next most common presentation was abnormal and disorganized behavior (13.79%). The prevailing sociocultural stressors and the social unrest that has been going on for the last three decades in this part of the country may be indirectly contributing to the increased number of somatoform and stress-related disorders in our study. The above findings also show that patients who are referred to psychiatrists from emergency mainly present with somatic symptoms, and that physicians of other disciplines want to involve psychiatrists when they do not find any clinically relevant medical/surgical findings to explain the complaints of the patient. 74.15% of patients were direct referrals from the Department of Emergency, followed by referrals from the Department of Medicine (23.16%).

The referral-reason categories were defined as follows. Management of associated psychiatric symptoms: a provisional diagnosis of physical illness was made, along with associated psychiatric illness confirmed by the patient's previous records. Organic illness insufficient to explain symptoms: an organic illness, mostly neurological, was confirmed, but the associated behavioral abnormality could not be explained by this organic illness. Predominant psychiatric symptoms: the predominant presentation was a psychological/behavioral abnormality, with or without confirmed previous records, with minimal physical illness. No physical illness detected: some behavioral or psychological abnormality was present, but no physical abnormality was detected that could explain the nature and type of the psychological/behavioral abnormality.

Within the relevant diagnostic group, one diagnosis was significantly higher among males (44.12%) than females (19.64%), whereas depressive disorder (F32) was found to be more common in females (55.36%) than in males (32.35%), which is in line with the previously available literature. About 64.87% of the total patients were provided with emergency care and discharged after temporary observation, and only 7.72% of the total patients needed admission. Regarding the management of the patients at the psychiatry department, the routine emergency protocol was adhered to, which included initial management with pharmacotherapy followed by other interventions like brief psychotherapy and psychoeducation for the primary caregiver as well as other family members.

CONCLUSION

The aim of this audit of the data obtained here was to understand the specific important demographic variables and the predominant presenting complaints of the patients attending the emergency psychiatry department, to determine the various reasons for referral of these patients by other departments, and to gain knowledge about the primary psychiatric diagnoses established and the measures or steps taken after diagnosis, with a larger sample size.
Some recommendations can be made from our observations. First, there should be proper training of emergency health-care providers on common psychiatric disorders, as a large bulk of the patients with psychiatric disorders seem to visit the emergency department. Second, most patients with purely psychiatric problems come from the rural population; this signifies the necessity of improving the primary psychiatric care delivery system in this region. Finally, since this study highlights various important parameters regarding the emergency services provided and their utilization by the patients attending the psychiatric emergency, it could provide helpful information for future policies and resource allocation for providing superior-quality and cost-effective mental health care to the patients. However, this study had some limitations. As this is a tertiary care hospital-based study, its findings may not reflect the actual pattern of psychiatric illnesses requiring emergency psychiatric care that are prevalent in the community at large. Also, since this is a retrospective descriptive study, the final outcomes of the patients receiving emergency services were not evaluated. Further prospective studies are recommended on this topic for better evaluation of the various parameters.

Financial support and sponsorship: Nil.
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.core.urlresolvers import reverse_lazy
from django.views.generic import CreateView
from django.views.generic import DeleteView
from django.views.generic import ListView
from django.views.generic import UpdateView
from django.shortcuts import get_object_or_404, render, redirect

from braces.views import LoginRequiredMixin

from personas.forms import ProfesionalForm
from subastas.forms import InscriptionForm
from subastas.models import Subasta

from .models import Persona, Profesional


@login_required
def asociar(request, subasta_id):
    subasta = get_object_or_404(Subasta, pk=subasta_id)
    if request.method == "POST":
        form = InscriptionForm(request.POST, instance=subasta)
        if form.is_valid():
            personas = form.cleaned_data.get('personas')
            subasta.personas.add(*personas)
            msg = 'Users added successfully.'
        else:
            msg = 'No people were selected.'
        messages.add_message(request, messages.INFO, msg)
    return redirect(reverse_lazy('subastas:acreditadores') + '?tab=search')


@login_required
def desasociar(request, subasta_id, persona_id):
    subasta = get_object_or_404(Subasta, pk=subasta_id)
    persona = get_object_or_404(Persona, pk=persona_id)
    subasta.personas.remove(persona)
    messages.add_message(request, messages.INFO, 'The registration was removed successfully.')
    return redirect(reverse_lazy('subastas:acreditadores') + '?tab=search')


class ProfesionalListView(LoginRequiredMixin, ListView):
    model = Profesional
    template_name = 'personas/profesionales/list.html'

    def get_queryset(self):
        return Profesional.objects.all().order_by('titulo')


class ProfesionalCreateView(LoginRequiredMixin, CreateView):
    form_class = ProfesionalForm
    model = Profesional
    template_name = 'personas/profesionales/form.html'
    success_url = reverse_lazy('personas:profesionales_list')


class ProfesionalUpdateView(LoginRequiredMixin, UpdateView):
    context_object_name = 'instance'
    form_class = ProfesionalForm
    model = Profesional
    template_name = 'personas/profesionales/form.html'
    success_url = reverse_lazy('personas:profesionales_list')


class ProfesionalDeleteView(LoginRequiredMixin, DeleteView):
    model = Profesional
    template_name = 'personas/profesionales/confirm_delete.html'
    success_url = reverse_lazy('personas:profesionales_list')
from setuptools import setup, find_packages

setup(
    name='elopy',
    version='0.1.0',
    packages=find_packages(),
    url='https://github.com/Petesuchos/elo-calculator',
    license='MIT',
    author='<NAME>',
    author_email='<EMAIL>',
    description='A simple ELO rating calculator',
    install_requires=[
        'Click',
    ],
    entry_points='''
        [console_scripts]
        elo=elopy.scripts.cli:cli
    ''',
)
Technical Solution for the Disposal of Solid Slag from Metallurgical Plants with Production of Abrasive Powders

Abstract: This article considers a technical solution for the production of abrasive powders according to the standard ISO 11126 from copper slag and nickel slag with the use of air classification. The selection of an air classifier for the classification of copper slag is justified. The results of laboratory studies on the effect of the consumption concentration on the quality of the separation of slag particles in an apparatus with an inclined louver lattice with reverse air suction are presented. The article then discusses the dependence of the material separation boundary on the air flow rate through the classifier's louver. Based on the theoretical calculation, an industrial apparatus with a capacity of 50 t/h of initial raw material was developed; the laboratory results were scaled up, industrial testing was carried out, and its results are shown.

Introduction

At present, the most common raw material for the production of abrasive blasting powders is granulated slag of copper-smelting and nickel production. Slag granules have high Mohs hardness and a sharp angular shape. Copper slag and nickel slag contain quartz only in bound form, which distinguishes them environmentally from quartz powders. Since the relative density of slag granules is higher than that of most abrasive materials, they have a higher kinetic impact energy. The abrasive is not a metal alloy, so it complies with ISO 11126. The initial abrasive particle size distribution is between 0.1 and 3.5 mm. However, according to ISO 11126, the maximum grain size shall not exceed 3.15 mm, and the content of the fraction below 0.2 mm shall not exceed 5%.

Technology

Granulation of copper slag and nickel slag is usually done by mechanical crushing (spraying) of the mineral melt in water. In order to obtain the required abrasive fraction, it is necessary to dry the raw material and sort it. The main method of dry separation of bulk materials by grain size is screening: the separation of the raw material by grain size on a screening surface with calibrated holes. Screening efficiency is affected by the size of the screen holes; however, when the raw material passes over the screen, only a certain number of grains passes through each hole per unit time, and this number is roughly constant regardless of hole size. It is well known that fine screening requires dozens of times more production floor space and, taking into account the significant abrasive wear of fine screens, is hardly feasible for the purpose of dry separation. Screening is used most efficiently when the boundary grain size is at least 1–1.5 mm. An alternative to fine screening is pneumatic classification: separation by weight and shape using the aerodynamic characteristics of the grains in an air stream. Numerous studies show the high efficiency of air classification at boundaries of less than 1 mm. The available experience of industrial application of air classifiers for granulated slag screening has revealed the most suitable separator design: an apparatus with an inclined louver lattice with reverse air suction (see Figure 1).
A distinctive feature of the apparatus is that neither the raw-material loading unit nor the coarse-material unloading unit is sealed; meanwhile, the fine product is deposited first in the settling chamber built into the classifier and then in the cyclone.

Experiment

When designing an industrial classifier, it is important to know not only the 'sharpness' or clarity of the powder separation at a given separation boundary, but also its energy efficiency, which is directly related to the specific load of the initial feed. In order to determine this parameter (the ratio of the material consumption to the air flow rate), an experiment was carried out in which the consumption concentration of the initial copper slag was varied in the range from 1 to 7 kg/m³. From the obtained data (see Figure 2), it follows that the classifier works stably up to a concentration of 6 kg/m³ with an efficiency of about 60% according to the Eder-Mayer criterion. Slag (Table 1) with a grain size of less than 5 mm was fed into the classifier. Knowing the initial granulometric composition, the given capacity of the initial supply, the efficiency, and the separation boundary, it is easy to calculate the basic dimensions of the industrial classifier. In order to select the fan, it is necessary to know the influence of the air flow rate on the boundary grain (see Figure 3). According to the method, a mathematical model of the process of separation of copper slag at the boundary of 500 microns at the given capacity of the initial supply was constructed.

Industrial Testing

Based on experimental studies and calculations using the classification process model, an industrial apparatus was developed and tested.
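As a worked illustration of the sizing step just described, the sketch below combines the 50 t/h design capacity with the experimentally determined stable limit of 6 kg/m³; the function name and the 10% margin are assumptions for illustration, not values from the paper.

# Sizing sketch based on the reported figures: 50 t/h feed and a maximum
# stable consumption concentration of 6 kg/m^3 (Eder-Mayer efficiency ~60%).
# The 10% margin is an illustrative assumption, not from the study.

def required_air_flow(feed_t_per_h: float, max_concentration_kg_m3: float,
                      margin: float = 0.10) -> float:
    """Minimum air flow (m^3/h) keeping the feed below the stable concentration."""
    feed_kg_per_h = feed_t_per_h * 1000.0
    return feed_kg_per_h / max_concentration_kg_m3 * (1.0 + margin)

q_air = required_air_flow(feed_t_per_h=50.0, max_concentration_kg_m3=6.0)
print(f"Minimum air flow: {q_air:,.0f} m^3/h")  # ~9,167 m^3/h with the margin

Before any margin, the fan must therefore supply roughly 8,300 m³/h of air for the 50 t/h apparatus.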
from enums.enums import MediusIdEnum
from utils import utils
from enums.enums import MediusEnum, CallbackStatus


class CheckMyClanInvitationsResponseSerializer:
    data_dict = [
        {'name': 'mediusid', 'n_bytes': 2, 'cast': None}
    ]

    @classmethod
    def build(cls, message_id, callback_status,
              clan_invitation_id=0, clan_id=0, response_status=0,
              message='', leader_account_id=0, leader_account_name='',
              end_of_list=1):
        packet = [
            {'name': __name__},
            {'mediusid': MediusIdEnum.CheckMyClanInvitationsResponse},
            {'message_id': message_id},
            {'buf': utils.hex_to_bytes("000000")},
            {'callback_status': utils.int_to_bytes_little(4, callback_status, signed=True)},
            {'clan_invitation_id': utils.int_to_bytes_little(4, clan_invitation_id)},
            {'clan_id': utils.int_to_bytes_little(4, clan_id)},
            {'response_status': utils.int_to_bytes_little(4, response_status)},
            {'message': utils.str_to_bytes(message, MediusEnum.CLANMSG_MAXLEN)},
            {'leader_account_id': utils.int_to_bytes_little(4, leader_account_id)},
            {'leader_account_name': utils.str_to_bytes(leader_account_name, MediusEnum.ACCOUNTNAME_MAXLEN)},
            {'end_of_list': utils.int_to_bytes_little(4, end_of_list)},
        ]
        return packet


class CheckMyClanInvitationsResponseHandler:
    def process(self, serialized, monolith, con):
        raise Exception('Unimplemented Handler: CheckMyClanInvitationsResponseHandler')
Simulations of Heat Supply Performance of a Deep Borehole Heat Exchanger under Different Scheduled Operation Conditions

With the changing world energy structure, the development of renewable energy sources is gradually accelerating. Among them, close attention has been given to geothermal energy because of its abundant resources and supply stability. In this article, a deep borehole heat exchanger (DBHE) is coupled with a heat pump system, and the heat supply and daily electricity consumption of the system are calculated. To make better use of the peaks and valleys in electricity prices, the following three daily operating modes were studied: 24-h operation (Mode 1), 8-h operation plus 16-h non-operation (Mode 2), and two cycles of 4-h operation and 8-h non-operation (Mode 3). Simulation results show that scheduled non-continuous operation can effectively improve the outlet temperature of the heat-extraction fluid circulating in the DBHE. The heat extraction rate in Mode 1 is 190.9 kW at a mass flow rate of 9 kg/s; in the Mode 2 and Mode 3 cases, the rates rise to 304.7 kW and 293.0 kW, respectively. The daily operational electricity cost of Mode 1 is the greatest because of its 24-h operation; due to scheduled non-continuous operation, the daily operational electricity cost of Mode 3 is only about 66% of that of Mode 2. After an 8-month period without heating, the formation temperature can be restored to within 4 °C of its original state; 90% recovery of the formation temperature can be achieved by the end of the second month of the non-operation season.
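A rough sketch of why the scheduling matters for cost: if the pump draws a fixed power while running, the daily bill is just the hours run multiplied by the tariff of the band those hours fall in. Everything below (the tariff levels, the 30 kW pump draw, and the band assignments) is a hypothetical illustration, not data from the study, which reports the Mode 3 cost at about 66% of Mode 2.

# Hypothetical tariff bands and pump power; only the scheduling logic is the point.
PEAK, FLAT, VALLEY = 0.15, 0.10, 0.05  # placeholder $/kWh values

def daily_cost(pump_kw: float, hours_by_band: dict) -> float:
    """Electricity cost of running the pump for the given hours in each band."""
    price = {"peak": PEAK, "flat": FLAT, "valley": VALLEY}
    return sum(pump_kw * h * price[band] for band, h in hours_by_band.items())

# Mode 2: one 8-h block, assumed here to straddle peak and flat hours.
mode2 = daily_cost(30.0, {"peak": 4, "flat": 4})
# Mode 3: two 4-h blocks, assumed here to be scheduled into valley and flat hours.
mode3 = daily_cost(30.0, {"valley": 4, "flat": 4})
print(f"Mode 3 / Mode 2 cost ratio: {mode3 / mode2:.2f}")  # 0.60 under these tariffs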
# coding=utf-8
from skater.util import exceptions
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
import numbers
import numpy as np
import pandas as pd
import rpy2.robjects as ro

pandas2ri.activate()


class BRLC(object):
    """ :: Experimental :: The implementation is currently experimental and might change in future

    BRLC(Bayesian Rule List Classifier) is a python wrapper for SBRL(Scalable Bayesian Rule list).
    SBRL is a scalable Bayesian Rule List. It's a generative estimator to build hierarchical
    interpretable decision lists. This python wrapper is an extension to the work done by
    Professor <NAME>, <NAME>, <NAME>, <NAME> and others. For more information check out the
    reference section below.

    Parameters
    ----------
    iterations: int (default=30000)
        number of iterations for each MCMC chain.
    pos_sign: int (default=1)
        sign for the positive labels in the "label" column.
    neg_sign: int (default=0)
        sign for the negative labels in the "label" column.
    min_rule_len: int (default=1)
        minimum cardinality for rules to be mined from the data-frame.
    max_rule_len: int (default=8)
        maximum cardinality for rules to be mined from the data-frame.
    min_support_pos: float (default=0.1)
        a number between 0 and 1, the minimum percentage support for the positive observations.
    min_support_neg: float (default=0.1)
        a number between 0 and 1, the minimum percentage support for the negative observations.
    eta: int (default=1)
        a hyper-parameter for the expected cardinality of the rules in the optimal rule list.
    n_chains: int (default=10)
    alpha: int (default=1)
        a prior pseudo-count for the positive(alpha1) and negative(alpha0) classes. Default values (1, 1)
    lambda_: int (default=8)
        a hyper-parameter for the expected length of the rule list.
    discretize: bool (default=True)
        apply discretizer to handle continuous features.
    drop_features: bool (default=False)
        once continuous features are discretized, use this flag to either retain or drop them from the dataframe

    References
    ----------
    .. [1] Letham et.al(2015) Interpretable classifiers using rules and Bayesian analysis:
           Building a better stroke prediction model (https://arxiv.org/abs/1511.01644)
    .. [2] Yang et.al(2016) Scalable Bayesian Rule Lists (https://arxiv.org/abs/1602.08610)
    .. [3] https://github.com/Hongyuy/sbrl-python-wrapper/blob/master/sbrl/C_sbrl.py

    Examples
    --------
    >>> from skater.core.global_interpretation.interpretable_models.brlc import BRLC
    >>> import pandas as pd
    >>> from sklearn.datasets.mldata import fetch_mldata
    >>> input_df = fetch_mldata("diabetes")
    ...
    >>> Xtrain, Xtest, ytrain, ytest = train_test_split(input_df, y, test_size=0.20, random_state=0)
    >>> sbrl_model = BRLC(min_rule_len=1, max_rule_len=10, iterations=10000, n_chains=20, drop_features=True)
    >>> # Train a model; the discretizer is enabled by default. If you wish to exclude
    >>> # features from discretization, use the undiscretize_feature_list parameter
    >>> model = sbrl_model.fit(Xtrain, ytrain, bin_labels="default")
    >>> # print the learned model
    >>> sbrl_inst.print_model()
    >>> features_to_descritize = Xtrain.columns
    >>> Xtrain_filtered = sbrl_model.discretizer(Xtrain, features_to_descritize, labels_for_bin="default")
    >>> predict_scores = sbrl_model.predict_proba(Xtest)
    >>> _, y_hat = sbrl_model.predict(Xtest)
    >>> # save and reload the model and continue with evaluation
    >>> sbrl_model.save_model("model.pkl")
    >>> sbrl_model.load_model("model.pkl")
    >>> # to access all the learned rules
    >>> sbrl_model.access_learned_rules("all")

    # For a complete example refer to the rule_lists_continuous_features.ipynb or
    # rule_lists_titanic_dataset.ipynb notebook
    """
    _estimator_type = "classifier"

    def __init__(self, iterations=30000, pos_sign=1, neg_sign=0, min_rule_len=1,
                 max_rule_len=8, min_support_pos=0.10, min_support_neg=0.10,
                 eta=1.0, n_chains=10, alpha=1, lambda_=10, discretize=True, drop_features=False):
        self.__r_sbrl = importr('sbrl')
        self.model = None
        self.__as_factor = ro.r['as.factor']
        self.__s_apply = ro.r['lapply']
        self.__r_frame = ro.r['data.frame']
        self.model_params = {
            "iters": iterations,
            "pos_sign": pos_sign,
            "neg_sign": neg_sign,
            "rule_minlen": min_rule_len,
            "rule_maxlen": max_rule_len,
            "minsupport_pos": min_support_pos,
            "minsupport_neg": min_support_neg,
            "eta": eta,
            "nchain": n_chains,
            "lambda": lambda_,
            "alpha": alpha
        }
        self.__discretize = discretize
        self.__drop_features = drop_features
        self.discretized_features = []
        self.feature_names = []

    def set_params(self, params):
        """ Set model hyper-parameters """
        self.model_params[list(params.keys())[0]] = list(params.values())[0]

    def discretizer(self, X, column_list, no_of_quantiles=None, labels_for_bin=None, precision=3):
        """ A discretizer for continuous features

        Parameters
        -----------
        X: pandas.DataFrame
            Dataframe containing continuous features
        column_list: list/tuple
        no_of_quantiles: int or list
            Number of quantiles, e.g. deciles(10), quartiles(4), or a list of
            quantiles [0, .25, .5, .75, 1.]; if None then [0, .25, .5, .75, 1.] is used
        labels_for_bin: labels for the resulting bins
        precision: int
            precision for storing and creating bins

        Returns
        --------
        new_X: pandas.DataFrame
            Contains discretized features

        Examples
        ---------
        >>> sbrl_model = BRLC(min_rule_len=1, max_rule_len=10, iterations=10000, n_chains=20, drop_features=True)
        >>> ...
        >>> features_to_descritize = Xtrain.columns
        >>> Xtrain_discretized = sbrl_model.discretizer(Xtrain, features_to_descritize, labels_for_bin="default")
        >>> predict_scores = sbrl_model.predict_proba(Xtrain_discretized)
        """
        if not isinstance(X, pd.DataFrame):
            raise TypeError("Only pandas.DataFrame as input type is currently supported")

        q_value = [0, .25, .5, .75, 1.] if no_of_quantiles is None else no_of_quantiles
        q_labels = [1, 2, 3, 4] if labels_for_bin == 'default' else labels_for_bin
        new_X = X.copy()
        for column_name in column_list:
            new_clm_name = '{}_q_label'.format(column_name)
            self.discretized_features.append(new_clm_name)
            new_X.loc[:, new_clm_name] = pd.qcut(X[column_name].rank(method='first'),
                                                 q=q_value, labels=q_labels,
                                                 duplicates='drop', precision=precision)
            # Drop the continuous feature column which has been discretized
            new_X = new_X.drop([column_name], axis=1) if self.__drop_features else new_X
            # explicitly convert the labels column to 'str' type
            new_X = new_X.astype(dtype={'{}_q_label'.format(column_name): "str"})
        return new_X

    def _filter_continuous_features(self, X, column_list=None):
        import collections.abc
        # Sequence is a base class for list and tuple; column_list could be of either type
        if column_list is not None and not isinstance(column_list, collections.abc.Sequence):
            raise TypeError("Only list/tuple type supported for specifying column list")
        c_l = X.columns if column_list is None else column_list
        # To check for numeric type, validate against numbers.Number (base class for numeric types)
        # Reference[PEP-3141]: https://www.python.org/dev/peps/pep-3141/
        numeric_type_columns = tuple(filter(lambda c_name: isinstance(X[c_name].iloc[0], numbers.Number), c_l))
        return numeric_type_columns

    # a helper function to filter unwanted features
    filter_to_be_discretize = lambda self, clmn_list, unwanted_list: \
        tuple(filter(lambda c_name: c_name not in unwanted_list, clmn_list))

    def fit(self, X, y_true, n_quantiles=None, bin_labels='default', undiscretize_feature_list=None, precision=3):
        """ Fit the estimator.

        Parameters
        -----------
        X: pandas.DataFrame object, that could be used by the model for training.
            It must not have a column named 'label'
        y_true: pandas.Series, 1-D array to store ground truth labels

        Returns
        -------
        SBRL model instance: rpy2.robjects.vectors.ListVector

        Examples
        ---------
        >>> from skater.core.global_interpretation.interpretable_models.brlc import BRLC
        >>> sbrl_model = BRLC(min_rule_len=1, max_rule_len=10, iterations=10000, n_chains=20, drop_features=True)
        >>> # Train a model; the discretizer is enabled by default. If you wish to exclude
        >>> # features from discretization, use the undiscretize_feature_list parameter
        >>> model = sbrl_model.fit(Xtrain, ytrain, bin_labels="default")
        """
        if len(np.unique(y_true)) != 2:
            raise Exception("Supports only binary classification right now")
        if not isinstance(X, pd.DataFrame):
            raise exceptions.DataSetError("Only pandas.DataFrame as input type is currently supported")

        # Conditions being managed
        # 1. if 'undiscretize_feature_list' is empty and the discretization flag is enabled,
        #    discretize 'all' continuous features
        # 2. if 'undiscretize_feature_list' is not empty and the discretization flag is enabled,
        #    filter out the features that are not needed
        for_discretization_clmns = tuple(filter(lambda c_name: c_name not in undiscretize_feature_list, X.columns)) \
            if undiscretize_feature_list is not None else tuple(X.columns)

        data = self.discretizer(X, self._filter_continuous_features(X, for_discretization_clmns),
                                no_of_quantiles=n_quantiles, labels_for_bin=bin_labels, precision=precision) \
            if self.__discretize is True else X

        # record all the feature names
        self.feature_names = data.columns
        data.loc[:, "label"] = y_true
        data_as_r_frame = self.__r_frame(self.__s_apply(data, self.__as_factor))
        self.model = self.__r_sbrl.sbrl(data_as_r_frame, **self.model_params)
        return self.model

    def save_model(self, model_name, compress=True):
        """ Persist the model for future use """
        import joblib
        if self.model is not None:
            joblib.dump(self.model, model_name, compress=compress)
        else:
            raise Exception("SBRL model is not fitted yet; no relevant model instance present")

    def load_model(self, serialized_model_name):
        """ Load a serialized model """
        import joblib
        try:
            # update the BRLC model instance with the loaded model
            self.model = joblib.load(serialized_model_name)
        except (OSError, IOError) as err:
            print("Something is not right with the serialization format. Details {}".format(err))
            raise

    def predict_proba(self, X):
        """ Computes possible class probabilities for the input 'X'

        Parameters
        -----------
        X: pandas.DataFrame object

        Returns
        -------
        pandas.DataFrame of shape (#datapoints, 2), the possible probability of each class
        for each observation
        """
        if not isinstance(X, pd.DataFrame):
            raise exceptions.DataSetError("Only pandas.DataFrame as input type is currently supported")
        data_as_r_frame = self.__r_frame(self.__s_apply(X, self.__as_factor))
        results = self.__r_sbrl.predict_sbrl(self.model, data_as_r_frame)
        return pandas2ri.ri2py_dataframe(results).T

    def predict(self, X=None, prob_score=None, threshold=0.5, pos_label=1):
        """ Predict the class for input 'X'.
        The predicted class is determined by setting a threshold. Adjust the threshold
        to balance between sensitivity and specificity

        Parameters
        -----------
        X: pandas.DataFrame
            input examples to be scored
        prob_score: pandas.DataFrame or None (default=None)
            If set to None, `predict_proba` is called before computing the class labels.
            If you have access to probability scores already, use the dataframe of
            probability scores to compute the final class label
        threshold: float (default=0.5)
        pos_label: int (default=1)
            specify how to identify the positive label

        Returns
        -------
        y_prob, y_prob['label']: pandas.Series, numpy.ndarray
            Contains the probability score for the input 'X'
        """
        # TODO: Extend it for multi-class classification
        probability_df = self.predict_proba(X) if X is not None and prob_score is None else prob_score
        y_prob = probability_df.loc[:, pos_label]
        y_prob['label'] = np.where(y_prob.values > threshold, 1, 0)
        return y_prob, y_prob['label']

    def print_model(self):
        """ print the decision stumps of the learned estimator """
        self.__r_sbrl.print_sbrl(self.model)

    def access_learned_rules(self, rule_indexes="all"):
        """ Access all learned decision rules. This is useful for building and developing intuition

        Parameters
        ----------
        rule_indexes: str (default="all", retrieves all the rules)
            Specify the index of the rules to be retrieved;
            the index could be set as 'all', or a range could be specified, e.g.
'(1:3)' will retrieve the rules 1 and 2 """ if not isinstance(rule_indexes, str): raise TypeError('Expected type string {} provided'.format(type(rule_indexes))) # Convert model properties into a readable python dict result_dict = dict(zip(self.model.names, map(list, list(self.model)))) indexes_func = lambda indexes: [int(v) for v in indexes.split(':')] # original index starts from 0 while the printed index starts from 1, hence adjust the index rules_filter = lambda all_rules, indexes: all_rules['rulenames'][(indexes[0] - 1):(indexes[1] - 1)] \ if rule_indexes.find(':') > -1 else all_rules['rulenames'][indexes[0] - 1] # Enable the ability to access single or multiple sequential model learned decisions rules_result = result_dict['rulenames'] if rule_indexes == "all" \ else rules_filter(result_dict, indexes_func(rule_indexes)) return rules_result
import type {Context, OffsetQueryResult, Service} from '@core'; import type {GenerateTokenCommand} from './generate-token-command'; import type {RegisterWithTokenCommand} from './register-with-token-command'; import type {RegisterCommand} from './register-command'; import type {EmailExistsQuery} from './email-exists-query'; import type {User} from './user'; import type {UsernameExistsQuery} from './username-exists-query'; export type UserService = Service<string, User, OffsetQueryResult<string, User>> & { registerWithToken: (command: RegisterWithTokenCommand, context: Context) => Promise<void>; generateToken: ( command: GenerateTokenCommand, context: Context, ) => Promise<{loginToken: string; refreshToken: string; accessToken: string}>; register: ( command: RegisterCommand, context: Context, ) => Promise<{loginToken: string; refreshToken: string; accessToken: string}>; emailExists: (query: EmailExistsQuery, context: Context) => Promise<boolean>; usernameExists: (query: UsernameExistsQuery, context: Context) => Promise<boolean>; };
import { PERMISSIONS } from '@common/constants/permission';
import { Auth } from '@common/decorators/auth.decorator';
import { UseCrud } from '@common/decorators/crud.decorator';
import { Body, Controller, Get, Param, ParseIntPipe } from '@nestjs/common';
import { ApiOperation, ApiTags } from '@nestjs/swagger';
import { CrudController, Override } from '@nestjsx/crud';
import { CreateTreeExampleDto } from '../dto/create.dto';
import { UpdateTreeExampleDto } from '../dto/update.dto';
import { TreeExample } from '../index.entity.example';
import { TreeExampleService } from '../service';

@ApiTags('{{entity}}s')
@UseCrud(TreeExample, {
  dto: {
    create: CreateTreeExampleDto,
    update: UpdateTreeExampleDto,
  },
  routes: { only: ['createOneBase', 'updateOneBase', 'getManyBase'] }
})
@Controller('{{entity}}s')
export class TreeExampleController implements CrudController<TreeExample> {
  constructor(public service: TreeExampleService) {}

  @Override('createOneBase')
  @Auth(PERMISSIONS.CATEGORY.CREATE_ALL)
  createOne(@Body() dto: CreateTreeExampleDto): Promise<TreeExample> {
    return this.service.createOne(dto)
  }

  @Override('getManyBase')
  getManyTree(): Promise<TreeExample[]> {
    return this.service.getManyTree()
  }

  @Get('/roots')
  @ApiOperation({ summary: 'Retrieve many Root Items' })
  async getManyRoot() {
    return this.service.getManyRoots()
  }

  @Get('/:id/children')
  @ApiOperation({ summary: 'Retrieve Children Items' })
  async getChildren(@Param('id', ParseIntPipe) id: number): Promise<TreeExample> {
    return this.service.getTreeChildren(id)
  }

  @Get('/:id/parent')
  @ApiOperation({ summary: 'Retrieve one closest parent' })
  async getOneClosestParent(@Param('id', ParseIntPipe) id: number): Promise<TreeExample> {
    return this.service.getOneClosestParent(id)
  }
}
/**
 * Returns true if the DictionaryCollection is able to sort safely.
 * @param throwError if true, rethrow any failure raised during the trial sort
 * @return true if the node values can be sorted without error
 * @throws java.lang.Exception if sorting fails and throwError is set
 */
public boolean canSafelySort(boolean throwError) throws Exception {
    boolean ret = true;
    try {
        // trial sort on a copy so the underlying map is never mutated
        Collections.sort(new ArrayList<>(nodeMap.values()));
    } catch (Exception e) {
        if (throwError) {
            throw new Exception(e);
        }
        ret = false;
    }
    return ret;
}
import { buildSchema, prop as Property } from '@typegoose/typegoose';
import { ObjectId } from 'bson';
import { Schema } from 'mongoose';
import { Field, ObjectType } from 'type-graphql';
import { ObjectIdScalar } from '../../common/graphql-scalars/object-id.scalar';
import { User } from '../../users/models/user.schema';

@ObjectType()
export class Report {
  @Field(() => ObjectIdScalar)
  readonly _id: ObjectId;

  @Field()
  @Property({ required: true, maxlength: 50 })
  title: string;

  @Field()
  @Property({ required: true, maxlength: 250 })
  description: string;

  @Field(() => ObjectIdScalar)
  @Property({ ref: User })
  reportedBy: ObjectId;
}

export const ReportSchema: Schema<typeof Report> = buildSchema(Report);
VANCOUVER WHITECAPS want former Celtic chief scout John Park to be their new global head of recruitment. The ambitious MLS outfit — backed by Jeff Mallett, the man who built internet giant Yahoo! — have made Park an offer and hope to be able to agree a deal with him next week. He was ten years at Parkhead before quitting the Hoops last October and has attracted interest from clubs in England and Europe. But Whitecaps chiefs are confident they can secure Park, who was also head of development at Hibs before scouting the likes of Victor Wanyama, Ki Sung Yueng, Virgil van Dijk, Tom Rogic and Moussa Dembele for Celtic. Whitecaps boss Carl Robinson — a 40-year-old Welshman who played for Wolves, Sunderland and Norwich — is keen to work with Park and has also given David Templeton a chance. The 28-year-old former Hearts and Rangers winger has joined Whitecaps’ pre-season training camp in Wales and featured as a trialist in a bounce game against Cardiff City. He is now hoping to impress in games against Newport County and Bristol City, and possibly land a deal.
The Clinton Tapes: Wrestling History With The President

As he began his presidency, Bill Clinton invited his friend Taylor Branch, the journalist and award-winning historian, to become the Arthur Schlesinger of his administration: a chronicler of momentous events as they happened. Preoccupied with his own writing, Branch turned down the offer, but he agreed to visit Clinton periodically to conduct a series of taped conversations. Branch alternately calls the project an oral history and a diary, but the seventy-nine sessions, held almost monthly throughout Clinton's two terms, came closer to after-action debriefings. Recorded shortly after the events happened, the tapes captured Clinton's immediate reactions to the people and incidents he described. Since Clinton intended to use the interviews for his autobiography, and eventually to make them available for researchers at his library, he kept the tapes of each session. But when Branch drove home from the White House to Baltimore, he would dictate what he remembered about each encounter, with the idea of writing his own account. Unable to consult Clinton's tapes, Branch had to rely on his memory, bemoaning his inability to reproduce verbatim the president's astute and articulate observations, the very reason why oral historians record their interviews.
import { cx } from "emotion";
import * as React from "react";
import {
  disabledStyle,
  selectedStar,
  unSelectedStar,
  wrapStyle
} from "./styles/Rating.styles";
import { RatingProps, RatingState } from "./typings/Rating";

function generateStars(maxRating: number, selectedValue: number) {
  return Array.from({ length: maxRating }, (_, i) => {
    return { active: i + 1 <= selectedValue };
  });
}

class Rating extends React.PureComponent<RatingProps, RatingState> {
  constructor(props: RatingProps) {
    super(props);

    this.state = {
      stars: generateStars(props.maxRating, props.value)
    };
  }

  componentDidUpdate(prevProps: RatingProps) {
    const { maxRating, value } = this.props;
    // regenerate the stars when either the scale or the controlled value changes
    if (prevProps.maxRating !== maxRating || prevProps.value !== value) {
      this.setState({ stars: generateStars(maxRating, value) });
    }
  }

  setRating = (rating: number) => {
    const { maxRating, disabled } = this.props;
    if (disabled) {
      return;
    }
    this.setState({ stars: generateStars(maxRating, rating) });
  };

  render() {
    const { name, value, onChange, disabled, className } = this.props;
    const { stars } = this.state;
    const _className = cx(wrapStyle, className, disabled && disabledStyle);

    return (
      <div className={_className}>
        {stars.map((star, starIndex) => {
          const rating = starIndex + 1;
          return (
            <span
              key={`${name}-${rating}`}
              onMouseEnter={() => this.setRating(rating)}
              onMouseLeave={() => this.setRating(value)}
              onClick={() => {
                if (disabled) {
                  return;
                }
                this.setRating(rating);
                onChange(rating);
              }}
            >
              <i
                className={cx(
                  "pi pi-grade",
                  unSelectedStar,
                  star.active && selectedStar
                )}
              />
            </span>
          );
        })}
      </div>
    );
  }
}

export default Rating;
Typing of porcine reproductive and respiratory syndrome viruses by a multiplex PCR assay A rapid multiplex PCR assay was developed to distinguish between North American and European genotypes of porcine reproductive and respiratory syndrome (PRRS) virus after a portion of the polymerase gene (open reading frame 1b) was sequenced for two North American PRRS virus strains. DNA products with unique sizes characteristic of each genotype were obtained.
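Since the assay distinguishes the two genotypes purely by amplicon size, the downstream readout is a simple size-window lookup. The sketch below illustrates that logic only; the base-pair windows are hypothetical placeholders, because the abstract does not report the actual product sizes.

# Illustrative genotype call from amplicon size. The size windows below are
# hypothetical placeholders; the abstract states only that each genotype yields
# a product of unique size, without giving the actual base-pair values.

GENOTYPE_WINDOWS = {
    "North American": range(380, 421),  # hypothetical bp window
    "European": range(220, 261),        # hypothetical bp window
}

def call_genotype(amplicon_bp: int) -> str:
    for genotype, window in GENOTYPE_WINDOWS.items():
        if amplicon_bp in window:
            return genotype
    return "no call"

print(call_genotype(400))  # "North American" under these placeholder windows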
// isClusterHealthy returns true if the number of nodes for which unhealthyPredicate returns
// true is below MaxNodesUnhealthy for a zone; if zone == "" it checks the whole cluster.
func (nr NodeReaper) isClusterHealthy(nodes []corev1.Node, unhealthyPredicate NodeEvaluablePredicate, maxNodesUnhealthy config.MaxNodesUnhealthy, zone string) (bool, error) {
	var zoneStr string
	var propStr string
	if zone == "" {
		zoneStr = "cluster"
		propStr = "maxNodesUnhealthy"
	} else {
		zoneStr = "zone"
		propStr = "maxNodesUnhealthyInSameZone"
	}
	nr.logger.Infof("Checking %s health with MaxNodesUnhealthy=%s before reaping", zoneStr, maxNodesUnhealthy)
	totalNodesUnhealthy, err := nr.countUnhealthyNodes(&nodes, unhealthyPredicate, zone)
	if err != nil {
		return false, fmt.Errorf("failed counting unhealthy nodes in %s: %s", zoneStr, err)
	}
	nr.logger.Infof("Nodes unhealthy in %s: Total: %d Unhealthy: %d %s: %s", zoneStr, len(nodes), totalNodesUnhealthy, propStr, maxNodesUnhealthy)
	chc, err := NewClusterHealthCalculator(maxNodesUnhealthy)
	if err != nil {
		return false, fmt.Errorf("failed creating ClusterHealthCalculator: %s", err)
	}
	return chc(totalNodesUnhealthy, uint(len(nodes))), nil
}
Controlling nutritional status (CONUT) score as a preoperative risk assessment index for older patients with colorectal cancer

Background: Assessment of the preoperative general condition to predict postoperative outcomes is important, particularly in older patients, who typically suffer from various comorbidities and exhibit impaired functional status. In addition to various indices such as the Charlson Comorbidity Index (CCI), the National Institute on Aging and National Cancer Institute Comorbidity Index (NIA/NCI), the Adult Comorbidity Evaluation-27 (ACE-27), and the American Society of Anesthesiologists Physical Status classification (ASA-PS), the controlling nutritional status (CONUT) score has recently been gaining attention as a tool to evaluate the general condition of patients from a nutritional perspective. However, the utility of these indices in older patients with colorectal cancer has not been compared.

Methods: The study population comprised 830 patients with Stage I–IV colorectal cancer aged 75 years or older who underwent surgery at the National Cancer Center Hospital from January 2000 to December 2014. Associations of each index with overall survival (OS) (long-term outcome) and postoperative complications (short-term outcome) were examined.

Results: For the three indices with the best (lowest) Akaike information criterion values (i.e., CONUT score, CCI, and ACE-27), but not the remaining indices (NIA/NCI and ASA-PS), OS significantly worsened as general condition scores worsened, after adjusting for known prognostic factors. In contrast, for postoperative complications, only the CONUT score was identified as a predictive factor (≥4 versus 0–3; odds ratio: 1.90; 95% CI: 1.13–3.13; P = 0.016).

Conclusion: For older patients with colorectal cancer, only the CONUT score was a predictive factor of both long-term and short-term outcomes after surgery, suggesting that the CONUT score is a useful preoperative risk assessment index.

Keywords: Controlling nutritional status (CONUT) score, Comorbidity index, Older, Colorectal cancer

Background

As older populations increase globally, colorectal cancer surgery is expected to become more common. Older patients typically suffer from several comorbidities and exhibit impaired functional status, which lead to higher postoperative morbidity and mortality compared with younger patients. Thus, assessing the preoperative general condition of older patients in particular is important for predicting postoperative short-term and long-term outcomes.
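For readers unfamiliar with the score, CONUT is computed from three routine laboratory values (serum albumin, total lymphocyte count, and total cholesterol), which is why patients missing cholesterol data are excluded later in this paper. The sketch below encodes the commonly cited scoring bands; these cut-offs come from the general CONUT literature, not from this article, so treat them as an assumption to verify.

# Hedged sketch of the commonly cited CONUT scoring bands. The cut-offs are
# not taken from this paper; verify against the original CONUT definition
# before any clinical use.

def conut_score(albumin_g_dl: float, lymphocytes_per_ul: float, cholesterol_mg_dl: float) -> int:
    # serum albumin component: 0 / 2 / 4 / 6 points
    if albumin_g_dl >= 3.5:
        alb = 0
    elif albumin_g_dl >= 3.0:
        alb = 2
    elif albumin_g_dl >= 2.5:
        alb = 4
    else:
        alb = 6
    # total lymphocyte count component: 0 / 1 / 2 / 3 points
    if lymphocytes_per_ul >= 1600:
        lym = 0
    elif lymphocytes_per_ul >= 1200:
        lym = 1
    elif lymphocytes_per_ul >= 800:
        lym = 2
    else:
        lym = 3
    # total cholesterol component: 0 / 1 / 2 / 3 points
    if cholesterol_mg_dl >= 180:
        chol = 0
    elif cholesterol_mg_dl >= 140:
        chol = 1
    elif cholesterol_mg_dl >= 100:
        chol = 2
    else:
        chol = 3
    return alb + lym + chol  # 0-12; this study dichotomizes at >= 4

print(conut_score(3.2, 1100, 150))  # 2 + 2 + 1 = 5, i.e. the high-risk group here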
Various risk assessment indices have been used to evaluate the general condition of patients, including American Society of Anesthesiologists Physical Status classification (ASA-PS), which assesses physical status, and Charlson Comorbidity Index (CCI), National Institute on Aging (NIA) and National Cancer Institute (NCI) Comorbidity Index (NIA/NCI), and Adult Comorbidity Evaluation-27 (ACE-27), which are used to assess comorbidities. For colorectal cancer, ASA-PS and CCI reportedly predict postoperative complications, and CCI, NIA/NCI, and ACE-27 predict overall survival (OS). Poor general condition is associated with increased postoperative complications and decreased survival after surgery. Little is known about the relationships between risk assessment indices that evaluate general condition and short-term and long-term outcomes in older patients with cancer. Accordingly, this study aimed to examine the association of risk assessment indices with both OS (long-term outcomes) and postoperative complications (short-term outcomes) in older patients with colorectal cancer. Study population Subjects of this retrospective study were patients with colorectal cancer aged 75 years or older who were treated at the National Cancer Center Hospital from January 2000 to December 2014. Patients with Stage 0 cancer, patients who did not undergo surgery due to unresectable Stage IV cancer, and patients for whom CONUT scores could not be calculated due to insufficient data were excluded. This retrospective study was approved by the Institutional Review Board (IRB) of the National Cancer Center Hospital (IRB code: 2017-437). Data collection The following parameters were retrospectively assessed using medical records: age, sex, body mass index (BMI) (≥25 versus <25), primary tumor site (colon versus rectum), presence of lymph node metastasis, carcinoembryonic antigen (CEA) (≤5 versus >5), carbohydrate antigen 19-9 (CA19-9) (≤37 versus >37), stage according to the Union for International Cancer Control TNM classification (8th edition), and postoperative complications. Postoperative complications in this study were defined as a morbidity that occurred within the duration of the postoperative hospital stay or within 30 days after surgery, and as a morbidity with a Clavien-Dindo classification ≥II (see Additional file 1: Table S1 for a list of complication definitions). Statistical analysis Data are presented as numbers of patients, ratios (%), hazard ratios (HRs), or odds ratios (ORs) and 95% confidence intervals (CIs). OS was defined as the interval between the date of diagnosis of colorectal cancer and the date of death from all causes. Survivors were censored as of the date of data cut-off (April 2018). The Kaplan-Meier method was used to estimate OS. Differences in survival were assessed with the log-rank test. Cox proportional hazards models were constructed separately for the five indices and were used to calculate HRs and 95% CIs. HRs adjusted for sex, BMI, lymph node metastasis, stage, CEA, and CA19-9, all of which were reported to be significant covariates in previous studies, were also calculated. BMI was included in the analysis as a categorical parameter (≥25 versus <25). To estimate the goodness-of-fit of each index based on Cox regression survival analysis, Akaike Information Criterion (AIC) values were compared between the five indices. AIC was calculated as follows: AIC = −2 × log(maximum likelihood) + 2 × (number of parameters in the model).
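To make the comparison concrete, the following minimal sketch (Python; the log-likelihoods and parameter counts are invented for illustration and are not the study's fitted values) computes AIC for competing models and ranks them:

import math

def aic(log_likelihood: float, n_params: int) -> float:
    """AIC = -2 * log(maximum likelihood) + 2 * (number of parameters)."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical log-likelihoods and parameter counts for two competing
# Cox models (illustrative values only):
models = {"index A": (-1340.1, 7), "index B": (-1352.8, 6)}
for name, (ll, k) in sorted(models.items(), key=lambda m: aic(*m[1])):
    print(f"{name}: AIC = {aic(ll, k):.2f}")  # smaller AIC = better fit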
Smaller AIC values represent better prognostic stratification. Logistic regression models were used to calculate ORs and 95% CIs for postoperative complications for each index. P < 0.05 was considered statistically significant. All statistical analyses were performed using the JMP14 software program (SAS Institute Japan Ltd., Tokyo, Japan). Study cohort characteristics Details of the study cohort are summarized in Fig. 1. Between 2000 and 2014, a total of 870 patients with colorectal cancer aged 75 years or older were treated at the National Cancer Center Hospital. Of these, we excluded 7 patients with Stage 0 cancer, 18 patients who did not undergo surgery due to unresectable stage IV cancer, and 15 patients for whom CONUT scores could not be calculated due to insufficient data (all were missing data for total cholesterol concentration). The final study population consisted of 830 patients with stage I-IV colorectal cancer who underwent surgery and were aged 75 years or older. Patient characteristics stratified by CONUT category are summarized in Table 1. For CONUT scores, the numbers of patients with scores of 0-1, 2-3, and ≥4 were 508 (61%), 249 (30%), and 73 (9%), respectively. The median patient age was 78 years (range, 75-94 years); 470 patients (57%) were male and 360 (43%) were female. Of the 830 patients, 653 (79%) had a tumor in the colon, 482 (58%) had stage I or II colorectal cancer, and 348 (42%) had stage III or IV colorectal cancer. Patients with a higher stage also tended to have a higher CONUT score (p = 0.045). AIC of each index model AIC was used as a parameter for goodness-of-fit, with lower AIC values indicating better fit. AIC values of each index were 2764.52 for CONUT score, 2774.59 for ASA-PS, 2690.13 for CCI, 2775.19 for NIA/NCI, and 2753.13 for ACE-27. According to this comparison, CCI had the best goodness-of-fit, followed by ACE-27 and CONUT score. Other infections included pseudomembranous colitis, cholangitis, parotitis, and catheter infection. Vascular events included cerebral infarction, angina attack, pulmonary embolism, arteriosclerosis obliterans, and acute peripheral artery occlusive disease. The "other" category included anastomotic bleeding, arrhythmia, peptic ulcer, urinary retention, drug eruption, convulsion, pneumothorax, gastrointestinal perforation, chylorrhea, ascites, and facial nerve paralysis. Three complications of Clavien-Dindo classification V consisted of one pneumonia / respiratory failure case and two vascular events. Associations between each index and postoperative complications Univariate and multivariate logistic regression analyses to assess associations of each index with postoperative complications are shown in Table 3. Univariate analysis showed that sex (p = 0.005), tumor location (p = 0.003), and CONUT score (p = 0.015), but not BMI (p = 0.648), were significantly associated with postoperative complications. There was no significant association between the four comorbidity indices and postoperative complications. Multivariate analysis showed that CONUT score ≥4 was an independent predictor of postoperative complications (OR = 1.93; 95% CI: 1.15-3.20; p = 0.013), indicating that, among the five indices, only CONUT score was an independent predictor of short-term outcomes. Discussion This study had two notable points. First, we focused on older patients with colorectal cancer, who typically have several comorbidities and impaired functional status that may lead to higher operative risk.
Second, we included CONUT score as an index to evaluate the relationship of a patient's general condition with OS and postoperative complications. Through these new approaches, we demonstrated that among the five indices evaluated (CONUT score, ASA-PS, CCI, NIA/NCI, ACE-27), only CONUT score was a significant prognostic factor of both OS (long-term outcomes) and postoperative complications (short-term outcomes) in older patients with colorectal cancer. This suggests that CONUT score may be useful as a preoperative risk assessment index in this patient population. In terms of long-term outcomes, for CONUT score, CCI, and ACE-27, but not ASA-PS and NIA/NCI, OS became significantly worse as scores for general condition worsened. Moreover, an assessment of AIC revealed that these three indices had better AIC values than those of ASA-PS and NIA/NCI. Taken together, these results suggest that, among the five indices, CONUT score, CCI, and ACE-27 were good models for predicting OS of older patients with colorectal cancer. Our results are compatible with previous studies reporting that CONUT score, CCI, and ACE-27 predict OS of patients with colorectal cancer, although not specifically older patients. Despite NIA/NCI not being a predictor of OS in our study, it was a predictor in other studies involving patients with colorectal cancer. It is not surprising that CONUT score is a prognostic factor for OS in various types of cancers, because each of its three components reflects cancer progression. Serum albumin is a marker of nutritional status and reportedly correlates with tumor necrosis, as pro-inflammatory cytokines reduce albumin synthesis. Total cholesterol concentration has been reported to correlate with tumor progression, as tumor tissue reduces plasma cholesterol concentration and caloric intake. Finally, total lymphocyte counts reflect immunological status, and a low peripheral lymphocyte count is associated with worse prognosis in several cancers due to an insufficient host immune response to cancer cells. Despite the above, the utility of CONUT score for evaluating postoperative complications in patients with cancer remains controversial. In the present study, we revealed that, among the five indices, only CONUT score was an independent predictor of short-term outcomes. CONUT score has an advantage over the other indices due to its calculation method. Whereas CCI requires 19 variables, NIA/NCI requires 24 variables, and ACE-27 requires 27 variables, CONUT score can be easily calculated using only three routinely measured parameters. Thus, CONUT score is an easy and convenient tool for predicting complications, which is not surprising because poor preoperative nutritional status reportedly correlates with the incidence of postoperative complications. Some studies have reported that nutritional intervention for preoperative malnutrition contributes to a reduction of postoperative complications, reduction of length of hospital stay, and reduction of medical costs. Our results support nutritional intervention for high CONUT score groups. CONUT score can be an indicator of the need to initiate nutritional intervention and can also serve as a scoring system to evaluate the therapeutic effects of the intervention. Furthermore, since CONUT score reflects both short-term and long-term outcomes, it can impact surgical treatment strategies and thus be used for stratification in randomized clinical studies of older patients with cancer. This study has limitations worth noting.
First, this study was retrospective in design and included patients from a single institution, although the sample size was much larger than those of previous studies. Second, although patients underwent various surgical procedures, with more invasive surgical procedures leading to higher mortality and morbidity, we did not account for this in the present study. Our findings warrant further consideration and validation in a larger series of older patients with colorectal cancer. Conclusions The general condition of patients with colorectal cancer impacts their survival and postoperative complications and thus should be considered in cancer management. We demonstrated that, among five indices that evaluate general condition (CONUT score, ASA-PS, CCI, NIA/NCI, ACE-27), only CONUT score was a significant prognostic factor of both OS (long-term outcomes) and postoperative complications (short-term outcomes) in older patients with colorectal cancer. Our findings suggest that CONUT score is useful not only for assessing nutritional status but also as a preoperative risk assessment index in older patients with colorectal cancer. Additional file 1: Table S1. A list of complication definitions.
// src/test/suites/percentOff-testSuite.ts
import TestSuite from "../testSuite";

export const suite: TestSuite = {
  name: "Percent Off",
  tests: {
    "60% off of 100": "40",
    "60% off of 10.0": "4",
    "60% off of $100": "$40",
    "60% off of 100 EUR": "40 EUR",
    "60% off of $10.0": "$4",
    "60% off of 10.0 EUR": "4 EUR",
    "60% off of $1,000": "$400",
    "60% off of 1,000 EUR": "400 EUR",
    "60% off of $100USD": "$40",
    "60% off of $100EUR": "40 EUR",
    "60% off of $100 USD": "$40",
    "60% off of $100 EUR": "40 EUR",
    "60% off of 100 USD": "$40",
    "60% off 100": "40",
    "60% off 10.0": "4",
    "60% off $100": "$40",
    "60% off $10.0": "$4",
    "60% off $1,000": "$400",
    "60% off $100USD": "$40",
    "60% off $100 USD": "$40",
    "60% off 100 USD": "$40",
    "60% off $100EUR": "40 EUR",
    "60% off $100 EUR": "40 EUR",
    "60% off 100 EUR": "40 EUR",
    "60% of 100": "60",
    "60% of 10.0": "6",
    "60% of $100": "$60",
    "60% of $10.0": "$6",
    "60% of $1,000": "$600",
    "60% of $100USD": "$60",
    "60% of $100 USD": "$60",
    "60% of 100 USD": "$60",
    "60% of $100EUR": "60 EUR",
    "60% of $100 EUR": "60 EUR",
    "60% of 100 EUR": "60 EUR",
    "variable = 100": "100",
    "60% off of variable": "40",
    "60% off variable": "40",
    "60% of variable": "60",
    "variable2 = 100 ft": "100 ft",
    "60% off of variable2": "40 ft",
    "60% off variable2": "40 ft",
    "60% of variable2": "60 ft",
  }
};
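Read together, the cases above pin down the evaluator's semantics: "X% off [of] Y" yields Y × (1 − X/100), "X% of Y" yields Y × X/100, and any currency symbol or unit attached to Y is preserved, with "USD" normalized to "$". The Python sketch below illustrates that rule only; it is not the project's actual TypeScript implementation, and it ignores the variable-binding cases:

import re

def percent_op(pct: float, op: str, text: str) -> str:
    """Evaluate 'pct% off <text>' or 'pct% of <text>', keeping the unit.

    A simplified sketch: a real implementation would also resolve variables,
    handle more currencies, and use locale-aware number formatting.
    """
    m = re.fullmatch(r'(\$?)([\d,.]+)\s*([A-Za-z]*)', text.strip())
    if not m:
        raise ValueError(f"cannot parse amount: {text!r}")
    symbol, digits, unit = m.groups()
    value = float(digits.replace(',', ''))
    result = value * (pct / 100 if op == 'of' else 1 - pct / 100)
    number = f"{result:g}"
    if unit.upper() == 'USD':  # normalize USD to '$', as the tests expect
        symbol, unit = '$', ''
    return f"{symbol}{number}{' ' + unit if unit else ''}"

print(percent_op(60, 'off', '$1,000'))   # $400
print(percent_op(60, 'of', '100 EUR'))   # 60 EUR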
This invention relates to microscope slides which include electrodes for the examination of organic cells, particularly such slides which are adapted for the study of the electrophysiological behavior of neurons or nerve cells and their processes. In order to examine the electrophysiological activity of living nerve cells, it is necessary to apply electrical potentials, currents, impulses, etc. to individual cells or to certain parts of a cell, such as the cell processes (neurites), by means of suitable electrodes located in microscopically close proximity to each other. Such electrodes must have contact surfaces of microscopic size, e.g., 1-10 microns, and must present a sufficiently small electrical contact impedance to the cell, as well as exhibiting other characteristics suitable for allowing such electrodes to contact the desired cells or cell areas without harming the cell matter. There have hitherto been no satisfactory solutions to these requirements, especially where several simultaneous connections on different parts of a small volume of tissue are desired. It is known, for example, to utilize an electrode consisting of a very thin wire of a hard material, such as tungsten, which is set in a small glass tube. The free end of the wire, which protrudes slightly from the end of the glass tube, is brought into contact with particular areas of the cell by manipulation of the electrode under the microscope. The manufacture of such electrodes, however, is difficult and time-consuming. Furthermore, as a rule, the usable electrodes must be separated from a large number of defective electrodes produced. Moreover, the three-dimensional manipulation of such electrodes under a microscope is very difficult; consequently, the simultaneous manual operation of several electrodes in order to probe various cells or parts of cells at the same time is not practically feasible. Such thin wire electrodes have an additional serious disadvantage in that they vibrate during manipulation, and this motion may cause the death of the cell being studied. Similar objections are applicable to the electrolyte-filled pipette electrodes known in the art.
from castle.test import unittest
from castle.utils.clone import UtilsClone


class UtilsCloneTestCase(unittest.TestCase):
    def test_clone(self):
        params = {'foo': 'bar'}
        new_params = UtilsClone.call(params)
        # The clone must compare equal to the original...
        self.assertEqual(params, new_params)
        # ...but must be a distinct object, not an alias.
        self.assertIsNot(new_params, params)
ITGB5 Plays a Key Role in Escherichia coli F4ac-Induced Diarrhea in Piglets Enterotoxigenic Escherichia coli (ETEC) that expresses F4ac fimbriae is the major pathogenic microorganism responsible for bacterial diarrhea in neonatal piglets. The susceptibility of piglets to ETEC F4ac is determined by a specific receptor on the small intestinal epithelium surface. We performed an iTRAQ-labeled quantitative proteome analysis using a case-control design in which susceptible and resistant full-sib piglets were compared for protein expression levels. Two thousand two hundred forty-nine proteins were identified, of which 245 were differentially expressed (fold change > 1.5, FDR-adjusted P < 0.05). The differentially expressed proteins fell into four functional classes: (I) cellular adhesion and binding, (II) metabolic process, (III) apoptosis and proliferation, and (IV) immune response. The integrin signaling pathway merited particular interest based on a pathway analysis using statistical overrepresentation and enrichment tests. Genomic locations of the integrin family genes were determined based on the most recent porcine genome sequence assembly (Sscrofa11.1). Only one gene, ITGB5, which encodes the integrin β5 subunit that assorts with the αv subunit to generate integrin αvβ5, was located within the SSC13q41 region between 13:133161078 and 13:139609422, where strong associations of markers with ETEC F4ac susceptibility were found in our previous GWAS results. To identify whether integrin αvβ5 is the ETEC F4acR, we established an experimental model for bacterial adhesion using IPEC-J2 cells. Then, the ITGB5 gene was knocked out in IPEC-J2 cell lines using CRISPR/Cas9, resulting in a biallelic knockout cell line (ITGB5−/−). Disruption of ITGB5 significantly reduced ETEC F4ac adhesion to porcine intestinal epithelial cells. In contrast, overexpression of ITGB5 significantly enhanced the adhesion. A GST pull-down assay with purified FaeG and ITGB5 also showed that FaeG binds directly to ITGB5. Together, the results suggested that ITGB5 is a key factor affecting the susceptibility of piglets to ETEC F4ac. INTRODUCTION Enterotoxigenic Escherichia coli (ETEC)-induced diarrhea is one of the major diseases in neonatal and weaned piglets, resulting in severe economic losses in the swine industry. Among the five different fimbriae isolated from diarrheic pigs, F4 (K88) is the most prevalent. Three antigenically distinct subgroups (F4ab, F4ac, and F4ad) have been identified in F4 fimbriae, of which the F4ac variant is the most common. Sellwood et al. first proposed the "specific K88 receptor" hypothesis, which states that the susceptibility of piglets to ETEC F4 is determined by the presence or absence of a specific F4 receptor on the small intestinal epithelium surface of the animal. The gene encoding the F4ac receptor (F4acR) has been mapped to the SSC13q41 region in two linkage studies. Subsequently, it was refined to a 5.7-cM interval by a meta-analysis, and it was further narrowed down to a 1.6-cM interval by a pedigree disequilibrium test (PDT). Within this interval, we identified 18 SNPs through a genome-wide association study (GWAS) that were strongly associated with the susceptibility of piglets to ETEC F4ac, and HEG1 and ITGB5 emerged as the most promising candidate genes for F4acR. Although some further studies have been carried out to reveal the molecular basis of the susceptibility of piglets to ETEC F4ac, the role of the F4acR protein and its encoding gene remains uncertain.
Because post-transcriptional and translational regulatory mechanisms affect protein levels in eukaryotes, mRNA abundance can be a misleading indicator of protein levels. In contrast, proteomics measures protein levels directly and may provide a better view into the molecular basis of ETEC F4ac susceptibility. Using iTRAQ (isobaric tag for relative and absolute quantitation) or other labeling methods, it is possible to quantitatively compare the protein levels of up to eight samples in a single mass spectrometry experiment. We therefore conducted a high-throughput proteomics analysis to compare protein expression in ETEC F4ac-susceptible and resistant piglets, focusing primarily on identifying the potential F4acR protein(s) and the corresponding gene(s). Four pairs of full-sib piglets, each consisting of one susceptible and one resistant to ETEC F4ac, were analyzed. The eight samples were multiplexed using iTRAQ and subjected to LC (liquid chromatography)-MS/MS (tandem mass spectrometry) to identify differentially expressed proteins (DEPs). Among the DEPs detected, integrin αvβ5 was considered a potential F4acR protein. ITGB5, which encodes integrin subunit beta 5, was disrupted using methods based on CRISPR/Cas9. Cells containing the ITGB5 knockout, and cells in which ITGB5 was overexpressed, were tested for their ability to adhere to ETEC F4ac. The results provided direct evidence for the role of ITGB5 in infection by ETEC F4ac and helped to clarify the mechanisms underlying piglet susceptibility to diarrhea. Adhesion Phenotypes One hundred eighty-nine Large White piglets were examined for the adhesion phenotype by co-culturing epithelial cells from their jejunums with ETEC F4ac. A total of 83 piglets were found to be adhesive, 14 weakly adhesive, and 92 non-adhesive. Four pairs of full-sibs, each with one adhesive and one non-adhesive piglet, were selected for proteomics analysis. iTRAQ Profiling of Adhesive vs. Non-adhesive Samples Protein samples from the four pairs of full-sibs were labeled with isobaric tags (pair 1, 113:117; pair 2, 114:118; pair 3, 115:119; and pair 4, 116:121) and then subjected to quantitative proteomics analysis. After combining data from the four pairs, we identified 17,155 unique peptides from 43,261 spectra, corresponding to 2,249 proteins (a 1% FDR threshold was imposed for both peptides and proteins). Sample quality was inferred from the wide range of protein classes detected in the analysis. Using the PANTHER classification system, the 2,249 identified proteins fell into 29 families (Figure S1). A protein was defined as a differentially expressed protein (DEP) when its fold change (FC) of expression between adhesive and non-adhesive samples was >1.5 at an FDR-adjusted significance level of P < 0.05 (Figure S2). A total of 245 DEPs were identified, of which 117 (47.8%) were more abundant in adhesive samples and 128 (52.2%) were less abundant (Tables S1, S2). Protein-Protein Interaction Network To identify possible functions associated with the differentially expressed proteins, we constructed a protein-protein interaction network using the DEPs as seed nodes (Figure 1). Four sub-clusters were apparent. The first sub-cluster is associated with cellular adhesion and binding, and includes adhesion proteins such as ITGA5, COL6A3, ACTN2, CAV1, ILK, COL14A1, and VTN.
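To make the DEP-calling rule concrete: the filter combines a symmetric fold-change threshold with Benjamini-Hochberg FDR adjustment of the per-protein P values. The sketch below is a generic stand-in for that step (NumPy assumed; toy numbers), not the ProteinPilot pipeline itself:

import numpy as np

def call_deps(fold_changes, pvals, fc_cut=1.5, q_cut=0.05):
    """Benjamini-Hochberg FDR adjustment followed by a fold-change filter."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)        # raw BH values
    q = np.minimum.accumulate(ranked[::-1])[::-1]     # enforce monotonicity
    qvals = np.empty(n)
    qvals[order] = np.minimum(q, 1.0)
    fc = np.asarray(fold_changes, dtype=float)
    # Treat up- and down-regulation symmetrically: FC 0.5 counts as 2-fold.
    return (np.maximum(fc, 1.0 / fc) > fc_cut) & (qvals < q_cut)

# Toy example: 4 proteins with adhesive/non-adhesive expression ratios
print(call_deps([2.1, 0.5, 1.2, 1.8], [0.001, 0.004, 0.030, 0.200]))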
Since the susceptibility of piglets to ETEC F4ac is determined by the presence of F4acR on the surface of the small intestinal epithelium, the adhesion proteins listed above are potentially involved in the diarrhea induced by ETEC F4ac. The other three sub-clusters are associated with metabolic processes, apoptosis and proliferation, and the immune response. Members of these groups have been identified by mRNA expression profiling of porcine epithelial cells infected with ETEC F4ac. Pathway Analysis of the Genes Corresponding to DEPs A pathway enrichment analysis was conducted to gain deeper insight into the functions of the differentially expressed proteins. The functions were assessed using the statistical overrepresentation and statistical enrichment tests. The statistical overrepresentation test is conceptually based on the simple binomial test and determines whether a particular pathway of genes is overrepresented or underrepresented. The statistical enrichment test uses the Mann-Whitney test to determine whether any pathway has numeric values that are non-randomly distributed with respect to the entire list of values. Of note, only the integrin signaling pathway was significantly enriched (P < 0.05) by both tests. Figure 2 compares the distributions of the proteins from the integrin signaling pathway and the reference proteins. The blue curve shows the overall distribution for all proteins, and the other curve shows the distribution for the integrin signaling pathway. Chromosomal Locations of the Integrin Family Genes The results of the protein-protein interaction network analysis and the KEGG analysis of the DEPs suggest that the protein(s) responsible for the adhesion of ETEC F4ac to the small intestinal epithelium surface of piglets are very likely member(s) of the integrin family. We therefore focused on integrin family proteins in the subsequent analysis. It has been commonly accepted that the gene(s) encoding ETEC F4acR are located in the SSC13q41 region. We used BioCircos to visualize the chromosomal locations of the genes corresponding to the differentially expressed proteins. As shown in Figure 3, these genes are found on all chromosomes except SSC16. BioMart was used to assign chromosomal locations to the genes of the integrin family (Table 1). Only one gene, ITGB5, is located in the SSC13q41 region. CRISPR/Cas9-Mediated ITGB5 Gene Deficiency Six single-guide RNAs (sgRNA1 to sgRNA6) were designed to target sites within exon 1 and exon 2 of the ITGB5 coding sequence (Figure 4A). The workflow to establish an ITGB5 gene knockout cell line is summarized in Figure 4B. To verify transfection, pEGFP-C1 plasmids, which include a gene encoding enhanced green fluorescent protein (eGFP), were cotransfected with CRISPR/Cas9-sgRNA into IPEC-J2 cells to confirm DNA uptake (Figure 4C). T7 endonuclease I (T7EN1) cleavage assays were used to measure gene targeting efficiency. As shown in Figure 4D, sgRNA1, sgRNA2, and sgRNA3 did not generate any significant cleavage, whereas sgRNA4, sgRNA5, and sgRNA6 exhibited cleavage efficiencies of 11.8, 10.2, and 15.5%, respectively. As the sgRNA4 target site is located in exon 1, we used sgRNA4 in subsequent experiments. The minimal lethal dose of puromycin for IPEC-J2 was determined to be 600 µg/mL and was used to obtain 21 cell lines. Green fluorescence was detected in all cell lines by fluorescence microscopy (Figure 4E).
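The cleavage efficiencies quoted above come from band densitometry. Assuming the fcut formula given in the Methods and the standard binomial correction used in CRISPR protocols, indel % = 100 × (1 − √(1 − fcut)), the computation is brief; the band intensities in this Python sketch are invented for illustration:

import math

def indel_percent(a: float, b: float, c: float) -> float:
    """Estimate indel frequency from T7EN1 band intensities.

    a: intensity of the undigested PCR product
    b, c: intensities of the two cleavage bands
    Uses fcut = (b + c) / (a + b + c) and
    indel % = 100 * (1 - sqrt(1 - fcut)).
    """
    fcut = (b + c) / (a + b + c)
    return 100.0 * (1.0 - math.sqrt(1.0 - fcut))

# Illustrative intensities (arbitrary densitometry units):
print(f"{indel_percent(a=700.0, b=150.0, c=130.0):.1f}% indels")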
One cell line (IPEC-J2-sg4-6) contained a compound heterozygous knockout (ITGB5−/−) in which one allele carried a 1-nucleotide deletion (based on sequencing 29 TA clones) and the other allele a 1-nucleotide insertion (based on sequencing 21 TA clones) in exon 1 of ITGB5 (Figure 4F). IPEC-J2-sg4-6 was therefore used to assess the function of ITGB5. Effects of Knockout and Overexpression of ITGB5 on ETEC F4ac Adhesion to IPEC-J2 Cells To quantify ETEC F4ac adherence to IPEC-J2 cells, a standard curve (Figure 5A) was prepared using a range of bacterial concentrations (1 × 10⁵ to 1 × 10⁹ CFU/mL). Bacterial adhesion to IPEC-J2 cells was evaluated by real-time PCR. ITGB5−/− cells showed significantly less adherence in comparison to cells transfected with an empty vector (Figure 5B). Overexpression of ITGB5 in IPEC-J2 cells resulted in a significant increase in mRNA expression (P < 0.01) and increased ETEC F4ac adherence to porcine intestinal epithelial cells (P < 0.01; Figure 5C). Verification of the Interaction Between ITGB5 and FaeG Previous studies have demonstrated that the fimbrial subunit FaeG is the most prominent part for F4 adherence and is directly involved in the binding of the F4 fimbriae to the host cells. To further verify the interaction between FaeG and ITGB5, a GST pull-down assay was conducted. A pull-down assay is an in vitro technique used to detect physical interactions between two or more proteins, and it is also an invaluable tool for confirming a predicted protein-protein interaction. To increase the solubility of the protein when expressed in prokaryotic cells, we eliminated the transmembrane region of ITGB5; the His-ITGB5 and GST-FaeG fusion proteins were then expressed in the Escherichia coli strain Rosetta and purified. GST pull-down results with purified ITGB5 and FaeG demonstrated that ITGB5 binds directly to FaeG in vitro (Figure 5D). DISCUSSION The initial step in infection for ETEC F4ac is to adhere to host enterocytes through fimbriae-mediated recognition of receptors on the host cell surface. Sellwood first reported that piglets lacking the appropriate receptors in the intestinal mucosa were resistant to F4ac infection. [FIGURE 3 | BioCircos visualization of the locations of DEG loci on chromosomes. Red points represent genes expressed at higher levels in adhesive piglets; blue points represent genes expressed at lower levels in adhesive piglets. The red line in the band at 13q41 locates the locus that encodes ETEC F4acR based on previous studies. The distance from location to outer periphery is −log(p-value).] Identifying the ETEC F4acR protein(s) in piglets is an important step in the efforts to combat enterotoxigenic Escherichia coli-associated diarrhea. Erickson et al. and Billey et al. described that F4ac and F4ab bind to two intestinal mucin-type sialoglycoproteins (IMTGP-1 and IMTGP-2) with molecular masses of 210 and 240 kDa, and that the intestinal transferrin (GP74), with a molecular mass of 74 kDa, was shown to be an F4ab-specific receptor. Furthermore, Melkebeek et al. identified aminopeptidase N (APN) as a newly discovered receptor for the F4ac fimbria, which is involved in the oral immune response and clathrin-mediated endocytosis of F4ac fimbriae. Many studies have also sought to unravel the gene encoding the F4ac receptor protein. Edfors-Lilja et al. first mapped the F4acR gene to the SSC13q41 region, 7.4 cM away from the TF locus. Subsequent studies further mapped it between SW207 and S0075 within SSC13q41.
Within this region, our group restricted the F4acR gene to a 1.6-cM interval between S0283 and SW1876. Further genome-wide association mapping with the Illumina PorcineSNP60 BeadChip revealed that 18 SNPs located between 13:133161078 and 13:139609422 were strongly associated with susceptibility to ETEC F4ac. Despite the current knowledge of ETEC F4ac receptors, problems remain unsolved: it is difficult to locate the exact region of the receptor gene on chromosome 13 and to choose the appropriate candidate genes to study, and it is hard to determine which key factors affect the adhesion of ETEC F4ac. The lack of convincing evidence regarding the F4ac receptors and their function motivates further research. In this study, we used an iTRAQ-labeled proteome analysis and a full-sib pair case-control design to identify differentially expressed proteins (DEPs) between F4ac-susceptible and resistant piglets, and we also used it to reveal proteins that are likely to be responsible for the susceptibility of piglets to F4ac. A total of 245 DEPs were identified, of which 117 (47.8%) were more abundant in cells characterized as adhesive, and 128 (52.2%) were more abundant in those classified as non-adhesive. Analysis of the protein-protein interaction network constructed using the DEPs revealed that they were significantly enriched in functions of (I) cellular adhesion and binding, (II) metabolic processes, (III) apoptosis and proliferation, and (IV) the immune response (Figure 1). Overrepresentation and enrichment tests were used to analyze pathways containing DEPs. After Bonferroni correction, only the integrin signaling pathway was identified by both tests. Since the diarrhea caused by ETEC F4ac infection is thought to be due to the adhesion of the bacteria to the enterocyte brush borders, we focused on integrin signaling pathway molecules as interesting candidate proteins. Integrins are cell surface receptors that participate in cell-cell and extracellular matrix (ECM)-cell interactions, and they can also be targeted by pathogenic bacteria, fungi, and viruses. Several human pathogens invade their hosts by taking advantage of integrin-mediated signaling. Some pathogenic bacteria, such as Yersinia enterocolitica, Y. pseudotuberculosis, Helicobacter pylori, and Neisseria gonorrhoeae, can bind integrin receptors directly using specific adhesins. However, most microorganisms bind integrin indirectly: they first bind ECM proteins that carry an arginine-glycine-aspartate (RGD) motif, such as fibronectin (Fn) and vitronectin (Vn), and the integrin receptors then bind this motif in the ECM proteins. The integrin "adhesome network" is estimated to include more than 180 potential signaling and adaptor proteins. Several integrin signaling pathway-related proteins, including integrin alpha-5 and vitronectin (Vn), which were enriched in cellular adhesion and binding in our analyses, were more abundant in adhesive samples (Table S1). Integrin alpha-5, encoded by ITGA5, is a member of the integrin family and functions in cell-surface adhesion and signaling. Vitronectin, encoded by VTN, is recognized by some integrins and plays a key role in cell-to-substrate adhesion. Integrins are glycoproteins that are generally composed of one α and one β subunit. In mammals, there are 8 different β subunits and 18 different α subunits that can assort with each other to form 24 different integrins with different ligand-binding specificities.
As mentioned above, many studies have revealed that the gene(s) encoding ETEC F4acR are located in the SSC13q41 region. Our previous GWAS identified 18 SNPs associated with susceptibility to ETEC F4ac, located within the interval from 13:133161078 to 13:139609422 (Table S3). We mapped the integrin family genes onto the most recent porcine genome sequence assembly (Sscrofa11.1) (Table 1) and found only ITGB5 within SSC13q41, between 13:133161078 and 13:139609422. ITGB5 encodes the integrin β5 subunit, which combines with the αv subunit to generate integrin αvβ5, a complex that functions in the innate defense system against bacteria. Integrin αvβ5 is a major endocytic receptor for vitronectin (Vn). Because vitronectin binds both pathogens and epithelial cells, it probably functions as an adapter molecule between them. When Vn binds to Escherichia coli, Staph. aureus, S. pneumoniae, Streptococcus spp., and Pseudomonas fluorescens, it enables more efficient adhesion of the bacteria to epithelial cells. In addition, our iTRAQ-labeled proteome analysis showed that Vn was more abundant in adhesive samples (Table S1). Fimbriae act as lectins that bind to receptors, and destroying the receptors completely abolishes the binding of F4ac fimbriae to enterocytes. To test the hypothesis that the ETEC F4acR protein is integrin αvβ5, we generated cell lines in which ITGB5 was either inactivated by a CRISPR/Cas9-mediated knockout or overexpressed. Both ITGB5 alleles in the resulting monoclonal cell line IPEC-J2-sg4-6 contained mutations (ITGB5−/−). As expected, IPEC-J2-sg4-6 (ITGB5−/−) cells bound significantly fewer bacteria in an adhesion assay (Figure 5B). In the complementary experiment, overexpression of ITGB5 in IPEC-J2 cells significantly increased ETEC F4ac adhesion (Figure 5C). The fimbrial subunit FaeG is the most prominent part for F4ac adherence and is directly involved in the binding of the F4ac fimbriae to the receptors. Results from the GST pull-down assay with purified FaeG and ITGB5 also showed that FaeG binds directly to ITGB5 (Figure 5D). Together, these data suggest that ITGB5 is a key factor affecting ETEC F4ac susceptibility in Large White piglets. The genetic mechanism of the susceptibility of piglets to ETEC F4ac might not be completely the same across breeds, and more research is required to validate the findings in other breeds. CONCLUSION In this study, an iTRAQ-labeled quantitative proteome analysis using a case-control design was performed. ITGB5 was considered to be a promising candidate gene for ETEC F4ac susceptibility in piglets. To test this hypothesis, we established an experimental model for bacterial adhesion using IPEC-J2 cells. ITGB5 gene knockout significantly reduced ETEC F4ac adhesion to porcine intestinal epithelial cells, and overexpression of ITGB5 significantly enhanced adhesion. A GST pull-down assay with purified FaeG and ITGB5 also showed that FaeG binds directly to ITGB5. Together, the results suggest that ITGB5 is a key factor affecting ETEC F4ac susceptibility in Large White piglets. Ethics Statement Animal experiments were carried out in accordance with the Guidelines for Experimental Animals established by the Ministry of Science and Technology (Beijing, China), and all efforts were made to minimize suffering. The protocol was approved by the Institutional Animal Care and Use Ethics Committee of Shandong Agricultural University.
Measurement of Phenotypes The experimental design used to test the susceptibility of piglet intestinal epithelial cells to ETEC F4ac is outlined in Figure 6. The 189 piglets were slaughtered at 35 days of age, and jejunum samples were collected. A 10 cm segment was taken from each of the samples, and the remainder was frozen immediately in liquid nitrogen for later use. The jejunum was cut along its longitudinal axis, and the material was cleaned with a cold hypotonic EDTA solution (5 mmol/L EDTA, pH 7.4). Epithelial cells were obtained by scraping the mucosal surface of the tissue with a glass microscope slide. Using the cells, the piglets were then classified with respect to adhesion phenotype. The E. coli strains were cultured, harvested by centrifugation, and resuspended in PBS (pH 7.4) at an optical density of ∼1.0 at 520 nm. The cell suspension and the bacterial suspension (0.1 mL each) were mixed in 0.4 mg/mL mannose and incubated for 30 min at room temperature. A drop of the mixture was assessed for bacterial adhesion using a phase contrast microscope. Adhesion phenotypes were classified (adhesive, weakly adhesive, and non-adhesive) in the same way as described previously. To minimize the influence of differences in genetic background and environment between individuals on protein expression, we adopted a full-sib paired case-control design for the proteomics analysis, in which four pairs of full-sibs (each with one negative and one positive piglet) from different boars were selected from the 189 piglets. Protein Extraction and Quantitation Samples of intestinal tissues of the eight piglets were ground to a powder in liquid nitrogen using a mortar and pestle. An amount of 200 µL of lysis buffer (7 M urea, 2 M thiourea, and 0.1% CHAPS) was added, with phenylmethanesulfonyl fluoride (PMSF) and ethylene diamine tetra-acetic acid (EDTA) at final concentrations of 1 and 2 mM, respectively. The suspension was sonicated for 60 s (periods of 0.2 s at 22% amplitude at 2 s intervals). The homogenate was incubated at room temperature for 30 min and then centrifuged at 4°C and 15,000 × g for 20 min. The supernatant was collected, and the protein concentration was determined using the Bio-Rad Protein Assay reagent (Bio-Rad Laboratories, CA, USA). Protein Digestion and iTRAQ Labeling Protein digestion was conducted using a published protocol with minor modifications. Briefly, 200 µg of protein from each sample was combined with 10 mM dithiothreitol (DTT) and incubated at 37°C for 1 h. Subsequently, cysteines were blocked by the addition of 40 mM iodoacetamide for 1 h at room temperature in the dark. The supernatant was mixed well with chilled acetone (1:5, v/v) for 2 h at −20°C to precipitate proteins. The protein was diluted 1:3 with 50 mM triethylammonium bicarbonate (TEAB; Applied Biosystems, Milan, Italy) and then incubated with 4 µg trypsin (Promega) at 37°C overnight. The digested peptides were desalted using Sep-Pak C18 cartridges (Waters) and dried in a SpeedVac (Eppendorf). Desalted peptides were labeled with iTRAQ reagents (Applied Biosystems, Foster City, CA) according to the manufacturer's instructions. The control samples (proteins extracted from piglets phenotyped as non-adhesive) were labeled using iTRAQ labels 117, 118, 119, and 121, and the corresponding case samples (adhesive) were labeled using labels 113-116.
LC-MS/MS Analysis First-dimension peptide separation was performed with an Ultimate 3000 liquid chromatography system (RIGOL L-3000, Beijing, China) connected to a strong cation exchange (SCX) column. Then, 60 µL of labeled peptides were injected using the microliter-pickup injection mode into a 4.6 × 250 mm SCX column (Agela Durashell C18) that contained 5 µm particles. SCX buffer A was 98% ddH2O (adjusted to pH 10 using ammonia) and 2% ACN, and buffer B was 2% ddH2O (adjusted to pH 10 using ammonia) and 98% ACN. The flow rate was 0.7 mL/min. Absorbance at 214 nm was measured to monitor elution. From this, 48 fractions were obtained (90 s each) using step gradients of mobile phase B as follows: 5-8% for 5 min, 8-18% for 30 min, 18-32% for 27 min, 32-95% for 2 min and then maintained for 4 min, and decreased to 5% for the final 4 min. The 48 fractions were combined into 10 fractions before second-dimension reverse phase (RP) chromatography. Each fraction was trapped and desalted on an Acclaim PepMap100 precolumn (20 mm × 100 µm, C18, 5 µm) and eluted on an EASY-Spray column (120 mm × 75 µm, C18, 3 µm) for analytical separations. For the second-dimension separation, mobile phases A and B were 2% ACN with 0.1% formic acid and 98% ACN with 0.1% formic acid, respectively. Trapping and desalting were carried out with solvent A for 15 min at a flow rate of 350 nL/min. Analytical separation was accomplished using 5% B for 5 min at a flow rate of 350 nL/min. A linear gradient of 5-35% of mobile phase B was applied during the next 60 min. Subsequently, the gradient was increased to 95% B within 5 min and maintained for the next 12 min. B was then decreased to 5% within 3 min and maintained for 5 additional min. MS analysis was conducted with a TripleTOF 5600 System (AB SCIEX, Concord, ON, Canada) in Information Dependent Mode. Parameter settings were as described by Andrews et al. Peptide and Protein Identification For iTRAQ quantitation, peptides were automatically selected by the Pro Group™ algorithm to calculate the reporter peak area. The algorithm uses only ratios that are unique to a protein to avoid calculating artifacts that can occur when peptides common to two proteins are included. Data were automatically corrected for bias to remove variations caused by unequal mixing during sample preparation. Differences in protein abundance between adhesive and non-adhesive piglets were evaluated using a t test. Differentially expressed proteins (DEPs) were identified using an FDR-adjusted significance threshold of P < 0.05 and fold change (FC) > 1.5. A small number of proteins were excluded from the bioinformatics analysis because they exhibited large variations amongst the four replicates; in these cases, it is possible that significant differences in levels were the result of detection errors. Bioinformatics Analysis Protein identification and relative iTRAQ quantification were performed with ProteinPilot™ 4.2 (AB SCIEX, USA), in which peptides were identified using the Paragon™ algorithm. Data were further processed using the Pro Group™ algorithm, which performs isoform-specific quantification. Peptides were compared to entries in the NCBInr database (69110 sequences; http://www.ncbi.nlm.nih.gov/protein), concatenated with a decoy database containing randomized sequences from the original database. Pathway enrichment analysis for DEPs was conducted using the PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/).
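Conceptually, PANTHER's two pathway tests reduce to textbook statistics: a binomial test for overrepresentation and a Mann-Whitney U test for enrichment. The Python sketch below is a toy stand-in using SciPy (illustrative numbers only, not PANTHER's exact implementation):

from scipy.stats import binomtest, mannwhitneyu

# Overrepresentation: k of n DEPs fall in a pathway that covers a fraction
# p0 of the reference proteome (toy numbers).
k, n, p0 = 12, 245, 0.02
print("overrepresentation P =", binomtest(k, n, p0, alternative='greater').pvalue)

# Enrichment: are the fold-change values of pathway members shifted
# relative to all other proteins? (toy values)
pathway = [2.1, 1.9, 2.5, 1.7]
others = [1.0, 1.1, 0.9, 1.2, 0.8, 1.05]
print("enrichment P =", mannwhitneyu(pathway, others).pvalue)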
Data were analyzed using a statistical overrepresentation test and a statistical enrichment test. The numerical data of our work are the fold-change values for each protein in the differential pairs. DEPs were used as queries in the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING; http://string.embl.de/) to build a functional protein association network. BioCircos was used to visualize the genomic locations of DEGs. Construction of CRISPR/Cas9-sgRNA Expression Vector Single-guide RNAs (sgRNAs) targeted to exons 1 and 2 of Sus scrofa integrin subunit beta 5 (ITGB5) were designed using online CRISPR design tools (http://crispr.mit.edu/). Six sgRNAs (Figure 4A, Table 2) were selected for expression vector construction using clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9)-sgRNA, based on their predicted scores and lower off-target effects. DNA oligonucleotides corresponding to the sgRNAs were synthesized by Invitrogen (Shanghai, China). Annealed oligonucleotides were inserted into pX330-U6-Chimeric_BB-CBh-hSpCas9 (plasmid 42230, PX330, Addgene, a gift from Feng Zhang, Broad Institute of MIT and Harvard) containing two BbsI (R3539S, NEB, Ipswich, MA) restriction enzyme sites, using a published protocol. The sgRNA with the highest efficiency was used for single-clone selection (Table S4). DSBs (double-strand breaks) introduced by CRISPR/Cas9 are primarily repaired by NHEJ (non-homologous end joining), which often generates "indels" around the cleavage site. If indels emerge and form mismatches with wild-type DNA, they can be detected via the T7EN1 (T7 endonuclease I) assay, because the T7EN1 enzyme is sensitive to DNA mismatches. T7EN1 is also the preferred enzyme to scan for mutations triggered by CRISPR/Cas9 and to evaluate knockout efficiency. Purified PCR products were annealed before conducting a T7 endonuclease I (T7EN1) cleavage assay (M0302L, NEB). Digestion products were analyzed by agarose gel electrophoresis. Band intensities were measured using ImageJ (ImageLab, http://imagej.net). The PCR product enzyme digestion frequency, fcut, was determined using the formula (b + c)/(a + b + c), where a is the intensity of the undigested PCR product, and b and c are the intensities of the cleavage bands. Indel formation was estimated from fcut using the binomial probability distribution: indel (%) = 100 × (1 − √(1 − fcut)). Establishment of Cell Line With ITGB5 Gene Knockout CRISPR/Cas9-sgRNA and pEGFP-C1 plasmids were transfected into IPEC-J2 cells using Lipofectamine 2000. Cells transfected with pEGFP-C1 and PX330 but without sgRNA served as a control. Puromycin selection was performed 48 h after transfection and maintained for 8-10 days until all control cells died. After selection, cells were counted using a hemocytometer and diluted to a final concentration of 1 cell per 100 µL. Individual cells were then transferred to 96-well plates and cultured for 10-14 days to obtain single-clone colonies. Cells from each colony were collected by trypsinization, and the cell line was gradually expanded by sequential passage through cultures in 24-well plates, 12-well plates, and 6-well plates. Genomic DNA extracted from single clones was used as a PCR template, and the products were inserted into the pMD19-T vector. TA clones were analyzed by sequencing (Invitrogen, Shanghai, China). The workflow is summarized in Figure 4B. Cloning the ITGB5 Gene Into pEGFP-N1 A full-length cDNA encoding the porcine ITGB5 gene was synthesized by Invitrogen (Shanghai, China).
The product was cloned into the pEGFP-N1 vector at the BglII and KpnI sites after restriction enzyme digestion and ligation using T4 DNA ligase (New England BioLabs). The resulting construct, pEGFP-N1-ITGB5, expressed the sense strand of the gene. Quantitative RT-PCR Analysis Total RNA was extracted from cells with TRIzol reagent (Invitrogen, USA) and reverse-transcribed to cDNA. The qRT-PCR reactions were performed with the Bio-Rad CFX96™ Real-Time System (Bio-Rad). The GAPDH gene served as an internal reference gene, and all reactions were performed in triplicate. Gene expression levels were calculated using the 2^(−ΔΔCt) method. Adhesion Assay Bacterial adhesion to IPEC-J2 cells was evaluated by real-time PCR using procedures described by Candela et al. with slight modification. Briefly, cells (ITGB5-knockout, ITGB5-overexpression, and control cells) were cultured in 6-well plates until reaching 90% confluence. The cells were washed three times with PBS buffer, and then 1 ml of DMEM/F12 and 30 µL of F4ac ETEC strain 200 were added. Cells and bacteria were then co-incubated at 37°C in a 5% CO2-95% air atmosphere for 4 h. Unattached bacteria were removed by washing the monolayers four times with sterile PBS. The remaining (attached) bacterial cells were quantified by real-time PCR performed with the STa primers listed in Table S4. Serial dilutions of bacteria in PBS (1 × 10⁵ to 1 × 10⁹ CFU/mL) were also subjected to real-time PCR and used as standards. GST Pull-Down Assay The full length of the FaeG gene was cloned into pGEX-4T-1 for fusion with a GST tag, and the fragments of ITGB5 with the transmembrane region eliminated were cloned into pCzn1 for fusion with an N-His tag. Recombinant protein was expressed in the E. coli strain Rosetta and purified. The fusion protein of GST-FaeG and GST (control) was then bound to glutathione agarose beads for 4 h at 4°C and washed. His-ITGB5 was purified and desalted, and then incubated overnight at 4°C with the glutathione agarose beads bound with either GST-FaeG or GST. Next, the mixture was washed with PBS three times, and the bead-bound proteins were eluted by boiling in PAGE buffer for 30 min. Finally, Western blotting was performed to determine whether FaeG and ITGB5 interact in vitro. The blots were incubated overnight with either anti-GST antibody or anti-His antibody, and they were then stained using enhanced chemiluminescence (ECL) (Pierce) reagents. DATA AVAILABILITY STATEMENT The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the iProX partner repository with the dataset identifier PXD013722. ETHICS STATEMENT The animal study was reviewed and approved by the Institutional Animal Care and Use Ethics Committee of Shandong Agricultural University. AUTHOR CONTRIBUTIONS QZ, WW, and YY conceived this study. WW and YL were responsible for animal care, prepared samples, and performed the experiments. QZ, YY, WW, and HT performed the data processing and wrote the manuscript. All authors reviewed and approved the final manuscript.
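As a closing technical note on the adhesion assay described in the Methods above: quantifying adherent bacteria against the qPCR standard curve amounts to inverting a log-linear fit of threshold cycle (Ct) against log10(CFU/mL). The Python sketch below is generic, with invented Ct values; the study's actual curve is the one shown in Figure 5A:

import numpy as np

# Standard curve: serial dilutions (CFU/mL) and their measured Ct values.
# The Ct values here are invented for illustration.
cfu = np.array([1e5, 1e6, 1e7, 1e8, 1e9])
ct = np.array([28.1, 24.8, 21.3, 17.9, 14.6])

slope, intercept = np.polyfit(np.log10(cfu), ct, 1)  # Ct = slope*log10(CFU) + b

def cfu_from_ct(ct_sample: float) -> float:
    """Invert the standard curve to estimate bacterial load."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"~{cfu_from_ct(22.0):.2e} CFU/mL")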
from typing import List


def twoSum(nums: List[int], target: int) -> List[int]:
    # Map each value we have already seen to its index; this fixes the
    # original version, which pre-registered complements via nums.index()
    # and missed pairs such as [3, 2, 4] with target 6.
    seen = {}
    for i, ele in enumerate(nums):
        complement = target - ele
        if complement in seen:
            # The complement was seen at an earlier index; return both.
            return [seen[complement], i]
        seen[ele] = i
    return []


if __name__ == '__main__':
    nums = [3, 2, 4]
    target = 6
    print(twoSum(nums, target))  # [1, 2]
/**
 * @return the list of bonuses (primes), one per mission, for a user
 */
public List<?> findAllPrimes() {
    return findAllMission().stream().map(mission -> mission.getPrime()).collect(Collectors.toList());
}
Last month, a rep for Extell told the NY Post that all images of Nordstrom Tower released so far are inaccurate, which is especially confusing considering some of the images came from documents produced in-house at the firm. But the PR doublespeak is technically correct, and YIMBY can now confirm that there has indeed been an additional tweak to the plans, and the country’s future tallest building (by roof height) has been scalped of its spire. Two separate project insiders contacted by YIMBY confirmed that the most recent construction documents left out any indications of a spire, and also sent along the latest schematics, allowing us to create updated renderings of the building. In the realm of absolute English relativity, this does validate the PR team’s claim that the renderings released so far are not final, in that the version presented by Extell earlier this year was not yet value-engineered. Besides discarding the spire, changes appear to have been minimal, although the parapet height has seen a boost to 1,550 feet, and the floor count has increased to 99. Given the value-engineering that happened at One57, a repeat would seem more likely than not at Nordstrom Tower, and any architectural flourishes beyond the bare-minimum in the design spec are only likely to be found on the interiors. Still, the hand of architects Adrian Smith + Gordon Gill should result in something that transforms the skyline in a positive manner, somewhat resembling a taller and more angular version of the firm’s Trump International Hotel and Tower in Chicago. One57’s fault isn’t so much its form as its “waterfall” color palette and the mechanical grills visible up top, and these are both problems that 217 West 57th’s sleek design should avoid, with rooftop hardware encased behind the glass parapets. And while the official height may have been reduced from 1,795 feet, a rooftop 1,550 feet above street level will still make it the tallest building by that measure in the United States, surpassing the 1,450-foot top of the Willis Tower in Chicago (and the 1,397-foot peak of 432 Park Avenue here in New York City). The building has been officially dubbed “Central Park Tower” despite the fact it is not on the Park. 220 Central Park South, now rising across 59th Street, will also obstruct a large portion of the first 1,000 feet of Nordstrom Tower’s Park views. But while many decry the impact of all the new height on the Manhattan skyline, the positive externalities of the project will stretch even farther than the fallacies of its NIMBY opponents. Combined with the new retail components of 432 Park Avenue, Nordstrom Tower’s rise will likely result in new goal-posts for high-end shopping along 57th Street, with the corridor between Broadway and Park potentially emerging as a rival to the upper 50s along Fifth Avenue. With tourists arriving in record numbers and hotel construction also reaching new highs, the expansion of Midtown’s luxury retail core will translate into more jobs for New Yorkers and more tax dollars for the city. Cranes are now going up, foundation pouring is underway, and building permits confirm something exceedingly similar to what’s already been posted will indeed rise. While it seems the spire has been removed, it is always possible that it may return at the last minute, and the most recent changes could simply be a bid for discretion. Though the measure is extreme, it is not without precedent in New York City’s history. The 1.3 million square foot project is expected to wrap in 2019. 
Vancomycin elution from a biphasic ceramic bone substitute Objectives The aim of this study was to analyze drain fluid, blood, and urine simultaneously to follow the long-term release of vancomycin from a biphasic ceramic carrier in major hip surgery. Our hypothesis was that there would be high local vancomycin concentrations during the first week with safe low systemic trough levels and a complete antibiotic release during the first month. Methods Nine patients (six female, three male; mean age 75.3 years (sd 12.3; 44 to 84)) with trochanteric hip fractures had internal fixations. An injectable ceramic bone substitute, with hydroxyapatite in a calcium sulphate matrix, containing 66 mg of vancomycin per millilitre, was inserted to augment the fixation. The vancomycin elution was followed by simultaneously collecting drain fluid, blood, and urine. Results The antibiotic concentration in the drain reached a peak during the first six hours post-surgery (mean 966.1 mg/l), which decreased linearly to a mean value of 88.3 mg/l at 2.5 days. In the urine, the vancomycin concentration reached 99.8 mg/l during the first two days, followed by a logarithmic decrease over the next two weeks to reach 0 mg/l at 20 days. The systemic concentration of vancomycin measured in blood serum was low and decreased linearly from 2.17 mg/l at one hour post-surgery to 0 mg/l at four days postoperatively. Conclusion This is the first long-term pharmacokinetic study that reports vancomycin release from a biphasic injectable ceramic bone substitute. The study shows initial high targeted local vancomycin levels, sustained and complete release at three weeks, and systemic concentrations well below toxic levels. The plain ceramic bone substitute has been proven to regenerate bone and should also be useful in preventing bone infection. Cite this article: M. Stravinskas, M. Nilsson, A. Vitkauskiene, S. Tarasevicius, L. Lidgren. Vancomycin elution from a biphasic ceramic bone substitute. Bone Joint Res 2019;8:49-54. DOI: 10.1302/2046-3758.82.BJR-2018-0174.R2. Introduction Local antibiotic use in the treatment of bone infection is an interesting approach due to the advantage of high local concentrations and low systemic concentrations. 1 It has led to a good outcome not only in active infections, but also when used prophylactically. 5 Bone and joint infections constitute a significant and costly societal burden. Whether caused by trauma, tumour surgery, or joint arthroplasty, they may require repeated invasive revision surgery and extensive systemic antimicrobial treatment that can last for years. The measurement of antibiotic distribution at infected and noninfected sites is recommended by the United States Food and Drug Administration (FDA). 6 The serious consequences of inadequate drug concentration in target tissues include treatment failure and selection pressure for antibiotic-resistant organisms. There have been major advances in aseptic and antiseptic routines over the years, but approximately a quarter of high-energy open fractures treated with fixation devices may still become infected. 1,7,8 Vancomycin is an effective second-choice systemic antibiotic against Gram-positive pathogens involved in bone infections, and is used as an additive in polymeric bone cements for prosthetic joint infection (PJI) revisions. 9 Treatments combining systemic antibiotics with local elution from antibiotic-containing bone cement have been shown to decrease the number of revisions. 10
10 In the last few decades, bone cements containing vancomycin have been recommended, especially in the United States, for PJI prevention in preoperatively screened patients carrying methicillin-resistant Staphylococcus aureus (MRSA). 11 More efficient antibiotic carriers, in the form of material that is replaced by normal bone, are desired. This could provide complete antibiotic delivery and bone healing without the need for further surgery to remove the polymeric antibiotic carriers. In vivo studies have shown a preventive and curative effect of using an injectable vancomycin-containing biphasic ceramic in an osteomyelitis model in rabbits, 12 but no clinical long-term pharmacokinetic release study has been reported. The aim of this study was to investigate the antibiotic elution from a new commercially available bone substitute containing vancomycin (CERAMENT V; BoneSupport AB, Lund, Sweden). The product was developed by adding antibiotics to a clinically well-documented bone-regenerating biphasic ceramic bone graft substitute. It was predicted that the vancomycin concentration would be elevated locally during the first week after implantation, while the systemic concentration of vancomycin would be maintained at a low level. Materials and Methods Nine patients (six female, three male) with trochanteric hip fractures, classified as A1 (n = 1) and A2 (n = 8) according to the AO classification, were included in a single-centre, prospective, observational study. Patients with systemic vancomycin usage before surgery, infection in the hip joint, psychiatric or neurological disorders, renal failure, and/or impaired hearing were excluded. All nine patients were treated with internal fixation using a dynamic hip screw (DHS). The mean age was 75.3 years (sd 12.3; 44 to 84). An injectable ceramic with 40 wt% hydroxyapatite (HA) embedded in a calcium sulphate matrix, containing 66 mg vancomycin per millilitre (CERAMENT V), was used to augment the bone defects at implantation of the DHS. It was injected into the bone defect in the trochanteric region after placing the DHS but before plate implantation. A drain was placed under the fascia on the posterior side along the DHS plate and inserted by a separate incision three centimetres inferior to and in line with the surgical incision. It was a closed suction system that was routinely removed on the second day. None of the patients had systemic vancomycin during the study period. No patient had any kidney disease or renal insufficiency, as evaluated by creatinine levels and glomerular filtration rate. All patients had an extended stay of seven days at the hospital after surgery. Drain fluid, urine, and blood serum samples were collected during this time to analyze the vancomycin release from the ceramic bone graft substitute. The drain fluid was assessed every six hours for 60 hours (2.5 days) postoperatively. The total volume of the drain fluid was measured at each timepoint. A maximum of 10 ml drain fluid was collected for each sample and centrifuged for ten minutes at 2200 g or 4000 rpm at room temperature. The supernatant was then separated from the rest and deep frozen at -80°C prior to analysis. Urine was collected daily during the hospital stay (seven days), and thereafter approximately every three days for a month. During the first four days, it was collected from a urinary catheter, following which morning urine was collected. The samples were homogenized, transferred into two 50 ml tubes, and kept cool in a refrigerator.
Blood serum was assessed every hour for the first six hours post-surgery and every six hours (± one hour) thereafter up to 96 hours post-surgery (a total of 21 samples per patient). A minimum of 4 ml of blood was drawn at each timepoint and placed in a 5 ml heparin tube. It was centrifuged for ten minutes at 2200 g, and the supernatant was transferred to two 5 ml polypropylene tubes and deep frozen at -80°C until analysis. The vancomycin concentrations in all samples (drain fluid, urine, and blood serum) were analyzed using VANC (vancomycin) reagent in conjunction with UniCel DxC 600/800 Systems and the SYNCHRON Systems Vancomycin Calibrator set (both Beckman Coulter, Inc., Brea, California). The detection limit without dilution was 0.1 mg/l. The study was approved by the local independent ethics committee, and informed consent was obtained from all patients. Results A mean of 9.7 ml (sd 0.7; 8 to 10) of the vancomycin-containing bone substitute was injected into each patient (Figs 1a and 1b). Radiographs were taken immediately postoperatively and at five weeks (Fig. 1). No screw penetration or migration was noted. We did not encounter any wound healing problems. Stitches were removed after two weeks. Eight patients were fully weight-bearing at the last follow-up (between 11 and 13 weeks), and four patients used one crutch. The elution of vancomycin was followed for up to 60 hours in the drain fluid, for four days in blood, and for 30 days in urine. The concentration of vancomycin in the drain fluid reached a peak during the first six hours after surgery, with a mean value of 966.1 mg/l (sd 546.3), which decreased linearly to a mean value of 88.3 mg/l at 2.5 days (Fig. 2). Drain fluid was collected up to 60 hours postoperatively from one patient, up to 42 hours from three patients, up to 36 hours from seven patients, and up to 30 hours from all nine patients. One patient had a myocardial infarction after full weight-bearing on postoperative day 6, and was treated at the intensive care unit and the cardiology unit until she died one month post-surgery. The myocardial infarction was evaluated by the principal investigator (ST) and found to be unrelated to the insertion of the bone graft substitute. This patient was included in the analysis; blood was collected for four days, drain fluid for 1.5 days, and urine for six days. In the urine, the vancomycin concentration reached a mean of 99.8 mg/l (sd 49.8) during the first day, which was maintained during day two (mean 113.2 mg/l (sd 74.8)), followed by a logarithmic decrease over the following two weeks to reach levels below the detection limit (< 0.1 mg/l) at 20 days (Fig. 3). Urine was collected up until day 30 from one patient, day 27 from three patients, and day 24 from eight patients. The systemic concentration of vancomycin measured in blood serum was low and decreased linearly from a mean of 2.17 mg/l (sd 0.29) at one hour post-surgery to levels below the detection limit (< 0.1 mg/l) at day 4 post-surgery (Fig. 4). Blood samples were collected until day 4 from all patients. Discussion The vancomycin levels eluted from the ceramic bone substitute were different in the drain fluid, serum, and urine. The highest concentrations were reached in the subcutaneous drain fluid, which was placed close to the implant and therefore regarded as the surrogate local concentration. The local levels measured in the drain fluid were more than 1000 times higher than those in blood during the first week.
There was a very high concentration of vancomycin locally, while the systemic concentration of vancomycin was maintained at a safe and low level. The serum levels remained low during the entire study (at least 6.9 times lower than trough levels), indicating that systemic toxicity is unlikely to occur. Similar observations were made by Wahl et al, 17 who concluded that the local application of vancomycin with calcium sulphate as a carrier showed slow release, systemic safety, and a release profile far more interesting than that from poly(methyl methacrylate) (PMMA). The vancomycin concentration in urine was also initially approximately 100 times higher than in serum, reflecting the fact that levels measured at the implant site were much higher and more sustained than the systemic levels. A detectable vancomycin concentration in urine observed for up to 20 days also indicates prolonged elution of the antibiotic well above minimum inhibitory concentration (MIC) levels. As vancomycin is eliminated by the kidney, the level in urine is a surrogate measure of the local release from the carrier. Vancomycin is approximately 50% protein-bound, with a volume of distribution of 0.4 l/kg to 1.0 l/kg and a β-elimination half-life of three to six hours with normal kidney function. Clearance is linearly related to the glomerular filtration rate. 18 It is important that the initial release is high and significantly above the MIC level locally to prevent bacterial adherence leading to an established deep infection. 19 The MIC level for most vancomycin-sensitive microorganisms is 2 mg/l. 20 In this study, local concentration levels were measured as high as 500 times MIC at six hours, and they were maintained above 200 times MIC for at least 60 hours. This should have allowed for the eradication of any planktonic bacteria that were causing infection and given good preventive support for the bone healing. Such high local vancomycin concentrations have an effect on biofilms, and it has been shown that even lower local vancomycin concentrations, in which case the vancomycin would be combined with rifampicin, may also have an effect on established Staphylococcus aureus biofilm infection. Zimmerli and Sendi 21 reported that the combination of systemic rifampicin (1 mg/l to peak 8 mg/l) and vancomycin (3 mg/l to peak 9 mg/l) was much more effective than monotherapy against biofilm infection in clinical practice. Recent in vitro studies have not been able to verify the synergistic effect on established biofilm, but the discrepancy may be explained by the systemic peak levels achieved clinically with vancomycin. 22,23 Previous studies carried out with the same ceramic bone substitute containing gentamicin showed similar release results. 24,25 A high concentration of antibiotics was observed in the drain fluid (i.e. locally), and the concentration observed in the serum never exceeded the corresponding MIC level of 4 mg/l.
[Figure 3 caption: Chart showing the vancomycin concentration measured in the urine after surgery. There was an initial peak followed by a logarithmic decrease. At approximately 20 days, the vancomycin concentration was below the detection level of 0.1 mg/l.]
[Figure 2 caption: Chart showing the vancomycin concentration measured in the drain fluid after surgery. There was an initial peak followed by a linear decrease. The levels measured in the drain were more than 1000 times higher than those measured in the blood serum and ten times higher than those measured in the urine.]
24 There was also a prolonged antibiotic elution detected in the urine. 24 When compared with antibiotic elution from PMMA, the ceramic bone substitute was more efficient, with a higher peak observed during the first couple of days. 25 The ceramic bone substitute also presented a higher maintained antibiotic concentration during the first ten days, demonstrating effective prevention of bacterial attachment to the bone bed. The elution from the ceramic bone substitute has been shown not to be dependent only on the surface area, as the elution occurs from the bulk of the entire material as well. 24 This is in contrast to PMMA, which is a compact material with minimal microporosity and therefore elutes the antibiotics mainly from the surface. 26 Antibiotic particles are embedded in the bulk of PMMA and may give rise to the long-term elution of subinhibitory antibiotic concentrations with the risk of inducing antibiotic resistance. 27
[Figure 4 caption: Chart showing the vancomycin concentration measured in the blood serum after surgery. The concentrations remained at a low level and decreased linearly to reach below the detectable concentration (0.1 mg/l) at four days.]
The ceramic bone substitute consists of HA and calcium sulphate, where the vancomycin does not bind chemically or electrostatically to the HA. The elution of vancomycin will therefore be controlled by the microporosity of the material (approximately 30%) and ultimately by the resorption rate of the calcium sulphate phase. In this study, it was not possible to detect vancomycin in the urine more than 20 days post-surgery. However, it is probable that the antibiotic accumulates in the surrounding tissue of the defect and that concentrations above MIC will be locally present for the ensuing weeks, depending on the degree of vascularization of the area. 28 In conclusion, this is the first long-term pharmacokinetic report on vancomycin release from a biphasic injectable ceramic bone substitute. The study shows initial high targeted local vancomycin levels (wound drain fluid), sustained and complete release during the first month (verified by the urine concentrations), and systemic concentrations that are well below toxic levels. This material should be useful in preventing bone infection and regenerating new bone tissue. Supplementary Material Tables showing the long-term release of vancomycin from a biphasic ceramic carrier in blood serum, drain fluid, and urine in major hip surgery.
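The reported serum kinetics can be put into perspective with a simple back-of-the-envelope model. The sketch below is not from the paper: it assumes pure first-order elimination, using the cited three- to six-hour β-elimination half-life and the reported 2.17 mg/l serum peak as inputs. Under elimination alone, serum levels would fall below the 0.1 mg/l detection limit within roughly a day, so the observed four-day decline implies continued input into the circulation from the ceramic depot.

import math

# A minimal sketch (assumptions stated above): first-order decay
# C(t) = C0 * 2**(-t / t_half), ignoring ongoing release from the carrier.
def serum_concentration(c0_mg_l, half_life_h, t_h):
    return c0_mg_l * 2.0 ** (-t_h / half_life_h)

peak = 2.17  # mg/l, reported mean at one hour post-surgery
for t in (6, 12, 24, 48, 96):
    fast = serum_concentration(peak, 3.0, t)  # fastest cited half-life
    slow = serum_concentration(peak, 6.0, t)  # slowest cited half-life
    print(f"t = {t:2d} h: {fast:.4f} to {slow:.4f} mg/l")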
CAT: A Customized Automata Toolkit Automata are a kind of abstract computing machine. They play a basic role in computability theory and programming language theory and are widely used in programming language compilers as token scanners and syntactic analyzers. More recently in data analytics, data automata have become a formal way to represent pipelines and workflows. In research involving automata, however, there are many situations where a practitioner has to build a new automaton from scratch, which causes a lot of redundant work to rebuild the framework of an automaton. Moreover, when many researchers need to present their ideas and discuss new algorithms, it is extremely hard for them to switch among different styles of code, not to mention modifying parts of others' programs. To solve this problem, we propose a new toolkit, CAT, which provides a simple and unified framework for automaton construction and customization. This paper introduces the main architecture and functionality of CAT and shows a simple example of how to use it. Finally, we briefly discuss its advantages and disadvantages as well as some future work to improve it.
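The abstract does not show CAT's interface. Purely as a generic illustration of the kind of reusable automaton framework it describes, here is a minimal deterministic finite automaton in Python; all names are hypothetical and are not CAT's API.

class DFA:
    """Minimal deterministic finite automaton (illustrative only)."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states = states
        self.alphabet = alphabet
        self.delta = delta          # transition map: (state, symbol) -> state
        self.start = start
        self.accepting = accepting

    def accepts(self, word):
        state = self.start
        for symbol in word:
            if (state, symbol) not in self.delta:
                return False        # missing transition: reject
            state = self.delta[(state, symbol)]
        return state in self.accepting

# Example: binary strings containing an even number of 1s.
even_ones = DFA(
    states={"even", "odd"},
    alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
assert even_ones.accepts("1001")
assert not even_ones.accepts("10")

A toolkit in this spirit would factor the bookkeeping above into a shared framework, so that a user only customizes the transition structure.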
// Do a simple decode of the datagram to verify that it is coalesced.
pub fn assert_coalesced_0rtt(payload: &[u8]) {
    // An Initial-bearing datagram must be padded to at least 1200 bytes.
    assert!(payload.len() >= 1200);
    let mut dec = Decoder::from(payload);
    // First byte: long header with packet type Initial (0b1100_....).
    let initial_type = dec.decode_byte().unwrap(); // Initial
    assert_eq!(initial_type & 0b1111_0000, 0b1100_0000);
    let version = dec.decode_uint(4).unwrap();
    assert_eq!(version, QUIC_VERSION.into());
    dec.skip_vec(1); // DCID (1-byte length prefix)
    dec.skip_vec(1); // SCID (1-byte length prefix)
    dec.skip_vvec(); // token (varint length prefix)
    // Skip the remainder of the Initial packet to reach the next one.
    let initial_len = dec.decode_varint().unwrap();
    dec.skip(initial_len.try_into().unwrap());
    // The next coalesced packet must be 0-RTT (long header type 0b1101_....).
    let zrtt_type = dec.decode_byte().unwrap();
    assert_eq!(zrtt_type & 0b1111_0000, 0b1101_0000);
}
/*
 * cb_on_incoming_call
 * declares method on_incoming_call for callback struct
 */
static void cb_on_incoming_call(pjsua_acc_id acc_id, pjsua_call_id call_id,
                                pjsip_rx_data *rdata)
{
    if (PyCallable_Check(g_obj_callback->on_incoming_call)) {
        PyObj_pjsip_rx_data *obj;
        PyObject *acc, *call, *ret;

        ENTER_PYTHON();

        /* Wrap the incoming rx_data in a Python object for the callback. */
        obj = (PyObj_pjsip_rx_data *)PyType_GenericNew(&PyTyp_pjsip_rx_data,
                                                       NULL, NULL);
        obj->rdata = rdata;

        /* Py_BuildValue and the call itself return new references;
         * release them afterwards to avoid leaking on every call. */
        acc = Py_BuildValue("i", acc_id);
        call = Py_BuildValue("i", call_id);
        ret = PyObject_CallFunctionObjArgs(g_obj_callback->on_incoming_call,
                                           acc, call, obj, NULL);
        Py_XDECREF(ret);
        Py_XDECREF(acc);
        Py_XDECREF(call);
        Py_DECREF(obj);

        LEAVE_PYTHON();
    }
}
Among drivers who like driving, however, "Nothing has been a perfect replacement for the stick shift yet," said Alexander Edwards, the president of the automotive research division of Strategic Vision of Bandon, Ore. He said that predictions of the death of stick shifts are premature. Several experts theorized that people who consider driving a chore favor automatics because they make the job easier. By contrast, stick shifts "force you to be involved in the driving process," and enthusiastic drivers love that, said John Nielsen, AAA's national director of auto repair and buying. Seeking to give drivers the fun of a stick without the work, automakers are pushing five-, six- and even seven-speed automatics, mainly in sporty cars. Buyer interest in six-speeds increased from 9 percent to 15 percent of all potential buyers in the past five years, according to GfK Custom Research North America, a company that tracks market trends. Other alternatives, such as paddle shifters that shift gears without a clutch pedal, are spreading from high-end sports cars such as Ferraris to more mid-market Nissans and Corvettes. Declining stick sales over time mean that, "Young folks aren't exposed to manual transmissions at all anymore," Nielsen said. "And if you don't learn on it, you'll probably never learn." Take the case of Alicia Carbaugh, 31, a health-policy analyst in Washington. "I've always wanted to learn, but there were no cars around," she said. "My boyfriend at 16 tried to teach me on his RV, which is probably why it was such an awful experience." Jennifer Lickteig, 23, a research intern, offered a similar account. "We only had one stick shift growing up," she said. "It was my father's, and you don't touch my father's car." Kevin Thompson, a Washington Volkswagen salesman, recalled talking a young novice out of buying a stick shift after he stalled five times on his first test-drive. Thompson pushed a clutchless five-speed Tiptronic instead but didn't make a sale. The big picture reflects all these trends, according to Strategic Vision, a consumer research firm based in San Diego. In 1998, a third of drivers under 25 opted to buy a stick shift. Last year, 13 percent did. Accordingly, car makers are making fewer of them. For example, Toyota, the maker of the Scion tC, a sporty coupe designed for younger drivers, offered stick shifts on 30 percent of the model fleet in 2005. Now it offers 25 percent of them with sticks, said Allison Takahashi, a Toyota spokeswoman. Dealers didn't want more, she said. While manual transmissions are cheaper to buy and use less gas, "you'll lose your backside on the resale," said AAA's Nielsen. Indeed, at Thompson's Washington VW dealership, used cars with stick shifts are usually shipped to wholesalers while used automatics are sold off the lot. Marriage also is hard on the stick shift market. According to Strategic Vision, 10 percent of single drivers bought a stick shift in 2008, compared with 6 percent of married drivers. As Volkswagen salesman Ronald Sowell said of his wife, "I tried to teach her manual, but she didn't want to learn. She said that's why they make automatics." "Ladies putting on makeup while they're drinking coffee and talking on their cell phone — they don't have time to shift gears," said Scott Parsons, a sales associate at Mantrans, a manual transmission repair shop in Tallahassee, Fla. 
In fact, while interest in manual transmissions among men declined steadily for a generation, interest among women has grown steadily, from 4 percent in 1985 to 15 percent last year. Stick shifts have "always been associated with guys and power," theorized Patty Gaffney, 31, a bartender at Tonic, a Washington watering hole. "A stick shift makes women feel manly and in control."
Insulin resistance as a noninvasive predictor of esophageal varices in hepatitis C virus cirrhotic patients Objectives The aim was to evaluate the sensitivity and specificity of insulin resistance (IR) as a noninvasive predictor of esophageal varices (EV) in hepatitis C virus (HCV) cirrhotic patients. Background Variceal bleeding due to portal hypertension is associated with a high probability of circulatory dysfunction and even death. However, routine endoscopy is an invasive maneuver which consumes effort, time, and money. IR was studied as an early noninvasive predictor of EV. Patients and methods Eighty cirrhotic patients were included in this prospective case–control study and 20 nondiabetic nonhepatic patients served as the control group. Patients were recruited from the Gastrointestinal Endoscopy Unit of the Tropical Medicine Department, Menoufia University Hospitals, from January 2017 to March 2018. IR was calculated by the homeostasis model assessment: HOMA-IR = fasting insulin (µU/ml) × fasting glucose (mmol/l) / 22.5. Results HOMA-IR showed a highly statistically significant correlation with the presence and grade of EV in HCV cirrhotic patients (P < 0.001). In comparison with other noninvasive predictors, HOMA-IR gave the highest sensitivity, at a cutoff value of 4.41, followed by the midclavicular liver span/albumin ratio at a cutoff value of 3.51 and then the portal vein diameter at a cutoff value of 13 mm. The least sensitive predictor was the platelet count/splenic bipolar diameter ratio at a cutoff value of 1414. Conclusion IR estimated by HOMA-IR can provide sensitive information for determining the presence and grade of EV in HCV cirrhotic patients regardless of their Child–Turcotte–Pugh classification.
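As a worked example of the formula above (patient values hypothetical), the sketch below computes HOMA-IR and compares it against the study's reported 4.41 cutoff for predicting EV.

# A minimal sketch; only the formula and the 4.41 cutoff come from the abstract.
def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA-IR = fasting insulin (uU/ml) x fasting glucose (mmol/l) / 22.5"""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

score = homa_ir(18.0, 6.1)   # hypothetical patient values
print(f"HOMA-IR = {score:.2f}; EV predicted: {score >= 4.41}")  # 4.88; True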
/**
 * Refreshes the preview by model modification. Used by non-model change.
 */
private void refreshPreview( )
{
	// Toggle a harmless model property (title visibility) and restore it,
	// so that listeners repaint the preview without a real model change.
	boolean currentValue = getChart( ).getTitle( ).isVisible( );
	// The first flip happens with notifications suppressed...
	ChartAdapter.ignoreNotifications( true );
	getChart( ).getTitle( ).setVisible( !currentValue );
	ChartAdapter.ignoreNotifications( false );
	// ...and restoring the original value then fires the change
	// notification that actually triggers the preview refresh.
	getChart( ).getTitle( ).setVisible( currentValue );
}
Impact of spatial dispersion, evolution, and selection on Ebola Zaire Virus epidemic waves Ebola virus Zaire (EBOV) has reemerged in Africa, emphasizing the global importance of this pathogen. Amidst the response to the current epidemic, several gaps in our knowledge of EBOV evolution are evident. Specifically, uncertainty has been raised regarding the potential emergence of more virulent viral variants through amino acid substitutions. Glycoprotein (GP), an essential component of the EBOV genome, is highly variable and a potential site for the occurrence of advantageous mutations. For this study, we reconstructed the evolutionary history of EBOV by analyzing 65 GP sequences from humans and great apes over diverse locations across epidemic waves between 1976 and 2014. We show that, although patterns of spatial dispersion throughout Africa varied, the evolution of the virus has largely been characterized by neutral genetic drift. Therefore, the radical emergence of more transmissible variants is unlikely, a positive finding, which is increasingly important on the verge of vaccine deployment. The role of zoonosis in EBOV transmission is well recognized, yet incompletely understood. Between 2001 and 2005, 47 dead animals, including 23 great apes (18 gorillas, 5 chimpanzees), were discovered in Gabon and DRC. EBOV infection was confirmed for 13 gorillas, three chimpanzees, and one duiker 7. The ongoing seventh DRC Ebola outbreak has been traced back to a single index case, a pregnant woman who butchered a monkey and subsequently spread the infection to four healthcare workers 8. Fruit bats have also been implicated in harboring the virus 9. However, the spatial dispersal of EBOV is incompletely explained by a single zoonotic transmission event. More likely, several animals are responsible for bridging transmission events between epidemic waves 10. With the current epidemic ongoing and new cases reported daily, there is an urgent need to understand the transmission of EBOV in Africa. This requires an understanding of the geographic spread of the virus in the context of the evolutionary factors (i.e. selection) driving the repeated emergence of EBOV in Africa, which can be inferred using historical genetic data from previous outbreaks. The spatial dispersal can also provide important insights that will allow for assessment of the impact of zoonosis on current and past epidemics. Previous phylogenetic analyses of the 2014 Ebola outbreak have focused on the phylogenetic relationships among strains isolated from Guinean patients and other sequences of the genus Ebolavirus, which includes the Bundibugyo, Reston, Sudan, Taï Forest and Zaire species 6,11. Estimates of evolutionary rates have shown that the accumulation of nucleotide substitutions has nearly doubled during the current epidemic when compared to previous outbreaks and has resulted more frequently in non-synonymous polymorphisms 11,12. These findings suggest that the ecological niche for Ebola may be expanding due to the emergence of new genetic variants driven by ongoing positive selection 13. However, a systematic investigation of how selection has been affecting EBOV spatial dispersion across West Africa, leading to the past and current epidemic waves, is still lacking.
In this study, we used Bayesian phylogeography to reconstruct the spatial spread of the virus since the 1970s and analyzed the posterior distribution of phylogenetic trees for evidence of selection along the major backbone of the EBOV genealogies - representing major lineages propagating through time from the root node to each epidemic outbreak. The results clearly indicate that EBOV evolution has consistently been driven by neutral genetic drift, rendering the emergence of even more aggressive variants under positive selection unlikely. Results EBOV epidemics were divided into groups based on the temporality of each epidemic wave and phylogenetic clustering in the Bayesian analysis (Fig. 1A). The genealogy and spatial dispersion were mapped for visualization. The Bayesian Maximum Clade Credibility (MCC) EBOV phylogenetic tree exhibited a staircase-like topology, with focal epidemics in specific geographic areas giving rise to epidemics in subsequent years and leading to the current 2014 epidemics (Fig. 1A). Furthermore, the genealogy illustrated the emergence of two lineages after the 1976 epidemic in DRC. The first lineage gave rise to the 1994–1996 epidemics in Gabon and DRC (Group II), and later to the 2001–2002 epidemics in Gabon, Cameroon, and Republic of Congo (Group III). The second lineage gave rise to the 2001–2005 epidemics in Gabon and Republic of Congo (Group IV). Group IV included EBOV sequences collected from humans and great apes that were concurrently infected, highlighting the spillover events that occurred during this time point. Last, Group VI represents the current 2014 epidemic in Guinea (Gueckedou and Kissidougou) and Sierra Leone (Kailahun District), which emerged from a common ancestor shared with sequences from the 2007 epidemic in Luebo, DRC (Group V). The general topologies in the posterior distribution of trees (Fig. 1B) were highly consistent with the topology of the MCC tree (Fig. 1A), indicating that this particular topology was well supported given the data. The phylogenetic structure between distinct epidemics was further investigated by likelihood mapping analysis 14. The evaluation of all possible quartets (groups of four sequences) for each group of sequences sampled during distinct EBOV epidemic waves (Fig. 1) demonstrated significant differences in phylogenetic signal between epidemics (Supplementary Fig. S1). Most notably, 100% and 93% of the quartets from the 2007 epidemic in Luebo, DRC (Fig. 1, Group V) and the current 2014 epidemic in Sierra Leone and Guinea (Fig. 1, Group VI), respectively, were distributed in the center of the likelihood map (Fig. 2A). This indicates a strong star-like phylogenetic signal reflecting an outburst of new lineages due to exponential epidemic spread. 15 Interestingly, however, the current 2014 outbreak (Group VI), as well as the 1976–77 (Group I) and 2007–08 (Group V) epidemics, were characterized by the lowest genetic diversity (Fig. 2B). In contrast, groups II–IV, including strains from the 1990s and early 2000s epidemics, were characterized by a more structured, tree-like topology (star-like signal <50%) and higher within-group diversity (Fig. 2B), typical of multiple introductions of genetically diverse strains evolving from independent lineages over a longer time span. The overall demographic history of EBOV was also investigated through a Bayesian coalescent framework 16.
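For context on the likelihood-mapping percentages quoted above: each quartet is placed in the triangular map according to the relative likelihoods of its three possible unrooted topologies, and quartets falling in the center indicate star-like signal. Below is a minimal sketch of that classification (not TREE-PUZZLE itself; the thresholds are illustrative assumptions, as the program defines the exact region geometry).

import math

def classify_quartet(logL1, logL2, logL3):
    """Normalize the three topology likelihoods into weights summing to 1
    and assign the quartet to a region of the likelihood map."""
    m = max(logL1, logL2, logL3)
    w = [math.exp(l - m) for l in (logL1, logL2, logL3)]
    p = [x / sum(w) for x in w]
    if max(p) > 0.9:
        return "corner: resolved, tree-like signal"
    if min(p) > 0.2:
        return "center: unresolved, star-like signal"
    return "edge: conflicting, network-like signal"

print(classify_quartet(-1000.2, -1000.5, -1000.4))  # near-equal -> center
print(classify_quartet(-980.0, -995.0, -996.0))     # one clear winner -> corner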
The GMRF Skygrid model, representing non-parametric estimates of virus effective population size (Ne) over time, with a relaxed molecular clock was found to be the best-fitting model. Molecular clock calibration estimated the evolutionary rate of the EBOV GP gene at 1.075 × 10^-3 substitutions site^-1 year^-1 (95% HPD: 9.32 × 10^-4 to 1.22 × 10^-3) and dated the time of the most recent common ancestor (TMRCA) to 1976 (95% HPD: 1975–1976), in agreement with known epidemiological data 5. Ne estimation (i.e. the number of genomes effectively giving rise to the next generation) indicated an increasing population that peaked in 2007 and subsequently decreased (Fig. 3), consistent with the decrease in genetic diversity observed during the current epidemic. Phylogeographic analysis illustrated the spatial diffusion of EBOV within the West-African region (Supplementary File 1). All geographical migrations were found to be significant (lnBF > 3) when the dispersal was assessed using SPREAD. From our analysis, it is likely that migration of EBOV outward from Yambuku, DRC, in 1976 served to seed additional regions in DRC and Gabon, where the virus emerged again in 1994 and 1995, respectively. This lineage then radiated outward, causing sporadic outbreaks in Gabon, Cameroon, and Republic of Congo from 2001 to 2002. A similar radiating spatial dispersion pattern was inferred among Gabon and Republic of Congo strains (Group IV), causing contemporaneous human and non-human primate infections. A closely related virus emerged again in Luebo, DRC, in 2007, and subsequently led to the current Sierra Leone and Guinea epidemics 2,400 miles away. Interestingly, the Republic of Congo and Gabon epidemics spanning 2001 to 2005 (Groups III and IV) are geographically compartmentalized, representing two distinct clades on separate lineages of the MCC genealogy. Given the observed variations in spatial dispersion patterns and evolutionary dynamics between different epidemics, we then sought to investigate the presence of selective pressures driving the emergence of new viral variants during each epidemic wave. We first assessed synonymous (dS) and non-synonymous (dN) substitutions among sequence pairs within groups I–VI. While dN/dS increased among groups along the backbone of the genealogy leading to the 2014 epidemic, only the 1976–1977 epidemic (Group I) significantly departed from strict neutrality (i.e., dN > dS) (Supplemental Table 1). A greater number of dN mutations was also observed for the 2014 epidemic (Group VI). As these findings provided evidence that dN and dS varied across EBOV epidemics, we then compared rates of change at the codon level that would result in dN or dS substitutions by independently estimating the evolutionary rates of the 1st + 2nd codon positions and the 3rd codon position. Overall, the evolutionary rate (molecular clock) for the 3rd codon position was significantly greater than that for the 1st + 2nd codon positions, which suggests low levels, if any, of positive selection (Fig. 4A). Comparison of absolute rates of dS and dN substitutions along the backbone path of the EBOV genealogy also showed that dS and dN substitutions accumulated at similar rates (Fig. 4B), indicating that the evolution of the major EBOV lineage successfully propagating through time, from its emergence in 1976 until the current epidemic, has been driven by neutral genetic drift.
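To make the estimated rate concrete: under a strict-clock approximation (the study itself used a relaxed clock, so this is only indicative), the expected pairwise distance between two isolates is twice the rate multiplied by the time back to their common ancestor.

# A minimal sketch using the GP rate estimated above.
rate = 1.075e-3            # substitutions / site / year
t = 2014 - 1976            # years back to the 1976 common ancestor
d = 2 * rate * t           # expected pairwise distance (substitutions/site)
print(f"expected divergence: {d:.3f} substitutions per site")  # ~0.082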
Selection was further investigated by evaluating specific amino acid sites along internal and terminal branches of the genealogy using six methods available on the online server (http://www.datamonkey.org) 17. The analysis identified only one amino acid site found to experience significant diversifying selection by three methods, and two sites (210 and 664) under significant purifying selection by three and four methods, respectively (Table 1). When assessing the location of these sites along the branches of the genealogy, however, we found that although this amino acid site was located on an internal branch, it did not propagate along the backbone of the lineage leading to the current epidemic. The remaining mutations were located on terminal branches of the genealogy, likely representing transient polymorphisms.
Figure 2 caption: Comparison of between- and within-group evolutionary diversity with phylogenetic signal. (a) Between-group estimates of evolutionary diversity (i.e. nucleotide substitutions per site, averaged over all sequence pairs between groups) using the maximum composite likelihood model with a gamma distribution. Each group (I–VI) defines a specific epidemic. Percentages along the diagonal represent the results from the likelihood mapping phylogenetic signal analysis (Materials and methods); the greater the percentage, the higher the "star-like" signal for each clade. (b) Within-group estimates of evolutionary diversity and standard errors using the maximum composite likelihood model with a gamma distribution for the six distinct groups representing distinct epidemics.
Discussion At the time of this writing, the countries of Guinea, Liberia, and Sierra Leone have all reported intense, widespread transmission of EBOV, whereas others have had an initial case or cases with localized transmission (Nigeria, Senegal, Spain, and the United States of America). The phylogenetic history of EBOV leading to the current epidemics in Africa is seemingly complex, with punctuated focal outbreaks emerging in multiple discrete geographic regions over the past four decades. Upon analysis of 65 dated EBOV GP gene sequences from several affected countries throughout West Africa, we found that the current Sierra Leone epidemic in Kailahun is most closely related to the circulating viral strain in the Guinean cities of Kissidougou and Gueckedou (Fig. 1), representing two distinct clades in the Bayesian MCC genealogy. The single clade including all Kailahun cases is consistent with a single point-source introduction followed by direct person-to-person transmission. This is concordant with previous analyses of the current Guinea outbreak, which found that the Guinean epidemic strain emerged from an EBOV lineage previously found in DRC, Republic of Congo, and Gabon 6,11. Furthermore, the two lineages responsible for the current Guinea and Sierra Leone outbreaks diverged from the EBOV responsible for the 2007 outbreak in Luebo, DRC. The divergence of the Guinean epidemic from the Central African lineage has previously been estimated to have occurred in 2002 11. The spread of EBOV from Central to West Africa may involve movement of people and animals, increasingly connected by expanding populations and improved infrastructure (i.e. roads). Pigott et al. recently estimated that zoonotic transmission is possible in 22 countries of Central and West Africa, encompassing a population greater than 22 million 13.
In comparison to previous Central African outbreaks in Congo (1995) and Uganda (2000), recent studies have demonstrated that the transmission dynamics of the current epidemics are comparable in terms of reproductive rate, serial interval, and case fatality ratio 18. However, EBOV transmission dynamics in Central and West Africa are seemingly different. In West Africa, villages are linked by an extensive transportation network, whereas in Central Africa the affected villages are remote and poorly connected 8. Therefore, the dissemination patterns may vary based on geography, in concert with the epizootic epidemiology of EBOV in and among animal vectors. Furthermore, previous EBOV spread rates have been estimated at 50 km per year 10, which complicates the explanation of the nearly 3,900 km traversed in the migration from Central to West Africa over a seven-year period. While the introduction into West Africa has yet to be explained, it is evident that there are strong epizootic and zoonotic components to EBOV transmission. For example, the Republic of Congo and Gabon epidemics spanning 2001 to 2005 (Groups III and IV) are phylogeographically distinct. Notably, Group IV (Fig. 1A) contains both human and great ape strains from the Republic of Congo and Gabon epidemics on a single, well-supported monophyletic clade with high within-group diversity, suggesting contemporaneous epidemics in humans and animals and multiple non-human-primate-to-human spillover events 7,19. The compartmentalization of these epidemics suggests variations in reservoirs and intermediate non-human primate host species, as has been previously suggested 5. GP is the most widely studied region of the EBOV genome, having an evolutionary rate ideal for phylogenetic studies on this timescale 20,21. Its expression on the virion surface is responsible for host cell attachment and fusion, making it an essential component of pathogenicity as well as a potential vaccine target 22,23. Yet, few studies have comprehensively assessed selection in the GP gene 24. Our estimate of the evolutionary rate of the GP gene was 1.075 × 10^-3 substitutions site^-1 year^-1 (95% HPD: 9.32 × 10^-4 to 1.22 × 10^-3), consistent with previous studies 11, and Ne reconstruction demonstrated an exponentially increasing population. Despite varying patterns of spatial dispersal and epidemic origin, the central finding of the current study is that EBOV evolution, at least after the first known epidemic wave in 1976–77, has largely been driven by neutral genetic drift. There was some evidence that dN/dS has varied across epidemic waves but, excluding the initial 1976–77 epidemic when there were no observed dS substitutions, deviations from neutrality were not statistically significant. Furthermore, amino acid substitutions were mostly transient, either located on terminal branches of the genealogy or removed by purifying selection. Investigation of site-specific selection identified only one site on an internal branch that was significant for positive selection. Importantly, however, no amino acid substitutions were located on backbone branches leading to the current 2014 epidemic. Overall, the rates of dN and dS did not significantly vary between epidemic waves, providing little evidence that selection acting upon the GP gene is driving EBOV evolution.
Given that we observed a greater proportion of dN than dS substitutions within the 2014 clade (Group VI), we cannot exclude the possibility that advantageous mutations could occur in the future and become fixed in the population. Yet, it is noteworthy that while clades with dN/dS > 1 were observed during past epidemics, these variants did not propagate through time. The proline to leucine substitution at amino acid site 429 in the Mucin-like domain (MLD, nucleotides 1,285–1,287) occurred twice along the EBOV genealogy, once after the bottleneck following Group I, with the diverging Groups II and III possessing the substitution, and again along the branch leading to the current 2014 epidemic. Interestingly, there was a reversion of leucine to proline on a terminal branch of Group II (strain 4KI). Although the MLD is dispensable for EBOV infections in vitro 25,26 and is the least conserved of the GP domains 27, it has been determined to have several functions, including enhancing viral adhesion 28,29 and protecting conserved regions of GP from antibody recognition 30,31. Therefore, the potential effect of this mutation on virulence and/or pathogenicity should be further investigated. Unfortunately, we were unable to identify the specific location of this amino acid and the potential effect of its change on local structure because the crystal structure of this highly flexible domain has not yet been determined. Meanwhile, the two sites found to be under purifying selection were located on terminal branches of the genealogy. Overall, together with the observation of equivalent increases in dN and dS across the genealogy, it is evident that EBOV evolution is largely driven by neutral genetic drift. Furthermore, while it has been observed that the rate of dN is higher among EBOV genomes in the current epidemic, there is no evidence yet that any of these mutations are adaptive, or whether they will become fixed in the population or removed by purifying selection. Based on the current analysis, we posit the latter scenario is more likely 12. The ongoing EBOV epidemics in Central and West Africa continue to claim lives, and it is estimated that the epidemics will expand exponentially 12. As the transmission dynamics of these epidemics are consistent with those previously studied, our analysis provides evidence that the exponential growth of cases is attributable to variations in population structure (e.g., mobility) and large-scale transmission events, rather than viral factors such as enhanced virulence or transmissibility. Phylogenetic analysis of the epidemic has revealed several gaps in our knowledge of Ebola epidemiology. First, it is evident that the virus has been circulating in animal populations spanning the 2007 and 2014 epidemics. More studies are needed to examine zoonotic transmission and subsequent spillover into human populations, which would better explain the spatial rate of spread and the bottlenecks observed between epidemics. Second, while EBOV evolution has until now been characterized by neutral genetic drift, the progressive accumulation of mutations during epidemic spread increases the possibility of viral adaptation. Overall, continued genomic surveillance of the epidemic is required to assess adaptability and selection, which is increasingly important on the verge of vaccine deployment.
Likelihood mapping analysis and diversity estimates.
In addition to testing for phylogenetic signal, the analysis of groups of four randomly chosen sequences (quartets) through the process of likelihood mapping provides an indication of phylogenetic structure and population expansion 14. For a quartet, three unrooted tree topologies are possible. The likelihood of each topology is estimated and the three likelihoods are reported as a dot in an equilateral triangle (the likelihood map). Three main areas in the map can be distinguished: the three corners, representing fully resolved tree topologies (i.e., the presence of tree-like phylogenetic signal); the center, which represents star-like genealogy; and the three areas on the sides, indicating network-like genealogy (i.e., presence of recombination or conflicting phylogenetic signals). Extensive simulation studies have shown that >50% of dots falling within the central area indicates substantial star-like signal, or an outburst of multiple phylogenetic lineages, often associated with exponential epidemic growth 14,15. Likelihood mapping analysis was performed using the program TREE-PUZZLE 34 by analyzing all possible quartets for each of the six groups representing specific epidemics. The analysis was then repeated for the entire genealogy. Between- and within-group divergence/diversity were determined by estimating the nucleotide substitutions per site over all sequence pairs between and within groups using MEGA v6.06 35. A maximum composite likelihood model with a gamma distribution was used to estimate divergence and diversity for each of the six distinct clades/groups. Evolutionary rate estimates. Evolutionary rates were estimated using a Bayesian Markov chain Monte Carlo (MCMC) method implemented in the BEAST package v1.8.0 16,36 employing a non-parametric Gaussian Markov random field (GMRF) Skygrid 16,37,38 evolutionary model and both a strict and a relaxed clock with an uncorrelated log-normal rate distribution. The GMRF Skygrid model provides enhanced performance compared to the Bayesian skyline plot (BSP) and Bayesian Skyride models by parameterizing Ne and smoothing the trajectory. TMRCA is also better estimated, since the prior is independent of the genealogy. The alignment was partitioned into first + second codon positions and third codon positions. An HKY nucleotide substitution model with gamma-distributed rates among sites was selected. Chains were run for 2 × 10^8 generations, sampled every 20,000 steps, for each molecular clock model. Posterior probabilities were calculated using the program Tracer v1.6 after 10% burn-in. Convergence was assessed on the basis of the effective sample size (ESS), and only parameter estimates with ESS values of >200 were accepted. Marginal likelihood estimates for each model were obtained using path sampling and stepping stone analyses. Uncertainty in the estimates was indicated by 95% highest posterior density (95% HPD) intervals, and the best-fitting models were selected by a Bayes factor 39,42. The GMRF Skygrid model enforcing a relaxed molecular clock was selected as the most appropriate representation of the Ebola demographic history 42–44. Time-scaled phylogeography reconstruction.
The time-scaled phylogenetic reconstruction and the phylogeographic analysis were conducted using a Bayesian MCMC method implemented in the BEAST package v1.8.0 36,41, implementing the HKY+G nucleotide substitution model with codon partitioning and assuming a relaxed clock with an uncorrelated log-normal rate distribution and the GMRF Skygrid demographic model (previously selected by a Bayes factor). Statistical support for specific clades was obtained by calculating the posterior probability of each monophyletic clade. MCMC chains were run for at least 200 million generations and sampled every 20,000 steps. The continuous-time MCMC process over discrete sampling locations implemented in BEAST v1.8.0 45 was used for the phylogeographic analysis, implementing the Bayesian Stochastic Search Variable Selection (BSSVS) model, which allows diffusion rates to be zero with a positive prior probability. Comparison of the posterior and prior probabilities of the individual rates being zero provided a formal BF for testing the significance of the linkages between locations. Rates with a lnBF of >3 were considered well supported and formed the migration pathway. The maximum clade credibility (MCC) tree (the tree with the largest product of posterior clade probabilities) was selected from the posterior tree distribution after a 10% burn-in using TreeAnnotator v1.8.0. The final trees were manipulated in FigTree v1.4.2 for display purposes. The posterior distribution of trees was also visualized with DensiTree 46. In the DensiTree, well-supported branches are designated by solid colored areas, while webs represent less agreement. The migration routes were visualized using SPREAD v1.0.6 and mapped with ArcGIS v10.1. Selection analysis. MEGA6 was used to estimate the mean synonymous (dS) and non-synonymous (dN) substitutions for each group of sequences representing distinct epidemic waves based on phylogenetic and epidemiological data 47. For each group of sequences, the Nei-Gojobori method was used to test the hypothesis of dN > dS (i.e. a deviation from strict neutrality), and variance was computed using 1000 bootstrap replicates 48. The numbers of dS and dN substitutions along the EBOV genealogy were then estimated by reconstructing and comparing ancestral sequences along the EBOV trees sampled from the posterior distribution of the Bayesian analysis. This approach is an empirical extension of the coalescent-based Bayesian molecular clock models 49. Branch lengths proportional to either dS or dN substitutions were re-estimated using a subsample of 200 trees randomly selected from the posterior distribution obtained by BEAST, and dN and dS point estimates over time were plotted. For each clock-like tree, estimates were obtained for all, internal, and backbone paths. The backbone path comprises the lineages effectively surviving from the root node to the sequences sampled at the last time point. Sites under diversifying or purifying selection were assessed using six different methods available on the Datamonkey online server (http://www.datamonkey.org) 17. Selection tests included single likelihood ancestor counting (SLAC), fixed effects likelihood (FEL), internal FEL (IFEL), random effects likelihood (REL), Fast Unconstrained Bayesian Approximation for Inferring Selection (FUBAR), and the Mixed Effects Model of Evolution (MEME).
FEL and IFEL assess selection along all branches and internal branches, respectively, while REL replicates the Nielsen and Yang model implemented in PAML 53. Sites found to be statistically significant for positive or negative selection (P < 0.1, posterior probability > 0.9, or Bayes factor > 50) by more than two methods were further assessed to identify their location within the EBOV genealogy.
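To illustrate the dN/dS distinction these methods build on, here is a minimal sketch. It is not MEGA's Nei-Gojobori implementation: it only counts synonymous versus non-synonymous differences between two aligned coding sequences for codons differing at a single position, and omits the normalization by the numbers of synonymous and non-synonymous sites that the full method performs.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard genetic code, indexed by (first, second, third) base in TCAG order.
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def count_dn_ds(seq1, seq2):
    """Return (non-synonymous, synonymous) difference counts."""
    dn = ds = 0
    for pos in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[pos:pos + 3], seq2[pos:pos + 3]
        if sum(x != y for x, y in zip(c1, c2)) != 1:
            continue  # identical codons, or multi-hit codons needing path averaging
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            ds += 1
        else:
            dn += 1
    return dn, ds

# TTT->TTC is synonymous (Phe->Phe); ATG->ACG is non-synonymous (Met->Thr).
print(count_dn_ds("TTTATG", "TTCACG"))  # -> (1, 1)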
A total of 1667 children aged 3 to 6 years were randomly examined in kindergartens of Hangzhou from April to June 2006. Enterobius vermicularis eggs were detected by the cellophane swab technique. Eggs of Ascaris lumbricoides, Trichuris trichiura and hookworms in fresh stool samples were examined by the Kato-Katz thick smear and saturated brine flotation methods. 216 children (12.96%) were found to be infected with intestinal nematodes. The prevalence of E. vermicularis, A. lumbricoides, T. trichiura and hookworm was 4.44%, 8.28%, 0.54% and 0.24%, respectively. Higher prevalence was found in kindergartens with poorer environment and sanitation.
Colour ornamentation in the blue tit: quantitative genetic (co)variances across sexes Although secondary sexual traits are commonly more developed in males than females, in many animal species females also display elaborate ornaments or weaponry. Indirect selection on correlated traits in males and/or direct sexual or social selection in females are hypothesized to drive the evolution and maintenance of female ornaments. Yet, the relative roles of these evolutionary processes remain unidentified, because little is known about the genetic correlation that might exist between the ornaments of both sexes, and few estimates of sex-specific autosomal or sex-linked genetic variances are available. In this study, we used two wild blue tit populations with 9 years of measurements on two colour ornaments: one structurally based (blue crown) and one carotenoid-based (yellow chest). We found significant autosomal heritability for the chromatic part of the structurally based colouration in both sexes, whereas carotenoid chroma was heritable only in males, and the achromatic part of both colour patches was mostly non-heritable. Power limitations, which are probably common among most data sets collected so far in wild populations, prevented estimation of sex-linked genetic variance. Bivariate analyses revealed very strong cross-sex genetic correlations in all heritable traits, although the strength of these correlations was not related to the level of sexual dimorphism. In total, our results suggest that males and females share a majority of their genetic variation underlying colour ornamentation, and hence the evolution of these sex-specific traits may depend greatly on correlated responses to selection in the opposite sex. INTRODUCTION Since Darwin's development of sexual selection theory, theoretical and empirical work has greatly progressed towards explaining the mechanisms responsible for the evolution and maintenance of exaggerated male ornaments and weaponry (see, for example, Andersson, 1994; Savalli, 2001). Although secondary sexual characters are commonly more developed in males than females, in many animal species females also display elaborate ornaments (for example, conspicuous colours) or weaponry. After ignoring the issue for decades, evolutionary biologists have struggled to explain the evolution and maintenance of secondary sexual characteristics that are also exaggerated in females (Amundsen, 2000; Lebas, 2006; Clutton-Brock, 2007). Two nonexclusive hypotheses have been suggested as explanations for the evolution and maintenance of female ornaments: indirect selection for elaborate female traits through direct selection on correlated male traits (correlated response hypothesis), and direct selection on female traits occurring through either female–female competition for mates, male mate choice (Byrne and Rice, 2006) or social competition for ecological resources. Although each of these hypotheses has received some empirical support, there is currently no consensus about whether one plays a prevailing role, even within a given taxonomic group such as birds. A recent comparative analysis on 6000 species of passerines concluded that both female and male plumage colourations are more extravagant in larger species and in tropical species. Yet, the strength of sexual selection has antagonistic effects in the two sexes, as it increases male colouration while decreasing female colouration, supporting the possibility of independent evolution as suggested by previous studies.
The work by Dale et al. also confirms that the general focus on male ornamentation has limited our understanding of the evolution of colour ornaments in both sexes. Even though the presence of strong cross-sex genetic covariances is a crucial assumption underlying the correlated response hypothesis, sex-specific estimates of key quantitative genetic (co)variances underlying secondary sexual traits that are expressed in both sexes have been conspicuously scarce in the empirical literature (but see Price and Burley, 1993; Price, 1996; Chenoweth and Blows, 2003; Roulin and Jensen, 2015). In particular, studies on the role of ornamentation have largely focussed on sexually dimorphic species while neglecting species with low or no sexual dimorphism. In the absence of further investigation into the heritability of sexual ornaments/weapons and their cross-sex genetic covariance, no generality can be drawn from the present empirical data regarding the importance of the hypotheses cited above. For example, in the review of Poissant et al., only 14 out of 549 estimations of cross-sex genetic correlations concerned ornaments or weaponry, and the strength of these correlations varied substantially across taxa and across trait types (for example, between morphological traits and traits linked to communication). In addition, theory for the evolution of sexual dimorphism also predicts that the degree of phenotypic difference between the sexes should be negatively associated with the cross-sex additive genetic covariance (and, to a certain extent, the cross-sex additive genetic correlation), and positively associated with the amount of sex-linked genetic variance (Fairbairn and Roff, 2006). Evidence supporting these predictions is presently very limited (Dean and Mank, 2014). Quantitative genetic analyses in natural populations based on long-term observations of individual phenotypes and relatedness (pedigrees) could offer a means to estimate sex-linked genetic variance. However, the large majority of studies in wild populations estimate additive genetic (co)variances while assuming only autosomal inheritance. Recent investigations on colour variation have revealed Z-linked genetic variance in the collared flycatcher Ficedula albicollis (explaining 40% of total phenotypic variance in wing patch size), the barn owl Tyto alba (30% of variance in eumelanic spot diameter) and W-linked genetic variance in the zebra finch Taeniopygia guttata (2.6% of variance in beak colouration). In many other cases, however, investigations show no evidence for sex-linked genetic variance in colour ornamentation (for example, the Florida scrub-jay Aphelocoma coerulescens). Overall, contributions of sex-linked genetic variance to phenotypic variance in sexually selected and morphological traits measured in pedigreed populations are usually weak, yet it is commonly acknowledged that this could be because of low power to distinguish autosomal from sex-linked genetic variance. It is indeed currently unclear whether wild population pedigrees used for quantitative genetic analyses confer sufficient power to disentangle autosomal additive genetic variance from other components of genetic variance (Wolak and Keller, 2014), as power analyses are not performed in these studies.
Colouration is often a sexually and/or socially selected trait that can signal individual quality and identity, as well as signal species identity, enhance crypsis, provide thermoregulatory benefits and protect against bacteria (Hill and McGraw, 2006), and is therefore central to an animal's fitness. However, to date, we know very little about the heritability of colouration (Svensson and Wong, 2011) or the genetic correlation that might exist between the sexes (Roulin and Ducrest, 2013). Comparative analyses have shown that colouration can evolve conjointly or separately in the two sexes (Amundsen, 2000), but quantitative genetic studies of colouration are required to determine the main factors driving the observed sex-specific evolutionary patterns. Although blue tits can appear sexually monomorphic to a human eye, spectrophotometry analyses have shown that blue tits from the subspecies Cyanistes caeruleus caeruleus are sexually dichromatic in the ultraviolet (UV) blue of the crown patch but monomorphic in their yellow carotenoid-based chest colouration. Both male and female UV blue colouration influence intrasexual interactions, mutual mate choice and mate reproductive investments. In addition, male and female UV blue and yellow adult colouration is condition dependent and can be linked to parental investment or success and to parasite levels. Overall, all these studies suggest that both UV blue and yellow colouration can be sexually selected in both sexes, yet Parker and colleagues (Parker, 2013) have recently challenged this view. Parker et al. found weak but contrasting evidence of fecundity selection on colouration for both sexes over 3 years. Following a meta-analysis that considered all previous studies with the same strength, regardless of the pertinence and robustness of their methodology, Parker further concluded that the sexual and/or social functions of blue and yellow colouration in blue tits remain to be demonstrated. This debate highlights the need for more studies on the colour patches in this species, and the examination of the cross-sex genetic correlation is an essential step to advance our understanding of the evolution of ornaments in both sexes. Despite many documented and proposed selective advantages of colour ornaments in blue tits, only three quantitative genetic studies have been conducted on colouration in this species (Hadfield et al., 2007), showing low autosomal heritability for both types of colouration. Furthermore, the indirect selection hypothesis remains untested, as there are no estimates to date of the cross-sex additive genetic covariance. We used 9 years of colour measures in long-term monitored blue tits located in a Mediterranean mainland population (subspecies C. c. caeruleus) and on the island of Corsica (subspecies C. c. ogliastrae) to investigate the sex-specific and cross-sex additive genetic (co)variances underlying colour ornamentation traits that show a gradient of sexual dimorphism and have been suggested to be involved in intra- or inter-sexual selection. Colour features were measured in one structurally based (blue crown) and one carotenoid-based (yellow chest) ornament.
In the context of improving our understanding of the evolution and maintenance of sexual ornaments and the importance of genetic correlations in the evolution of female ornaments, our aims were originally threefold: (i) assessing whether there is autosomal and/or sex-linked genetic variation for colour ornamentation in the blue tit; (ii) measuring the strength of cross-sex genetic covariances, with the particular aim of evaluating whether female ornament evolution could be driven by such covariances; and (iii) testing the theoretical predictions that the degree of sexual dimorphism is negatively associated with the cross-sex additive genetic covariance and positively associated with the amount of sex-linked genetic variance (Fairbairn and Roff, 2006).

Sampling procedure and colour measurement
Blue tits have been monitored in the Rouvière forest (mainland France) since 1991 and at two localities in Corsica since 1976 (Pirio) and 1994 (Muro). Details on these study sites can be found in Blondel et al. and Charmantier et al. Blue tits from Corsica belong to a different subspecies from blue tits found in the French Mediterranean mainland. The distance between Muro and Pirio in Corsica is 25 km. In order to improve our power for quantitative genetic models, individuals from these two valleys were pooled in one common Corsican data set. Supplementary Information A2 provides statistical justification for this choice based on a test for equality of additive genetic variances between the two populations. Each year, breeding parents were captured in nest boxes between April and June. A small proportion of individuals were caught before the breeding period. Each bird was equipped with a uniquely numbered metal ring provided by the Muséum National d'Histoire Naturelle in Paris; six blue feathers were collected from the bird's blue crown and eight yellow feathers from the yellow chest to allow colour measurements in the lab. Bird sex and age were determined based on the capture-recapture database or on the colour of wing coverts for unringed birds. Chicks were ringed after 9 days of age, which allowed us to build social pedigrees for each population. Genotyping of parents and offspring in 2000-2003 has shown that up to 29.3% of chicks (annual range: 18.2-29.3%) were the result of extra-pair matings in Corsica, and 18.2% (annual range: 11.5-18.2%) on the mainland. The social pedigree used in this study was corrected for extra-pair paternities only for chicks born in 2000-2003 in both populations. In these years, molecular genetic data allowed the identification of 53% of extra-pair sires, whereas nonidentified genetic fathers were assigned a dummy identity. The Corsican pruned pedigree included 1507 individuals over 14 generations and the mainland pedigree 1233 individuals over 12 generations. Feather colouration was measured in laboratory conditions, using a spectrometer (AVASPEC-2048, Avantes BV, Apeldoorn, Netherlands) and a deuterium-halogen light source (AVALIGHT-DH-S lamp, Avantes BV) covering the range 300-700 nm (Doutrelant et al., 2012), with the probe kept at a constant angle of 90° from the feathers. For each bird and colour patch (crown and chest), we computed the mean of six reflectance spectra taken on two sets of three blue and four yellow feathers (Doutrelant et al., 2012).
We used the software Avicol v2 to compute chromatic and achromatic colour variables based on the shape of the spectra, following previous studies on blue tits in our populations (see, for example, Doutrelant et al., 2012) and others. For the UV blue crown colouration, we computed one achromatic variable: blue brightness (area under the reflectance curve divided by the width of the interval 300-700 nm); and two chromatic variables: blue hue (wavelength at maximal reflectance) and blue UV chroma (proportion of the total reflectance falling in the range 300-400 nm). Lower values of hue and higher values of UV chroma mean that the signal is stronger in the UV. For the yellow chest colouration, in addition to yellow brightness, we computed yellow chroma as (R700 − R450)/R700. Higher values of yellow chroma are linked to higher carotenoid contents in the plumage. We have shown previously that our measures of these five colour traits using a spectrometer are highly repeatable (see, for example, Doutrelant et al., 2012), suggesting acceptable measurement error. Figure 1 displays average spectra for blue crown and yellow chest measures in 2011. The complete data set included 3629 observations with at least one colour parameter measured (n = 1659 in Rouvière (mainland), n = 1035 observations in Muro (Corsica) and n = 935 in Pirio (Corsica)) for a total of 2177 birds (see Table 1 for detailed sampling efforts on males and females). Supplementary Figure S1 presents the distribution of each colour parameter in Rouvière and in Corsica, and Supplementary Table S1 shows the phenotypic correlation between each pair of traits in the mainland and the Corsican populations (Supplementary Information). As Supplementary Table S1 illustrates, among the five classically used and biologically relevant measures of colouration, some are phenotypically correlated, yet Spearman's rank correlations did not exceed 0.672 in absolute value. The strongest phenotypic correlation was between blue UV chroma and blue hue (Spearman's rank correlation ranging from − 0.476 to − 0.672). All other trait combinations showed correlations of absolute value less than 0.4.

Sexual dimorphism
For each trait in both data sets, we measured the degree of sexual dimorphism in colour ornamentation by calculating a standardized effect size: Cohen's d, and its associated standard error (equations 10 and 16 in Nakagawa and Cuthill, 2007). Cohen's d effect size is a dimensionless statistic; a value of 0.2 would typically be suggestive of small sexual dimorphism, whereas a value of 0.8 would be interpreted as revealing strong sexual dimorphism.

Quantitative genetics
Exploring fixed effects. Prior to conducting quantitative genetic models, we ran linear mixed models to explore the contribution of fixed effects (year of measure, year of birth, period of measurement and individual age) to the various colour parameters in both data sets.

Figure 1: Average UV blue crown spectra for male (blue) and female (orange) blue tits sampled in 2011 (a) in Corsica and (b) on the mainland. Average yellow chest spectra for male (black) and female (red) blue tits sampled in 2011 (c) in Corsica and (d) on the mainland. Thick lines represent mean spectra and shaded areas associated s.d. values. Plots were realized using the R package 'pavo' (www.rafaelmaia.net/r-packages/pavo).
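As an illustration of how the colour variables defined above map onto a reflectance spectrum, the following sketch computes brightness, hue, UV chroma, yellow chroma and Cohen's d from a synthetic spectrum. The data and names are placeholders for illustration only; this is not the Avicol v2 implementation used in the study.

```python
import numpy as np

# Hypothetical reflectance spectrum sampled on a 1-nm grid from 300 to 700 nm;
# in the study these spectra came from the Avantes spectrometer via Avicol v2.
wavelengths = np.arange(300, 701)                                        # nm
reflectance = np.random.default_rng(0).uniform(5, 40, wavelengths.size)  # percent

# Brightness: area under the reflectance curve divided by the interval width.
brightness = np.trapz(reflectance, wavelengths) / (700 - 300)

# Hue: wavelength at maximal reflectance.
hue = wavelengths[np.argmax(reflectance)]

# UV chroma: proportion of total reflectance falling within 300-400 nm.
uv_chroma = reflectance[wavelengths <= 400].sum() / reflectance.sum()

# Yellow chroma: (R700 - R450) / R700.
r700 = reflectance[wavelengths == 700][0]
r450 = reflectance[wavelengths == 450][0]
yellow_chroma = (r700 - r450) / r700

# Cohen's d: standardized male-female difference with a pooled standard deviation.
def cohens_d(males, females):
    nm, nf = len(males), len(females)
    pooled_var = ((nm - 1) * np.var(males, ddof=1) +
                  (nf - 1) * np.var(females, ddof=1)) / (nm + nf - 2)
    return (np.mean(males) - np.mean(females)) / np.sqrt(pooled_var)
```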
For all traits, only year of measure was retained in all models as a categorical fixed effect (see details in Supplementary Information A1).

Univariate animal models. Genetic (co)variances, heritabilities and genetic correlations were estimated using restricted maximum-likelihood (REML) estimation procedures implemented in the software ASReml v3.0. For each data set and colour measure, we first implemented a sex-specific univariate 'animal model' that combined the phenotypic measures for a given sex with the pedigree information to partition the phenotypic variance into an additive genetic variance (V_A), a variance due to permanent environment effects (V_PE, based on repeated observations of individuals) and a residual variance (V_R), while controlling for annual fluctuations using year as a single fixed effect. In such a model, the phenotypic value y_i of an individual i is written as:

y_i = μ + year + a_i + PE_i + ε_i    (Equation 1)

The additive genetic effect on individual i (a_i) was assumed to be normally distributed with a mean of zero and variance V_A. The permanent environment effect (PE_i) and residual errors (ε_i) were also assumed to be normally distributed, with zero means and variances V_PE and V_R. Residual errors were assumed to be uncorrelated within individuals across measurements. In the Corsican model, a genetic group determined the Muro/Pirio origin for each bird. The additive genetic variance estimates were tested against a null hypothesis of zero by carrying out likelihood ratio tests, where minus two times the difference in log likelihood between a model including the variance and a model without it was tested against the χ² distribution with one degree of freedom.

Bivariate animal models and cross-sex additive genetic variance. In order to estimate the cross-sex additive genetic covariance for each colour measurement, we expanded Equation 1 to a bivariate model where the phenotypic values of males (m_i) and of females (f_i) are explained by fixed (year of measure) and random effects (as previously, additive genetic, permanent environment and residual effects):

m_i = μ_m + year + a_m,i + PE_m,i + ε_m,i
f_i = μ_f + year + a_f,i + PE_f,i + ε_f,i    (Equation 2)

This bivariate animal model provides sex-specific estimations of additive genetic variances (V_Am, V_Af), permanent environment variances (V_PEm, V_PEf) and residual variances (V_Rm, V_Rf). In this model, each character is sex specific and cannot be measured in males and females simultaneously, and hence this model cannot fit any between-individual (permanent environment) or within-individual (residual) covariance. However, it can fit a cross-sex additive genetic covariance (COV_Am;f), from which we estimate the cross-sex additive genetic correlation:

r_Am;f = COV_Am;f / √(V_Am × V_Af)

A bivariate animal model was fitted for each colour trait in each population, with a genetic group specified for Muro and Pirio individuals in the case of the Corsican data set. The additive genetic covariance estimates were tested against a null hypothesis of zero by carrying out a likelihood ratio test using the χ² distribution with one degree of freedom. In order to test for a genotype-by-sex interaction, which occurs when a given genotype has different phenotypic expressions in males versus females, we compared the original model with a model where V_Am = V_Af = COV_Am;f, using likelihood ratio tests and the χ² distribution with two degrees of freedom.
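The likelihood ratio tests described above amount to a simple computation once the two REML log-likelihoods are in hand. Here is a minimal sketch in Python, with placeholder log-likelihood values rather than estimates from the study (in the study these came from ASReml v3.0 fits):

```python
from scipy.stats import chi2

# Placeholder REML log-likelihoods for models fitted with and without V_A.
logL_with_va = -1234.5
logL_without_va = -1238.9

lrt = 2.0 * (logL_with_va - logL_without_va)  # minus two times the log-likelihood difference
p_value = chi2.sf(lrt, df=1)                  # one constrained parameter (V_A = 0)
print(f"LRT = {lrt:.2f}, P = {p_value:.4f}")
```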
To allow comparisons between traits and populations, we also report sex-specific coefficients of additive genetic variance, CV_Am and CV_Af, in which the square root of the additive genetic variance is scaled by the trait mean x̄:

CV_A = 100 × √V_A / x̄

Including a Z-linked genetic variance. We conducted power analyses to determine the ability of the animal model to estimate sex-chromosomal and autosomal additive genetic variance given our blue tit pedigrees and data structures. Specifically, our goal was to determine whether we could detect Z-chromosome-linked additive genetic variance (V_Z) in the blue tit colouration data. Our general approach was to use Monte Carlo simulation to reassign individual phenotypes with known (that is, simulated) sources of trait covariation in the population and then use animal models with each simulated data set to test the null hypothesis that Z-chromosomal additive genetic (co)variances were equal to zero. Over many replicate simulations, the proportion of significant P-values (P < 0.05) obtained from our null hypothesis tests reflects the power (the probability of rejecting the null hypothesis when it is false) of the animal model to estimate V_Z. We note that this does not determine the power of the animal model to provide unbiased estimates of autosomal (V_A) and sex-linked (V_Z) additive genetic variances (Supplementary Information). We simulated random effects underlying observed phenotypes similar to those modeled for the observed data (Equation 2): additive genetic (autosomal and Z-linked), permanent environment and residual effects (Supplementary Information A3). We used 27 unique combinations of autosomal additive genetic, permanent environment and residual variances along with cross-sex autosomal and sex-linked additive genetic correlations (Supplementary Table S2). Within each of these unique combinations, the Z-linked additive genetic variance was set to one of seven values: σ²_Z,male = 1, 10, 30, 50, 60, 70 or 90, to assess the power at each level. For each of the above parameter combinations, in each of the two data sets (Corsica and Rouvière), we simulated phenotypes for every individual (Supplementary Equation S1 in Supplementary Information) a total of 1000 different times. We used R (R Core Team, 2014) and the R package nadiv to simulate each of the above effects (Supplementary Information A3). We used the model of sex-chromosomal additive genetic variance of Fernando and Grossman that assumes no global sex chromosomal dosage compensation or recombination between the Z and W chromosomes. Simulated phenotypes were analysed with an animal model implemented in the ASReml R package (v3.0). Models were conducted with and without the Z-linked additive genetic (co)variance terms, and minus two times the difference in these model log likelihoods was used to calculate a likelihood ratio test statistic. Probabilities of obtaining a difference in log likelihoods were assigned assuming an asymptotically χ²-distributed test statistic with three degrees of freedom. For a given set of parameters, we used the proportion of P-values < 0.05 as an estimate of power. Full details are available in the Supplementary Information A3.
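Schematically, the power estimate reduces to the fraction of replicate likelihood ratio tests that reject the null. The sketch below illustrates that final step with synthetic log-likelihoods standing in for the 1000 ASReml fits per parameter combination; the values are placeholders, not study output:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
n_replicates = 1000

# Placeholder paired log-likelihoods for models fitted with and without the
# Z-linked (co)variance terms; the gamma offset simply guarantees the full
# model fits at least as well as the reduced one.
logL_full = rng.normal(-5000.0, 5.0, n_replicates)
logL_reduced = logL_full - rng.gamma(2.0, 1.0, n_replicates)

lrt = 2.0 * (logL_full - logL_reduced)   # likelihood ratio statistics
p_values = chi2.sf(lrt, df=3)            # three Z-linked (co)variance terms
power = np.mean(p_values < 0.05)         # proportion of significant tests
print(f"estimated power: {power:.2f}")
```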
Additive genetic (co)variances and heritabilities
As detailed in Table 2, the bivariate animal models revealed that the chromatic part of the crown colouration (blue UV chroma and hue) was overall heritable (except for 2 of the 8 estimated colour parameters: male hue and female UV chroma in Corsica), with heritability estimates ranging from 0.07 to 0.19 in Corsica (0.73-4.06 for CV_A) and from 0.18 to 0.23 in Rouvière (1.10-3.98 for CV_A). In contrast, the achromatic part of the crown colouration (brightness) was heritable for both sexes (with heritabilities of 0.18 and 0.10 for males and females) in Corsica but not heritable for either sex in the mainland Rouvière population (although CV_As were high), suggesting it is more sensitive to nongenetic variation than chromatic parameters. Similarly, the achromatic part of the yellow colouration was nonheritable in both sexes and populations, whereas the chromatic part was significantly heritable in males (heritabilities of 0.13 and 0.25 in Corsica and the mainland), but not in females. Differences in model log likelihoods where sex-specific additive genetic variances, V_Am and V_Af, were unconstrained or constrained to be equal indicated genotype-by-sex interactions in only two cases: in Corsica for blue hue (P = 0.003) and blue UV chroma (P < 0.0001). Estimated COV_Am;f (Table 2) for all blue measures and for Corsican yellow chroma were large and significantly greater than zero. COV_Am;f was not significantly different from zero for yellow chroma in Rouvière, yet this is most likely explained by the very small V_Af, which prevents a correct estimation of the covariance.

Power analysis for sex-linked genetic variance
Overall, the power simulations revealed low power to estimate Z-linked additive genetic variance in our two data sets (see partial results in Figure 2 and Supplementary Information A3). Using a common rule of thumb for power, the Corsican data only achieve a minimum level of desired power (80%) when the Z-linked between-sex additive genetic correlation is one (bottom row, Supplementary Figure S2a), Z-linked additive genetic variance is very high and autosomal additive genetic variance is two. The animal model combined with the Rouvière population structure (Supplementary Figure S2b) achieves 80% power under less restrictive conditions, although this still requires Z-linked additive genetic variance to comprise at least 50% of total phenotypic variance (that is, h²_Z-linked > 0.5).

Sexual colour dimorphism
All colour traits displayed some sexual dimorphism, apart from yellow brightness in the Rouvière population (Figure 3; all paired one-sided Student's t-tests with P < 0.016, except for yellow brightness in Rouvière: P = 0.061), with males being more colourful than females for both ornaments, with brighter blue and slightly brighter yellow.

DISCUSSION
Autosomal and sex-linked genetic variation for colour ornamentation in the blue tit
Autosomal genetic variation. Our quantitative genetic analyses reveal higher heritabilities for the crown blue UV colour than previously estimated (Hadfield et al., 2006a), confirm that the chromatic part of yellow colouration can be heritable in males (Evans and Sheldon, 2012) and reveal a lower heritability for yellow chroma in females than in males. Blue UV colouration depends on the microstructure of the plumage, whereas yellow colouration is influenced by carotenoid contents and by microstructure (Shawkey and Hill, 2005).
Food is the sole source of carotenoids for blue tits, as animals cannot synthesize them. Hence, stronger environmental dependence is expected in carotenoid-based colouration compared with structurally based colouration. This prediction is upheld by our results for females, but not so much for males, where there is no strong contrast between heritabilities for carotenoid-based and structurally based colouration. The fact that male yellow chroma is as heritable as the UV blue colouration in both populations, and displays high CV_Am, is very interesting. As yellow chroma is related to carotenoid content in the feathers, more chromatic individuals are often depicted as having higher foraging capacities and/or higher parasite resistance. Our results suggest that more chromatic males have male offspring that are more chromatic themselves. This could be interpreted as more chromatic males having higher abilities at finding food and/or at parasite resistance, and that their male offspring inherit these aptitudes, either genetically or nongenetically. Indeed, although the additive genetic variance is estimated here based on a variety of relatedness types, the animal model cannot always accurately distinguish between genetic and shared environmental or social resemblance between relatives when the large majority of individuals in the pedigree are siblings or parent-offspring (Wolak and Keller, 2014). A male-specific social rather than genetic inheritance of yellow chroma, for example, mediated by paternal care, could explain why this trait is less heritable in females (although note that CV_As are of similar magnitude). As females disperse longer distances than males in the blue tit, males sharing microhabitats could possibly lead to a male-specific nongenetic inheritance pattern. Such sex-specific environmental covariance between relatives needs to be investigated in future work, ideally using experimental approaches to isolate genetic and environmental effects. In any case, such father-son resemblance in yellow chroma makes it a good candidate for a sexually selected trait to optimize both direct and indirect benefits for females. The moderate but significant heritabilities presented here are consistent with previous estimates in colour patches of blue tits (Hadfield and Owens, 2006) and great tits (Evans and Sheldon, 2012), yet they are much smaller than heritabilities associated with the sizes of melanin and white colour patches in other species (ranging from 0.28 to 0.90; see, for example, Roulin and Jensen, 2015). Melanin and white patches have previously been suggested to be influenced by an individual's condition (Griffith, 2000), including long-lasting effects of early environments. However, our recent understanding of the genetic determinism of melanism, as well as the comparison with our present results, suggests that variation in black and white ornaments may be less susceptible to body condition than structural and carotenoid-based colourations. In particular, we found that additive genetic variance explains only a small proportion of total variation in the achromatic part of yellow chest colouration, consistent with findings in other blue and great tit populations (Hadfield and Owens, 2006; Hadfield et al., 2006b).
These results suggest that most variation in this aspect of colouration is likely attributable to environmental sources, including individual condition. Differences in condition dependence across colouration signals have been demonstrated experimentally using drug or nutritional treatments. Two comparative analyses have also revealed that sexual dichromatism (used as a proxy of sexual selection) is more intense for carotenoid-based or structurally based colouration than for melanin-based colouration (Badyaev and Hill, 2000). Our variance partitioning in the blue UV crown colour reveals some striking differences between the two blue tit subspecies (for example, heritable blue brightness in Corsica only), although the absence of other comparable studies prevents any generalization. In addition, we found higher heritabilities overall than Hadfield et al. (2006a), thereby illustrating that the genetic determinism of colouration can vary across populations and requires further quantitative genetic investigations of colouration both within and across species (see review in Mundy, 2006).

Sex-linked genetic variation. Although it is now clear that many genes underlying sexual dimorphism are not sex linked and that sex linkage is not a requirement for sexual dimorphism (Fairbairn and Roff, 2006; Dean and Mank, 2014; Roulin and Jensen, 2015), there is accumulating evidence for sex linkage of genes underlying sexually dimorphic traits, especially with the increasing accessibility of genetic mapping in nonmodel organisms (Charlesworth and Mank, 2010; Huang and Rabosky, 2015). Recent evidence suggests that Z-linked genetic variance can explain as much as 40% of the total phenotypic variation in colour ornaments of birds (see Introduction). However, the statistical power to estimate Z-linked additive genetic variance in our two data sets was very low (see partial results in Figure 2 and Supplementary Information A3). Although we found more power in Rouvière than in the Corsican populations (possibly because of a higher pedigree connectedness), it is unlikely that an animal model using data collected from either population would have enough power to detect Z-linked additive genetic variance. Only when the simulated autosomal additive genetic variance was at its lowest value and the simulated Z-chromosomal additive genetic variance was among its highest values would conventional rules of thumb deem there to be sufficient power (that is, power > 80%) to calculate Z-chromosomal additive genetic variance. Although empirical estimates of sex-chromosomal additive genetic variance are few, it seems an unlikely condition to find such high sex-chromosomal heritability almost to the exclusion of autosomal heritability. Overall, these simulations revealed that we could not test the hypotheses involving sex-linked genetic variation, and that most if not all previously published results on sex-linked genetic variance suffered from a similar lack of power. This is a worrying report that calls for further simulations to determine the structure and size of pedigree and data required to estimate sex-linked genetic variance.

Cross-sex genetic covariances and female ornamentation
In the animal kingdom, dimorphic traits under sexual selection have been shown to be associated with a whole range of cross-sex genetic correlations: from low (for example, in Drosophila serrata; Chenoweth and Blows, 2003) to very strong correlations (for example, in the red deer Cervus elaphus).
The sparse and contrasted results prevent us from drawing general conclusions on the link between genetic covariances across sexes and the evolution of sexual traits. In our study, estimated cross-sex additive genetic correlations (r_Am;f) were high (close to one), even in cases where the trait was not significantly heritable in one sex (for example, blue hue and yellow chroma in Corsica; see Table 2 for details). To our knowledge, only one other study explored r_Am;f for blue structural colours (in Florida scrub-jays), with similar results of very strong cross-sex genetic correlations. These results validate the fundamental assumption underlying the correlated response hypothesis. They suggest that evolution of female crown colouration could be drastically constrained by indirect selection acting on males. Analogous conclusions can be drawn from estimates of r_Am;f in carotenoid-based ornaments: the evolution of colouration in one sex is likely to have a strong influence on the colouration in the other sex. However, we found large variability in our estimates, with r_Am;f of yellow chest chroma ranging from 0.22 in Rouvière to 1 in Corsica. The few estimates of cross-sex genetic correlations for carotenoid-based colour traits so far in the literature show a similarly large range of values for r_Am;f. A high additive genetic correlation was found for beak redness in the zebra finch (r_Am;f = 0.926), but a study of yellow brightness, saturation and hue in blue tit nestlings showed r_Am;f ranging from − 0.13 to 0.19 with very large confidence intervals. These very divergent results call for further investigations on cross-sex genetic correlations in carotenoid-based ornaments in a wider range of species and populations. New genomic tools might also soon allow the identification of genomic regions involved in colour variation in both males and females, thereby revealing whether the same genes influence plumage colouration in both sexes (Roulin and Ducrest, 2013; Kraaijeveld, 2014; Huang and Rabosky, 2015). Our quantitative genetic analyses used social pedigrees that were only partially corrected for extra-pair paternity. Hence, additive genetic variances and heritabilities in Table 2 could be underestimated, although as said above they were overall larger than reported in previous studies (Charmantier and Réale, 2005). Unfortunately, little is known on how errors in paternity assignment due to extra-pair reproduction can affect the estimation of genetic covariances and sex-linked genetic variance. A study combining data on extra-pair occurrence and parental colour is planned for our study populations so that we may quantify to what extent misassigned paternity will bias quantitative genetic (co)variance estimates.

Linking the degree of sexual dimorphism to cross-sex additive genetic covariance
In accordance with previous studies in this species (Delhey and Peters, 2008), blue characteristics were all highly dimorphic, with the strongest dimorphism expressed in the blue UV chroma, whereas yellow characteristics showed small or moderate dimorphism. Interestingly, the Corsican subspecies of blue tits was significantly more dimorphic for yellow chroma than mainland birds (two-sided Student's t-test, P = 4 × 10⁻⁸), whereas the reverse was true for blue hue (two-sided t-test, P = 0.0001). Although sexual dimorphism in yellow is usually considered very small for this species, and possibly below the detectable level for birds, strong dichromatism has been reported once before, in central Spain.
Our personal observations across the ultramarinus complex (C Doutrelant and G Sorci, unpublished data) suggest that the yellow sexual dimorphism might be a characteristic of blue tits in the southern part of the species distribution. These observations limited to the southern edge of the distribution could be explained by differences in selective forces acting on this ornament. Southern blue tit populations are subject to more drastic food limitation than northern ones. Comparative selection analyses would confirm whether these increased environmental constraints result in different selection pressures acting on male and/or female yellow colouration, in particular on yellow chroma, as it is directly linked to the carotenoid content of the feather, and is heritable. Homologous characters in the two sexes, such as blue crown colour and yellow chest colour in blue tits, are presumably controlled, at least in the early evolution of these traits, by very similar sets of genes, leading to strong cross-sex genetic covariance. As any dimorphic character, these traits are likely to be under antagonistic selection in males and females, which, combined with a strong cross-sex genetic covariance, would create an intralocus sexual conflict (Lande, 1980; Bonduriansky and Rowe, 2005). This leads to the classic prediction that the degree of sexual dimorphism should be inversely correlated with the level of COV_Am;f (Fairbairn and Roff, 2006) and of r_Am;f (Lande, 1980; Bonduriansky and Rowe, 2005). The negative relationship between the cross-sex additive genetic covariance and the magnitude of sexual dimorphism is generally upheld over a range of trait types and across a variety of animal and plant species (Fairbairn and Roff, 2006; Bonduriansky, 2007). However, studies on the role of ornamentation in sexual selection have largely focussed on conspicuously sexually dimorphic species, neglecting species with low or no sexual dimorphism. Estimating cross-sex genetic covariances for weakly dimorphic or nondimorphic species/traits is now a necessary stepping stone in our understanding of the evolution of sexual dimorphism. In our blue tit study, this prediction was not validated when comparing the five colour traits with varying degrees of dimorphism. Indeed, the most dimorphic traits (blue UV chroma and blue UV hue) displayed strong COV_Am;f in both data sets, with r_Am;f close to 1, and the only nonsignificant COV_Am;f was found in one of the least dimorphic traits (yellow chroma). These results imply that the evolution of sexual dimorphism in this species was not facilitated by low intersexual genetic covariance, suggesting other mechanisms should be considered. First, the observed sexual dimorphism in colour could be driven by environmental differences rather than genetic ones, with a greater sensitivity of one sex to environmental variation. For instance, it has been shown in insects that sex-specific phenotypic plasticity can generate variation in sexual size dimorphism. Differences in plasticity between males and females should lead to consistent differences in sex-specific heritabilities for similar levels of CV_A, but this is not a general result witnessed across the focal traits in Table 2. Second, genes linked to sex chromosomes could explain the sexual dimorphism over and above the autosomal genetic (co)variances estimated here, although we could not estimate such sex-linked genetic variance.
Third, cross-sex genetic covariances may have changed over the course of the evolution of sexual dimorphism. Meagher has suggested that during the evolution of sexual dimorphism, loci that show sex-specific expression should be strongly selected for and should become fixed, thereby no longer contributing to the additive genetic variance. This could explain how COV_Am;f could be temporarily low or negative during the evolution of dimorphism, but then large and positive once the sex-specific loci are fixed. An important limitation of our study is that we could not adopt a truly multivariate approach, where genetic covariances between suites of traits within and between the sexes might provide a different view on the genetic constraints for the evolution of sexual dimorphism. Indeed, the evolutionary trajectory of a given sex-specific character can be constrained or facilitated by selection acting on the variance displayed by the same trait expressed in the other sex, but also by positive or negative genetic correlations with other traits within and between both sexes (Blows and Hoffmann, 2005). For this reason, future studies will need to integrate cross-sex genetic covariances across traits with multivariate selection analyses (Lande, 1980) in order to fully uncover how sexually antagonistic selection and intralocus sexual conflicts can promote or constrain the evolution of divergent male and female traits. Such an approach has been adopted recently in a study of a laboratory population of Drosophila melanogaster and also in a natural population of barn owls (Roulin and Jensen, 2015). Yet, model complexity combined with data availability still largely prevents such multivariate analyses in many natural populations.

CONCLUSION
Overall, this study brought three major advancements in our understanding of the evolution of colour ornamentation and sexual dimorphism. First, the present analyses demonstrated heritability for UV colouration (in both sexes) and yellow colouration (in males), a major requirement for the evolution of colour through sexual or social selection. Second, our simulations revealed the low power of animal models to estimate sex-linked additive genetic variance in wild populations, thereby hampering our ability to test a major hypothesis for the evolution of sexual dimorphism. Third, in the current debate on the evolution of female ornaments, the present results suggest that cross-sex genetic correlations can be very high in colour traits across varying degrees of dimorphism. A fine-scale analysis of sex-specific forces of natural, social and sexual selection is now required to determine the role of indirect (selection acting on males) and direct selection for the evolution of female ornaments. Future genomic studies should be used to determine whether the same genes underlie colouration in males and females.

DATA ARCHIVING
Phenotypic and pedigree data sets are available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.gp384. The raw data will be embargoed for 5 years, but could be made available during this period upon request to the authors.
A Baseline Analysis of Regulatory Review Timelines for ANVISA: 2013-2016

Background
The Brazilian health regulatory agency (Agência Nacional de Vigilância Sanitária, ANVISA) has embarked on transformational initiatives to fulfill its mandate to provide timely access to safe, effective, and quality therapeutics. A new Brazilian law was enacted to provide the agency with greater flexibility. Optimizing Efficiencies in Regulatory Agencies (OpERA) is a regulatory-strengthening program that seeks to provide benchmarking data that can be used to define performance targets and focus performance improvement. The objective of this study was to use the OpERA methodology to undertake a retrospective analysis of the timelines associated with important components of the ANVISA regulatory review process, to establish a baseline against which the influence of the new law could be measured.

Methods
The OpERA tool was used to collect specific milestone data that identify time periods, review stages, and data points for products approved by ANVISA in 2013-2016.

Results
For the 138 products approved in this cohort, the overall median approval time was 795 days. ANVISA and submitting companies will need to reduce their review and response times by approximately half in order to meet the total time goal of 365 days.

Conclusions
The observations from this baseline study have identified opportunities for ANVISA and sponsor companies to collaborate to reduce regulatory assessment times while assuring the timely approval of safe and effective, quality medicines. These analyses will be repeated to determine how the provisions of the new Law will impact the activities of ANVISA and the extent of sponsors' contributions to this effort.

Introduction
Measuring performance involves collecting and reporting data on practices, processes, and outcomes. Measuring pharmaceutical regulatory performance provides a necessary basis for a structured discussion with stakeholders to identify key indicators to monitor and improve processes. Integrating these indicators into regulatory practices by monitoring regulatory assessment times enables transparent tracking of process improvement initiatives. This information can be used to identify and prioritize improvement goals, to track progress toward those goals, and to monitor the maintenance of changes that have already been made. The first requirement of any performance measurement is to formulate a robust conceptual framework within which performance measures can be developed. Definitions of performance indicators should fit into the framework and satisfy several criteria, such as validity, reproducibility, acceptability, feasibility, and reliability. The measurement of regulatory review performance should be documented and tracked to identify where time is spent in the regulatory process, thus ensuring the efficiency of this process as it evolves. This helps regulators and other stakeholders understand what drives regulatory review time and facilitates the integration of the practice of tracking and measuring regulatory performance, thereby promoting continuous improvement in review and approval times while ensuring the safety, efficacy, and quality of medicines. Hence, the need for agencies to proactively and consistently measure their performance against stated target times is one of the World Health Organization (WHO) global benchmarking tool parameters.

Brazil
With a current population of more than 209 million, Brazil had a gross domestic product (GDP) of 1.868 trillion USD in 2018.
In 2016, healthcare expenditure represented 11.8% of Brazil's GDP, and this is expected to grow, aided by increased government investment in the country's universal and free public healthcare system, supporting programs to improve access to health services and medicines among all of its population. Ranked among the world's top ten largest pharmaceutical markets, Brazil hosts operations of all major pharmaceutical companies, and the value of that market is forecast to grow to 29.9 billion USD in 2020. As recognized by former Minister of Health Ricardo Barros, "Brazil undoubtedly holds great development opportunities for the global pharmaceutical and healthcare industries, and we hope to gain the trust of an increasing number of international investors and jointly work on improving the healthcare products and services".

Agência Nacional de Vigilância Sanitária (ANVISA)
Established in 1999, ANVISA regulates medicinal products for human use, medical devices, food, cosmetics, and sanitizers. The total number of staff at ANVISA is approximately 1,600, including 200 reviewers of marketing authorization/product licenses, who are primarily pharmacists. The total annual budget of 840 million USD is 40% government funded and 60% fee based. Underlining the agency's important efforts to ensure the highest global standards against this background of rapid growth, former ANVISA Director/President Dr Barbosa da Silva stated that "ANVISA has conducted comprehensive review of its process, with the objective to strengthen the registration process, and to have a more transparent and predictable ecosystem for all stakeholders". In fact, ANVISA has embarked on several transformational initiatives to ensure that it will continue to be in a strong position to fulfill its mandate of providing timely access to safe, effective, and quality therapeutics. The agency has been a regulatory member of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) since 2016 and was accepted as a member of the Management Committee in 2019. ANVISA is also recognized as a Level IV reference agency by the Pan American Health Organization (PAHO) and has entered into a variety of international collaborative agreements, such as the Statement of Cooperation (SOC) with the US Food and Drug Administration (US FDA), which is intended to strengthen existing structures and develop new opportunities for cooperative engagement in regulatory and scientific matters and public health protection. However, because of its broad mandate to address the ongoing assessment of a wide variety of medicinal products, manpower limitations, and the need to work within the legal framework for conducting regulatory reviews, the agency had been faced with regulatory review timelines that were among the longest for Latin American countries. In addition, other factors such as the obligation to perform full reviews, protracted company response times, and the requirement for a Certificate of Pharmaceutical Product for product approval contribute to lengthy review times. Prolonged regulatory timelines have been a limitation to patient access to medicines. In response to these issues, in December 2016 the new Law Number 13,411 was enacted to modify existing legislation and provide the agency with greater flexibility in its approaches to medicine regulation. Among the important innovations of this new law, which went into effect in March 2017, is a risk-based approach addressing the technical complexity of products.
In addition, this law specifies the clinical, economic, and social benefits of the medication that determine its status as regulatory review category I (priority medicines), for which reviews are to be conducted within 120 days of receipt of the marketing authorization application (MAA), or category II (ordinary medicines), for which reviews are to be conducted within 365 days of MAA receipt. It should be noted that the timelines may be extended by up to one-third of the original deadline. Also, ANVISA requests for clarification or rectification suspend these deadlines until company responses are received, which must be within 120 days of the agency request. Recognizing that the new law could have a positive impact on workload, efficiency, and ultimately, process times, ANVISA collaborated with the Centre for Innovation in Regulatory Science (CIRS, www.cirsci.org) to undertake a retrospective analysis of the timelines associated with important components of the ANVISA regulatory review process, to establish a baseline against which the influence of the new law could be measured. This study represents the first comprehensive analysis of ANVISA regulatory activity timelines (addressing both agency and company time) across multiple years, product types, and therapeutic areas.

Methodology
CIRS has been collaborating with regulators from around the world to develop the bespoke program entitled "Optimizing Efficiencies in Regulatory Agencies" (OpERA). OpERA is a multi-year project initiated by CIRS in 2013 based on requests from regulatory agencies. The objectives of the program are to provide benchmarking data that can be used to define performance targets and focus ongoing performance improvement initiatives, accurately compare the processes used in the review of new drug marketing authorizations, encourage the sharing of information on common practices in order to learn from others' experiences, and encourage systematic measuring of the processes that occur during the review of new drug marketing authorizations. The OpERA methodology comprises two components: a process assessment analysis designed to clearly assess the component activities associated with the medicine review and assessment processes within an agency or regional regulatory initiative (RRI), and the collection of key milestone metrics aligned with the elements of the process assessment. The specific milestones include time periods, review stages, and data points that have been selected by agencies and RRIs participating in the OpERA program so as to permit a detailed analysis of an agency's efficiency (Table 1). Participating agencies and RRIs have identified commonly collected milestones that demonstrate both the agency and company time associated with the medicine review process. Results obtained from OpERA analyses help agencies identify where time is spent in their processes, define and meet their regulatory performance goals, monitor change activities, embed a culture of ongoing self-assessment, optimize their process efficiencies, and increase internal/external transparency. ANVISA provided to CIRS product characteristics and regulatory milestone dates consistent with those collected through the OpERA program. This analysis focused on products approved by ANVISA between January 1, 2013 and December 31, 2016. Assessments were conducted for new active substances (NASs), major line extensions (MLEs), biologics, and generics. An Anatomical Therapeutic Chemical (ATC) category was assigned to each product by ANVISA.
All products were anonymized using a random coding assigned by ANVISA prior to submitting the data to CIRS. Data were provided by ANVISA in Microsoft Excel for the following milestones: receipt of the dossier (dossier validation); start of primary scientific assessment; completion of primary scientific assessment (primary scientific assessment); primary assessment deficiency letter sent to sponsor; response from sponsor, if applicable (clock stop/sponsor time); additional cycles of assessment following the deficiency letter response, if applicable (secondary scientific assessments); Advisory Committee review, if applicable (Advisory Committee); and marketing authorization granted. A product could have undergone multiple review cycles. Data were checked for consistency and completeness by CIRS and clarifications were provided by ANVISA. Timelines in calendar days were calculated for the following sequences: receipt of dossier to start of primary assessment (which includes queue time and dossier validation); start of scientific assessment to end of first scientific assessment (primary scientific assessment); outcome letter 1 to response received from sponsor (sponsor time); response to outcome letter to end of scientific assessment (subsequent scientific assessments); advisory committee time (if relevant); and response to outcome letter to decision on the MAA (overall approval time). In a move to reduce review backlogs, in 2013 sponsors of generic products were offered a one-time opportunity to advance selected products to an earlier position in the review queue. This "switch" opportunity has been reflected in these analyses; for these 86 products, the switch date of April 15, 2013 has been used as the date for the receipt of the submission. All analyses of generic products were conducted after adjustment for switch dates. The numbers of products submitted by year are detailed in Table 2. Because of the generic switch opportunity, 2013 saw the most submissions. For the 235 submitted products, the most common therapeutic areas were nervous system (46; 20%), cardiovascular (32; 14%) and anticancer/immunomodulators (28; 12%). The 46 products submitted by multinational pharmaceutical companies accounted for the majority of NAS and biologic approvals. Local (Brazilian) companies submitted 189 products, representing the vast majority of MLE and generic submissions. Consequently, approvals from 2013 to 2016 comprised 103 products from local companies and 35 from multinationals.

Regulatory Timing Metrics
For the 138 products approved in this cohort, the overall median approval time was 795 days; this comprised median review times by product type of 691 days (generics), 552 days (NASs), 454 days (biologics), and 1,018 days (MLEs). The widest variability (25th to 75th percentiles) in approval times was observed for generics (653 days), while the narrowest variance was for MLEs (172 days) (Fig. 1). An analysis of the review process was conducted to identify the time taken for each review cycle (Fig. 2). The median time between each milestone for standard review compounds submitted to ANVISA between 2013 and 2016 was calculated.
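As a schematic illustration of this timeline arithmetic, the snippet below computes per-application durations and their medians from milestone dates. The table, column names, and fallback rule are hypothetical placeholders (the real data were anonymized Excel extracts), with the fallback mirroring one of the substitution rules described next.

```python
import pandas as pd

# Hypothetical milestone dates for two applications; not ANVISA's actual schema.
df = pd.DataFrame({
    "receipt":               ["2013-04-15", "2013-06-01"],
    "start_assessment":      ["2014-01-10", "2014-02-20"],
    "completion_assessment": ["2014-01-25", "2014-03-30"],
    "outcome_letter":        [None,         "2014-04-02"],
    "decision":              ["2015-03-01", "2015-07-15"],
})
for col in df.columns:
    df[col] = pd.to_datetime(df[col])

# Fallback mirroring a substitution rule described below: when the "outcome
# letter sent" date is missing, use the "completion of assessment" date.
df["outcome_letter"] = df["outcome_letter"].fillna(df["completion_assessment"])

queue_days   = (df["start_assessment"] - df["receipt"]).dt.days   # queue + validation
first_cycle  = (df["completion_assessment"] - df["start_assessment"]).dt.days
overall_days = (df["decision"] - df["receipt"]).dt.days

print(queue_days.median(), first_cycle.median(), overall_days.median())
```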
Because of some missing milestone data, criteria were applied for each application to be either excluded from this analysis or included through the extrapolation of other available data. Specifically, the following types of applications were excluded from the analysis: those with no "start of primary scientific assessment" date; those with no "company response" date or "start of scientific assessment" date; and those with no "completion of scientific assessment" or "outcome letter sent" date. The following applications were included in the analysis with the use of substitute data: where no "outcome letter sent" date was provided, the "completion of scientific assessment" date was used; where a "completion of scientific assessment" date was provided but no "outcome letter sent" date, the "completion of scientific assessment" date was used as the date the outcome letter was sent; where there was no "company response" date but a "start of scientific assessment" date was given, the "start of scientific assessment" date was used; and where there was no "completion of scientific assessment" date and there were no further cycles, the "completion of all scientific assessment" date was used as the end date of that cycle, provided this was within 30-40 days of the start of that cycle. In addition, other types of applications excluded from the analysis included the following: applications rejected by ANVISA; applications that had more than four cycles of review, which were considered special cases and not the usual review process for ANVISA; applications that were in the appeal process but stayed in the review system until the appeal decision was made; and applications where the "start of scientific assessment" or "company response" date for a cycle was present but no other information was available except the date of "completion of all scientific assessment"; if this period was longer than 30-40 days, the application was excluded from the analysis as it was not clear whether this was company or agency time. Using these criteria, a total of 84 applications were analyzed. Most of the approved products underwent a 3-cycle review. Four applications (5%) were approved in the first cycle, 21 (25%) in the second cycle, 45 (54%) in the third cycle, and 14 (17%) in the fourth cycle. The overall median review time for the 84 applications was 684 days. For the majority of applications, which went through 3-cycle reviews, the median approval time was 557 days. The majority of agency time fell between receipt of the dossier and the start of the primary assessment. Median company time ranged from 86 to 120 days (Fig. 2). Table 3 shows the variability of each milestone (5th and 95th percentiles). When assessed by therapeutic area, for NASs submitted between 2013 and 2016, hormone therapies had the shortest median review time (733 days) compared with dermatologic products (median, 1512 days). Anticancer NASs had a median review time of 1312 days. Median review times for MLEs ranged from 983 days (nervous system therapies) to 2320 days (blood products) and for biologics from 70 days (musculoskeletal products) to 787 days (immunomodulators). Median review times for generics ranged from 266 days (dermatologic products) to 1688 days (respiratory products).

Discussion
These observations represent an important analysis of ANVISA regulatory activity timelines (addressing both agency and company time) across multiple years, product types, and therapeutic areas. Under the terms of Law Number 13,411, individual reviewers may be held liable for noncompliance with the stated timelines.
While this degree of personal liability is not observed frequently in mature regulatory agencies, it may present a challenge to reviewers faced with the assessment of complex NASs or biologic products. ANVISA has developed and implemented an assessment template for the review of the safety and efficacy of medicines. The template includes critical questions for the assessor to ask during the review, including reference documents to support the review. The introduction of the template will provide transparency, consistency, and compliance with the timelines. The target total time for ANVISA registration (agency and company time) is up to 365 days. In this study, the median agency time was 389 days and the median company time was 304 days. This indicates that agency and company times will each need to be reduced by approximately half in order to meet the total time goal of 365 days established by Law 13,411. Our observations indicate that a significant time saving can be obtained by reducing the time from receipt of the dossier to the start of the first scientific assessment (a 214-day median queue time), which occurred as a result of manpower limitations. Should this manpower be increased, this time period could be used to validate the content of the dossier. This process is observed in some other agencies, such as the European Medicines Agency. During this period a rapid validation (e.g., in under 2 weeks) requesting missing items could be conducted. With this process, the observed 15-day median for the first scientific assessment would likely increase, but this would be offset by a significantly shorter time to the start of the first assessment. A quality and timely regulatory review is facilitated by a quality regulatory submission. Sponsors need to provide dossiers that reflect the needs and expectations of the agency. In order to further improve the time for patient access to medicines, sponsors should strive to respond to the agency in a timely manner. Company time represented almost half (304 days) of the total approval time (684 days) across all approved products in this cohort. Company time can be influenced by a variety of factors, including the prioritization of products in a global regulatory environment, local capabilities to respond efficiently to ANVISA requests, and the nature of the clarifications required by the agency based on the initial quality of the submission. To streamline responses, requests for major clarification or rectification by the agency are now being consolidated into a single request for each major dossier section, except when they are needed to clarify or rectify information related to a requirement previously answered by the applicant company. In 2017 and 2018, ANVISA published three new resolutions with the purpose of accelerating the approval of medicines: Resolution 204/2017, Resolution 205/2017, and Service Orientation 45/2018. Resolution 204/2017 establishes "Priority Review" criteria for products that meet at least one of the eligibility criteria, for example, medicines for neglected diseases, and vaccines to be incorporated in the national immunization program. This guidance also addresses priority review processes for post-approval applications when there is a public health risk of drug shortages. In 2018, 173 applications were approved out of 827. The timeline for the final decision is 120 calendar days (365 calendar days for the ordinary category).
Resolution 205/2017 establishes a special procedure for the consent of clinical trials, certification of GMP, and registration of new medicines for the treatment, diagnosis, or prevention of rare diseases. In 2018, the median timeline for final decisions was 155 days for medicines evaluated under this resolution. Service Orientation 45, which establishes an optimized review for registration and post-registration changes for biological products, is being considered a "Reliance Pilot Project." Products already approved by the US FDA and the European Medicines Agency with the same indications, dosage, adverse reactions, and precautions are eligible. Applicants must submit reports containing the criteria used by both agencies to review and approve these applications. ANVISA also recognized the backlog of generic applications and has worked with international institutions such as CIRS to implement standardized risk assessment models to speed up the registration process for generics. Leveraging this regulatory update, it was possible for ANVISA to reduce the number of these registration files. The new Brazilian law provides ANVISA with a degree of flexibility in its approaches to regulatory reviews. One approach that is being used successfully by emerging agencies worldwide is to address submissions from a risk-based perspective. In these models, a product's risk is assessed by various criteria established by the agency, such as the number of agencies that have conducted a prior assessment of the product, whether they are considered reference agencies, or how long the product has been on the market. As ANVISA implements a reliance mechanism in which prior decisions can be used as the basis for informing the assessment, but wherein the agency retains the role of conducting a targeted benefit-risk assessment relevant to the Brazilian population, the efficient use of agency resources can be addressed while allowing the reviewers to maintain their ability to apply their expertise to the country-specific issues of the product.

Data Limitations
Where there were missing data, datapoints were either substituted or excluded (as outlined above). Where no "company response" date was given, the date of "start of scientific assessment" was used; the company may have responded in a timely manner, but the date was not logged by the agency. As a result, company time may have been overestimated. Agency time may have been underestimated with regard to a product's last review cycle. If there was no "completion of scientific assessment" date for that cycle, and there were no further cycles, the "completion of all scientific assessment" date was used as the end date of that cycle. If this time period was greater than 40 days, the datapoints were excluded because of uncertainty around agency and company factors that may have had an impact. Even though these caveats were applied, Fig. 2 still reflects the elements of the review process to achieve marketing authorization within ANVISA.

Conclusions
The observations from this baseline study have identified possible opportunities for ANVISA and sponsor companies to collaborate to reduce regulatory assessment times while assuring the timely approval of safe and effective, quality medicines. These analyses will be repeated on a periodic basis to determine how the provisions of Law 13,411 will impact the activities of ANVISA and the extent to which sponsors have maximized their contributions to this effort.
Nonlinear and Perturbative Evolution of Distorted Black Holes. II. Odd-parity Modes

We compare the fully nonlinear and perturbative evolution of nonrotating black holes with odd-parity distortions, utilizing the perturbative results to interpret the nonlinear results. This introduction of the second polarization (odd-parity) mode of the system, and the systematic use of combined techniques, brings us closer to the goal of studying more complicated systems like distorted, rotating black holes, such as those formed in the final inspiral stage of two black holes. The nonlinear evolutions are performed with the 3D parallel code for numerical relativity, Cactus, and an independent axisymmetric code, Magor. The linearized calculation is performed in two ways: (a) we treat the system as a metric perturbation on Schwarzschild, using the Regge-Wheeler equation to obtain the waveforms produced; (b) we treat the system as a curvature perturbation of a Kerr black hole (but here restricted to the case of vanishing rotation parameter a) and evolve it with the Teukolsky equation. The comparisons of the waveforms obtained show excellent agreement in all cases.

I. INTRODUCTION

Coalescing black holes are considered one of the most promising sources of gravitational waves for gravitational wave observatories like the LIGO/VIRGO/GEO/TAMA network under construction (see, e.g., Ref. and references therein). Reliable waveform information about the merger of coalescing black holes can be crucial not only to the interpretation of such observations, but could also greatly enhance the detection rate. Therefore, it is crucial to have a detailed theoretical understanding of the coalescence process. It is generally expected that full scale, 3D numerical relativity will be required to provide such detailed information. However, numerical simulations of black holes have proved very difficult. Even in axisymmetry, where coordinate systems are adapted to the geometry of the black holes, black hole systems are difficult to evolve beyond about t = 150M, where M is the mass of the system. In 3D, the huge memory requirements, and instabilities presumably associated with the formulations of the equations themselves, make these problems even more severe. The most advanced 3D calculations based on traditional Cauchy evolution methods published to date, utilizing massively parallel computers, have difficulty evolving Schwarzschild, Misner, or distorted Schwarzschild black holes beyond about t = 50M. Characteristic evolution methods have been used to evolve distorted black holes in 3D indefinitely, although it is not clear whether the technique will be able to handle highly distorted or colliding black holes, due to potential trouble with caustics. In spite of such difficulties, much physics has been learned and progress has been made in black hole simulations, both in axisymmetry and in 3D. In axisymmetry, calculations of distorted black holes with and without angular momentum, and of Misner two black hole initial data, including variations with boosted and unequal mass black holes, have all been successfully carried out, and the waveforms generated during the collision process have been extensively compared to calculations performed using perturbation theory. In 3D, similar calculations have been carried out, especially evolutions of 3D distorted black holes, where it was shown that very accurate waveforms can be extracted as a distorted black hole settles down, as is expected to happen when two black holes coalesce.
One of the important results to emerge from these studies is that the full scale numerical and perturbative results agree very well in the appropriate regimes, giving great confidence in both approaches. In particular, the perturbative approach turned out to work extremely well in some regimes where it was not, a priori, expected to be accurate. For example, in the head-on collision of two black holes (using Misner data), the perturbative results for both waveforms and energy radiated turned out to be remarkably accurate against full numerical simulations, even in some cases where the black holes had distinct apparent horizons. These impressive agreements have since been improved by the use of second order perturbation theory (see Ref. for a comprehensive review of the Zerilli approach and Ref. for the more recent curvature-based approach, which also holds for rotating black holes). The study of perturbations also offered the plausible explanation that the peak of the potential barrier that surrounds a black hole is the more relevant quantity, not the horizon. In a more complex application, the collision of boosted black holes was studied. With a small boost, the total energy radiated in the collision was shown to go down when compared with the Misner data. Linear theory was able to show that there were two components to the radiation, one from the background Misner geometry and one from the boost. These two components are anti-correlated and combine to produce what has since been called the "Baker dip". Had there been only a perturbative analysis, one might have worried that nonlinear effects might eliminate the dip. Had there been only a full numerical simulation, the dip might have been thought to be evidence of a coding error. When the two were combined, however, a confirmation of the correctness of both procedures was established and the effect understood. These are just two examples of a rather large body of work that has led to a revival of perturbative calculations, now used as a tool to aid in the verification and interpretation of numerically generated results. The potential uses of this synergistic approach to black hole evolutions, combining both numerical and perturbative evolutions, are many. First, the two approaches go hand-in-hand to verify the full scale nonlinear numerical evolutions, which will become more and more difficult as 3D binary mergers of unequal mass black holes are attempted, with linear momentum, spin, and orbital angular momentum. Second, as the above examples show, they can aid greatly in the interpretation and physical understanding of the numerical results, as also shown in 3D distorted black hole evolutions. Such insight will become more important as we move towards more complex simulations. (As an example of this below, we will show how nonlinear effects and mode-mixing can be understood and cleanly separated from linear effects with this approach.) Finally, there are at least two important ways in which a perturbative treatment can actually aid the numerical simulation. First, as shown in Ref., it is possible to use perturbative evolutions to provide good outer boundary conditions for a numerical simulation, since away from the strong field region one expects to see low amplitude gravitational waves propagating on a black hole background. This information can be exploited in the outer region in providing boundary data.
Second, this combined approach can be used in future applications of perturbative approaches to "take over" and continue a previously computed full scale nonlinear numerical simulation. For example, if gravitational waveforms are of primary interest in a simulation, once the system has evolved towards a perturbative regime (e.g., two coalescing black holes form a distorted Kerr hole, or evolve close enough that a close limit approximation is valid), then one may be able to extract the relevant gravitational wave data, and evolve them on the appropriate black hole background to extract waveforms. Not only would such a procedure save computational time, it may actually be necessary in some cases to extend the simulations. As discussed above, 3D black hole evolutions using traditional ADM style formulations, with singularity avoiding slicings, generally break down before complete waveforms can be extracted. A perturbative approach may be necessary in such cases to extract the relevant waveform physics. This work (called the Lazarus project) is currently being undertaken by some of the present authors. However, all work to date in this area of comparing full scale numerical simulations with perturbative approaches has dealt with even-parity distortions of Schwarzschild-like black holes. See for instance Ref., referred to here as Paper I, where we compared perturbative techniques, based on the Zerilli approach, with fully nonlinear evolutions of even-parity distorted black holes. This restriction to the Zerilli equation means it cannot handle the odd-parity class of perturbations and, more importantly, it cannot be applied easily to the case of rotating black holes. The more general black hole case has both even- and odd-parity distortions, and also involves black holes with angular momentum. For this reason, in this paper we take an important step towards application to the more general case of rotating, distorted black holes, by introducing the Teukolsky equation as the fundamental perturbation equation. In fact, for black holes with angular momentum there is no ℓ − m multipole decomposition of metric perturbations in the time domain, and the most natural way to proceed is with the curvature-based perturbation formalism leading to the Teukolsky equation, which also simultaneously handles, in a completely gauge invariant way, both even- and odd-parity perturbations. The paper is structured as follows. In section II, we review the initial data sets and four different techniques and approaches to evolve black holes.

1. We first carry forward the metric-based perturbation approach by considering the Regge-Wheeler (odd-parity) equation to perform perturbative evolutions, and for the first time apply these techniques to a class of distorted black hole data sets containing even- and odd-parity distortions.

2. We also show how one can carry out such perturbative evolutions with the curvature-based Teukolsky equation, using the same initial datasets. Although in certain cases the metric perturbations can be computed from the curvature perturbations, and vice versa, in general using both approaches helps us to better understand the systems we are dealing with.

3. We carry out fully nonlinear evolutions of the same data sets for comparison with a 2D (axisymmetric) code, Magor, also capable of evolving distorted rotating black holes.

4. Finally, the same initial data are evolved in full 3D mode with a general parallel code for numerical relativity, Cactus.
In section III we discuss the results in detail and show how the combination of these different approaches provides an extremely good and systematic strategy to cross check and further verify the accuracy of the codes used. The comparisons of waveforms obtained in this way show excellent agreement, in both the perturbative and fully nonlinear regimes. Although in this paper we restrict ourselves to the case of initial datasets without angular momentum, the family of datasets we use for this study also includes distorted Kerr black holes, which will be considered in a follow-up paper. In fact, our eventual goal is to apply both fully nonlinear numerical and perturbative techniques to evolve a binary black hole system near the merger phase, whose final stage can be reasonably modeled by a single distorted Kerr black hole. In this case we should be able to address extremely important questions like how much energy and angular momentum can be radiated in the final merger stage of two black holes.

A. Distorted Black Hole Initial Data

Our starting point is the distorted black hole initial data sets developed originally by Brandt and Seidel to mimic the coalescence process. These data sets correspond to "arbitrarily" distorted rotating single black holes, such as those that will be formed in the coalescence of two black holes. Although this black hole family can include rotation, in this first step we restrict ourselves to the non-rotating limit (the so-called "Odd-Parity Distorted Schwarzschild" of Ref. ). However, these data sets do include both degrees of gravitational wave freedom, including the "rotation-like" odd-parity modes. The details of this initial data procedure are covered in Ref., so we will go over them only briefly here. We follow the standard 3+1 ADM decomposition of the Einstein equations, which gives us a spatial metric, an extrinsic curvature, a lapse and a shift. We choose our system such that we have a conformally flat three-metric defined by

dl² = ψ⁴ (dη² + dθ² + sin²θ dφ²),

where the coordinates θ and φ are the usual spherical coordinates and the radial coordinate has been replaced by an exponential radial coordinate η (r̄ = (M/2) e^η). Thus, if we let the conformal factor be ψ = √r̄ we have the flat space metric with the origin at η = −∞. If we let ψ = √(2M) cosh(η/2) we have the Schwarzschild 3-metric. In this case one finds that η = 0 corresponds to the throat of a Schwarzschild wormhole, and η = ±∞ corresponds to spatial infinity in each of the two spaces connected by the Einstein-Rosen bridge (wormhole). Note also that this metric is invariant under the isometry operation η → −η. In the full nonlinear 2D evolution we will use this fact to give ourselves the appropriate boundary conditions for making distorted black holes. The extrinsic curvature is chosen so that the momentum constraints are automatically satisfied; its various functions have the form of odd-parity distortions in the black hole extrinsic curvature. The function q_G provides an adjustable distortion function, which satisfies the isometry operation and whose amplitude is controlled by the parameter Q_0. This parameter carries units of length squared. Since we will be comparing cases with different masses, we will refer to an amplitude Q̂_0 = Q_0/M² normalized by the ADM mass of the initial slice. If Q_0 vanishes, an unperturbed Schwarzschild black hole results. The parameter n is used to describe an "odd-parity" distortion. It must be odd, and have a value of at least 3.
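To make the coordinate conventions above concrete, here is a minimal Python sketch (our own illustration, not the paper's code; all names are ours) that checks numerically that the Schwarzschild conformal factor ψ = √(2M) cosh(η/2) in the exponential radial coordinate reproduces the familiar isotropic-coordinates result, whose full factor is √r̄ (1 + M/(2r̄)):

```python
import numpy as np

# A minimal check (illustrative only): the Schwarzschild conformal factor in
# the exponential radial coordinate eta equals the flat-space piece sqrt(rbar)
# times the usual isotropic factor 1 + M/(2 rbar).

M = 1.0                                   # black hole mass (illustrative value)
eta = np.linspace(-5.0, 5.0, 11)          # exponential radial coordinate
rbar = 0.5 * M * np.exp(eta)              # isotropic radius: rbar = (M/2) e^eta

psi_eta = np.sqrt(2.0 * M) * np.cosh(0.5 * eta)     # Schwarzschild psi(eta)
psi_iso = np.sqrt(rbar) * (1.0 + M / (2.0 * rbar))  # sqrt(rbar) * isotropic factor

assert np.allclose(psi_eta, psi_iso)      # same 3-metric in both descriptions
print("throat eta = 0 sits at rbar =", 0.5 * M)     # fixed point of eta -> -eta
```

The check also makes the isometry explicit: the throat η = 0 sits at r̄ = M/2, the fixed point of η → −η.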
The function ψ is the conformal factor, which we have abstracted from the metric and extrinsic curvature according to the factorization given by Lichnerowicz. This decomposition is valuable because it allows us to solve the momentum and Hamiltonian constraints separately (with this factorization the extrinsic curvature given above analytically solves the momentum constraints). For the class of data considered here the only nontrivial component of the momentum constraints is the φ component. Note that this equation is independent of the function q_G. This enables us to choose the solutions to these equations independently of our choice of metric perturbation. At this stage we solve the Hamiltonian constraint numerically to obtain the appropriate value for ψ. Data at the inner boundary (η = 0) are provided by an isometry condition, namely, that the metric should not be changed by an inversion through the throat described by η → −η. If we allow Q_0 to be zero, we recover the Schwarzschild solution for ψ. The Hamiltonian constraint equation can be expanded in coordinate form to yield, in this case, an elliptic equation for ψ. This construction is similar to that given by Bowen and York, except that the form of the extrinsic curvature is different. The same procedure described above can also be used to construct Kerr and distorted Kerr black holes, as described in Ref., but we defer that application to a future paper. Note that although the form of the extrinsic curvature is decidedly odd-parity (consider reversing the φ-direction), the Hamiltonian constraint equation for ψ affects the diagonal elements of the three-metric, producing a nonlinear even-parity distortion. If both F and H_E vanish, undistorted Schwarzschild results. If they are present, they generate a linear odd-parity perturbation directly through the extrinsic curvature, and a second order even-parity perturbation through the conformal factor. Hence, the system will have odd- and even-parity distortions mixed together at different perturbative orders. As we will see below, because the even- and odd-parity components are cleanly separated in this way, and the background geometry is explicitly Schwarzschild, it is straightforward to construct analytic, linearized initial data for these distorted black holes, which can then be evolved with the perturbation equations. In summary, our initial data sets contain both even- and odd-parity distortions of a Schwarzschild black hole, and are characterized by parameters (Q_0, n, η_0, σ), where Q_0 determines the amplitude of the distortion, n determines the angular pattern, η_0 determines the radial location (with η_0 = 0 being the black hole throat), and σ determines the radial extent of the distortion. For simplicity of discussion, all cases we will consider in this paper have the form (Q_0, n, η_0 = 2, σ = 1).

Metric Perturbations

The theory of metric perturbations around a Schwarzschild hole was originally derived by Regge and Wheeler for odd-parity perturbations and by Zerilli for even-parity ones. The spherically symmetric background allows for a multipole decomposition even in the time domain. Moncrief has given a gauge-invariant formulation of the problem which, like the work of Regge-Wheeler and Zerilli, is given in terms of the three-geometry metric perturbations. We will use the Moncrief formalism here, as already described in Paper I.
For special combinations of the perturbation equations a wave equation, the famous Regge-Wheeler equation, results for a single function ψ^(ℓm):

∂²ψ^(ℓm)/∂t² − ∂²ψ^(ℓm)/∂r*² + V_ℓ(r) ψ^(ℓm) = 0.

Here r* ≡ r + 2M ln(r/2M − 1), and the potential is

V_ℓ(r) = (1 − 2M/r) [ℓ(ℓ + 1)/r² − 6M/r³].

Because we are considering only axisymmetric perturbations, all components with m ≠ 0 vanish identically. We will subsequently suppress the m labels. Moncrief showed that one can define a gauge-invariant function ψ, invariant under infinitesimal coordinate transformations (gauge transformations) and defined for any gauge, which satisfies the Regge-Wheeler equation above. As the Regge-Wheeler equation is a wave equation, in order to evolve the function ψ we must also provide its first time derivative, which is computed directly through the definition of the extrinsic curvature of the perturbed Schwarzschild background. This general prescription of linear Schwarzschild perturbations simplifies dramatically in the present case. As discussed in Sec. II A above, the three-metric contains only even-parity perturbations, and those appear only at second order. Hence, for a linearized treatment, both the even- and odd-parity perturbation functions vanish in the initial data! To first order the metric is described by the Schwarzschild background, and the perturbed initial data consist solely of odd-parity extrinsic curvature contributions. Even-parity modes appear only at higher order, and are not considered in our comparisons here. For the specific initial data given in the previous section we obtain explicit expressions for ∂_t ψ, where for n = 3, ℓ = 3 and for n = 5, ℓ = 3, 5, with the numerical coefficients coming from the multipole decomposition of the extrinsic curvature.

FIG. 1. The initial data for the Moncrief variable ψ. For n = 3 the only linear content is the ℓ = 3 multipole. The initial value of ψ vanishes for linear odd-parity perturbations, since our choice of initial data only allows for second order even-parity perturbations of the initial three-metric. Our choice of the initial extrinsic curvature generates an almost Gaussian ∂_t ψ that sits near the maximum of the Regge-Wheeler potential.

Although these initial data could be obtained numerically via the extraction process described in Paper I, it is not necessary to do so in this case with a clear analytic linearization. In Fig. 1 we plot an example of these analytic initial data. We are now ready to evolve these data linearly with the Regge-Wheeler equation.

Curvature Perturbations

There is an independent formulation of the perturbation problem, derived from the Newman-Penrose formalism, that is valid for perturbations of rotating black holes. This formulation fully exploits the null structure of black holes to decouple the perturbation equations into a single wave equation that, in Boyer-Lindquist coordinates (t, r, θ, φ), can be written as

[(r² + a²)²/Δ − a² sin²θ] ∂²_t ψ + (4Mar/Δ) ∂²_{tφ} ψ + [a²/Δ − 1/sin²θ] ∂²_φ ψ
− Δ^(−s) ∂_r (Δ^(s+1) ∂_r ψ) − (1/sinθ) ∂_θ (sinθ ∂_θ ψ)
− 2s [a(r − M)/Δ + i cosθ/sin²θ] ∂_φ ψ − 2s [M(r² − a²)/Δ − r − ia cosθ] ∂_t ψ
+ (s² cot²θ − s) ψ = 4πΣT,

where M is the mass of the black hole, a its angular momentum per unit mass, Σ ≡ r² + a² cos²θ, and Δ ≡ r² − 2Mr + a². The source term T is built up from the energy-momentum tensor. Gravitational perturbations s = ±2 are compactly described in terms of contractions of the Weyl tensor with a null tetrad, whose components (also given in Ref. ) are conveniently chosen along the repeated principal null directions of the background spacetime (the Kinnersley choice), where an overbar means complex conjugation and ρ ≡ 1/(r − ia cosθ). This field represents either the outgoing radiative part of the perturbed Weyl tensor, ψ4 (s = −2), or the ingoing radiative part, ψ0 (s = +2).
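To illustrate how such a linearized evolution can be carried out in practice, here is a minimal, self-contained sketch (not the code used in the paper; the grid sizes and Gaussian profile are illustrative choices) that evolves the Regge-Wheeler equation with a second-order leapfrog scheme, starting from ψ = 0 and a Gaussian ∂_t ψ near the potential peak, in the spirit of the initial data described above:

```python
import numpy as np
from scipy.special import lambertw

# A minimal sketch (not the paper's code): evolve the Regge-Wheeler equation
#   psi_tt = psi_{r*r*} - V_l(r) psi
# with a second-order leapfrog scheme, starting from psi = 0 and a Gaussian
# dt(psi) near the potential peak. All parameters are illustrative.

M, ell = 1.0, 3
rstar = np.linspace(-100.0, 400.0, 5001)       # tortoise-coordinate grid
h = rstar[1] - rstar[0]
dt = 0.5 * h                                   # comfortably CFL-stable

# invert r*(r) = r + 2M ln(r/2M - 1):  r = 2M [1 + W(exp(r*/2M - 1))]
r = 2.0 * M * (1.0 + lambertw(np.exp(rstar / (2.0 * M) - 1.0)).real)
V = (1.0 - 2.0 * M / r) * (ell * (ell + 1) / r**2 - 6.0 * M / r**3)

psi = np.zeros_like(rstar)                       # psi(t=0) = 0 (odd-parity data)
psi_prev = psi - dt * np.exp(-(rstar - 2.0)**2)  # encodes the Gaussian dt(psi)

for step in range(4000):                       # leapfrog time stepping
    lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / h**2
    psi_next = 2.0 * psi - psi_prev + dt**2 * (lap - V * psi)
    psi_next[0] = psi_next[-1] = 0.0           # crude boundaries; harmless while
    psi_prev, psi = psi, psi_next              # no signal has reached the edges

print("waveform sample at r* = 100M:", psi[np.argmin(np.abs(rstar - 100.0))])
```

The tortoise coordinate is inverted in closed form via the Lambert W function, since r*(r) = r + 2M ln(r/2M − 1) implies r = 2M[1 + W(e^(r*/2M − 1))]; this avoids a root-find near the horizon, where the relation becomes stiff.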
For the applications in this paper we will consider s = −2, since we will study the emitted gravitational radiation, and a = 0, i.e. perturbations around a Schwarzschild black hole. In general, for the rotating case, it is not possible to make a multipole decomposition of ψ4 that is preserved in time. So, to keep generality, we shall use Eqs. (3.1) and (3.2) of Ref. to build up our initial ψ4 and ∂_t ψ4, not decomposed into ℓ multipoles. The analytic expressions for the distorted black hole initial data sets considered in this paper follow from these equations.

FIG. 2. The initial data for the Weyl scalar ψ4 and its time derivative. The general form of ψ4 resembles that of the Gaussian-like ∂_t ψ_ℓ. Note that ∂_t ψ4 is nonvanishing, in contrast to the initial ψ_ℓ = 0.

As expected for pure odd-parity perturbations, only the imaginary part of ψ4 is nonvanishing. Also note that, unlike ψ_ℓ, ∂_t ψ4 does not vanish initially. This is because ψ4 and its time derivative depend on both the 3-geometry and the extrinsic curvature, while the Moncrief function depends only on the perturbed 3-geometry, and its time derivative only on the extrinsic curvature. We plot the initial data in Fig. 2. We evolve these initial data via the Teukolsky equation using the numerical method described in Ref. Since, after all, we are computing perturbations on a Schwarzschild (a = 0) background, there must be a way to relate the metric and curvature approaches. In fact, the relations between ψ4 and the Moncrief even- and odd-parity waveforms in the time domain have been found in Ref. and tested for the even-parity case in Ref. Here we can perform the same kind of cross check for the odd-parity modes. From the equations in section II.B of Ref. or Eq. (2.9) in Ref. we obtain a relation that holds at all times (note that this relation among waveforms is only valid at first perturbative order; when nonlinearities are included the two approaches may give widely different results), and that we can integrate to give us ψ4 from ψ_ℓ evolved with the Regge-Wheeler equation, instead of evolving ψ4 directly with the Teukolsky equation.

C. Axisymmetric Nonlinear Evolutions

The 2D fully nonlinear evolutions have been performed with a code, Magor, designed to evolve axisymmetric, rotating, highly distorted black holes, as described in Ref. Magor has also been modified to include matter flows accreting onto black holes, but here we consider only the vacuum case. In a nutshell, this nonlinear code solves the complete set of Einstein equations, in axisymmetry, with maximal slicing, for a rotating black hole. The code is written in a spherical-polar coordinate system, with the rescaled radial coordinate η that vanishes on the black hole throat. An isometry operator is used to provide boundary conditions on the throat of the black hole. All three components of a shift vector are employed to keep all off-diagonal components of the metric zero, except for the single component that carries information about the odd-parity polarization of the radiation. For complete details of the nonlinear code, please see Refs. and [34]. The initial data described in Sec. II A above are provided through a fully nonlinear, numerical solution of the Hamiltonian constraint. The code is able to evolve such data sets for time scales of roughly t ≤ 10²M, and to study such physics as horizons and gravitational wave emission.
Consistent with the two different perturbative approaches, there are two methods we use to extract information about the gravitational waves emitted during the fully nonlinear simulation: metric-based gauge-invariant waveform extraction, and direct evaluation of curvature-based Newman-Penrose quantities such as ψ4. The first method has been developed and refined over the years to compute waveforms from the numerically evolved metric. Surface integrals of various metric quantities are combined to build up the perturbatively gauge-invariant odd-parity Moncrief functions. These can then be compared directly with the perturbative results. A second method for wave extraction is provided by the calculation of the Weyl scalar ψ4, which is coordinate invariant but depends on a choice of tetrad basis. For our numerical extractions we follow the method proposed in Ref. To define the tetrad of their form numerically, we align the real radial vector (which can be thought of as providing the spatial components of l and n) with the radial direction. The complex vectors m and m̄ point within the spherical 2-surface. At each step, a Gram-Schmidt procedure is used to ensure that the triad remains orthonormal. The tetrad assumed by this method is not directly consistent with the one assumed in the perturbative calculation, but for the a = 0 case it can be made consistent by a type III (boost) null rotation, which fixes the relative normalization of the two real-valued vectors. We have found that the transformation n_P → A⁻¹ n_O and l_P → A l_O, where A = 2/(1 − 2M/r), fixes the normalization appropriately. For the general a ≠ 0 case this would be insufficient, and we would instead use the more general method proposed in Ref.

D. Full 3D Evolutions with Cactus

The last of our approaches for evolving these distorted black hole data sets utilizes full 3D nonlinear numerical relativity, and is based on Cactus. More than a numerical code, the Cactus Computational Toolkit is a general parallel framework for numerical relativity (and other sets of PDEs) that allows users from various simulation communities to gain high performance parallelism on many platforms, access a variety of computational science tools, and share modules implementing different evolution methods, initial data, analysis routines, etc. For the relativity community, an extensive suite of numerical relativity modules (or thorns, in the language of Cactus) is available, including black hole and other initial data, slicing routines, horizon finders, radiation indicators, evolution modules, etc. For this paper, Cactus was used to assemble the set of 3D initial data, evolution modules, and analysis routines needed for the comparisons with Magor and the two perturbative approaches described above. All operations have been carried out in 3D Cartesian coordinates, from initial data to evolution to waveform extraction. The initial data are computed as in the Magor code, in a polar-spherical type coordinate system, and interpolated onto the Cartesian coordinate system as described in Paper I. The evolutions are carried out with a formulation of Einstein's equations based on the conformal, tracefree approach developed originally by Shibata and Nakamura and Baumgarte and Shapiro, and further tested and developed by Alcubierre et al., as described in Ref. Due to certain symmetries in these initial data sets, the evolutions can be carried out in an octant, in Cartesian coordinates.
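The Gram-Schmidt step mentioned above is simple to state concretely. The following toy sketch (our illustration, not the extraction routine of Magor or Cactus) orthonormalizes a radial/angular triad with respect to a spatial 3-metric γ_ij at a single point, which is the essential ingredient for building the null tetrad numerically; the conformally flat test metric and all names are assumptions made for the example:

```python
import numpy as np

# Toy sketch: Gram-Schmidt orthonormalization of a spatial triad in the inner
# product of the 3-metric gamma_ij, keeping the radial leg first. The null
# tetrad then follows from the unit normal, e_r, and m = (e_th + i e_ph)/sqrt(2).

def gs_triad(gamma, v1, v2, v3):
    """Orthonormalize three 3-vectors with respect to the metric gamma_ij."""
    dot = lambda a, b: a @ gamma @ b
    e1 = v1 / np.sqrt(dot(v1, v1))                   # radial leg first
    e2 = v2 - dot(e1, v2) * e1
    e2 /= np.sqrt(dot(e2, e2))
    e3 = v3 - dot(e1, v3) * e1 - dot(e2, v3) * e2
    e3 /= np.sqrt(dot(e3, e3))
    return e1, e2, e3

# example: a conformally flat metric at one field point, gamma_ij = psi^4 delta_ij
gamma = 1.5 * np.eye(3)                              # psi^4 = 1.5 (illustrative)
x = np.array([0.3, 0.4, 0.5])                        # field point / radial direction
er, eth, eph = gs_triad(gamma,
                        x,                            # radial seed
                        np.array([0.0, 0.0, 1.0]),    # polar-axis seed
                        np.cross([0.0, 0.0, 1.0], x)) # phi-like seed
m = (eth + 1j * eph) / np.sqrt(2.0)                  # complex leg of the tetrad
print("orthonormality:", er @ gamma @ er, er @ gamma @ eth, m @ gamma @ m.conj())
```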
However, we have chosen in this case to use the full 3D Cartesian grid, as enough memory is now available to run sufficiently large scale evolutions that cover the entire spacetime domain of interest, as would also be necessary when considering the general black hole inspiral problem. (Cactus is well documented and can be downloaded freely from a web server at http://www.cactuscode.org. For more information on Cactus, its use in numerical relativity and other fields, please see the web pages.) For comparison with the 2D code we extract waveforms via the same gauge-invariant Moncrief approach. In this case, surface integrals are carried out on the Cartesian based system by coordinate transformations and interpolation onto a coordinate 2-sphere, as described in Ref. Further details of the individual simulation parameters are provided as needed when discussing the results below.

III. RESULTS

Here we compare the results of evolving the odd-parity distorted black holes by the four techniques described above. We consider two classes of distortions (n = 3 and n = 5) with different angular distributions, and various amplitudes, to include cases of linear and distinctly nonlinear dynamics. For the n = 3 case the distortion is (linearly) pure ℓ = 3, while the n = 5 case encodes a mix of ℓ = 3 and ℓ = 5 distortions in the initial data.

A. Comparison of Nonlinear Evolutions with Regge-Wheeler Theory

In this subsection we compare the 2D nonlinear (Magor) evolutions with the results of the Regge-Wheeler perturbative approach. We first consider the nonlinear evolution of a family of data sets with parameters (Q_0, n = 3, η_0 = 2, σ = 1).

FIG. 4. The ℓ = 3, odd-parity Moncrief waveform, extracted from the fully nonlinear 2D evolution code, for a series of evolutions with initial data parameters (Q_0, n = 3, η_0 = 2, σ = 1). The waveforms are extracted at isotropic coordinate r̄ = 15M. For n = 3 we only have ℓ = 3 linear contributions. We normalize waveforms by the amplitude Q̂_0 in order to study the linear and nonlinear regimes. It is observed that for Q_0 ≤ 8 the linear regime is maintained, while for Q_0 = 32 nonlinearities are clearly noticeable. The effect of nonlinear contributions is to increase the scaled amplitude of the waveform and to increase its frequency. This indicates that the final ringing black hole has significantly less mass than the initial mass of the system.

For low amplitude cases with Q_0 < 8, we are in the linear regime, and even the nonlinear evolutions exhibit strongly linear dynamics. In Fig. 3 we show ℓ = 3 waveform results obtained from the 2D nonlinear code, for the case Q_0 = 2, and compare with Regge-Wheeler evolutions of the (ψ, ∂_t ψ) system. The agreement is so close that the curves cannot be distinguished in the plot. The perturbative-numerical agreement is equally good for the other linear waveforms at low amplitude, so we will leave the perturbative results out of the plots and focus on the transition to nonlinear dynamics. In Fig. 4 we show the ℓ = 3 gauge-invariant Moncrief waveforms for a sequence of such evolutions of increasing amplitude Q_0. The waveforms have all been normalized by the amplitude factor Q̂_0 = Q_0/M² to accentuate nonlinear effects. If the system is in the linear regime, the normalized waveforms will all line up, as is clearly the case in the regime Q_0 ≤ 8. For the large amplitude case Q_0 = 32, the normalized waveform is much larger, indicating that here we are well into the nonlinear regime.
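The normalization test used here can be phrased as a simple diagnostic: rescale each waveform by Q̂_0 and measure the spread of the rescaled curves, which vanishes in the linear regime. A hypothetical sketch, with synthetic stand-in waveforms, since we are not reproducing the paper's actual data:

```python
import numpy as np

# Hedged sketch of the linearity diagnostic: waveforms psi(t; Q0) rescaled by
# Q0hat = Q0/M^2 coincide when the dynamics are linear; their mutual deviation
# grows with Q0 once nonlinearities matter. The toy waveforms below include a
# small cubic term to mimic the Q0^3 scaling discussed in the text.

def linearity_residual(waveforms, amplitudes, M=1.0):
    """Relative spread of amplitude-normalized waveforms (0 => fully linear)."""
    scaled = [w / (q / M**2) for w, q in zip(waveforms, amplitudes)]
    ref = scaled[0]
    return [np.max(np.abs(s - ref)) / np.max(np.abs(ref)) for s in scaled]

t = np.linspace(0.0, 100.0, 2001)
base = np.exp(-0.09 * t) * np.sin(0.6 * t)           # linear ringdown piece
make = lambda q: q * base + 1e-5 * q**3 * np.exp(-0.1 * t) * np.sin(0.66 * t)

for q in (2.0, 8.0, 32.0):
    res = linearity_residual([make(2.0), make(q)], [2.0, q])[1]
    print(f"Q0 = {q:5.1f}: deviation from the Q0 = 2 waveform = {res:.3f}")
```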
We now consider the transition to nonlinear dynamics in our second family of data sets, given by parameters (Q_0, n = 5, η_0 = 2, σ = 1). These data sets have a linear admixture of both ℓ = 3 and ℓ = 5 perturbations, and should contain waveforms of both types. In Fig. 5 we show the results of evolution with the Magor code in maximal slicing, extracting the ℓ = 3 gauge-invariant Moncrief waveform for various amplitudes. The waveforms are again normalized by the amplitude factor Q̂_0. In this case the nonlinearity is somewhat weaker at Q_0 = 32, so we have included the Q_0 = 64 curve in the figure. For the ℓ = 5 case, shown in Fig. 6, the higher frequency of the quasi-normal ringing makes it easier to appreciate the nonlinearities at Q_0 = 32. The plots indicate that again the dynamics are quite linear below Q_0 = 8.

FIG. 5. We show the ℓ = 3 normalized odd-parity Moncrief waveform for (Q_0, n = 5, η_0 = 2, σ = 1) initial data, extracted from the fully nonlinear 2D code for a variety of amplitudes Q_0. Again, the system is clearly linear for Q_0 ≤ 8, and nonlinearities cause an increase in the amplitude and frequency of the wave.

The waveforms we have shown so far are the only ones predicted to linear order in perturbation theory. We would need to apply higher order perturbation theory to predict waveforms for the even-parity or higher-ℓ odd-parity modes. Nevertheless, general considerations from the perturbative point of view do provide some expectations for the scaling of the other, nonlinear waveform modes within the families considered here. We return to the n = 3 family for an example. The leading contribution to the n = 3, ℓ = 5 odd-parity waveform comes from the cubic coupling of the first order ℓ = 3 odd-parity mode discussed above (including the coupling of the ℓ = 3 odd-parity mode with the second order even-parity ℓ = 2 mode expected via the source term contribution to the solution of the Hamiltonian constraint in the initial data). Thus, this wave component should appear at third perturbative order. We verify this expectation by plotting the numerical results for the ℓ = 5 odd-parity waveforms, scaled this time by Q̂_0³, in Fig. 7. Although the magnitudes of these waveforms are far smaller than those of the ℓ = 3 mode, we again see very nice agreement, below Q_0 = 8, with the perturbative expectation that the waveforms should superpose.

FIG. 6. We show the ℓ = 5 normalized odd-parity Moncrief waveform for (Q_0, n = 5, η_0 = 2, σ = 1) initial data, extracted from the fully nonlinear 2D code for a variety of amplitudes Q_0. The regime is clearly linear for Q_0 = 2, and nonlinear components appear for Q_0 = 32. The increase in frequency and amplitude of the wave is seen here also.

FIG. 7. The Moncrief waveform for a purely nonlinear mode. For n = 3, the ℓ = 5 multipole is generated by cubic products of the odd-parity wave (squares generate even-parity ones). Accordingly, we normalize waveforms by Q̂_0³. Still higher nonlinearities switch on for Q_0 = 32 and show the generic increase in the frequency of the wave.

Let us now take another look at Figs. 4-6 to consider what is happening as we move into the nonlinear regime where the waveforms no longer superpose. In all the graphs we see the same general features arising as we begin to drive the system into the nonlinear regime: higher frequency ringing, and larger amplitudes for the later parts of the waveform. At Q_0 = 32 the n = 3 case shows a roughly 10% increase in frequency, compared to 5% for the n = 5 case.
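The frequency increase quoted here can be quantified by fitting a damped sinusoid to the late-time waveform; since quasi-normal frequencies scale as 1/M, a fitted frequency higher than the linear-regime value translates into an estimate of the final-to-initial mass ratio. A schematic fit on synthetic data (all numbers are illustrative, not the paper's measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: extract the quasi-normal ringing frequency by least-squares
# fitting a damped sinusoid to the late-time signal. Because QNM frequencies
# scale as 1/M, omega_fit / omega_linear ~ M_initial / M_final, so a higher
# fitted frequency implies a final hole lighter than the initial ADM mass.

def ringdown(t, A, tau, omega, phase):
    return A * np.exp(-t / tau) * np.cos(omega * t + phase)

t = np.linspace(40.0, 150.0, 600)                    # late-time fitting window
omega_linear = 0.60                                  # illustrative linear value
y = ringdown(t, 1.0, 12.0, 1.10 * omega_linear, 0.3) # mock "nonlinear" waveform
y += 1e-3 * np.random.default_rng(0).normal(size=t.size)

popt, _ = curve_fit(ringdown, t, y, p0=[0.5, 10.0, 0.62, 0.0])
omega_fit = popt[2]
print(f"fitted omega M = {omega_fit:.4f}")
print(f"implied M_final / M_initial ~ {omega_linear / omega_fit:.3f}")
```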
Since the final state of the system will be a black hole, we expect quasi-normal ringing in the late-time behavior of the system regardless of the size of the initial perturbations. This is indeed what we see in the waveforms, except that the ringing is at a higher frequency (relative to the initial mass of the system) than we expected. This indicates that the final ringing black hole has less mass than the ADM mass of the initial data. The perturbations have grown large enough to generate radiation amounting to a noticeable fraction of the total ADM mass, leaving behind a slightly smaller black hole. The smaller mass of the final black hole is also consistent with the larger amplitudes, since the scaled perturbation Q̂_0 = Q_0/M² is larger relative to a smaller mass black hole. The arrival time of the wave pulse is not strongly affected by the change in mass, because the time and wave extraction points are both scaled against the initial mass.

B. Comparison of Nonlinear Evolutions with Teukolsky Theory

We now turn to the curvature-based Teukolsky approach to perturbative evolution for black hole spacetimes. As motivated above, this is a much more powerful approach that will enable perturbative evolutions of both polarizations of the gravitational wave, and evolutions of distorted black holes with angular momentum, without the need for multipolar expansions. The key difference for analysis within this (Teukolsky) formalism is that, in the general case, we no longer have the benefit of a time-independent separation into multipoles. In this first step towards the transition between the metric perturbation (Moncrief) approach and the curvature-based (Teukolsky) approach, we consider the same data sets studied above, which have linear perturbations only for odd-parity, nonrotating black holes. We will consider more general systems in future papers. We now consider evolutions of the distorted black hole data set (Q_0 = 1, n = 3, η_0 = 2, σ = 1). The perturbative initial data for ψ4 and ∂_t ψ4, needed for use in the Teukolsky evolution, have been obtained as described in Sec. II B 2 above. These data are then evolved and recorded at the same coordinate location as before (r̄ = 15M) for comparison with the previous results. In Fig. 8 the solid line shows the result of evolving these data with the Teukolsky equation, observed at a constant angular location θ = π/4, and the dotted line shows the result of the Magor evolution at the same location, with ψ4 extracted from the full nonlinear simulation as described in Sec. II.C.2 above. The results agree extremely well except at very late times, when the nonlinear results are affected slightly by coarse resolution in the outer regions of the numerical grid. We also verify here that the results of the Regge-Wheeler evolution, transformed to provide the same function ψ4 according to the relation above, agree with the Teukolsky evolution. The results of the two perturbative approaches are indistinguishable in the graph. We now examine the other family of distorted black holes, with the choice of angular parameter n = 5. The initial data were obtained as before, and evolved with the nonlinear Magor code and the Teukolsky code. The results are shown in Fig. 9, where we see excellent agreement between the two plots. But notice that the waveform does not show the clear quasi-normal mode appearance that one is accustomed to in such plots. This is because this data set has a roughly equal admixture of both ℓ = 3 and ℓ = 5 components of radiation, and the curvature-based ψ4 approach is not decomposed into separate multipoles.
This waveform shows a clear beat of the two ℓ = 3 and ℓ = 5 components.

FIG. 9. The Weyl scalar ψ4 for the n = 5 initial data. We observe the beating of the ℓ = 3 and ℓ = 5 components, since we are not making any multipole decomposition.

C. Comparison of nonlinear 2D and full 3D codes

Having successfully tested the 2D fully nonlinear code Magor for odd-parity distortions against perturbative evolutions, we can now test the 3D code Cactus against the 2D one. In Cactus, using the same procedures as in Sec. II C, the initial data are evolved in the full (no octant) 3D mode, with a second order convergent algorithm, maximal slicing, and static boundary conditions. Note that we use the conformal-traceless scheme for this evolution. The first observation is that we have to solve the initial value problem taking into account all nonlinearities, even if we are in the linear regime (Q_0 < 10), since small violations of the Hamiltonian constraint contaminate the outgoing waveforms. The runs presented in Figs. 10, 11, and 12 show very nice agreement with the 2D code (hence also with perturbation theory). Note that the spatial resolution (∆x_j = 0.3 ≈ 0.15M) is not high. Here we show waveforms for t/M ≤ 30. The runs do not crash afterwards, but become less accurate due to the low resolution and boundary effects, and even later due to the collapse of the lapse. The ℓ modes shown in Figs. 10-12 are essentially dominated (for Q_0 = 2) by the linear initial distortion of the black hole. Those are the modes that we can compare with first order perturbation theory. Since we have two nonlinear codes, we can now compare their predictions for modes dominated by nonlinear effects. That is the case for the odd mode ℓ = 5 when the initial data parameter is n = 3. These data have a linear contribution only for ℓ = 3; for ℓ = 5 it is easy to see that to generate an odd mode we need at least cubic contributions. Thus this mode will scale as Q_0³. To be able to verify the agreement between the 2D and 3D codes, we amplified this mode by taking Q_0 = 32, and checked the (almost) quadratic convergence of Cactus to the correct results, as shown in Fig. 13.

FIG. 10. For comparison we also plot the results of evolving the same initial data with the fully nonlinear 2D evolution code Magor (solid line), which in turn has been tested against perturbation theory as shown in Fig. 3. Very good agreement is reached with a relatively low resolution of the 3D code.

IV. DISCUSSION

We have completed a series of comparisons covering four different approaches for two classes of odd-parity distortions of Schwarzschild black holes. This includes 2D and 3D nonlinear evolutions and, for the first time in both cases, a comparison of the odd-parity Regge-Wheeler-Moncrief formulation as well as the Teukolsky approach with numerical results. In all cases we find excellent agreement among the different approaches. We emphasize that these matchings have been achieved without the aid of any adjustable parameters, and thereby stand as a strong verification of these techniques. Although the distorted black hole initial data configurations we consider here are not necessarily astrophysically relevant, our analysis provides an example of the usefulness of perturbation theory as an interpretive tool for understanding the dynamics produced in fully nonlinear evolutions.
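Returning to the convergence check quoted in Sec. III C above: a standard way to make an "(almost) quadratic convergence" statement precise is a three-level test, in which the observed order p follows from the ratio of successive differences of waveforms computed at grid spacings h, h/2, and h/4. A small sketch, with mock data standing in for actual Cactus output:

```python
import numpy as np

# Hedged sketch of a three-level convergence test: if the error scales as h^p,
# then ||w_h - w_{h/2}|| / ||w_{h/2} - w_{h/4}|| -> 2^p, so p is read off as a
# base-2 logarithm. The waveforms below are mock data with a built-in h^2 error.

def convergence_order(w_h, w_h2, w_h4):
    """Observed order p from waveforms at spacings h, h/2, h/4 (same time grid)."""
    num = np.linalg.norm(w_h - w_h2)
    den = np.linalg.norm(w_h2 - w_h4)
    return np.log2(num / den)

t = np.linspace(0.0, 30.0, 301)
exact = np.sin(0.8 * t) * np.exp(-0.05 * t)
make = lambda h: exact + 0.02 * h**2 * np.cos(0.8 * t)   # mock 2nd-order error

print("observed order:", convergence_order(make(0.6), make(0.3), make(0.15)))
```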
In order to distinguish the cases of linear and nonlinear dynamics, we simply show the output of the full nonlinear code, but scale it by the factor Q_0/M², so that, if the system is responding linearly to Q_0, all the waveforms will lie exactly on top of one another. Using this procedure we are able to recognize the emergence of nonlinear dynamics.

FIG. 11. The ℓ = 3, odd-parity Moncrief waveform produced by the 3D code Cactus (dotted line). It corresponds to initial data with parameters (n = 5, Q_0 = 2, η_0 = 2, σ = 1). It also shows very good agreement with the 2D results (solid line), which had been checked against perturbation theory as displayed in Fig. 5.

FIG. 13. The ℓ = 5, odd-parity Moncrief waveform produced by the 3D code Cactus (dotted line) with initial data having (n = 3, Q_0 = 32, η_0 = 2, σ = 1) and M_ADM = 2.777. This is a purely nonlinear mode, its leading term being cubic in the amplitude Q_0. Comparison with the 2D results (solid line) shows a good rate of convergence with the grid spacing (from ∆x = 0.6M/2.777 to ∆x = 0.3M/2.777). See Fig. 7 for the purely 2D runs.

Considering the mixing of perturbative modes also enables us to understand the results of one case which displays strictly nonlinear behavior: the ℓ = 5 waveform of the initial data with n = 3 (see Figs. 7 and 13). This wave strictly vanishes to linear order in Q̂_0 and scales at lower amplitudes like Q̂_0³. The perspective of perturbation theory allows us to create a full picture, identifying and explaining aspects of the nonlinear dynamics even when the perturbations are beyond the linear regime. In this case we find that linearized dynamics provide a very good approximation of the system's behavior until the radiation constitutes a significant portion of the initial mass, producing a smaller final black hole and, for example, higher quasi-normal ringing frequencies. Although we restrict ourselves to the case where the black hole system does not have net angular momentum, the approach we develop in this paper is completely general, and can easily be extended to the case of distorted black holes with nonvanishing angular momentum. For this reason, we developed a procedure for using the Teukolsky equation to evolve the perturbations on a black hole background, handling both the even- and odd-parity perturbations simultaneously, and providing the capability to deal with perturbations evolving on a Kerr background. In future work we expect to move in two directions: (a) we will apply the techniques developed here to the case of distorted, rotating black holes, to study nonlinear effects in the radiation of energy and angular momentum, as well as to further develop the Teukolsky perturbative evolution paradigm for application to coalescing black hole initial data in a close-limit approximation; (b) we will use these techniques to evolve black hole systems, either from numerically generated initial data, or from partially evolved datasets that have reached a stage where they can be treated via perturbation theory.
Tics and Tourette Syndrome: A Literature Review of Etiological, Clinical, and Pathophysiological Aspects

Tourette syndrome (TS) is a condition characterized by tics arising from neuropsychiatric dysfunction; it begins in childhood, usually becomes less severe in adulthood, and varies in tic severity from person to person. TS is clinically heterogeneous, with symptoms varying between patients. It is associated with comorbidities like obsessive-compulsive disorder (OCD), attention deficit hyperactivity disorder (ADHD), and depression, and it hampers quality of life. Comorbid disorders must be investigated and treated as part of the clinical approach for all TS patients. Clinicians should be aware of the infrequent but serious neurological problems that can occur in these patients, for whom aggressive treatment of tics is recommended. Currently, the emphasis is on symptom-based treatment with medicines, but as etiological knowledge improves, we may shift toward disease-modifying medications in the future. Behavioral, pharmacological, and surgical methods can treat TS. Neuroleptics, other drugs, and behavioral therapies are the first-line options. Deep brain stimulation is evolving but has its pros and cons. The main focus of this review is on tic characteristics, how to assess and manage them, and limitations in the clinical spectrum.

Introduction And Background

Gilles de la Tourette syndrome (TS) is a neurodevelopmental motor condition of childhood characterized by motor and vocal tics, first described in 1885 by the French neurologist Georges Gilles de la Tourette. The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) has classified tic disorders into three categories: Tourette's syndrome, persistent motor or vocal tic disorder, and provisional tic disorder. Individuals with these illnesses all have tics, described as non-rhythmic, abrupt, quick motor actions or vocalizations that occur repeatedly, are not caused by another disorder, and are usually preceded by urges. For example, individuals might experience the impulse to clap their hands constantly, make faces or grunt, or even perform unusual actions such as waggling tongue movements. Although these actions might be appropriate in certain situations, the fact that they are repeated even in inappropriate contexts is why they are considered abnormal. Individuals can be classified into the type of tic disorder they belong to based on the following criteria: the number of motor or phonic/vocal tics, the duration of tics, and the age of the patient when tics first appeared. Table 1 shows that individuals with TS have numerous motor tics and at least one vocal tic, though they need not necessarily occur together. The fact that both are present is noteworthy. Individual tics might change in frequency over time, but they must persist for at least a year to be diagnosed as TS. Finally, in TS, the tics must start before the age of 18 years. Different studies show a male predominance, with prevalence estimates of about 0.1-6%; the overall prevalence rate of TS is 0.53%. It is worth mentioning that nearly two-thirds of people diagnosed with TS have comorbidities, the most common being attention deficit hyperactivity disorder (ADHD) and obsessive-compulsive disorder (OCD). Additional comorbidities often faced by an individual with TS are depression, disturbed sleep, emotional disorder, migraine, or other neuropsychiatric disturbances.
Tic disorders most commonly begin before puberty, between four and six years of age, with severity usually peaking between 10 and 12 years of age. The symptoms usually decrease in severity as age progresses. Patients can manage their symptoms through pharmacological or nonpharmacological treatments on a daily basis.

TABLE 1: Diagnostic criteria for Tourette syndrome (DSM-5)
- ≥2 motor tics and ≥1 vocal tic
- Persists for ≥1 year
- Started before the age of 18

Although the neurobiology of TS is still incompletely understood, many studies indicate that the caudate, putamen, globus pallidus, substantia nigra, and subthalamic nuclei, which constitute the basal ganglia, have an important role. The basal ganglia are hypothesized to be involved in suppressing unwanted actions, apart from other diverse brain functions, which is why they are especially relevant to TS. Dopamine signaling in the corticostriatal-thalamocortical circuit has been linked to the pathogenesis of TS. Some studies report increased binding of dopamine to the D2 receptor in the caudate nucleus, resulting in dopaminergic system dysfunction in TS patients. However, the cause of TS is quite complex. Current studies suggest a neurobiological vulnerability to TS with a multifactorial etiology involving genetic, environmental, and immunological factors. The largest signal identified in a large genome-wide association study came from rs7868992 within the gene COL27A1 on chromosome 9q32, though its significance remains unclear. Rare but stronger causal evidence comes from histidine decarboxylase deficiency caused by a gene mutation. Educating the families of pediatric patients about the disorder's natural history can assist them in making treatment decisions. To this end, we will briefly examine the major findings concerning TS in several aspects.

Review

Tics can be classified as simple or complex, as depicted in Figure 1. Simple tics are often minimal in duration, spanning milliseconds, and can involve motor movements such as eye blinks or vocal habits such as throat clearing. Complex tics are frequently a combination of simple tics, such as shaking one's head while shrugging the shoulders, and persist longer, sometimes over a second. Complex motor tics can include echopraxia, a tic-like repetition of other people's movements, and copropraxia, tics involving obscene gestures. Complex vocal tics can consist of echolalia (repeating the last word or phrase heard from others), palilalia (repeating one's own words or phrases), and coprolalia (uttering obscene words).

FIGURE 1: Classification of tics (image credit: Anshuta Ramteke)

Individuals may sometimes detect a unique sensation or urge prior to the commencement of a tic, such as an itch before reaching to scratch. Tics are also more frequent or severe during stress, excitement, or tiredness. TS and associated tic disorders have no cure, but they can be treated with a mix of therapy and medication. Table 2 summarizes the diagnosis of tic disorders according to DSM-5.

TABLE 2: Diagnosis of tic disorders according to DSM-5
- Tourette syndrome: both multiple motor tics and one or more vocal tics are present.
- Persistent motor or vocal tic disorder: one or more motor or vocal tics are present, but not both together.
- Provisional tic disorder: one or more motor tics and/or one or more vocal tics are present.

Genetics

Over recent years, there have been numerous advancements in TS genetics, many of which have resulted from large-scale cooperation.
Genetic factors influence TS; patients' relatives have a higher incidence of tics, OCD, and ADHD. Monozygotic twins show a high concordance rate, whereas dizygotic twins do not. Although segregation results supported an autosomal-dominant model, researchers today prefer a polygenic model. An additional hypothesis is bilineal inheritance, whereby both paternal and maternal family members may have a history of tics and/or comorbidities. Only a small number of de novo coding variants have been linked to TS in recent research, including WW and C2 domain containing 1 (WWC1), fibronectin 1 (FN1), cadherin EGF LAG seven-pass G-type receptor 3 (CELSR3), and nipped-B-like (NIPBL). WWC1 regulates trafficking, cell polarity, and migratory activity. The NIPBL gene plays a dynamic role in cell meiosis and also regulates gene expression during maturation in the mouse central nervous system. The CELSR3 gene is involved in axon pathfinding and cell polarity. The FN1 gene regulates cell proliferation, motility, and adhesion. Ercan-Sencicek et al. discovered a functional mutation in histidine decarboxylase (Hdc) by examining a two-generation family with TS. The Hdc gene is essential for histamine production, and its disruption causes increased tic-like behavior; for example, excessive grooming was seen in Hdc mutant mice.

Environmental Risk Factors

Cesarean section, abnormal fetal growth, breech presentation, and preterm birth are related to an increased risk of TS; thus, intrauterine and birth insults are risk factors. Children who were given an antibiotic or hospitalized for infection were more prone to develop a psychiatric disease later in life. Surprisingly, tic disorders were the diagnosis most strongly associated with prior antibiotic treatment, followed by OCD; after hospitalization for infection, the association was strongest for intellectual disability, with tic disorders second. This association does not prove that infections cause TS. Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal Infections (PANDAS) is linked to group A Streptococcus (GAS), whose most common manifestation in children and adults is acute pharyngitis, accounting for about 20-37% of all pediatric cases. GAS can act as a disease-altering agent or trigger factor in TS, according to clinical research. The diagnosis of PANDAS depends on the following factors: the presence of tic symptoms or OCD; onset before puberty; intermittent symptoms or variable remission and relapse; a temporal relationship between symptom onset and GAS infection; and the presence of other neurological abnormalities, the commonest being hyperactivity or choreiform movements. In a population-based nationwide retrospective investigation in Taiwan, Wang et al. showed that GAS infection is associated with higher odds of TS and ADHD. Another population-based study in the United States found that individuals who had a streptococcal infection prior to symptom onset were more likely to develop TS, OCD, or a tic disorder. Furthermore, persons who have recently had repeated GAS infections are at a higher risk of developing TS. GAS is not the sole pathogen implicated in the genesis of TS: Enterovirus (EV), Toxoplasma gondii, Borrelia burgdorferi, Mycoplasma pneumoniae, Chlamydia pneumoniae, and even HIV have all been identified as candidate pathogens. Immunological Dysregulation in TS: A breakdown in the immune tolerance process can cause both autoimmune disorders and allergies.
Clinical reports based on population-based studies have related allergic disorders to TS, as summarized in Figure 2. It was observed that TS-related content on the social media site TikTok (ByteDance Ltd, Beijing, China) increased during COVID-19 and was highly viewed by teenage girls, some of whom then displayed tic-like behaviors. This is an example of mass sociogenic illness. The tics simulated by these viewers can be described as functional (psychogenic) tics.

FIGURE 2: Immunological dysregulation in TS

The average age of onset of psychogenic movement disorder was 29.7 years. These patients had co-occurrence of other functional movement disorders and were unable to momentarily suppress movements; premonitory sensations were absent and pseudoseizures were present. The difference between patients with TS and those with psychogenic tics is that the latter are more common in older individuals, females are affected more than males, and there is no evidence of childhood onset or family history of tic disorder.

Structural Neuroimaging

Many neuroimaging studies have been done to identify the affected parts of the brain in TS patients; some studies revealed no difference in grey or white matter. However, other studies found decreased grey-matter thickness and reduced sulcal depth in the internal, superior, and inferior frontal sulci, including the pre- and post-central sulci. In a prospective longitudinal study by Bloch et al., caudate volume in early childhood had a significant inverse relation to the severity of tics. With the help of voxel-based morphometry (VBM), there was evidence of a grey matter increase in the ventral putamen, left hippocampus, and midbrain. Reduced connectivity between the basal ganglia and supplementary motor areas (SMA), along with frontal cortico-cortical circuits, was established with probabilistic fibre tractography.

Functional Neuroimaging

Fluorodeoxyglucose (FDG) positron emission tomography (PET) scans found two patterns: increased cerebral activity in the bilateral premotor cortex, and decreased metabolic activity in the orbitofrontal cortex and caudate/putamen. With the help of flumazenil, a GABA receptor ligand, decreased binding was found in the bilateral thalamus, right insula, bilateral ventral striatum, and bilateral amygdala of TS patients, and increased binding in the bilateral substantia nigra, bilateral cerebellum and dentate nuclei, and right posterior cingulate cortex. This suggested involvement of the GABAergic system, with a loss of inhibition in the brain of TS patients triggering rapid movements. There is evidence for involvement of the right dorsal anterior insula in the urge to tic, as it is thought to influence cortico-striato-thalamic regions by failing to suppress the tic urge, which it normally does. A voxel-based morphometry study showed involvement of the anterior dorsal region in the premonitory urge to tic and of the posterior region in the generation of motor tics.

Neurophysiology

The basal ganglia function in the planning and programming of motor movements, the suppression of both voluntary and involuntary movements, and cognition. They act through two pathways: the direct pathway, which facilitates actions, and the indirect pathway, which inhibits actions, as described in Figures 3 and 4. In individuals with TS, it is hypothesized that a faulty inhibitory mechanism in the basal ganglia fails to stop unwanted signals from reaching the motor cortex (cerebrum).
This causes the patient to execute undesired actions, which forms the basis of tics. It is thought that failed inhibition in the basal ganglia, coupled with increased activity in the motor pathway, results in the generation of these movements.

FIGURE 3: Mechanism of action of the basal ganglia via the direct pathway. While glutamate is excitatory, gamma-aminobutyric acid (GABA) is inhibitory. Image credit: Anshuta Ramteke

FIGURE 4: Mechanism of action of the basal ganglia via the indirect pathway. While glutamate is excitatory, gamma-aminobutyric acid (GABA) is inhibitory. Image credit: Anshuta Ramteke

There is strong evidence that overactivity in the dopaminergic system is related to tic generation. Studies suggest that dopamine-system hypersensitivity is due to developmental dysfunction of dopamine neurons. Dopamine is thought to send signals that relieve the urge to make movements. Figure 5 depicts the mechanism of dopamine action.

Comorbid conditions
There are various comorbidities associated with TS. ADHD affects 20-90% of people with TS. ADHD is a complicated neurological disorder characterized by inattention and hyperactive/impulsive behavior. ADHD pathogenesis in patients with TS is complex and includes neurobiological, genetic, and environmental factors. OCD affects 11-80% of people with TS. Obsessions (intrusive thoughts) and compulsions (repetitive behaviors) are the features of OCD, which result in adaptive dysfunction and emotional maladjustment. OCD symptoms in TS patients may differ from those seen in persons with primary OCD. TS individuals, for example, show increased symmetry preoccupation, "just right" perception, and obsessional counting (arithmomania); individuals with pure OCD, on the other hand, show an increased urge for compulsive washing, contamination worries, and cleaning rituals. Some persons with TS feel compelled to do things they should not, such as making nasty or personal remarks that are out of character. This might take the form of a tic or of a more complex behavioral response called non-obscene socially inappropriate behavior. Depression, sleep issues, and migraine are further comorbidities associated with TS. More severe sequelae of TS include cervical myelopathy, cervical disk herniation, compressive neuropathy, arterial dissection, and stroke.

Treatment
Several approaches can be employed to assist patients with unpleasant tics, summarized in Figure 6. The first careful consideration is whether or not to treat at all, because treatment is only symptomatic. Some individuals experience only minimal tics, so treatment could be more harmful than the disease. Furthermore, tics are usually self-limiting and vanish on their own in many patients. If symptomatic treatment is required, however, effective therapy is available. Treatment has to be a multidisciplinary, individualized, and integrative approach. There are many ways to assess the efficacy of a therapeutic intervention, but we rely chiefly on clinical grading scales such as the Yale Global Tic Severity Scale (YGTSS), particularly its total tic severity component (TTS). Current management includes behavioral, pharmacologic, and surgical treatments (Table 4). All patients should be educated about the disease and, if possible, receive behavioral therapy for tics and/or comorbidities. Cognitive-behavioral therapies have a long history, with excellent evidence for two specific approaches.
Comprehensive behavioral intervention is one such approach. It is based on the habit-reversal training viewpoint, in which the patient withstands the tic urge by producing a competing muscle movement that prevents the tic from occurring. Exposure and response prevention is another treatment, in which patients are taught to endure the urge to tic while refraining from performing it. Because motivation, learning difficulties, and other comorbidities can interfere with these treatments, they are not appropriate for all individuals. The limited availability of specialized clinical psychologists is the greatest barrier to treatment.

Pharmacological Treatment
Pharmacological agents such as clonidine and guanfacine, vesicular monoamine transporter type 2 inhibitors, topiramate, and tetrabenazine are often employed as first-line therapy for patients whose tics cannot be managed with behavioral therapy or for whom it is not accessible. Antipsychotics such as aripiprazole, ziprasidone, risperidone, and fluphenazine are used as second-line therapy. Clonazepam (a benzodiazepine) can be helpful but should not be used as a first-line drug. These drugs are often effective, but they carry the risk of tardive dyskinesia and metabolic syndrome, along with other side effects. Another possibility is botulinum neurotoxin injection. Botulinum toxin can be used to treat focal tics, particularly those involving the neck or eyes, as well as vocal-cord injections to treat coprolalia and vocal tics, although these are often accompanied by a hoarse voice. There is insufficient evidence for using cannabis-derived substances such as nabiximols, nabilone, and cannabidiol to treat tics. The most common adverse effects of these drugs include dizziness, fatigue, and dry mouth. More research is needed before cannabis-based drugs can be properly prescribed to TS sufferers. Although all existing dopamine-receptor-blocking medications predominantly antagonize D2 receptors, inhibition of D1 receptors may also have a positive effect. Ecopipam (a D1-receptor antagonist) was initially developed as a candidate antipsychotic in the 1980s but failed in schizophrenia trials; it has, however, shown potential in treating tics.

Surgical Treatment
Deep brain stimulation (DBS) can be an alternative therapy option for patients with severe, treatment-resistant TS.

Conclusions
TS is a complicated psycho-neurological disorder comprising motor and vocal tics and various additional comorbid conditions, including ADHD, OCD, depression, sleep issues, impulsive behavior, migraine, rage attacks, cervical myelopathy, and even arterial dissection and stroke due to violent motor tics. The tics can be mildly or moderately bothersome, and in some circumstances they can lead to self-harm or become otherwise debilitating. Comorbid cognitive and psychiatric problems can exacerbate overall impairment and reduce quality of life. Based on clinical similarities, we believe that TS of genetic origin and Tourette-like syndrome (or secondary Tourettism) induced by environmental causes are related medical disorders with numerous etiologies. Patient education and a personalized, targeted therapeutic strategy are essential in treatment. A multifaceted approach is therefore required, addressing motor symptoms as well as the psychological/behavioral problems linked to TS; DBS remains a restricted option, as it comes with its own set of dangers. The body of knowledge about TS is rapidly expanding.
However, a few simple yet critical questions remain unanswered: Why do tics develop in children aged 5-10 years? Why are they more prevalent among boys? Why do they decrease during sleep? Why do tics typically resolve as age progresses? How well can we forecast the prognosis of an individual patient? Is secondary prevention a viable option? Hopefully, future research will address these and other critical challenges.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Q: Em dash vs semicolon: which is more appropriate in the following examples? I am very confused by these, and even when I understand other people's usage of them I find it difficult to know when to employ them myself. For this reason, I am trying to make my own examples and see if I get them correct. Please understand, English is not my first language, but I have never learned grammar in my native language either. I hope that what I'm saying is comprehensible to you. In the following examples, I'll be using a period in place of the em dash or semicolon, because I am utterly confused as to which one should be used. English is not my first language, and I'm having trouble with the grammar. Specifically semicolons and dashes. Don't ask Jim to fix your car. That sort of thing would be better handled by Steve. The question isn't what you can take away from this, but what you can learn in the process. / The question isn't what you can take away from this. It is what you can learn in the process. Normally I would use a semicolon in all of these instances, but recently I have come to learn that this is incorrect usage. A: Dashes can be used in place of parentheses to indicate an aside or qualifying statement. I don't think either has a place in any of your examples. Generally speaking, for the same reason you're having a hard time understanding their use, it's a good idea to avoid using semicolons altogether. The semicolon is intended to separate two sentences where the second sentence clarifies or extends the first. In practice, they're often used incorrectly and there is ample evidence that they confuse readers and translation software. A comma or period would often suffice. It's good advice to use the simplest punctuation possible. That often means using the simplest sentence construction possible as well. Here is how I would punctuate your examples: English is not my first language. I'm having trouble understanding the punctuation, specifically semicolons and dashes. Note here that the wording is more specific so that the second clause merely clarifies. It could be thought of as a contraction of this more verbose version: English is not my first language. I'm having trouble understanding the punctuation. Specifically, I'm having trouble understanding semicolons and dashes. Or, if you really felt the need to use that spare semicolon: English is not my first language. I'm having trouble understanding the punctuation; specifically, I'm having trouble understanding semicolons and dashes. Your second example is fine as is; it's completely clear in meaning as two sentences (see what I did there?). Your third sentence provides a great example of the many ways to associate two sentences. The first is very clear, but awkward and wordy. The second is probably most confusing to readers because the second sentence is quasi-grammatical. "it" implies "The question" here. The third is a rather elegant construction to my native English comprehension. Does the conjunction "but" imply the same meaning to you, however? The question isn't what you can take away from this. The question is what you can learn in the process. The question isn't what you can take away from this; it is what you can learn in the process. The question isn't what you can take away from this, but what you can learn in the process. These all mean exactly the same thing. From your perspective, take the construction that makes the most sense and use that consistently in your writing. 
Much great writing can be done without any semicolons at all. Finally, note that your last example is a rhetorically loaded construction in English. I'm sure "Not this, but that" phrasings are encountered in many languages. Here's a famous example: Ask not what your country can do for you. Ask what you can do for your country. In these cases, simple, repeated, parallel constructions work in your favor in spite of the punctuation: It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness... Be clear. Be consistent. Remember that many writers don't actually know the rules of punctuation. My apologies for rambling.
#!/usr/bin/env python """ .. py:currentmodule:: FileFormat.Results.XraySpectraSpecimen .. moduleauthor:: <NAME> <<EMAIL>> Read XraySpectraSpecimen MCXRay results file. """ # Script information for the file. __author__ = "<NAME> (<EMAIL>)" __version__ = "" __date__ = "" __copyright__ = "Copyright (c) 2012 Hendrix Demers" __license__ = "" # Standard library modules. import os.path import csv # Third party modules. # Local modules. # Project modules import pymcxray.FileFormat.Results.BaseResults as BaseResults # Globals and constants variables. ENERGIES_keV = "Energy (keV)" SPECTRUM_TOTAL = "Spectra Total" SPECTRUM_LINES = "Spectra Lines" SPECTRUM_BREMSSTRAHLUNG = "Spectra Bremsstrahlung" HDF5_XRAY_SPECTRA_SPECIMEN = "XraySpectraSpecimen" HDF5_ENERGIES_keV = ENERGIES_keV HDF5_TOTAL = SPECTRUM_TOTAL HDF5_CHARACTERISTIC = SPECTRUM_LINES HDF5_BREMSSTRAHLUNG = SPECTRUM_BREMSSTRAHLUNG class XraySpectraSpecimen(BaseResults.BaseResults): def __init__(self): super(XraySpectraSpecimen, self).__init__() self.energies_keV = [] self.totals = [] self.characteristics = [] self.backgrounds = [] def read(self): suffix = "_SpectraSpecimen.csv" filename = self.basename + suffix filepath = os.path.join(self.path, filename) with open(filepath, 'r') as csvFile: reader = csv.DictReader(csvFile, self.fieldNames) # Skip header row next(reader) for row in reader: self.energies_keV.append(float(row[ENERGIES_keV])) self.totals.append(float(row[SPECTRUM_TOTAL])) self.characteristics.append(float(row[SPECTRUM_LINES])) self.backgrounds.append(float(row[SPECTRUM_BREMSSTRAHLUNG])) def write_hdf5(self, hdf5_group): hdf5_group = hdf5_group.require_group(HDF5_XRAY_SPECTRA_SPECIMEN) hdf5_group.create_dataset(HDF5_ENERGIES_keV, data=self.energies_keV) hdf5_group.create_dataset(HDF5_TOTAL, data=self.totals) hdf5_group.create_dataset(HDF5_CHARACTERISTIC, data=self.characteristics) hdf5_group.create_dataset(HDF5_BREMSSTRAHLUNG, data=self.backgrounds) @property def fieldNames(self): fieldNames = [] fieldNames.append(ENERGIES_keV) fieldNames.append(SPECTRUM_TOTAL) fieldNames.append(SPECTRUM_LINES) fieldNames.append(SPECTRUM_BREMSSTRAHLUNG) return fieldNames @property def energies_keV(self): return self._energies_keV @energies_keV.setter def energies_keV(self, energies_keV): self._energies_keV = energies_keV @property def totals(self): return self._totals @totals.setter def totals(self, totals): self._totals = totals @property def characteristics(self): return self._characteristics @characteristics.setter def characteristics(self, characteristics): self._characteristics = characteristics @property def backgrounds(self): return self._backgrounds @backgrounds.setter def backgrounds(self, backgrounds): self._backgrounds = backgrounds
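A minimal usage sketch for the reader above. It assumes the `path` and `basename` attributes are supplied by the `BaseResults` parent class and that `h5py` is available for the HDF5 group; neither is confirmed by this file alone, and the file names below are hypothetical.

# Minimal usage sketch for XraySpectraSpecimen. Assumes BaseResults exposes
# `path`/`basename` attributes and that h5py is installed; file names are
# hypothetical.
import h5py

from pymcxray.FileFormat.Results.XraySpectraSpecimen import XraySpectraSpecimen

spectra = XraySpectraSpecimen()
spectra.path = "results"            # directory holding the CSV output
spectra.basename = "simulation_01"  # reads simulation_01_SpectraSpecimen.csv
spectra.read()

# Persist the four parallel arrays (energies, totals, characteristics,
# backgrounds) under the "XraySpectraSpecimen" HDF5 group.
with h5py.File("simulation_01.h5", "w") as hdf5_file:
    spectra.write_hdf5(hdf5_file)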
Variable effects of 12 weeks of omega-3 supplementation on resting skeletal muscle metabolism. Omega-3 supplementation has been purported to improve the function of several organs in the body, including reports of an increased resting metabolic rate (RMR) and greater reliance on fat oxidation. However, the potential for omega-3s to modulate human skeletal muscle metabolism has received little attention. This study examined the effects of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) supplementation on whole-body RMR and the content of proteins involved in fat metabolism in human skeletal muscle. Recreationally active males were supplemented with 3.0 g/day of EPA and DHA (n = 21) or olive oil (n = 9) for 12 weeks. Resting muscle biopsies were sampled in a subset of 10 subjects before (pre) and after (post) omega-3 supplementation. RMR significantly increased (5.3%, p = 0.040) following omega-3 supplementation (pre: 1.33 ± 0.05; post: 1.40 ± 0.04 kcal/min), with variable individual responses. When normalized for body mass, this effect was lost (5.2%, p = 0.058). Omega-3s did not affect whole-body fat oxidation, and olive oil did not alter any parameter assessed. Omega-3 supplementation did not affect whole-muscle, sarcolemmal, or mitochondrial FAT/CD36, FABPpm, FATP1, or FATP4 content, nor mitochondrial electron transport chain and PDH proteins, but it did increase the long form of UCP3 by 11%. In conclusion, supplementation with a high dose of omega-3s for 12 weeks increased RMR in a small and variable manner in a group of healthy young men. Omega-3 supplementation also had no effect on several proteins involved in skeletal muscle fat metabolism and did not cause mitochondrial biogenesis.
An 'invisible' Tory councillor who refused to take up his post, triggering a by-election, has been slammed by colleagues. Sandy Thornton, 79, has been dubbed the 'ghost councillor' after not showing up for May's election count and refusing to take up his seat. Now a by-election will have to be held in September, which will cost the taxpayer an estimated £50,000, the Sunday Mail reports. He has blamed chronic ill health, but political opponents say he was a paper candidate who had no expectation of being elected. Thornton won a seat in North Lanarkshire as the Tories had their best local election results in Scotland in decades under Ruth Davidson. He was elected after winning 13.3 per cent of the vote in the four-seat Fortissat ward but did not even attend the election count. Five years earlier, Thornton won just 2.6 per cent of the vote. He has confirmed to council chiefs that he will not take up the seat. A by-election has been set for September 7. Tommy Cochrane, SNP councillor for Fortissat, said: “We call him the ghost councillor because we have never seen him. I couldn’t even tell you what he looks like. He stood in this ward in 2012, and not one election leaflet was put through people’s doors. He didn’t appear at any polling stations. Fast-forward to 2017 and we have exactly the same situation. My feeling is that he was a paper candidate. If he was unwell, he could have stood down as a candidate 23 days before the election and someone else could have been nominated. My view is that even if you’re standing for the Monster Raving Loony Party you should be prepared to be elected, because no one knows how the public is going to vote. When I stood in 2012, my wife was severely ill and I was looking after a severely disabled son. My wife died six weeks later but I still took up my post. The people elected me and I stood up to the mark.” A Tory spokesman said: “Sandy has decided he is unable to fulfil the role of a councillor due to ill health.” When approached by the Sunday Mail, Thornton said: “I have a medical condition I don’t wish to discuss. I spoke to the council chief executive in confidence and I don’t want to discuss it with the press.” Asked about the cost of the by-election, he said: “Sadly that’s the situation that pertains.”
Occlusion of the semicircular canal using argon laser. The effects of argon laser on the bony semicircular canals were studied in the guinea pig. After intraperitoneal administration of Nembutal, the bulla was opened in order to approach the lateral and posterior canals. The anterior canal was approached through the posterior fossa. The argon laser was applied through a probe which was connected to a device from HGM Medical Laser Systems. One of the three semicircular canals was irradiated one to several times by argon laser (1.0-1.5 W x 0.5 sec). Histopathologic examination of the temporal bones revealed that the semicircular duct shrank immediately after irradiation. The laser produced a charred area in the bony canal wall. The semicircular canals gradually became fibrotic and ossified and completely occluded within several weeks. Heat produced in the bony canal may be responsible for the morphologic changes. On delayed observation, the cochlea of the canal-irradiated animals showed no morphologic changes. Auditory brain stem responses were normal. Caloric stimulation using 5 ml/5 sec of ice water revealed no response in the lateral canal-irradiated animals.
import { async, ComponentFixture, TestBed } from '@angular/core/testing'; import { DepartmentSingleComponent } from './department-single.component'; import { DepartmentFormComponent } from '../department-form/department-form.component'; import { ReactiveFormsModule } from '@angular/forms'; import { HttpClientTestingModule } from '@angular/common/http/testing'; describe('DepartmentSingleComponent', () => { let component: DepartmentSingleComponent; let fixture: ComponentFixture<DepartmentSingleComponent>; beforeEach(async(() => { TestBed.configureTestingModule({ imports:[ ReactiveFormsModule, HttpClientTestingModule ], declarations: [ DepartmentSingleComponent, DepartmentFormComponent ] }) .compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(DepartmentSingleComponent); component = fixture.componentInstance; component.department = { deptName: "Department", funcName: "functionality" } fixture.detectChanges(); }); it('should create', () => { expect(component).toBeTruthy(); }); });
/*
    Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.

    Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
    except in compliance with the License. A copy of the License is located at

        http://aws.amazon.com/apache2.0/

    or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for
    the specific language governing permissions and limitations under the License.
 */

package com.amazon.ask.request;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.model.IntentRequest;
import com.amazon.ask.model.Request;
import com.amazon.ask.model.canfulfill.CanFulfillIntentRequest;
import com.amazon.ask.model.interfaces.display.ElementSelectedRequest;
import com.amazon.ask.request.viewport.ViewportUtils;
import com.amazon.ask.request.viewport.ViewportProfile;

import java.util.function.Predicate;

/**
 * A collection of built-in Predicates that can be used to evaluate properties of an incoming {@link HandlerInput}.
 */
public final class Predicates {

    /** Prevent instantiation. */
    private Predicates() { }

    /**
     * Returns a predicate that returns true if the incoming request is an instance
     * of the given request class.
     * @param <T> class of the request to evaluate against
     * @param requestType request type to evaluate against
     * @return true if the incoming request is an instance of the given request class
     */
    public static <T extends Request> Predicate<HandlerInput> requestType(final Class<T> requestType) {
        return i -> requestType.isInstance(i.getRequestEnvelope().getRequest());
    }

    /**
     * Returns a predicate that returns true if the incoming request is an {@link IntentRequest}
     * for the given intent name.
     * @param intentName intent name to evaluate against
     * @return true if the incoming request is an {@link IntentRequest} for the given intent name
     */
    public static Predicate<HandlerInput> intentName(final String intentName) {
        return i -> i.getRequestEnvelope().getRequest() instanceof IntentRequest
                && intentName.equals(((IntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getName());
    }

    /**
     * Returns a predicate that returns true if the incoming request is a {@link CanFulfillIntentRequest}
     * for the given intent name.
     * @param intentName intent name to evaluate against
     * @return true if the incoming request is a {@link CanFulfillIntentRequest} for the given intent name
     */
    public static Predicate<HandlerInput> canFulfillIntentName(final String intentName) {
        return i -> i.getRequestEnvelope().getRequest() instanceof CanFulfillIntentRequest
                && intentName.equals(((CanFulfillIntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getName());
    }

    /**
     * Returns a predicate that returns true if the incoming request is an {@link IntentRequest}
     * and contains the given slot name and value.
     * @param slotName expected intent slot name
     * @param slotValue expected intent slot value
     * @return true if the incoming request is an {@link IntentRequest} and contains the given slot name and value
     */
    public static Predicate<HandlerInput> slotValue(final String slotName, final String slotValue) {
        return i -> i.getRequestEnvelope().getRequest() instanceof IntentRequest
                && ((IntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots() != null
                && ((IntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots().containsKey(slotName)
                && slotValue.equals(((IntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots().get(slotName).getValue());
    }

    /**
     * Returns a predicate that returns true if the incoming request is a {@link CanFulfillIntentRequest}
     * and contains the given slot name and value.
     * @param slotName expected intent slot name
     * @param slotValue expected intent slot value
     * @return true if the incoming request is a {@link CanFulfillIntentRequest} and contains the given slot name and value
     */
    public static Predicate<HandlerInput> canFulfillSlotValue(final String slotName, final String slotValue) {
        return i -> i.getRequestEnvelope().getRequest() instanceof CanFulfillIntentRequest
                && ((CanFulfillIntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots() != null
                && ((CanFulfillIntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots().containsKey(slotName)
                && slotValue.equals(((CanFulfillIntentRequest) i.getRequestEnvelope().getRequest()).getIntent().getSlots().get(slotName).getValue());
    }

    /**
     * Returns a predicate that returns true if the incoming request is an {@link ElementSelectedRequest}
     * with the given token.
     * @param elementToken token to evaluate against
     * @return true if the incoming request is an {@link ElementSelectedRequest} with the given token
     */
    public static Predicate<HandlerInput> selectedElementToken(final String elementToken) {
        return i -> i.getRequestEnvelope().getRequest() instanceof ElementSelectedRequest
                && elementToken.equals(((ElementSelectedRequest) i.getRequestEnvelope().getRequest()).getToken());
    }

    /**
     * Returns a predicate that returns true if the request attributes included with the {@link HandlerInput}
     * contain the expected attribute value.
     * @param key key of the attribute to evaluate
     * @param value value of the attribute to evaluate
     * @return true if the request attributes included with the {@link HandlerInput} contain the expected
     * attribute value
     */
    public static Predicate<HandlerInput> requestAttribute(final String key, final Object value) {
        return i -> i.getAttributesManager().getRequestAttributes().containsKey(key)
                && value.equals(i.getAttributesManager().getRequestAttributes().get(key));
    }

    /**
     * Returns a predicate that returns true if session attributes are included with the {@link HandlerInput}
     * and contain the expected attribute value.
     * @param key key of the attribute to evaluate
     * @param value value of the attribute to evaluate
     * @return true if session attributes are included with the {@link HandlerInput} and contain the expected
     * attribute value
     */
    public static Predicate<HandlerInput> sessionAttribute(final String key, final Object value) {
        return i -> i.getRequestEnvelope().getSession() != null
                && i.getAttributesManager().getSessionAttributes().containsKey(key)
                && value.equals(i.getAttributesManager().getSessionAttributes().get(key));
    }

    /**
     * Returns a predicate that returns true if the persistent attributes included with the {@link HandlerInput}
     * contain the expected attribute value.
     * @param key key of the attribute to evaluate
     * @param value value of the attribute to evaluate
     * @return true if the persistent attributes included with the {@link HandlerInput} contain the expected
     * attribute value
     */
    public static Predicate<HandlerInput> persistentAttribute(final String key, final Object value) {
        return i -> i.getAttributesManager().getPersistentAttributes().containsKey(key)
                && value.equals(i.getAttributesManager().getPersistentAttributes().get(key));
    }

    /**
     * Returns a predicate that returns true if the viewport profile included with the {@link HandlerInput}
     * matches the given viewport profile.
     * @param viewportProfile viewport profile to evaluate against
     * @return true if the viewport profile included with the {@link HandlerInput} matches the given viewport profile
     */
    public static Predicate<HandlerInput> viewportProfile(final ViewportProfile viewportProfile) {
        return i -> viewportProfile.equals(ViewportUtils.getViewportProfile(i.getRequestEnvelope()));
    }
}
# Copyright (c) 2010 Carnegie Mellon University and Intel Corporation
# Author: <NAME> <<EMAIL>>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
#   * Redistributions of source code must retain the above copyright
#     notice, this list of conditions and the following disclaimer.
#   * Redistributions in binary form must reproduce the above copyright
#     notice, this list of conditions and the following disclaimer in the
#     documentation and/or other materials provided with the distribution.
#   * Neither the name of Intel Corporation nor Carnegie Mellon University,
#     nor the names of their contributors, may be used to endorse or
#     promote products derived from this software without specific prior
#     written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL INTEL CORPORATION OR CARNEGIE MELLON
# UNIVERSITY BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# -*- coding: utf-8 -*-
'''Functions for Serializing TSRs and TSR Chains

SerializeTSR(manipindex,bodyandlink,T0_w,Tw_e,Bw)

Input:
manipindex (int): the 0-indexed index of the robot's manipulator
bodyandlink (str): body and link which is used as the 0 frame. Format
                   'body_name link_name'. To use the world frame, specify 'NULL'
T0_w (double 4x4): transform matrix of the TSR's reference frame relative to
                   the 0 frame
Tw_e (double 4x4): transform matrix of the TSR's offset frame relative to the
                   w frame
Bw (double 1x12): bounds in x y z roll pitch yaw.
                  Format: [x_min x_max y_min y_max ...]

Output:
outstring (str): string to use for the SerializeTSRChain function

SerializeTSRChain(bSampleFromChain,bConstrainToChain,numTSRs,allTSRstring,mimicbodyname,mimicbodyjoints)

Input:
bSampleStartFromChain (0/1): 1: Use this chain for sampling start
                             configurations; 0: Ignore for sampling starts
bSampleGoalFromChain (0/1): 1: Use this chain for sampling goal
                            configurations; 0: Ignore for sampling goals
bConstrainToChain (0/1): 1: Use this chain for constraining configurations;
                         0: Ignore for constraining
numTSRs (int): Number of TSRs in this chain (must be > 0)
allTSRstring (str): string of concatenated TSRs generated using SerializeTSR.
                    Should be like [TSRstring1 ' ' TSRstring2 ...]
mimicbodyname (str): name of associated mimicbody for this chain
                     (NULL if none associated)
mimicbodyjoints (int [1xn]): 0-indexed indices of the mimicbody's joints that
                             are mimicked (MUST BE INCREASING AND CONSECUTIVE
                             [FOR NOW])

Output:
outstring (str): string to include in call to cbirrt planner'''

from numpy import *
from rodrigues import *
from TransformMatrix import *
import copy

#these are standalone functions for serialization but you should really use the classes
def SerializeTSR(manipindex,bodyandlink,T0_w,Tw_e,Bw):
    return '%d %s %s %s %s'%(manipindex, bodyandlink, SerializeTransform(T0_w), SerializeTransform(Tw_e), Serialize1DMatrix(Bw))

def SerializeTSRChain(bSampleStartFromChain,bSampleGoalFromChain,bConstrainToChain,numTSRs,allTSRstring,mimicbodyname,mimicbodyjoints):
    outstring = ' TSRChain %d %d %d %d %s %s'%(bSampleStartFromChain, bSampleGoalFromChain, bConstrainToChain, numTSRs, allTSRstring, mimicbodyname)
    if size(mimicbodyjoints) != 0:
        outstring += ' %d %s '%(size(mimicbodyjoints),Serialize1DIntegerMatrix(mimicbodyjoints))
    return outstring

class TSR():
    def __init__(self, T0_w_in = mat(eye(4)), Tw_e_in = mat(eye(4)), Bw_in = mat(zeros([1,12])), manipindex_in = -1, bodyandlink_in = "NULL"):
        self.T0_w = T0_w_in
        self.Tw_e = Tw_e_in
        self.Bw = Bw_in
        self.manipindex = manipindex_in
        self.bodyandlink = bodyandlink_in

    def Serialize(self):
        return '%d %s %s %s %s'%(self.manipindex, self.bodyandlink, SerializeTransform(self.T0_w), SerializeTransform(self.Tw_e), Serialize1DMatrix(self.Bw))

class TSRChain():
    def __init__(self, bSampleStartFromChain_in=0, bSampleGoalFromChain_in=0, bConstrainToChain_in=0, mimicbodyname_in="NULL", mimicbodyjoints_in = []):
        self.bSampleStartFromChain = bSampleStartFromChain_in
        self.bSampleGoalFromChain = bSampleGoalFromChain_in
        self.bConstrainToChain = bConstrainToChain_in
        self.mimicbodyname = mimicbodyname_in
        self.mimicbodyjoints = mimicbodyjoints_in
        self.TSRs = []

    def insertTSR(self, tsr_in):
        self.TSRs.append(copy.deepcopy(tsr_in))

    def Serialize(self):
        allTSRstring = '%s'%(' '.join(' %s'%(tsr.Serialize()) for tsr in self.TSRs))
        numTSRs = len(self.TSRs)
        outstring = ' TSRChain %d %d %d %d %s %s'%(self.bSampleStartFromChain, self.bSampleGoalFromChain, self.bConstrainToChain, numTSRs, allTSRstring, self.mimicbodyname)
        if size(self.mimicbodyjoints) != 0:
            outstring += ' %d %s '%(size(self.mimicbodyjoints),Serialize1DIntegerMatrix(self.mimicbodyjoints))
        return outstring

    def SetFirstT0_w(self,T0_w_in):
        self.TSRs[0].T0_w = copy.deepcopy(T0_w_in)

if __name__ == '__main__':
    juiceTSR = TSR()
    juiceTSR.Tw_e = MakeTransform(rodrigues([pi/2, 0, 0]),mat([0, 0.22, 0.1]).T)
    juiceTSR.Bw = mat([0, 0, 0, 0, -0.02, 0.02, 0, 0, 0, 0, -pi, pi])
    juiceTSR.manipindex = 0

    juiceTSRChain1 = TSRChain(1,0)
    juiceTSRChain1.insertTSR(juiceTSR)

    juiceTSR.Tw_e = MakeTransform(rodrigues([0, pi, 0])*rodrigues([pi/2, 0, 0]),mat([0, 0.22, 0.1]).T)
    juiceTSRChain2 = TSRChain(1,0)
    juiceTSRChain2.insertTSR(juiceTSR)

    print(juiceTSRChain1.Serialize())
    print(juiceTSRChain2.Serialize())
Cholangiocytes express the aquaporin CHIP and transport water via a channel-mediated mechanism. Cholangiocytes line the intrahepatic bile ducts and regulate salt and water secretion during bile formation, but the mechanism(s) regulating ductal water movement remains obscure. A water-selective channel, the aquaporin CHIP, was recently described in several epithelia, so we tested the hypothesis that osmotic water movement by cholangiocytes is mediated by CHIP. Isolated rodent cholangiocytes showed a rapid increase in volume in the presence of hypotonic extracellular buffers; the ratio of osmotic to diffusional permeability coefficients was > 10. The osmotically induced increase in cholangiocyte volume was inversely proportional to buffer osmolality, independent of temperature, and reversibly blocked by HgCl2. Also, the luminal area of isolated, enclosed bile duct units increased after exposure to hypotonic buffer and was reversibly inhibited by HgCl2. RNase protection assays, anti-CHIP immunoblots, and immunocytochemistry confirmed that CHIP transcript and protein were present in isolated cholangiocytes but not in hepatocytes. These results demonstrate that (i) isolated cholangiocytes and intact, polarized bile duct units manifest rapid, mercury-sensitive increases in cell size and luminal area, respectively, in response to osmotic gradients and (ii) isolated cholangiocytes express aquaporin CHIP at both the mRNA and the protein level. The data implicate aquaporin water channels in the transcellular movement of water across cholangiocytes lining intrahepatic bile ducts and provide a plausible molecular explanation for ductal water secretion.
When characters in a movie are having a good time, it’s hard to completely resist their charms. Such is the case with Finding Steve McQueen, a jaunty heist tale that has more than its fair share of fun, even if the final product is on the slight side. How and when you watch this sort of picture will determine, in large part, how you feel about it. If you were to see it on cable or on a plane, for example, it would certainly satisfy and pass the time. In theaters or On Demand, however? There, it comes up a little short of the necessary mark. The film is a mix of a crime caper and a bit of a romance as well. In 1980, Harry Barber (Travis Fimmel) begins to tell his girlfriend Molly Murphy (Rachel Taylor) a story. Back in 1972, a group of thieves from Youngstown, Ohio, under the direction of Enzo Rotella (William Fichtner), set out to steal millions from President Richard Nixon. Apparently, he’s hidden a fortune in illegal contributions and blackmail money in a secret fund that’s been discovered. This heist puts them on the radar of the F.B.I., specifically the duo of Howard Lambert (Forest Whitaker) and Sharon Price (Lily Rabe). As Harry details this to Molly, we start to find out more about him, the robbery, and why he’s telling her this now. Mark Steven Johnson directs a screenplay by the duo of Ken Hixon and Keith Sharon. Cinematography is by José David Montero, while Víctor Reyes composed the score. Supporting players include Rhys Coiro, John Finn, Louis Lombardi, Jake Weary, and more. It’s undeniable fun to watch the heist come together. Unfortunately, it doesn’t add up to much, and the structural decision to essentially tell the tale in flashback negates some of the excitement. Travis Fimmel is fine, yet unremarkable, in the lead role, while Rachel Taylor is charming and spunky, yet under-utilized. William Fichtner gets very little to do, which is a shame, while Forest Whitaker is completely and utterly wasted. Director Mark Steven Johnson moves things along well enough, though the script by Ken Hixon and Keith Sharon doesn’t quite jump off the page. They all do fine enough work, but the final project never leaps over the bar. Interestingly, the concept of money hidden by a crooked President is one that could have been explored more. Hell, the fictional Triple Frontier ended up covering similar ground, just not on American soil. That film makes the hidden money the central conceit of its plot, while this one does not. Finding Steve McQueen is as concerned with the love story as with the robbery. Arguably, it should have leaned more into the romance, as that’s the more consistently engaging element. Neither adds up to much, but that’s where more of the interest resides. The heist aspect is fine, but there’s nothing here we haven’t seen many times before in other, better projects. Finding Steve McQueen is in theaters now!
//
// Example how to play a tune using MAVSDK.
//

#include <cstdint>
#include <future>
#include <mavsdk/mavsdk.h>
#include <mavsdk/plugins/tune/tune.h>
#include <iostream>
#include <thread>

using namespace mavsdk;
using namespace std::this_thread;
using namespace std::chrono;

void usage(const std::string& bin_name)
{
    std::cerr << "Usage : " << bin_name << " <connection_url>\n"
              << "Connection URL format should be :\n"
              << " For TCP : tcp://[server_host][:server_port]\n"
              << " For UDP : udp://[bind_host][:bind_port]\n"
              << " For Serial : serial:///path/to/serial/dev[:baudrate]\n"
              << "For example, to connect to the simulator use URL: udp://:14540\n";
}

std::shared_ptr<System> get_system(Mavsdk& mavsdk)
{
    std::cout << "Waiting to discover system...\n";
    auto prom = std::promise<std::shared_ptr<System>>{};
    auto fut = prom.get_future();

    // We wait for new systems to be discovered, once we find one that has an
    // autopilot, we decide to use it.
    mavsdk.subscribe_on_new_system([&mavsdk, &prom]() {
        auto system = mavsdk.systems().back();

        if (system->has_autopilot()) {
            std::cout << "Discovered autopilot\n";

            // Unsubscribe again as we only want to find one system.
            mavsdk.subscribe_on_new_system(nullptr);
            prom.set_value(system);
        }
    });

    // We usually receive heartbeats at 1Hz, therefore we should find a
    // system after around 3 seconds max, surely.
    if (fut.wait_for(seconds(3)) == std::future_status::timeout) {
        std::cerr << "No autopilot found.\n";
        return {};
    }

    // Get discovered system now.
    return fut.get();
}

int main(int argc, char** argv)
{
    if (argc != 2) {
        usage(argv[0]);
        return 1;
    }

    Mavsdk mavsdk;
    ConnectionResult connection_result = mavsdk.add_any_connection(argv[1]);

    if (connection_result != ConnectionResult::Success) {
        std::cerr << "Connection failed: " << connection_result << '\n';
        return 1;
    }

    auto system = get_system(mavsdk);
    if (!system) {
        return 1;
    }

    // Instantiate plugin.
Tune tune(system); std::vector<Tune::SongElement> song_elements; song_elements.push_back(Tune::SongElement::Duration4); song_elements.push_back(Tune::SongElement::NoteG); song_elements.push_back(Tune::SongElement::NoteA); song_elements.push_back(Tune::SongElement::NoteB); song_elements.push_back(Tune::SongElement::Flat); song_elements.push_back(Tune::SongElement::OctaveUp); song_elements.push_back(Tune::SongElement::Duration1); song_elements.push_back(Tune::SongElement::NoteE); song_elements.push_back(Tune::SongElement::Flat); song_elements.push_back(Tune::SongElement::OctaveDown); song_elements.push_back(Tune::SongElement::Duration4); song_elements.push_back(Tune::SongElement::NotePause); song_elements.push_back(Tune::SongElement::NoteF); song_elements.push_back(Tune::SongElement::NoteG); song_elements.push_back(Tune::SongElement::NoteA); song_elements.push_back(Tune::SongElement::OctaveUp); song_elements.push_back(Tune::SongElement::Duration2); song_elements.push_back(Tune::SongElement::NoteD); song_elements.push_back(Tune::SongElement::NoteD); song_elements.push_back(Tune::SongElement::OctaveDown); song_elements.push_back(Tune::SongElement::Duration4); song_elements.push_back(Tune::SongElement::NotePause); song_elements.push_back(Tune::SongElement::NoteE); song_elements.push_back(Tune::SongElement::Flat); song_elements.push_back(Tune::SongElement::NoteF); song_elements.push_back(Tune::SongElement::NoteG); song_elements.push_back(Tune::SongElement::OctaveUp); song_elements.push_back(Tune::SongElement::Duration1); song_elements.push_back(Tune::SongElement::NoteC); song_elements.push_back(Tune::SongElement::OctaveDown); song_elements.push_back(Tune::SongElement::Duration4); song_elements.push_back(Tune::SongElement::NotePause); song_elements.push_back(Tune::SongElement::NoteA); song_elements.push_back(Tune::SongElement::OctaveUp); song_elements.push_back(Tune::SongElement::NoteC); song_elements.push_back(Tune::SongElement::OctaveDown); song_elements.push_back(Tune::SongElement::NoteB); song_elements.push_back(Tune::SongElement::Flat); song_elements.push_back(Tune::SongElement::Duration2); song_elements.push_back(Tune::SongElement::NoteG); const int tempo = 200; Tune::TuneDescription tune_description; tune_description.song_elements = song_elements; tune_description.tempo = tempo; const auto result = tune.play_tune(tune_description); if (result != Tune::Result::Success) { std::cerr << "Tune result: " << result << '\n'; return 1; } return 0; }
import * as React from 'react';

import { t } from '@lingui/macro';
import { useAtom } from 'jotai';

import Tooltip from '@mui/material/Tooltip';
import Button from '@mui/material/Button';

import { networkInfoAtom } from '@store/networks';
import { currentStacksExplorerState, currentChainState } from '@utils/helpers';
import ProgressIcon from '@components/progress-icon';
import StacksIcon from '@assets/stacks-icon';

const StacksChainTipButtonSkeleton = () => {
  return (
    <Tooltip title={t`Stacks Chain Tip`}>
      <Button startIcon={<StacksIcon />} variant="text" size="small" color={'success'}>
        ????
      </Button>
    </Tooltip>
  );
};
export { StacksChainTipButtonSkeleton };

const StacksChainTipButton = () => {
  const [networkInfo] = useAtom(networkInfoAtom);
  const [currentStacksExplorer] = useAtom(currentStacksExplorerState);
  const [currentChain] = useAtom(currentChainState);

  return (
    <Tooltip title={t`Stacks Chain Tip`}>
      <Button
        href={
          networkInfo.stacks_tip === undefined
            ? '#'
            : `${currentStacksExplorer}/block/${networkInfo.stacks_tip}?chain=${currentChain}`
        }
        target="_blank"
        startIcon={<ProgressIcon left={2} top={5} size={20} icon="stacks" />}
        variant="text"
        size="small"
        color={networkInfo.stacks_tip === undefined ? 'error' : 'success'}
      >
        {networkInfo.stacks_tip_height}
      </Button>
    </Tooltip>
  );
};
export { StacksChainTipButton };
/**
@file
@author <NAME>
@copyright Copyright (c) 2008-2020 Regents of the University of California
@brief Function beta_deviate.
*/

#include <fvar.hpp>

/**
 * Return the beta-distributed deviate corresponding to a standard normal
 * deviate x, using the probability integral transform.
 * \param a first shape parameter of the beta distribution
 * \param b second shape parameter of the beta distribution
 * \param x standard normal deviate to transform
 * \param eps accuracy requested from the inverse beta CDF routine
 */
double beta_deviate(double a,double b,double x,double eps)
{
  // Map the normal deviate to a cumulative probability in (0, 1).
  double y=cumd_norm(x);
  // Nudge the probability away from exactly 0 or 1 for numerical stability.
  y=.9999999*y+.00000005;
  // Invert the (stable) beta CDF to obtain the beta deviate.
  double z=inv_cumd_beta_stable(a,b,y,eps);
  return z;
}
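Judging from the function names (cumd_norm as the standard normal CDF, inv_cumd_beta_stable as an inverse beta CDF), the routine is a probability integral transform. Below is a minimal SciPy sketch of the same idea; it is an illustration of the transform, not the original ADMB implementation, and it drops the eps accuracy knob that the library routine exposes.

# Illustrative SciPy sketch of beta_deviate (assumes cumd_norm is the standard
# normal CDF and inv_cumd_beta_stable an inverse beta CDF, as the names
# suggest; this is not the original library implementation).
from scipy.stats import beta, norm

def beta_deviate(a, b, x):
    y = norm.cdf(x)                  # normal deviate -> probability in (0, 1)
    y = 0.9999999 * y + 0.00000005   # clamp away from the endpoints
    return beta.ppf(y, a, b)         # inverse beta CDF -> beta deviate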
package com.mybank.dao;

import com.mybank.exception.BusinessException;

public interface AccountDeleteDAO {

	public void deleteMaxUserIdFromUserPersonalInfo() throws BusinessException;
}
import { expect } from 'chai';
import { createPolyfillsLoaderConfig } from '../../src/createPolyfillsLoaderConfig';

describe('createPolyfillsLoaderConfig()', () => {
  it('creates a config for a single module build', () => {
    const pluginConfig = {};
    const bundle = {
      options: { format: 'es' },
      entrypoints: [{ importPath: 'app.js' }],
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, bundle);

    expect(config).to.eql({
      legacy: undefined,
      modern: { files: [{ path: 'app.js', type: 'module' }] },
      polyfills: undefined,
    });
  });

  it('creates a config for multiple entrypoints', () => {
    const pluginConfig = {};
    const bundle = {
      options: { format: 'es' },
      entrypoints: [{ importPath: 'app-1.js' }, { importPath: 'app-2.js' }],
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, bundle);

    expect(config).to.eql({
      legacy: undefined,
      modern: {
        files: [
          { path: 'app-1.js', type: 'module' },
          { path: 'app-2.js', type: 'module' },
        ],
      },
      polyfills: undefined,
    });
  });

  it('creates a config for a single systemjs build', () => {
    const pluginConfig = {};
    const bundle = {
      options: { format: 'system' },
      entrypoints: [
        // @ts-ignore
        { importPath: 'app.js' },
      ],
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, bundle);

    expect(config).to.eql({
      legacy: undefined,
      modern: { files: [{ path: 'app.js', type: 'systemjs' }] },
      polyfills: undefined,
    });
  });

  it('creates a config for 2 build outputs', () => {
    const pluginConfig = {
      modernOutput: { name: 'modern' },
      legacyOutput: { name: 'legacy', test: "!('noModule' in HTMScriptElement.prototype)" },
    };
    const bundles = {
      modern: {
        options: { format: 'es' },
        entrypoints: [{ importPath: 'app.js' }],
      },
      legacy: {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'legacy/app.js' }],
      },
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, undefined, bundles);

    expect(config).to.eql({
      modern: { files: [{ path: 'app.js', type: 'module' }] },
      legacy: [
        {
          files: [{ path: 'legacy/app.js', type: 'systemjs' }],
          test: "!('noModule' in HTMScriptElement.prototype)",
        },
      ],
      polyfills: undefined,
    });
  });

  it('creates a config for 3 build outputs', () => {
    const pluginConfig = {
      modernOutput: { name: 'modern' },
      legacyOutput: [
        { name: 'super-legacy', test: 'window.bar' },
        { name: 'legacy', test: 'window.foo' },
      ],
    };
    const bundles = {
      modern: {
        options: { format: 'es' },
        entrypoints: [{ importPath: 'app.js' }],
      },
      legacy: {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'legacy/app.js' }],
      },
      'super-legacy': {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'super-legacy/app.js' }],
      },
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, undefined, bundles);

    expect(config).to.eql({
      modern: { files: [{ path: 'app.js', type: 'module' }] },
      legacy: [
        {
          files: [{ path: 'super-legacy/app.js', type: 'systemjs' }],
          test: 'window.bar',
        },
        {
          files: [{ path: 'legacy/app.js', type: 'systemjs' }],
          test: 'window.foo',
        },
      ],
      polyfills: undefined,
    });
  });

  it('can set the file type', () => {
    const pluginConfig = {
      modernOutput: { name: 'modern', type: 'script' },
      legacyOutput: {
        name: 'legacy',
        type: 'script',
        test: "!('noModule' in HTMScriptElement.prototype)",
      },
    };
    const bundles = {
      modern: {
        options: { format: 'es' },
        entrypoints: [{ importPath: 'app.js' }],
      },
      legacy: {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'legacy/app.js' }],
      },
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, undefined, bundles);

    expect(config).to.eql({
      modern: { files: [{ path: 'app.js', type: 'script' }] },
      legacy: [
        {
          files: [{ path: 'legacy/app.js', type: 'script' }],
          test: "!('noModule' in HTMScriptElement.prototype)",
        },
      ],
      polyfills: undefined,
    });
  });

  it('can set polyfills to load', () => {
    const pluginConfig = {
      polyfills: { fetch: true, webcomponents: true },
    };
    const bundle = {
      options: { format: 'es' },
      entrypoints: [{ importPath: 'app.js' }],
    };
    // @ts-ignore
    const config = createPolyfillsLoaderConfig(pluginConfig, bundle);

    expect(config).to.eql({
      legacy: undefined,
      modern: { files: [{ path: 'app.js', type: 'module' }] },
      polyfills: { fetch: true, webcomponents: true },
    });
  });

  it('throws when a single build is output while multiple builds are configured', () => {
    const pluginConfig = {
      modernOutput: 'modern',
    };
    const bundle = {
      options: { format: 'es' },
      entrypoints: [{ importPath: 'app.js' }],
    };
    // @ts-ignore
    const action = () => createPolyfillsLoaderConfig(pluginConfig, bundle);

    expect(action).to.throw();
  });

  it('throws when multiple builds are output while no builds are configured', () => {
    const pluginConfig = {};
    const bundles = {
      modern: {
        options: { format: 'es' },
        entrypoints: [{ importPath: 'app.js' }],
      },
      legacy: {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'legacy/app.js' }],
      },
    };
    // @ts-ignore
    const action = () => createPolyfillsLoaderConfig(pluginConfig, undefined, bundles);

    expect(action).to.throw();
  });

  it('throws when the modern build could not be found', () => {
    const pluginConfig = {
      modernOutput: 'not-modern',
      legacyOutput: { name: 'legacy', test: 'window.foo' },
    };
    const bundles = {
      modern: {
        options: { format: 'es' },
        entrypoints: [{ importPath: 'app.js' }],
      },
      legacy: {
        options: { format: 'system' },
        entrypoints: [{ importPath: 'legacy/app.js' }],
      },
    };
    // @ts-ignore
    const action = () => createPolyfillsLoaderConfig(pluginConfig, undefined, bundles);

    expect(action).to.throw();
  });
});
Frequency band resources are in short supply owing to the growth of mobile data services, and mobile data services in large quantities may not be achievable through network deployment and service transmission using only authorized frequency band resources. In view of the above, one option is to deploy transmissions of mobile data services on unauthorized frequency band resources, so as to raise the utilization of frequency band resources and improve user experience. An unauthorized frequency band serving as an auxiliary carrier assists an authorized frequency band serving as a main carrier in achieving transmissions of mobile data services. The unauthorized frequency band can be shared by various wireless communications systems such as Bluetooth and Wi-Fi, and these systems use the shared unauthorized frequency band resources by competing for them. Hence, it is an important and difficult research problem how to ensure that unlicensed long term evolution systems (abbreviated U-LTE or LTE-U) deployed by different service providers coexist, and how to ensure that different wireless communications systems, such as LTE-U and Wi-Fi, coexist.

An LTE system may support frequency division duplexing (FDD) and time division duplexing (TDD), which adopt different frame structures. In both frame structures, each radio frame consists of ten subframes, each lasting 1 ms. The FDD system adopts a first frame structure as shown in FIG. 1, and the TDD system adopts a second frame structure as shown in FIG. 2. As can be seen from the LTE frame structure, data is transmitted in units of subframes that last 1 ms. In LTE-U, owing to factors such as listen-before-talk (LBT) competitive access, data preparation time in a base station, and radio frequency preparation time in a base station, the start time point for transmission of an LTE-U signal may be located at any position within a subframe, so that an incomplete subframe, i.e., a physical resource lasting less than the length of one normal subframe, is transmitted. If no signal is sent in the incomplete subframe, that resource will of course be taken by other nodes when resource competition is intense. To ensure fair competition between LTE-U and Wi-Fi, LTE-U may be designed with each transmission lasting about 10 ms, and each transmission should preferably be no longer than 40 ms. If LTE-U is designed with each transmission lasting about 10 ms and incomplete subframes are never used for transmission, the transmission efficiency of LTE-U is greatly decreased. If, instead, an incomplete subframe is transmitted on a resource that cannot carry a complete subframe and is used for data transmission, data transmission efficiency is enhanced and the resource is not wasted. However, the related art provides no technical solution for performing data transmission using incomplete subframes in an LTE unauthorized frequency band. In sum, no solution is given in the related art to achieve data transmission using incomplete subframes in the unauthorized frequency band.
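To make the efficiency argument concrete, the sketch below estimates how much of a subframe remains usable once channel access is granted part-way through it. It is an illustration only, not part of the disclosure: the 14-symbol figure assumes LTE's normal cyclic prefix, and radio-frequency and data preparation delays are ignored.

# Illustration (not from the disclosure itself): usable share of a 1 ms
# subframe when LBT grants access mid-subframe. Assumes 14 OFDM symbols per
# subframe (normal cyclic prefix) and ignores RF/data preparation time.
SUBFRAME_MS = 1.0
SYMBOLS_PER_SUBFRAME = 14
SYMBOL_MS = SUBFRAME_MS / SYMBOLS_PER_SUBFRAME

def usable_symbols(access_offset_ms):
    """Whole OFDM symbols left in the current subframe after access is granted."""
    remaining_ms = SUBFRAME_MS - (access_offset_ms % SUBFRAME_MS)
    return int(remaining_ms / SYMBOL_MS)

# Access granted 0.4 ms into a subframe: 8 of 14 symbols remain usable, so
# transmitting the incomplete subframe recovers roughly 57% of a resource
# that would otherwise be wasted.
print(usable_symbols(0.4))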
import argparse import sys from collections import defaultdict from capstone import CS_OP_IMM, CS_GRP_JUMP, CS_GRP_CALL, CS_OP_MEM from capstone.x86_const import X86_REG_RIP from elftools.elf.descriptions import describe_reloc_type from elftools.elf.enums import ENUM_RELOC_TYPE_x64 from elftools.elf.relocation import RelocationSection from elftools.elf.sections import SymbolTableSection class Rewriter(): GCC_FUNCTIONS = [ "_start", "__libc_start_main", "__libc_csu_fini", "__libc_csu_init", "__lib_csu_fini", "_init", "__libc_init_first", "_fini", "_rtld_fini", "_exit", "__get_pc_think_bx", "__do_global_dtors_aux", "__gmon_start", "frame_dummy", "__do_global_ctors_aux", "__register_frame_info", "deregister_tm_clones", "register_tm_clones", "__do_global_dtors_aux", "__frame_dummy_init_array_entry", "__init_array_start", "__do_global_dtors_aux_fini_array_entry", "__init_array_end", "__stack_chk_fail", "__cxa_atexit", "__cxa_finalize", ] DATASECTIONS = [".rodata", ".data", ".bss", ".data.rel.ro", ".init_array", ".fini_array"] def __init__(self, container, outfile): self.container = container self.outfile = outfile for sec, section in self.container.sections.items(): section.load() for _, function in self.container.functions.items(): if function.name in Rewriter.GCC_FUNCTIONS: continue function.disasm() def symbolize(self): symb = Symbolizer() symb.symbolize_text_section(self.container, None) symb.symbolize_data_sections(self.container, None) def dump(self): results = list() for sec, section in sorted( self.container.sections.items(), key=lambda x: x[1].base): results.append("%s" % (section)) #added by JX; let's add the TLS section #.section .tbss,"awT",@nobits # .align 4 # .type main_tls_var, @object # .size main_tls_var, 4 #main_tls_var: # .zero 4 results.append(".section .tbss") results.append(".align 32") for key, tls in self.container.tls_list.items(): results.append(".type\t" + tls.name + ",@object") results.append(".globl\t" + tls.name) results.append(".size\t" + tls.name + ", %d" % (tls['st_size'])) results.append(tls.name + ":") results.append("\t.zero\t%d" % (tls['st_size'])) #end by JX results.append(".section .text") results.append(".align 16") for _, function in sorted(self.container.functions.items()): if function.name in Rewriter.GCC_FUNCTIONS: continue #added by JX #the same address has more than one symbols if function.start in self.container.alias_list and len(self.container.alias_list[function.start]) > 1: function.set_alias(self.container.alias_list[function.start]) results.append("\t.text\n%s" % (function)) with open(self.outfile, 'w') as outfd: outfd.write("\n".join(results + [''])) class Symbolizer(): def __init__(self): self.bases = set() self.pot_sw_bases = defaultdict(set) self.symbolized = set() # TODO: Use named symbols instead of generic labels when possible. # TODO: Replace generic call labels with function names instead def symbolize_text_section(self, container, context): # Symbolize using relocation information. for rel in container.relocations[".text"]: fn = container.function_of_address(rel['offset']) if not fn or fn.name in Rewriter.GCC_FUNCTIONS: continue inst = fn.instruction_of_address(rel['offset']) if not inst: continue # Fix up imports if "@" in rel['name']: suffix = "" if rel['st_value'] == 0: suffix = "@PLT" if len(inst.cs.operands) == 1: inst.op_str = "%s%s" % (rel['name'].split("@")[0], suffix) else: # Figure out which argument needs to be # converted to a symbol. 
if suffix: suffix = "@PLT" mem_access, _ = inst.get_mem_access_op() if not mem_access: continue value = hex(mem_access.disp) inst.op_str = inst.op_str.replace( value, "%s%s" % (rel['name'].split("@")[0], suffix)) else: mem_access, _ = inst.get_mem_access_op() if not mem_access: # These are probably calls? continue if (rel['type'] in [ ENUM_RELOC_TYPE_x64["R_X86_64_PLT32"], ENUM_RELOC_TYPE_x64["R_X86_64_PC32"] ]): value = mem_access.disp ripbase = inst.address + inst.sz inst.op_str = inst.op_str.replace( hex(value), ".LC%x" % (ripbase + value)) if ".rodata" in rel["name"]: self.bases.add(ripbase + value) self.pot_sw_bases[fn.start].add(ripbase + value) else: print("[*] Possible incorrect handling of relocation!") value = mem_access.disp inst.op_str = inst.op_str.replace( hex(value), ".LC%x" % (rel['st_value'])) self.symbolized.add(inst.address) self.symbolize_cf_transfer(container, context) # Symbolize remaining memory accesses self.symbolize_mem_accesses(container, context) self.symbolize_switch_tables(container, context) def symbolize_cf_transfer(self, container, context=None): for _, function in container.functions.items(): addr_to_idx = dict() for inst_idx, instruction in enumerate(function.cache): addr_to_idx[instruction.address] = inst_idx for inst_idx, instruction in enumerate(function.cache): is_jmp = CS_GRP_JUMP in instruction.cs.groups is_call = CS_GRP_CALL in instruction.cs.groups if not (is_jmp or is_call): # Simple, next is idx + 1 if instruction.mnemonic.startswith('ret'): function.nexts[inst_idx].append("ret") instruction.cf_leaves_fn = True else: function.nexts[inst_idx].append(inst_idx + 1) continue instruction.cf_leaves_fn = False if is_jmp and not instruction.mnemonic.startswith("jmp"): if inst_idx + 1 < len(function.cache): # Add natural flow edge function.nexts[inst_idx].append(inst_idx + 1) else: # Out of function bounds, no idea what to do! function.nexts[inst_idx].append("undef") elif is_call: instruction.cf_leaves_fn = True function.nexts[inst_idx].append("call") if inst_idx + 1 < len(function.cache): function.nexts[inst_idx].append(inst_idx + 1) else: # Out of function bounds, no idea what to do! function.nexts[inst_idx].append("undef") if instruction.cs.operands[0].type == CS_OP_IMM: target = instruction.cs.operands[0].imm # Check if the target is in .text section. 
if container.is_in_section(".text", target): function.bbstarts.add(target) instruction.op_str = ".L%x" % (target) elif target in container.plt: instruction.op_str = "{}@PLT".format( container.plt[target]) else: gotent = container.is_target_gotplt(target) if gotent: found = False for relocation in container.relocations[".dyn"]: if gotent == relocation['offset']: instruction.op_str = "{}@PLT".format( relocation['name']) found = True break if not found: print("[x] Missed GOT entry!") else: print("[x] Missed call target: %x" % (target)) if is_jmp: if target in addr_to_idx: idx = addr_to_idx[target] function.nexts[inst_idx].append(idx) else: instruction.cf_leaves_fn = True function.nexts[inst_idx].append("undef") elif is_jmp: function.nexts[inst_idx].append("undef") def symbolize_switch_tables(self, container, context): rodata = container.sections.get(".rodata", None) if not rodata: return all_bases = set([x for _, y in self.pot_sw_bases.items() for x in y]) for faddr, swbases in self.pot_sw_bases.items(): fn = container.functions[faddr] for swbase in sorted(swbases, reverse=True): value = rodata.read_at(swbase, 4) if not value: continue value = (value + swbase) & 0xffffffff if not fn.is_valid_instruction(value): continue # We have a valid switch base now. swlbl = ".LC%x-.LC%x" % (value, swbase) rodata.replace(swbase, 4, swlbl) # Symbolize as long as we can for slot in range(swbase + 4, rodata.base + rodata.sz, 4): if any([x in all_bases for x in range(slot, slot + 4)]): break value = rodata.read_at(slot, 4) if not value: break value = (value + swbase) & 0xFFFFFFFF if not fn.is_valid_instruction(value): break swlbl = ".LC%x-.LC%x" % (value, swbase) rodata.replace(slot, 4, swlbl) def _adjust_target(self, container, target): # Find the nearest section sec = None for sname, sval in sorted( container.sections.items(), key=lambda x: x[1].base): if sval.base >= target: break sec = sval assert sec is not None end = sec.base # + sec.sz - 1 adjust = target - end assert adjust > 0 return end, adjust def _is_target_in_region(self, container, target): for sec, sval in container.sections.items(): if sval.base <= target < sval.base + sval.sz: return True for fn, fval in container.functions.items(): if fval.start <= target < fval.start + fval.sz: return True return False #added by JX def obtain_all_symbols(self, container): print("Trying to find all symbols") all_symbols = dict() symbol_tables = [ sec for sec in container.loader.elffile.iter_sections() if isinstance(sec, SymbolTableSection) ] for section in symbol_tables: for sym in section.iter_symbols(): if sym['st_shndx'] != 'SHN_UNDEF': all_symbols[sym['st_value']] = sym if sym['st_value'] == 0x124e0b0: print("Hmm, name found " + sym.name) print("Total numer of symbols %d" % len(all_symbols)) return all_symbols #end by JX def symbolize_mem_accesses(self, container, context): #added by JX all_symbols = self.obtain_all_symbols(container) #end by JX for _, function in container.functions.items(): for inst in function.cache: if inst.address in self.symbolized: continue mem_access, _ = inst.get_mem_access_op() if not mem_access: continue # Now we have a memory access, # check if it is rip relative. 
base = mem_access.base if base == X86_REG_RIP: value = mem_access.disp ripbase = inst.address + inst.sz target = ripbase + value is_an_import = False #addef by JX if target in container.plt: is_an_import = container.plt[target] sfx = "@PLT" #prioritize the cases where target matches a symbol location #this will ensure the correctness when we will encounter cases where # (i) target matches a symbol location and (ii) target matches the offset of a relocation elif target in all_symbols and all_symbols[target]['st_info']['type'] != 'STT_SECTION': is_an_import = container.loader.adjust_sym_name(all_symbols[target]) sfx = "" else: for rel in [x for x in container.relocations[".dyn"] if x['offset'] == target]: reloc_type = rel['type'] #special case: tls symbols which have no real memory if reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_DTPMOD64"]: is_an_import = rel['name'] sfx = "@TLSGD" break #well, special cases... what can you do ... if rel['st_value'] == 0 and rel['name'] != None: is_an_import = rel['name'] sfx = "@GOTPCREL" break res = 0 #let's try to find the symbol if reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_64"] or reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_GLOB_DAT"]: is_an_import = rel['name'] # the name here is from the symbol index; check loader.py sfx = "@GOTPCREL" break #relative relocation will have no symbol index;;; so what can we do? ... if reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_RELATIVE"]: res = rel['addend'] if res in all_symbols: is_an_import = container.loader.adjust_sym_name(all_symbols[res]) sfx = "@GOTPCREL" break #if reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_GLOB_DAT"]: # res = rel['st_value'] #end by JX if is_an_import: inst.op_str = inst.op_str.replace( hex(value), "%s%s" % (is_an_import, sfx)) else: # Check if target is contained within a known region in_region = self._is_target_in_region( container, target) if in_region: inst.op_str = inst.op_str.replace( hex(value), ".LC%x" % (target)) else: oritarget = target target, adjust = self._adjust_target( container, target) inst.op_str = inst.op_str.replace( hex(value), "%d+.LC%x" % (adjust, target)) print("[*] Adjusted: %x -- %d+.LC%x" % (inst.address, adjust, oritarget)) if container.is_in_section(".rodata", target): self.pot_sw_bases[function.start].add(target) def _handle_relocation(self, container, section, all_symbols, rel): reloc_type = rel['type'] if reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_PC32"]: swbase = None for base in sorted(self.bases): if base > rel['offset']: break swbase = base value = rel['st_value'] + rel['addend'] - (rel['offset'] - swbase) swlbl = ".LC%x-.LC%x" % (value, swbase) section.replace(rel['offset'], 4, swlbl) elif reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_64"]: value = rel['st_value'] + rel['addend'] label = ".LC%x" % value #relocation already has a name if rel['st_value'] == 0 and rel['name'] != None: label = rel['name'] if rel['addend'] != 0: label += " + 0x%x" % rel['addend'] #internal symbol #we use name from symbols elif rel['st_value'] and rel['st_value'] in all_symbols: #if not, then use "sym + offset" label = container.loader.adjust_sym_name(all_symbols[rel['st_value']]) if rel['addend'] != 0: label += " + 0x%x" % rel['addend'] section.replace(rel['offset'], 8, label) #end by JX elif reloc_type == ENUM_RELOC_TYPE_x64["R_X86_64_RELATIVE"]: value = rel['addend'] label = ".LC%x" % value if rel['addend'] in all_symbols: label = container.loader.adjust_sym_name(all_symbols[rel['addend']]) section.replace(rel['offset'], 8, label) #end by JX elif reloc_type == 
ENUM_RELOC_TYPE_x64["R_X86_64_COPY"]:
            # NOP
            pass
        else:
            print("[*] Unhandled relocation {}".format(
                describe_reloc_type(reloc_type, container.loader.elffile)))

    def symbolize_data_sections(self, container, context=None):
        #added by JX
        all_symbols = self.obtain_all_symbols(container)
        #end by JX
        # Section specific relocation
        for secname, section in container.sections.items():
            for rel in section.relocations:
                self._handle_relocation(container, section, all_symbols, rel)

        # .dyn relocations
        dyn = container.relocations[".dyn"]
        for rel in dyn:
            section = container.section_of_address(rel['offset'])
            if section:
                self._handle_relocation(container, section, all_symbols, rel)
            #else:
            #    print("[x] Couldn't find valid section {:x}".format(rel['offset']))


if __name__ == "__main__":
    from .loader import Loader
    from .analysis import register

    argp = argparse.ArgumentParser()
    argp.add_argument("bin", type=str, help="Input binary to load")
    argp.add_argument("outfile", type=str, help="Symbolized ASM output")
    argp.add_argument("--ignorepie", dest="ignorepie", action='store_true',
                      help="Ignore position-independent-executable check (use with caution)")
    argp.set_defaults(ignorepie=False)

    args = argp.parse_args()

    loader = Loader(args.bin)
    if loader.is_pie() == False and args.ignorepie == False:
        print("RetroWrite requires a position-independent executable.")
        print("It looks like %s is not position independent" % args.bin)
        sys.exit(1)

    tls_list = loader.tlslist_from_symtable()
    flist = loader.flist_from_symtab()
    loader.load_functions(flist)

    slist = loader.slist_from_symtab()
    loader.load_data_sections(slist, lambda x: x in Rewriter.DATASECTIONS)

    reloc_list = loader.reloc_list_from_symtab()
    loader.load_relocations(reloc_list)

    global_list = loader.global_data_list_from_symtab()
    loader.load_globals_from_glist(global_list)

    loader.container.attach_loader(loader)

    rw = Rewriter(loader.container, args.outfile)
    rw.symbolize()
    rw.dump()
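A toy illustration of the rip-relative arithmetic the symbolizer relies on throughout (an assumed standalone example, not part of the module above): on x86-64, a memory operand such as mov rax, [rip+0x2008] refers to the address of the next instruction plus the displacement, so the rewriter rebases the displacement into a label at that absolute target.

def symbolize_rip_operand(inst_addr: int, inst_size: int, disp: int) -> str:
    ripbase = inst_addr + inst_size   # rip points just past the instruction
    target = ripbase + disp           # absolute address being referenced
    return ".LC%x" % target           # label in the style the rewriter emits

# A mov at 0x1000 that is 7 bytes long with displacement 0x2008 -> ".LC300f"
print(symbolize_rip_operand(0x1000, 7, 0x2008))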
Relaxed simulated tempering for VLSI floorplan designs In the past two decades, the simulated annealing technique has been considered a powerful approach to many NP-hard optimization problems in VLSI design. More recently, a new Monte Carlo and optimization technique, named simulated tempering, was invented and has been successfully applied to many scientific problems, from random-field Ising models to the traveling salesman problem. It is designed to overcome a drawback of simulated annealing when the problem has a rough energy landscape with many local minima separated by high energy barriers. In this paper, we have successfully applied a version of relaxed simulated tempering to slicing floorplan design with consideration of both area and wirelength optimization. Good experimental results were obtained.
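A minimal sketch of the simulated-tempering idea referenced in the abstract (a generic illustration, not the paper's "relaxed" variant; the energy function, temperature ladder, and pseudo-prior weights are assumed inputs): the temperature index becomes a random variable, and proposed moves up or down the ladder are accepted with a Metropolis rule, so the walk can heat up to cross energy barriers and cool down to refine local minima.

import math
import random

def tempering_step(energy, temps, weights, i, x):
    """Propose moving state x from temperature level i to a neighboring level."""
    j = max(0, min(len(temps) - 1, i + random.choice((-1, 1))))
    # Acceptance ratio for changing only the temperature index:
    # pi(x, i) is proportional to exp(-E(x)/T_i + g_i).
    log_ratio = (weights[j] - weights[i]) - energy(x) * (1.0 / temps[j] - 1.0 / temps[i])
    if random.random() < math.exp(min(0.0, log_ratio)):
        return j
    return i

# Usage: alternate tempering_step with ordinary Metropolis moves of x at
# the current temperature temps[i]; samples at the lowest level approximate
# the optimization target.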
/** * Simple utility class to create a temporary deployment for testing purposes */ public class Deployer { // No static initialization - the results of PathNames.xxx may be different when invoked, // as compared to class load time. private Path _binariesDestPath = null; private Path _configDestPath = null; private Path _disksDestPath = null; private Path _logsDestPath = null; private Path _symbiontsDestPath = null; private Path _tapesDestPath = null; private Path _webDestPath = null; /** * If source is a file, copy the file to the destination (which should be a directory) * If source is a directory, copy all the entities of source to destination (which should be a directory) */ private static void copy( final File sourceFile, final File destinationFile, final String indent ) throws IOException { String sourceFileName = sourceFile.getName(); if (!sourceFileName.equals(".") && !sourceFileName.equals("..")) { System.out.println(String.format("%sCopying %s to %s", indent, sourceFile.toString(), destinationFile.toString())); if (!destinationFile.exists()) { System.out.println(String.format("%s Creating %s", indent, destinationFile.toString())); Files.createDirectories(destinationFile.toPath()); } else { if (!destinationFile.isDirectory()) { throw new RuntimeException("! " + destinationFile.toString() + " is not a directory"); } } if (sourceFile.isDirectory()) { File[] subFiles = sourceFile.listFiles(); if (subFiles != null) { for (File subFile : subFiles) { Path destSubPath = Paths.get(destinationFile.toString(), subFile.getName()); File dspf = destSubPath.toFile(); if (dspf.exists()) { delete(dspf, indent + " "); } copy(subFile, dspf, indent + " "); } } } else if (sourceFile.isFile()) { Files.deleteIfExists(destinationFile.toPath()); Files.copy(sourceFile.toPath(), destinationFile.toPath()); } } } /** * recursive path delete */ private static void delete( final File destination, final String indent ) throws IOException { if (destination != null) { System.out.println(String.format("%sDeleting %s...", indent, destination.toString())); if (destination.exists()) { if (destination.isDirectory()) { File[] subFiles = destination.listFiles(); if (subFiles != null) { for (File subFile : subFiles) { delete(subFile, indent + " "); } } } Files.deleteIfExists(destination.toPath()); } } } // ---------------------------------------------------------------------------------------------------------------------------- /** * Deploys directory content. * Can run standalone, or be invoked by other tests. * Generally, standalone won't work if for non-containerized situations, so... 
*/ public void deploy( ) throws IOException { Path binariesSourcePath = Paths.get("../resources/media/binaries"); Path configSourcePath = Paths.get("../resources/config"); Path disksSourcePath = Paths.get("../resources/media/disks"); Path tapesSourcePath = Paths.get("../resources/media/tapes"); Path webSourcePath = Paths.get("../resources/web"); _binariesDestPath = Paths.get(PathNames.BINARIES_ROOT_DIRECTORY); _configDestPath = Paths.get(PathNames.CONFIG_ROOT_DIRECTORY); _disksDestPath = Paths.get(PathNames.DISKS_ROOT_DIRECTORY); _logsDestPath = Paths.get(PathNames.LOGS_ROOT_DIRECTORY); _symbiontsDestPath = Paths.get(PathNames.SYMBIONTS_ROOT_DIRECTORY); _tapesDestPath = Paths.get(PathNames.TAPES_ROOT_DIRECTORY); _webDestPath = Paths.get(PathNames.WEB_ROOT_DIRECTORY); Files.createDirectories(_logsDestPath); copy(binariesSourcePath.toFile(), _binariesDestPath.toFile(), ""); copy(configSourcePath.toFile(), _configDestPath.toFile(), ""); copy(disksSourcePath.toFile(), _disksDestPath.toFile(), ""); copy(disksSourcePath.toFile(), _symbiontsDestPath.toFile(), ""); copy(tapesSourcePath.toFile(), _tapesDestPath.toFile(), ""); copy(webSourcePath.toFile(), _webDestPath.toFile(), ""); } /** * Removes the deployed directories * Can run standalone, or be invoked by other tests * Generally, standalone won't work if for non-containerized situations, so... */ public void remove( ) throws IOException { delete(_binariesDestPath.toFile(), ""); delete(_configDestPath.toFile(), ""); delete(_disksDestPath.toFile(), ""); delete(_logsDestPath.toFile(), ""); delete(_symbiontsDestPath.toFile(), ""); delete(_tapesDestPath.toFile(), ""); delete(_webDestPath.toFile(), ""); _binariesDestPath = null; _configDestPath = null; _disksDestPath = null; _logsDestPath = null; _symbiontsDestPath = null; _tapesDestPath = null; _webDestPath = null; } }
Long-term maintenance of gains from memory training in older adults: two 3 1/2-year follow-up studies. This study investigated long-term effects from memory training in healthy older adults, using samples from two previous studies showing maintenance of gains 6 months after training. In both studies, a multifactorial memory training program (encoding operations, attentional functions, and relaxation) was compared with other training programs. The results from both studies showed that all groups performed at the same level in the 3 1/2-year follow-up as in the 6-month follow-up. Most important, the groups receiving training in encoding operations performed at higher levels at the 3 1/2-year assessment compared with pretest. These data indicate that memory training may result in long-term effects for older adults in tasks that are congruent with the training activity.
Wrestling is full of dangerous moves. This week alone, much has been made of Seth Rollins' use of the Curb Stomp, as it appears to have gone the way of the Dodo. Moves like the piledriver aren't permitted under WWE standards. Triple H appeared on the Tim Ferriss Show this week and talked about a variety of topics, including a particular move he simply won't take in the ring. The move in question? A seemingly simple shoulder to the ring post. "I have things I don't do well in the ring. For example, they go through the top and middle turnbuckle and hit the post from the inside," Triple H said. "It's one of those mental blocks for me. I can't seem to navigate going between the two turnbuckles, I always get stuck somehow, so I never do it. People grab me in the ring and say 'take the post,' and I say no." Triple H had been touching on the topic of wrestlers needing to stick to what they know will look good in the ring. He spoke about the importance of not wasting motion. You can listen to the full podcast at this link.
package ch.deletescape.lawnchair.iconpack;

import android.content.Context;
import android.content.pm.PackageManager;
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.ColorFilter;
import android.graphics.Paint;
import android.graphics.PixelFormat;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.Drawable;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.util.DisplayMetrics;

import ch.deletescape.lawnchair.compat.LauncherActivityInfoCompat;

public class CustomIconDrawable extends Drawable {
    private final Context mContext;
    private final IconPack mIconPack;
    private final Resources mResources;
    private final Drawable mOriginalIcon;
    private Drawable mIconBack = null;
    private Drawable mIconUpon = null;
    private Bitmap mIconMask = null;
    private float mScale = 1f;

    public CustomIconDrawable(Context context, IconPack iconPack, LauncherActivityInfoCompat info)
            throws PackageManager.NameNotFoundException {
        mContext = context;
        mIconPack = iconPack;
        mResources = context.getPackageManager().getResourcesForApplication(iconPack.getPackageName());
        mOriginalIcon = info.getIcon(DisplayMetrics.DENSITY_XXXHIGH);
        if (iconPack.getIconBack() != null) {
            mIconBack = getDrawable(iconPack.getIconBack());
        }
        if (iconPack.getIconUpon() != null) {
            mIconUpon = getDrawable(iconPack.getIconUpon());
        }
        if (iconPack.getIconMask() != null) {
            mIconMask = BitmapFactory.decodeResource(mResources, getIconRes(iconPack.getIconMask()));
        }
        mScale = iconPack.getScale();
    }

    private Drawable getDrawable(String name) {
        try {
            return mResources.getDrawable(getIconRes(name));
        } catch (Resources.NotFoundException e) {
            return null;
        }
    }

    private int getIconRes(String name) {
        return mResources.getIdentifier(name, "drawable", mIconPack.getPackageName());
    }

    @Override
    public void draw(@NonNull Canvas canvas) {
        int width = canvas.getWidth(), height = canvas.getHeight();
        // draw iconBack
        if (mIconBack != null) {
            mIconBack.setBounds(0, 0, width, height);
            mIconBack.draw(canvas);
        }
        // mask the original icon to iconMask and then draw it
        Drawable maskedIcon = getMaskedIcon(width, height);
        maskedIcon.setBounds(0, 0, width, height);
        maskedIcon.draw(canvas);
        // draw iconUpon
        if (mIconUpon != null) {
            mIconUpon.setBounds(0, 0, width, height);
            mIconUpon.draw(canvas);
        }
    }

    private Drawable getMaskedIcon(int width, int height) {
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas();
        canvas.setBitmap(bitmap);
        float scaledWidth = width * mScale, scaledHeight = height * mScale;
        float horizontalPadding = (width - scaledWidth) / 2;
        float verticalPadding = (height - scaledHeight) / 2;
        // center the scaled icon: each edge is offset by its own padding
        mOriginalIcon.setBounds((int) horizontalPadding, (int) verticalPadding,
                (int) (scaledWidth + horizontalPadding),
                (int) (scaledHeight + verticalPadding));
        mOriginalIcon.draw(canvas);
        if (mIconMask != null) {
            Paint clearPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
            clearPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_OUT));
            Bitmap scaledMask = Bitmap.createScaledBitmap(mIconMask, width, height, false);
            canvas.drawBitmap(scaledMask, 0, 0, clearPaint);
        }
        return new BitmapDrawable(mContext.getResources(), bitmap);
    }

    @Override
    public void setAlpha(int i) {
    }

    @Override
    public void setColorFilter(@Nullable ColorFilter colorFilter) {
    }

    @Override
    public int getOpacity() {
        return PixelFormat.TRANSLUCENT;
    }
}
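A rough sketch (illustrative only, with assumed array shapes) of what the PorterDuff DST_OUT pass in getMaskedIcon() computes: the already-drawn icon is the destination, the scaled mask is the source, and DST_OUT keeps destination pixels only where the source is transparent, i.e. out = dst * (1 - src_alpha).

import numpy as np

def dst_out(icon_rgba: np.ndarray, mask_alpha: np.ndarray) -> np.ndarray:
    """icon_rgba: HxWx4 floats in [0, 1]; mask_alpha: HxW floats in [0, 1]."""
    keep = (1.0 - mask_alpha)[..., None]  # broadcast over the RGBA channels
    return icon_rgba * keep

icon = np.ones((2, 2, 4))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
print(dst_out(icon, mask)[0, 0])  # fully masked pixel -> [0. 0. 0. 0.]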
Privacy-Preserving Query over Encrypted Graph-Structured Data in Cloud Computing In the emerging cloud computing paradigm, data owners become increasingly motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. For the consideration of users' privacy, sensitive data have to be encrypted before outsourcing, which makes effective data utilization a very challenging task. In this paper, for the first time, we define and solve the problem of privacy-preserving query over encrypted graph-structured data in cloud computing (PPGQ), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Our work utilizes the principle of "filtering-and-verification". We prebuild a feature-based index to provide feature-related information about each encrypted data graph, and then choose the efficient inner product as the pruning tool to carry out the filtering procedure. To meet the challenge of supporting graph query without privacy breaches, we propose a secure inner product computation technique, and then improve it to achieve various privacy requirements under the known-background threat model.
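A toy sketch of the "filtering-and-verification" principle the abstract describes (a simplified plaintext illustration; the paper's actual scheme encrypts the index and performs the inner product securely). Each data graph is summarized by a binary feature vector; a candidate can contain the query graph only if it contains every query feature, which a single inner product can test.

import numpy as np

def filter_candidates(index: np.ndarray, query_features: np.ndarray) -> list:
    """index: n_graphs x n_features 0/1 matrix; query_features: 0/1 vector."""
    needed = int(query_features.sum())
    scores = index @ query_features  # inner product per data graph
    # Pruning: keep only graphs holding all query features; survivors would
    # then go through the exact (verification) subgraph test.
    return [i for i, s in enumerate(scores) if s == needed]

index = np.array([[1, 1, 0, 1],
                  [1, 0, 0, 1],
                  [1, 1, 1, 1]])
print(filter_candidates(index, np.array([1, 1, 0, 0])))  # -> [0, 2]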
Service Learning through Educational Outreach Enhances Student Learning and Engagement Service learning (SL) as a pedagogy is thought to enhance learning and create a more inclusive environment in the college classroom. SL is a unique active learning strategy because it provides students the opportunity to apply classroom content to real-world situations concomitant with post-service reflection activities. Unfortunately, most SL (and other active learning) strategies on college campuses are implemented unsystematically and are rarely assessed for impacts other than content mastery and student retention. Therefore, we tested the hypothesis that using SL in an introductory course could improve student learning as well as more qualitative characteristics such as student engagement. An educational outreach event in collaboration with a local elementary school was planned as a final project for Introduction to Neuroscience students (NEUR 101, n=45) at Susquehanna University, a primarily undergraduate liberal arts institution. Students used backward design to develop learning goals and active learning strategies prior to the event. Undergraduates provided 11 different activities during the event to educate the community about a variety of physiological concepts such as digestion, metabolism and CNS function. Information from pre- and post-service reflection assignments and attitude surveys shows that SL, in this context, facilitates content mastery and enhances student engagement, especially in first-year and first-generation college students. In addition, the educational outreach event provided much-needed support for public science literacy in the local community. Follow-up SL projects and assessments will determine whether SL pedagogy, in this context, effectively instills long-lasting characteristics of student engagement and citizenship. Funding: NSF IOS1350448/APS
<gh_stars>0 #ifndef NMOS_CONNECTION_API_H #define NMOS_CONNECTION_API_H #include "cpprest/api_router.h" #include "nmos/id.h" namespace slog { class base_gate; } // Connection API implementation // See https://github.com/AMWA-TV/nmos-device-connection-management/blob/v1.1-dev/APIs/ConnectionAPI.raml namespace nmos { struct api_version; struct node_model; struct resource; struct tai; struct type; // Connection API callbacks // a transport_file_parser validates the specified transport file type/data for the specified (IS-04/IS-05) resource/connection_resource and returns a transport_params array to be merged // or may throw std::runtime_error, which will be mapped to a 500 Internal Error status code with NMOS error "debug" information including the exception message // (the default transport file parser only supports RTP transport via the default SDP parser) typedef std::function<web::json::value(const nmos::resource&, const nmos::resource&, const utility::string_t&, const utility::string_t&, slog::base_gate&)> transport_file_parser; namespace details { // a connection_resource_patch_validator can be used to perform any final validation of the specified merged /staged value for the specified (IS-04/IS-05) resource/connection_resource // that cannot be expressed by the schemas or /constraints endpoint // it may throw web::json::json_exception, which will be mapped to a 400 Bad Request status code with NMOS error "debug" information including the exception message typedef std::function<void(const nmos::resource&, const nmos::resource&, const web::json::value&, slog::base_gate&)> connection_resource_patch_validator; } // Connection API factory functions web::http::experimental::listener::api_router make_connection_api(nmos::node_model& model, transport_file_parser parse_transport_file, details::connection_resource_patch_validator validate_merged, slog::base_gate& gate); inline web::http::experimental::listener::api_router make_connection_api(nmos::node_model& model, transport_file_parser parse_transport_file, slog::base_gate& gate) { return make_connection_api(model, parse_transport_file, {}, gate); } web::http::experimental::listener::api_router make_connection_api(nmos::node_model& model, slog::base_gate& gate); // Connection API implementation details shared with the Node API /receivers/{receiverId}/target endpoint namespace details { void handle_connection_resource_patch(web::http::http_response res, nmos::node_model& model, const nmos::api_version& version, const std::pair<nmos::id, nmos::type>& id_type, const web::json::value& patch, transport_file_parser parse_transport_file, connection_resource_patch_validator validate_merged, slog::base_gate& gate); } // Functions for interaction between the Connection API implementation and the connection activation thread // Activate an IS-05 sender or receiver by transitioning the 'staged' settings into the 'active' resource void set_connection_resource_active(nmos::resource& connection_resource, std::function<void(web::json::value&)> resolve_auto, const nmos::tai& activation_time); // Clear any pending activation of an IS-05 sender or receiver // (This function should not be called after nmos::set_connection_resource_active.) void set_connection_resource_not_pending(nmos::resource& connection_resource); // Update the IS-04 sender or receiver after the active connection is changed in any way // (This function should be called after nmos::set_connection_resource_active.) 
void set_resource_subscription(nmos::resource& node_resource, bool active, const nmos::id& connected_id, const nmos::tai& activation_time); // Helper functions for the Connection API callbacks // Validate and parse the specified transport file for the specified receiver // (this is the default transport file parser) web::json::value parse_rtp_transport_file(const nmos::resource& receiver, const nmos::resource& connection_receiver, const utility::string_t& transport_file_type, const utility::string_t& transport_file_data, slog::base_gate& gate); // "On activation all instances of "auto" should be resolved into the actual values that will be used" // See https://github.com/AMWA-TV/nmos-device-connection-management/blob/v1.1-dev/APIs/ConnectionAPI.raml#L280-L281 // and https://github.com/AMWA-TV/nmos-device-connection-management/blob/v1.1-dev/APIs/schemas/sender_transport_params_rtp.json // and https://github.com/AMWA-TV/nmos-device-connection-management/blob/v1.1-dev/APIs/schemas/receiver_transport_params_rtp.json // "In many cases this is a simple operation, and the behaviour is very clearly defined in the relevant transport parameter schemas. // For example a port number may be offset from the RTP port number by a pre-determined value. The specification makes suggestions // of a sensible default value for "auto" to resolve to, but the Sender or Receiver may choose any value permitted by the schema // and constraints." // This function implements those sensible defaults for the RTP transport type. // "In some cases the behaviour is more complex, and may be determined by the vendor." // See https://github.com/AMWA-TV/nmos-device-connection-management/blob/v1.1-dev/docs/2.2.%20APIs%20-%20Server%20Side%20Implementation.md#use-of-auto // This function therefore does not select a value for e.g. sender "source_ip" or receiver "interface_ip". void resolve_rtp_auto(const nmos::type& type, web::json::value& transport_params, int auto_rtp_port = 5004); namespace details { template <typename AutoFun> inline void resolve_auto(web::json::value& params, const utility::string_t& key, AutoFun auto_fun) { if (!params.has_field(key)) return; auto& param = params.at(key); if (param.is_string() && U("auto") == param.as_string()) { param = auto_fun(); } } } } #endif
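A small illustration of the "auto" resolution behavior the header's comments describe for resolve_rtp_auto (a language-neutral Python sketch with assumed parameter names, not the C++ implementation): on activation, any transport parameter still set to the string "auto" is replaced by a concrete default, such as the RTP port defaulting to 5004.

AUTO_RTP_PORT = 5004  # default suggested for RTP ports, per the comments above

def resolve_auto(params: dict, key: str, auto_value):
    """Replace params[key] with auto_value only if it is the string "auto"."""
    if params.get(key) == "auto":
        params[key] = auto_value() if callable(auto_value) else auto_value

staged = {"destination_port": "auto", "rtp_enabled": True}
resolve_auto(staged, "destination_port", AUTO_RTP_PORT)
print(staged)  # {'destination_port': 5004, 'rtp_enabled': True}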
<reponame>zhouhaifeng/vpe /* * pim_bfd.c: PIM BFD handling routines * * Copyright (C) 2017 Cumulus Networks, Inc. * <NAME> * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; see the file COPYING; if not, write to the * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, * MA 02110-1301 USA */ #include <zebra.h> #include "lib/json.h" #include "command.h" #include "vty.h" #include "zclient.h" #include "pim_instance.h" #include "pim_neighbor.h" #include "pim_cmd.h" #include "pim_vty.h" #include "pim_iface.h" #include "pim_bfd.h" #include "bfd.h" #include "pimd.h" #include "pim_zebra.h" /* * pim_bfd_write_config - Write the interface BFD configuration. */ void pim_bfd_write_config(struct vty *vty, struct interface *ifp) { struct pim_interface *pim_ifp = ifp->info; if (!pim_ifp || !pim_ifp->bfd_config.enabled) return; #if HAVE_BFDD == 0 if (pim_ifp->bfd_config.detection_multiplier != BFD_DEF_DETECT_MULT || pim_ifp->bfd_config.min_rx != BFD_DEF_MIN_RX || pim_ifp->bfd_config.min_tx != BFD_DEF_MIN_TX) vty_out(vty, " ip pim bfd %d %d %d\n", pim_ifp->bfd_config.detection_multiplier, pim_ifp->bfd_config.min_rx, pim_ifp->bfd_config.min_tx); else #endif /* ! HAVE_BFDD */ vty_out(vty, " ip pim bfd\n"); if (pim_ifp->bfd_config.profile) vty_out(vty, " ip pim bfd profile %s\n", pim_ifp->bfd_config.profile); } static void pim_neighbor_bfd_cb(struct bfd_session_params *bsp, const struct bfd_session_status *bss, void *arg) { struct pim_neighbor *nbr = arg; if (PIM_DEBUG_PIM_TRACE) { zlog_debug("%s: status %s old_status %s", __func__, bfd_get_status_str(bss->state), bfd_get_status_str(bss->previous_state)); } if (bss->state == BFD_STATUS_DOWN && bss->previous_state == BFD_STATUS_UP) pim_neighbor_delete(nbr->interface, nbr, "BFD Session Expired"); } /* * pim_bfd_info_nbr_create - Create/update BFD information for a neighbor. */ void pim_bfd_info_nbr_create(struct pim_interface *pim_ifp, struct pim_neighbor *neigh) { /* Check if Pim Interface BFD is enabled */ if (!pim_ifp || !pim_ifp->bfd_config.enabled) return; if (neigh->bfd_session == NULL) neigh->bfd_session = bfd_sess_new(pim_neighbor_bfd_cb, neigh); bfd_sess_set_timers( neigh->bfd_session, pim_ifp->bfd_config.detection_multiplier, pim_ifp->bfd_config.min_rx, pim_ifp->bfd_config.min_tx); bfd_sess_set_ipv4_addrs(neigh->bfd_session, NULL, &neigh->source_addr); bfd_sess_set_interface(neigh->bfd_session, neigh->interface->name); bfd_sess_set_vrf(neigh->bfd_session, neigh->interface->vrf_id); bfd_sess_set_profile(neigh->bfd_session, pim_ifp->bfd_config.profile); bfd_sess_install(neigh->bfd_session); } /* * pim_bfd_reg_dereg_all_nbr - Register/Deregister all neighbors associated * with a interface with BFD through * zebra for starting/stopping the monitoring of * the neighbor rechahability. 
*/ void pim_bfd_reg_dereg_all_nbr(struct interface *ifp) { struct pim_interface *pim_ifp = NULL; struct listnode *node = NULL; struct pim_neighbor *neigh = NULL; pim_ifp = ifp->info; if (!pim_ifp) return; for (ALL_LIST_ELEMENTS_RO(pim_ifp->pim_neighbor_list, node, neigh)) { if (pim_ifp->bfd_config.enabled) pim_bfd_info_nbr_create(pim_ifp, neigh); else bfd_sess_free(&neigh->bfd_session); } } void pim_bfd_init(void) { bfd_protocol_integration_init(pim_zebra_zclient_get(), router->master); }
package org.irods.jargon.datautils.synchproperties; import java.util.ArrayList; import java.util.List; import java.util.Properties; import org.irods.jargon.core.connection.IRODSAccount; import org.irods.jargon.core.exception.DuplicateDataException; import org.irods.jargon.core.exception.JargonException; import org.irods.jargon.core.pub.CollectionAO; import org.irods.jargon.core.pub.EnvironmentalInfoAO; import org.irods.jargon.core.pub.IRODSAccessObjectFactory; import org.irods.jargon.core.pub.domain.AvuData; import org.irods.jargon.core.pub.io.IRODSFile; import org.irods.jargon.core.pub.io.IRODSFileFactory; import org.irods.jargon.core.query.AVUQueryElement; import org.irods.jargon.core.query.AVUQueryElement.AVUQueryPart; import org.irods.jargon.core.query.MetaDataAndDomainData; import org.irods.jargon.core.query.QueryConditionOperators; import org.irods.jargon.testutils.TestingPropertiesHelper; import org.junit.Assert; import org.junit.BeforeClass; import org.junit.Test; import org.mockito.Matchers; import org.mockito.Mockito; public class SynchPropertiesServiceImplTest { private static Properties testingProperties = new Properties(); private static TestingPropertiesHelper testingPropertiesHelper = new TestingPropertiesHelper(); @BeforeClass public static void setUpBeforeClass() throws Exception { TestingPropertiesHelper testingPropertiesLoader = new TestingPropertiesHelper(); testingProperties = testingPropertiesLoader.getTestProperties(); } @Test public void testGetUserSynchTargetForUserAndAbsolutePath() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 949493049304L; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); SynchPropertiesServiceImpl 
synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); UserSynchTarget userSynchTarget = synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); Assert.assertNotNull("null userSynchTarget returned", userSynchTarget); Assert.assertEquals("invalid user", testUserName, userSynchTarget.getUserName()); Assert.assertEquals("invalid device", testDeviceName, userSynchTarget.getDeviceName()); Assert.assertEquals("invalid irods path", testIrodsPath, userSynchTarget.getIrodsSynchRootAbsolutePath()); Assert.assertEquals("invalid local path", expectedLocalPath, userSynchTarget.getLocalSynchRootAbsolutePath()); Assert.assertEquals("invalid local timestamp", expectedLocalTimestamp, userSynchTarget.getLastLocalSynchTimestamp()); Assert.assertEquals("invalid irods timestamp", expectedIrodsTimestamp, userSynchTarget.getLastIRODSSynchTimestamp()); } @Test(expected = JargonException.class) public void testGetUserSynchTargetNoAccount() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = null; IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); } @Test(expected = JargonException.class) public void testGetUserSynchTargetNoAccessObjectFactory() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = null; SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); } @Test(expected = JargonException.class) public void testGetUserSynchTargetForUserAndAbsolutePathMultipleResults() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 949493049304L; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); 
Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); } @Test(expected = JargonException.class) public void testGetUserSynchTargetForUserAndAbsolutePathNonNumericIrodsTimestamp() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append("1121212xx"); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), 
SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); } @Test(expected = JargonException.class) public void testGetUserSynchTargetForUserAndAbsolutePathNonNumericLocalTimestamp() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append("1121212"); anticipatedAvuValue.append("~"); anticipatedAvuValue.append("484848d"); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.getUserSynchTargetForUserAndAbsolutePath(testUserName, testDeviceName, testIrodsPath); } @Test(expected = DuplicateDataException.class) public void testAddUserSynchTargetWhenAlreadyExists() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 949493049304L; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = 
Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.addSynchDeviceForUserAndIrodsAbsolutePath(testUserName, testDeviceName, testIrodsPath, expectedLocalPath); } @Test public void testAddUserSynchTarget() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 0L; long expectedLocalTimestamp = 0L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); // mock out lookup of file, which will exist 
here IRODSFileFactory irodsFileFactory = Mockito.mock(IRODSFileFactory.class); Mockito.when(irodsAccessObjectFactory.getIRODSFileFactory(irodsAccount)).thenReturn(irodsFileFactory); IRODSFile irodsFile = Mockito.mock(IRODSFile.class); Mockito.when(irodsFile.exists()).thenReturn(true); Mockito.when(irodsFileFactory.instanceIRODSFile(testIrodsPath)).thenReturn(irodsFile); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.addSynchDeviceForUserAndIrodsAbsolutePath(testUserName, testDeviceName, testIrodsPath, expectedLocalPath); AvuData expectedAvuData = AvuData.instance(testUserName + ":" + testDeviceName, 0 + "~" + 0 + "~" + expectedLocalPath, SynchPropertiesService.USER_SYNCH_DIR_TAG); Mockito.verify(collectionAO).addAVUMetadata(testIrodsPath, expectedAvuData); } @Test public void testSynchTimestamps() throws Exception { long expectedIrodsTimestamp = 949493049304L; IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); EnvironmentalInfoAO environmentalInfoAO = Mockito.mock(EnvironmentalInfoAO.class); Mockito.when(irodsAccessObjectFactory.getEnvironmentalInfoAO(irodsAccount)).thenReturn(environmentalInfoAO); Mockito.when(environmentalInfoAO.getIRODSServerCurrentTime()).thenReturn(expectedIrodsTimestamp); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); SynchTimestamps synchTimestamps = synchPropertiesService.getSynchTimestamps(); Assert.assertNotNull("null synchTimestamps returned", synchTimestamps); Assert.assertEquals("invalid irods timestamp", expectedIrodsTimestamp, synchTimestamps.getIrodsSynchTimestamp()); } @Test public void testUpdateTimestampsToCurrent() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 949493049304L; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.EQUAL, userDevAttrib.toString()); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); 
anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQueryForCollection(avuQuery, testIrodsPath)) .thenReturn(queryResults); EnvironmentalInfoAO environmentalInfoAO = Mockito.mock(EnvironmentalInfoAO.class); Mockito.when(irodsAccessObjectFactory.getEnvironmentalInfoAO(irodsAccount)).thenReturn(environmentalInfoAO); Mockito.when(environmentalInfoAO.getIRODSServerCurrentTime()).thenReturn(expectedIrodsTimestamp); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); synchPropertiesService.updateTimestampsToCurrent(testUserName, testDeviceName, testIrodsPath); Mockito.verify(collectionAO).modifyAvuValueBasedOnGivenAttributeAndUnit(Matchers.eq(testIrodsPath), Matchers.any(AvuData.class)); } @Test public void testGetUserSynchTargets() throws Exception { String testUserName = "testUser"; String testDeviceName = "testDevice"; String testIrodsPath = "/path/to/irods"; long expectedIrodsTimestamp = 949493049304L; long expectedLocalTimestamp = 8483483948394L; String expectedLocalPath = "/a/local/path"; StringBuilder userDevAttrib = new StringBuilder(); userDevAttrib.append(testUserName); userDevAttrib.append(":"); userDevAttrib.append(testDeviceName); IRODSAccount irodsAccount = testingPropertiesHelper.buildIRODSAccountFromTestProperties(testingProperties); IRODSAccessObjectFactory irodsAccessObjectFactory = Mockito.mock(IRODSAccessObjectFactory.class); CollectionAO collectionAO = Mockito.mock(CollectionAO.class); Mockito.when(irodsAccessObjectFactory.getCollectionAO(irodsAccount)).thenReturn(collectionAO); // build expected query List<AVUQueryElement> avuQuery = new ArrayList<AVUQueryElement>(); AVUQueryElement avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.UNITS, QueryConditionOperators.EQUAL, SynchPropertiesService.USER_SYNCH_DIR_TAG); avuQuery.add(avuQueryElement); avuQueryElement = AVUQueryElement.instanceForValueQuery(AVUQueryPart.ATTRIBUTE, QueryConditionOperators.LIKE, testUserName + ":%"); avuQuery.add(avuQueryElement); StringBuilder anticipatedAvuValue = new StringBuilder(); anticipatedAvuValue.append(expectedIrodsTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalTimestamp); anticipatedAvuValue.append("~"); anticipatedAvuValue.append(expectedLocalPath); List<MetaDataAndDomainData> queryResults = new ArrayList<MetaDataAndDomainData>(); MetaDataAndDomainData testResult = MetaDataAndDomainData.instance( MetaDataAndDomainData.MetadataDomain.COLLECTION, "1", testIrodsPath, 1, userDevAttrib.toString(), anticipatedAvuValue.toString(), SynchPropertiesService.USER_SYNCH_DIR_TAG); queryResults.add(testResult); Mockito.when(collectionAO.findMetadataValuesByMetadataQuery(avuQuery)).thenReturn(queryResults); SynchPropertiesServiceImpl synchPropertiesService = new SynchPropertiesServiceImpl(); synchPropertiesService.setIrodsAccessObjectFactory(irodsAccessObjectFactory); synchPropertiesService.setIrodsAccount(irodsAccount); List<UserSynchTarget> userSynchTargets 
= synchPropertiesService.getUserSynchTargets(testUserName); Assert.assertNotNull("null userSynchTarget returned", userSynchTargets); Assert.assertEquals("should be one synch target", 1, userSynchTargets.size()); UserSynchTarget userSynchTarget = userSynchTargets.get(0); Assert.assertEquals("invalid user", testUserName, userSynchTarget.getUserName()); Assert.assertEquals("invalid device", testDeviceName, userSynchTarget.getDeviceName()); Assert.assertEquals("invalid irods path", testIrodsPath, userSynchTarget.getIrodsSynchRootAbsolutePath()); Assert.assertEquals("invalid local path", expectedLocalPath, userSynchTarget.getLocalSynchRootAbsolutePath()); Assert.assertEquals("invalid local timestamp", expectedLocalTimestamp, userSynchTarget.getLastLocalSynchTimestamp()); Assert.assertEquals("invalid irods timestamp", expectedIrodsTimestamp, userSynchTarget.getLastIRODSSynchTimestamp()); } }
Long Horizon Inflation Forecasts and the Measurement of the Ex Ante Long-Term Real Interest Rate This paper examines three empirical measures of the ex ante 10-year real interest rate: inflation-indexed government bond yields and two Fisher-hypothesis proxies based on survey inflation expectations and a shifting-endpoint econometric forecasting model. Consistency between the alternative estimates provides some confidence that we may rely on the shifting-endpoint forecast model to substantially expand the otherwise short sample periods. Stylized facts about the behavior of long-term real interest rates in the U.S., U.K. and Germany show some important similarities and differences over the sample period 1960-2009. The data from the expanded sample can be used in future research to examine the effects and the determinants of the long-term real interest rate more closely.
Peak-Power-Demand Limitation through Independent Consumer Coordination The allocation of resources to competing entities is a significant problem in various application areas. There are many examples of situations in which it is necessary to impose a global constraint limiting the total quantities of resources available to individual consumers while there is no global objective to govern the distribution. The problem of limiting peak demand for power among loosely related buildings, such as those on a college campus, provides an example application. The individual objective performance of loosely coupled independent buildings can be improved if resources can be exchanged, but such exchange becomes difficult if the resource exchanges are required to balance. Since the buildings are independent, the exchange of resources may be required to occur without compromising this independence. A mechanism is presented to support resource exchanges among independent users while assuring that the average resource exchanged remains at zero and the autonomy of the individual users is retained. The presented mechanism uses a simplified form of barter to eliminate the need for gaming in certain cases.
# core_modules/instances_generator/multi_pdf_generators.py
# -*- coding: utf-8 -*-
"""
Created on Fri Sep 4 13:14:33 2020

@author: <NAME>
"""
# #%%
import os
import sys
import numpy as np
import pandas as pd
import multiprocessing
from multiprocessing import Pool
from math import ceil
from datetime import datetime, timedelta
import json
import scipy.stats as st
import utils.support as sup
import analyzers.sim_evaluator as sim
import matplotlib.pyplot as plt
import warnings
from tqdm import tqdm
import time
import traceback


##%%
class MultiPDFGenerator():
    """
    This class evaluates the inter-arrival times
    """

    def __init__(self, ia_times, ia_valdn, parms):
        """constructor"""
        self.ia_times = ia_times
        self.ia_valdn = ia_valdn
        self.parms = parms
        self.model_metadata = dict()
        self._load_model()

    # @safe_exec
    def _load_model(self) -> None:
        filename = os.path.join(self.parms['ia_gen_path'],
                                self.parms['file'].split('.')[0] + '_mpdf.json')
        if os.path.exists(filename) and not self.parms['update_mpdf_gen']:
            with open(filename) as file:
                self.model = json.load(file)
        elif os.path.exists(filename) and self.parms['update_mpdf_gen']:
            with open(filename) as file:
                self.model = json.load(file)
            self._create_model(True)
        elif not os.path.exists(filename):
            self._create_model(False)
        # self._generate_traces(num_instances, start_time)
        # self.times['caseid'] = self.times.index + 1
        # self.times['caseid'] = self.times['caseid'].astype(str)
        # self.times['caseid'] = 'Case' + self.times['caseid']
        # return self.times

    def _create_model(self, compare):
        # hours = [8]
        hours = [1, 2, 4, 8, 12]
        args = [(w, self.ia_times, self.ia_valdn, self.parms) for w in hours]
        reps = len(args)

        def pbar_async(p, msg):
            pbar = tqdm(total=reps, desc=msg)
            processed = 0
            while not p.ready():
                cprocesed = (reps - p._number_left)
                if processed < cprocesed:
                    increment = cprocesed - processed
                    pbar.update(n=increment)
                    processed = cprocesed
                time.sleep(1)
            pbar.update(n=(reps - processed))
            p.wait()
            pbar.close()

        cpu_count = multiprocessing.cpu_count()
        w_count = reps if reps <= cpu_count else cpu_count
        pool = Pool(processes=w_count)
        # Simulate
        p = pool.map_async(self.create_evaluate_model, args)
        pbar_async(p, 'evaluating models:')
        pool.close()
        # Save results
        element = min(p.get(), key=lambda x: x['loss'])
        metadata_file = os.path.join(
            self.parms['ia_gen_path'],
            self.parms['file'].split('.')[0] + '_mpdf_meta.json')
        # compare with existing model
        save = True
        if compare:
            # Loading of parameters from existing model
            if os.path.exists(metadata_file):
                with open(metadata_file) as file:
                    data = json.load(file)
                    data = {k: v for k, v in data.items()}
                if data['loss'] < element['loss']:
                    save = False
                    print('dont save')
        if save:
            self.model = element['model']
            sup.create_json(self.model, os.path.join(
                self.parms['ia_gen_path'],
                self.parms['file'].split('.')[0] + '_mpdf.json'))
            # best structure mining parameters
            self.model_metadata['window'] = element['model']['window']
            self.model_metadata['loss'] = element['loss']
            self.model_metadata['generated_at'] = (
                datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
            sup.create_json(self.model_metadata, metadata_file)

    @staticmethod
    def create_evaluate_model(args):

        def dist_best(data, window):
            """
            Finds the best probability distribution for a given data series
            """
            # Create a data series from the given list
            # data = pd.Series(self.data_serie)
            # plt.hist(data, bins=self.bins, density=True, range=self.window)
            # plt.show()
            # Get histogram of original data
            hist, bin_edges = np.histogram(data, bins='auto', range=window)
            bin_edges = (bin_edges + np.roll(bin_edges, -1))[:-1] / 2.0
            # Distributions to check
            distributions = [st.norm, st.expon, st.uniform, st.triang,
                             st.lognorm, st.gamma]
            # Best holders
            best_distribution = st.norm
            best_sse = np.inf
            best_loc = 0
            best_scale = 0
            best_args = 0
            # Estimate distribution parameters from data
            for distribution in distributions:
                # Try to fit the distribution
                try:
                    # Ignore warnings from data that can't be fit
                    with warnings.catch_warnings():
                        warnings.filterwarnings('ignore')
                        # fit dist to data
                        params = distribution.fit(data)
                        # Separate parts of parameters
                        arg = params[:-2]
                        loc = params[-2]
                        scale = params[-1]
                        # Calculate fitted PDF and error with fit in distribution
                        pdf = distribution.pdf(bin_edges, loc=loc,
                                               scale=scale, *arg)
                        sse = np.sum(np.power(hist - pdf, 2.0))
                        # identify if this distribution is better
                        if best_sse > sse > 0:
                            best_distribution = distribution
                            best_sse = sse
                            best_loc = loc
                            best_scale = scale
                            best_args = arg
                except Exception:
                    pass
            return {'dist': best_distribution.name, 'loc': best_loc,
                    'scale': best_scale, 'args': best_args}

        def generate_traces(model, num_instances, start_time):
            dobj = {'norm': st.norm, 'expon': st.expon, 'uniform': st.uniform,
                    'triang': st.triang, 'lognorm': st.lognorm,
                    'gamma': st.gamma}
            timestamp = datetime.strptime(start_time,
                                          "%Y-%m-%dT%H:%M:%S.%f+00:00")
            times = list()
            # clock = timestamp.floor(freq ='H')
            clock = (timestamp.replace(microsecond=0, second=0, minute=0)
                     - timedelta(hours=1))
            i = 0

            def add_ts(timestamp, dname):
                times.append({'dname': dname, 'timestamp': timestamp})
                return times

            # print(clock)
            while i < num_instances:
                # print('Clock:', clock)
                try:
                    window = str(model['daily_windows'][str(clock.hour)])
                    day = str(clock.weekday())
                    dist = model['distribs'][window][day]
                except KeyError:
                    dist = None
                if dist is not None:
                    missing = min((num_instances - i), dist['num'])
                    if dist['dist'] in ['norm', 'expon', 'uniform']:
                        # TODO: Check parameters
                        gen_inter = dobj[dist['dist']].rvs(loc=dist['loc'],
                                                           scale=dist['scale'],
                                                           size=missing)
                    elif dist['dist'] == 'lognorm':
                        m = dist['mean']
                        v = dist['var']
                        phi = np.sqrt(v + m**2)
                        mu = np.log(m**2 / phi)
                        sigma = np.sqrt(np.log(phi**2 / m**2))
                        sigma = sigma if sigma > 0.0 else 0.000001
                        gen_inter = dobj[dist['dist']].rvs(sigma,
                                                           scale=np.exp(mu),
                                                           size=missing)
                    elif dist['dist'] in ['gamma', 'triang']:
                        gen_inter = dobj[dist['dist']].rvs(dist['args'],
                                                           loc=dist['loc'],
                                                           scale=dist['scale'],
                                                           size=missing)
                    else:
                        clock += timedelta(seconds=3600 * model['window'])
                        print('Not implemented: ', dist['dist'])
                        # no sampler for this distribution: skip the window
                        # (otherwise gen_inter below would be undefined)
                        continue
                    # TODO: check the generated negative values
                    timestamp = clock
                    neg = 0
                    for inter in gen_inter:
                        if inter > 0:
                            timestamp += timedelta(seconds=inter)
                            if timestamp < clock + timedelta(
                                    seconds=3600 * model['window']):
                                add_ts(timestamp, dist['dist'])
                            else:
                                neg += 1
                        else:
                            neg += 1
                    i += len(gen_inter) - neg
                    # print(neg)
                    # print(i)
                # TODO: Check that the clock has not been skipped
                try:
                    clock += timedelta(seconds=3600 * model['window'])
                except Exception:
                    print(clock)
                    print(model['window'])
                    print(3600 * model['window'])
                    sys.exit(1)
            # pd.DataFrame(times).to_csv('times.csv')
            return pd.DataFrame(times)

        def create_model(window, ia_times, ia_valdn, parms):
            try:
                hist_range = [0, int((window * 3600))]
                day_hour = lambda x: x['timestamp'].hour
                ia_times['hour'] = ia_times.apply(day_hour, axis=1)
                date = lambda x: x['timestamp'].date()
                ia_times['date'] = ia_times.apply(date, axis=1)
                # create time windows
                i = 0
                daily_windows = dict()
                for x in range(24):
                    if x % window == 0:
                        i += 1
                    daily_windows[x] = i
                ia_times = ia_times.merge(
                    pd.DataFrame.from_dict(daily_windows,
                                           orient='index').rename_axis('hour'),
                    on='hour', how='left').rename(columns={0: 'window'})
                inter_arrival = list()
                for key, group in ia_times.groupby(['window', 'date',
                                                    'weekday']):
                    w_df = group.copy()
                    w_df = w_df.reset_index()
                    prev_time = w_df.timestamp.min().floor(freq='H')
                    for i, item in w_df.iterrows():
                        inter_arrival.append(
                            {'window': key[0],
                             'weekday': item.weekday,
                             'intertime': (item.timestamp
                                           - prev_time).total_seconds(),
                             'date': item.date})
                        prev_time = item.timestamp
                distribs = dict()
                for key, group in pd.DataFrame(inter_arrival).groupby(
                        ['window', 'weekday']):
                    intertime = group.intertime
                    if len(intertime) > 2:
                        intertime = intertime[intertime.between(
                            intertime.quantile(.15), intertime.quantile(.85))]
                    distrib = dist_best(intertime, hist_range)
                    # TODO: figure out why it works with half of the cases???
                    number = group.groupby('date').intertime.count()
                    if len(number) > 2:
                        number = number[number.between(
                            number.quantile(.15), number.quantile(.85))]
                    # distrib['num'] = int(number.median()/2)
                    distrib['num'] = ceil(number.median() / 2)
                    # distrib['num'] = int(number.median())
                    if distrib['dist'] == 'lognorm':
                        distrib['mean'] = np.mean(group.intertime)
                        distrib['var'] = np.var(group.intertime)
                    # accumulate per-weekday entries (assigning a fresh dict
                    # here would keep only the last weekday of each window)
                    distribs.setdefault(str(key[0]),
                                        dict())[str(key[1])] = distrib
                model = {'window': window,
                         'daily_windows': {str(k): v
                                           for k, v in daily_windows.items()},
                         'distribs': distribs}
                # validation
                # modify number of instances in the model
                num_inst = len(ia_valdn.caseid.unique())
                # get minimum date
                start_time = (ia_valdn
                              .timestamp
                              .min().strftime("%Y-%m-%dT%H:%M:%S.%f+00:00"))
                times = generate_traces(model, num_inst, start_time)
                # ia_valdn = ia_valdn[['caseid', 'timestamp']]
                # times = times[['caseid', 'timestamp']]
                evaluation = sim.SimilarityEvaluator(ia_valdn, times, parms,
                                                     0, dtype='serie')
                evaluation.measure_distance('hour_emd')
                return {'model': model,
                        'loss': evaluation.similarity['sim_val']}
            except Exception:
                traceback.print_exc()
                return {'model': [], 'loss': 1}

        return create_model(*args)

    def generate(self, num_instances, start_time):
        dobj = {'norm': st.norm, 'expon': st.expon, 'uniform': st.uniform,
                'triang': st.triang, 'lognorm': st.lognorm, 'gamma': st.gamma}
        timestamp = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%S.%f+00:00")
        times = list()
        # clock = timestamp.floor(freq ='H')
        clock = (timestamp.replace(microsecond=0, second=0, minute=0)
                 - timedelta(hours=1))
        i = 0

        def add_ts(timestamp, dname):
            times.append({'dname': dname, 'timestamp': timestamp})
            return times

        # print(clock)
        while i < num_instances:
            # print('Clock:', clock)
            try:
                window = str(self.model['daily_windows'][str(clock.hour)])
                day = str(clock.weekday())
                dist = self.model['distribs'][window][day]
            except KeyError:
                dist = None
            if dist is not None:
                missing = min((num_instances - i), dist['num'])
                if dist['dist'] in ['norm', 'expon', 'uniform']:
                    # TODO: Check parameters
                    gen_inter = dobj[dist['dist']].rvs(loc=dist['loc'],
                                                       scale=dist['scale'],
                                                       size=missing)
                elif dist['dist'] == 'lognorm':
                    m = dist['mean']
                    v = dist['var']
                    phi = np.sqrt(v + m**2)
                    mu = np.log(m**2 / phi)
                    sigma = np.sqrt(np.log(phi**2 / m**2))
                    sigma = sigma if sigma > 0.0 else 0.000001
                    gen_inter = dobj[dist['dist']].rvs(sigma,
                                                       scale=np.exp(mu),
                                                       size=missing)
                elif dist['dist'] in ['gamma', 'triang']:
                    gen_inter = dobj[dist['dist']].rvs(dist['args'],
                                                       loc=dist['loc'],
                                                       scale=dist['scale'],
                                                       size=missing)
                else:
                    clock += timedelta(seconds=3600 * self.model['window'])
                    print('Not implemented: ', dist['dist'])
                    # no sampler for this distribution: skip the window
                    continue
                # TODO: check the generated negative values
                timestamp = clock
                neg = 0
                for inter in gen_inter:
                    if inter > 0:
                        timestamp += timedelta(seconds=inter)
                        if timestamp < clock + timedelta(
                                seconds=3600 * self.model['window']):
                            add_ts(timestamp, dist['dist'])
                        else:
                            neg += 1
                    else:
                        neg += 1
                i += len(gen_inter) - neg
            # TODO: Check that the clock has not been skipped
            clock += timedelta(seconds=3600 * self.model['window'])
        # pd.DataFrame(times).to_csv('times.csv')
        self.times = pd.DataFrame(times)
        self.times['caseid'] = self.times.index + 1
        self.times['caseid'] = self.times['caseid'].astype(str)
        self.times['caseid'] = 'Case' + self.times['caseid']
        return self.times

    @staticmethod
    def _graph_timeline(log) -> None:
        time_series = log.copy()[['caseid', 'timestamp']]
        time_series['occ'] = 1
        time_series.set_index('timestamp', inplace=True)
        time_series.occ.rolling('3h').sum().plot(figsize=(30, 10),
                                                 linewidth=5, fontsize=10)
        plt.xlabel('Days', fontsize=20)
        print(time_series)
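A minimal driver for the class above, as a usage sketch: the CSV name and column layout are assumptions, and parms may need additional keys for sim.SimilarityEvaluator beyond the three read in _load_model.

# Hypothetical usage sketch; file name, columns and parms keys are assumptions
import pandas as pd

# assumed columns: caseid, timestamp
log = pd.read_csv('inter_arrivals.csv', parse_dates=['timestamp'])
log['weekday'] = log.timestamp.dt.weekday           # create_model groups by 'weekday'
cut = int(len(log) * 0.8)
ia_times, ia_valdn = log.iloc[:cut].copy(), log.iloc[cut:].copy()

parms = {'ia_gen_path': 'output/ia_gen',            # where *_mpdf.json is cached
         'file': 'event_log.csv',                   # base name of the model files
         'update_mpdf_gen': False}                  # reuse a cached model if present

gen = MultiPDFGenerator(ia_times, ia_valdn, parms)  # fits (or loads) the model
times = gen.generate(num_instances=100,
                     start_time='2020-09-04T09:00:00.000000+00:00')
print(times.head())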
package com.spring.sample.dao; import java.util.List; import org.hibernate.HibernateException; import org.hibernate.Query; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Repository; import com.spring.sample.model.User; import com.spring.sample.util.Utils; @Repository("userDAO") public class UserDAO { @Autowired private SessionFactory sessionFactory; @SuppressWarnings("unchecked") public List<User> findAll(int startIndex, int count) { Session session = sessionFactory.openSession(); try { Query query = session.createQuery("from User"); query.setFirstResult(startIndex); if (count > 0) { query.setMaxResults(count); } return query.list(); } finally { session.close(); } } public User findById(int id) { Session session = sessionFactory.openSession(); try { return (User) session.get(User.class, id); } finally { session.close(); } } public User findByUserNameAndPassword(String userName, String password) { Session session = sessionFactory.openSession(); try { String hql = "from User u WHERE u.userName = :username and u.password = :password"; Query query = session.createQuery(hql); query.setParameter("username",userName); query.setParameter("password", password); List results = query.list(); if (results != null && !results.isEmpty() && results.size() > 0) { return (User) results.get(0); } } catch (Exception e) { e.printStackTrace(); } finally { session.close(); } return null; } public User findByUserName(String userName) { Session session = sessionFactory.openSession(); try { String hql = "from User u WHERE u.userName = :username"; Query query = session.createQuery(hql); query.setParameter("username",userName); List results = query.list(); if (results != null && !results.isEmpty()) { return (User) results.get(0); } } catch (Exception e) { e.printStackTrace(); } finally { session.close(); } return null; } public boolean update(User user) { Session session = sessionFactory.openSession(); Transaction transaction = session.beginTransaction(); try { User obj = findByUserName(user.getUserName()); obj.setAddress(user.getAddress()); obj.setFirstName(user.getFirstName()); obj.setLastName(user.getLastName()); if (user.getPassword() != null && !user.getPassword().isEmpty()) { obj.setPassword(Utils.getHashMD5(user.getPassword())); } if (user.getImageUrl() != null && !user.getImageUrl().isEmpty()) { obj.setImageUrl(user.getImageUrl()); } session.update(obj); transaction.commit(); return true; } catch (HibernateException e) { e.printStackTrace(); transaction.rollback(); } finally { session.close(); } return false; } public boolean save(User User) { Session session = sessionFactory.openSession(); Transaction transaction = session.beginTransaction(); try { session.save(User); transaction.commit(); return true; } catch (HibernateException e) { e.printStackTrace(); transaction.rollback(); } finally { session.close(); } return false; } }
Heavitree Heavitree is a historic village and parish, formerly situated outside the walls of the City of Exeter in Devon, England, and today an eastern suburb of that city. It was formerly the first significant village outside the city on the road to London. It was the birthplace of Thomas Bodley and Richard Hooker, and until 1818 was a site for executions. History The name appears in Domesday Book as Hevetrowa or Hevetrove, and in a document of c.1130 as Hefatriwe. Its derivation is uncertain, but because of the known execution site at Livery Dole, it is thought most likely to derive from heafod-treow (Old English for "head tree"), which refers to a tree on which the heads of criminals were placed, though an alternative explanation put forward by W. G. Hoskins is that it was a meeting place for the hundred court. The last executions for witchcraft in England took place at Heavitree in 1682, when the "Bideford Witches" Temperance Lloyd, Mary Trembles, and Susanna Edwards were executed. (Local folklore used to associate the name with the aftermath of the Monmouth Rebellion of 1685, when Judge Jeffreys supposedly ran out of gibbets.) The last execution to take place here was in 1818, when Samuel Holmyard was hanged at the Magdalen Drop for passing a forged one-pound note. In the hundred years from 1801 to 1901, the population of Heavitree grew from 833 to 7,529, reflecting its assimilation into the expanding city of Exeter. It first became an independent Urban District, then became part of the city in 1913. Part of the historic district is still one of the wards for elections to the City Council. The expanding population necessitated the rebuilding of the small medieval church, and the church of St Michael and All Angels was built in 1844–46 to the design of architect David Mackintosh. Its most imposing feature is the west tower, built in 1890 to the design of E. Harbottle. In 2002, a yew tree in the churchyard was included among the "50 Great British Trees" to celebrate the Golden Jubilee of Queen Elizabeth II. However, it is unlikely that this is the actual tree from which Heavitree gets its name. The Heavitree Brewery was a local brewery, located in Heavitree; its history can be traced back to 1790. It was the last brewery in Exeter to cease production, continuing until 1970; the brewery buildings were demolished in 1980. The name continues in use as the owner of a chain of pubs in South West England, and Heavitree Brewery PLC continues as a quoted company with its address in Exeter. There is also a linked charitable trust. Recreation By 1905 there was pressure to provide facilities for the youth of the district, who were causing problems in Fore Street in the evenings, so at the end of that year the urban district council purchased four fields from a builder for £3,100 and opened a children's playground on 1 May 1906. The rest of the grounds were landscaped by the Veitch family, and a bowling green and tennis courts followed in 1907. Heavitree Pleasure Ground is still open today and contains a number of leisure facilities. The district's football team, Heavitree Social United (a merger of the previous Heavitree United and Heavitree Social Club), is one of the better-known local teams in Exeter, as of 2006 playing in the Devon and Exeter Football League Premier division; the club has previously played in the (more senior) Devon County League.
Geography Heavitree lies on one of the most convenient routes from the city centre to the northbound M5 motorway and eastbound A30 trunk road ensuring that much traffic continues to pass through the district. Its main thoroughfare is Fore Street, a shopping street which rises sharply to the former execution site of Livery Dole, now marked by almshouses and a small medieval chapel built of red Heavitree stone. From here, Heavitree Road runs downhill to Exeter city centre, passing the main city Police Station on the right and St Luke's Hall, part of the University of Exeter, left. Heavitree is also the location of the Royal Devon and Exeter Heavitree Hospital. Heavitree stone is a type of red sandstone that was formerly quarried in the area and was used to construct many of Exeter's older buildings, including Exeter Guildhall. The Heavitree Gap in the MacDonnell Ranges mountains in Australia was named after Heavitree by the surveyor William Mills, who had attended Heavitree School in England. The Heavitree Gap adjoins the city of Alice Springs in Australia's Northern Territory.
The commanding general of the 101st Airborne Division and 700 of his troops will head to Liberia in late October as the military steps up its response to the Ebola crisis in West Africa, the Pentagon announced Tuesday. The "Screaming Eagle" troops from Fort Campbell, Kentucky, will set up a headquarters in Monrovia, the Liberian capital, and will be joined by 700 combat engineers from several commands, the Pentagon said. Once the troops have arrived, Army Maj. Gen. Gary Volesky, commander of the 101st, will replace Maj. Gen. Darryl Williams as commander of the U.S. military response to the Ebola epidemic that has hit hardest in Liberia, Guinea and Sierra Leone. Williams will return to his post as commander of U.S. Army Africa, the Pentagon said. Aid groups and officials in West Africa have complained about what they called the slow pace of the U.S. and the international community's response to the epidemic. Rear Adm. John Kirby, the Pentagon spokesman, took particular issue with published reports calling the military's efforts thus far "slow-footed." "I just flatly disagree," Kirby said at a Pentagon briefing. "It takes some time, it takes some logistics expertise" to organize the response outlined by President Obama two weeks ago when he announced that 3,000 U.S. troops would be deployed to West Africa. Obama's announcement on Sept. 16 came six months after the outbreak in West Africa of history's worst Ebola epidemic. Kirby stressed that "everybody in the military shares the sense of urgency" as the epidemic worsens. The World Health Organization has reported that cases of Ebola and deaths have escalated in recent weeks. Army Gen. Martin Dempsey, chairman of the Joint Chiefs of Staff, also issued a statement defending the military's response. He said the military's efforts would "support U.S. government and international relief efforts by leveraging our unique U.S. military capabilities." "Specifically, we're establishing command and control nodes, logistics hubs, training for health care workers, and providing engineering support," Dempsey said. "The protection of our men and women is my priority as we seek to help those in Africa and work together to stem the tide of this crisis." Kirby also said the military efforts were part of a "whole of government" approach that involved the U.S. Agency for International Development and the Centers for Disease Control. Currently, about 195 U.S. military personnel are in Liberia, Kirby said. They are involved in locating and preparing sites for facilities to treat health care workers who may have contracted the virus. A 25-bed facility was expected to be operational by mid-October in Liberia, and 17 other 100-bed facilities were planned. Obama also pledged that the military would set up an Intermediate Staging Base in Senegal to serve as an "air bridge" for channeling medical personnel and supplies to the region. Currently, there are no U.S. military personnel in Senegal, the Pentagon said. Kirby said that all military personnel deployed to West Africa would be trained in the use of protective gear "and on the disease itself," although "U.S. military personnel are not and will not" be in direct contact with Ebola victims. Kirby said that the deployments to West Africa were expected to last six months but could go longer, depending on whether the virus was contained. He also said that the military response could be expanded to involve more than 3,000 troops.
The World Health Organization reported Tuesday that the number of Ebola patients in Guinea, Liberia and Sierra Leone had passed 6,500, with more than 3,000 deaths recorded. -- Richard Sisk can be reached at richard.sisk@monster.com
use std::pin::Pin; use std::task::{Context, Poll}; use futures::{ready, Future, Stream}; use pin_project::pin_project; use crate::scalar::Number; use crate::transaction::Txn; use crate::TCResult; use super::super::Coord; use super::{Read, ReadValueAt}; #[pin_project] pub struct ValueReader<'a, S: Stream + 'a, T> { #[pin] coords: Pin<Box<S>>, #[pin] pending: Option<Read<'a>>, txn: &'a Txn, access: &'a T, } impl<'a, S: Stream + 'a, T> ValueReader<'a, S, T> { pub fn new(coords: S, txn: &'a Txn, access: &'a T) -> Self { Self { coords: Box::pin(coords), txn, access, pending: None, } } } impl<'a, S: Stream<Item = TCResult<Coord>> + 'a, T: ReadValueAt> Stream for ValueReader<'a, S, T> { type Item = TCResult<(Coord, Number)>; fn poll_next(self: Pin<&mut Self>, cxt: &mut Context<'_>) -> Poll<Option<Self::Item>> { let mut this = self.project(); Poll::Ready(loop { if let Some(mut pending) = this.pending.as_mut().as_pin_mut() { let result = ready!(pending.as_mut().poll(cxt)); this.pending.set(None); break Some(result); } else if let Some(coord) = ready!(this.coords.as_mut().poll_next(cxt)) { match coord { Ok(coord) => { let read = this.access.read_value_at(&this.txn, coord); this.pending.set(Some(read)); } Err(cause) => break Some(Err(cause)), } } else { break None; } }) } }
package com.planet.staccato.config; import com.fasterxml.jackson.annotation.JsonInclude; import com.fasterxml.jackson.annotation.JsonTypeInfo; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.jsontype.NamedType; import com.planet.staccato.collection.CollectionMetadata; import com.planet.staccato.model.Item; import com.planet.staccato.properties.CoreProperties; import lombok.RequiredArgsConstructor; import org.springframework.context.annotation.Configuration; import org.springframework.stereotype.Component; import javax.annotation.PostConstruct; import java.util.List; /** * Jackson configuration for extensions. Tells Jackson to inspect the collection field value in item properties to * determine which concrete class to deserialize to. * TODO: this seems to be a combination of an initializer and a configuration. Probably a cleaner way to do this. * * @author joshfix * Created on 10/22/18 */ @Component @Configuration @RequiredArgsConstructor public class ExtensionConfig { private final StacConfigProps configProps; private final ObjectMapper mapper; private final List<CollectionMetadata> collectionMetadataList; /** * The following code is necessary for Jackson to understand what implementation class to deserialize item * properties to. It creates a mapping inside of Jackson between the properties implementation class and the * id of the collection */ @PostConstruct public void init() { mapper.addMixIn(Item.class, ItemMixin.class); collectionMetadataList.forEach(metadata -> { metadata.setStacVersion(configProps.getVersion()); NamedType namedType = new NamedType(metadata.getProperties().getClass(), metadata.getId()); mapper.registerSubtypes(namedType); }); // TODO -- this isn't being set from the main initializer for some reason??? mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL); } private interface ItemMixin<T extends CoreProperties> { @JsonTypeInfo( use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXTERNAL_PROPERTY, property = "collection" ) public T getProperties(); } }
Wafer-Scale Method of Controlling Impurity-Induced Disordering for Optical Mode Engineering in High-Performance VCSELs Impurity-induced disordering (IID) provides a wafer-scale method of enhancing the performance of vertical-cavity surface-emitting lasers (VCSELs) for applications requiring higher output power in a specific optical mode. IID has been demonstrated to achieve higher optical power, faster modulation, and single-mode operation in oxide-confined VCSELs. Through the formation of an IID aperture, spatial control of mirror reflectivity can be selectively used to increase the threshold modal gain of only selected optical modes. However, these IID apertures have been limited by the lack of a method to control the shape of the diffusion front. For maximum laser mirror loss, IID apertures employed for mode-control require deep disordering. Consequently, significant lateral diffusion can be present that undesirably increases the lasing threshold for the fundamental mode. A manufacturable method is presented for controlling the shape of the IID aperture diffusion front by tailoring the strain of the diffusion mask. Experimental analysis to determine an optimal IID aperture size for single-mode high-power operation is next discussed. Numerical analysis of the mirror losses induced and consequent reduction in supported higher order modes as a result of the IID aperture is then presented.
Synthesis and Evaluation of Oxyguanidine Analogues of the Cysteine Protease Inhibitor WRR-483 against Cruzain. A series of oxyguanidine analogues of the cysteine protease inhibitor WRR-483 were synthesized and evaluated against cruzain, the major cysteine protease of the protozoan parasite Trypanosoma cruzi. Kinetic analyses of these analogues indicated that they have comparable potency to previously prepared vinyl sulfone cruzain inhibitors. Co-crystal structures of the oxyguanidine analogues WRR-666 and WRR-669 bound to cruzain demonstrated different binding interactions with the cysteine protease, depending on the aryl moiety of the P1' inhibitor subunit. Specifically, these data demonstrate that WRR-669 is bound noncovalently in the crystal structure. This represents a rare example of noncovalent inhibition of a cysteine protease by a vinyl sulfone inhibitor.
LATROBE, Pa. — Pittsburgh Steelers coach Mike Tomlin is excited to see his new-look secondary. The Steelers overhauled the position following a bumpy season that included a disappointing upset playoff loss to Jacksonville in the divisional round. The team followed by parting ways with several long-time veterans, and their position coach, while adding experience, depth and a pair of hard-hitting draft picks. "We have some guys in that position that are new to us," Tomlin said. "It's going to be an active, active training camp for that group. I'm excited about seeing them all." Pittsburgh cut ties with veterans Mike Mitchell, William Gay and Robert Golden, in addition to J.J. Wilcox and position coach Carnell Lake. The Steelers added long-time Penn State assistant Tom Bradley as the defensive backs coach along with former veteran Green Bay safety Morgan Burnett and special teams standout Nat Berhe. They also invested a first-round draft pick in Virginia Tech safety Terrell Edmunds and another in fifth-rounder Marcus Allen out of Penn State. "We didn't go into the draft saying we were going to draft another safety," Steelers general manager Kevin Colbert said. "We went into the draft saying, OK, if safety is the best position available, we can add another guy, and that's what we did." The Steelers also plan to switch Sean Davis from strong to free safety this season. Davis said he's more comfortable on the strong side, but he's anxious to improve at free safety. Davis, who enters his third season, is a veteran in the room of sorts, as he and Jordan Dangerfield are the lone returning safeties with experience on the team. "The coaches talked to me and said we need you to be a leader, to be a veteran," Davis said. "I'm embracing my role. I have to get comfortable with being uncomfortable and take a bigger step in helping this defense reach our goals." Burnett was versatile during eight seasons in Green Bay. He moved around late in his tenure, but primarily played safety with the Packers. Burnett started 102 games for the Packers and won a Super Bowl ring in 2011 when Green Bay beat Pittsburgh. "I'm a firm believer that you let your work do the talking, and you go about doing things the right way," Burnett said. "You have to earn the respect of your teammates." In last season's AFC divisional playoff round, the Steelers allowed 45 points at home to a Jacksonville offense that scored just 10 points a week earlier. The secondary gave up two pass plays of 40 yards or more in that game, a communication issue the new group looks to rectify this season. Tackling is also a priority for the secondary. "We were 31st in the league at missed tackles, so that's definitely a point of emphasis," Davis said. "We all know 31st is unacceptable, and for us to be world champions, that needs to be worked on and corrected." The Steelers hope Edmunds, their first-round pick, can help. Edmunds is classified as a safety, but he also served as a linebacker at Virginia Tech when the Hokies played with five or more defensive backs. He could play a similar hybrid role with the Steelers. "Of course I want to start, but whatever position they put me in, I gotta go out and make a play," Edmunds said. "If that's strictly on special teams, if it's coming in on certain packages, I'm just trying to go out there and help the team win." NOTES: Steelers starting left guard Ramon Foster exited Saturday's practice on a cart with a lower body injury.
Tomlin said after practice that Foster was being evaluated and did not have an update on his status. Trainers worked on Foster's right knee and leg following a play during the Steelers' first padded practice of training camp. ... Burnett missed his second practice on Saturday with a minor hamstring injury. ... The Steelers announced Rocky Bleier, Buddy Dial, Alan Faneca, Bill Nunn and Arthur J. Rooney, Jr. to the team's 2018 Hall of Honor.
A survey of self-reported outcome instruments for the foot and ankle. The information acquired from self-reported outcome instruments is useful only if there is evidence to support the interpretation of obtained scores. To properly interpret scores, there should be evidence for content validity, construct validity, reliability, and responsiveness. Evidence regarding score interpretation must also contain a description of the applicable test conditions, including information about the characteristics of subjects, timing of data collection, and the construct of change. The objective of this review was to identify self-reported outcome instruments that have evidence to support their usefulness for assessing the effect of treatment directed at individuals with foot- and ankle-related pathologic conditions in an orthopaedic physical therapy setting. In addition, we provide specific information that will allow clinicians and researchers to select an appropriate instrument and properly interpret the obtained scores. Fourteen self-reported outcome instruments that met the objective of this review were identified. Five instruments, the Foot and Ankle Ability Measure, Foot Function Index, Foot Health Status Questionnaire, Lower Extremity Function Scale, and Sports Ankle Rating System quality-of-life measure, satisfied all 4 categories of evidence (content validity, construct validity, reliability, and responsiveness) outlined herein.
Pyramidal Hybrid Approach: Wavelet Network with OLS Algorithm-Based Image Classification Taking advantage of both the scaling property of wavelets and the high learning ability of neural networks, wavelet networks have recently emerged as a powerful tool in many applications in the field of signal processing such as data compression, function approximation as well as image recognition and classification. A novel wavelet network-based method for image classification is presented in this paper. The method combines the Orthogonal Least Squares algorithm (OLS) with the Pyramidal Beta Wavelet Network architecture (PBWN). First, the structure of the Pyramidal Beta Wavelet Network is proposed and the OLS method is used to design it by presetting the widths of the hidden units in PBWN. Then, to enhance the performance of the obtained PBWN, a novel learning algorithm based on orthogonal least squares and frames theory is proposed, in which we use OLS to select the hidden nodes. In the simulation part, the proposed method is employed to classify colour images. Comparisons with some typical wavelet networks are presented and discussed. Simulations also show that the PBWN-orthogonal least squares (PBWN-OLS) algorithm, which combines PBWN with the OLS algorithm, results in better performance for colour image classification.
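The abstract gives no implementation details; as a rough illustration of the general OLS idea it builds on (a sketch, not the authors' PBWN-OLS algorithm), hidden wavelet units can be selected greedily by the error-reduction ratio of their outputs after orthogonalizing against the units already chosen:

import numpy as np

def ols_select(Phi, y, n_select):
    """Greedy OLS forward selection by error-reduction ratio (sketch).

    Phi: (n_samples, n_candidates) outputs of candidate wavelet units.
    y:   (n_samples,) regression target.
    Returns the indices of the selected hidden units.
    """
    selected, basis = [], []
    y_energy = y @ y
    for _ in range(n_select):
        best_err, best_idx, best_w = -1.0, None, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            w = Phi[:, j].astype(float)
            for q in basis:                  # orthogonalize against chosen basis
                w = w - (q @ w) / (q @ q) * q
            if w @ w < 1e-12:                # numerically dependent column
                continue
            err = (w @ y) ** 2 / ((w @ w) * y_energy)  # error-reduction ratio
            if err > best_err:
                best_err, best_idx, best_w = err, j, w
        selected.append(best_idx)
        basis.append(best_w)
    return selected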
Designing and constructing a magnetic stimulator: theoretical and practical considerations Magnetic nerve stimulation has proven to be an effective non-invasive technique that can be used to excite peripheral and central nervous systems. In this technique, the excitement of the neural tissue depends on exposing the body to a transient magnetic field. This field can be generated by passing a high pulse of current through a coil over a short period of time. This paper presents general guidelines for designing and constructing a magnetic stimulator. These guidelines cover theoretical concepts, hardware aspects and components required to build these systems. The critical points discussed in this paper are based on key findings and difficulties encountered during the process of building the system used for this study. Furthermore, some suggestions were addressed to improve future designs.
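A back-of-the-envelope sketch of one theoretical consideration of this kind (not taken from the paper): for a capacitor-discharge stimulator, the coil current follows the underdamped series-RLC solution i(t) = V0/(omega*L) * exp(-alpha*t) * sin(omega*t), with alpha = R/(2L) and omega = sqrt(1/(LC) - alpha^2). The component values below are purely hypothetical:

import numpy as np

# Hypothetical component values for an underdamped capacitor-discharge stage
V0 = 2000.0      # initial capacitor voltage (V)
C = 200e-6       # energy-storage capacitance (F)
L = 20e-6        # stimulating-coil inductance (H)
R = 50e-3        # total loop resistance (ohm)

alpha = R / (2 * L)                       # damping rate (1/s)
omega = np.sqrt(1 / (L * C) - alpha**2)   # ringing frequency (rad/s)

t = np.linspace(0, 500e-6, 1000)          # first 500 us of the discharge
i = V0 / (omega * L) * np.exp(-alpha * t) * np.sin(omega * t)

print(f"peak coil current ~ {i.max()/1e3:.1f} kA "
      f"at {t[i.argmax()]*1e6:.0f} us")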
Associations between exploratory dietary patterns and incident type 2 diabetes: a federated meta-analysis of individual participant data from 25 cohort studies Purpose In several studies, exploratory dietary patterns (DP), derived by principal component analysis, were inversely or positively associated with incident type 2 diabetes (T2D). However, findings remained study-specific, inconsistent and rarely replicated. This study aimed to investigate the associations between DPs and T2D in multiple cohorts across the world. Methods This federated meta-analysis of individual participant data was based on 25 prospective cohort studies from 5 continents including a total of 390,664 participants with a follow-up for T2D (3.8-25.0 years). After data harmonization across cohorts, we evaluated 15 previously identified T2D-related DPs for association with incident T2D, estimating pooled incidence rate ratios (IRR) and confidence intervals (CI) by Piecewise Poisson regression and random-effects meta-analysis. Results 29,386 participants developed T2D during follow-up. Five DPs, characterized by higher intake of red meat, processed meat, French fries and refined grains, were associated with higher incidence of T2D. The strongest association was observed for a DP comprising these food groups besides others (IRRpooled per 1 SD = 1.104, 95% CI 1.059-1.151). Although heterogeneity was present (I² = 85%), the IRR exceeded 1 in 18 of the 20 meta-analyzed studies. Original DPs associated with lower T2D risk were not confirmed. Instead, a healthy DP (HDP1) was associated with higher T2D risk (IRRpooled per 1 SD = 1.057, 95% CI 1.027-1.088). Conclusion Our findings from various cohorts revealed positive associations for several DPs, characterized by higher intake of red meat, processed meat, French fries and refined grains, adding to the evidence base that links DPs to higher T2D risk. However, no inverse DP-T2D associations were confirmed. Supplementary Information The online version contains supplementary material available at 10.1007/s00394-022-02909-9. A solution to overcome the limitation of study-specific findings is to replicate the association of DPs with T2D in independent populations. So far, only one study investigated the generalizability of T2D associations with DPs derived by principal component analysis (PCA). However, this study was restricted to European populations participating in the EPIC-InterAct consortium, with the aim to replicate only those T2D-associated DPs which were derived in country-specific analyses within this consortium. In addition to PCA, patterns derived by reduced rank regression were also replicated. The main principle of those replication approaches is the reconstruction of pattern variables based on the reported pattern structure. In this context, it has been proposed to derive so-called simplified DP variables to construct less population-dependent DP variables with a content approximately similar to that of the original exploratory DPs. It has been shown that the DP variables calculated with this method correlated highly with the original DPs and reflected variation in intake of individual components well. Hence, this approach seems well suited to replicate study-specific associations of exploratory DPs in independent study populations. To date, however, this method has not been used to examine exploratory DPs in relation to T2D across populations from different continents of the world.
To overcome the research gap of investigating the generalizability of DP-T2D associations using the approach of simplified DPs, the present study aimed 1) to investigate the association of previously reported T2D-associated DPs with incident T2D and 2) to evaluate, if two DPs of overlapping FGs ("mainly healthy" and "mainly unhealthy"), also previously identified in the same systematic review, are associated with incident T2D. For this purpose, the InterConnect collaboration project offers a well-suited research platform for federated meta-analyses of harmonized individual level study data from 25 cohorts across different continents and adjusting for a common set of potential confounders across studies. As another advantage, this approach allowed the inclusion of cohorts that have relevant data, but never published on the topic before. Dietary assessment and construction of dietary patterns Dietary intake was assessed by food frequency questionnaires (FFQ) in most cohorts, by dietary history interview and a 24-h recall in one cohort each (Table S1). For the present study food intake encoded in g/day was used. Some cohorts provided only standard portion sizes and frequency of consumed food items, which were converted into g/day. For some US cohorts, where information on portion size was not available, variable-specific standard portion sizes sourced from the United States Department of Agriculture were used. The dietary data of all cohorts were then harmonized to form a set of food groups. For this purpose, the FGs used in the published DPs associated with T2D risk were compared. Based on this, a set of FGs was defined to be used across all published DPs (Tables 1, S2 and S3). If for a specific food item, which was used in the original DP, no intake information was available in other included studies, it was omitted. Then the respective study-specific food items were added in each InterConnect cohort to form the corresponding harmonized FG (Excel Table S6). Subsequently, DPs were constructed based on the harmonized FGs. The structure of DPs was defined based on the findings of our previous systematic review, thus reflecting a) DPs found to be significantly associated with T2D risk in at least one cohort study (13 individual DPs) and b) two DPs reflecting DPs with overlapping food composition: the DP reflecting the overlap of "mainly healthy" food groups was composed of fruits, vegetables, legumes, poultry and fish, while the DP of " mainly unhealthy" food groups was composed of refined grains, French fries, red meat, processed meat, high-fat dairy products and eggs. Thus, 15 DPs in total were constructed. To calculate individual DP scores for study participants, the approach of simplified DPs was used. In PCA-derived DPs, all food groups contribute with a respective factor loading to the overall pattern structure. The simplification approach considers only those FGs with strong contribution to the respective DP (factor loading (FL) ≥ 0.2) in the original DPs. Details of which FGs were combined to calculate the respective simplified DP scores are shown in Tables 1, S2 and S3. These FGs were standardized according to the distribution in each participating study, respectively. Then, simplified DP scores were calculated by summing up the selected FGs without any weighting (in original DP the respective FL is the weighting) and by also considering negative algebraic signs for those FGs with negative FL from the original publication. 
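A minimal sketch of this scoring step (column names and loadings are hypothetical; the final line applies the within-study standardization of the score that is described immediately below):

import pandas as pd

def simplified_dp_score(intakes: pd.DataFrame, loadings: dict) -> pd.Series:
    """Simplified dietary-pattern score for one study (sketch).

    intakes:  participants x food groups (g/day).
    loadings: {food_group: published factor loading}; pass only the
              food groups with |loading| >= 0.2.
    """
    score = pd.Series(0.0, index=intakes.index)
    for fg, fl in loadings.items():
        z = (intakes[fg] - intakes[fg].mean()) / intakes[fg].std()  # within-study z-score
        score += z if fl >= 0 else -z    # unweighted sum; only the sign is kept
    return (score - score.mean()) / score.std()  # standardized final score

# Illustrative call with hypothetical loadings of an "unhealthy" pattern:
# scores = simplified_dp_score(fg_table, {'red_meat': 0.55,
#                                         'processed_meat': 0.48,
#                                         'whole_grains': -0.25})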
Finally, study-specific simplified DP scores were also standardized to allow meta-analysis across cohorts. Ascertainment of incident T2D To minimize potential variations due to varying diagnosis criteria of T2D incidence across cohorts, two harmonized outcomes were defined. As the primary outcome, clinically incident T2D was defined when any one or more of the following criteria were fulfilled: ascertained by linkage to a registry or medical record; confirmed antidiabetic medication usage; self-report of physician diagnosis or antidiabetic medication, verified by any of the following: (a) at least one additional source from 1 or 2 above, (b) biochemical measurement (glucose or HbA1c), (c) a validation study with high concordance. As a secondary outcome with less strict criteria, we defined incident T2D when any of the following criteria were fulfilled: ascertained by linkage to a registry or medical record; confirmed antidiabetic medication usage; self-report of physician diagnosis or antidiabetic medication or biochemical measurement (glucose or HbA1c). Assessment of covariates We defined a set of potential confounders to be used in analyses based on frequent usage in the studies of the 13 published T2D-associated DPs and availability across all participating InterConnect cohorts (Table S4). The final set of confounders included: age at baseline (years), sex, body mass index (BMI) (kg/m²), physical activity (PA, cohort-specific items were used), education (cohort-specific items were used), smoking (never, former, current smoker), alcohol consumption (g/day), hypertension (yes/no), and energy intake (kcal/day). The recorded data of confounders of the respective InterConnect cohorts were used and harmonized across all cohorts, if possible (Table S5). All cohorts provided age in years, BMI in kg/m², and hypertension as yes or no. Smoking was harmonized as never, former, and current smoker, energy intake into kcal/day and alcohol into g/day. In the Golestan Cohort Study from Iran, alcohol consumption was coded as never or ever drinker. Study-specific coding was used for PA and education because harmonization was not feasible due to extensive differences in codes (Table S5). Statistical analysis All analyses were conducted using R within the DataSHIELD federated meta-analysis programming library. For analysis, participants with the following criteria were excluded: T2D, myocardial infarction, stroke or cancer at baseline, to avoid reverse causation; extreme energy intake (men < 800 kcal or > 4200 kcal, women < 500 kcal or > 3500 kcal); missing follow-up time; missing confounders; and more than 10% missing food items. In total, 46.9% of the participants of the InterConnect cohorts were excluded (Table 2). Baseline characteristics were calculated stratified by cohort. Normally distributed variables were presented as mean and standard deviation (SD), non-normally distributed variables as median and interquartile range (IQR), and categorical variables as relative percentages. Incidence rate ratios (IRRs) and 95% confidence intervals (CIs) were estimated to test for the associations between a 1 standard deviation (SD) increase in DP scores and incident T2D in each cohort separately, using Piecewise Poisson regression adjusted for age, sex, BMI, PA, education, smoking, alcohol consumption, hypertension and energy intake. The Piecewise Poisson regression is available in the DataSHIELD library and has been shown to represent a close approximation to the Cox Proportional Hazards regression.
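For the pooling step, a compact sketch of a standard DerSimonian-Laird random-effects meta-analysis on the log scale (illustrative only; not necessarily the exact DataSHIELD/metafor routine used in the study), taking study-level log-IRRs and their standard errors:

import numpy as np

def pool_random_effects(log_irr, se):
    """DerSimonian-Laird random-effects pooling of study estimates (sketch)."""
    log_irr, se = np.asarray(log_irr, float), np.asarray(se, float)
    w = 1.0 / se**2                                 # inverse-variance weights
    theta_fe = np.sum(w * log_irr) / np.sum(w)      # fixed-effect mean
    q = np.sum(w * (log_irr - theta_fe)**2)         # Cochran's Q
    k = len(log_irr)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_re * log_irr) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return (np.exp(pooled),                         # pooled IRR
            np.exp(pooled - 1.96 * se_p),           # lower 95% CI
            np.exp(pooled + 1.96 * se_p),           # upper 95% CI
            i2)                                     # heterogeneity I^2 (%)

# e.g. three hypothetical cohort estimates:
# pool_random_effects(np.log([1.10, 1.04, 1.07]), [0.03, 0.05, 0.04])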
For the European Prospective Investigation into Cancer and Nutrition (EPIC)-InterAct cohorts, a weighting was applied that is analogous to Prentice weighting (weights of 1 for all cases and weights of (number of non-cases in the whole cohort) / (number of non-cases in the subcohort) for non-cases) to account for the case-cohort design in survival analyses when using the piecewise Poisson method. Pooled IRRs were estimated using random-effects meta-analysis models and were visualized with forest plots. Heterogeneity was assessed using I², the p value of the chi-square test and the τ² statistic. For each DP a statistical model for the primary and the secondary outcome was calculated. For sensitivity analysis we calculated a second set of the 13 DPs by considering only FGs with FL ≥ 0.4 in the original publication, to identify those strongly contributing to the DP. Moreover, a sensitivity analysis with exclusion of certain component FGs was conducted to estimate whether a few FGs were mainly driving the association of UDP3, which showed the strongest association with T2D. To account for characteristics potentially explaining heterogeneity between the cohorts, meta-regressions were calculated with the pooled IRR as the dependent variable and age, BMI, follow-up time and region as the independent variables. For this, the metareg function within the metafor package (version 3.02) in R was used. Results In the present analysis, data from 390,664 participants across 25 cohorts with a median follow-up time ranging from 3.8 to 25.0 years were included (Table 2). Four cohorts included only women (EPIC-InterAct-France, Mexican Teachers' Cohort (MTC), Swedish Mammography Cohort (SMC), Women's Health Initiative Observational Study (WHI-OS)) and two only men (Cohort of Swedish Men (COSM), Puerto Rico Heart Health Program (PRPHH)). Participants from the Coronary Artery Risk Development in Young Adults (CARDIA) study, MTC and the Seguimiento University of Navarra (SUN) cohort were of younger age (24.9-41.8 years), whereas participants from other cohorts were older (49.5-63.1 years). The mean BMI ranged from 23.9 kg/m² in SUN to 29.3 kg/m² in EPIC-InterAct-Spain. During follow-up, 29,386 clinically incident cases of T2D were recorded for the primary outcome and 36,527 incident cases for the secondary outcome. The dietary intake of harmonized FGs showed marked differences between the cohorts (Excel Supplemental Table). For example, reported median fruit intake was highest in MTC (321.7 g/day) and about three times higher than the median intake in the cohorts with the lowest fruit intake, like CARDIA (94.9 g/day) and EPIC-InterAct-Germany (91.4 g/day). Particularly high intakes compared to other cohorts were observed for vegetables in the SUN Study (391 g/day) and for legumes and soy (but mostly beans) in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Healthy dietary patterns and risk of T2D None of the HDPs were robustly associated with a reduced risk of T2D. This was the case for the two outcome definitions and for the two versions of each HDP constructed using different cut-offs of FL to define component FGs. HDP1 was significantly associated with a higher T2D risk (primary outcome: pooled IRR per SD = 1.057, 95% CI 1.027-1.088; secondary outcome: IRR per SD = 1.042, 95% CI 1.018-1.065, Table 3). This DP contains vegetables, fruits, margarine, nuts, poultry, eggs, fish, red meat, whole milk, high-fat dairy and low-medium fat dairy.
However, this association was absent in sensitivity analyses when only FGs with published absolute FL ≥ 0.4 (vegetables and fruits, Table 2) were used to construct the HDP1 (Supplemental Table 6). HDP3, composed of fruits and dairy products, was also not significantly associated with T2D risk (pooled IRR per SD = 0.976, 95% CI 0.948-1.005, Table 3), when using the secondary outcome definition. For the remaining HDPs the pooled risk estimators did not indicate associations with T2D risk (Table 3). Overall, there was moderate to substantial heterogeneity (I² = 58-83%, Table 3) for the HDP-T2D associations. For HDP1, none of the characteristics (age, BMI, follow-up time and region) explained the observed heterogeneity (I² = 66%) in meta-regressions (data not shown). Unhealthy dietary patterns and risk of T2D Five of the seven UDPs (UDP3-7) were associated with a higher T2D risk in pooled analyses across all cohorts (Table 6). Most cohort-specific IRRs indicated that UDP3 was associated with a higher T2D risk or a trend towards an association (Figs. 1, 2). Similar findings, although weaker, were observed for UDPs 4-7, where heterogeneity ranged from moderate (I² = 49% for UDP4) to substantial (I² = 81% for UDP6). Here, region explained a considerable proportion of the heterogeneity for UDP6 (29%) and UDP7 (25%), while follow-up time explained 30% of the overall heterogeneity for UDP5 and 24% for UDP6. No association with T2D risk was found for UDP1 and UDP2, neither for the two outcome definitions nor for the two FL cut-offs (Table 3, Supplemental Table 6). Dietary patterns with "mainly healthy" and "mainly unhealthy" food groups and T2D risk We evaluated the two DPs reflecting previously published DPs with overlapping FG components irrespective of whether they have been described to be associated with T2D previously or not. The DP consisting of "mainly healthy" FGs, i.e. fruits, vegetables, legumes, poultry and fish, was not associated with T2D risk across the included cohorts (primary outcome: pooled IRR per 1 SD = 1.033, 95% CI 0.998-1.071; secondary outcome: pooled IRR per 1 SD = 1.000, 95% CI 0.975-1.026) (Fig. 3, Supplemental Fig. 6). The heterogeneity across studies was substantial (primary outcome: I² = 84%, secondary outcome: I² = 76%); hence, the forest plots show the cohorts arranged by region. In contrast, the DP consisting of "mainly unhealthy" FGs, i.e. refined grains, French fries, red meat, processed meat, high-fat dairy products and eggs, was significantly associated with a higher T2D risk (primary outcome: pooled IRR per 1 SD = 1.079, 95% CI 1.051-1.108; secondary outcome: pooled IRR per 1 SD = 1.067, 95% CI 1.037-1.098) (Fig. 3, Supplemental Fig. 6). The heterogeneity was moderate for the primary outcome (I² = 58%), but substantial for the secondary outcome (I² = 74%). Most study-specific IRRs indicated a higher risk for this DP, except for the Golestan Cohort Study, which pointed towards an inverse association. Sensitivity analysis of UDP3 UDP3 was composed of the FGs red meat, processed meat, poultry, eggs, fish, French fries, refined grain products, and rice. To assess the contribution of these individual FGs to the T2D risk of UDP3, a sensitivity analysis was carried out by excluding individual FGs (Supplemental Table 7). The exclusion of refined grains resulted in the largest reduction of the IRR estimate (from 1.094 to 1.047; −4.74%), followed by processed meat (−1.66%) and eggs (−1.10%).
Discussion This study investigated associations between exploratory DPs and T2D risk in a large number of prospective cohort studies in a worldwide context, using harmonized data analyses across all studies and federated meta-analyses of individual studies. No robust inverse associations were observed between HDPs and risk of T2D. HDP1 was associated with a higher T2D risk in primary analysis, but this unexpected finding was not confirmed in sensitivity analyses. We observed more consistent findings for UDPs with five of the seven UDPs being associated with higher T2D risk in our meta-analysis of included studies. We investigated two DPs which reflect commonly shared FGs of exploratory DPs identified in previous studies on DP and T2D. The DP with "mainly healthy" FGs, characterized by higher intakes of vegetables, legumes, fruits, poultry and fish, was not associated with T2D risk, but the DP with "mainly unhealthy" FGs, characterized by red meat, processed meat, high-fat dairy products, eggs, refined grains and French fries, was associated with a higher T2D risk. The effect size for all the significant associations was relatively modest with IRRs being 1.10 per 1 SD increased DP score or less. Previous studies have shown differences in risk associations between DPs and T2D in U.S. cohorts and the European EPIC-InterAct study, although this was restricted to a priori DPs like the Dietary Approaches to Stop Hypertension (DASH) diet, the Alternative Healthy Eating Index (AHEI) or reduced rank regression-derived DPs. Given the strong heterogeneity in the composition of exploratory DPs already in the European context, this underlines the importance of investigating if population-specific DP-T2D associations can be replicated across diverse populations, where even higher heterogeneity is expected. To our knowledge, this is the first study to investigate if associations of exploratory DPs with T2D risk can be replicated across cohorts from multiple regions across the world. We have previously investigated the generalizability of exploratory DPs associations with T2D in EPIC-InterAct, a European-wide cohort study. In this analysis, three DPs identified in country-specific analyses were associated with T2D. However, only one DP was consistently associated with T2D risk across the included European cohorts (pooled IRR per 1 SD: 1.12, 95% CI 1.04-1.20). This DP was characterized by high intakes of processed meat, potatoes (including French fries), vegetable oils, sugar, cake and cookies, and tea. Besides the EPIC-InterAct study, we are not aware of any further systematic replication of associations of exploratory DPs and T2D. Also, the EPIC-InterAct study did not attempt to replicate T2D-associated DPs identified in other cohorts than EPIC-InterAct, which has been our current major aim. We were able to replicate associations with higher T2D risk for five of seven investigated UDPs. These five UDPs (UDP3-7) share red meat, processed meat, French fries and refined grains (comprising refined grain bread and refined grain breakfast cereals) as component FGs. Also eggs and high-fat dairy products were component FGs of three out of these five DPs. These FGs are identical to those which we used to construct one DP based on commonly shared "mainly unhealthy" FGs of published DPs. 
Consequently, this pattern was also associated with a higher T2D risk in our meta-analysis: we observed a pooled IRR of 1.08 per 1 SD (95% CI 1.05-1.11) for the primary outcome definition, slightly stronger than the risk estimates for most of the UDPs, which ranged from pooled IRRs of 1.04 (UDP5 by Yu et al. and UDP7 by Schoenaker et al.) to 1.07 (UDP4 identified by Erber et al.). An even higher risk estimate was found for UDP3 (IRR of 1.10 per 1 SD, 95% CI 1.06-1.15), which had been observed in the Melbourne Collaborative Cohort Study to be associated with higher risk of T2D. This DP was characterized not only by red and processed meat, eggs, French fries and refined grains, but also by fish, poultry and rice. We noted that the DPs associated with higher risk in our meta-analyses had only potatoes (including French fries) and processed meat in common with the DP identified in the EPIC-InterAct study. To gain insight into the role of individual FGs in the pattern associations, we conducted a sensitivity analysis of the UDP3-T2D association by excluding individual FGs one at a time. In particular, the exclusion of refined grains attenuated the risk estimate from an IRR of 1.10 to 1.05 for the primary outcome. Still, other components appeared to contribute to the association, and we interpret the synergy of the component FGs of this pattern as driving the association with T2D. The UDPs identified as being associated with a higher risk of T2D showed not only overlaps but also differences in component FGs. For example, butter (UDP4), sugar and confectionary and offals (UDP5), or pizza (UDP6, UDP7) were pattern-specific components besides the commonly shared FGs. Two of the UDPs (UDP5, UDP6) additionally shared the FG sugar-sweetened beverages. This food group was also a component in four of five previously identified reduced rank regression patterns associated with higher T2D risk [14], and evidence from a systematic literature review suggests a 13% higher T2D risk per one-serving (250 mL/day) increment, even after adjustment for BMI. UDP6 was furthermore characterized by the negatively weighted FGs cakes & cookies, legumes, vegetables, fruits and whole grains. However, after exclusion of these FGs due to the use of the cut-off FL ≥ 0.4, the IRR was only marginally changed. None of the HDPs, whether individual DPs described by single studies or the DP defined by commonly shared "mainly healthy" FGs of investigated patterns, were inversely associated with T2D risk in our meta-analyses. This is generally in line with the evidence for the single FGs that are components of such DPs: for instance, vegetables, fruits, legumes, poultry and fish have not been clearly identified as related to lower T2D risk in cohort studies. In contrast to the original observation from the Finnish Mobile Clinic Health Examination Survey, we observed HDP1 to be associated with a higher risk of T2D. Red meat and eggs (frequent components of UDPs) were also contributing components of this pattern; thus, the direction of association in our analysis could potentially be driven by these two components. While the higher T2D risk associated with red meat is well documented, the role of egg consumption remains unclear. Differences in how specific foods are prepared and/or consumed together across populations may explain their assignment to healthy or unhealthy patterns.
Furthermore, if a food group like fish is the main animal protein source in a population, detrimental components such as methylmercury could play a more important role, leading to adverse health effects, than in a population where these components play a minor role because of lower intake. Besides the components of the investigated DPs, it is relevant to discuss overall methodological limitations.

[Fig. 1: Incidence rate ratios and 95% confidence intervals for the association between replicated dietary pattern variables and incident type 2 diabetes. Results are shown for the primary outcome definition and harmonized food groups with published factor loadings > 0.2, by subgroups of region. Associations are adjusted for age, sex, BMI, physical activity, education, smoking, alcohol consumption, total energy intake and hypertension. CI, confidence interval; IRR, incidence rate ratio; HDP, healthy dietary pattern; UDP, unhealthy dietary pattern.]

[Fig. 2: As Fig. 1, but for the secondary outcome definition.]

[Fig. 3: Incidence rate ratios and 95% confidence intervals for the association between the dietary patterns of "mainly healthy" and "mainly unhealthy" food groups and incident type 2 diabetes, using the primary outcome, shown by subgroups of region; adjustments as in Fig. 1.]

To enable the meta-analytical investigation of the DPs across so many different cohorts in the first place, we harmonized the cohort-specific food items into a set of food groups. This entails the problem of summarizing different numbers of food items into one food group, depending on the original dietary assessment. Hence, a difference in median intake of certain food groups between cohorts could reflect real differences in dietary intake between populations or merely differences in the number of food items inquired about. Furthermore, the condensing of food items into food groups led to a loss of granularity: potential differences in the association with T2D of specific food items, e.g. green leafy vegetables, could not be distinguished from other food items within the same food group. Another methodological limitation is the lack of detail about preparation methods, e.g. frying, in the dietary assessment of most of the participating cohorts. This may have led to an underestimation of the association for UDP3, which in the original study by Hodge et al. specifically involved fried fish, poultry and rice, whereas we could only consider overall intake of fish, poultry and rice. A distinction between French fries and potatoes (non-fried) was also not possible in all participating cohorts.
However, a recent meta-analysis of the association of potatoes with T2D risk distinguished between French fries and boiled/baked/mashed potatoes: both types of preparation were associated with a higher T2D risk, although to a greater extent for 150 g/day of French fries (RR of 1.66, 95% CI 1.43-1.94) than for 150 g/day of boiled potatoes (RR of 1.09, 95% CI 1.01-1.18). Hence, we would still expect the risk estimates to point in a similar direction. Besides the food items, a common set of important and well-established confounders had to be harmonized across the cohorts. This set was selected based on the confounders reported in the original publications of the DPs and on the availability of confounders in the participating InterConnect cohorts. Clearly, due to the harmonization approach and the technical setup for federated data analysis, it was not possible to account for all potential confounders, whether generally important ones (e.g. family history of diabetes) or ones relevant to specific study populations (e.g. ethnicity). Still, the use of a harmonized confounder set can be seen as a strength of this study. Alongside the exposure and covariates, the outcome definitions also required harmonization. Because T2D was defined differently as an outcome across the participating cohorts, we applied two different outcome definitions (primary, secondary). To assess whether large differences in the number of T2D cases in some cohorts due to these definitions affected the associations, we conducted a sensitivity analysis comparing the IRRs in subgroups of cohorts with a large (> 40%) versus small (≤ 40%) difference, and observed slightly attenuated associations for all UDPs (data not shown). This indicates that the stricter outcome definition ("primary outcome") resulted in slightly stronger associations. Furthermore, the DPs were replicated in the different cohorts using a simplification process which restricts the DP score calculation to those FGs with high FL and ignores differences in FL between FGs. Many original DPs contained only very few FGs with relatively high FL (≥ 0.4); for instance, the simplified UDP3 would then consist of red meat as the only FG and hence lose the complex pattern structure. We therefore decided to use FGs with FL ≥ 0.2 in the simplified patterns for the main analysis. The simplification ignores relative differences in the contributions of FGs to DPs (reflected by differences in FLs); however, it supports interpretation of DPs in terms of FG intake. While this approach has been successfully applied to replicate other data-driven pattern associations, we cannot rule out that the relative loss of precision in DP score calculation influenced the success of the pattern-T2D association replications in our study. We observed moderate to strong heterogeneity of associations across cohorts, with I² values ranging from 49% (UDP4) to 85% (UDP3). Heterogeneity between studies may have several explanations. The condensation of foods into harmonized FGs may have led to the inclusion of heterogeneous food items, both because of strong culinary differences between populations and because of the differing numbers of food items inquired about by the dietary assessment instruments. Another explanation could be the inclusion of cohorts with a short follow-up time, introducing the potential for reverse-causation bias.
Especially for HDPs, participants at high risk of developing T2D could have changed their dietary habits towards more health-promoting food groups but still developed the disease. However, this could not be confirmed by the results of our meta-regressions on several cohort characteristics (region, follow-up time, age, BMI); follow-up time explained a considerable proportion of heterogeneity for only two UDPs (UDP5, UDP6). Overall, the magnitude of the pooled risk estimates was much smaller than in the original studies. However, comparability is constrained, since the risk estimates are given per 1 SD increase and the SD depends strongly on the population distribution of the respective DPs. We were also restricted to analyses assuming a linear association between the DPs and T2D, owing to the federated approach and the solutions that could be realised with DataSHIELD. Hence, generalizable conclusions based solely on the magnitude of the meta-analytic risk estimates should be drawn with caution, and no quantitative recommendations can be deduced for public health guidance. We therefore mainly base our conclusions on the consistency of the direction of associations: in the meta-analyses with significant pooled risk estimates, the majority of included cohorts also pointed towards a higher risk. Another limitation was the standardization of FGs for DP score calculation based on the distribution of FG intake in the respective cohorts. This could be a problem if food intake distributions differ extensively between those cohorts and the study population in which a DP had previously been reported, and hence may jeopardize attempts to replicate associations of DPs with disease risk. However, two main reasons were pivotal for this approach. On the one hand, the intake distribution was not provided in most original publications, which rather reported the correlation structure as the basis for the exploratory derivation of DPs. On the other hand, even if this information had been provided, further limitations would arise: most studies applied non- or semi-quantitative dietary assessment instruments, so the reported intake distributions would not provide valid estimates of absolute intake. Furthermore, the dietary assessment instruments themselves differed between the cohorts, and nothing is known about their comparability in estimating food intake. A final limitation of this study was the high exclusion rate of 46.9%; a potential selection bias due to missing follow-up time, covariates or food intake data can therefore not be ruled out.

Conclusion

To our knowledge, this is the first study replicating population-specific associations of exploratory DPs with T2D risk across a large number of cohort studies from different continents. Our meta-analyses of harmonized individual-level data from various cohorts revealed a higher T2D risk for several DPs characterized by higher intake of red meat, processed meat, French fries and refined grains (comprising refined grain bread and refined grain breakfast cereals). These results confirm former study-specific findings in a generalizable context and thereby strengthen the evidence for DPs related to higher T2D risk. However, none of the inverse associations of the investigated HDPs could be confirmed across the different cohorts.
Author contributions

The authors' responsibilities were as follows: MBS, NJW and NGF designed the research; SD and AF evaluated the meta-data; SD, AF, TRPB, MP and GOD harmonized the InterConnect data; SD analyzed the data; SD, FJ and MBS wrote the manuscript and have primary responsibility for the final content; all authors interpreted the results, critically revised the article for important intellectual content, and read and approved the final manuscript. The corresponding authors attest that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding

MBR and MAMG acknowledge that the SUN Project has received funding from the Spanish Government-Instituto de Salud Carlos III and the European Regional Development Fund (FEDER) (RD 06/0045, CIBER-OBN, Grants PI10/02658, PI10/02293, PI13/00615, PI14/01668, PI14/01798, PI14/01764, PI17/01795, PI20/00564 and G03/140), PNSD-2020/021, the Navarra Regional Government, and funding for epidemiological studies on dairy products and cardiometabolic diseases from the Dutch Dairy Association and the Danish Dairy Research Foundation. PMV and PV acknowledge funding from GlaxoSmithKline, the Faculty of Biology and Medicine of Lausanne, and the Swiss National Science Foundation (grants 33CSCO-122661, 33CS30-139468, 33CS30-148401 and 33CS30_177535/1). MK and the Whitehall II study were supported by the UK Medical Research Council (MRC MR/R024227/1), the Wellcome Trust (221854/Z/20/Z) and the US National Institutes of Health (NIH, RF1AG062553, R01AG056477) during the conduct of the study. The funding sources did not participate in the design or conduct of the study; collection, management, analysis, or interpretation of the data; or preparation, review, or approval of the manuscript.

Availability of data and material

Due to the federated and collaborative design of this InterConnect study, data and material cannot be made accessible. Individual study meta-data may be available upon request from the individual study PIs.

Code availability

The analysis code can be provided on request.

Conflict of interest

The authors declare no conflict of interest.

Ethics approval

All cohorts obtained ethical review board approval at the host institution and written informed consent from participants.

Consent to participate

All participants in the individual cohorts gave their signed informed consent at recruitment.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
// Parse implements the Syncer interface. It decodes a single frontier
// from buf; it returns done == true when the zero frontier (the
// end-of-stream sentinel) is encountered, and otherwise hands the
// decoded frontier to the registered callback.
func (s *FrontierSyncer) Parse(buf []byte) (bool, error) {
	s.current = new(block.Frontier)
	if err := s.current.UnmarshalBinary(buf); err != nil {
		return false, err
	}
	// A zero-value frontier marks the end of the frontier stream.
	if s.current.IsZero() {
		return true, nil
	}
	// Deliver the frontier to the consumer and keep parsing.
	s.cb(s.current)
	return false, nil
}
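The zero-value frontier thus acts as an in-band end-of-stream sentinel. The snippet below is a self-contained toy analogue of this parse-until-sentinel pattern; all types and names are invented stand-ins, not the real block.Frontier API.

package main

import "fmt"

// frontier is an invented stand-in for the real block.Frontier type.
type frontier struct{ hash [32]byte }

// isZero mirrors the IsZero check: the all-zero value is the sentinel.
func (f frontier) isZero() bool { return f == frontier{} }

// parseAll drives a Parse-style loop: each element yields one frontier,
// and the zero frontier terminates the stream.
func parseAll(stream []frontier, cb func(frontier)) {
	for _, f := range stream {
		if f.isZero() {
			return // done: sentinel reached
		}
		cb(f) // hand the frontier to the consumer
	}
}

func main() {
	stream := []frontier{{hash: [32]byte{1}}, {hash: [32]byte{2}}, {}}
	parseAll(stream, func(f frontier) {
		fmt.Printf("got frontier starting %x\n", f.hash[:2])
	})
}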
Single access laparoscopic total colectomy for severe refractory ulcerative colitis

BACKGROUND
Single port laparoscopic surgery allows total colectomy and end ileostomy for medically uncontrolled ulcerative colitis solely via the stoma site incision. While intuitively appealing, there is sparse evidence for its use beyond feasibility.

AIM
To examine the usefulness of single access laparoscopy (SAL) in a general series of patients severely ill with ulcerative colitis.

METHODS
All patients presenting electively, urgently or emergently over a three-year period under a colorectal specialist team were studied. SAL was performed via the stoma site on a near-consecutive basis by one surgical team using a surgical glove port, allowing group-comparative and case-control analysis against a contemporary cohort undergoing conventional multiport surgery. Standard, straight rigid laparoscopic instruments were used without additional resources.

RESULTS
Of 46 consecutive patients requiring surgery, 39 (85%) had their procedure begun laparoscopically. Twenty-seven (69%) of these were commenced by single port access, with an 89% completion rate thereafter (three were concluded by multi-trocar laparoscopy). SAL proved effective in comparison with multiport access regardless of disease severity, providing significantly reduced operative access costs (> €100/case) and postoperative hospital stay (median 5 d vs 7.5 d, P = 0.045) without increasing operative time. It proved especially efficient in those with preoperative albumin > 30 g/dL (n = 20). Its comparative advantages were further confirmed in ten pairs case-matched for gender, body mass index and preoperative albumin. SAL outcomes proved durable in the intermediate term (median follow-up = 20 mo).

CONCLUSION
Single port total colectomy proved useful in planned and acute settings for patients with medically refractory colitis. Assumptions regarding duration and cost should not be barriers to its implementation.

INTRODUCTION
The acceptance of the clear advantages of laparoscopy over open surgery for patients with inflammatory bowel disease, particularly in the acute setting, has been relatively recent. For patients undergoing a total abdominal colectomy for ulcerative colitis (UC), a laparoscopic approach is associated with lower overall complication and mortality rates. However, surgical technique and technology continue to undergo evolutionary change. Single access laparoscopic (SAL) surgery is a recently modified access technique that groups laparoscopic instrumentation at a single confined site in the abdomen in order to further minimize the parietal wounding associated with intraperitoneal surgery. Meta-analyses demonstrate that, overall, SAL for segmental colorectal resection shows no difference from standard multiport approaches in conversion to open laparotomy, morbidity or operating time, but a significantly shorter total skin incision and a shorter postoperative length of stay. As the size of an ileostomy approximates that of a single port access site, total colectomy with end ileostomy should be ideally suited to this access modality. Early reports demonstrated that, with judicious patient selection and considered operative technique, SAL total colectomy for medically refractory UC can be safely performed. To date, however, published analyses have predominantly focused on feasibility and technical adequacy in small series, mostly in the elective setting and mostly without a concurrent comparative cohort.
Here we analyze, including case-matching, our experience of SAL in a consecutive series of patients requiring planned, urgent or emergency total colectomy for refractory UC, in comparison with contemporaneous patients in the same departments undergoing multiport access colectomy. The purpose of this study is to examine the role of this access in an all-comers experience reflective of general practice in patients with UC, including those with acute severe colitis and those with severe disease and systemic toxaemia in a debilitated condition, as indicated by symptoms, endoscopy and biochemistry including albumin and inflammatory markers. This is a retrospective study of a clinical experience whose details were recorded prospectively.

MATERIALS AND METHODS
All patients presenting for total colectomy with end ileostomy for medically refractory severe UC to a tertiary referral centre over a 36-mo period were considered for inclusion, regardless of urgency of presentation. Patients requiring surgery for dysplasia or neoplasia were excluded. Laparoscopic surgery is the standard approach for all colorectal resections in the departments, although only one surgeon had trained in SAL. All procedures were performed either solely by a Senior Resident alongside the scrubbed consultant or shared between the two depending on procedure circumstance, difficulty and duration, as is our typical practice within a teaching hospital.

Preoperative preparation
Patients already in the hospital and those referred as out-patients for planned resections were prepared for surgery similarly, with the latter routinely admitted on the morning of surgery. Preoperative mechanical bowel preparation for bowel cleansing was not utilized. All patients were marked for optimal stoma site by a specialist nurse practitioner or a senior member of the surgical team. A formal Enhanced Recovery Programme with a dedicated nurse specialist was in place over the duration of the study period and implemented uniformly across all surgical teams. All patients received standard anti-thrombosis and antimicrobial prophylaxis and underwent general anesthesia without epidural/spinal anesthesia. The anaesthetized patient was placed in a Trendelenburg position on an anti-slip beanbag and painted and draped in the standard fashion.

Single port access device
The single port access device of preference was the "Surgical Glove Port". Constructed at the table-side, it comprises a standard surgical glove into which laparoscopic trocar sleeves (one 10 mm and two 5 mm) are inserted, without needing obturators, through three fingers cut at their tips (Figure 1). The ports are tied in position using strips cut from the other glove of the pair, and the cuff of the glove port is stretched onto the outer ring of a wound protector-retractor (Alexis XS, Applied Medical) sited in the operative access wound.

Single port procedure
A local anesthetic block (bupivacaine) was infiltrated around the intended incision site in the right iliac fossa, at the site planned for eventual stoma maturation. A 3 cm skin and fascial incision was measured and made at the appropriate site. On securing safe entry into the peritoneum, a wound protector-retractor was placed into the wound and its outer ring adjusted down to the abdominal wall. The glove port was then stretched onto the outer ring.
The operation was performed using standard rigid laparoscopic instruments and a 10 mm 30° high-definition laparoscope (where possible the Endoeye, Olympus Corporation, which has sterilized in-line optical cabling), along with an atraumatic grasper and an energy sealer-divider (Ligasure, Covidien). Total colectomy with end ileostomy for colitis recalcitrant to medical therapy was performed as previously described. In brief, early rectosigmoid transection was achieved by laparoscopic stapling at the level of the sacral promontory. Thereafter the operation proceeded quadrant by quadrant, working in a close pericolic plane, distal to proximal, until the caecum was reached. After intracorporeal stapling across the distal ileum, the entire colon was withdrawn "caecum first" via the stoma site. Relaparoscopy was performed via the stoma site and the rectal stump checked, in addition to peritoneal lavage and haemostasis control. The end ileostomy was then matured at the single port access site (Figure 2).

Multiport procedure
The multiport procedure was performed in a conventional fashion, typically beginning with open induction of the pneumoperitoneum at a subumbilical site and thereafter typically employing four additional trocars of between 5 and 10 mm diameter (two on the left side and two on the right). The specimen was extracted either via the stoma site incision or via a separate incision (most commonly a dedicated Pfannenstiel, suprapubic or subumbilical incision). Local anaesthetic was infiltrated at all wounds on completion of the procedure.

Access selection
SAL was the preferred commencement access of RAC in patients considered potentially suitable (excepting exceptional cases), and so this approach required that this surgeon be available. As many patients with UC can undergo planned rather than immediate operation, the majority of patients could be considered for this approach. There was no preferential referral to any particular surgeon; patients tended to be seen by the surgeon taking acute referrals at the time of surgical need.

Postoperative management
All patients were managed postoperatively using an enhanced recovery protocol. Analgesia was by means of patient-controlled analgesia, transitioning to oral medicines once oral diet commenced. Patients with extraction site or laparotomy wounds had local anaesthetic infusion catheters placed at the time of wound closure. Nasogastric tubes were routinely removed at procedure completion and patients were mobilized within the first 6-12 h after surgery. Oral intake was commenced on demand within six hours of surgery and built up steadily as tolerated thereafter. Urinary catheters were removed on the first postoperative day. Intra-abdominal drains and transanal decompressive catheters were placed at the surgeon's judgment and were removed on or before the third postoperative day.

Ethical considerations
Departmental approval was agreed in advance of this experience. The technique of SAL was not itself considered experimental, as it is a variation of standard multiport laparoscopy that has already been proved valid and feasible and is in common use for other resectional procedures in the department. All patients were fully consented regarding the approach and informed of alternatives.
As the intention in treatment was always to ensure safe, effective and efficient surgery, all patients were assured a low threshold for conversion if any deviation from the operative plan was encountered. The authors have no conflicts of interest or relevant disclosures to declare with respect to this report.

Data collection and analysis
Patient demographics, along with clinical, haematological, biochemical and radiological profiles and disease characteristics, were recorded prospectively on a dedicated database, in addition to operative and postoperative details. Access equipment and length-of-stay costs were determined by the directorate business manager. Postoperative complications were classified according to Clavien-Dindo. Unless otherwise stated, data are presented as median (range) and n represents the number of patients included in the analysis. Differences in categorical variables were evaluated using Pearson's chi-squared test and differences in continuous variables were evaluated using Mann-Whitney U and Student's t-testing where appropriate (the latter for comparison between paired patients). All calculations were done using SPSS version 12.0 (SPSS, Inc., Chicago, IL, United States).

RESULTS
Over the thirty-six month study period, 46 patients with confirmed UC required scheduled, urgent or emergency total colectomy with end ileostomy by a colorectal specialist consultant for medically uncontrolled severe disease alone. The median age was 38 years and the median (range) body mass index was 22.8 (17.3-38.9) kg/m². Twenty-six patients were male. Nine patients had acutely severe disease with clinically deteriorating condition, toxaemia and low preoperative albumin (< 30 g/dL). Thirteen patients had their surgery performed on scheduled lists while the others were operated on either urgently (n = 25) or emergently (n = 8). Overall, co-morbidity was low (one patient had multiple sclerosis and two had asthma). Only five patients had had prior abdominal surgery (one a prior midline laparotomy; another was a renal transplant recipient). All patients were considered for a laparoscopic approach ab initio, with 39 (85%) having their procedure commenced in this fashion at the attending surgeon's discretion. Twenty-nine of these patients were already inpatients under the care of the gastroenterology service for an acute exacerbation; the other ten were admitted specifically for surgery. Twenty-seven patients (59% of the total cohort, 69% of those having laparoscopic surgery) had their procedure begun via a single port approach (three on scheduled lists), with a completion rate thereafter of 89% (Table 1). The SAL patients were begun consecutively on a non-selected basis, with the exception of two patients (7% of this cohort) who had their operation commenced by multiport laparoscopic access due to exceptional comorbidity (one had concurrent acute bilateral ileofemoral deep venous thrombosis and steroid psychosis; the other had congenital micrognathia and oesophagotracheal atresia with a long-term feeding jejunostomy); both were in fact converted to open operations due to extreme friability of the colon. The three "converted" SAL patients had between one (n = 2) and three additional trocars inserted, for reasons of difficult splenic flexure mobilization, intra-operative evidence of colitis-related perforation, and extensive adhesiolysis (related to prior open nephrectomy for trauma), respectively.
All patients in the SAL group had their specimens removed via the stoma site incision. Ten other patients had their operation performed by a multiport approach (no conversions), while the remaining seven had their operations commenced via laparotomy by other surgeons in the department (Table 2). The characteristics of the patients undergoing surgery are shown by access (both at start and by completion) in Table 1, and postoperative complications for patients undergoing laparoscopic surgery are shown in Table 3. Overall, there was no significant difference between the groups in terms of age, gender, body mass index (BMI) or preoperative disease-suppressant medications, and the postoperative morbidity was predominantly reflective of the severity of the disease process rather than of the operative access route. One patient in the single port group (4%) required an early return to theatre for a fascial release for an oedematous stoma, while, after a median follow-up of 20 mo (range 5-40 mo), two patients (7%) who had single port surgery have had a parastomal hernia requiring repair (one repaired at the same time as completion proctectomy). One patient in the multiport group has complained of a parastomal hernia after an overall mean follow-up of 19 mo (range 1-25 mo). As compared with patients with preoperative albumin > 30 g/dL, those having laparoscopic surgery with preoperative albumin < 30 g/dL (n = 9, seven of whom had their procedure started by SAL, with one in this group converted to multiport access) were significantly more likely to be anaemic (median preoperative haemoglobin 10.4 vs 12.25, P = 0.002) and to have elevated preoperative (median 10 vs 51, P = 0.03) and postoperative C-reactive protein (CRP) levels (Figure 3). They were also more likely to have an urgent or emergent operation and to be converted from their initial access approach, whether started by multiport or single port. As a group overall, patients having their surgery by single port access had a significantly shorter postoperative hospital stay (5 d vs 7.5 d, P = 0.045), an advantage especially evident in those who were non-toxic (P = 0.034) and those who also had their surgery completed by this access (P = 0.005). Furthermore, these patients were significantly more often discharged on or before day 5 than patients undergoing multiport surgery (P = 0.04, Pearson chi-square). While the single port patients as a group showed trends towards reduced operative time (P = 0.46) and total theatre occupancy (P = 0.85), these did not reach statistical significance. There was also no significant difference overall in terms of resumption of bowel function, postoperative pain scores, analgesia requirements, daily CRP levels or complications.

[Table 2: Characteristics of patients undergoing total colectomy and end ileostomy for medically refractory colitis by laparotomy (either at commencement or by completion); laparotomy commenced, n = 7; laparotomy completed, n = 9.]
Interestingly, although patients who were toxic and underwent single port surgery had a significantly longer hospital stay (median 9 d, P = 0.03), as well as higher CRP levels on each day before and after surgery, than non-toxic patients having the same operation by the same access approach, there was no significant difference in operating time or postoperative length of stay between these patients and those having multiport access (whether the multiport group overall or those with preoperative albumin > 30 g/dL), with median hospital stays of 7.5 and 7 d respectively. Case-matching for gender, albumin > 30 g/dL and BMI (± 3 kg/m²), in addition to commencement and completion by method of laparoscopic access, surgery type and indication, yielded 10 pairs for analysis. Comparison between the groups again showed a significant difference in favour of single port surgery for postoperative length of stay, both by group medians (P = 0.02, Student's t-test) and by discharge on or before day 5 (P = 0.02, Pearson chi-square), with no significant difference in either operative time or total theatre occupancy. While there was no significant difference in opiate requirement or pain score, the trend favoured single port access for opiate requirement (day 3, P = 0.07). Economically, the cost of the glove port per case is €63.80 (comprising a wound protector with three trocar sleeves). Assuming the use of disposable trocars, as compared with a four-port trocar technique (comprising a balloon Hassan port, a 12 mm port with obturator for stapling, one 5 mm trocar with obturator and another without), there is a cost saving of €101.10 per case (a wound protector is also used in the latter cases, while both techniques require two stapler fires, an energy sealer and suction/irrigation). The cost of a 24-h stay in our unit has been averaged at €950. Therefore, the total cost saving when a SAL total colectomy is compared with its case-matched multiport equivalent is €2476.10, i.e. the €101.10 access saving plus the 2.5-d difference in median stay at €950 per day (€2375).

DISCUSSION
Aside from isolated cases and small series describing elective colectomy for colitis, the effectiveness and appropriateness of SAL for severe colitis has only recently begun to be specifically reflected in the literature. Its practitioners view SAL as particularly useful for these individuals, who are often slim and young, without previous laparotomy, and who value body image. Psychologically, a minimally invasive approach may also seem less traumatic. Many in addition will need their surgery performed urgently, at a time when they are physically and immunologically debilitated and so have impaired capacity for wound healing. Furthermore, such patients have to come to terms with managing a stoma in the early postoperative period, and an ability to concentrate on this alone rather than on additional abdominal wall wounds may be advantageous. Many in this group will also need further surgery in the future for proctectomy, with or without restorative ileal pouch-anal reconstruction. Preservation of the majority of the abdominal wall to facilitate future surgery, along with the minimization of peritoneal adhesions, could therefore be advantageous. SAL may therefore be particularly relevant to this patient cohort. While prior series have compared patients undergoing SAL and multiport total colectomy or total proctocolectomy with ileal pouch-anal anastomosis, these have predominantly been confined to the elective setting.
The current data represent an all-comers experience, including both planned and urgent total colectomies for ulcerative colitis, whether or not the procedure could be included on a scheduled list. Importantly, no patient in this cohort was purely elective, in that all suffered a debilitating disease requiring operative intervention; indeed most were already inpatients under the gastroenterology service or urgent transfers from outside institutions, and were therapeutically immunosuppressed. This is why these patients were chosen to undergo total colectomy and end ileostomy, whereas patients presenting purely electively for surgical relief of ulcerative colitis can undergo panproctocolectomy with ileo-anal pouch formation as part of a two-stage procedure towards gastrointestinal reconstitution (rather than the three-stage approach that is our practice in the sicker, medically refractory group). The current data demonstrate that, both overall and when matched for gender, preoperative albumin, BMI and method of completion, SAL was directly applicable to this patient group and provided shorter postoperative lengths of stay, without increased operative time, than the same operation for the same disease performed by a multiport access approach. Preoperative albumin level is a reasonable indicator of preoperative clinical deterioration on which to case-match disease severity: in general, pre-operative hypoalbuminaemia is associated with increased surgical site infection following gastrointestinal surgery and, specifically in patients undergoing laparoscopic total abdominal colectomy for ulcerative colitis, with reoperation. Furthermore, prior series have shown that a higher pre-operative serum albumin is associated with performance of a laparoscopic approach. This study shows that, while the advantages of single port access are particularly evident in those undergoing surgery in a less toxic state, single port access can also be implemented in sicker patients without significant compromise of theatre or hospital efficiency as compared with multiport total colectomy, although the numbers are too limited to define a specific comparative advantage in relation to wound healing in this cohort. The current experience has therefore shown that SAL allows completion of surgery via the stoma site alone as the only point of transabdominal access, obviating any additional port sites, in the majority of cases. While not the same magnitude of advance that laparoscopy represents over laparotomy (prior to the introduction of laparoscopy as the access of preference in 2010, the median length of stay for this category of operation in our unit was ten days), there are nonetheless advantages for both the patient and the healthcare provider. Although the morbidity associated with 5 mm internal diameter trocars is considered minimal, colorectal surgery typically requires a stapler and/or clip applicator and so mandates at least one extra 12 mm port, a diameter more likely to be associated with postoperative complications including discomfort, infection and fascial herniation.
Furthermore, the sole site of abdominal wounding is confined to one small area of the parietal wall, a factor likely to favor effective local postoperative analgesic techniques and reduced opiate requirements, although the current data did not show a statistically significant difference in this parameter between the groups (indeed it is difficult from these data to be specific about why exactly the confined access route translated into significantly shortened postoperative hospital stays). Although SAL has been demonstrated feasible for colorectal surgery in general, some experts continue to feel it is undermined by the current expense of the commercially available devices. Our choice of access port obviates this issue, proving in fact cheaper than the multiport equivalent, as the surgical glove port needs only trocar sleeves rather than the otherwise necessary obturators. In addition, because these ports are placed into the glove space (and so are in fact extracorporeal) rather than into the patient, the risk of visceral or vascular injury at the time of trocar placement is reduced. The main advantage of this innovative access modality, however, is its performance, which in our experience is better than the commercial equivalent by virtue of its elasticity and lack of a fulcrum point (permitting enhanced horizontal, vertical and rotational maneuverability as well as augmented instrument tip ab/adduction and triangulation), while being equally stable and durable during a case. Furthermore, the device is always available (without needing pre-purchasing), applicable to every patient regardless of body wall depth (due to the adaptable wound protector-retractor component), and associated with no financial penalty if conversion to a multiport or open operation is required by the specifics of the patient or case. There were also no costs accrued through loss of theatre efficiency; in fact the operative time of a SAL total colectomy tended to be shorter than its multiport equivalent (although, interestingly, any potential gain in this respect was offset by the fact that overall theatre occupancy was the same, reflecting the need for engagement and focus of the entire perioperative team to maximize any potential gain associated with innovation in operative access). One of the primary delays following colorectal resection is patient education in ostomy care. The shortened hospital stay associated with a laparoscopic approach, particularly SAL, can increase demands on the stoma education service, which traditionally has had several days to get to know the patient and provide appropriate training. However, the dedicated nurse practitioners in our unit have responded to this issue by providing additional visits, commencing preoperatively. The reduced period of ileus facilitates early eating and increased opportunity to gain experience in ostomy management. While the relatively small number of patients in this study period is a limitation, this experience still represents the largest reported experience of single port total colectomy with end ileostomy for recalcitrant ulcerative colitis to date. Even the published experience of multiport total colectomy is relatively small, as these cases present relatively infrequently even in large centres, with most groups publishing figures that at most approximate 20 cases annually.
There is, in addition, a possible bias in that the choice of surgical approach reflected surgeon experience. We have tried to control for this by including case-matched analysis rather than crude group analysis alone. Furthermore, the operations presented here were never done solely by one operator: they included resident performance of the majority of the procedure in most cases, as is routine for all cases in our university teaching hospital. Postoperative care was shared for all patients, with common pathways and protocols in addition to joint ward rounds and allied health professional input in all cases. Certainly, further experience with larger patient numbers is required to understand why exactly patients are significantly more likely to be discharged earlier when having their surgery by single access versus the conventional, standard multiport approach. Lastly, single port access can itself impose technical limitations on surgeons, and its usefulness of course relates to experience across the discipline; our practice includes its employment in elective surgery for neoplasia, for part or the entirety of the operation, in addition to its employment for multiquadrant operating as for this indication. We have found empirically, however, that its need for only two experienced surgeons and a very limited instrument set-up is a positive feature for urgent operations, which in our institution often take place at inconvenient times and in general, non-specialist and emergency operating theatres. In conclusion, SAL represents an adapted laparoscopic access technique that can safely and effectively allow total colectomy with end ileostomy in the majority of patients with medically uncontrolled ulcerative colitis in both scheduled and acute settings. Not only does it not need to be associated with increased costs, either in terms of access devices or theatre efficiency, it can in fact be an economically favorable option that enables earlier discharge from hospital.

CONCLUSION
SAL was confirmed as a therapeutic option for the surgical approach to patients with UC and should be considered more often where the skillsets and technology exist.

Research background
Single access laparoscopy (SAL) is a modification of standard laparoscopy that has not been studied in detail for total colectomy in patients severely ill with ulcerative colitis (UC). Here we examine its impact in this patient cohort.

Research motivation
Clinical outcomes were examined along with measures of operative efficiency to define the comparative advantages of the SAL approach for this surgery.
Q: Security model for datacenters I have several datacentres and now I want to plan security for them, so they are resilient to external attacks. I need to make a security model which will cover my web servers, auth servers, log servers, routers, databases and so on running on the Internet. At minimum it should include:

- coverage across the OSI layers
- source code handling and certification
- secure operating system builds
- network policy
- SQL schema encryption

Basically, there would be security on each layer, these layers would have to be linked, and the process managing them would have to be robust. For the Secure Development Lifecycle I have seen Microsoft's SDL - is this appropriate for that stage? User management and physical access are simple to understand and follow, so they're not the big concern - I have good alarm systems as well as monitoring - but I do lack the above at the moment.

A: OK, it sounds like you are asking for a policy, plan, and practices for secure system administration of a data center. I have some suggestions for you:

Start with a policy. Start by thinking out your security policy. Develop a written security policy, and gain approval from management. Take a look at SANS's resources on security policies. They'll give you some ideas of things that might make sense for you.

Security plan. Develop a security plan. I suggest you start by building up an inventory of the data and systems you store, with some idea of how critical each is to the mission of your business. This will help you, because you should plan to devote the most energy to securing your most mission-critical assets. Next, give a little thought to which kinds of attacks or threats should receive the highest priority (either the most likely, given your situation, or the ones that would be most serious). You can brainstorm the threats you are most likely to face (e.g., who will have an incentive to attack you?), and use this to help you develop a plan for securing your organization. Once you have inventoried your assets and prioritized the top security issues you are likely to face, develop a plan to mitigate the risks and protect your organization from these attacks. Take a look at the SANS list of top 20 security controls; they might form some elements of your plan, or might give you ideas for how you can protect your assets from these threats.

Execute. Next, implement your plan. You don't need to do everything at once; it is fine to pick a piece of your plan, execute on it, and gradually grow your security maturity level.

Security training. It might be helpful to have some training on good security practices for system administrators. I'm not the right person to ask, but others might have some suggestions. I think SANS has a good reputation for professional training in this area.

Resources for additional information. Take a look at Security policy for system administrators on this site. You might want to take a look at the questions tagged security on ServerFault. ServerFault is a sister site where many professional system administrators hang out, and they have some good resources on topics related to security oriented toward sysadmins. You should be aware of professional organizations in this area, and consider joining them and making use of their resources. Look at SANS; they have many resources available on their web page. Also, look into LISA, a Usenix professional association for system administrators.
They have excellent conferences, good networking opportunities, and chances to keep up-to-date on the latest technology. They also have a booklet series with some information for sysadmins; see, e.g., System Security: A Management Perspective.

What about Microsoft SDL? Microsoft's SDL is fantastic -- but it is really oriented toward software development. I don't think it's going to be as useful to system administrators, so it might not be quite the right resource for you.
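To make the inventory-and-prioritize step concrete, here is a loose sketch (all names, fields and scoring invented for illustration, not a standard) of how one might record data-center assets and rank them so the most critical, most exposed systems are addressed first in the plan:

package main

import (
	"fmt"
	"sort"
)

// Asset is a hypothetical inventory record.
type Asset struct {
	Name        string
	Kind        string // e.g. "web", "auth", "log", "db", "router"
	Criticality int    // 1 (low) .. 5 (mission-critical)
	Exposure    int    // 1 (internal only) .. 5 (internet-facing)
}

// riskRank orders assets by a simple criticality*exposure score,
// highest first, to drive the order of mitigation work.
func riskRank(assets []Asset) {
	sort.Slice(assets, func(i, j int) bool {
		return assets[i].Criticality*assets[i].Exposure >
			assets[j].Criticality*assets[j].Exposure
	})
}

func main() {
	inventory := []Asset{
		{"customer-db", "db", 5, 2},
		{"edge-router", "router", 4, 5},
		{"log-archive", "log", 2, 1},
		{"auth-server", "auth", 5, 4},
	}
	riskRank(inventory)
	for _, a := range inventory {
		fmt.Printf("%-12s %-7s risk=%d\n", a.Name, a.Kind, a.Criticality*a.Exposure)
	}
}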
package io.spotnext.jfly.ui.generic;

import io.spotnext.jfly.ComponentHandler;
import io.spotnext.jfly.ui.base.AbstractComponent;
import io.spotnext.jfly.ui.base.AbstractContainerComponent;

/**
 * A generic container component whose output tag name is configurable; an
 * optional wrapper element can be rendered around its children.
 */
public class GenericContainer extends AbstractContainerComponent<AbstractComponent> {
	private String tagName;
	private boolean useWrapper = false;

	public GenericContainer(ComponentHandler handler, String tagName) {
		super(handler);
		this.tagName = tagName;
	}

	public String getTagName() {
		return tagName;
	}

	public boolean isUseWrapper() {
		return useWrapper;
	}

	public void setTagName(String tagName) {
		this.tagName = tagName;
	}

	/**
	 * Indicates that the renderer should create an additional wrapper element
	 * around the child components.
	 */
	public void setUseWrapper(boolean renderWrapper) {
		this.useWrapper = renderWrapper;
	}
}