import java.util.Scanner;

public class SolveQ {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        String inp = sc.next();          // e.g. "||+|=|||"

        int plusIndex = inp.indexOf('+');
        int equalIndex = inp.indexOf('=');

        // Stick counts: A is the number of sticks before '+'.
        // B and C each include the '+' / '=' sign in their count,
        // so the extra 1 on each side cancels in the comparisons below.
        int A = plusIndex;
        int B = equalIndex - plusIndex;
        int C = inp.length() - equalIndex;

        if (A + B == C) {
            // Equation is already correct.
            System.out.println(inp);
        } else if (C == A + B + 2) {
            // Move one stick from C to A: prepend a stick, drop the last one.
            System.out.print('|');
            System.out.print(inp.substring(0, inp.length() - 1));
        } else if (C + 2 == A + B) {
            // Move one stick from the left side to C: drop the last stick of A,
            // or the first stick of B when A has only one stick.
            int move = plusIndex - 1;
            if (move == 0) move = 2;
            for (int i = 0; i < inp.length(); i++) {
                if (i != move) System.out.print(inp.charAt(i));
            }
            System.out.print('|');
        } else {
            System.out.println("Impossible");
        }
    }
}
The foreign ministers of the NATO-Russia Council (NRC) met in Berlin on Friday in the first session after the NRC Summit in Lisbon in November last year.
“Our meeting in Berlin today is an important stepping stone on the way to the true strategic partnership we pledged to develop together in Lisbon. Day by day we are building this modernised relationship for the 21st Century, because we know that by acting together, we can do more for international security,” said the chairman of the NRC, NATO Secretary General Anders Fogh Rasmussen.
Maintaining the NRC tradition of active political consultation on current international issues, ministers discussed the situation in Libya. All participants stressed the need to implement fully the relevant UN Security Council Resolutions.
NATO and Russia also exchanged views on the situation in and around Afghanistan. The security of this country is vital for all NRC members. In a sign of their joint commitment to increased stability in Afghanistan, ministers set in motion the NRC Helicopter Maintenance Trust Fund. This facility, jointly funded by NATO Allies and Russia, will provide vitally-needed maintenance and repair capacity to the Afghan security forces’ helicopter fleet.
Ministers also made progress in other areas of implementing the Lisbon Summit agenda. They approved an updated NRC Action Plan on Terrorism, which strengthens practical cooperation in the shared fight against terrorism.
Finally, ministers also discussed missile defence, giving further guidance to the ongoing work on outlining the future framework for missile defence cooperation between Russia and NATO. They agreed that this work should be taken forward energetically and looked forward to the discussion which NRC defence ministers will have in June.
The Secretary General described the agenda of the debate as “a testimony to our growing cooperation to face common challenges: from the stability of Afghanistan to piracy, proliferation and terrorism”.
This week sees the release of Identity Thief, a new comedy with everyone’s second favourite Teen Wolf, Jason Bateman, and everyone’s favourite Bridesmaid, Melissa McCarthy. We won’t let the fact that the film looks terrible get in the way of this week’s list, which takes a look at the most memorable moments of mistaken identity in movies.
Uma Thurman has praised her Bel Ami co-star Robert Pattinson as 'good-looking' but said she was disappointed that his skin wasn't sparkly like it is in The Twilight Saga movies.
At the tender age of 16, British teenager Nyasha Matonhodze has joined the likes of Kristen McMenamy and Uma Thurman as the new face of Louis Vuitton.
Cannes Film Festival has finally kicked off with stars Salma Hayek, Rachel McAdams and Uma Thurman hitting the red carpet - and even Lady Gaga making an appearance.
Robert Pattinson is snapped in a steamy clinch with Uma Thurman in the first still from their new film Bel Ami.
Uma Thurman film Motherhood took just £88 at the UK box office on its opening weekend earlier this month, though the fact that only one British cinema was given permission to launch it may have something to do with it.
Hollywood's A-list turned out in force for Burberry's ground-breaking 3D show at yesterday's London Fashion Week.
Robert Pattinson is to star opposite Uma Thurman in steamy new movie Bel Ami.
Kill Bill 3 movie in the pipeline?
Uma Thurman has hinted that she might star in a third instalment in the Kill Bill series.
Poor old Uma Thurman – from the cult coolness of Kill Bill to box-office turkeys like this in five years.
Uma Thurman has denied she wed fiance Arpad Busson at the weekend.
Uma Thurman, Jodie Foster and Minnie Driver made the premiere of Motherhood a glamorous all-female affair.
Uma Thurman has been caught doing yoga in the galley of a passenger jet.
It's a classic style, favoured by everyone from Uma Thurman to Helen Mirren, and the simple bob is celebrating its 100th birthday.
Films starring Jim Carrey, Ewan McGregor, Winona Ryder, Richard Gere, Uma Thurman and Ashton Kutcher have been added to the line-up of the next Sundance Film Festival.
Uma Thurman and fiancé Arpad 'Arki' Busson's romance is so hot that they needed to cool off by a moonlit swim while holidaying together in Italy.
I don't understand the City: never have and never will. I don't know how they make all that loot; I've no idea what a hedge fund is or why it can buy you a girlfriend who looks like Uma Thurman or lets you pick up a Picasso like the rest of us pick up a Pringle. I always get lost there, too, disoriented in the concrete canyons or getting my Moorgate mixed up with my Mansion House.
#include "grep.h"
char *validflags = "bchiLlnsv";
void
usage(void)
{
    fprint(2, "usage: grep [-%s] [-e pattern] [-f patternfile] [file ...]\n", validflags);
    exits("usage");
}
void
main(int argc, char *argv[])
{
    int i, status;

    ARGBEGIN {
    default:
        if(utfrune(validflags, ARGC()) == nil)
            usage();
        flags[ARGC()]++;
        break;
    case 'e':
        flags['e']++;
        lineno = 0;
        str2top(EARGF(usage()));
        break;
    case 'f':
        flags['f']++;
        filename = EARGF(usage());
        rein = Bopen(filename, OREAD);
        if(rein == 0) {
            fprint(2, "grep: can't open %s: %r\n", filename);
            exits("open");
        }
        lineno = 1;
        str2top(filename);
        break;
    } ARGEND

    if(flags['f'] == 0 && flags['e'] == 0) {
        if(argc <= 0)
            usage();
        str2top(argv[0]);
        argc--;
        argv++;
    }

    follow = mal(maxfollow*sizeof(*follow));
    state0 = initstate(topre.beg);
    Binit(&bout, 1, OWRITE);

    switch(argc) {
    case 0:
        status = search(0, 0);
        break;
    case 1:
        status = search(argv[0], 0);
        break;
    default:
        status = 0;
        for(i=0; i<argc; i++)
            status |= search(argv[i], Hflag);
        break;
    }
    if(status)
        exits(0);
    exits("no matches");
}
int
search(char *file, int flag)
{
    State *s, *ns;
    int c, fid, eof, nl, empty;
    long count, lineno, n;
    uchar *elp, *lp, *bol;

    if(file == 0) {
        file = "stdin";
        fid = 0;
        flag |= Bflag;
    } else
        fid = open(file, OREAD);
    if(fid < 0) {
        fprint(2, "grep: can't open %s: %r\n", file);
        return 0;
    }
    if(flags['b'])
        flag ^= Bflag;      /* don't buffer output */
    if(flags['c'])
        flag |= Cflag;      /* count matches only */
    if(flags['h'])
        flag &= ~Hflag;     /* do not print file name in output */
    if(flags['i'])
        flag |= Iflag;      /* fold upper-lower */
    if(flags['l'])
        flag |= Llflag;     /* print only name of file if any match */
    if(flags['L'])
        flag |= LLflag;     /* print only name of file if any non match */
    if(flags['n'])
        flag |= Nflag;      /* number output lines */
    if(flags['s'])
        flag |= Sflag;      /* status only */
    if(flags['v'])
        flag |= Vflag;      /* inverse match */

    s = state0;
    lineno = 0;
    count = 0;
    eof = 0;
    empty = 1;
    nl = 0;
    lp = u.buf;
    bol = lp;

loop0:
    n = lp-bol;
    if(n > sizeof(u.pre))
        n = sizeof(u.pre);
    memmove(u.buf-n, bol, n);
    bol = u.buf-n;
    n = read(fid, u.buf, sizeof(u.buf));
    /* if file has no final newline, simulate one to emit matches to last line */
    if(n > 0) {
        empty = 0;
        nl = u.buf[n-1]=='\n';
    } else {
        if(n < 0){
            fprint(2, "grep: read error on %s: %r\n", file);
            return count != 0;
        }
        if(!eof && !nl && !empty) {
            u.buf[0] = '\n';
            n = 1;
            eof = 1;
        }
    }
    if(n <= 0) {
        close(fid);
        if(flag & Cflag) {
            if(flag & Hflag)
                Bprint(&bout, "%s:", file);
            Bprint(&bout, "%ld\n", count);
        }
        if(((flag&Llflag) && count != 0) || ((flag&LLflag) && count == 0))
            Bprint(&bout, "%s\n", file);
        Bflush(&bout);
        return count != 0;
    }
    lp = u.buf;
    elp = lp+n;
    if(flag & Iflag)
        goto loopi;

    /*
     * normal character loop
     */
loop:
    c = *lp;
    ns = s->next[c];
    if(ns == 0) {
        increment(s, c);
        goto loop;
    }
//  if(flags['2'])
//      if(s->match)
//          print("%d: %.2x**\n", s, c);
//      else
//          print("%d: %.2x\n", s, c);
    lp++;
    s = ns;
    if(c == '\n') {
        lineno++;
        if(!!s->match == !(flag&Vflag)) {
            count++;
            if(flag & (Cflag|Sflag|Llflag|LLflag))
                goto cont;
            if(flag & Hflag)
                Bprint(&bout, "%s:", file);
            if(flag & Nflag)
                Bprint(&bout, "%ld: ", lineno);
            /* suppress extra newline at EOF unless we are labeling matches with file name */
            Bwrite(&bout, bol, lp-bol-(eof && !(flag&Hflag)));
            if(flag & Bflag)
                Bflush(&bout);
        }
        if((lineno & Flshcnt) == 0)
            Bflush(&bout);
cont:
        bol = lp;
    }
    if(lp != elp)
        goto loop;
    goto loop0;

    /*
     * character loop for -i flag
     * for speed
     */
loopi:
    c = *lp;
    if(c >= 'A' && c <= 'Z')
        c += 'a'-'A';
    ns = s->next[c];
    if(ns == 0) {
        increment(s, c);
        goto loopi;
    }
    lp++;
    s = ns;
    if(c == '\n') {
        lineno++;
        if(!!s->match == !(flag&Vflag)) {
            count++;
            if(flag & (Cflag|Sflag|Llflag|LLflag))
                goto conti;
            if(flag & Hflag)
                Bprint(&bout, "%s:", file);
            if(flag & Nflag)
                Bprint(&bout, "%ld: ", lineno);
            /* suppress extra newline at EOF unless we are labeling matches with file name */
            Bwrite(&bout, bol, lp-bol-(eof && !(flag&Hflag)));
            if(flag & Bflag)
                Bflush(&bout);
        }
        if((lineno & Flshcnt) == 0)
            Bflush(&bout);
conti:
        bol = lp;
    }
    if(lp != elp)
        goto loopi;
    goto loop0;
}
State*
initstate(Re *r)
{
    State *s;
    int i;

    addcase(r);
    if(flags['1'])
        reprint("r", r);
    nfollow = 0;
    gen++;
    fol1(r, Cbegin);
    follow[nfollow++] = r;
    qsort(follow, nfollow, sizeof(*follow), fcmp);
    s = sal(nfollow);
    for(i=0; i<nfollow; i++)
        s->re[i] = follow[i];
    return s;
}
Palliative Embolization of Arterial Renal Tumour Supply. Palliative occlusion of the arterial renal tumour supply was performed in 10 patients, and the follow-up is reported. Nine of the patients had no subsequent nephrectomy. Spongostan (99% gelatin) was used as the embolic material in 4 patients, with the addition of steel coils in 2. Bucrylate was used in 6 cases. Six patients are alive, with survival presently ranging from 3 to 24 months after embolization. Improvement in survival time cannot be estimated, but local symptoms such as hematuria and pain may be treated in those patients with renal tumours who are not considered for surgery.
Novel semi-analytical optoelectronic modeling based on homogenization theory for realistic plasmonic polymer solar cells. Numerical simulations of plasmonic polymer solar cells (PSCs) incorporating a disordered array of non-uniformly sized plasmonic nanoparticles (NPs) impose a prohibitively long and complex computational demand. To surmount this limitation, we present a novel semi-analytical modeling approach, which dramatically reduces computational time and resource consumption and yet is acceptably accurate. For this purpose, the optical modeling of the active layer incorporating plasmonic metal NPs, described by a homogenization theory based on a modified Maxwell-Garnett-Mie theory, is fed into the electrical modeling based on the coupled equations of Poisson, continuity, and drift-diffusion. Besides, our modeling considers the effects of absorption in the non-active layers, interference induced by the electrodes, and scattered light escaping from the PSC. The modeling results satisfactorily reproduce a series of experimental data for the photovoltaic parameters of plasmonic PSCs, demonstrating the validity of our modeling approach. Accordingly, we apply the semi-analytical modeling to propose a new high-efficiency plasmonic PSC based on the PM6:Y6 PSC, which has the highest power conversion efficiency (PCE) reported to date. The results show that the incorporation of plasmonic NPs into the PM6:Y6 active layer leads to a PCE of over 18%. Apart from the research mentioned above, there are many other experimental studies on plasmonic PSCs, but only a few theoretical studies on their simulation, almost none of which can be successfully applied to realistic plasmonic PSCs. To investigate the effects of certain properties of the incorporated NPs, some simplifications have been made in the theoretical studies in the literature that must be addressed for the modeling of realistic plasmonic PSCs.
Two of these simplifications are the incorporation of plasmonic NPs of the same size and in an ordered array, whereas experimental reports have documented the random blending of NPs of various sizes in the PSCs. Both of these factors cause significant changes in the optical properties and, consequently, the PCE of plasmonic PSCs. Furthermore, the effects of interference and reflection introduced by the different layers on the absorption of the NPs, and the effects of increased trap-assisted recombination due to the NPs on the electrical properties, have to be considered for realistic modeling. Implementing these conditions in the modeling of realistic plasmonic PSCs through numerical methods leads to a prohibitively long and complex computation due to the consideration of an enormous collection of NPs, which is far beyond the reach of even modern computers. To overcome this limitation, in this study we present a novel semi-analytical modeling approach for predicting the performance of realistic plasmonic PSC structures that dramatically reduces computational time and resource consumption and yet is acceptably accurate. Hence, this paper is structured as follows. The modeling is explained in Section II, where the geometrical parameters of the plasmonic BHJ PSC and some assumptions we make to implement the modeling are given in Section II-A, the effect of plasmonic NPs embedded in PSCs on the optical properties is described by an analytical model based on homogenization theory (HT) in Section II-B, and the electrical properties of plasmonic PSCs are obtained in Section II-C by solving the coupled equations of Poisson, continuity, and drift-diffusion. In Section III, to evaluate the applicability of the semi-analytical modeling, its results are compared with the experimental results of a fabricated plasmonic PSC.
Besides, the influence of the NP parameters, including the amount of size dispersion and the concentration of the NPs, on the performance of plasmonic PSCs is discussed. In Section IV, based on the reported PSC with the best performance so far, a new high-efficiency plasmonic PSC is proposed and investigated. Optical modeling of the effect of incorporated plasmonic NPs. The optical properties of an ordered array of metal NPs incorporated into the active layer can be accurately obtained by considering a few NPs, defining their geometry exactly, and then solving the Maxwell equations using numerical techniques such as the finite-difference time-domain (FDTD) method 86, the boundary element method (BEM) 87, the discrete dipole approximation (DDA) 88, etc. In the case of irregularly incorporated NPs of various sizes, a large number of NPs must be considered when defining the geometry of the active layer, and the aforementioned numerical methods become much less tractable due to the strong dependence of the calculation time on the size of the system. To surmount the shortcomings of numerical methods in obtaining the optical properties of a vast collection of plasmonic NPs, analytical approaches based on HTs are a methodical alternative. In the HTs, a complex medium formed by the inclusion of NPs in a material is replaced with a homogeneous medium that has the same optical properties as the complex medium. Therefore, the optical properties of the active layer:NPs composite can be expressed through the HTs by the complex dielectric function of this homogeneous medium, denoted ε_HM. A conventional HT is the Maxwell-Garnett theory (MGT), derived from the Lorentz-Lorenz relations or the Clausius-Mossotti formula 96.
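For orientation, the classic Maxwell-Garnett mixing rule has the standard textbook form below (written in our own notation, not transcribed from this paper's numbered equations); here ε_m is the host (active-layer) permittivity, ε_p the NP permittivity, and f the NP volume fraction:

```latex
\varepsilon_{\mathrm{MG}}(\omega) = \varepsilon_m\,
\frac{1 + 2f\beta}{1 - f\beta},
\qquad
\beta = \frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m}
```

In the dilute limit f → 0 this reduces to ε_m, and to first order in f it reproduces the Clausius-Mossotti polarizability correction.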
It averages over the induced electric dipole moments of individual NPs, without considering the interaction between NPs, to calculate the LSPR band of NPs of the same size 97, so it fails to apply to NPs with size dispersion or with low interparticle distance (a small distance among neighboring NPs corresponds to a high volume fraction of NPs). Furthermore, the MGT is based on the quasistatic limit and ignores retardation effects, so it produces significant errors for NP diameters at which the electrostatic limit is no longer valid and retardation effects become predominant; namely, it is restricted to spherical NPs with diameter (d) much smaller than the wavelength of the incident photons (λ), i.e., d ≪ λ 98-100. For example, d < 6 nm for Au NPs and d < 3 nm for Ag NPs are acceptable sizes for applying the MGT 101. Moreover, the dependence on NP size of the intrinsic confinement effects, induced for very small NPs smaller than the mean free path of the conduction electrons (typically d < 4 nm) 102,103, is not taken into account in the MGT. Indeed, the size of the NPs does not explicitly appear in the MGT. To remove the restrictions mentioned above, different extensions of the MGT have been proposed 97. For example, a corrected version of the MGT that accounts for a dipolar interaction between NPs was obtained by Markel et al. 97, and an extension of the MGT considering both intrinsic confinement and retardation effects, called the Maxwell-Garnett-Mie theory (MGMT), is achieved by replacing the quasistatic electric dipole polarizability with the one obtained from Mie theory. In this paper, to model the optical properties of the active layer:NPs composite, a modified MGMT is developed by considering the size dispersion of the NPs, size-dependent intrinsic confinement for very small NPs, and retardation effects for large NPs. Therefore, it can predict the LSPR band of NPs distributed over a wide range of sizes, from very small to relatively large.
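As a minimal numerical sketch of two ingredients described here, the quasistatic Maxwell-Garnett mixing rule and a Gaussian size-averaging of a size-dependent polarizability, the following Python fragment may help. The function names, the discrete weighting scheme, and the truncation of the Gaussian to [R_min, R_max] are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def mg_effective_eps(eps_np, eps_host, f):
    """Quasistatic Maxwell-Garnett effective permittivity of a dilute
    suspension of spheres (volume fraction f) in a host medium."""
    beta = (eps_np - eps_host) / (eps_np + 2.0 * eps_host)
    return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

def gaussian_size_average(alpha_of_R, R_mean, sigma, R_min, R_max, n=2001):
    """Average a size-dependent polarizability alpha_of_R(R) over a
    Gaussian radius distribution truncated to [R_min, R_max]."""
    R = np.linspace(R_min, R_max, n)
    w = np.exp(-0.5 * ((R - R_mean) / sigma) ** 2)
    w /= w.sum()                        # normalize the discrete weights
    return np.sum(alpha_of_R(R) * w)    # weighted mean polarizability
```

With f = 0 the mixing rule returns the host permittivity unchanged, and averaging a size-independent polarizability returns it unchanged; both are quick sanity checks on an implementation.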
Besides, light scattering by the NPs within the active layer is accounted for by an additional contribution to the modified MGMT. The MGMT gives the complex dielectric function of the active layer:NPs composite in terms of the volume fraction of NPs in the active layer (f), the mean radius of the NPs (R̄), and the frequency- and size-dependent Mie polarizability α_Mie(ω, R̄) 90, where ψ_1 and χ_1 are the first-order Riccati-Bessel functions of the first and second kind, respectively, and m and x are defined in terms of ε_NP(ω, R), the size-dependent dielectric function of the NPs. It should be noted that the effect of intrinsic confinement is included in the MGMT through the use of ε_NP, instead of the dielectric function of the bulk metal (ε_bm), in α_Mie(ω, R). By assuming that the only effect of NP size is on the free electrons, ε_NP can be derived from the Matthiessen rule by modifying ε_bm, described by the Lorentz-Drude model 112, where ω_p is the plasma frequency, γ_0 is the damping constant, A_s is a parameter depending on the scattering process of the electrons at the NP surface 113, and v_F is the Fermi velocity of the free electrons. The effect of the size dispersion of the NPs can be included in the MGMT, Eq. (1a), through mean-field theory. Size dispersion leads to different electric dipole moments for NPs with different radii. Therefore, the average polarizability is calculated by weighting the polarizabilities by the relative abundance of each NP. Hence, assuming a Gaussian distribution with standard deviation σ for the NP radii, α_Mie(ω, R) in Eq. (1b) is replaced by an average over the distribution, where R_min and R_max stand for the smallest and largest NP radius in the size dispersion. The homogenized dielectric function (ε_HM), describing the optical properties of the active layer:NPs composite, must capture all the mechanisms that result from inserting the NPs.
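The size-dependent dielectric function and the Gaussian averaging described above are commonly written as follows in the plasmonics literature (standard forms consistent with the symbols named in the text; the paper's exact numbered equations are not reproduced here):

```latex
% Matthiessen-rule size correction to the bulk Drude damping:
\varepsilon_{\mathrm{NP}}(\omega,R) = \varepsilon_{\mathrm{bm}}(\omega)
  + \frac{\omega_p^{2}}{\omega^{2} + i\gamma_{0}\omega}
  - \frac{\omega_p^{2}}{\omega^{2} + i\gamma(R)\,\omega},
\qquad
\gamma(R) = \gamma_{0} + A_{s}\,\frac{v_{F}}{R}

% Gaussian size-averaging of the Mie dipole polarizability:
\langle\alpha_{\mathrm{Mie}}(\omega)\rangle =
\frac{\int_{R_{\min}}^{R_{\max}} \alpha_{\mathrm{Mie}}(\omega,R)\,
      e^{-(R-\bar{R})^{2}/2\sigma^{2}}\,\mathrm{d}R}
     {\int_{R_{\min}}^{R_{\max}} e^{-(R-\bar{R})^{2}/2\sigma^{2}}\,\mathrm{d}R}
```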
Therefore, in addition to the impact of the plasmonic near-field due to LSPR excitation, accounted for through the MGMT, the effect of light scattering by the embedded NPs within the active layer must be reflected in ε_HM. Because of the size dispersion of the NPs, the absorption mechanism is partly attributed to the enhanced LSPR near-field around the small NPs and partly to light scattering from the large NPs 114-117, which disperses the electromagnetic waves of the incident light. The re-emission of incident light in different directions inside the active layer leads to an increase in the optical path length 20. The effect of the enhanced optical path length due to the specific angular spread of the scattered light can be expressed by the Percus-Yevick correction term 89,90. This term is added to the MGMT, Eq. (1a), to obtain ε_HM for the active layer:NPs composite. Electrical modeling of plasmonic BHJ PSCs. The mechanism of generating electron-hole pairs and their transport in plasmonic BHJ PSCs, as in pristine BHJ PSCs (without plasmonic NPs), is that the photons absorbed by the active layer cause the transition of electrons from the highest occupied molecular orbital (HOMO) of the electron-donating material to its lowest unoccupied molecular orbital (LUMO), creating neutral Frenkel excitons (FEs) with a generation rate G_F. The generated FEs diffuse to the donor:acceptor interface (10-20 nm) and then dissociate into electrons in the LUMO of the electron-accepting material and holes in the HOMO of the electron-donating material on either side of the interface, bound by the Coulomb interaction between them; these are called charge-transfer excitons (CTEs). CTEs recombine to FEs after a finite time unless induced to separate 118,119.
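The transport picture for BHJ devices is conventionally formalized by a one-dimensional drift-diffusion system; a standard textbook form (signs and normalizations may differ in detail from the paper's own numbered equations) is:

```latex
% Steady-state continuity, with R the total recombination rate:
\frac{1}{q}\frac{\partial j_n}{\partial z} = R - G_{CT}\,P_{CT\to e\text{-}h},
\qquad
-\frac{1}{q}\frac{\partial j_p}{\partial z} = R - G_{CT}\,P_{CT\to e\text{-}h}

% Drift-diffusion current densities:
j_n = -q\mu_n n\,\frac{\partial\phi}{\partial z} + qD_n\frac{\partial n}{\partial z},
\qquad
j_p = -q\mu_p p\,\frac{\partial\phi}{\partial z} - qD_p\frac{\partial p}{\partial z}

% Poisson equation with the homogenized permittivity:
\frac{\partial}{\partial z}\!\left(\varepsilon_0\varepsilon_{HM}
\frac{\partial\phi}{\partial z}\right)
= -q\left(p - n + N_D - N_A - n_{RC}\right)
```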
After the dissociation of CTEs into free electrons and holes, the carriers move towards the corresponding electrodes by incoherent hopping between localized states randomly distributed in space, driven by the field arising from the difference of the energy levels of the intermediate layers or the work functions of the electrodes. Therefore, for the electrical modeling, the following mechanisms have to be taken into account: the generation, dissociation, and recombination of CTEs; the generation and recombination of free charges; the drift and diffusion of charges; and the extraction of charges at the electrodes. To consider these mechanisms, several one-dimensional electrical models, differing in the choice of their components, the definition of boundary conditions, and the method of solving the drift-diffusion equations, have been developed in the literature 80,83. In the following, while expressing the coupled equations of continuity, drift-diffusion, and Poisson for obtaining the densities of electrons and holes (n and p) and the electric potential (φ) in plasmonic BHJ PSCs, the effect of the plasmonic NPs on the aforementioned mechanisms will be clarified. The first two coupled equations are the continuity equations for electrons and holes, respectively, and the third is the Poisson equation; j_n and j_p, comprised of drift and diffusion components, are the electron and hole current densities, respectively, q is the elementary charge, μ_n and μ_p are the mobilities of electrons and holes, respectively, N_A, N_D, and n_RC are the densities of ionized acceptors, ionized donors, and charges trapped in recombination centers, respectively, and ε_HM = Re(ε̃_HM) is the homogenized dielectric constant of the active layer:NPs composite, which shows that the plasmonic NPs directly affect the electrical modeling of plasmonic BHJ PSCs through ε_HM. The right-hand side of the continuity equations
describes the generation and recombination processes, where G_CT is the rate of CTEs generated in the active layer, taken equal to G_F, i.e., the conversion efficiency of FEs to CTEs is assumed to be unity. G_F is equivalent to the useful absorption, i.e., the portion of the incident sunlight photons absorbed by the active layer. To calculate G_F, the portion of parasitic photons, including photons absorbed by the non-active layers and scattered photons escaping from the PSC in all directions, is subtracted from the total number of photons incident on the plasmonic PSC. For this purpose, the transfer-matrix formalism, which accounts for all optical interference effects, is implemented to calculate the attenuation in each layer and the transmission and reflection at each layer interface of the plasmonic PSC, with the thickness and complex refractive index (ñ(λ) = η(λ) + iκ(λ)) of each layer as inputs. Therefore, the effect of the NPs on G_CT enters through the complex refractive index of the active layer:NPs composite, ñ_HM = (ε̃_HM)^{1/2}. It is noted that the portion of photons absorbed by the NPs does not contribute to creating FEs and is therefore parasitic, but it is not taken into account in the semi-analytical optoelectronic modeling because Morawiec et al. 130 have shown that it is insignificant in the visible part of the AM1.5G spectrum. P_CT→e-h in the continuity equations is the probability of dissociation of CTEs into free electrons and holes, defined by the branching between k_F, the rate constant for the decay of CTEs to FEs, and k_D, the rate constant for the separation of CTEs into free electrons and holes. The analytical expression for k_D reported by Mihailetchi et al.
131 is implemented in our optoelectronic modeling; it gives the field-dependent separation rate in terms of a and E_b, the separation distance and binding energy of the bound electron-hole pairs, respectively, k_B, the Boltzmann constant, T, the temperature, J_1, the first-order Bessel function, and ∇φ, the gradient of the electric potential in the active layer:NPs composite. As seen in Eq. (10b), the impact of the embedded NPs on k_D, and consequently on P_CT→e-h, is through the homogenized dielectric constant and the electric potential of the active layer:NPs composite. The second term on the right-hand side of the continuity equations describes the recombination process, where R_Lan and R_trap stand for the Langevin bimolecular and trap-assisted monomolecular recombination, respectively 125,132. The recombination of two free opposite charges created from different CTEs is referred to as Langevin bimolecular recombination, and the recombination of a free charge with an immobilized charge at a trap state as monomolecular recombination, where the first is defined as R_Lan = C_Lan(np − n_1 p_1) 133, where C_Lan = q(μ_n + μ_p)/ε_HM stands for the recombination coefficient predicted by the Langevin model, scaled by an additional reduction factor that accounts for the experimentally derived reduced Langevin factor, and n_1 and p_1 are the characteristic electron and hole concentrations, respectively, whose product equals the square of the intrinsic carrier density, i.e., n_1 p_1 = n_i² 138.
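For reference, the Braun-Onsager expressions usually employed for these quantities are as follows (standard forms using the symbols defined in the text; the prefactor conventions in the paper's own equations may differ slightly):

```latex
% Branching ratio for CT-exciton dissociation:
P_{CT\to e\text{-}h} = \frac{k_D(E,T)}{k_D(E,T) + k_F}

% Field-dependent separation rate (Braun/Onsager form):
k_D(E) = \frac{3\,q(\mu_n+\mu_p)}{4\pi\varepsilon_0\varepsilon_{HM}\,a^{3}}\,
         e^{-E_b/k_B T}\,
         \frac{J_1\!\left(2\sqrt{-2b}\right)}{\sqrt{-2b}},
\qquad
b = \frac{q^{3}\,|\nabla\phi|}{8\pi\varepsilon_0\varepsilon_{HM}\,k_B^{2}T^{2}}

% Reduced Langevin bimolecular recombination:
R_{Lan} = C_{Lan}\left(np - n_1 p_1\right),
\qquad
C_{Lan} \propto \frac{q(\mu_n+\mu_p)}{\varepsilon_0\varepsilon_{HM}}
```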
Trap-assisted monomolecular recombination is described by a modification of the Shockley-Read-Hall rate equation (r_SRH(E)) 139 in which a Gaussian density of states is assumed for the recombination centers (DOS_RC(E)) 140. In the resulting integral, the first factor is DOS_RC(E), where E_RC is the center of the Gaussian distribution of recombination centers, taken in the middle of the band gap, and σ_RC is the width of the Gaussian distribution; the second factor is r_SRH(E), where N_RC is the total density of recombination centers, including defects, impurities, and NPs, τ_n (τ_p) is the electron (hole) lifetime, and C_trap stands for the trap-assisted recombination coefficient. As reported by Wu et al. 132, incorporating NPs into the active layer increases the density of recombination centers in the interfacial region of the donor:acceptor blend. Therefore, in addition to the defect and impurity density of the donor:acceptor blend, the density of NPs is included in N_RC 89. Consequently, the embedded NPs also affect τ_n and τ_p, because these are inversely proportional to N_RC 141. It is to be noted that the incorporation of NPs in PSCs changes the number of photo-generated carriers. As a result, both recombination processes in plasmonic PSCs differ from those in the NP-free counterpart PSCs. In addition to the number of photo-generated carriers, R_Lan and R_trap are influenced by the embedded NPs through ε_HM and N_RC, respectively. The embedded NPs also influence the Poisson equation through the charges trapped in recombination centers, n_RC, calculated from the probability of a recombination center being occupied by one electron. To obtain a unique solution of the coupled equations, it is necessary to specify appropriate boundary conditions 126,142.
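A common way to write the trap-assisted term with a Gaussian distribution of recombination centers is the energy-resolved SRH integral below (a standard construction sketched in our notation; the paper's exact capture-coefficient conventions may differ):

```latex
\mathrm{DOS}_{RC}(E) = \frac{N_{RC}}{\sigma_{RC}\sqrt{2\pi}}\,
  e^{-(E-E_{RC})^{2}/2\sigma_{RC}^{2}},
\qquad
r_{SRH}(E) = \frac{C_n C_p\left(np - n_i^{2}\right)}
  {C_n\left(n + n_1(E)\right) + C_p\left(p + p_1(E)\right)}

R_{trap} = \int \mathrm{DOS}_{RC}(E)\; r_{SRH}(E)\,\mathrm{d}E
```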
They are defined for n(z), p(z), and φ(z), by assuming that the contact of the active layer with the ABL is hole-ohmic and its contact with the CBL or cathode is electron-ohmic, at z = 0 and z = t_active, where V is the applied bias, WF_C and WF_A are the work functions of the cathode and anode, respectively, E_gap is the effective energy band gap of the active layer, defined by the difference between the HOMO energy level of the donor polymer (E_HOMO-do) and the LUMO energy level of the acceptor molecule (E_LUMO-ac), and N_c(v) is the effective density of states for electrons (holes). Evaluation of the validity of the semi-analytical optoelectronic modeling. Choosing a fabricated plasmonic BHJ PSC as a test case. Plasmonic BHJ PSCs based on a blend of poly(3-hexylthiophene) and phenyl-C61-butyric acid methyl ester, P3HT:PCBM, with the incorporation of metallic NPs of different volume fractions, sizes, and shapes have been fabricated and extensively studied 20,40,147,148. In the following, we will focus on the conventional structure of the P3HT:PCBM PSC, reported by the Stratakis and Kymakis groups 22,28,149, and the P3HT:PCBM:NPs PSC with the following geometrical parameters and materials: the weight ratio of P3HT:PCBM is 1:1, t_active = 100 nm, and the ABL is poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS). Comparison to experimental data and discussion of modeling results. The validation of our semi-analytical optoelectronic modeling is examined in Fig. 2. At the first stage, the pristine PSC, the ITO/PEDOT:PSS/P3HT:PCBM/Al structure without metal NPs, is simulated with the parameters indicated in Table 1. As can be found in the experimental literature, aspects of the fabrication process of PSCs lead to a range of possible values for each parameter. From these ranges, the values of the parameters in Table 1 are chosen from the literature by comparing the experimental J-V curve with the modeling results to reach the best fit.
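One common choice of ohmic boundary conditions matching this description (a sketch in our notation; the paper's exact expressions are not reproduced here) pins the majority-carrier density to the effective density of states at each contact and fixes the potential drop through the built-in voltage:

```latex
% Potential drop across the device (V = applied bias):
\phi(t_{active}) - \phi(0) = \frac{WF_A - WF_C}{q} - V

% Hole-ohmic contact at z = 0, electron-ohmic contact at z = t_active:
p(0) = N_v, \qquad n(0) = N_c\, e^{-E_{gap}/k_B T},
\qquad
n(t_{active}) = N_c, \qquad p(t_{active}) = N_v\, e^{-E_{gap}/k_B T}
```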
At the second stage, the experimental data for the J-V characteristics of the P3HT:PCBM:NPs PSC with 5% Au NP concentration, reported by Paci et al. 22 and shown with pink solid circles in Fig. 2, are compared with the simulated J-V characteristics obtained by our semi-analytical modeling. Since ref. 22 reports that the NP diameters, with an average of 10 nm, are distributed in the range of 1.5 to 20 nm, R̄ = 5 nm, R_min = 0.75 nm, R_max = 10 nm, and σ = 5 nm are considered for the optical modeling of the P3HT:PCBM:NPs layer. The good agreement of the simulated photovoltaic parameters of the P3HT:PCBM:NPs PSC with the experimental ones, shown in Table 2, confirms the validity of the semi-analytical optoelectronic modeling for examining the performance of plasmonic PSCs. It can be seen from Fig. 2 and Table 2 that incorporating Au NPs into the active layer yields a PCE improvement of 39%, which is the result of an increase in J_sc and FF, while V_oc remains unchanged. To understand the origin of the improved J_sc, the absorption coefficient of the composite active layer of P3HT:PCBM:NPs, calculated by the HT, is shown in Fig. 3a. For comparison, the figure also shows the absorption coefficient of P3HT:PCBM. It can be seen that incorporating NPs improves the optical absorption of the active layer, resulting simultaneously from the excitation of LSPR modes and from multiple light scattering by the NPs. Besides, the wide range of NP diameters (from 1.5 to 20 nm) leads to a broadband optical absorption enhancement, originating from the red shift of the LSPR resonance peak for the large NPs in the size dispersion, due to the retardation effect, and the blue shift of the resonance peak for the small NPs, due to the intrinsic confinement effect. Another reason for the improved J_sc is the increase in the percentage of photons absorbed in the active layer (useful absorption) of the plasmonic PSC.
To illustrate this, the percentage of absorbed photons in each layer and the percentage of reflection, including the interference induced by the electrodes and the scattered light escaping from the PSC, for the P3HT:PCBM and P3HT:PCBM:NPs PSCs, calculated by the transfer matrix method, are shown in Fig. 4. It is found that after incorporating Au NPs in the active layer, the percentage of reflection decreases and the percentage of absorbed photons in the active layer increases for wavelengths above 650 nm. It should be noted that the incorporation of Au NPs also affects recombination processes. The presence of Au NPs in the active layer leads to increased trap density and consequently higher trap-assisted recombination, and the strong local field around the NPs results in a higher density of photo-generated excitons, G_F, thereby leading to increased bimolecular recombination; both recombination processes deplete the photo-generated carriers. However, the increase in the photo-generated carriers outweighs these recombination losses. In the P3HT:PCBM:NPs PSC, the V_oc increase caused by the increase in G_F due to the LSPR effect of the embedded NPs is counteracted by the V_oc decrease caused by the increase in C_trap due to the embedded NPs acting as additional recombination centers. FF is pronouncedly influenced by the electrical resistivity of the active layer and electrodes 152. The inverse of the electrical resistivity, the electrical conductivity (σ), can be calculated from the real part of the complex refractive index (n) and the absorption coefficient (α) as follows 153:

σ(λ) = ε₀ c₀ n(λ) α(λ)

where c₀ is the velocity of light in vacuum and ε₀ is the vacuum permittivity. The refractive index of the composite active layer of P3HT:PCBM:NPs calculated by the HT (n_HM(λ)) is shown in Fig. 3b. It is found from this equation and Fig. 3a,b that the conductivity of P3HT:PCBM:NPs is higher than that of P3HT:PCBM. Increasing the conductivity is beneficial to carrier transport, leading to the increased FF of the P3HT:PCBM:NPs PSC.
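The conductivity relation σ = ε₀c₀nα used in the FF discussion can be evaluated directly; the refractive-index and absorption values below are made up for illustration, not taken from Fig. 3.

```python
# Optical conductivity from the real refractive index and the absorption
# coefficient: sigma = eps0 * c0 * n * alpha. Inputs are illustrative.

C0 = 2.99792458e8          # speed of light in vacuum [m/s]
EPS0 = 8.8541878128e-12    # vacuum permittivity [F/m]

def conductivity(n_real, alpha):
    """Conductivity [S/m] from refractive index n_real (dimensionless)
    and absorption coefficient alpha [1/m]."""
    return EPS0 * C0 * n_real * alpha

# A larger n*alpha product (as reported for P3HT:PCBM:NPs relative to
# P3HT:PCBM) yields a higher conductivity; values are hypothetical.
sigma_ref = conductivity(n_real=1.9, alpha=1.0e7)
sigma_np = conductivity(n_real=2.0, alpha=1.2e7)
```

This is only a point evaluation; in the paper the relation is applied across the wavelength-resolved spectra of Fig. 3a,b.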
The influence of the concentration (volume fraction) of Au NPs on the J-V characteristics of the P3HT:PCBM:NPs PSCs is simulated and depicted in Fig. 5, and the calculated photovoltaic parameters are listed in Table 2. Low volume fractions of NPs, varying from 0.04 to 0.07, are considered, because incorporating NPs with high volume fraction may deteriorate the morphology of donor:acceptor blend of active layer, and may create a short circuit leading to the degradation of the electrical properties of P3HT:PCBM:NPs PSCs. With increasing the volume fraction (f) of NPs in the P3HT:PCBM:NPs active layer, no variation in V oc is observed from Fig. 5, but J sc increases. On the one hand, increasing f typically increases the number of NPs in the active layer and, consequently, increases recombination centers and, as a result, decreases the number of photo-generated carriers. On the other hand, increasing f improves the effective absorption coefficient of the active layer, as shown in Fig. 6 (right axis), and decreases the percentage of parasitic absorption (the sum of reflection and non-active layers absorption), as shown in Fig. 6 (left axis), leading to an increase in the number of photo-generated carriers. Photo-generated carrier enhancement due to the last two reasons outweighs their loss through recombination, which leads to an increase in J sc with increasing f. Figure 5 and Table 2 also show the experimental data of J-V characteristics and respective photovoltaic parameters obtained by Spyropoulos et al. 149. It is seen that for the P3HT:PCBM:4%NPs and P3HT:PCBM:5%NPs PSCs, there is good agreement between calculated and experimental data, but not for the P3HT:PCBM:6%NPs PSC. The poor performance of P3HT:PCBM:6%NPs PSC reported by Spyropoulos et al. 149 is due to poor properties of NPs dispersion into the active layer resulting in NPs aggregation, affecting the plasmonic effects. 
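The excerpt attributes the effective optical response of the NP-loaded blend to a homogenization theory (HT) whose details are not reproduced here; a common baseline for low volume fractions f of spherical inclusions, like those considered above (f = 0.04-0.07), is the Maxwell Garnett mixing rule, sketched below with illustrative permittivities.

```python
# Maxwell Garnett effective permittivity for spherical inclusions of
# permittivity eps_i at volume fraction f in a host of permittivity eps_h:
#   eps_eff = eps_h * (eps_i + 2*eps_h + 2*f*(eps_i - eps_h))
#                   / (eps_i + 2*eps_h -   f*(eps_i - eps_h))
# This is a standard low-f approximation, not necessarily the paper's HT.

def maxwell_garnett(eps_h, eps_i, f):
    """Effective complex permittivity of a dilute dispersion of spheres."""
    num = eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)
    den = eps_i + 2 * eps_h - f * (eps_i - eps_h)
    return eps_h * num / den

# Illustrative values: polymer-like host, metal-like inclusion at optical
# frequencies, 5% volume fraction.
eps_eff = maxwell_garnett(eps_h=3.5 + 0.1j, eps_i=-10 + 1.5j, f=0.05)
```

The rule correctly reduces to the host permittivity at f = 0 and to the inclusion permittivity at f = 1, which makes it a convenient sanity check for any homogenization code.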
To investigate the effect of size dispersion on the performance of plasmonic PSCs, the J-V characteristics of P3HT:PCBM:5%NPs PSCs for Au NPs of 10 nm in mean diameter (2R = 10 nm) and various radius dispersions (σ) are shown in Fig. 7. The blue, black, and red solid J-V curves are, respectively, for NPs with equal diameters of 10 nm (σ = 0), diameters in the 5 to 15 nm range (2σ = 5 nm), and diameters in the 1.5-20 nm range (2σ = 10 nm). With increasing σ, V_oc is almost constant, and FF slightly decreases from 64.57 to 64.52%. An improvement in J_sc and PCE by 2.8% and 2.6%, respectively, for 2σ = 5 nm and by 12.9% and 12.7%, respectively, for 2σ = 10 nm compared to σ = 0 is achieved. The J_sc enhancement is due to the increase in the absorption coefficient of the active layer, calculated by HT and shown on the right axis of Fig. 8a, and the decrease in the percentage of parasitic photons, shown on the left axis of Fig. 8a. Figure 7 also shows the J-V characteristics of the P3HT:PCBM:5%Ag NPs PSC (the pink solid circles). The parameters of the incorporated Ag NPs are chosen to be the same as those of the incorporated Au NPs reported in refs. 22,149 (f = 0.05, 2σ = 10 nm, 2R = 10 nm, 2R_min = 1.5 nm, and 2R_max = 20 nm). The P3HT:PCBM PSC incorporated with Ag NPs exhibits a higher PCE of 4.03% compared to the P3HT:PCBM:Au NPs PSC, which is 3.81%. The PCE enhancement is due to the improved J_sc, from 9.87 mA/cm² for Au NPs to 10.41 mA/cm² for Ag NPs, implying that the improved photocurrent results from the enhanced absorption coefficient of the active layer in the wavelength range of 350-550 nm and the decrease in the percentage of parasitic photons below 480 nm for the P3HT:PCBM:Ag NPs PSC compared to the P3HT:PCBM:Au NPs PSC (see Fig. 8b).
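The NP ensembles above are described by a mean radius R, a dispersion σ, and bounds R_min, R_max. The exact weighting used in the paper's HT is not given in this excerpt; a hedged sketch of averaging any size-dependent quantity q(R) over a truncated Gaussian size distribution is:

```python
# Average a size-dependent quantity q(R) over a Gaussian radius
# distribution (mean r_mean, std sigma) truncated to [r_min, r_max].
# The paper's own size-dispersion weighting may differ.
import math

def truncated_gaussian_average(q, r_mean, sigma, r_min, r_max, steps=1000):
    """Weighted mean of q(R) over a truncated Gaussian size distribution."""
    if sigma == 0:
        return q(r_mean)  # monodisperse limit (sigma = 0 curve in Fig. 7)
    num = den = 0.0
    dr = (r_max - r_min) / steps
    for i in range(steps):
        r = r_min + (i + 0.5) * dr  # midpoint rule
        w = math.exp(-0.5 * ((r - r_mean) / sigma) ** 2)
        num += w * q(r) * dr
        den += w * dr
    return num / den

# Illustrative use with the Au case of the text (radii in nm):
avg_volume_factor = truncated_gaussian_average(
    lambda r: r ** 3, r_mean=5.0, sigma=2.5, r_min=0.75, r_max=10.0)
```

Averaging a constant returns that constant, and σ = 0 recovers the single-size value, which are the two obvious checks on such a routine.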
Proposing a high efficiency plasmonic BHJ PSC

As can be found from the modeling results and the experimental data of the previous section, the efficiency of the plasmonic P3HT:PCBM PSCs is very low, which is due to the poor performance of the reference P3HT:PCBM PSC, mainly resulting from intrinsic shortcomings of the fullerene PCBM acceptor. The best reported PCE of single-junction PSCs with fullerene derivative acceptors, certified by the National Renewable Energy Laboratory (NREL), is 11.5% 154. Therefore, the design of high-performance PSCs based on non-fullerene acceptors has attracted tremendous effort in recent years. These efforts, along with polymer development, optimization of several aspects of BHJ morphology, and interface engineering, have not only promoted the PCE of non-fullerene PSCs to a high level of 17.23% (NREL-certified value 16.77%), but have also improved stability compared to fullerene PSCs 165. The BHJ PSC with the highest efficiency so far, reported by the Li group 165, is a conventional structure with the layers ITO/PEDOT:PSS/PM6:Y6/PDINN/Ag, where the conjugated polymer PM6 and the Y6 molecule of the PM6:Y6 blend are applied as the p-type donor and acceptor, respectively, and aliphatic amine-functionalized perylene-diimide (PDINN)/Ag serves as a bilayer cathode. Based on experimental reports in the literature and our modeling results in the previous section, we foresee that, by incorporating plasmonic NPs into the PM6:Y6 active layer, it is possible to achieve even higher efficiency than 17.23%. Therefore, our semi-analytical modeling, providing a realistic prediction, is employed to investigate the performance of PM6:Y6:NPs PSCs. First, our modeling results have been fitted in Fig. 9 to the experimental J-V characteristics of the reference PM6:Y6 PSC, reported by the Li group 165. In the electrical modeling, we set WF_C = 3.72 eV 165, which is the Ag WF modified by PDINN.
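The electrical model fitted above also needs carrier mobilities; in the PSC literature these are commonly extracted from single-carrier diodes using the Mott-Gurney space-charge-limited current (SCLC) law, sketched here with illustrative numbers rather than the paper's fit.

```python
# Mott-Gurney law for trap-free space-charge-limited current:
#   J = (9/8) * eps0 * eps_r * mu * V^2 / L^3
# Used only as an illustration of SCLC mobility extraction; eps_r and the
# bias value are hypothetical.

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def mott_gurney_current(mu, v, thickness, eps_r=3.5):
    """SCLC current density J [A/m^2] for mobility mu [m^2/(V s)],
    bias v [V], and film thickness [m]."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * v ** 2 / thickness ** 3

# Electron mobility 5.90e-4 cm^2/(V s) = 5.90e-8 m^2/(V s), L = 150 nm.
j = mott_gurney_current(mu=5.90e-8, v=1.0, thickness=150e-9)
```

The quadratic J-V dependence (doubling V quadruples J) is the signature used to identify the SCLC regime before fitting the mobility.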
Electron and hole mobilities of the PM6:Y6 active layer with a weight ratio of 1:1.2 and a thickness of 150 nm, determined by the space-charge-limited current method and reported in ref. 164, are 5.90 × 10⁻⁴ cm² V⁻¹ s⁻¹ and 2.00 × 10⁻⁴ cm² V⁻¹ s⁻¹, respectively. Second, through the HM calculation of the absorption spectrum of the PM6:Y6 film embedding Ag, Au, Al, and Cu NPs, along with size optimization of the NPs, the most appropriate metal with optimized size for incorporation into the PM6:Y6 blend is found. Third, the J-V characteristics of the PM6:Y6:NPs PSC with the optimized NP conditions, which are Ag NPs with a mean size of 20 nm ranging from 5 to 35 nm, are calculated with our semi-analytical modeling and shown in Fig. 9. From this figure and the summarized photovoltaic parameters, the positive influences, namely the enhanced absorption coefficient of the active layer (see Fig. 10a) and the increased percentage of absorbed photons in the active layer (see Fig. 10b), prevail over the negative influence, which is increased trap-assisted recombination due to the increase in trapping states emanating from the embedded Ag NPs. The increased FF of the PM6:Y6:NPs PSC most likely arises from the improved conductivity of PM6:Y6:NPs compared to PM6:Y6, as can be found from the conductivity equation and Fig. 10a, leading to better electron transport within the active layer in the presence of Ag NPs 114,166.

Conclusion

In conclusion, a semi-analytical optoelectronic modeling approach that can predict the performance of plasmonic BHJ PSCs in which spherical NPs are incorporated into the active layer was demonstrated. Firstly, the effect of the incorporation of NPs into the active layer on the optical properties was analytically modeled by the homogenization theory, considering a disordered array of NPs with size dispersion.
Secondly, the percentage of useful absorption by the active layer was calculated by the transfer matrix method, in which the number of photons related to absorption in the non-active layers, interference induced by the electrodes, and scattered light escaping from the PSC in all directions were subtracted from the total absorbed photons in the PSC. Finally, the J-V characteristics of plasmonic PSCs were modeled by the coupled Poisson, continuity, and drift-diffusion equations. Then, by comparing the results obtained by the semi-analytical modeling with the experimental data reported in the literature for the photovoltaic parameters of P3HT:PCBM:NPs PSCs, which showed good agreement, our modeling approach was verified. Therefore, for a realistic prediction of the photovoltaic parameters of a new high efficiency plasmonic PM6:Y6 PSC, the modeling was applied and yielded that incorporating Ag NPs into the PM6:Y6 active layer led to a 10.9% improvement in the PCE, from 17% to 18.86%.

Data availability

The source code of this work will be made available from the corresponding author upon reasonable request.
The space tourism industry hasn't sent many paying tourists into space yet. But it's getting close. And as it begins to mature it will be a huge industry worth an estimated $1 billion in the next 10 years, according to the U.S. government.
The latest player in the blossoming industry -- alongside the likes of billionaire businessmen Elon Musk (SpaceX) and Richard Branson (Virgin Galactic) -- is World View Enterprises, an Arizona-based company that wants to send people to the edge of space in high-altitude balloons. The idea, as the Wall Street Journal reports, is to provide a space-like experience (though at an altitude of 100,000 feet it won't be a weightless one) without the training (and costs) needed to send someone to space.
With a projected ticket price of $75,000, the goal is "bringing space to the masses as much as we can," said Taber MacCallum, Paragon's chief executive and co-founder. Revenue flights won't commence until 2016 at the earliest, while testing or regulatory complications could push that deadline out further.
By contrast, Virgin Galactic is planning to send space tourists about three times as high for around $200,000 starting next year.
It might be a "budget" space tourism experience but it still looks pretty amazing. Click through the photos to see what a $75,000 high-altitude balloon trip gets you.
All images courtesy of World View Enterprises.
A new company plans to provide a near-space experience on the cheap(er). But there's nothing budget about the views. |
"Contra Rush Limbaugh, history’s actual fascists were not primarily known for their anti-smoking policies or generous social welfare programs. Fascism celebrated violence, anti-rationalism and hysterical devotion to an authoritarian leader. To date, the Obama administration has fallen rather short in these departments.
Perhaps uncomfortably aware of the shortcoming, the hardliners have developed (okay, invented really) their own mythology about Obama "brownshirts." (The popular conservative website RedState.org literally uses the term.) The complaint rests on a single case: that of conservative activist Kenneth Gladney, who got into a scuffle at a townhall in St. Louis, Missouri. The altercation was captured on video and you can watch it on YouTube. What you'll see is a man, already on the ground, and another man stepping back in order to avoid tripping over him. The man on the ground is Gladney. Gladney walked away from the confrontation and later went to hospital, where he was treated for light injuries and released the same day. Whatever happened and whoever started it, this happily bloodless encounter bears not even the most glancing resemblance to the brutality that made Hitler's brownshirts notorious. And yet, look up Gladney's name online and he's suddenly a poignant martyr." - David Frum, New Majority.
Alice Wellington Rollins
Alice Wellington Rollins (June 12, 1847 – December 5, 1897), was an American writer whose output spanned essays, novels, stories, and children's poetry. She became known for a series of articles on the terrible conditions in New York tenements in the 1880s and for travel writing about the American West.
Family and education
She was born Alice Wellington in Boston, Massachusetts; her father was Ambrose Wellington. She was educated by her father in Latin and math before attending various schools. In 1876 she married Daniel M. Rollins of New York City; they had a son. For a time they lived in Lawrence Park, a development in Bronxville that attracted many artists and writers.
Career
Rollins contributed articles, profiles, and reviews to leading American periodicals, including Lippincott's Magazine, Cosmopolitan Magazine, The Century Magazine, Harper's Magazine, and the North American Review. She also worked as an editor, wrote children's stories and poetry for publications like St. Nicholas Magazine, and compiled a collection of aphorisms. A series of essays on New York tenements provided the inspiration for her 1888 novel Uncle Tom's Tenement. She wrote frequently about traveling in the American West, and two of her books feature western settings — The Three Tetons is set in Yellowstone Park and The Story of a Ranch in Kansas.
When she died, the writer Kate Douglas Wiggin wrote this in tribute: "Her literary work was brilliant, vigorous, original, poetic, by turns." |
import { Scatter } from '../../../../src';
import { data } from '../../../data/gender';
import { createDiv } from '../../../utils/dom';
describe('scatter', () => {
it('color: string options', () => {
const scatter = new Scatter(createDiv(), {
width: 400,
height: 300,
appendPadding: 10,
data,
xField: 'weight',
yField: 'height',
color: 'red',
xAxis: {
nice: true,
},
});
scatter.render();
const geometry = scatter.chart.geometries[0];
const elements = geometry.elements;
expect(elements.length).toBe(507);
expect(elements[0].getModel().color).toBe('red');
});
it('color: string array options', () => {
const scatter = new Scatter(createDiv(), {
width: 400,
height: 300,
appendPadding: 10,
data,
xField: 'weight',
yField: 'height',
color: ['#e764ff', '#2b0033'],
xAxis: {
nice: true,
},
});
scatter.render();
const geometry = scatter.chart.geometries[0];
const elements = geometry.elements;
// @ts-ignore
expect(geometry.attributeOption.color.values.length).toBe(2);
expect(elements.length).toBe(507);
expect(elements[0].getModel().color).not.toBe('red');
});
});
|
An empirical identification and categorisation of training best practices for ERP implementation projects Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities. |
Research on the Construction Method of the Service-Oriented Web-SWMM System On a global scale, with the acceleration of urbanization and the continuous expansion of cities, the problem of urban flooding has become increasingly prominent. An increasing number of experts and scholars have begun to focus on this phenomenon and build corresponding models to solve the problem. The Storm Water Management Model 5 (SWMM5) is a dynamic rainfall-runoff simulation model developed by the US Environmental Protection Agency (EPA); this model simulates urban flooding and drainage well and is widely favored by researchers. However, SWMM5 is relatively cumbersome to use and limited by its operational platform, and these factors hinder its further promotion and sharing. Based on the OpenGMS platform, this study first encapsulates, deploys, and publishes SWMM5 and further builds the Web-SWMM system for the model. With Web-SWMM, the user can conveniently use network data resources online and call SWMM5 to carry out calculations, avoiding the difficulties caused by the localized use of SWMM5 and enabling the sharing and reuse of SWMM5.
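The abstract describes wrapping SWMM5 behind a web service; the concrete OpenGMS/Web-SWMM API is not given here, so the endpoint and field names below are purely hypothetical and only illustrate the service-oriented pattern: packaging a local SWMM5 input file as a job payload for a remote model service.

```python
# Sketch of a service-oriented SWMM5 invocation. The URL and JSON schema
# are invented for illustration; the real Web-SWMM interface may differ.
import json

def build_swmm_job(inp_text, job_name="swmm-run",
                   service_url="https://example.org/models/swmm5/invoke"):
    """Assemble a (url, JSON body) pair for a hypothetical SWMM5 service.

    inp_text is the content of a standard SWMM5 .inp input file.
    """
    payload = {
        "model": "SWMM5",
        "name": job_name,
        "inputs": {"inp_file": inp_text},
    }
    return service_url, json.dumps(payload)

url, body = build_swmm_job("[TITLE]\nDemo catchment\n")
# The body could then be POSTed with any HTTP client; the service would
# run SWMM5 remotely and return the simulation report.
```

Keeping the model behind such a payload-building seam is what makes the desktop engine shareable and reusable from any platform, which is the point the abstract makes.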
Temporal adaptation of human neutrophil metabolic responsiveness to the peptide formylmethionylleucylphenylalanine: A comparison between human neutrophils and granule-depleted neutrophil cytoplasts When polymorphonuclear leukocytes (neutrophils) and soluble or particulate matter interact, the cells produce superoxide anions (O2−) and hydrogen peroxide (H2O2). The chemotactic peptide formylmethionylleucylphenylalanine (FMLP) induced a very weak response in normal neutrophils. The cellular response was changed, however, as a result of in vitro aging of the cells, i.e. the magnitude of the response was increased following storage of the cells at 22°C for up to 120 min, in the absence of any stimulus, and before the addition of the peptide. When phorbol myristate acetate (PMA) was used as a stimulus, there was a pronounced production of O2− and H2O2, but no change in magnitude as a result of in vitro aging. When neutrophil cytoplasts (granule-free vesicles of cytoplasm enclosed by plasmalemma) were exposed to the peptide FMLP or PMA, the vesicles produced both O2− and H2O2. There was, however, no increase in oxidative metabolite production in cytoplasts as a result of in vitro aging when either FMLP or PMA was used as a stimulus. The results thus indicate that mere incubation at room temperature primed the cells to increase their production of oxidative metabolites as a result of spontaneous exposure of hidden receptors. The fact that no such effects were observed with cytoplasts indicates that spontaneous receptor recruitment is a granule-dependent process.
Hobbyhorses: New Approaches in the Physical Activity of Children and Adolescents Modern additional education and upbringing of children is reaching a new level of social needs. In the article, the authors present a new leisure activity for young people, hobby horsing (jumping on a toy horse), which showcases the aesthetics and beauty of teenagers' movements. A method for conducting a master class on making hobbyhorses for teachers of additional education, and a method for launching hobbyhorse activities at centers of additional education for children, are proposed. In the course of scientific and pedagogical research, it was revealed that hobbyhorse riders perform a cyclic type of movement. These locomotor movements not only increase a person's motor activity for a certain period of time but also develop coordination of movements. The practical significance of the study is that students of Kuban State University of Physical Culture, Sports and Tourism promote a new youth sports movement and propose introducing it in institutions of additional education in various regions of Russia as a socially oriented project that performs the function of health-preserving education. Additional education of children with the participation of students can reach a new practice-oriented stage in the development of motor activity and creative cooperation among young people of different ages.
Impact of the coronavirus disease 2019 (COVID-19) pandemic on infection control practices in a university hospital

(Received 3 February 2022; accepted 15 April 2022)

As the coronavirus disease 2019 (COVID-19) pandemic spread, our center had to increase its capacity and was transformed to attend to COVID-19 patients. This transition included the creation of new intensive care units (ICUs) and the incorporation of untrained personnel in infection control practices and ICU patient care. Infection control activities were shifted to deal with COVID-19-related tasks. 1 Hand hygiene audits were suspended. A double-glove protocol was implemented for COVID-19 patient care. These factors may have affected the optimal compliance with basic infection control practices. 2 In our center, blood culture contamination rates increased from 1.1% in the prepandemic period (March 2019-February 2020) to 2.7% in the pandemic period (March 2020-February 2021) and peaked at 4.8% in April 2020. Central-line-associated infections increased from 0.2 per 1,000 patient days to 0.4 per 1,000 patient days between these periods. To assess the effect of the pandemic on infection control practices and to identify issues needing urgent attention, we conducted a survey among frontline HCWs at a university hospital.

Methods

The survey was conducted at the Bellvitge University Hospital, a 700-bed hospital in Barcelona, Spain, where 2,486 patients had been hospitalized with COVID-19. The survey was distributed via institutional e-mail on March 9, 2021, to 762 HCWs responsible for caring for COVID-19 patients (in the departments of infectious diseases, internal medicine, respiratory medicine, and ICUs) and 5 infection preventionists. HCWs completed the survey once using a personalized code.
The survey included questions assessing the World Health Organization (WHO) Five Moments for Hand Hygiene, 3 central venous catheter (CVC) insertion and maintenance practices, and use of personal protective equipment (PPE). Other questions focused on HCW perceived workload or changes in infection control activities. Data were collected in an anonymized REDCap database and were analyzed using SPSS version 25.0 software (IBM, Armonk, NY). The local ethics committee approved the study, and respondents provided informed consent.

Results

Regarding hand hygiene, 52 respondents (32.7%) never or occasionally performed hand hygiene before touching CVC hubs (clean or aseptic task; WHO moment 2) and 25 respondents (15.7%) performed hand hygiene after touching a patient's environment (WHO moment 5). The main factors interfering with hand hygiene compliance were inappropriate location (reported as "much" or "often" by 98 respondents, 61.7%), shortages of hand sanitizers (reported as "much" or "often" by 88 respondents, 55.3%), and double gloving (reported as "much" or "often" by 72 respondents, 45.3%) (Table 1). For CVC insertion bundles, hand hygiene compliance and sterile gown and glove use rates were 100% (26 of 26) among physicians performing this procedure. Among these physicians, 22 (84.6%) reported using ultrasound-guided CVC insertion always or frequently. For catheter maintenance, 38 (52.7%) of 72 nurses reported that changing dressings was challenging with double gloves. Among these 72 nurses, 38 (52.7%) stated that prone position complicated blood culture collection, and 42 (58.3%) reported that they obtained blood samples for culture through CVC hubs. The shortage of PPE during the first COVID-19 wave (March-June 2020) was reported by 129 HCWs (81.1%).
This issue was recognized as a problem, together with increased HCW workload (reported by 89 HCWs, 55.9%), staff deficits (reported by 45 HCWs, 28.3%), and the incorporation of nontrained personnel in ICU patient care and infection control practices (reported by 73 HCWs, 45.9%). Finally, at the beginning of the pandemic, 70%-90% of infection preventionists' duties involved COVID-19-related tasks.

Gabriela Abelenda-Alonso et al

Discussion

Our survey identified significant barriers to optimal infection control practices during the pandemic. Contact and airborne precautions and the use of PPE (ie, masks, face shields, goggles, gloves, and gowns) were implemented during patient care. 4 However, the use of PPE is protective but also may hinder infection control practices. 5 During the first COVID-19 wave, the PPE stockpile was insufficient, and HCWs used the same gloves and gown when treating different patients and when performing different tasks. 6 As the survey shows, suboptimal hand hygiene practices were an issue. 7 Previous studies have identified changes in PPE use and hand hygiene practices as key elements associated with multidrug-resistant outbreaks, 8 increased blood culture contamination rates, and central-line-associated infections. 9,10 Indeed, the double-glove protocol, patient prone position, and the increased workload hampered CVC manipulation and made blood extraction more difficult and less aseptic than it should have been. Additionally, the need to reallocate untrained staff to COVID-19 units was a recognized problem. To optimize staffing, we had to reassess the adequate nurse-patient ratio, and a pool of nurses was redeployed daily to areas with more need. To mitigate the insufficient preparedness of the new staff in infection control practices, we planned to replace face-to-face training (which was suspended during the COVID-19 pandemic) with online training.
Compensating for the shift of infection preventionists' activities to SARS-CoV-2-related issues in the pandemic situation was even more challenging. 1 Perhaps better coordination between regional hospitals with common protocols would help infection preventionists deal with conflicting guidelines. Our study had several limitations. The survey was conducted in a single center with a moderate response rate and potential recall bias. We do not have information on nonrespondents, who might have identified different problems. However, the respondents included a variety of HCWs and medical departments, making the data more generalizable to a range of contexts. Our survey results emphasize the negative effect of the COVID-19 pandemic on basic infection control practices. The use of double gloves, suboptimal hand hygiene practices, the incorporation of untrained personnel, and the reassignment of infection preventionists to COVID-19 duties have been major issues. Seeking to achieve infection control excellence should be a priority during future pandemic waves.
# tensorflow_decision_forests/tensorflow/distribute/tf_distribution_py_worker.py
# Copyright 2021 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import app
from absl import logging
import tensorflow as tf
from tensorflow.python.framework import load_library
from tensorflow.python.platform import resource_loader
tf.load_op_library(resource_loader.get_path_to_datafile("distribute.so"))
def main(argv):
  del argv

  # Read the cluster layout from the TF_CONFIG environment variable.
  cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
  logging.info("Configuration: %s", cluster_resolver.cluster_spec())

  # Start a server for this task and block until shutdown.
  server = tf.distribute.Server(
      cluster_resolver.cluster_spec(),
      job_name=cluster_resolver.task_type,
      task_index=cluster_resolver.task_id,
      protocol=cluster_resolver.rpc_layer or "grpc",
      start=True)

  logging.info("Server started, waiting for jobs")
  server.join()
  logging.info("Shutting down server")


if __name__ == "__main__":
  app.run(main)
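The worker above resolves its cluster membership from the standard TF_CONFIG environment variable (read by `TFConfigClusterResolver`). A minimal sketch of what a launcher might set for a two-worker cluster; the addresses are placeholders:

```python
# Example TF_CONFIG for the worker binary above: a cluster of two workers,
# with this process acting as worker 0. Host addresses are placeholders.
import json
import os

tf_config = {
    "cluster": {
        "worker": ["10.0.0.1:2222", "10.0.0.2:2222"],
    },
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
# A worker process started after this point would join the cluster as
# worker 0 and serve on the first address.
```

Each task in the cluster gets the same `cluster` dict but a different `task` entry, which is how the resolver derives `task_type` and `task_id` for the server.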
|
// Parse a hexadecimal node ID.
func ParseNodeID(nodeID string) (NodeID, error) {
var n NodeID
err := n.UnmarshalString(nodeID)
if err != nil {
return "", err
}
return n, nil
} |
Sex differences in incidence rates and referral ratios for first attack angina pectoris. The Drug Safety Research Unit used the technique of Prescription Event Monitoring to study events in patients prescribed fexofenadine following its launch in England in March 1997. 6 A total of 35 817 green forms were posted to 8057 GPs who had written prescriptions for fexofenadine between March and August 1997. In all, 18 238 were returned, giving a response rate of 50.9%. The final cohort totalled 16 638 patients. Less than 1% of patients discontinued the drug because of intolerance, and there were no specific reports of drug interactions involving fexofenadine. All cardiac events were examined in detail, and eight that resolved on stopping fexofenadine were possible side-effects: palpitations (three), chest pain (three), arrhythmia (one), and chest tightness (one). There were no reports of ventricular tachycardia, prolonged QT interval, or serious cardiac events. Our study of patients taking fexofenadine in routine clinical practice failed to show any serious adverse cardiac events that could have been a result of drug exposure. The characteristics of our cohort (age, concomitant medication, and indication for use of fexofenadine) suggest that, for this drug, the clinical trial population and the community patients are comparable except for the delivery of care. Although the study was on a large cohort, the response rate was only 51%, and this could introduce an under-reporting bias. With a very rare event, a few cases could make a big difference to the generation of a safety signal. Our results, the results from clinical trials, and a single case report from a drug now in widespread use suggest that, even if serious cardiac arrhythmias are associated with exposure to fexofenadine, they are very rare. In the absence of dedicated case registries, the only practicable way to detect such very rare events is the spontaneous reporting made by vigilant practitioners.
//
// CXShareSDK.h
// Pods
//
// Created by wshaolin on 2018/7/1.
//
#ifndef CXShareSDK_h
#define CXShareSDK_h
#import "CXShareType.h"
#import "CXShareDefines.h"
#import "CXShareSDKLib.h"
#import "CXSharePlatformKey.h"
#import "CXShareSDKManager.h"
#import "CXSharePanel.h"
#import "CXShareUtils.h"
#import "CXShareSDKManager+CXDictionarySupported.h"
#endif /* CXShareSDK_h */
|
#include<iostream>
using namespace std;
int main()
{
    int n;
    cin>>n;
    for(int i=1; i<=n; i++)
    {
        int a,b,c,d;
        cin>>a>>b>>c>>d;
        // Count how many of the four inputs are odd.
        int odd=a%2+b%2+c%2+d%2;
        if(odd<2 || (odd>2 && a*b*c))
            cout<<"YES\n";
        else
            cout<<"NO\n";
    }
    return 0;
}
Movement Patterns and Catch-and-Release Impacts of Striped Bass in a Tidal Coastal Embayment in Massachusetts Striped bass (Morone saxatilis) are a popular sport fish among recreational anglers along the Atlantic coastline. Although there is a good understanding of their seasonal migration patterns, less is known about the short-term movements of striped bass once they have reached New England coastal embayments frequented during the summer months. It is important to understand striped bass movement patterns and behavioral ecology to make the most educated management decisions. Fine-scale movement and activity were assessed by tagging 35 striped bass (38.5-80.5 cm TL) with acoustic transmitters equipped with pressure and tri-axial accelerometer sensors and tracking them within a fixed array (n=34 receivers) in Plymouth, Kingston, Duxbury (PKD) Bay, MA. Activity space varied significantly over the course of the season and increased with water temperature. Striped bass most frequently exhibited low levels of locomotory activity (slow swimming or hovering in place), representing 67% of total activity measurements, with occasional high activity and burst swimming, often within the upper 3 m of the water column. Depth distribution of striped bass ranged from 0-14.95 m and fish remained at shallower depths when temperatures were over 21°C. Diel vertical migration was observed with shallower depths during the day and greatest depths during high tide. |
import os


def build_file_structure(file_path: str) -> None:
    abspath = os.path.abspath(file_path)
    print("Adding {} as entry folder...".format(abspath))
    if not os.path.isdir(file_path):
        raise ValueError("Not a folder!")
    path = os.path.dirname(abspath) + "/"
    name = os.path.basename(abspath)
    parent_id = create_root_folder(path=path, name=name)
    traverse_subfolders(path=file_path, parent_id=parent_id) |
/**
 * Confirms that the prefs accessor works (bug 51384).
 */
@Test
public void testPrefs() throws Exception {
    Account account = TestUtil.getAccount(USER_NAME);
    ZMailbox mbox = TestUtil.getZMailbox(USER_NAME);
    ZPrefs prefs = mbox.getPrefs();
    assertEquals(account.getPrefLocale(), prefs.getLocale());
} |
/**
 * When expansion would be required, throw an exception instead of allocating a new buffer. The
 * exception is a BufferOverflowException thrown from expand, and will restore the position to
 * the point at which the flag was set with the disallowExpansion method.
 *
 * @param ee the exception to throw if expansion is needed
 */
public void disallowExpansion(Error ee) {
    this.disallowExpansion = true;
    this.expansionException = ee;
    this.memoPosition = this.buffer.position();
} |
Groups > Persecuted Christians > Discussions > Topic: ACTION ALERT FROM OPEN DOORS!
Topic: ACTION ALERT FROM OPEN DOORS!
North Korea waits for your voice.
I'm sending you this urgent Action Alert because a critical bill will be introduced when Congress reconvenes that could directly impact the lives of Christians in North Korea.
I know it can feel like any effort to make a difference in the life of persecuted believers is just a small drop of good in an ocean of oppression and evil.
But your voice counts. We have seen how God has used the advocacy ministry of Open Doors to make a difference in the lives of persecuted Christians worldwide.
as their recent firing of a missile against international law clearly demonstrated. And they remain #1 on the Open Doors World Watch List as the most repressive country in the world against Christians.
before the United States establishes diplomatic relations with them.
Passing this bill is critical if we are to ensure that the Obama Administration makes human rights a priority in their dealings with North Korea. And that's why I am asking you to make your voice heard today.
You can help by sending a message to your Representative asking them to sign the North Korea Sanctions and Diplomatic Nonrecognition Act of 2009. To be effective, we need to get bi-partisan support for the bill and as many signatures as possible from members of Congress.
Please send your message today!
Thank you for making your voice heard on behalf of our persecuted brothers and sisters in North Korea!
I have great news to share with you today and an urgent request.
The great news is this: Open Doors has been offered a $150,000 challenge grant to help provide desperately needed Bibles for China.
My urgent request is that you would help us meet this generous grant through your online gift today.
There is an urgency to my request for a number of reasons.
First, China continues to change. Since the Olympics, our contacts on the ground tell us the church is growing like wildfire, experiencing increasing religious freedom. In fact, an estimated 10,000 people are coming to Christ each day!
But in the midst of this growth and increasing freedom, we are getting reports that Christians are still being arrested and churches are being shut down in certain areas of the country. And we have evidence that the government continues to repress the distribution of Bibles.
"I was just in Shanghai and tried to purchase Bibles from the official church bookstore. This is supposed to be one of the main distribution centers in the whole of China. I was told that the most I could purchase was one box, but we need thousands. We will have to come up with another plan."
and that's why I am so thankful for this challenge grant!
If we can fully meet this grant, we will be able to put 60,000 Bibles into the hands of needy believers. That's just $5 per Bible for printing and distribution!
asking for the most generous online gift you can give.
It is crucial for us to move quickly. So thank you for sharing generously today. |
Northumberland County Council’s trading standards service is reminding people to stay vigilant, following a scam phone call to a county councillor.
Coun Andrew Tebbutt, who represents Morpeth Kirkhill, thought he was speaking to the Telephone Preference Service (TPS) and was being asked about unwanted calls. Initially, he was asked general questions, but then the caller swiftly moved on asking about phone bills.
He then advised Coun Tebbutt his credit card was about to expire. On this occasion, when challenged, the caller promptly ended the call. However, trading standards is concerned that some people may provide their details.
Coun Tebbutt said: “I am always alert to potential scams, but I was initially more inclined to think it was genuine because I had a displayed number.”
The council’s trading standards team is once again urging residents who receive calls like these to simply put down the phone – and never be tempted to share personal information.
The genuine TPS is aware that a number of organisations call people claiming to be them and try to charge consumers for registration.
However, it is free to sign up to the TPS register via their website – http://www.tpsonline.org.uk/tps/number_type.html – and it is the only official UK ‘do not call’ register for opting out of live telesales calls. |
package health

import (
	"log"
	"net/http"

	"github.com/ccpgames/aggregateD/output"
)

type healthHTTPHandler struct {
	influxdbConfig output.InfluxDBConfig
}

func (handler *healthHTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	_, err := http.Get(handler.influxdbConfig.InfluxURL + "/ping")
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte("Unable to write to InfluxDB"))
		return
	}
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("aggregateD is healthy"))
}

// Serve exposes /health
func Serve(influxdbConfig output.InfluxDBConfig) {
	server := http.NewServeMux()
	handler := new(healthHTTPHandler)
	handler.influxdbConfig = influxdbConfig
	server.Handle("/health", handler)
	log.Printf("Serving Healthcheck on port 8000")
	log.Fatal(http.ListenAndServe(":8000", server))
}
|
# Import main class.
from .check import CheckClass

Check = CheckClass()


def create_dir():
    Check.do_command('mkdir git_test')


def delete_dir():
    Check.do_command('rm -rf git_test')


def test_do_command():
    command = 'echo $(( 1 + 1))'
    result = Check.do_command(command).replace('\n', '')
    assert int(result) == 2


def test_no_repository_exists():
    Check.do_command('mkdir /tmp/gnow_test')
    result = Check.repository_exists('/tmp/gnow_test')
    Check.do_command('rm -rf /tmp/gnow_test')
    assert result == 0


def test_yes_repository_exists():
    Check.do_command('mkdir /tmp/gnow_test')
    Check.do_command('git init /tmp/gnow_test')
    result = Check.repository_exists('/tmp/gnow_test')
    Check.do_command('rm -rf /tmp/gnow_test')
    assert result == 1


#def test_stage_exists():
#    result = Check.stage_exists()
#    assert result == True

#def test_get_git_status():
#def test_is_change(area=''):
#def test_get_git_branch():
#def test_branch_exists():
#def test_commit_exists():
#def test_get_latest_tag():
#def test_do_git_tag(tag = 0):
#def test_do_git_add():
#def test_do_git_commit(message = ''):
#def test_do_git_push(branch = 'main'):
#def test_status(status = ''):
#def test_format_status(status = ''):
|
The invention relates to a low frequency transformer for an audio system and a method of making it. More particularly, the invention concerns a low frequency transformer having primary, secondary, and tertiary windings with insulation between each winding wherein one end of the tertiary winding is embedded in the insulation between the secondary winding and the tertiary winding, and the other end is grounded.
In a conventional audio system, a volume-adjusting variable resistor, located next to a detection circuit, adjusts the volume of the sound. To increase the sensitivity, a high frequency transformer has been used; however, it is a well-known fact that the efficiency of a high frequency transformer is markedly reduced in the low frequency range. To improve the efficiency, the output voltage fluctuation of the conventional transformer is controlled by adjusting the relative distance between two magnetic iron cores aligned on a line, which determines the amount of magnetic flux coupling the cores together. Even with this type of control, however, the output voltage fluctuation remains too large for practical use.
Also, in a conventional audio system, if the transformer is operating in an overloaded condition, i.e., the output of the transformer is more than the rated load of the transformer, the output signal of the transformer is distorted, in other words, audio waveshape distortion results. Furthermore, if multistage speakers are connected in parallel to the output of the transformer, a common arrangement, the output frequency of the transformer may be different than the input frequency, i.e., frequency distortion may occur.
The performance of multistage speakers connected in parallel may be improved by increasing the current in the secondary winding, i.e., by increasing the output current of the transformer. But even if this technique is used, waveshape distortion, which may occur because of a transformer overload, and/or noise, which may be caused by external interference signals or leakage current from the primary winding, could result.
In commonly known audio systems, typical impedance values for speakers, which are known as passive speakers, are 4, 8, 16, and 600 ohms. Matching the input impedance of the speakers with the output impedance of the output amplifier of the audio system is difficult. Additionally, if the output power of the amplifier does not meet the rated power of the speaker, the speaker is unusable in the higher range of its capacity. |
Parental Experiences of Adolescent Cancer-Related Fatigue: A Qualitative Study. OBJECTIVE Cancer-related fatigue is common, disabling, and chronic, but professional help is not necessarily sought. Parents can support symptom management and facilitate help-seeking. This study explored parental experiences of their adolescent's cancer-related fatigue and what they do to help. METHODS Qualitative semi-structured interviews were conducted with 21 parents of 17 adolescents aged 12-18 who were previously diagnosed with cancer. Reflexive thematic analysis was used to analyze the data. RESULTS Three high-order themes were generated. Firstly, "fatigue is inevitable and unpredictable." This encompassed parental perceptions of fatigue as variable, distinct from normal tiredness, and linked to sleep and mood. Fatigue was seen as arising from cancer, which rendered parents helpless. Secondly, "fatigue is disruptive to normal life" beyond cancer treatment, which is contrary to expectations. Thirdly, parents managed fatigue by trying to balance the adolescent's desires for normality and their own perception of what is realistic with encouraging activities, and by seeking support from others. CONCLUSIONS Parents see adolescent cancer-related fatigue as multi-faceted and experience it as unpredictable and attributed to cancer. They struggle to distinguish normal adolescent behavior from problematic fatigue, and to balance supporting and empowering the adolescent to live life to the fullest whilst also being realistic about the limitations imposed by fatigue and the benefits of activity. Parents try to manage fatigue practically but want more information about adolescent cancer-related fatigue to help establish their own and their adolescent's expectations. |
Before the draft, Virginia pass rusher Eli Harold was considered a late first- or early second-round selection. Instead he tumbled midway to the third round.
Harold said on a conference call that he had determination and a sizable chip on his shoulder ever since a nephew, who was as close to him as a brother, and his mother died when he was 15. He said his drive increased on Friday.
Harold said he’s never met Ahmad Brooks, who also played at Virginia and with whom Harold will compete for playing time. But he said he’s long been inspired by Brooks, and more recently by outside linebacker Aldon Smith. At one point, Harold used a photo of Smith as the background on his iPad.
Tartt, meanwhile, is the third safety the 49ers have taken with a high draft pick since 2013. They took Eric Reid in the first round that year and used last year’s first rounder on Jimmie Ward.
That may be attributed, in part, to questions about the safety group.
Reid, for instance, has suffered three concussions in two seasons, including one that caused him to miss the final two games of 2014. The 49ers drafted Ward last year with a foot injury. He injured it again Nov. 9 and missed the remainder of the season. Ward did not take part in the team’s recent minicamp and general manager Trent Baalke declined to give a timetable for his return.
Baalke said Tartt’s selection was not related to medical concerns.
Tartt seemed eager to join the competition and noted that he and Ward have been talking about playing together for years. The two attended the same high school in Mobile, Ala. and became best friends in ninth grade.
Tartt started playing football his senior year of high school and therefore was not recruited by the Southeastern Conference schools.
Still, he had some of his best games – and biggest hits – when Samford played schools like Arkansas and Auburn. He also looked good at the Senior Bowl in January against more well-known competition.
Despite his size, Tartt covered the opposition’s slot receiver, which was Ward’s job with the 49ers last year. San Francisco’s decision-makers also had to be impressed that Tartt played nearly the entire 2013 season with a torn labrum (shoulder). |
On May 24, the U.S. Congress voted to continue the war in Iraq. The members called it “supporting the troops.” I call it stealing Iraq’s oil — the second largest reserves in the world.
The “benchmark,” or goal, that the Bush administration has been working on furiously since the United States invaded Iraq is the privatization of Iraq’s oil. Now they have Congress blackmailing the Iraqi Parliament and the Iraqi people: no privatization of Iraqi oil, no reconstruction funds.
This threat could not be clearer. If the Iraqi Parliament refuses to pass the privatization legislation, Congress will withhold U.S. reconstruction funds that were promised to the Iraqis to rebuild what the United States has destroyed there.
The privatization law, written by American oil company consultants hired by the Bush administration, would leave control with the Iraq National Oil Company for only 17 of the 80 known oil fields. The remainder (two-thirds) of known oil fields, and all yet undiscovered ones, would be up for grabs by the private oil companies of the world (but guess how many would go to United States firms — given to them by the compliant Iraqi government).
No other nation in the Middle East has privatized its oil. Saudi Arabia, Kuwait, Bahrain and Iran give only limited usage contracts to international oil companies for one or two years. The $120 billion “Support the Troops” legislation passed by Congress requires Iraq, in order to get reconstruction funds from the United States, to privatize its oil resources and put them up for long-term (20- to 30-year) contracts.
What does this “support the troops” legislation mean for the United States military? Supporting our troops has nothing to do with this bill, other than keeping them there for another 30 years to protect U.S. oil interests. It means that every military service member will need Arabic language training. It means that every soldier and Marine would spend most of his or her career in Iraq. It means that the 14 permanent bases there will get new Taco Bells and Burger Kings!
Why? Because the U.S. military will be protecting the U.S. corporate oilfields leased to U.S. companies by the compliant Iraqi government. Our troops will be the guardians of U.S. corporate interests in Iraq for the life of the contracts — for the next 30 years.
With the Bush administration’s “support the troops” bill and its benchmarks, primarily Benchmark No. 1, we finally have the reason for the U.S. invasion of Iraq: to get easily accessible, cheap, high-grade Iraqi oil for U.S. corporations.
Now the choice is for U.S. military personnel and their families to decide whether they want their loved ones to be physically and emotionally injured to protect not our national security, but the financial security of the biggest corporate barons left in our country — the oil companies.
It’s a choice for only our military families, because most nonmilitary Americans do not really care whether our volunteer military spends its time protecting corporate oil to fuel our one-person cars. Of course, when a tornado, hurricane, flood or other natural disaster hits in our hometown, we want our National Guard unit back. But on a normal day, who remembers the 180,000 U.S. military or the 150,000 U.S. private contractors in Iraq?
Since the “surge” began in January, over 500 Americans and 15,000 Iraqis have been killed. By the time September 2007 rolls around for the administration’s review of the “surge” plan, another 400 Americans will be dead, as well as another 12,000 Iraqis.
How much more can our military and their families take?
Ann Wright served 29 years in the U.S. Army and U.S. Army Reserves and retired as a colonel. She served 16 years in the U.S. diplomatic corps in Nicaragua, Grenada, Somalia, Uzbekistan, Kyrgyzstan, Sierra Leone, Afghanistan, Micronesia and Mongolia. She resigned from the U.S. Department of State in March 2003 in opposition to the war on Iraq. This article originally appeared at truthout.org and is reprinted with the author’s permission. |
Pancreatic Transplantation: Impact on the quality of life of diabetic renal transplant recipients OBJECTIVE To determine the impact of pancreas transplantation on the quality of life of renal transplant recipients with diabetes. RESEARCH DESIGN AND METHODS In this quasi-experimental comparative study of 41 successful pancreas transplant (SP) recipients, 13 failed pancreas transplant (FP) recipients, and 28 kidney alone (KA) transplant recipients, we collected data from individuals who had their pancreas/kidney or kidney alone transplants ≥6 months before at a university tertiary care center. This study was an extension of a 1992 study of SP and FP recipients. The subject group was enlarged with additional pancreas/kidney recipients and a control group of KA recipients. Five dimensions of life quality were measured. RESULTS Groups did not differ significantly regarding age, gender, marital status, comorbidity, type of prior dialysis, current kidney function, length of time since transplant, physical activity, symptom burden, emotional state, and feelings of well-being. A significant time by group interaction occurred for quality of life (P = 0.0023) and health (P = 0.0001). Patients in the SP and KA groups perceived their past life and health quality to be significantly lower and their present and future life and health quality to be significantly better than did the FP group. The groups' major concerns differed significantly. The FP group's concern related to diabetes, the SP group's to immunosuppression, and the KA group's to graft rejection. CONCLUSION Patients with failed pancreas but successful kidney transplants see less improvement in their quality of life than do patients who meet their transplant goals, irrespective of whether they receive a pancreas. |
"""
Find similar items to "Small Favor: A Novel of the Dresden Files"
(ASIN 0451462009).
"""
from amazonproduct.api import API
from amazonproduct.errors import AWSError
from amazonproduct.processors import BaseProcessor
import BeautifulSoup
class SoupProcessor (BaseProcessor):
"""
Custom response parser using BeautifulSoup to parse the returned XML.
"""
def parse(self, fp):
soup = BeautifulSoup.BeautifulSoup(fp.read())
# parse errors
for error in soup.findAll('error'):
code = error.find('code').text
msg = error.find('message').text
raise AWSError(code, msg)
return soup
if __name__ == '__main__':
# Don't forget to create file ~/.amazon-product-api
# with your credentials (see docs for details)
api = API(locale='us', processor=SoupProcessor())
result = api.item_lookup('0718155157')
print result
# ...
# now do something with it!
|
Characterization and hormonal regulation of casein kinase II activity in heterotransplanted human breast tumors in nude mice. Cytosolic casein kinase type II activity has been identified in MCF-7 and MDA-MB-231 human breast cancer cells heterotransplanted into athymic nude mice. Sephacryl S-300 chromatography of MCF-7 and MDA-MB-231 tumor cytosols revealed a major peak of casein kinase activity with an estimated molecular weight of 150,000. This peak was further characterized and optimal conditions for breast tumor casein kinase activity were established. Polylysine (10 micrograms) acted as a potent stimulator with casein as the phosphate acceptor protein. This enzyme used both ATP and GTP as phosphate donors and the Km for GTP was 10 microM. The rate of phosphorylation with increasing concentrations of GTP revealed typical Michaelis-Menten kinetics and Vmax was approached at a concentration of 30 microM GTP. MgCl2 stimulated enzyme activity at concentrations between 10-20 mM. Quercetin, a bioflavonoid, inhibited casein kinase type II activity in a dose dependent manner. MCF-7 (hormone-dependent) human breast cancer cells (2-3 X 10) were inoculated into the mammary fat pads of nude mice, supplemented with a 0.5 mg estradiol pellet. To determine the influence of various regulatory agents on casein kinase activity in vivo, tumor-bearing mice were treated for five days with estradiol, progesterone, dexamethasone or tamoxifen. Casein kinase type II was partially purified by gel filtration on a Sephacryl S-300 column and assayed in the presence of polylysine and casein. Dexamethasone treatment significantly decreased casein kinase II activity in MCF-7 tumors, which are receptor-positive for estrogen, androgen and glucocorticoid receptors. |
The ferry Mazovia caught fire in the Port of Ystad, Sweden. The passengers were already on board and the vessel was preparing to leave the terminal, bound for Swinoujscie in Poland, when the engine room fire alarm was raised and one of the diesel generators started smoking. The crew turned the generator off and reported the accident to the local authorities. The fire was extinguished without causing any significant damage to the propulsion system, but the defective generator had to be repaired and the overheating brought under control. The crew worked through the whole night, but were not able to fix the problem and pass inspection by the authorities.
Initially the departure was delayed by one hour, but later the vessel remained in Ystad for the whole night. The passengers were asked to leave the vessel in the morning and informed that the voyage was canceled. They were not refunded and had to buy tickets for another ferry. A lot of complaints were sent to the office of the ferry company.
The ferry Mazovia (IMO: 9010814) has overall length of 168.00 m, moulded beam of 28.00 m and maximum draft of 6.40 m. The deadweight of the vessel is 6,124 DWT and the gross tonnage is 29,289 GRT. The ship was built in 1996 by Dok Kodja Bahari in Jakarta, Indonesia. |
Q:
How should I have asked?
I have a problem. I need to know if there is a possibility to use gold to improve my character without buying magic items or using a wish. Actually I know that there is at least one combination of spells that would work, and I am wondering if there is any other alternative.
I have asked a question about this, which was closed for being too broad/opinion based, and even after several edits, I couldn't manage to make it acceptable.
Now my problem is still unsolved so in last resort I am posting there, so that you can tell me how should I ask the question so that it will be acceptable here?
A:
OK, you still don't seem to understand why more information was needed for this to be an acceptable question.
Other people don't have all the context you do. Therefore without some parameters about the situation, they will waste their time giving pointless answers - like the guy who had to delete his big ol' answer because you finally got around to explaining "oh our custom time travel doesn't work that way." Even Brian's graft etc. suggestions I'm not sure will work in this situation, tattoos and grafts and stuff should go if it's really terminator mode but you're not even sending your body back, just your spirit, as it took a whole week for you to mention.
Just saying "100k, no gear" doesn't make an optimization question good. "Optimize me!" OK, here's a way to use that to get +10 to WIS. Is that good for your character? Or not? hard to tell if you can't be arsed to say what class and level you are. We have general guidance on optimization questions here for that reason. Optimize what exactly, for what purpose exactly?
When you say "time travel to the distant past," that could mean "you're all popping out in dinosaur times, no civilization around, food and water and normal swords will be a challenge to get."
The question still has a bit too much on what doesn't help (your byzantine time travel plot) and a bit too little on what you need to do. When you drop back into your level 1 body, were you fighting vampires in Karrnath? Or just hanging out and conducting courtly intrigue in Breland?
The question doesn't have to be long, it just has to have the important parts in it. Here's an example that wouldn't have been closed and would probably only have needed a couple clarification comments before people could answer effectively.
"I am a level 12 human swashbucker that specializes in two-weapon fighting and diplomacy. In two weeks I and my party will be traveling back in time - our souls will move back to inhabit our first level bodies from 30 years ago. We can't take gear with us, though intelligent items' spirits will come back with us and can go into gear of the time. Body modifications like tattoos won't travel but permanent spells, inherent bonuses, and things like that will. When we will go back, we will be based in courtly society in Breland but also doing cross-country adventuring, trying to survive for three years and change the past by killing the vampire king of Korrath.
I have 100,000 gp to spend. What can I do in the two weeks I have to enhance myself for this trip?"
That doesn't require lengthy explanation, it just requires insertion of the facts that bear on your problem.
In the end, if you ask questions that make people go on fishing expeditions, a) they get closed and reopened, b) people write answers that have to get rewritten and/or deleted, and c) eventually people decide that maybe answering your questions is more trouble than it's worth. So it's in your best interest to learn how to ask effective questions.
In fact, it's worth restating one of our fundamental SE best practices. Ask about your PROBLEM, not about your SOLUTION. All the parts you spent actually talking about the parameters of your specific situation and what you want to achieve were valuable. Parts where you assumed a given answer and drilled down into that generally led people down wrong paths. So don't assume an answer or a solution. Post your problem, with all the appropriate parameters that would rule in/out solutions, and let people answer with the actual solutions.
A:
Too broad simply means too broad
When a question is closed as Too broad, it's because there are too many answers that would be "correct" (not enough requirements to distinguish the right answer from another one), or simply because an appropriate answer to such a question would itself be too long because of all the ground it would need to cover.
Sometimes RPG.SE isn't the right place to ask a question simply because the nature of the question would be better served by ongoing discussion on a forum which SE in general avoids.
A:
Narrow down your criteria.
I know this is the obvious but hear me out.
Right now your question says nothing about what kind of enhancements you want for your character. There are dozens of builds that can be enhanced in dozens of ways each. Put in some background on your character. Is he a tank, a caster, a striker? What is his class, race, theme, etc.? Tell us what your end goal is and we can help you more.
Right now your only criteria is.
Old items.
100,000 gold spending limit
2 weeks to complete the process
Come into the chat and we could help you talk it out if you want.
http://chat.stackexchange.com/rooms/11/rpg-general-chat |
What Does Your Facebook Photo Say About You?
The photo you choose as your profile photo on Facebook says a lot about you.
Don't believe me? Gawker just did a whole piece on the various Facebook stock photos from "The Portrait" to the "Far and Away" to "The Family Photo," and one thing was clear: the image you have when someone feeds your name into Facebook tells them all they need to know about your life in two seconds or less.
That is quite the impression. So what does yours say? Are you alone or with others? Are you holding a beer? Your children? A wedding veil?
Choose wisely because people are judging.
My current Facebook profile photo is at left and I chose it for one reason only: the boots. Seriously. I love them. But enough about me, this is about you and what your photo is telling the world about you.
The Bikini Profile Photo: This is a desperate bid for attention. Sorry, but it is. That may not be how you mean it -- you may be proud of your weight loss or making a funny face or even love your hat in the photo -- but if you're in a bikini, you're saying look at me! I want your attention! I need validation of my beauty!
The Constant Changer: There are some people on Facebook (and I may be one of them) who change their profile photo with their mood. This may be a sign of a mercurial personality (as it is in my case). But it might also be a sign of general discontent, an inability to commit. Watch these people closely.
The "My Baby Is So Much Cuter Than Me": Gawker mentioned "The Family Photo" but this one is a special one that usually applies to moms and not dads. They make disparaging remarks about themselves and say how much cuter their kid is. It's your name on the page for god's sake! Be in the photo! Get some self-esteem, lady. You are still you, baby or not.
The "Sport" Photo: Look at me! I climbed a mountain! Look at me! I ran a race! Look at me! I am on a bicycle!
The Static Never Changer: Much like the constant changer, this one sends a message, too. And that message is: I don't like Facebook.
The No Photo: This is the person who has the default Facebook setting and is the most uncomfortable. Maybe this person is a serial killer or on the lam, but it's best to send them daily reminders or wall posts: "Hey dude! Put up a photo!" You might also try little jokes: "What? Are you so ugly you are afraid to show your face?" They will laugh and put up a photo. Or they will "unfriend" you. Either way, it's good to be rid of that freak.
The "Shadow": This one combines the best of all worlds. Maybe this person was on the beach and the sun was shining and their shadow looked vivid. Inevitably this photo says one thing and one thing only: "I am way, way cooler than you because I found a way to be in my bikini, doing a sport, using the Facebook default 'shadow,' and also be arty all in one shot." Bravo! Truly. Bravo.
What kind of photo do you have?
You Asked: Should I Eat 3 Big Meals Or Lots of Small Ones?
Compared to the traditional “three squares” approach to eating, the concept of grazing on micro-meals spread throughout the day is popular among weight-loss dieters. Likewise, athletes and bodybuilders hoping to add weight are often told to restrict their calorie intake to larger, more infrequent meals.
For both groups, the presumption is that the human body is better able to manage a steady trickle of food—as opposed to a sudden deluge of calories.
Along with a library’s worth of diet books, there’s research to back up these beliefs. Numerous studies have shown your metabolism, appetite, cholesterol levels and blood sugar may all benefit from a slow-and-steady influx of calories, rather than three big blasts.
But the newest and most in-depth science says “phooey.” In terms of weight loss, disease avoidance and lifespan, you should be eating fewer meals—not more. “Even three meals might be too much,” says Dr. Valter Longo, director of the University of Southern California’s Longevity Institute.
Longo, also of Italy’s Institute of Molecular Oncology, is at the vanguard when it comes to the study of meal timing and calorie restriction. He says there’s “no question” your goal should be to eat fewer meals, and the reasons for this are myriad.
For one thing, people almost always underestimate how many calories their food contains. “If something’s 500 calories, people guess 250,” Longo says. At the same time, life’s many distractions tend to confound our efforts to keep an eye on how much we’re putting in our mouths. Give yourself six or seven opportunities to eat throughout the day, and that’s six or seven occasions when you’re likely to overeat, Longo says.
There’s also the fact that the most convenient foods tend to be junk foods. Finding or preparing healthy meals is a challenge. So if you’re eating six times a day as opposed to three, you’re going to have a tougher time sticking with the good stuff, Longo says.
Hold on a second, you may be saying. Won’t I end up overeating at mealtime if I stick to just three meals? The answer: Yes, but not enough to make up for what you’ve skipped.
A 2013 study from Cornell University found that when people cut out a meal from their eating routine, they consumed an average of 400 fewer calories per day. Research on people who fasted every other day came up with similar results; compared to a group who ate normally, the fasters’ calorie intake was elevated on eating days, but only by about 10% or 15%—not nearly enough to make up for all the calories they’d forgone.
Longo says studies that support a grazing approach tend to be flawed in predictable ways. They often look only at the short-term effects of increasing meal frequency. While your appetite, metabolism and blood sugar might at first improve, your system will grow accustomed to your new eating schedule after a month or two. When that happens, your body will start expecting and craving food all day long instead of only around midday or dinnertime.
This is borne out in the way our food consumption habits have evolved. Even three meals a day is excessive in historical terms. Our hunter-gatherer ancestors weren’t eating more than once a day—if they were lucky. And as recently as the mid-1900s, people tended to eat just twice a day. But by the early 2000s, the average American was eating roughly five times a day. Compared to Americans of the early 1970s, we’re also eating more food—and in particular more energy-dense food—during each of our meals, according to data from the National Institutes of Health.
So what’s the ideal meal frequency? That remains a tough question, and the answer depends in part on your age, health and lots of other factors. But for most adults, two meals and a snack is a good goal, Longo says. “Something light for breakfast—150 calories—a healthy lunch, and a big dinner,” he suggests.
If you can work in a 12-hour period without food—say, finishing dinner by eight, and leaving off breakfast until eight the next morning—that stretch of fasting seems to be beneficial in terms of body weight, disease risk and longevity. But you’ll have to pay closer attention to the nutrients your food contains; fewer meals means fewer opportunities for your body to get all the vitamins and minerals it requires.
“Give yourself two months,” Longo says. While your mood, energy levels and appetite may freak out a bit at first, by the end of those two months your body will have gotten used to your new routine, he says.
/* modules/afsocket/afsocket-source.c */
/*
* Copyright (c) 2002-2014 Balabit
* Copyright (c) 1998-2012 <NAME>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
* As an additional exemption you are allowed to compile & link against the
* OpenSSL libraries as published by the OpenSSL project. See the file
* COPYING for details.
*
*/
#include "afsocket-source.h"
#include "messages.h"
#include "fdhelpers.h"
#include "gsocket.h"
#include "stats/stats-registry.h"
#include "mainloop.h"
#include "poll-fd-events.h"
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#if SYSLOG_NG_ENABLE_TCP_WRAPPER
#include <tcpd.h>
int allow_severity = 0;
int deny_severity = 0;
#endif
typedef struct _AFSocketSourceConnection
{
LogPipe super;
struct _AFSocketSourceDriver *owner;
LogReader *reader;
int sock;
GSockAddr *peer_addr;
} AFSocketSourceConnection;
static void afsocket_sd_close_connection(AFSocketSourceDriver *self, AFSocketSourceConnection *sc);
static gchar *
afsocket_sc_stats_instance(AFSocketSourceConnection *self)
{
static gchar buf[256];
gchar peer_addr[MAX_SOCKADDR_STRING];
if (!self->peer_addr)
{
/* dgram connection, which means we have no peer, use the bind address */
if (self->owner->bind_addr)
{
g_sockaddr_format(self->owner->bind_addr, buf, sizeof(buf), GSA_ADDRESS_ONLY);
return buf;
}
else
return NULL;
}
g_sockaddr_format(self->peer_addr, peer_addr, sizeof(peer_addr), GSA_ADDRESS_ONLY);
g_snprintf(buf, sizeof(buf), "%s,%s", self->owner->transport_mapper->transport, peer_addr);
return buf;
}
static LogTransport *
afsocket_sc_construct_transport(AFSocketSourceConnection *self, gint fd)
{
return transport_mapper_construct_log_transport(self->owner->transport_mapper, fd);
}
static gboolean
afsocket_sc_init(LogPipe *s)
{
AFSocketSourceConnection *self = (AFSocketSourceConnection *) s;
LogTransport *transport;
LogProtoServer *proto;
if (!self->reader)
{
transport = afsocket_sc_construct_transport(self, self->sock);
/* transport_mapper_inet_construct_log_transport() can return NULL on TLS errors */
if (!transport)
return FALSE;
proto = log_proto_server_factory_construct(self->owner->proto_factory, transport, &self->owner->reader_options.proto_options.super);
self->reader = log_reader_new(s->cfg);
log_reader_reopen(self->reader, proto, poll_fd_events_new(self->sock));
log_reader_set_peer_addr(self->reader, self->peer_addr);
}
log_reader_set_options(self->reader, s,
&self->owner->reader_options,
STATS_LEVEL1,
self->owner->transport_mapper->stats_source,
self->owner->super.super.id,
afsocket_sc_stats_instance(self));
log_pipe_append((LogPipe *) self->reader, s);
if (log_pipe_init((LogPipe *) self->reader))
{
return TRUE;
}
else
{
log_pipe_unref((LogPipe *) self->reader);
self->reader = NULL;
}
return FALSE;
}
static gboolean
afsocket_sc_deinit(LogPipe *s)
{
AFSocketSourceConnection *self = (AFSocketSourceConnection *) s;
log_pipe_unref(&self->owner->super.super.super);
self->owner = NULL;
log_pipe_deinit((LogPipe *) self->reader);
return TRUE;
}
static void
afsocket_sc_notify(LogPipe *s, gint notify_code, gpointer user_data)
{
AFSocketSourceConnection *self = (AFSocketSourceConnection *) s;
switch (notify_code)
{
case NC_CLOSE:
case NC_READ_ERROR:
{
if (self->owner->transport_mapper->sock_type == SOCK_STREAM)
afsocket_sd_close_connection(self->owner, self);
break;
}
}
}
static void
afsocket_sc_set_owner(AFSocketSourceConnection *self, AFSocketSourceDriver *owner)
{
GlobalConfig *cfg = log_pipe_get_config(&owner->super.super.super);
if (self->owner)
log_pipe_unref(&self->owner->super.super.super);
log_pipe_ref(&owner->super.super.super);
self->owner = owner;
self->super.expr_node = owner->super.super.super.expr_node;
log_pipe_set_config(&self->super, cfg);
if (self->reader)
log_pipe_set_config((LogPipe *) self->reader, cfg);
log_pipe_append(&self->super, &owner->super.super.super);
}
/*
This should be called by log_reader_free -> log_pipe_unref
because this is the control pipe of the reader
*/
static void
afsocket_sc_free(LogPipe *s)
{
AFSocketSourceConnection *self = (AFSocketSourceConnection *) s;
g_sockaddr_unref(self->peer_addr);
log_pipe_free_method(s);
}
AFSocketSourceConnection *
afsocket_sc_new(GSockAddr *peer_addr, int fd, GlobalConfig *cfg)
{
AFSocketSourceConnection *self = g_new0(AFSocketSourceConnection, 1);
log_pipe_init_instance(&self->super, cfg);
self->super.init = afsocket_sc_init;
self->super.deinit = afsocket_sc_deinit;
self->super.notify = afsocket_sc_notify;
self->super.free_fn = afsocket_sc_free;
self->peer_addr = g_sockaddr_ref(peer_addr);
self->sock = fd;
return self;
}
void
afsocket_sd_add_connection(AFSocketSourceDriver *self, AFSocketSourceConnection *connection)
{
self->connections = g_list_prepend(self->connections, connection);
}
static void
afsocket_sd_kill_connection(AFSocketSourceConnection *connection)
{
log_pipe_deinit(&connection->super);
/* Remove the circular reference between the connection and its
* reader (the connection->reader and reader->control pointers
* form a circular reference).
*/
log_pipe_unref((LogPipe *) connection->reader);
connection->reader = NULL;
log_pipe_unref(&connection->super);
}
static void
afsocket_sd_kill_connection_list(GList *list)
{
GList *l, *next;
/* NOTE: the list may contain a list of
* - deinitialized AFSocketSourceConnection instances (in case the persist-config list is
* freed), or
* - initialized AFSocketSourceConnection instances (in case keep-alive is turned off)
*/
for (l = list; l; l = next)
{
AFSocketSourceConnection *connection = (AFSocketSourceConnection *) l->data;
next = l->next;
if (connection->owner)
connection->owner->connections = g_list_remove(connection->owner->connections, connection);
afsocket_sd_kill_connection(connection);
}
}
void
afsocket_sd_set_keep_alive(LogDriver *s, gint enable)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
self->connections_kept_alive_accross_reloads = enable;
}
void
afsocket_sd_set_max_connections(LogDriver *s, gint max_connections)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
self->max_connections = max_connections;
}
static const gchar *
afsocket_sd_format_name(const LogPipe *s)
{
const AFSocketSourceDriver *self = (const AFSocketSourceDriver *)s;
static gchar persist_name[1024];
if (s->persist_name)
{
g_snprintf(persist_name, sizeof(persist_name), "afsocket_sd.%s",
self->super.super.super.persist_name);
}
else
{
gchar buf[64];
g_snprintf(persist_name, sizeof(persist_name), "afsocket_sd.(%s,%s)",
(self->transport_mapper->sock_type == SOCK_STREAM) ? "stream" : "dgram",
g_sockaddr_format(self->bind_addr, buf, sizeof(buf), GSA_FULL));
}
return persist_name;
}
static const gchar *
afsocket_sd_format_listener_name(const AFSocketSourceDriver *self)
{
static gchar persist_name[1024];
g_snprintf(persist_name, sizeof(persist_name), "%s.listen_fd",
afsocket_sd_format_name((const LogPipe *)self));
return persist_name;
}
static const gchar *
afsocket_sd_format_connections_name(const AFSocketSourceDriver *self)
{
static gchar persist_name[1024];
g_snprintf(persist_name, sizeof(persist_name), "%s.connections",
afsocket_sd_format_name((const LogPipe *)self));
return persist_name;
}
static gboolean
afsocket_sd_process_connection(AFSocketSourceDriver *self, GSockAddr *client_addr, GSockAddr *local_addr, gint fd)
{
gchar buf[MAX_SOCKADDR_STRING], buf2[MAX_SOCKADDR_STRING];
#if SYSLOG_NG_ENABLE_TCP_WRAPPER
if (client_addr && (client_addr->sa.sa_family == AF_INET
#if SYSLOG_NG_ENABLE_IPV6
|| client_addr->sa.sa_family == AF_INET6
#endif
))
{
struct request_info req;
request_init(&req, RQ_DAEMON, "syslog-ng", RQ_FILE, fd, 0);
fromhost(&req);
if (hosts_access(&req) == 0)
{
msg_error("Syslog connection rejected by tcpd",
evt_tag_str("client", g_sockaddr_format(client_addr, buf, sizeof(buf), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(local_addr, buf2, sizeof(buf2), GSA_FULL)));
return FALSE;
}
}
#endif
if (self->num_connections >= self->max_connections)
{
msg_error("Number of allowed concurrent connections reached, rejecting connection",
evt_tag_str("client", g_sockaddr_format(client_addr, buf, sizeof(buf), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(local_addr, buf2, sizeof(buf2), GSA_FULL)),
evt_tag_int("max", self->max_connections));
return FALSE;
}
else
{
AFSocketSourceConnection *conn;
conn = afsocket_sc_new(client_addr, fd, self->super.super.super.cfg);
afsocket_sc_set_owner(conn, self);
if (log_pipe_init(&conn->super))
{
afsocket_sd_add_connection(self, conn);
self->num_connections++;
log_pipe_append(&conn->super, &self->super.super.super);
}
else
{
log_pipe_unref(&conn->super);
return FALSE;
}
}
return TRUE;
}
#define MAX_ACCEPTS_AT_A_TIME 30
static void
afsocket_sd_accept(gpointer s)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
GSockAddr *peer_addr;
gchar buf1[256], buf2[256];
gint new_fd;
gboolean res;
int accepts = 0;
while (accepts < MAX_ACCEPTS_AT_A_TIME)
{
GIOStatus status;
status = g_accept(self->fd, &new_fd, &peer_addr);
if (status == G_IO_STATUS_AGAIN)
{
/* no more connections to accept */
break;
}
else if (status != G_IO_STATUS_NORMAL)
{
msg_error("Error accepting new connection",
evt_tag_errno(EVT_TAG_OSERROR, errno));
return;
}
g_fd_set_nonblock(new_fd, TRUE);
g_fd_set_cloexec(new_fd, TRUE);
res = afsocket_sd_process_connection(self, peer_addr, self->bind_addr, new_fd);
if (res)
{
if (peer_addr->sa.sa_family != AF_UNIX)
msg_notice("Syslog connection accepted",
evt_tag_int("fd", new_fd),
evt_tag_str("client", g_sockaddr_format(peer_addr, buf1, sizeof(buf1), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(self->bind_addr, buf2, sizeof(buf2), GSA_FULL)));
else
msg_verbose("Syslog connection accepted",
evt_tag_int("fd", new_fd),
evt_tag_str("client", g_sockaddr_format(peer_addr, buf1, sizeof(buf1), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(self->bind_addr, buf2, sizeof(buf2), GSA_FULL)));
}
else
{
close(new_fd);
}
g_sockaddr_unref(peer_addr);
accepts++;
}
return;
}
static void
afsocket_sd_close_connection(AFSocketSourceDriver *self, AFSocketSourceConnection *sc)
{
gchar buf1[MAX_SOCKADDR_STRING], buf2[MAX_SOCKADDR_STRING];
if (sc->peer_addr->sa.sa_family != AF_UNIX)
msg_notice("Syslog connection closed",
evt_tag_int("fd", sc->sock),
evt_tag_str("client", g_sockaddr_format(sc->peer_addr, buf1, sizeof(buf1), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(self->bind_addr, buf2, sizeof(buf2), GSA_FULL)));
else
msg_verbose("Syslog connection closed",
evt_tag_int("fd", sc->sock),
evt_tag_str("client", g_sockaddr_format(sc->peer_addr, buf1, sizeof(buf1), GSA_FULL)),
evt_tag_str("local", g_sockaddr_format(self->bind_addr, buf2, sizeof(buf2), GSA_FULL)));
log_pipe_deinit(&sc->super);
self->connections = g_list_remove(self->connections, sc);
afsocket_sd_kill_connection(sc);
self->num_connections--;
}
static void
afsocket_sd_start_watches(AFSocketSourceDriver *self)
{
IV_FD_INIT(&self->listen_fd);
self->listen_fd.fd = self->fd;
self->listen_fd.cookie = self;
self->listen_fd.handler_in = afsocket_sd_accept;
iv_fd_register(&self->listen_fd);
}
static void
afsocket_sd_stop_watches(AFSocketSourceDriver *self)
{
if (iv_fd_registered (&self->listen_fd))
iv_fd_unregister(&self->listen_fd);
}
static gboolean
afsocket_sd_setup_reader_options(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
if (self->transport_mapper->sock_type == SOCK_STREAM && !self->window_size_initialized)
{
/* distribute the window evenly between each of our possible
* connections. This is quite pessimistic and can result in very low
* window sizes. Increase that but warn the user at the same time
*/
self->reader_options.super.init_window_size /= self->max_connections;
if (self->reader_options.super.init_window_size < 100)
{
msg_warning("WARNING: window sizing for tcp sources was changed in " VERSION_3_3 ", the configuration value was divided by the value of max-connections(). The result was too small, clamping to 100 entries. Ensure you have a proper log_fifo_size setting to avoid message loss.",
evt_tag_int("orig_log_iw_size", self->reader_options.super.init_window_size),
evt_tag_int("new_log_iw_size", 100),
evt_tag_int("min_log_fifo_size", 100 * self->max_connections));
self->reader_options.super.init_window_size = 100;
}
self->window_size_initialized = TRUE;
}
log_reader_options_init(&self->reader_options, cfg, self->super.super.group);
return TRUE;
}
static gboolean
afsocket_sd_setup_transport(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
if (!transport_mapper_apply_transport(self->transport_mapper, cfg))
return FALSE;
self->proto_factory = log_proto_server_get_factory(cfg, self->transport_mapper->logproto);
if (!self->proto_factory)
{
msg_error("Unknown value specified in the transport() option, no such LogProto plugin found",
evt_tag_str("transport", self->transport_mapper->logproto));
return FALSE;
}
afsocket_sd_setup_reader_options(self);
return TRUE;
}
static gboolean
afsocket_sd_restore_kept_alive_connections(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
/* fetch persistent connections first */
if (self->connections_kept_alive_accross_reloads)
{
GList *p = NULL;
self->connections = cfg_persist_config_fetch(cfg, afsocket_sd_format_connections_name(self));
self->num_connections = 0;
for (p = self->connections; p; p = p->next)
{
afsocket_sc_set_owner((AFSocketSourceConnection *) p->data, self);
if (log_pipe_init((LogPipe *) p->data))
{
self->num_connections++;
}
else
{
AFSocketSourceConnection *sc = (AFSocketSourceConnection *)p->data;
self->connections = g_list_remove(self->connections, sc);
afsocket_sd_kill_connection((AFSocketSourceConnection *)sc);
}
}
}
return TRUE;
}
static gboolean
afsocket_sd_open_listener(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
gint sock;
gboolean res = FALSE;
/* ok, we have connection list, check if we need to open a listener */
sock = -1;
if (self->transport_mapper->sock_type == SOCK_STREAM)
{
if (self->connections_kept_alive_accross_reloads)
{
/* NOTE: this assumes that fd 0 will never be used for listening fds,
* main.c opens fd 0 so this assumption can hold */
sock = GPOINTER_TO_UINT(
cfg_persist_config_fetch(cfg, afsocket_sd_format_listener_name(self))) -
1;
}
if (sock == -1)
{
if (!afsocket_sd_acquire_socket(self, &sock))
return self->super.super.optional;
if (sock == -1 && !transport_mapper_open_socket(self->transport_mapper, self->socket_options, self->bind_addr, AFSOCKET_DIR_RECV, &sock))
return self->super.super.optional;
}
/* set up listening source */
if (listen(sock, self->listen_backlog) < 0)
{
msg_error("Error during listen()",
evt_tag_errno(EVT_TAG_OSERROR, errno));
close(sock);
return FALSE;
}
self->fd = sock;
afsocket_sd_start_watches(self);
res = TRUE;
}
else
{
if (!self->connections)
{
if (!afsocket_sd_acquire_socket(self, &sock))
return self->super.super.optional;
if (sock == -1 && !transport_mapper_open_socket(self->transport_mapper, self->socket_options, self->bind_addr, AFSOCKET_DIR_RECV, &sock))
return self->super.super.optional;
}
self->fd = -1;
/* we either have self->connections != NULL, or sock contains a new fd */
if (self->connections || afsocket_sd_process_connection(self, NULL, self->bind_addr, sock))
res = TRUE;
}
return res;
}
static void
afsocket_sd_close_fd(gpointer value)
{
gint fd = GPOINTER_TO_UINT(value) - 1;
close(fd);
}
static void
afsocket_sd_save_connections(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
if (!self->connections_kept_alive_accross_reloads || !cfg->persist)
{
afsocket_sd_kill_connection_list(self->connections);
}
else
{
GList *p;
/* for SOCK_STREAM source drivers this is a list, for
* SOCK_DGRAM this is a single connection */
for (p = self->connections; p; p = p->next)
{
log_pipe_deinit((LogPipe *) p->data);
}
cfg_persist_config_add(cfg, afsocket_sd_format_connections_name(self), self->connections,
(GDestroyNotify)afsocket_sd_kill_connection_list, FALSE);
}
self->connections = NULL;
}
static void
afsocket_sd_save_listener(AFSocketSourceDriver *self)
{
GlobalConfig *cfg = log_pipe_get_config(&self->super.super.super);
if (self->transport_mapper->sock_type == SOCK_STREAM)
{
afsocket_sd_stop_watches(self);
if (!self->connections_kept_alive_accross_reloads)
{
msg_verbose("Closing listener fd",
evt_tag_int("fd", self->fd));
close(self->fd);
}
else
{
/* NOTE: the fd is incremented by one when added to persistent config
* as persist config cannot store NULL */
cfg_persist_config_add(cfg, afsocket_sd_format_listener_name(self),
GUINT_TO_POINTER(self->fd + 1), afsocket_sd_close_fd, FALSE);
}
}
}
gboolean
afsocket_sd_setup_addresses_method(AFSocketSourceDriver *self)
{
return TRUE;
}
gboolean
afsocket_sd_init_method(LogPipe *s)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
return log_src_driver_init_method(s) &&
afsocket_sd_setup_transport(self) &&
afsocket_sd_setup_addresses(self) &&
afsocket_sd_restore_kept_alive_connections(self) &&
afsocket_sd_open_listener(self);
}
gboolean
afsocket_sd_deinit_method(LogPipe *s)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
afsocket_sd_save_connections(self);
afsocket_sd_save_listener(self);
return log_src_driver_deinit_method(s);
}
static void
afsocket_sd_notify(LogPipe *s, gint notify_code, gpointer user_data)
{
switch (notify_code)
{
case NC_CLOSE:
case NC_READ_ERROR:
{
g_assert_not_reached();
break;
}
}
}
void
afsocket_sd_free_method(LogPipe *s)
{
AFSocketSourceDriver *self = (AFSocketSourceDriver *) s;
log_reader_options_destroy(&self->reader_options);
transport_mapper_free(self->transport_mapper);
socket_options_free(self->socket_options);
g_sockaddr_unref(self->bind_addr);
self->bind_addr = NULL;
log_src_driver_free(s);
}
void
afsocket_sd_init_instance(AFSocketSourceDriver *self,
SocketOptions *socket_options,
TransportMapper *transport_mapper,
GlobalConfig *cfg)
{
log_src_driver_init_instance(&self->super, cfg);
self->super.super.super.init = afsocket_sd_init_method;
self->super.super.super.deinit = afsocket_sd_deinit_method;
self->super.super.super.free_fn = afsocket_sd_free_method;
self->super.super.super.notify = afsocket_sd_notify;
self->super.super.super.generate_persist_name = afsocket_sd_format_name;
self->setup_addresses = afsocket_sd_setup_addresses_method;
self->socket_options = socket_options;
self->transport_mapper = transport_mapper;
self->max_connections = 10;
self->listen_backlog = 255;
self->connections_kept_alive_accross_reloads = TRUE;
log_reader_options_defaults(&self->reader_options);
/* NOTE: this changes the initial window size from 100 to 1000. Reasons:
* Starting with syslog-ng 3.3, window-size is distributed evenly between
* _all_ possible connections to avoid starving. With the defaults this
* means that we get a window size of 10 messages log_iw_size(100) /
* max_connections(10), but that is incredibly slow, thus bump this value here.
*/
self->reader_options.super.init_window_size = 1000;
}
Frame-guided assembly of vesicles with programmed geometry and dimensions. In molecular self-assembly, molecules form organized structures or patterns. Controlling the self-assembly process is an important and challenging topic. Inspired by the cytoskeletal-membrane protein lipid bilayer system that determines the shape of eukaryotic cells, we developed a frame-guided assembly process as a general strategy to prepare heterovesicles with programmed geometry and dimensions. This method offers greater control over self-assembly, which may benefit the understanding of the formation mechanism as well as the functions of the cell membrane.
Woman's Century
Foundation
The founder of Woman's Century was Jessie Campbell MacIver. She had come to Canada from Scotland with her husband, a lawyer, and five children. She became involved in the National Council of Women. The first issue of Woman's Century appeared in May 1913. It was largely produced out of MacIver's home, with the help of her husband and children. The purpose was to educate women about public issues and the reforms that were needed, and to provide a forum for discussion by different women's groups. The title page described it as "A journal of education and progress for Canadian women." The monthly journal was modeled on successful British and American feminist periodicals. It was one of the very few women's rights journals published in Canada.
History
In April 1914 the NCWC made the magazine their official organ. The NCWC slowly assumed ownership of the magazine, while MacIver continued to manage and edit it. The magazine often reported on the British Dominions Woman Suffrage Union (BDWSU), an important empire-wide organization. In 1918–19 there was discussion about forming a Woman's Party, and some enthusiasts assumed that Woman's Century would become the new party's official organ. This claim was later retracted. Woman’s Century was published until 1921.
Views
An analysis of references in the magazine to consumer issues suggest that the contributors were economically conservative. They supported Canadian manufacturing and the federal state, but were not concerned with reducing inequalities of wealth. The NCWC said that the greater public responsibility that they were advocating for women was a natural extension of their role as mothers, an argument now known as "maternal feminism". In a 1917 article the Women's Art Association of Canada proclaimed its support of this view. It stated, "Service is the keynote to happiness. Every part of the Art Association's activities is based on service to the individual, to the community, and to the nation."
Elizabeth Becker wrote an article subtitled The Double Standard Shown in the Criminal Code. She noted that the maximum penalty for an employer who seduced an employee under twenty-one years old was two years, while the maximum penalty for an employee who stole from their employer was fourteen years. In 1918 Edith Lang published an article attacking the Criminal Code Amendment Act. She wrote,
The recent debate in Federal Parliament on the proposed amendments for the Criminal Code has brought to the fore the old, old injustice of the legalized double standard of morals. The long-looked-for bill to amend the clause re crimes against morality has been brought in, but it falls far short of the oft-expressed desires of the organized women ... It does not recognize that there should be one standard of morals for both sexes ... As I think of the proposed injustice, my blood is so hot and my indignation so seething that I can hardly write these words that you will read.
During World War I (1914–18) Woman's Century supported Canadian involvement. In April 1915 the magazine stated it was opposed to the International Congress of Women planned for the Hague, which led to the formation of the International Committee of Women for Permanent Peace. In late summer 1917 there was a report that the suffragists Laura Hughes and Harriet Dunlop Prenter had equated suffrage and pacifism in Ontario. MacIver harshly denied this. She wrote that the "National Union and Ontario Equal Franchise Association have again and again expressed themselves as repudiating utterly any question of premature peace. Any pacifist literature which has been received from the Hague and elsewhere has been consigned by these societies to the waste-paper basket. Women's Century again wishes very definitely to repudiate all utterances ... or any pacifist propaganda, and to reiterate once again that it stands for a Union Government, Conscription and Winning the War".
In April 1918 Woman's Century ran several stories on the lowering of moral standards caused by the war. There were said to be millions of illegitimate children in Germany. France was trying to reduce venereal disease by licensing and regulating prostitution. Gertrude Richardson wrote "war and militarism are the bitterest of all foes of womanhood, wifehood, motherhood and the home". However, she said this was just one of the results of war. Unlike other feminist writers, she did not blame the soldiers or the loose women who tempted them. She wrote "Shall we who drive them to the hell of war condemn their departure from our standard of morals? Ours is the responsibility, not only for the blighted purity, but for the maimed forms, the shattered brains, the sightless eyes."
Polyurethane tissue adhesives for annulus fibrosus repair: Mechanical restoration and cytotoxicity The microdiscectomy used for the treatment of intervertebral disc disorders leaves an open incision in the annulus fibrosus that must be sealed to avoid re-herniation and other subsequent degenerations. In this study, we developed an injectable and in situ polymerizable polyurethane adhesive as a long-term post-surgical annulus fibrosus repair strategy. We investigated the chemical structure of the urethane-based adhesive and its physico-chemical, viscoelastic, kinetic, and in vitro cytotoxic properties. The adhesive formulated from the polycarbonate diol with the highest molar mass was the one that exhibited a compressive behavior closest to that of the intervertebral disc outer region, and therefore the most suitable for restoration. This adhesive showed 18-day stability under moisture and required a preparation time of 10 h at 60°C before use. The material also adhered covalently to gelatin (without catalyst or initiator) and positively impacted cell proliferation after its polymerization, which are essential requirements for clinical translation. These findings confirmed the ability of the polyurethane adhesive to act as an annulus fibrosus sealant, although further improvements in its formulation are necessary.
/*
 * TapeHeadDisplay.cpp
*
* Created on: 23 Nov 2017
* Author: clarkson
*/
#include <od/graphics/sampling/TapeHeadDisplay.h>
namespace od
{
TapeHeadDisplay::TapeHeadDisplay(TapeHead *head, int left, int bottom, int width, int height) : HeadDisplay(head, left, bottom, width, height)
{
}
TapeHeadDisplay::~TapeHeadDisplay()
{
}
void TapeHeadDisplay::resetHead()
{
TapeHead *pTapeHead = tapeHead();
if (pTapeHead)
{
pTapeHead->reset();
}
}
void TapeHeadDisplay::drawStatus(FrameBuffer &fb)
{
switch (mZoomGadgetState)
{
case showTimeGadget:
fb.text(WHITE, mWorldLeft + 5, mWorldBottom + mHeight - 8,
"zooming: time", 10);
break;
case showGainGadget:
fb.text(WHITE, mWorldLeft + 5, mWorldBottom + mHeight - 8,
"zooming: height", 10);
break;
case gadgetHidden:
if (isPaused())
{
fb.text(WHITE, mWorldLeft + 5, mWorldBottom + mHeight - 8, "paused",
10);
}
break;
}
}
} /* namespace od */
|
The next time you visit a McDonald's in the Quad City Area, money may not qualify as a form of payment.
Instead, hungry customers could be asked to pay with some lovin'.
During Super Bowl XLIX, the fast-food giant aired a commercial that showed customers being asked if they would like to pay for their order with love. The campaign applies to McDonald's locations around the U.S., including the Quad Cities.
"You know, we just started it on February 2 and already there is a lot of enthusiasm in the restaurants," said Kevin Murphy, the owner and operator of the McDonald's at 727 Avenue of the Cities in East Moline.
In the morning, the staff open an envelope containing particular times of the day. The customer at the register during the pre-selected time is offered the question: Would you like to pay with cash or lovin'? According to Murphy, all of his customers have selected lovin'.
"We will have people turn around and give another customer a hug," Murphy said.
"In the drive thru yesterday, we had a customer that wanted to pay for the car behind her, and the next one did the same thing, and the next one did the same thing, and this went on for nine cars and until there eventually weren't any more cars," Thomson said.
The staff have fun, too. The employees around the cash register huddle together, watching as they prepare to share the love with their next customer.
When Crystal Stillwell offers her customers the chance to pay with lovin', she said the responses are priceless.
"You get a lot of people who go, 'Really? Are you kidding me?'," Stillwell said.
She said she's seen customers hug, high-five and dance.
But Murphy said nothing compares to the marriage proposal that happened at one of the area's McDonald's restaurants.
On Wednesday, February 4, Stella Correa and her son, Zeke, were offered to pay with lovin'.
"I think it's a good idea," Correa said.
If anything, employees say the campaign has brought smiles to both customers and employees.
I was asked to tell my 4 year old daughter what I loved about her. She then told me she loved that I was her mommy. It was such a sweet experience and we got free food on top of it. What a great idea McDonald’s.
The best part is my daughter had been asking about a happy meal all week but I didn't have the money to get her one. I swallowed my pride and borrowed $4 from a co-worker to get her a happy meal. This promotion allowed me to get us both a meal and save the $4 for gas for next week. Go McDonald's! |
Decay out of a Superdeformed Band Using a statistical model for the normally deformed states and for their coupling to a member of the superdeformed band, we calculate the ensemble average and the fluctuations of the intensity for decay out of the superdeformed band and of the intraband decay intensity. We show that both intensities depend on two dimensionless variables: The ratio $\Gamma^{\downarrow}/\Gamma_S$ and the ratio $\Gamma_N/d$. Here, $\Gamma^{\downarrow}$ is the spreading width for the mixing of the superdeformed and the normally deformed states, $d$ is the mean level spacing of the latter, and $\Gamma_S$ ($\Gamma_N$) is the width for gamma decay of the superdeformed state (of the normally deformed states, respectively). This parametric dependence differs from the one predicted by the approach of Vigezzi et al. where the relevant dimensionless variables are $\Gamma_N/\Gamma_S$ and $\Gamma^{\downarrow}/d$. We give analytical and numerical results for the decay intensities as functions of the dimensionless variables, including an estimate of the error incurred by performing the ensemble average, and we present fit formulas useful for the analysis of experimental data. We compare our results with the approach of Vigezzi et al. and establish the conditions under which this approach constitutes a valid approximation. Introduction The study of superdeformed (SD) bands is one of the most active fields of nuclear structure studies at high spin. The intensities of the E2 gamma transitions within a SD band show a remarkable feature: The intraband E2 transitions follow the band down with practically constant intensity. At some point, the transition intensity starts to drop sharply. This phenomenon is referred to as the decay out of a SD band. It is attributed to a mixing of the SD states and the normally deformed (ND) states with equal spin. 
The barrier separating the first and second minima of the deformation potential depends on spin and decreases with decreasing spin I. Decay out of the SD band sets in at a spin value I_0 for which penetration through the barrier is competitive with the E2 decay within the SD band, see Ref. The theoretical description of the mixing between SD and ND states uses a statistical model for the ND states first proposed by Vigezzi et al. The ND states to which the SD state is coupled lie several MeV above the ground state. The spectrum of these states is expected to be rather complex. It is, therefore, assumed that the ND states can be described in terms of random-matrix theory or, more precisely, by the Gaussian Orthogonal Ensemble (GOE) of random matrices. Likewise, the E1 decay of the ND states is calculated within the statistical model. The results of this approach have been used to analyze experimental data. It is one of the aims of such work to determine the strength of the coupling between the SD and the ND states and, thereby, properties of the barrier separating the first and the second minimum of the deformation potential. We mention in passing that recently, experimental evidence for non-statistical decay out of the SD band has been found in ⁶⁰Zn, a nucleus very different from the nuclei investigated earlier. The model of Ref. was re-examined in Ref. This was done because inspection shows that the formulae derived in Ref. apply only in the limit where the electromagnetic decay widths Γ_N and Γ_S for the ND and the SD states are small in comparison with the spreading width Γ^↓ which describes the mixing of the ND and the SD states. However, analysis of the data has shown that this condition is not met in practice. In the present paper, we follow up the work of Ref. in which attention was focused on the ensemble average of the intraband decay amplitude.
We evaluate the contribution of the fluctuating part and show that it cannot be neglected in situations of practical interest, in contrast to the claims made in Ref. We use the supersymmetry approach developed in Ref. With d the mean level spacing of the ND states, we show that the intensities for intraband decay and for decay out of the SD band depend on the dimensionless variables Γ^↓/Γ_S and Γ_N/d. We give analytical and numerical results for this dependence as well as fit formulas to facilitate the analysis of experimental data. We compare our results with those of Refs. and.

Model

We denote the first SD state with significant coupling to the ND states during the E2 decay down the SD band by |0⟩; its energy by E_0; the ND states having the same spin as the state |0⟩ by |j⟩ with j = 1, …, K and K ≫ 1; their energies by E_j. The ND states decay by statistical E1 emission. We assume that the total E1 decay widths of all ND states have the common value Γ_N. The matrix elements V_{0j} connecting the SD and the ND states are responsible for decay out of the SD band. This situation is illustrated in Fig. 1. We assume that the ND states can be modeled as eigenstates of the GOE. Then, the energies E_j follow the GOE distribution, and the V_{0j}'s are uncorrelated Gaussian distributed random variables with mean value zero and common variance v². The spreading width Γ^↓ is defined as Γ^↓ = 2π v²/d. The limit K → ∞ is taken at the end of the calculation. The Hamiltonian H of the system is a matrix of dimension K + 1, with the SD energy E_0 and the ND energies E_j on the diagonal and the couplings V_{0j} off-diagonal (j, l = 1, …, K). To H must be added the diagonal width matrix Γ^{SN} built from Γ_S and Γ_N; this defines the effective Hamiltonian H_eff. The intraband decay amplitude A_00(E) involves the width amplitude γ_S with γ_S² = Γ_S; this quantity describes the feeding of the SD state from the SD state with the next-higher spin value, and its subsequent decay into the SD state with the next-lower spin value. For simplicity, we assume that the amplitudes for feeding and decay are both given by γ_S.
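In compact form, the model described above can be written as follows (a sketch in the text's own notation; the prefactor conventions are our assumption, not verbatim from the source):

```latex
% Hamiltonian of the SD state |0> coupled to K GOE-distributed ND states |j>
H_{00} = E_0, \qquad H_{0j} = H_{j0} = V_{0j}, \qquad
H_{jl} = E_j\,\delta_{jl}, \qquad j,l = 1,\ldots,K.
% Diagonal width matrix and non-Hermitian effective Hamiltonian
\Gamma^{SN} = \operatorname{diag}(\Gamma_S,\Gamma_N,\ldots,\Gamma_N), \qquad
H_{\mathrm{eff}} = H - \frac{i}{2}\,\Gamma^{SN}.
% Spreading width in terms of the coupling variance and the ND level spacing d
\Gamma^{\downarrow} = 2\pi\,\frac{v^{2}}{d}.
```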
Similarly, the amplitudes A_{0j}(E) for decay out of the SD band involve the width amplitude γ_N, where γ_N² = Γ_N. The total intraband decay intensity I_in and the total decay intensity out of the SD band I_out are obtained from |A_00(E)|² and from the |A_{0j}(E)|², respectively. The identity I_in + I_out = 1 follows from unitarity and completeness. Except for Sections 3 and 6, we will, therefore, focus attention on I_in. Both I_in and I_out vary with the realization of the ensemble of random matrices. We are going to calculate the ensemble average of both quantities, denoted by a bar. This average involves an average over both the distribution of matrix elements V_{0j} and the distribution of eigenvalues E_j. In any given nucleus, we deal with fixed values of the V_{0j}, and with fixed positions of the ND states |j⟩. In other words, any given nucleus corresponds to a single realization of our random-matrix ensemble. The question is: How close to the actual behavior of the system will the ensemble average be? To answer this question, we also estimate the probability distribution of I_in. This allows us to determine the error incurred by using the ensemble average. While it is possible to calculate the ensemble average of I_in analytically, the calculation of the probability distribution of I_in is beyond the scope of the supersymmetry technique. Therefore, we use a two-pronged approach, employing both the supersymmetry technique and numerical simulation. We use analytical results for the average of I_in as a test for the accuracy of our numerical simulation, which is then used to estimate the probability distribution of I_in.

Example: Perturbative Approach

Various discussions have shown us that application of the GOE to the decay out of a superdeformed band involves conceptual difficulties. This fact has motivated us to include the present Section in our paper. In this Section, we present a simplified version of the GOE approach which can largely be dealt with analytically, and which is quite transparent. It is based upon a perturbative treatment of the mixing matrix elements V_{0j}.
From the outset, we emphasize that this perturbative treatment is not justified in the cases of practical interest, and that our work described in later Sections of this paper is not based upon such a perturbative approach. Thus, the present Section serves a pedagogical purpose only. We wish to exhibit the problems encountered when one applies random-matrix theory to a limited data set of a single physical system, and the answers one can give. We expand the amplitude A_{0j} in powers of V_{0j}, keep the first non-vanishing term, and focus attention on the partial width amplitude γ_j. All random variables of the GOE reside in γ_j. Moreover, decay out of the SD band is (aside from trivial common factors) governed by the quantity Σ_j |γ_j|². Therefore, we calculate the first and second moments of Σ_j |γ_j|² as GOE averages. Using the statistical independence of V_{0j} and E_j and performing first the average over the V_{0j}, we obtain an expression which we rewrite identically in terms of the level density Σ_j δ(E − E_j) in order to perform the GOE average over the energies E_j. By definition, the average of Σ_j δ(E − E_j) equals 1/d, where d is the mean level spacing. Using Γ^↓ = 2π v²/d, we find that to lowest order in the V_{0j}'s, I_out is given by the ratio Γ^↓/Γ_S. The result is remarkable on several counts. First, we observe that I_out depends on the dimensionless variable Γ^↓/Γ_S and not, as Ref. would have us expect, on Γ^↓/d and on Γ_N/Γ_S. Second, we find that I_out is independent of Γ_N. This fact is counter-intuitive because for any given realization of the GOE, the quantity I_out will surely depend on the magnitude of Γ_N. The independence is caused by our use of first-order perturbation theory and by averaging over the eigenvalues E_j. This average implies the third curious feature of the result: We smear out the positions of the E_j's completely, while in the actual experiment the decay out of a SD band will surely depend strongly on where the ND states closest to the SD state are located.
This expectation can only be reconciled with the result if the distribution of I_out about its mean value is rather broad. If correct, this statement would imply that a determination of Γ^↓ from experimental data using the statistical approach would necessarily involve large errors. To check this point we calculate the variance of Σ_j |γ_j|². A straightforward calculation shows that the variance is proportional to (Γ^↓)² (d/Γ_N). For the variance of I_out, this implies a correspondingly broad distribution. Clearly, lowest-order perturbation theory in V_{0j} is not an adequate way of dealing with our problem. Nevertheless, we learn from the example of this Section that calculating only the ensemble average of I_in or of I_out is not sufficient. We recall that in the mass region A ∼ 190, d/Γ_N is typically of the order 10². In this case, we need information on the entire probability distribution of I_in if we wish to assign an error to results obtained from comparing experimental data with the GOE average. This point was also emphasized in Ref.

Ensemble Average

Following standard procedure in the statistical approach, we write A_00(E) as the sum of the average part Ā_00(E) and the fluctuating part A_00^fluc(E). Calculation of the average part is straightforward: the ensemble average modifies the propagator through the SD state by the addition of an imaginary term iΓ^↓/2. This is well known from the theory of the optical model in compound-nucleus scattering. The decomposition entails a corresponding decomposition of I_in, I_in = I_in^av + I_in^fluc, the two terms on the right-hand side being defined in terms of |Ā_00(E)|² and of |A_00^fluc(E)|², respectively. We observe that for Γ^↓ ≪ Γ_S, this result agrees with the perturbative result for I_out in Section 3. We turn to the calculation of I_in^fluc. In Ref., it was argued that for Γ_N ≫ Γ_S and Γ_N ≫ Γ^↓, this term is negligibly small because the ND states decay overwhelmingly by E1 emission.
While this assertion is certainly qualitatively correct, the question remains: How big is the term I_in^fluc in comparison with I_in^av, and how does it depend on the parameters Γ_S, Γ_N, Γ^↓ and d of our model? To answer these questions, we use the supersymmetry formalism. We do not reproduce here the complete calculation which is lengthy but quite straightforward. It runs parallel to that of Ref. Rather, we describe a shortcut which suggests the form of the final result and which also lends plausibility to this final result. The formalism of Ref. is tailored to compound-nucleus scattering. We use the fact that formally, the present problem has much in common with compound-nucleus scattering. To display this similarity, we define the quantity S_00(E). We claim that S_00(E) can be viewed as a bona fide S-matrix element. To make this claim plausible, we consider first the case where V_{0j} = 0 for all j. Then, S_00(E) has magnitude one and the form of a one-dimensional unitary S-matrix describing elastic scattering with a resonance located at E_0 of width Γ_S. For V_{0j} ≠ 0, the coupling of the SD state to the ND states and the ability of the latter to undergo E1 decay open additional decay channels. Then, the magnitude of S_00 is smaller than unity, and S_00 may be viewed as one element of a unitary S-matrix comprising the E1 decay channels in addition to the E2 SD band, and displaying the (K + 1) resonances stemming from the SD state |0⟩ and the K ND states |j⟩ with j = 1, …, K. The actual construction of the other elements of this S-matrix is not needed, of course, since we are only interested in S_00(E). In analogy to the decomposition of A_00(E), we write S_00(E) as the sum of an average part and a fluctuating part. The fluctuating parts of A_00(E) and of S_00(E) differ only by the factor (−i). Therefore, we can use Eq. (8.10) of Ref. to calculate |A_00^fluc(E)|² by calculating |S_00^fluc(E)|². The input parameters of this equation are specified as follows. The channel indices a, b, c, d are all equal to 0.
The transmission coefficient T_0 which couples to the SD band displays a resonance at E = E_0 with width Γ_S + Γ^↓. This is due to the fact that the SD state |0⟩ is a doorway state for formation of the ND states from and for their decay into the SD band. The parameter ε in Eq. (8.10) of Ref. is given by the difference of the energy arguments of two scattering amplitudes. The energy arguments of A_00^fluc(E) and of A_00^fluc(E)* coincide, suggesting that we put ε = 0. However, since the E1 decay of the ND states is summarily accounted for in terms of their common width Γ_N, an imaginary energy difference arises which amounts to the replacement ε → −iΓ_N. As a result, E1 decay is accounted for by the appearance of an exponential factor exp(−π(Γ_N/d)(λ_1 + λ_2 + 2λ)) under the integral. All transmission coefficients except T_0 must be put equal to zero. The resulting equation expresses |A_00^fluc(E)|² as a threefold integral over the real variables λ, λ_1, λ_2. For the calculation of I_in^fluc, we need to integrate in addition over energy E. This yields eventually a fourfold integral which must be done numerically. The result coincides with the one obtained by straightforward application of the supersymmetry approach. We observe that the integrand is semi-positive definite. Therefore, I_in^fluc decreases monotonically with increasing Γ_N/d, tending to zero as Γ_N/d → ∞. This is what we expect on physical grounds. For Γ_N/d → 0, the sum I_in^av + I_in^fluc tends to unity; this follows from I_in = 1. The appearance of the exponential factor can be visualized in yet another way. Instead of the replacement ε → −iΓ_N, we could have put ε = 0 but kept in Eq. (8.10) of Ref. the product over transmission coefficients. We could have argued that this product accounts for both coupling to the SD band via T_0 and E1 decay into a large number of open decay channels described by the transmission coefficients T_l with l ≠ 0.
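A transmission coefficient with exactly this resonance behavior can be written in Breit-Wigner form (a sketch consistent with the stated total width Γ_S + Γ^↓; the paper's exact expression may differ):

```latex
% Doorway-channel transmission coefficient: resonance at E_0,
% total width \Gamma_S + \Gamma^\downarrow (assumed Breit-Wigner form)
T_0(E) = \frac{\Gamma_S\,\Gamma^{\downarrow}}
              {\left(E - E_0\right)^2 + \tfrac{1}{4}\left(\Gamma_S + \Gamma^{\downarrow}\right)^2}.
```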
Owing to the weakness of the electromagnetic force, we would have T_l ≪ 1 for all l ≠ 0, although the sum Σ′_l T_l may be significant. Excepting l = 0, we could then approximate the product by exp(−(1/2)(Σ′_l T_l)(λ_1 + λ_2 + 2λ)). The prime indicates that the term with l = 0 is omitted. Comparison of this exponential with the one given above shows that Σ′_l T_l = 2π Γ_N/d. This is a very satisfactory result. It is identical to the standard relation connecting decay width and sum over transmission coefficients in the theory of nuclear reactions, cf. Ref. Hence, we identify the total transmission coefficient T_N for E1 decay as T_N = 2π Γ_N/d. We note that in nuclei with mass A ∼ 190 where Γ_N/d ≈ 10⁻² we have T_N ≈ 0.05. This is a rather small value. We will see in the next Section that owing to this small value, decay of the ND states back into the SD channel is not altogether negligible, in contrast to the claim made in Ref. In summary, we have described how to generate an analytical expression for I_in^fluc. As a by-product, we have seen that this quantity depends on the input parameters Γ_S, Γ_N, Γ^↓ and d.

Results

We have calculated the fourfold integral numerically. We add the term I_in^av and denote the result by Ī. We have also simulated the model numerically. This was done by drawing the matrix elements V_{0j} from a Gaussian distribution centered at zero with variance v². The energies E_j were taken from an unfolded GOE spectrum with E_0 located in the center of the semicircle. Typically, we used matrices of dimension K = 100 or bigger, and we calculated N = 10⁴ or more realizations. The calculations were simplified by using a closed-form expression for A_00(E). We note that in the simulation, we calculate the total intraband intensity I_in without introducing the distinction between I_in^av and I_in^fluc. We used Ī to test the results of the simulation. The width of the probability distribution of I_in was estimated as follows.
With I(n) the value of I_in obtained in the nth realization (n = 1, …, N), two sets labelled s_i with i = 1, 2 were formed depending on whether I(n) < Ī or I(n) > Ī, respectively, each set containing N_i realizations labelled i = 1, …, N_i. For i = 1, 2 we have calculated the corresponding widths of the distribution. The results are shown in the figures; by fitting them we find a simple formula. We emphasize that this formula is not based on any theoretical arguments and presents the result of an approach based upon trial and error. In Fig. 3, the base of the logarithm is ten. We have added a superscript vig to identify the origin of this formula. We note that according to this formula, the probability distribution of I_out^vig and, therefore, also that of the intraband E2 intensity I_in^vig = 1 − I_out^vig, depend on two dimensionless variables: the ratio Γ_S/Γ_N which appears explicitly, and the ratio Γ^↓/d which determines the statistical behavior of the mixing parameters |c_m|². We note that this parametric dependence of the Vigezzi model differs from the one characterizing the exact theory of Section 4, where the relevant parameters are Γ_N/d and Γ^↓/Γ_S. The reasoning behind the Vigezzi formula leads us to expect that it renders a useful approximation to the exact result whenever Γ_S and Γ_N are sufficiently small (case of isolated resonances, Γ_S and Γ_N ≪ d). This is also suggested by the observation that I_out^vig is independent of the value of the fine-structure constant, a result which is not physically plausible. The worrisome aspect is that analysis of the data using the Vigezzi approach yields values of Γ^↓ which are about two orders of magnitude smaller than Γ_N, thereby putting the entire approach into question. This is in fact what prompted the work of Ref. as well as the present investigation. By comparing numerical results obtained for I_in^vig with those generated from our model, we now display the limits of validity as well as the expected range of validity of the approach of Vigezzi et al. The plots in Fig.
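The Monte-Carlo procedure described in this Section can be sketched in a few lines of Python. This is not the authors' code, and the closed form used for A_00(E), a doorway propagator dressed by the self-energy of the ND states, is an assumption consistent with the model's effective Hamiltonian; the regular level sequence below is only a crude stand-in for an unfolded GOE spectrum.

```python
# One realization of the statistical model (sketch, assumed amplitude form).
import numpy as np

def a00(E, E0, gamma_s, gamma_n, V, Ej):
    """Intraband amplitude A_00(E): SD propagator dressed by the ND states."""
    sigma = np.sum(V**2 / (E - Ej + 0.5j * gamma_n))   # self-energy from the |j>
    return gamma_s / (E - E0 + 0.5j * gamma_s - sigma)

rng = np.random.default_rng(1)
d, K = 1.0, 100                               # mean ND level spacing, number of ND states
gamma_s, gamma_n, spread = 0.01, 0.02, 0.05   # Gamma_S, Gamma_N, Gamma^down
v = np.sqrt(spread * d / (2.0 * np.pi))       # from Gamma^down = 2*pi*v^2/d
V = rng.normal(0.0, v, K)                     # Gaussian couplings V_0j
Ej = (np.arange(K) - K // 2) * d              # crude stand-in for an unfolded spectrum
amp = a00(0.0, 0.0, gamma_s, gamma_n, V, Ej)  # amplitude on resonance, E = E_0
```

With the couplings switched off, the modulus of the amplitude on resonance is exactly 2, which gives a quick sanity check; with couplings on, the ND self-energy only adds to the width of the resonance, so |A_00| stays bounded by 2.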
6 use the parameter Γ^↓/d of the Vigezzi model to define the abscissa and show similar behavior. Further insight is provided by considering the case 1 ≫ Γ^↓/Γ_S ≫ Γ_N/d, which combines weak mixing between SD and ND states with strong fluctuations. In this case, a modified perturbation treatment yields a simple result to lowest order. A related formula was given by Vigezzi et al. The predictions of this formula are shown as dash-dotted lines in Fig. 6. We note that whenever the condition of validity 1 ≫ Γ^↓/Γ_S ≫ Γ_N/d is met, its predictions agree very well with the exact result. Can we determine the limits of validity of the Vigezzi approach also from theoretical arguments? Limit A applies only for sufficiently large values of Γ^↓. To quantify this statement, we have calculated the terms of next order in the perturbation expansion of Section 3. We find that, for Γ_N ≪ d, these are of order (Γ^↓/Γ_S)² (d/Γ_N). Limit A is excluded if the second-order terms are at least of the same order as the first-order ones, i.e., whenever Γ^↓/Γ_S ≥ Γ_N/d or Γ_N ≤ d (Γ^↓/Γ_S). Violation of this condition accounts for the deviations between the exact results and the Vigezzi approach displayed in Fig. 6. In conclusion, we see that the approach by Vigezzi et al. is subject to two constraints. The obvious one is that it deals with isolated resonances. This implies Γ_N ≪ d. The second, less obvious one is due to the constraint Γ_N ≤ d (Γ^↓/Γ_S).

Summary

In the present paper we have calculated the ensemble average and properties of the distribution function of the intraband E2 decay intensity for a statistical process leading to decay out of a SD band. We have shown that the entire distribution function depends only on the two dimensionless ratios Γ^↓/Γ_S and Γ_N/d. Writing the intraband intensity as the sum of two terms, given in terms of the average decay amplitude and of its fluctuating part, respectively, we have |
/**
* The MIT License
* Copyright (c) 2015 Estonian Information System Authority (RIA), Population Register Centre (VRK)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
package ee.ria.xroad.proxy.testsuite;
import ee.ria.xroad.common.PortNumbers;
import ee.ria.xroad.common.util.StartStop;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.output.NullOutputStream;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.AbstractHandler;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import static ee.ria.xroad.common.util.CryptoUtils.DEFAULT_DIGEST_ALGORITHM_ID;
import static ee.ria.xroad.common.util.MimeUtils.HEADER_HASH_ALGO_ID;
@Slf4j
class DummyServerProxy extends Server implements StartStop {
DummyServerProxy() {
ServerConnector connector = new ServerConnector(this);
connector.setName("ClientConnector");
connector.setHost("127.0.0.2");
connector.setPort(PortNumbers.PROXY_PORT);
addConnector(connector);
setHandler(new ServiceHandler());
}
private class ServiceHandler extends AbstractHandler {
@Override
public void handle(String target, Request baseRequest,
HttpServletRequest request, HttpServletResponse response)
throws IOException, ServletException {
log.debug("Proxy simulator received request {}, contentType={}",
target, request.getContentType());
response.addHeader("Connection", "close");
response.addHeader(HEADER_HASH_ALGO_ID, DEFAULT_DIGEST_ALGORITHM_ID);
// check if the test case implements custom service response
AbstractHandler handler = currentTestCase().getServerProxyHandler();
if (handler != null) {
handler.handle(target, baseRequest, request, response);
return;
}
// Read all of the request (and copy it to /dev/null).
IOUtils.copy(request.getInputStream(), new NullOutputStream());
if (currentTestCase().getResponseFile() != null) {
createResponseFromFile(currentTestCase().getResponseFile(),
baseRequest, response);
} else {
log.error("Unknown request {}", target);
}
}
private void createResponseFromFile(String fileName, Request baseRequest,
HttpServletResponse response) {
String file = MessageTestCase.QUERIES_DIR + '/' + fileName;
try {
response.setContentType(
currentTestCase().getResponseContentType());
response.setStatus(HttpServletResponse.SC_OK);
try (InputStream fileIs = new FileInputStream(file);
InputStream responseIs =
currentTestCase().changeQueryId(fileIs)) {
IOUtils.copy(responseIs, response.getOutputStream());
}
} catch (FileNotFoundException e) {
log.error("Could not find answer file: " + file, e);
return;
} catch (Exception e) {
                log.error("An error has occurred when sending response "
+ "from file " + file, e);
}
baseRequest.setHandled(true);
}
}
private static MessageTestCase currentTestCase() {
return ProxyTestSuite.currentTestCase;
}
}
|
Head and Neck Cystic Lesions: A Cytology Review of Common and Uncommon Entities Background: Cystic lesions of the head and neck are a diagnostic challenge since they are seen in the clinical presentation of a wide variety of conditions. Herein, common and uncommon entities that present as cystic lesions in the head and neck are reviewed. Summary: In this study, peer-reviewed articles were selected using the databases PubMed, Google, Google Scholar, and Scopus. Emphasis was placed on peer-reviewed articles that discuss the cytomorphology and differential diagnosis of entities that present as cystic lesions of the head and neck. In the anterior neck, both benign and malignant neoplasms can present, including papillary thyroid carcinoma (PTC), thyroid adenomatoid nodule, parathyroid cysts, and thyroglossal cysts. In the lateral neck, branchial cleft cyst, PTC, ectopic thyroid cyst, and squamous cell carcinomas (human papilloma virus- and non-human papilloma virus-related) are common. Age over 40 years raises the possibility of malignancy. In the deep neck, mostly benign cystic entities occur, such as pleomorphic adenoma, paraganglioma, schwannoma, branchial cyst, epidermal inclusion cyst, and lymphoepithelial cyst. Lesions with squamous cell features can pose diagnostic dilemmas. Conclusion: Cytologic examination of head and neck cysts can provide valuable information regarding the nature of the cystic lesions. Information about anatomic site and clinical history can assist with the differential diagnoses. Ancillary studies can improve the diagnosis in some cases. Each case should be evaluated very carefully since there is a wide variety of congenital conditions, infectious/inflammatory conditions, benign neoplasms, and primary and secondary malignancies presenting as a cystic mass in the head and neck. |
import Command from '../command';
import {ConsoleAutocompleteOptions} from "../../console/console";
import path from "path";
const scriptsFolder = path.join(__dirname, "resources/scripts")
class RunCommand extends Command {
onPerform(args: string[]): void {
if(args.length !== 1) {
this.console.logger.log("Usage: " + this.getUsage())
return
}
this.console.runScript(args[0], 0)
}
onTabComplete(args: string[], options: ConsoleAutocompleteOptions) {
return this.autocompletePath(args[args.length - 1], scriptsFolder, ".script", options)
}
getName() {
return "run"
}
getUsage() {
return "run <script name>"
}
getDescription() {
return "Run script"
}
}
export default RunCommand; |
// ListKiosks returns a list of active kiosks.
func (s *DisplayServer) ListKiosks(c context.Context, x *google_protobuf.Empty) (*pb.ListKiosksResponse, error) {
s.mux.Lock()
defer s.mux.Unlock()
response := &pb.ListKiosksResponse{}
for _, k := range s.kiosks {
if k != nil {
response.Kiosks = append(response.Kiosks, k)
}
}
return response, nil
} |
// GetSymbols returns an array of symbols supported by Ngkex
func (h *NGKEX) GetSymbols() ([]Symbol, error) {
	var symbols []Symbol
	urlPath := fmt.Sprintf("%sopenapi/v%s/%s", h.APIUrl, ngkexAPIVersion, ngkexSymbols)
	err := h.SendHTTPRequest(urlPath, &symbols)
	return symbols, err
} |
Questionnaire on class management readiness for future teachers Introduction. The relevance of the research is determined by the special role of the class teacher at the present stage of the development of Russian education. Preparing future teachers at university to implement the social and pedagogical function of class management requires the development of new approaches to the educational process, in particular the provision of informative diagnostic methods. Materials and methods. The study used a method of experimental modeling of a questionnaire on class management readiness. The basis for creating the questionnaire was the idea of the structural organization of readiness for class management: motivational-need, cognitive, activity, and emotional-reflexive components. Statistical methods: Spearman's rank correlation coefficient; cluster analysis. Results. The stimulus materials of the questionnaire were developed, consisting of four subscales corresponding to the component composition of readiness for class management. The subscales of the questionnaire do not duplicate one another and have independent diagnostic value: all rank correlation coefficients are significant at the 0.01 level, while none of the indicators are close in modulus to 1. Significant correlations between the diagnostic results of each subscale and the final readiness score indicate the internal consistency of the questionnaire as a whole. Conclusion. The initial psychometric check of the questionnaire on future teachers' readiness for class management showed that it can acceptably be used to assess this construct. The questionnaire compensates for the shortage of diagnostic tools for examining the readiness of university students to implement the educational function at school, and can also be used to assess the effectiveness of some aspects of the educational process at university.
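The subscale consistency check above relies on Spearman's rank correlation. As a purely hypothetical illustration (the sample data below are invented, not from the study), here is a minimal pure-Python computation of the coefficient for tie-free samples:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation for tie-free samples:
    rho = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    n = len(xs)

    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly concordant scores give rho = 1, discordant give -1.
print(spearman_rho([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # → 1.0
print(spearman_rho([1, 2, 3], [3, 2, 1]))                   # → -1.0
```

Real psychometric data usually contain tied scores, for which the rank-averaging (or Pearson-on-ranks) form of the coefficient is needed instead.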
Membrane topology of guinea pig cytochrome P450 17 alpha revealed by a combination of chemical modifications and mass spectrometry. Cytochrome P450s in endoplasmic reticulum membranes function in the hydroxylation of exogenous and endogenous hydrophobic substrates concentrated in the membranes. The reactions require electron supply from NADPH-cytochrome P450 reductase in the same membranes. The membranes play important roles in the reaction of cytochrome P450. The membrane topology of guinea pig P450 17alpha was investigated on the basis of differences in reactivity to hydrophilic chemical modification reagents between the enzyme in the detergent-solubilized state and in proteoliposomes. Recombinant guinea pig cytochrome P450 17alpha was purified from Escherichia coli and incorporated into liposome membranes. Lysine residues in the detergent-solubilized P450 17alpha and in the proteoliposomes were acetylated with acetic anhydride at pH 9.0, and the acidic amino acid residues were conjugated with glycinamide at pH 5.0 with the aid of a coupling reagent, 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride. The modifications were performed under conditions where the denatured form, P420, was not induced. The modified P450 17alpha preparations were digested by trypsin, and the molecular weights of the peptide fragments were determined by MALDI-TOF mass spectrometry. From the increases in the molecular weights of the peptides, the positions of the modifications could be deduced. In the detergent-solubilized state, 11 lysine residues and 7 acidic amino acid residues were modified, among which lysine residues at positions 29, 59, 490, and 492 and acidic residues at 211, 212, and/or 216 were not modified in the proteoliposomes. Both the N- and C-terminal domains and the putative F-G loop were concluded to be in or near the membrane-binding domains of P450 17alpha.
July 22 (UPI) -- Eleven taxi drivers in South Africa were shot and killed after attending a funeral on Saturday evening.
A group of 17 taxi drivers -- all associated with the Ivory-Park Taxi Association in Gauteng -- were traveling to Johannesburg together in a taxi from a funeral for one of their colleagues when a group of unknown men opened fire on the vehicle, the South African Police Service said.
In addition to the 11 deaths, four people were critically injured and are being treated at a hospital and two people were unharmed.
SAPS General Khehla Sitole launched a 72-hour action plan involving multiple units to track down the suspects.
"The Acting Provincial Commissioner of Kwa-Zulu Natal, Lieutenant General Nhlanhla Mkhwanazi and his management team are overseeing the investigation into these murders. We will await for the investigation to advance before speculating on a motive," Sitole said.
The attack happened between the towns of Colenso and Weenen where a police representative said there has been a history of taxi violence, according to Sky News. |
/**
* @ClassName TemplateCacheManager
* @Description
* @Author Wangjunkai
* @Date 2019/8/26 15:33
**/
public class TemplateCacheManager implements CacheManager {
public TemplateCacheManager(RedisTemplate<String, Object> redisTemplate){
super();
this.redisTemplate = redisTemplate;
init(redisTemplate);
}
protected RedisTemplate<String, Object> redisTemplate;
//String
protected ValueOperations<String, Object> valueOperations;
//map
protected HashOperations<String, String, Object> hashOperations;
//list
protected ListOperations<String, Object> listOperations;
// unordered set
protected SetOperations<String, Object> setOperations;
// sorted set
protected ZSetOperations<String, Object> zSetOperations;
void init(RedisTemplate<String, Object> redisTemplate){
valueOperations = redisTemplate.opsForValue();
hashOperations = redisTemplate.opsForHash();
listOperations = redisTemplate.opsForList();
setOperations = redisTemplate.opsForSet();
zSetOperations = redisTemplate.opsForZSet();
}
@Override
public Object get(String key) {
return valueOperations.get(key);
}
@Override
public void set(String key, Object value) {
valueOperations.set(key, value);
}
@Override
public void delete(String key) {
redisTemplate.delete(key);
}
@Override
public Object hGet(String key, String hKey){
return hashOperations.get(key, hKey);
}
@Override
public Map hGet(String key){
return hashOperations.entries(key);
}
@Override
public void hSet(String key, String hKey, Object value){
hashOperations.put(key, hKey, value);
}
@Override
public void hDelete(String key, String hKey){
hashOperations.delete(key, hKey);
}
} |
The effect of Gd-DTPA on tissue T1 was studied in 10 patients with intracranial tumors, including 4 meningiomas, 2 gliomas, 3 metastatic tumors, and 1 brain abscess. T1 values were comparatively measured in various portions of tumor tissue, peritumoral edema, and normal brain before and after intravenous injection of 0.1 mmol Gd-DTPA per kg body weight, at times ranging between 5 and 60 minutes. In vivo T1 was determined by means of the field-focusing technique of a FONAR QED 80-alpha system (0.05 T). T1 values of blood samples were also serially measured and used to estimate the enhancement pattern in the various tumors. Meningiomas showed the maximal decrease of T1 at 5 minutes after Gd-DTPA administration and a rapid recovery of T1 closely paralleling that of whole blood. On the other hand, tumors such as gliomas and metastases showed very slow recovery of T1 after a similar T1 decrease immediately following administration of Gd-DTPA. In the portions of peritumoral edema and normal brain, there were no significant changes in tissue T1 after Gd-DTPA administration. To evaluate the clearance pattern of Gd-DTPA in tumor tissue, the tissue-blood ratio (TBR) was calculated from the relaxation values at 5 and 30 minutes after Gd-DTPA administration. The ratios of relaxation rates, TBR 30/TBR 5, ranged from 1.0 to 1.5 in meningiomas, whereas they ranged from 1.5 to 3.0 in the other types of brain tumors. The value of TBR 30/TBR 5 evidently depends upon differences in the histological types of the tumors. (ABSTRACT TRUNCATED AT 250 WORDS)
package com.hust.rbacbackend.controller;
import com.hust.rbacbackend.component.ResultInfo;
import com.hust.rbacbackend.entity.Role;
import com.hust.rbacbackend.entity.User;
import com.hust.rbacbackend.service.api.RoleService;
import com.hust.rbacbackend.service.api.UserService;
import com.hust.rbacbackend.vo.RoleIdListVO;
import com.hust.rbacbackend.vo.UserUpdateVO;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.List;
@RestController
@RequestMapping("/users")
@CrossOrigin
public class UserController {
@Autowired
UserService userService;
@Autowired
RoleService roleService;
@PostMapping("/{uid}/roles")
public ResultInfo addRoles(@PathVariable("uid") Integer id,@RequestBody RoleIdListVO roleIdListVO){
if(id==null){
throw new IllegalArgumentException("User id must not be null");
}
userService.addRoles(id,roleIdListVO.getRoleIdList());
return ResultInfo.success(200,"POST request succeeded",null);
}
@GetMapping("/{uid}")
public ResultInfo queryUser(@PathVariable("uid") Integer uid){
if(uid==null){
throw new IllegalArgumentException("User id must not be null");
}
User user = userService.queryUser(uid);
if(user==null){
throw new IllegalArgumentException("User does not exist");
}
return ResultInfo.success(200,"Query succeeded",user);
}
@GetMapping("")
public ResultInfo queryAllUsers(){
List<User> list=userService.queryAllUsers();
return ResultInfo.success(200,"Operation succeeded",list);
}
@DeleteMapping("/{uid}/roles")
public ResultInfo removeRoles(@PathVariable("uid") Integer uid,@RequestBody RoleIdListVO roleIdList){
userService.delRoles(uid,roleIdList.getRoleIdList());
return ResultInfo.success(200,"Operation succeeded",null);
}
@PutMapping("/{uid}")
public ResultInfo updateUser(@PathVariable("uid") Integer id,@RequestBody UserUpdateVO userUpdateVO){
if(userUpdateVO==null||userUpdateVO.getUser()==null){
throw new IllegalArgumentException("Please provide user information");
}
User user = userUpdateVO.getUser();
RoleIdListVO roleIdList = userUpdateVO.getRoleIdList();
user.setId(id);
userService.updateUser(user);
userService.delRoles(user);
if(roleIdList!=null&&roleIdList.getRoleIdList().size()>0){
userService.addRoles(user.getId(),roleIdList.getRoleIdList());
}
return ResultInfo.success("Update succeeded");
}
}
|
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
#include "cuda_langunit.hpp"
//#include "cuda_cublas.hpp"
#include "cuda_cudnn.hpp"
using namespace nnfusion::kernels;
// Header
LU_DEFINE(header::cuda, "#include <cuda.h>\n#include <cuda_runtime.h>\n#include <cooperative_groups.h>\n");
LU_DEFINE(header::cublas, "#include <cublas_v2.h>\n");
LU_DEFINE(header::cudnn, "#include <cudnn.h>\n");
LU_DEFINE(header::superscaler, "#include \"superscaler.h\"\n");
LU_DEFINE(header::cupti, "#include <cupti.h>\n");
LU_DEFINE(header::cuda_prof_api, "#include <cuda_profiler_api.h>\n");
LU_DEFINE(header::cuda_fp16, "#include <cuda_fp16.h>\n");
LU_DEFINE(header::cub, "#include <cub/cub.cuh>\n");
LU_DEFINE(header::math_constants, "#include <math_constants.h>\n");
// Macro
LU_DEFINE(macro::HALF_MAX,
R"(#ifndef __HALF_COMPARE_EX__
#define __HALF_COMPARE_EX__
inline __device__ half max(half x, half y) { return x > y ? x : y; }
inline __device__ half min(half x, half y) { return x < y ? x : y; }
#endif
)");
LU_DEFINE(
macro::CUDA_SAFE_CALL_NO_THROW,
R"(#define CUDA_SAFE_CALL_NO_THROW(x) \
do \
{ \
cudaError_t result = (x); \
if (result != cudaSuccess) \
{ \
const char* msg = cudaGetErrorString(result); \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #x " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << msg; \
std::cout << safe_call_ss.str() << std::endl; \
} \
} while (0)
)");
LU_DEFINE(
macro::CUDA_SAFE_CALL,
R"(#define CUDA_SAFE_CALL(x) \
do \
{ \
cudaError_t result = (x); \
if (result != cudaSuccess) \
{ \
const char* msg = cudaGetErrorString(result); \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #x " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << msg; \
throw std::runtime_error(safe_call_ss.str()); \
} \
} while (0)
)");
LU_DEFINE(
macro::CUDNN_SAFE_CALL_NO_THROW,
R"(#define CUDNN_SAFE_CALL_NO_THROW(func) \
do \
{ \
cudnnStatus_t e = (func); \
if (e != CUDNN_STATUS_SUCCESS) \
{ \
const char* msg = cudnnGetErrorString(e); \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #func " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << msg; \
std::cout << safe_call_ss.str() << std::endl; \
} \
} while (0)
)");
LU_DEFINE(
macro::CUDNN_SAFE_CALL,
R"(#define CUDNN_SAFE_CALL(func) \
do \
{ \
cudnnStatus_t e = (func); \
if (e != CUDNN_STATUS_SUCCESS) \
{ \
const char* msg = cudnnGetErrorString(e); \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #func " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << msg; \
throw std::runtime_error(safe_call_ss.str()); \
} \
} while (0)
)");
LU_DEFINE(
macro::CUBLAS_SAFE_CALL_NO_THROW,
R"(#define CUBLAS_SAFE_CALL_NO_THROW(func) \
do \
{ \
cublasStatus_t e = (func); \
if (e != CUBLAS_STATUS_SUCCESS) \
{ \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #func " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << e; \
std::cout << safe_call_ss.str() << std::endl; \
} \
} while (0)
)");
LU_DEFINE(
macro::CUBLAS_SAFE_CALL,
R"(#define CUBLAS_SAFE_CALL(func) \
do \
{ \
cublasStatus_t e = (func); \
if (e != CUBLAS_STATUS_SUCCESS) \
{ \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #func " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << e; \
throw std::runtime_error(safe_call_ss.str()); \
} \
} while (0)
)");
LU_DEFINE(
macro::CUDA_SAFE_LAUNCH,
R"(#define CUDA_SAFE_LAUNCH(x) \
do \
{ \
(x); \
cudaError_t result = cudaGetLastError(); \
if (result != cudaSuccess) \
{ \
const char* msg = cudaGetErrorString(result); \
std::stringstream safe_call_ss; \
safe_call_ss << "\nerror: " #x " failed with error" \
<< "\nfile: " << __FILE__ << "\nline: " << __LINE__ << "\nmsg: " << msg; \
throw std::runtime_error(safe_call_ss.str()); \
} \
} while (0)
)");
LU_DEFINE(macro::CUPTI_CALL,
R"(#define CUPTI_CALL(call) \
do { \
CUptiResult _status = call; \
if (_status != CUPTI_SUCCESS) { \
const char *errstr; \
cuptiGetResultString(_status, &errstr); \
fprintf(stderr, "%s:%d: error: function %s failed with error %s.\n", \
__FILE__, __LINE__, #call, errstr); \
exit(-1); \
} \
} while (0)
)");
// Declaration
// TODO: need special code for this global_cublas_handle
LU_DEFINE(declaration::num_SMs, "int num_SMs;\n");
LU_DEFINE(declaration::global_cublas_handle, "cublasHandle_t global_cublas_handle;\n");
LU_DEFINE(declaration::global_cudnn_handle, "cudnnHandle_t global_cudnn_handle;\n");
LU_DEFINE(
declaration::division_by_invariant_multiplication,
R"(__device__ __forceinline__ int division_by_invariant_multiplication(int value, int magic, int shift)
{
int result;
asm("{\n\t"
".reg .pred p;\n\t"
".reg .u64 res64;\n\t"
".reg .u32 lo32, hi32;\n\t"
"setp.ne.s32 p, %2, 1;\n\t"
"mul.wide.u32 res64, %1, %2;\n\t"
"mov.b64 {lo32, hi32}, res64;\n\t"
"selp.u32 hi32, hi32, %1, p;\n\t"
"shr.u32 %0, hi32, %3;\n\t"
"}" : "=r"(result) : "r"(value), "r"(magic), "r"(shift));
return result;
}
)");
LU_DEFINE(
declaration::rocm_division_by_invariant_multiplication,
R"(__device__ __forceinline__ int division_by_invariant_multiplication(int value, int magic, int shift)
{
long long res64 = ((long long)(unsigned int)value) * ((long long)(unsigned int)magic);
int hi32 = res64 >> 32;
if(magic == 1)
hi32 = value;
int result = hi32 >> shift;
return result;
}
)");
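Both device functions above implement division by a loop-invariant divisor as a widening multiply by a precomputed "magic" constant followed by a shift, with `magic == 1` special-casing divisors that are powers of two. A rough host-side Python model of the same idea (the magic-number derivation below is the standard round-up construction, shown for illustration; it is not necessarily the exact scheme this framework uses to generate its constants, and it relies on Python's big integers rather than 32-bit registers):

```python
def magic_for(div, bits=32):
    """Precompute (magic, shift) so that for 0 <= n < 2**bits:
    n // div == high_word(n * magic) >> shift.
    Powers of two (including 1) use magic == 1, matching the kernel's
    special-cased path where hi32 is just the value itself."""
    if div & (div - 1) == 0:                  # power of two: plain shift
        return 1, div.bit_length() - 1
    shift = (div - 1).bit_length()            # ceil(log2(div))
    magic = -((-1 << (bits + shift)) // div)  # ceil(2**(bits+shift) / div)
    return magic, shift

def div_by_invariant_mul(value, magic, shift, bits=32):
    """Python mirror of the device function: widening multiply,
    keep the high word, then shift right."""
    hi = value if magic == 1 else (value * magic) >> bits
    return hi >> shift

magic, shift = magic_for(3)
print(div_by_invariant_mul(4294967295, magic, shift))  # → 1431655765
```

Note that for non-power-of-two divisors the round-up magic can exceed 32 bits; a real 32-bit GPU implementation has to pick constants that fit the register width (or apply the usual add-and-shift correction), which the big-integer model above sidesteps.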
LU_DEFINE(declaration::mod16,
R"(__device__ __forceinline__ int mod16(int numerator, int div, int maxdiv)
{
int res;
asm("vmad.s32.u32.u32 %0, -%1.h0, %2.h0, %3;" : "=r"(res) : "r"(div), "r"(maxdiv), "r"(numerator));
return res;
}
)");
LU_DEFINE(declaration::mad16,
R"(__device__ __forceinline__ int mad16(int a, int b, int c)
{
int res;
asm("vmad.s32.u32.u32 %0, %1.h0, %2.h0, %3;" : "=r"(res) : "r"(a), "r"(b), "r"(c));
return res;
}
)");
LU_DEFINE(
declaration::load,
R"(__device__ __forceinline__ char load(const char* __restrict__ in, int i=0, bool b=true)
{
char v = 0;
if (b)
{
v = __ldg(in + i);
}
return v;
}
__device__ __forceinline__ float load(const float* __restrict__ in, int i=0, bool b=true)
{
float v = 0.0f;
if (b)
{
v = __ldg(in + i);
}
return v;
}
__device__ __forceinline__ int32_t load(const int32_t* __restrict__ in, int i=0, bool b=true)
{
int32_t v = 0;
if (b)
{
v = __ldg(in + i);
}
return v;
}
__device__ __forceinline__ int64_t load(const int64_t* __restrict__ in, int i=0, bool b=true)
{
int64_t v = 0;
if (b)
{
v = __ldg(in + i);
}
return v;
}
)");
LU_DEFINE(declaration::cuda_fp16_scale,
R"(
__global__ void nnfusionHalfScaleKernel(half *x, half *alpha, size_t count)
{
size_t offset = threadIdx.x + blockIdx.x * blockDim.x;
x += offset;
if (offset < count)
{
*x *= *alpha;
}
}
void nnfusionHalfScale(half *x, half *alpha, size_t len)
{
nnfusionHalfScaleKernel<<<(len+255)/256, 256>>>(x, alpha, len);
}
)");
LU_DEFINE(declaration::cuda_convert_template,
R"(template<typename InT, typename OutT>
__device__ __forceinline__ OutT convert(InT x0)
{
return x0;
}
template <>
__device__ __forceinline__ half convert(int64_t a)
{
return __ll2half_rn((long long)a);
}
template <>
__device__ __forceinline__ int64_t convert(half a)
{
return __half2ll_rn(a);
}
)");
LU_DEFINE_EXTEND(declaration::cuda_reduce_primitive,
R"(
#if CUDA_VERSION < 9000
#define CREATE_SHFL_MASK(mask, predicate) mask = 0u;
#else
#define FULL_WARP_MASK 0xFFFFFFFF
#define CREATE_SHFL_MASK(mask, predicate) \
mask = __ballot_sync(FULL_WARP_MASK, (predicate))
#endif
__forceinline__ __device__ float CudaShuffleDownSync(unsigned mask, float val,
int delta,
int width = 32) {
#if CUDA_VERSION < 9000
return __shfl_down(val, delta, width);
#else
return __shfl_down_sync(mask, val, delta, width);
#endif
}
__device__ static float reduceMax(float val, int tid, int blockSize, float* shm) {
unsigned mask = 0u;
CREATE_SHFL_MASK(mask, tid < blockSize);
val = max(val, CudaShuffleDownSync(mask, val, 16));
val = max(val, CudaShuffleDownSync(mask, val, 8));
val = max(val, CudaShuffleDownSync(mask, val, 4));
val = max(val, CudaShuffleDownSync(mask, val, 2));
val = max(val, CudaShuffleDownSync(mask, val, 1));
if (tid < warpSize) shm[tid] = 0.;
__syncthreads();
if (tid % warpSize == 0) shm[tid / warpSize] = val;
__syncthreads();
CREATE_SHFL_MASK(mask, tid < warpSize);
if (tid < warpSize) {
val = shm[tid];
val = max(val, CudaShuffleDownSync(mask, val, 16));
val = max(val, CudaShuffleDownSync(mask, val, 8));
val = max(val, CudaShuffleDownSync(mask, val, 4));
val = max(val, CudaShuffleDownSync(mask, val, 2));
val = max(val, CudaShuffleDownSync(mask, val, 1));
}
return val;
}
__device__ static float reduceSum(float val, int tid, int blockSize, float* shm) {
unsigned mask = 0u;
CREATE_SHFL_MASK(mask, tid < blockSize);
val += CudaShuffleDownSync(mask, val, 16);
val += CudaShuffleDownSync(mask, val, 8);
val += CudaShuffleDownSync(mask, val, 4);
val += CudaShuffleDownSync(mask, val, 2);
val += CudaShuffleDownSync(mask, val, 1);
if (tid < warpSize) shm[tid] = 0.;
__syncthreads();
if (tid % warpSize == 0) shm[tid / warpSize] = val;
__syncthreads();
CREATE_SHFL_MASK(mask, tid < warpSize);
if (tid < warpSize) {
val = shm[tid];
val += CudaShuffleDownSync(mask, val, 16);
val += CudaShuffleDownSync(mask, val, 8);
val += CudaShuffleDownSync(mask, val, 4);
val += CudaShuffleDownSync(mask, val, 2);
val += CudaShuffleDownSync(mask, val, 1);
}
return val;
}
)",
R"(
#if CUDA_VERSION < 9000
#define CREATE_SHFL_MASK(mask, predicate) mask = 0u;
#else
#define FULL_WARP_MASK 0xFFFFFFFF
#define CREATE_SHFL_MASK(mask, predicate) \
mask = __ballot_sync(FULL_WARP_MASK, (predicate))
#endif
__forceinline__ __device__ float CudaShuffleDownSync(unsigned mask, float val,
int delta,
int width = 32) {
#if CUDA_VERSION < 9000
return __shfl_down(val, delta, width);
#else
return __shfl_down_sync(mask, val, delta, width);
#endif
}
__device__ static float reduceMax(float val, int tid, int blockSize, float* shm) {
unsigned mask = 0u;
CREATE_SHFL_MASK(mask, tid < blockSize);
val = max(val, CudaShuffleDownSync(mask, val, 16));
val = max(val, CudaShuffleDownSync(mask, val, 8));
val = max(val, CudaShuffleDownSync(mask, val, 4));
val = max(val, CudaShuffleDownSync(mask, val, 2));
val = max(val, CudaShuffleDownSync(mask, val, 1));
if (tid < warpSize) shm[tid] = 0.;
__syncthreads();
if (tid % warpSize == 0) shm[tid / warpSize] = val;
__syncthreads();
CREATE_SHFL_MASK(mask, tid < warpSize);
if (tid < warpSize) {
val = shm[tid];
val = max(val, CudaShuffleDownSync(mask, val, 16));
val = max(val, CudaShuffleDownSync(mask, val, 8));
val = max(val, CudaShuffleDownSync(mask, val, 4));
val = max(val, CudaShuffleDownSync(mask, val, 2));
val = max(val, CudaShuffleDownSync(mask, val, 1));
}
return val;
}
__device__ static float reduceSum(float val, int tid, int blockSize, float* shm) {
unsigned mask = 0u;
CREATE_SHFL_MASK(mask, tid < blockSize);
val += CudaShuffleDownSync(mask, val, 16);
val += CudaShuffleDownSync(mask, val, 8);
val += CudaShuffleDownSync(mask, val, 4);
val += CudaShuffleDownSync(mask, val, 2);
val += CudaShuffleDownSync(mask, val, 1);
if (tid < warpSize) shm[tid] = 0.;
__syncthreads();
if (tid % warpSize == 0) shm[tid / warpSize] = val;
__syncthreads();
CREATE_SHFL_MASK(mask, tid < warpSize);
if (tid < warpSize) {
val = shm[tid];
val += CudaShuffleDownSync(mask, val, 16);
val += CudaShuffleDownSync(mask, val, 8);
val += CudaShuffleDownSync(mask, val, 4);
val += CudaShuffleDownSync(mask, val, 2);
val += CudaShuffleDownSync(mask, val, 1);
}
return val;
}
)",
"");
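The shuffle-based reducers above perform a log2-depth tree reduction inside each warp (offsets 16, 8, 4, 2, 1), then combine one partial result per warp through shared memory. A host-side Python sketch of the same halving pattern (pure illustration of the dataflow, not the CUDA execution model: real warps run the steps in lockstep rather than in a loop over lanes):

```python
def tree_reduce(lanes, op):
    """Mimic the warp-shuffle pattern: at each step, lane i combines
    with lane i + offset, halving the active width until lane 0
    holds the result. Requires a power-of-two number of lanes,
    like a 32-wide warp."""
    lanes = list(lanes)
    offset = len(lanes) // 2
    while offset > 0:
        for i in range(offset):          # in CUDA: one shfl_down per lane
            lanes[i] = op(lanes[i], lanes[i + offset])
        offset //= 2
    return lanes[0]

print(tree_reduce([3, 1, 4, 1, 5, 9, 2, 6], max))          # → 9
print(tree_reduce(range(32), lambda a, b: a + b))           # → 496
```

The second call models `reduceSum` over one full warp; the shared-memory stage in the kernels simply repeats the same pattern once more across warp leaders.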
LU_DEFINE_EXTEND(declaration::cuda_layer_norm,
R"(
template <typename T>
__device__ void cuWelfordOnlineSum(
const T curr,
T& mu,
T& sigma2,
T& count) {
count = count + T(1);
T delta = curr - mu;
T lmean = mu + delta / count;
mu = lmean;
T delta2 = curr - lmean;
sigma2 = sigma2 + delta * delta2;
}
template <typename T>
__device__ void cuChanOnlineSum(
const T muB,
const T sigma2B,
const T countB,
T& mu,
T& sigma2,
T& count) {
T delta = muB - mu;
T nA = count;
T nB = countB;
count = count + countB;
T nX = count;
if (nX > T(0)) {
nA = nA / nX;
nB = nB / nX;
mu = nA * mu + nB * muB;
sigma2 = sigma2 + sigma2B + delta * delta * nA * nB * nX;
} else {
mu = T(0);
sigma2 = T(0);
}
}
template <typename T>
__device__ void cuWelfordMuSigma2(
const T* __restrict__ vals,
const int n1,
const int n2,
const int i1,
T& mu,
T& sigma2,
T* buf) {
// Assumptions:
// 1) blockDim.x == GPU_WARP_SIZE
// 2) Tensor is contiguous
// 3) 2*blockDim.y*sizeof(T)+blockDim.y*sizeof(int) shared memory available.
//
// compute variance and mean over n2
T count = T(0);
mu = T(0);
sigma2 = T(0);
if (i1 < n1) {
// one warp normalizes one n1 index,
// synchronization is implicit
// initialize with standard Welford algorithm
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
const T* lvals = vals + i1 * n2;
int l = 4 * thrx;
for (; l + 3 < n2; l += 4 * numx) {
for (int k = 0; k < 4; ++k) {
T curr = static_cast<T>(lvals[l + k]);
cuWelfordOnlineSum(curr, mu, sigma2, count);
}
}
for (; l < n2; ++l) {
T curr = static_cast<T>(lvals[l]);
cuWelfordOnlineSum(curr, mu, sigma2, count);
}
// intra-warp reductions
#pragma unroll
for (int stride = GPU_WARP_SIZE / 2; stride > 0; stride /= 2) {
T muB = WARP_SHFL_DOWN(mu, stride);
T countB = WARP_SHFL_DOWN(count, stride);
T sigma2B = WARP_SHFL_DOWN(sigma2, stride);
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
}
// threadIdx.x == 0 has correct values for each warp
// inter-warp reductions
if (blockDim.y > 1) {
T* ubuf = (T*)buf;
T* ibuf = (T*)(ubuf + blockDim.y);
for (int offset = blockDim.y / 2; offset > 0; offset /= 2) {
// upper half of warps write to shared
if (threadIdx.x == 0 && threadIdx.y >= offset && threadIdx.y < 2 * offset) {
const int wrt_y = threadIdx.y - offset;
ubuf[2 * wrt_y] = mu;
ubuf[2 * wrt_y + 1] = sigma2;
ibuf[wrt_y] = count;
}
__syncthreads();
// lower half merges
if (threadIdx.x == 0 && threadIdx.y < offset) {
T muB = ubuf[2 * threadIdx.y];
T sigma2B = ubuf[2 * threadIdx.y + 1];
T countB = ibuf[threadIdx.y];
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
}
__syncthreads();
}
// threadIdx.x = 0 && threadIdx.y == 0 only thread that has correct values
if (threadIdx.x == 0 && threadIdx.y == 0) {
ubuf[0] = mu;
ubuf[1] = sigma2;
}
__syncthreads();
mu = ubuf[0];
sigma2 = ubuf[1] / T(n2);
// don't care about final value of count, we know count == n2
} else {
mu = WARP_SHFL(mu, 0);
sigma2 = WARP_SHFL(sigma2 / T(n2), 0);
}
}
}
template <typename T>
__global__ void cuApplyLayerNorm(
T* __restrict__ output_vals,
T* __restrict__ mean,
T* __restrict__ invvar,
const T* __restrict__ vals,
const int n1,
const int n2,
const T epsilon,
const T* __restrict__ gamma,
const T* __restrict__ beta) {
// Assumptions:
// 1) blockDim.x == GPU_WARP_SIZE
// 2) Tensors are contiguous
//
for (int i1 = blockIdx.y; i1 < n1; i1 += gridDim.y) {
extern __shared__ T s_float[];
T* buf = (T*)s_float;
T mu, sigma2;
cuWelfordMuSigma2(vals, n1, n2, i1, mu, sigma2, buf);
const T* lvals = vals + i1 * n2;
T* ovals = output_vals + i1 * n2;
T c_invvar = Rsqrt(sigma2 + epsilon);
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
if (gamma != NULL && beta != NULL) {
for (int i = thrx; i < n2; i += numx) {
T curr = static_cast<T>(lvals[i]);
ovals[i] = gamma[i] * static_cast<T>(c_invvar * (curr - mu)) + beta[i];
}
} else {
for (int i = thrx; i < n2; i += numx) {
T curr = static_cast<T>(lvals[i]);
ovals[i] = static_cast<T>(c_invvar * (curr - mu));
}
}
if (threadIdx.x == 0 && threadIdx.y == 0) {
mean[i1] = mu;
invvar[i1] = c_invvar;
}
}
}
template <typename T>
void HostApplyLayerNorm(
T* output,
T* mean,
T* invvar,
const T* input,
int64_t n1,
int64_t n2,
T epsilon,
const T* gamma,
const T* beta) {
const dim3 threads(GPU_WARP_SIZE, 4, 1);
const dim3 blocks(1, std::min((uint64_t)n1, MAX_GRID_Y), 1);
int nshared =
threads.y > 1 ? threads.y * sizeof(T) + (threads.y / 2) * sizeof(T) : 0;
cuApplyLayerNorm<<<blocks, threads, nshared, 0>>>(
output,
mean,
invvar,
input,
n1, n2,
epsilon,
gamma, beta);
}
)",
R"(
template <typename T>
__device__ void cuWelfordOnlineSum(
const T curr,
T& mu,
T& sigma2,
T& count) {
count = count + T(1);
T delta = curr - mu;
T lmean = mu + delta / count;
mu = lmean;
T delta2 = curr - lmean;
sigma2 = sigma2 + delta * delta2;
}
template <typename T>
__device__ void cuChanOnlineSum(
const T muB,
const T sigma2B,
const T countB,
T& mu,
T& sigma2,
T& count) {
T delta = muB - mu;
T nA = count;
T nB = countB;
count = count + countB;
T nX = count;
if (nX > T(0)) {
nA = nA / nX;
nB = nB / nX;
mu = nA * mu + nB * muB;
sigma2 = sigma2 + sigma2B + delta * delta * nA * nB * nX;
} else {
mu = T(0);
sigma2 = T(0);
}
}
template <typename T>
__device__ void cuWelfordMuSigma2(
const T* __restrict__ vals,
const int n1,
const int n2,
const int i1,
T& mu,
T& sigma2,
T* buf) {
// Assumptions:
// 1) blockDim.x == GPU_WARP_SIZE
// 2) Tensor is contiguous
// 3) 2*blockDim.y*sizeof(T)+blockDim.y*sizeof(int) shared memory available.
//
// compute variance and mean over n2
T count = T(0);
mu = T(0);
sigma2 = T(0);
if (i1 < n1) {
// one warp normalizes one n1 index,
// synchronization is implicit
// initialize with standard Welford algorithm
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
const T* lvals = vals + i1 * n2;
int l = 4 * thrx;
for (; l + 3 < n2; l += 4 * numx) {
for (int k = 0; k < 4; ++k) {
T curr = static_cast<T>(lvals[l + k]);
cuWelfordOnlineSum(curr, mu, sigma2, count);
}
}
for (; l < n2; ++l) {
T curr = static_cast<T>(lvals[l]);
cuWelfordOnlineSum(curr, mu, sigma2, count);
}
// intra-warp reductions
#pragma unroll
for (int stride = GPU_WARP_SIZE / 2; stride > 0; stride /= 2) {
T muB = WARP_SHFL_DOWN(mu, stride);
T countB = WARP_SHFL_DOWN(count, stride);
T sigma2B = WARP_SHFL_DOWN(sigma2, stride);
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
}
// threadIdx.x == 0 has correct values for each warp
// inter-warp reductions
if (blockDim.y > 1) {
T* ubuf = (T*)buf;
T* ibuf = (T*)(ubuf + blockDim.y);
for (int offset = blockDim.y / 2; offset > 0; offset /= 2) {
// upper half of warps write to shared
if (threadIdx.x == 0 && threadIdx.y >= offset && threadIdx.y < 2 * offset) {
const int wrt_y = threadIdx.y - offset;
ubuf[2 * wrt_y] = mu;
ubuf[2 * wrt_y + 1] = sigma2;
ibuf[wrt_y] = count;
}
__syncthreads();
// lower half merges
if (threadIdx.x == 0 && threadIdx.y < offset) {
T muB = ubuf[2 * threadIdx.y];
T sigma2B = ubuf[2 * threadIdx.y + 1];
T countB = ibuf[threadIdx.y];
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
}
__syncthreads();
}
// threadIdx.x = 0 && threadIdx.y == 0 only thread that has correct values
if (threadIdx.x == 0 && threadIdx.y == 0) {
ubuf[0] = mu;
ubuf[1] = sigma2;
}
__syncthreads();
mu = ubuf[0];
sigma2 = ubuf[1] / T(n2);
// don't care about final value of count, we know count == n2
} else {
mu = WARP_SHFL(mu, 0);
sigma2 = WARP_SHFL(sigma2 / T(n2), 0);
}
}
}
template <typename T>
__global__ void cuApplyLayerNorm(
T* __restrict__ output_vals,
T* __restrict__ mean,
T* __restrict__ invvar,
const T* __restrict__ vals,
const int n1,
const int n2,
const T epsilon,
const T* __restrict__ gamma,
const T* __restrict__ beta) {
// Assumptions:
// 1) blockDim.x == GPU_WARP_SIZE
// 2) Tensors are contiguous
//
for (int i1 = blockIdx.y; i1 < n1; i1 += gridDim.y) {
extern __shared__ T s_float[];
T* buf = (T*)s_float;
T mu, sigma2;
cuWelfordMuSigma2(vals, n1, n2, i1, mu, sigma2, buf);
const T* lvals = vals + i1 * n2;
T* ovals = output_vals + i1 * n2;
T c_invvar = Rsqrt(sigma2 + epsilon);
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
if (gamma != NULL && beta != NULL) {
for (int i = thrx; i < n2; i += numx) {
T curr = static_cast<T>(lvals[i]);
ovals[i] = gamma[i] * static_cast<T>(c_invvar * (curr - mu)) + beta[i];
}
} else {
for (int i = thrx; i < n2; i += numx) {
T curr = static_cast<T>(lvals[i]);
ovals[i] = static_cast<T>(c_invvar * (curr - mu));
}
}
if (threadIdx.x == 0 && threadIdx.y == 0) {
mean[i1] = mu;
invvar[i1] = c_invvar;
}
}
}
template <typename T>
void HostApplyLayerNorm(
T* output,
T* mean,
T* invvar,
const T* input,
int64_t n1,
int64_t n2,
T epsilon,
const T* gamma,
const T* beta) {
const dim3 threads(GPU_WARP_SIZE, 4, 1);
const dim3 blocks(1, std::min((uint64_t)n1, MAX_GRID_Y), 1);
int nshared =
threads.y > 1 ? threads.y * sizeof(T) + (threads.y / 2) * sizeof(T) : 0;
cuApplyLayerNorm<<<blocks, threads, nshared, 0>>>(
output,
mean,
invvar,
input,
n1, n2,
epsilon,
gamma, beta);
}
)",
"");
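`cuWelfordOnlineSum` and `cuChanOnlineSum` above implement Welford's streaming mean/variance update and Chan et al.'s merge of two partial aggregates (the kernel's `sigma2` is the running sum of squared deviations, i.e. M2). A small Python transliteration, checked against the direct formulas:

```python
def welford_update(x, mu, m2, count):
    """One Welford step: fold a new sample into (mean, M2, count)."""
    count += 1
    delta = x - mu
    mu += delta / count
    m2 += delta * (x - mu)   # delta * delta2 in the kernel
    return mu, m2, count

def chan_merge(mu_a, m2_a, n_a, mu_b, m2_b, n_b):
    """Merge two partial (mean, M2, count) aggregates, mirroring
    cuChanOnlineSum (which normalizes counts by the combined total)."""
    n = n_a + n_b
    if n == 0:
        return 0.0, 0.0, 0
    delta = mu_b - mu_a
    mu = (n_a * mu_a + n_b * mu_b) / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return mu, m2, n

# Two "warps" each aggregate half the data, then merge.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
a = (0.0, 0.0, 0)
for x in data[:4]:
    a = welford_update(x, *a)
b = (0.0, 0.0, 0)
for x in data[4:]:
    b = welford_update(x, *b)
mu, m2, n = chan_merge(*a, *b)
print(round(mu, 6), round(m2 / n, 6))  # mean 5.0, population variance 4.0
```

This single-pass merge is what lets each warp (and then each block) reduce its own slice independently before combining, which is the whole point of the two-stage reduction in `cuWelfordMuSigma2`.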
LU_DEFINE(declaration::ort_layer_norm, R"(
__device__ inline half2 AddHalf2(const half2 a, const half2 b) {
#if __CUDA_ARCH__ >= 530 || !defined(__CUDA_ARCH__)
return __hadd2(a, b);
#else
return __halves2half2(__hadd(a.x, b.x), __hadd(a.y, b.y));
#endif
}
struct KeyValuePairSum {
__device__ inline cub::KeyValuePair<float, float> operator()(const cub::KeyValuePair<float, float>& a, const cub::KeyValuePair<float, float>& b) {
return cub::KeyValuePair<float, float>(a.key + b.key, a.value + b.value);
}
__device__ inline cub::KeyValuePair<half, half> operator()(const cub::KeyValuePair<half, half>& a, const cub::KeyValuePair<half, half>& b) {
const half2 a2 = __halves2half2(a.key, a.value);
const half2 b2 = __halves2half2(b.key, b.value);
const half2 res = AddHalf2(a2, b2);
return cub::KeyValuePair<half, half>(res.x, res.y);
}
__device__ inline cub::KeyValuePair<half2, half2> operator()(const cub::KeyValuePair<half2, half2>& a, const cub::KeyValuePair<half2, half2>& b) {
return cub::KeyValuePair<half2, half2>(AddHalf2(a.key, b.key), AddHalf2(a.value, b.value));
}
};
template <typename T, int TPB>
__device__ inline void LayerNorm(
const cub::KeyValuePair<T, T>& thread_data, const int ld, const int offset, const T* beta,
const T* gamma, const T epsilon, T* output) {
// Assuming thread_data is already divided by ld: key holds sum(x)/ld and value holds sum(x*x)/ld,
// so after the block reduce mu = E[x] and the variance is E[x^2] - mu^2.
using BlockReduce = cub::BlockReduce<cub::KeyValuePair<T, T>, TPB>;
__shared__ typename BlockReduce::TempStorage temp_storage;
__shared__ T mu; // mean
__shared__ T rsigma; // 1 / std.dev.
KeyValuePairSum pair_sum;
const auto sum_kv = BlockReduce(temp_storage).Reduce(thread_data, pair_sum);
if (threadIdx.x == 0) {
mu = sum_kv.key;
rsigma = Rsqrt(sum_kv.value - mu * mu + epsilon);
}
__syncthreads();
for (int i = threadIdx.x; i < ld; i += TPB) {
const int idx = offset + i;
const T val = output[idx];
const T g(gamma[i]);
const T b(beta[i]);
output[idx] = g * (val - mu) * rsigma + b;
}
}
template <typename T, int TPB>
__device__ inline void LayerNormSmall(const T val, const cub::KeyValuePair<T, T>& thread_data, const int ld, const int idx,
const T* beta, const T* gamma, const T epsilon, T* output) {
// Assuming thread_data is already divided by ld: key holds sum(x)/ld and value holds sum(x*x)/ld,
// so after the block reduce mu = E[x] and the variance is E[x^2] - mu^2.
// Small settings: the block covers the leading dimension TPB >= ld. The input
// value is available in a register
using BlockReduce = cub::BlockReduce<cub::KeyValuePair<T, T>, TPB>;
__shared__ typename BlockReduce::TempStorage temp_storage;
__shared__ T mu; // mean
__shared__ T rsigma; // 1 / std.dev.
KeyValuePairSum pair_sum;
const auto sum_kv = BlockReduce(temp_storage).Reduce(thread_data, pair_sum);
if (threadIdx.x == 0) {
mu = sum_kv.key;
rsigma = Rsqrt(sum_kv.value - mu * mu + epsilon);
}
__syncthreads();
if (threadIdx.x < ld) {
const T g(gamma[threadIdx.x]);
const T b(beta[threadIdx.x]);
output[idx] = g * (val - mu) * rsigma + b;
}
}
)");
LU_DEFINE(declaration::ort_qkv_to_context, R"(
template <class INT, class INT2>
static INT CeilDiv(INT a, INT2 b) // ceil(a/b)
{
return (INT)(((size_t)a + (size_t)b - 1) / (size_t)b); // these size_t casts are necessary since b may be INT_MAX (for maxGridSize[])
}
static size_t AlignTo(size_t a, size_t b) {
return CeilDiv(a, b) * b;
}
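// Illustrative examples of the two helpers above: CeilDiv(10, 4) == 3 and
// AlignTo(10, 4) == 12; when a is already a multiple of b, AlignTo returns a unchanged.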
size_t ScratchSize(size_t element_size, int batch_size, int num_heads, int sequence_length, int all_sequence_length) {
const size_t len = batch_size * num_heads * sequence_length * all_sequence_length;
const size_t bytes = len * element_size;
const size_t alignment = 256;
const size_t bytesAligned = AlignTo(bytes, alignment);
return bytesAligned;
}
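// Worked example (illustrative values, not from any particular model): with
// element_size = 2 (half), batch_size = 1, num_heads = 12, and
// sequence_length = all_sequence_length = 128:
//   len = 1 * 12 * 128 * 128 = 196608 elements, bytes = 393216,
// which is already a multiple of the 256-byte alignment, so ScratchSize
// returns 393216.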
template <typename T, unsigned TPB>
__device__ inline void Softmax(const int all_sequence_length,
const int sequence_length,
const int valid_end,
const int valid_start,
const T* input,
T* output) {
using BlockReduce = cub::BlockReduce<float, TPB>;
__shared__ typename BlockReduce::TempStorage tmp_storage;
__shared__ float sum_reverse_block;
__shared__ float max_block;
float thread_data_max(-CUDART_INF_F);
// e^x is represented as infinity if x is large enough, like 100.f.
// Infinity divided by Infinity is a NaN. Thus, softmax gets a NaN if one or more items are large enough.
// a math transform as below is leveraged to get a stable softmax:
// e^xi/(e^x1 + ...e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))
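// For example (illustrative): with inputs {1000, 1001}, the naive expf overflows to
// inf, and inf/inf = NaN; the shifted form instead computes e^-1 / (e^-1 + e^0) and
// e^0 / (e^-1 + e^0), a valid probability distribution.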
const int offset = (blockIdx.y * gridDim.x + blockIdx.x) * all_sequence_length;
for (int i = threadIdx.x; i < valid_end; i += TPB) {
if (i >= valid_start) {
const int index = offset + i;
if (thread_data_max < float(input[index])) {
thread_data_max = float(input[index]);
}
}
}
const auto max = BlockReduce(tmp_storage).Reduce(thread_data_max, cub::Max());
// Store max value
if (threadIdx.x == 0) {
max_block = max;
}
__syncthreads();
float thread_data_sum(0.f);
for (int i = threadIdx.x; i < valid_end; i += TPB) {
if (i >= valid_start) {
const int index = offset + i;
const float val = input[index];
thread_data_sum += expf(val - max_block);
}
}
const auto sum = BlockReduce(tmp_storage).Reduce(thread_data_sum, cub::Sum());
if (threadIdx.x == 0) {
sum_reverse_block = 1.f / sum;
}
__syncthreads();
for (int i = threadIdx.x; i < all_sequence_length; i += TPB) {
const int index = offset + i;
const float val = (i >= valid_start && i < valid_end) ? expf(float(input[index]) - max_block) * sum_reverse_block : 0.f;
output[index] = T(val);
}
}
template <typename T, unsigned TPB>
__device__ inline void SoftmaxSmall(const int all_sequence_length,
const int sequence_length,
const int valid_end,
const int valid_start,
const T* input,
T* output,
bool is_unidirectional) {
using BlockReduce = cub::BlockReduce<float, TPB>;
__shared__ typename BlockReduce::TempStorage tmp_storage;
__shared__ float sum_reverse_block;
__shared__ float max_block;
// Input dimension is BxNxSxS*; blockIdx.y is batch index b; gridDim.x=N*S; blockIdx.x is index within N*S;
const int offset = (blockIdx.y * gridDim.x + blockIdx.x) * all_sequence_length;
const int index = offset + threadIdx.x;
bool is_valid = false; // whether it has attention mask == 1.
// Update end position for unidirectional.
int end = valid_end;
if (is_unidirectional) {
int end_unid = all_sequence_length - sequence_length + (blockIdx.x % sequence_length) + 1;
if (end_unid <= valid_start) {
// In this situation, the mask over [0, end_unid) and [valid_start, valid_end) is -10000, and over [end_unid, valid_start) and [valid_end, all_seq_len) it is -20000.
// So [0, end_unid) will also have nonzero values after softmax.
is_valid = threadIdx.x < end_unid;
} else {
end = min(valid_end, end_unid);
}
}
is_valid = is_valid || (threadIdx.x >= valid_start && threadIdx.x < end);
// e^x is represented as infinity if x is large enough, like 100.f.
// Infinity divided by Infinity is a NaN. Thus, softmax gets a NaN if one or more items are large enough.
// a math transform as below is leveraged to get a stable softmax:
// e^xi/(e^x1 + ...e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))
float thread_data_max = is_valid ? float(input[index]) : float(-CUDART_INF_F);
const auto max = BlockReduce(tmp_storage).Reduce(thread_data_max, cub::Max(), end);
// Store max value
if (threadIdx.x == 0) {
max_block = max;
}
__syncthreads();
float thread_data_exp(0.f);
if (is_valid) {
thread_data_exp = expf(float(input[index]) - max_block);
}
const auto sum = BlockReduce(tmp_storage).Reduce(thread_data_exp, cub::Sum(), end);
// Store value of 1.0/sum.
if (threadIdx.x == 0) {
sum_reverse_block = (1.f) / sum;
}
__syncthreads();
// threadIdx.x might be larger than all_sequence_length due to alignment to 32x.
if (threadIdx.x < all_sequence_length) {
output[index] = T(thread_data_exp * sum_reverse_block);
}
}
template <typename T, unsigned TPB>
__device__ inline void SoftmaxWithMask2DSmall(const int all_sequence_length,
const int sequence_length,
const int* attention_mask, // 2D attention mask
const T* input,
T* output,
const bool is_unidirectional,
const float scalar) {
using BlockReduce = cub::BlockReduce<float, TPB>;
__shared__ typename BlockReduce::TempStorage tmp_storage;
__shared__ float sum_reverse_block;
__shared__ float max_block;
// Input dimension is BxNxSxS*; blockIdx.y is batch index b; gridDim.x=N*S; blockIdx.x is index within N*S;
int index = (blockIdx.y * gridDim.x + blockIdx.x) * all_sequence_length + threadIdx.x;
float thread_data = -CUDART_INF_F;
if (threadIdx.x < all_sequence_length) {
const int& mask = attention_mask[blockIdx.y * all_sequence_length + threadIdx.x];
float mask_value = mask > 0 ? 0.0f : -10000.0f;
if (is_unidirectional) {
int from_index = all_sequence_length - sequence_length + (blockIdx.x % sequence_length); // offset of from token in all sequence length.
if (threadIdx.x > from_index) {
mask_value += -10000.0f;
}
}
thread_data = float(input[index]) * scalar + mask_value;
}
const float max = BlockReduce(tmp_storage).Reduce(thread_data, cub::Max(), all_sequence_length);
// Store max value
if (threadIdx.x == 0) {
max_block = max;
}
__syncthreads();
float thread_data_exp = threadIdx.x < all_sequence_length ? expf(thread_data - max_block) : 0.0f;
const auto sum = BlockReduce(tmp_storage).Reduce(thread_data_exp, cub::Sum(), all_sequence_length);
// Store value of 1.0/sum
if (threadIdx.x == 0) {
sum_reverse_block = (1.f) / sum;
}
__syncthreads();
if (threadIdx.x < all_sequence_length) {
output[index] = T(thread_data_exp * sum_reverse_block);
}
}
template <typename T, unsigned TPB>
__global__ void SoftmaxKernelSmall(const int all_sequence_length, const int sequence_length, const T* input, T* output, bool is_unidirectional) {
SoftmaxSmall<T, TPB>(all_sequence_length, sequence_length, all_sequence_length, 0, input, output, is_unidirectional);
}
template <typename T, unsigned TPB>
__global__ void SoftmaxKernel(const int all_sequence_length, const int sequence_length, const T* input, T* output) {
Softmax<T, TPB>(all_sequence_length, sequence_length, all_sequence_length, 0, input, output);
}
template <typename T>
void ComputeSoftmax(
cudaStream_t stream, const int all_sequence_length, const int sequence_length, const int batch_size, const int num_heads,
const T* input, T* output, bool is_unidirectional) {
const dim3 grid(sequence_length * num_heads, batch_size, 1);
if (all_sequence_length <= 32) {
const int blockSize = 32;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (all_sequence_length <= 64) {
const int blockSize = 64;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (all_sequence_length <= 128) {
const int blockSize = 128;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (all_sequence_length <= 256) {
const int blockSize = 256;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (all_sequence_length <= 512) {
const int blockSize = 512;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (all_sequence_length <= 1024) {
const int blockSize = 1024;
SoftmaxKernelSmall<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output, is_unidirectional);
} else if (!is_unidirectional) {
const int blockSize = 1024;
SoftmaxKernel<T, blockSize><<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, input, output);
}
}
template <typename T, unsigned TPB>
__global__ void MaskedSoftmaxKernelSmall(const int all_sequence_length, const int sequence_length, const int* mask_end, const int* mask_start, const T* input, T* output, bool is_unidirectional) {
__shared__ int start_position;
__shared__ int end_position;
if (threadIdx.x == 0) {
const int batch = blockIdx.y;
start_position = mask_start != nullptr ? max(0, mask_start[batch]) : 0;
end_position = min(all_sequence_length, mask_end[batch]);
// Attending to no words has the same effect as attending to all words. This is added to get parity with the CPU result.
if (start_position >= end_position) {
start_position = 0;
end_position = all_sequence_length;
}
}
__syncthreads();
SoftmaxSmall<T, TPB>(all_sequence_length, sequence_length, end_position, start_position, input, output, is_unidirectional);
}
template <typename T, unsigned TPB>
__global__ void MaskedSoftmaxKernel(const int all_sequence_length, const int sequence_length, const int* mask_end, const int* mask_start, const T* input, T* output) {
__shared__ int start_position;
__shared__ int end_position;
if (threadIdx.x == 0) {
const int batch = blockIdx.y;
start_position = mask_start != nullptr ? max(0, mask_start[batch]) : 0;
end_position = min(all_sequence_length, mask_end[batch]);
// Attending to no words has the same effect as attending to all words. This is added to get parity with the CPU result.
if (start_position >= end_position) {
start_position = 0;
end_position = all_sequence_length;
}
}
__syncthreads();
Softmax<T, TPB>(all_sequence_length, sequence_length, end_position, start_position, input, output);
}
template <typename T, unsigned TPB>
__global__ void SoftmaxWithMask2DSmallKernel(const int all_sequence_length, const int sequence_length, const int* attention_mask, const T* input, T* output, const bool is_unidirectional, const float scalar) {
SoftmaxWithMask2DSmall<T, TPB>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
}
template <typename T>
void ComputeSoftmaxWithMask1D(cudaStream_t stream, const int all_sequence_length, const int sequence_length, const int batch_size, const int num_heads,
const int* mask_index, const int* mask_start, const T* input, T* output, const bool is_unidirectional) {
const dim3 grid(sequence_length * num_heads, batch_size, 1);
if (all_sequence_length <= 32) {
const int blockSize = 32;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (all_sequence_length <= 64) {
const int blockSize = 64;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (all_sequence_length <= 128) {
const int blockSize = 128;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (all_sequence_length <= 256) {
const int blockSize = 256;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (all_sequence_length <= 512) {
const int blockSize = 512;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (all_sequence_length <= 1024) {
const int blockSize = 1024;
MaskedSoftmaxKernelSmall<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output, is_unidirectional);
} else if (!is_unidirectional) {
const int blockSize = 1024;
MaskedSoftmaxKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, mask_index, mask_start, input, output);
}
}
template <typename T>
void ComputeSoftmaxWithMask2D(cudaStream_t stream, const int all_sequence_length, const int sequence_length, const int batch_size, const int num_heads,
const int* attention_mask, const T* input, T* output, const bool is_unidirectional, const float scalar) {
const dim3 grid(sequence_length * num_heads, batch_size, 1);
if (all_sequence_length <= 32) {
const int blockSize = 32;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
} else if (all_sequence_length <= 64) {
const int blockSize = 64;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
} else if (all_sequence_length <= 128) {
const int blockSize = 128;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
} else if (all_sequence_length <= 256) {
const int blockSize = 256;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
} else if (all_sequence_length <= 512) {
const int blockSize = 512;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
} else if (all_sequence_length <= 1024) {
const int blockSize = 1024;
SoftmaxWithMask2DSmallKernel<T, blockSize>
<<<grid, blockSize, 0, stream>>>(all_sequence_length, sequence_length, attention_mask, input, output, is_unidirectional, scalar);
}
}
template <typename T>
__global__ void TransposeCtx(const int H, const T* input, T* output) {
// Input: BxNxSxH
// Output: BxSxNxH
int n = threadIdx.y;
int s = blockIdx.x;
int b = blockIdx.y;
int num_heads = blockDim.y;
int sequence_length = gridDim.x;
const int NH = num_heads * H;
const int NHS = NH * sequence_length;
const int in_offset = s * H + n * sequence_length * H + b * NHS;
const int out_offset = n * H + s * NH + b * NHS;
const int i = threadIdx.x;
if (i < H) {
output[out_offset + i] = input[in_offset + i];
}
}
void LaunchTransCtx(cudaStream_t stream,
const int sequence_length, const int batch_size, const int head_size, const int num_heads,
const float* input, float* output) {
const dim3 grid(sequence_length, batch_size, 1);
if (0 == (head_size & 1)) {
const int H = head_size / 2;
const float2* input2 = reinterpret_cast<const float2*>(input);
float2* output2 = reinterpret_cast<float2*>(output);
const dim3 block(H, num_heads, 1);
TransposeCtx<float2><<<grid, block, 0, stream>>>(H, input2, output2);
} else {
const dim3 block(head_size, num_heads, 1);
TransposeCtx<float><<<grid, block, 0, stream>>>(head_size, input, output);
}
}
void LaunchTransCtx(cudaStream_t stream,
const int sequence_length, const int batch_size, const int head_size, const int num_heads,
const half* input, half* output) {
const dim3 grid(sequence_length, batch_size, 1);
if (0 == (head_size % 4)) {
const int H = head_size / 4;
const dim3 block(H, num_heads, 1);
const float2* input2 = reinterpret_cast<const float2*>(input);
float2* output2 = reinterpret_cast<float2*>(output);
TransposeCtx<float2><<<grid, block, 0, stream>>>(H, input2, output2);
} else if (0 == (head_size & 1)) {
const int H = head_size / 2;
const dim3 block(H, num_heads, 1);
const half2* input2 = reinterpret_cast<const half2*>(input);
half2* output2 = reinterpret_cast<half2*>(output);
TransposeCtx<half2><<<grid, block, 0, stream>>>(H, input2, output2);
} else { // this should be an "odd" case. probably not worth catching it in the half2 kernel.
const dim3 block(head_size, num_heads, 1);
TransposeCtx<half><<<grid, block, 0, stream>>>(head_size, input, output);
}
}
template <typename T>
__global__ void TransposeQKV(const int H, const T* input, T* output) {
// Input: BxSx3xNxH
// Output: 3xBxNxSxH
int n = threadIdx.y;
int s = blockIdx.x;
int b = blockIdx.y;
int m = blockIdx.z; // matrix id
const int num_heads = blockDim.y;
const int sequence_length = gridDim.x;
const int batch_size = gridDim.y;
const int NH = num_heads * H;
const int NHS = NH * sequence_length;
const int in_offset = n * H + m * NH + s * 3 * NH + b * NHS * 3;
const int out_offset = s * H + n * sequence_length * H + b * NHS + m * NHS * batch_size;
const int i = threadIdx.x;
if (i < H) {
output[out_offset + i] = input[in_offset + i];
}
}
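// Worked index check (illustrative): with B=1, N=2, S=3, H=4 (so NH=8, NHS=24),
// the element (b=0, s=1, m=2, n=1, h=0) reads in_offset = 4 + 16 + 24 = 44,
// i.e. linear index (((b*S + s)*3 + m)*N + n)*H in the BxSx3xNxH input, and writes
// out_offset = 4 + 12 + 48 = 64, i.e. (((m*B + b)*N + n)*S + s)*H in the
// 3xBxNxSxH output, confirming the two offset formulas above.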
void LaunchTransQkv(cudaStream_t stream,
const int sequence_length, const int batch_size, const int head_size, const int num_heads,
const float* input, float* output) {
const dim3 grid(sequence_length, batch_size, 3);
if (0 == (head_size & 1)) {
const int H = head_size / 2;
const float2* input2 = reinterpret_cast<const float2*>(input);
float2* output2 = reinterpret_cast<float2*>(output);
const dim3 block(H, num_heads, 1);
TransposeQKV<float2><<<grid, block, 0, stream>>>(H, input2, output2);
} else {
const dim3 block(head_size, num_heads, 1);
TransposeQKV<float><<<grid, block, 0, stream>>>(head_size, input, output);
}
}
void LaunchTransQkv(cudaStream_t stream,
const int sequence_length, const int batch_size, const int head_size, const int num_heads,
const half* input, half* output) {
const dim3 grid(sequence_length, batch_size, 3);
if (0 == (head_size % 4)) {
const int H = head_size / 4;
const dim3 block(H, num_heads, 1);
const float2* input2 = reinterpret_cast<const float2*>(input);
float2* output2 = reinterpret_cast<float2*>(output);
TransposeQKV<float2><<<grid, block, 0, stream>>>(H, input2, output2);
} else if (0 == (head_size & 1)) {
const int H = head_size / 2;
const dim3 block(H, num_heads, 1);
const half2* input2 = reinterpret_cast<const half2*>(input);
half2* output2 = reinterpret_cast<half2*>(output);
TransposeQKV<half2><<<grid, block, 0, stream>>>(H, input2, output2);
} else { // this should be an "odd" case. probably not worth catching it in the half2 kernel.
const dim3 block(head_size, num_heads, 1);
TransposeQKV<half><<<grid, block, 0, stream>>>(head_size, input, output);
}
}
template <typename T>
__global__ void ConcatPastToPresent(const int sequence_length,
const T* past,
const T* k_v,
T* present) {
const int h = threadIdx.x;
const int n = threadIdx.y;
const int s = blockIdx.x;
const int b = blockIdx.y;
const int is_v = blockIdx.z; // 0 for k, 1 for v
const int all_sequence_length = gridDim.x;
const int batch_size = gridDim.y;
const int num_heads = blockDim.y;
const int H = blockDim.x;
// past: 2 x BxNxS'xH (past_k and past_v)
// k_v: 2 x BxNxSxH (k and v)
// present: 2 x BxNxS*xH (present_k and present_v)
const int past_sequence_length = all_sequence_length - sequence_length;
const int present_SH = all_sequence_length * H;
const int present_NSH = num_heads * present_SH;
int out_offset = b * present_NSH + n * present_SH + s * H + h + is_v * (present_NSH * batch_size);
if (s < past_sequence_length) {
const int past_SH = past_sequence_length * H;
const int past_NSH = num_heads * past_SH;
const int in_offset = b * past_NSH + n * past_SH + s * H + h + is_v * (past_NSH * batch_size);
present[out_offset] = past[in_offset];
} else if (s < all_sequence_length) {
const int SH = sequence_length * H;
const int NSH = num_heads * SH;
const int in_offset = b * NSH + n * SH + (s - past_sequence_length) * H + h + is_v * (NSH * batch_size);
present[out_offset] = k_v[in_offset];
}
}
void LaunchConcatPastToPresent(cudaStream_t stream,
const int all_sequence_length,
const int sequence_length,
const int batch_size,
const int head_size,
const int num_heads,
const float* past,
const float* k_v,
float* present) {
const dim3 grid(all_sequence_length, batch_size, 2);
if (0 == (head_size & 1)) {
const dim3 block(head_size / 2, num_heads, 1);
ConcatPastToPresent<float2><<<grid, block, 0, stream>>>(sequence_length, reinterpret_cast<const float2*>(past), reinterpret_cast<const float2*>(k_v), reinterpret_cast<float2*>(present));
} else {
const dim3 block(head_size, num_heads, 1);
ConcatPastToPresent<float><<<grid, block, 0, stream>>>(sequence_length, past, k_v, present);
}
}
void LaunchConcatPastToPresent(cudaStream_t stream,
const int all_sequence_length,
const int sequence_length,
const int batch_size,
const int head_size,
const int num_heads,
const half* past,
const half* k_v,
half* present) {
const dim3 grid(all_sequence_length, batch_size, 2);
if (0 == (head_size % 4)) {
const dim3 block(head_size / 4, num_heads, 1);
ConcatPastToPresent<float2><<<grid, block, 0, stream>>>(sequence_length, reinterpret_cast<const float2*>(past), reinterpret_cast<const float2*>(k_v), reinterpret_cast<float2*>(present));
} else if (0 == (head_size & 1)) {
const dim3 block(head_size / 2, num_heads, 1);
ConcatPastToPresent<half2><<<grid, block, 0, stream>>>(sequence_length, reinterpret_cast<const half2*>(past), reinterpret_cast<const half2*>(k_v), reinterpret_cast<half2*>(present));
} else { // this should be an "odd" case. probably not worth catching it in the half2 kernel.
const dim3 block(head_size, num_heads, 1);
ConcatPastToPresent<half><<<grid, block, 0, stream>>>(sequence_length, past, k_v, present);
}
}
void inline CublasGemmStridedBatched(
cublasHandle_t handle, cublasOperation_t transa, cublasOperation_t transb,
int m, int n, int k, const float alpha,
const float* A, int lda, long long int strideA, const float* B, int ldb, long long int strideB,
const float beta, float* C, int ldc, long long int strideC, int batchCount) {
cublasSgemmStridedBatched(
handle, transa, transb, m, n, k, &alpha, A, lda, strideA, B, ldb, strideB, &beta, C, ldc, strideC, batchCount);
}
void inline CublasGemmStridedBatched(
cublasHandle_t handle, cublasOperation_t transa, cublasOperation_t transb,
int m, int n, int k, const half alpha,
const half* A, int lda, long long int strideA, const half* B, int ldb, long long int strideB,
const half beta, half* C, int ldc, long long int strideC, int batchCount) {
cublasHgemmStridedBatched(
handle, transa, transb, m, n, k, &alpha, A, lda, strideA, B, ldb, strideB, &beta, C, ldc, strideC, batchCount);
}
template <typename T>
void QkvToContext(
cublasHandle_t cublas, cudaStream_t stream,
const int batch_size, const int sequence_length, const int num_heads, const int head_size, const size_t element_size,
const T* input, T* output, T* workspace,
const int* mask_index,
bool is_unidirectional, int past_sequence_length, const T* past, T* present, bool use_2d_attention_mask, const int* mask_start) {
const int all_sequence_length = past_sequence_length + sequence_length;
const size_t bytes = ScratchSize(element_size, batch_size, num_heads, sequence_length, all_sequence_length);
T* scratch1 = workspace;
T* scratch2 = scratch1 + (bytes / element_size);
T* scratch3 = scratch2 + (bytes / element_size);
// input should be BxSx3xNxH => scratch3: 3xBxNxSxH
LaunchTransQkv(stream, sequence_length, batch_size, head_size, num_heads, input, scratch3);
// now scratch3 has Q, K, V: each has size BxNxSxH
const int batches = batch_size * num_heads;
const int size_per_batch = sequence_length * head_size;
const int total_size = batches * size_per_batch;
const T* q = scratch3;
const T* k = q + total_size;
const T* v = k + total_size;
cublasSetStream(cublas, stream);
cublasSetMathMode(cublas, CUBLAS_TENSOR_OP_MATH);
// Concat past (2xBxNxS'xH) to present (2xBxNxS*xH):
// past_k (BxNxS'xH) + k (BxNxSxH) => present_k (BxNxS*xH)
// past_v (BxNxS'xH) + v (BxNxSxH) => present_v (BxNxS*xH)
const int present_size_per_batch = all_sequence_length * head_size;
if (nullptr != present) {
LaunchConcatPastToPresent(stream, all_sequence_length, sequence_length, batch_size, head_size, num_heads, past, k, present);
// update pointers to present_k and present_v.
k = present;
v = present + batches * present_size_per_batch;
}
// compute Q*K' (as K'*Q), scaled by 1/sqrt(H) and store in scratch1: BxNxSxS*
// Q: BxNxSxH, K (present_k): BxNxS*xH, Q*K': BxNxSxS*
const float rsqrt_head_size = 1.f / sqrt(static_cast<float>(head_size));
const int temp_matrix_size = sequence_length * all_sequence_length;
T alpha = (T)(use_2d_attention_mask ? 1.0f : rsqrt_head_size);
CublasGemmStridedBatched(
cublas, CUBLAS_OP_T, CUBLAS_OP_N, all_sequence_length, sequence_length, head_size, alpha, k, head_size, present_size_per_batch,
q, head_size, size_per_batch, 0.f, scratch1, all_sequence_length, temp_matrix_size, batches);
// apply softmax and store result P to scratch2: BxNxSxS*
if (use_2d_attention_mask) { // 2d attention mask
ComputeSoftmaxWithMask2D<T>(stream, all_sequence_length, sequence_length, batch_size, num_heads, mask_index, scratch1, scratch2, is_unidirectional, rsqrt_head_size);
} else if (nullptr != mask_index) { // 1d mask index
// mask_index has 1D shape: either (batch_size) or (2*batch_size). Only the latter one has start positions.
ComputeSoftmaxWithMask1D<T>(stream, all_sequence_length, sequence_length, batch_size, num_heads, mask_index, mask_start, scratch1, scratch2, is_unidirectional);
} else { // no mask
ComputeSoftmax<T>(stream, all_sequence_length, sequence_length, batch_size, num_heads, scratch1, scratch2, is_unidirectional);
}
// compute P*V (as V*P), and store in scratch3: BxNxSxH
CublasGemmStridedBatched(
cublas, CUBLAS_OP_N, CUBLAS_OP_N, head_size, sequence_length, all_sequence_length, 1.f, v, head_size, present_size_per_batch,
scratch2, all_sequence_length, temp_matrix_size, 0.f, scratch3, head_size, size_per_batch, batches);
// scratch3 is BxNxSxH, transpose to output BxSxNxH
LaunchTransCtx(stream, sequence_length, batch_size, head_size, num_heads, scratch3, output);
}
)");
LU_DEFINE(declaration::math_Rsqrt, R"(
template <typename T>
__device__ inline T Rsqrt(const T& x);
template <>
__device__ inline float Rsqrt(const float& x) {
return rsqrtf(x);
}
template <>
__device__ inline half Rsqrt(const half& x) {
#if __CUDA_ARCH__ >= 530 || !defined(__CUDA_ARCH__)
return hrsqrt(x);
#else
return half(rsqrtf(float(x)));
#endif
}
)");
LU_DEFINE(declaration::math_Gelu, R"(
template <typename T>
__device__ __inline__ T _Normcdf(T a);
template <>
__device__ __inline__ float _Normcdf(float a) { return normcdff(a); }
template <>
__device__ __inline__ double _Normcdf(double a) { return normcdf(a); }
template <>
__device__ __inline__ half _Normcdf(half a) { return half(normcdff((float)a)); }
template <typename T>
__device__ __inline__ T _Gelu(T a) {
return a * _Normcdf(a);
}
)");
LU_DEFINE_EXTEND(declaration::ort_softmax,
R"(
inline int log2_ceil(int value) {
int log2_value = 0;
while ((1 << log2_value) < value) ++log2_value;
return log2_value;
}
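// Illustrative values: log2_ceil(1) == 0, log2_ceil(5) == 3, log2_ceil(1024) == 10.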
template <typename T>
struct Add {
__device__ __forceinline__ T operator()(T a, T b) const {
return a + b;
}
};
template <typename T>
struct Max {
__device__ __forceinline__ T operator()(T a, T b) const {
return a < b ? b : a;
}
};
template <typename acc_t, int WARP_BATCH, int WARP_SIZE, template <typename> class ReduceOp>
__device__ __forceinline__ void warp_reduce(acc_t* sum) {
ReduceOp<acc_t> r;
#pragma unroll
for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2) {
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
acc_t b = WARP_SHFL_XOR(sum[i], offset, WARP_SIZE);
sum[i] = r(sum[i], b);
}
}
}
/**
* Copyright (c) 2016-present, Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* Modifications Copyright (c) Microsoft. */
// The code below (from the definition of softmax_warp_forward to the definition of dispatch_softmax_forward)
// is mostly copied from PyTorch PersistentSoftmax.cuh
// The softmax_warp_* methods perform softmax forward and backward propagation on samples spanning the fast dimension.
// Each sample contains element_count scalar elements. element_count can be any integer value <= 1024.
// The template arguments have the following meaning:
// One "WARP" works on one "BATCH". One "BATCH" contains "WARP_BATCH" samples.
// WARP_BATCH is equal to 1 when element_count is large, and > 1 when element_count is small.
// A "WARP" contains "GPU_WARP_SIZE" threads; these threads are guaranteed to belong to the same warp.
// This is important because it means only __shfl_ instructions are required for reductions.
// Note that this means WARP_SIZE must be a power of two and <= architecture warp size.
// CUDA warp size is 32 for all existing GPU architectures, but there is no guarantee this will not change for future architectures.
// is_log_softmax is a flag indicating whether SoftMax or LogSoftMax should be computed.
// The template can be instantiated with any floating point type for the type arguments input_t, output_t and acc_t.
// This allows SoftMax to be fused with a cast immediately following the SoftMax.
// For instance:
// input_t=half, acc_t=float, output_t=half => read half tensor, float accumulators, write half tensor.
// input_t=half, acc_t=float, output_t=float => read half tensor, float accumulators, write float tensor.
// input_t=float, acc_t=float, output_t=half => read float tensor, float accumulators, write half tensor.
template <typename input_t, typename output_t, typename acc_t, int log2_elements, bool is_log_softmax>
__global__ void softmax_warp_forward(output_t* dst, const input_t* src, int batch_size, int stride, int element_count) {
// WARP_SIZE and WARP_BATCH must match the return values batches_per_warp and warp_size of method warp_softmax_forward_kernel.
constexpr int next_power_of_two = 1 << log2_elements;
constexpr int WARP_SIZE = (next_power_of_two < GPU_WARP_SIZE) ? next_power_of_two : GPU_WARP_SIZE;
constexpr int WARP_ITERATIONS = next_power_of_two / WARP_SIZE;
constexpr int WARP_BATCH = (next_power_of_two <= 128) ? 2 : 1;
int first_batch = (blockDim.y * blockIdx.x + threadIdx.y) * WARP_BATCH;
// batch_size might not be a multiple of WARP_BATCH. Check how
// many batches have to be computed within this WARP.
int local_batches = batch_size - first_batch;
if (local_batches > WARP_BATCH)
local_batches = WARP_BATCH;
// there might be multiple batches per warp. compute the index within the batch
int local_idx = threadIdx.x;
src += first_batch * stride + local_idx;
dst += first_batch * stride + local_idx;
// The nested loops over WARP_BATCH and then WARP_ITERATIONS can be simplified to one loop,
// but I think doing so would obfuscate the logic of the algorithm, thus I chose to keep
// the nested loops.
// This should have no impact on performance because the loops are unrolled anyway.
// load data from global memory
acc_t elements[WARP_BATCH][WARP_ITERATIONS];
for (int i = 0; i < WARP_BATCH; ++i) {
int batch_element_count = (i >= local_batches) ? 0 : element_count;
for (int it = 0; it < WARP_ITERATIONS; ++it) {
int element_index = local_idx + it * WARP_SIZE;
if (element_index < batch_element_count) {
elements[i][it] = src[i * element_count + it * WARP_SIZE];
} else {
elements[i][it] = -std::numeric_limits<acc_t>::infinity();
}
}
}
// compute max_value
acc_t max_value[WARP_BATCH];
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
max_value[i] = elements[i][0];
#pragma unroll
for (int it = 1; it < WARP_ITERATIONS; ++it) {
max_value[i] = (max_value[i] > elements[i][it]) ? max_value[i] : elements[i][it];
}
}
warp_reduce<acc_t, WARP_BATCH, WARP_SIZE, Max>(max_value);
acc_t sum[WARP_BATCH]{0.0f};
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
#pragma unroll
for (int it = 0; it < WARP_ITERATIONS; ++it) {
if (is_log_softmax) {
sum[i] += std::exp((float)(elements[i][it] - max_value[i]));
} else {
elements[i][it] = std::exp((float)(elements[i][it] - max_value[i]));
sum[i] += elements[i][it];
}
}
}
warp_reduce<acc_t, WARP_BATCH, WARP_SIZE, Add>(sum);
// store result
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
if (i >= local_batches)
break;
if (is_log_softmax) sum[i] = max_value[i] + std::log((float)(sum[i]));
#pragma unroll
for (int it = 0; it < WARP_ITERATIONS; ++it) {
int element_index = local_idx + it * WARP_SIZE;
if (element_index < element_count) {
if (is_log_softmax) {
dst[i * element_count + it * WARP_SIZE] = elements[i][it] - sum[i];
} else {
dst[i * element_count + it * WARP_SIZE] = elements[i][it] / sum[i];
}
} else {
break;
}
}
}
}
template <typename input_t, typename output_t, typename acc_t, bool is_log_softmax>
void dispatch_softmax_forward(cudaStream_t stream, output_t* dst, const input_t* src, int softmax_elements, int softmax_elements_stride, int batch_count) {
if (softmax_elements == 0) {
return;
} else {
int log2_elements = log2_ceil(softmax_elements);
const int next_power_of_two = 1 << log2_elements;
// This value must match the WARP_SIZE constexpr value computed inside softmax_warp_forward.
int warp_size = (next_power_of_two < GPU_WARP_SIZE) ? next_power_of_two : GPU_WARP_SIZE;
// This value must match the WARP_BATCH constexpr value computed inside softmax_warp_forward.
int batches_per_warp = (next_power_of_two <= 128) ? 2 : 1;
// use 128 threads per block to maximize gpu utilization
constexpr int threads_per_block = 128;
int warps_per_block = (threads_per_block / warp_size);
int batches_per_block = warps_per_block * batches_per_warp;
int blocks = (batch_count + batches_per_block - 1) / batches_per_block;
dim3 threads(warp_size, warps_per_block, 1);
// Launch code would be more elegant if C++ supported FOR CONSTEXPR
switch (log2_elements) {
case 0: // 1
softmax_warp_forward<input_t, output_t, acc_t, 0, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 1: // 2
softmax_warp_forward<input_t, output_t, acc_t, 1, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 2: // 4
softmax_warp_forward<input_t, output_t, acc_t, 2, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 3: // 8
softmax_warp_forward<input_t, output_t, acc_t, 3, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 4: // 16
softmax_warp_forward<input_t, output_t, acc_t, 4, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 5: // 32
softmax_warp_forward<input_t, output_t, acc_t, 5, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 6: // 64
softmax_warp_forward<input_t, output_t, acc_t, 6, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 7: // 128
softmax_warp_forward<input_t, output_t, acc_t, 7, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 8: // 256
softmax_warp_forward<input_t, output_t, acc_t, 8, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 9: // 512
softmax_warp_forward<input_t, output_t, acc_t, 9, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 10: // 1024
softmax_warp_forward<input_t, output_t, acc_t, 10, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
default:
break;
}
}
}
)",
R"(
inline int log2_ceil(int value) {
int log2_value = 0;
while ((1 << log2_value) < value) ++log2_value;
return log2_value;
}
template <typename T>
struct Add {
__device__ __forceinline__ T operator()(T a, T b) const {
return a + b;
}
};
template <typename T>
struct Max {
__device__ __forceinline__ T operator()(T a, T b) const {
return a < b ? b : a;
}
};
template <typename acc_t, int WARP_BATCH, int WARP_SIZE, template <typename> class ReduceOp>
__device__ __forceinline__ void warp_reduce(acc_t* sum) {
ReduceOp<acc_t> r;
#pragma unroll
for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2) {
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
acc_t b = WARP_SHFL_XOR(sum[i], offset, WARP_SIZE);
sum[i] = r(sum[i], b);
}
}
}
/**
* Copyright (c) 2016-present, Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* Modifications Copyright (c) Microsoft. */
// The code below (from the definition of softmax_warp_forward to the definition of dispatch_softmax_forward)
// is mostly copied from PyTorch PersistentSoftmax.cuh
// The softmax_warp_* methods perform softmax forward and backward propagation on samples spanning the fast dimension.
// Each sample contains element_count scalar elements. element_count can be any integer value <= 1024.
// The template arguments have the following meaning:
// One "WARP" works on one "BATCH". One "BATCH" contains "WARP_BATCH" samples.
// WARP_BATCH is equal to 1 when element_count is large, and > 1 when element_count is small.
// A "WARP" contains "GPU_WARP_SIZE" threads, these treads are guaranteed to belong to the same warp.
// This is important because it means only __shfl_ instructions are required for reductions.
// Note that this means WARP_SIZE must be a power of two and <= architecture warp size.
// CUDA warp size is 32 for all existing GPU architectures, but there is no guarantee this will not change for future architectures.
// is_log_softmax is a flag indicating whether SoftMax or LogSoftMax should be computed.
// The template can be instantiated with any floating point type for the type arguments input_t, output_t and acc_t.
// This allows SoftMax to be fused with a cast immediately following the SoftMax.
// For instance:
// input_t=half, acc_t=float, output_t=half => read half tensor, float accumulators, write half tensor.
// input_t=half, acc_t=float, output_t=float => read half tensor, float accumulators, write float tensor.
// input_t=float, acc_t=float, output_t=half => read float tensor, float accumulators, write half tensor.
template <typename input_t, typename output_t, typename acc_t, int log2_elements, bool is_log_softmax>
__global__ void softmax_warp_forward(output_t* dst, const input_t* src, int batch_size, int stride, int element_count) {
// WARP_SIZE and WARP_BATCH must match the return values batches_per_warp and warp_size of method warp_softmax_forward_kernel.
constexpr int next_power_of_two = 1 << log2_elements;
constexpr int WARP_SIZE = (next_power_of_two < GPU_WARP_SIZE) ? next_power_of_two : GPU_WARP_SIZE;
constexpr int WARP_ITERATIONS = next_power_of_two / WARP_SIZE;
constexpr int WARP_BATCH = (next_power_of_two <= 128) ? 2 : 1;
int first_batch = (blockDim.y * blockIdx.x + threadIdx.y) * WARP_BATCH;
// batch_size might not be a multiple of WARP_BATCH. Check how
// many batches have to be computed within this WARP.
int local_batches = batch_size - first_batch;
if (local_batches > WARP_BATCH)
local_batches = WARP_BATCH;
// there might be multiple batches per warp. compute the index within the batch
int local_idx = threadIdx.x;
src += first_batch * stride + local_idx;
dst += first_batch * stride + local_idx;
// The nested loops over WARP_BATCH and then WARP_ITERATIONS can be simplified to one loop,
// but I think doing so would obfuscate the logic of the algorithm, thus I chose to keep
// the nested loops.
// This should have no impact on performance because the loops are unrolled anyway.
// load data from global memory
acc_t elements[WARP_BATCH][WARP_ITERATIONS];
for (int i = 0; i < WARP_BATCH; ++i) {
int batch_element_count = (i >= local_batches) ? 0 : element_count;
for (int it = 0; it < WARP_ITERATIONS; ++it) {
int element_index = local_idx + it * WARP_SIZE;
if (element_index < batch_element_count) {
elements[i][it] = src[i * element_count + it * WARP_SIZE];
} else {
elements[i][it] = -std::numeric_limits<acc_t>::infinity();
}
}
}
// compute max_value
acc_t max_value[WARP_BATCH];
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
max_value[i] = elements[i][0];
#pragma unroll
for (int it = 1; it < WARP_ITERATIONS; ++it) {
max_value[i] = (max_value[i] > elements[i][it]) ? max_value[i] : elements[i][it];
}
}
warp_reduce<acc_t, WARP_BATCH, WARP_SIZE, Max>(max_value);
acc_t sum[WARP_BATCH]{0.0f};
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
#pragma unroll
for (int it = 0; it < WARP_ITERATIONS; ++it) {
if (is_log_softmax) {
sum[i] += std::exp((float)(elements[i][it] - max_value[i]));
} else {
elements[i][it] = std::exp((float)(elements[i][it] - max_value[i]));
sum[i] += elements[i][it];
}
}
}
warp_reduce<acc_t, WARP_BATCH, WARP_SIZE, Add>(sum);
// store result
#pragma unroll
for (int i = 0; i < WARP_BATCH; ++i) {
if (i >= local_batches)
break;
if (is_log_softmax) sum[i] = max_value[i] + std::log((float)(sum[i]));
#pragma unroll
for (int it = 0; it < WARP_ITERATIONS; ++it) {
int element_index = local_idx + it * WARP_SIZE;
if (element_index < element_count) {
if (is_log_softmax) {
dst[i * element_count + it * WARP_SIZE] = elements[i][it] - sum[i];
} else {
dst[i * element_count + it * WARP_SIZE] = elements[i][it] / sum[i];
}
} else {
break;
}
}
}
}
template <typename input_t, typename output_t, typename acc_t, bool is_log_softmax>
void dispatch_softmax_forward(cudaStream_t stream, output_t* dst, const input_t* src, int softmax_elements, int softmax_elements_stride, int batch_count) {
if (softmax_elements == 0) {
return;
} else {
int log2_elements = log2_ceil(softmax_elements);
const int next_power_of_two = 1 << log2_elements;
// This value must match the WARP_SIZE constexpr value computed inside softmax_warp_forward.
int warp_size = (next_power_of_two < GPU_WARP_SIZE) ? next_power_of_two : GPU_WARP_SIZE;
// This value must match the WARP_BATCH constexpr value computed inside softmax_warp_forward.
int batches_per_warp = (next_power_of_two <= 128) ? 2 : 1;
// use 128 threads per block to maximize gpu utilization
constexpr int threads_per_block = 128;
int warps_per_block = (threads_per_block / warp_size);
int batches_per_block = warps_per_block * batches_per_warp;
int blocks = (batch_count + batches_per_block - 1) / batches_per_block;
dim3 threads(warp_size, warps_per_block, 1);
// Launch code would be more elegant if C++ supported FOR CONSTEXPR
switch (log2_elements) {
case 0: // 1
softmax_warp_forward<input_t, output_t, acc_t, 0, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 1: // 2
softmax_warp_forward<input_t, output_t, acc_t, 1, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 2: // 4
softmax_warp_forward<input_t, output_t, acc_t, 2, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 3: // 8
softmax_warp_forward<input_t, output_t, acc_t, 3, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 4: // 16
softmax_warp_forward<input_t, output_t, acc_t, 4, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 5: // 32
softmax_warp_forward<input_t, output_t, acc_t, 5, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 6: // 64
softmax_warp_forward<input_t, output_t, acc_t, 6, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 7: // 128
softmax_warp_forward<input_t, output_t, acc_t, 7, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 8: // 256
softmax_warp_forward<input_t, output_t, acc_t, 8, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 9: // 512
softmax_warp_forward<input_t, output_t, acc_t, 9, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
case 10: // 1024
softmax_warp_forward<input_t, output_t, acc_t, 10, is_log_softmax>
<<<blocks, threads, 0, stream>>>(dst, src, batch_count, softmax_elements_stride, softmax_elements);
break;
default:
break;
}
}
}
)",
"");
LU_DEFINE_EXTEND(declaration::warp,
R"(
// Check compute capability
const int GPU_WARP_SIZE = 32;
const uint64_t MAX_GRID_Y = 65535;
template <typename T>
__device__ __forceinline__ T WARP_SHFL(T value, int srcLane, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_sync(mask, value, srcLane, width);
#else
return __shfl(value, srcLane, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_XOR(T value, int laneMask, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_xor_sync(mask, value, laneMask, width);
#else
return __shfl_xor(value, laneMask, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_UP(T value, unsigned int delta, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_up_sync(mask, value, delta, width);
#else
return __shfl_up(value, delta, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_DOWN(T value, unsigned int delta, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_down_sync(mask, value, delta, width);
#else
return __shfl_down(value, delta, width);
#endif
}
)",
R"(
// Check compute capability
#if !defined(CONSTANT_VAR)
#define CONSTANT_VAR 1
const int GPU_WARP_SIZE = 32;
const uint64_t MAX_GRID_Y = 65535;
#endif
template <typename T>
__device__ __forceinline__ T WARP_SHFL(T value, int srcLane, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_sync(mask, value, srcLane, width);
#else
return __shfl(value, srcLane, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_XOR(T value, int laneMask, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_xor_sync(mask, value, laneMask, width);
#else
return __shfl_xor(value, laneMask, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_UP(T value, unsigned int delta, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_up_sync(mask, value, delta, width);
#else
return __shfl_up(value, delta, width);
#endif
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_DOWN(T value, unsigned int delta, int width = GPU_WARP_SIZE, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_down_sync(mask, value, delta, width);
#else
return __shfl_down(value, delta, width);
#endif
}
)",
"");
LU_DEFINE(declaration::barrier,
R"(
__device__ void Barrier() {
cooperative_groups::grid_group g = cooperative_groups::this_grid();
g.sync();
}
)");
// round numeric inputs to 1 decimal place (the sign is dropped)
public String roundInput(String input){
if (input.isEmpty()){
return input;
}
double number = Math.abs(Double.parseDouble(input));
number = Math.round(number * 10.0) / 10.0;
return "" + number;
}
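As a quick check of the rounding rule above, here is a self-contained harness (a sketch; the static re-implementation mirrors the instance method, which drops the sign and rounds the magnitude to one decimal place).

```java
public class RoundInputDemo {
    // Mirrors roundInput: empty strings pass through, the sign is dropped,
    // and the magnitude is rounded to one decimal place.
    static String roundInput(String input) {
        if (input.isEmpty()) {
            return input;
        }
        double number = Math.abs(Double.parseDouble(input));
        number = Math.round(number * 10.0) / 10.0;
        return "" + number;
    }

    public static void main(String[] args) {
        System.out.println(roundInput("-3.14")); // 3.1
        System.out.println(roundInput("2.66"));  // 2.7
        System.out.println(roundInput("5"));     // 5.0
    }
}
```

One caveat worth knowing: because the input is parsed into a binary double, values such as "2.55" may round down (2.55 is stored slightly below 2.55), which is inherent to the original method as well.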
package jeu;
import java.util.*;
import javafx.beans.property.BooleanProperty;
import javafx.beans.property.ObjectProperty;
import javafx.beans.property.SimpleBooleanProperty;
import javafx.beans.property.SimpleObjectProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
public class Partie {
private ObjectProperty<Joueur> joueurCourant;
private Joueur joueur1;
private Joueur joueur2;
private LinkedList<Bushi> armeeJ1;
private LinkedList<Bushi> armeeJ2;
private Plateau plateau;
private BooleanProperty estPartieTerminee;
private int compteurTour;
public Partie (Joueur joueur1, Joueur joueur2) {
this.joueur1 = joueur1;
this.joueur2 = joueur2;
plateau = new Plateau();
armeeJ1 = new LinkedList<Bushi>();
armeeJ2 = new LinkedList<Bushi>();
joueurCourant = new SimpleObjectProperty<Joueur>();
armeeJ1.add(new Dragon(plateau.getCase(1,0),joueur1.getCouleur()));
armeeJ1.add(new Lion(plateau.getCase(2,0),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(3,0),joueur1.getCouleur()));
armeeJ1.add(new Lion(plateau.getCase(1,1),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(2,1),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(1,2),joueur1.getCouleur()));
armeeJ1.add(new Dragon(plateau.getCase(8,0),joueur1.getCouleur()));
armeeJ1.add(new Lion(plateau.getCase(7,0),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(6,0),joueur1.getCouleur()));
armeeJ1.add(new Lion(plateau.getCase(8,1),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(7,1),joueur1.getCouleur()));
armeeJ1.add(new Singe(plateau.getCase(8,2),joueur1.getCouleur()));
armeeJ2.add(new Dragon(plateau.getCase(1,9),joueur2.getCouleur()));
armeeJ2.add(new Lion(plateau.getCase(2,9),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(3,9),joueur2.getCouleur()));
armeeJ2.add(new Lion(plateau.getCase(1,8),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(2,8),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(1,7),joueur2.getCouleur()));
armeeJ2.add(new Dragon(plateau.getCase(8,9),joueur2.getCouleur()));
armeeJ2.add(new Lion(plateau.getCase(7,9),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(6,9),joueur2.getCouleur()));
armeeJ2.add(new Lion(plateau.getCase(8,8),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(7,8),joueur2.getCouleur()));
armeeJ2.add(new Singe(plateau.getCase(8,7),joueur2.getCouleur()));
estPartieTerminee = new SimpleBooleanProperty();
estPartieTerminee.set(false);
compteurTour = 0;
}
/*
* GETTERS & SETTERS
*/
public Joueur getJoueur1() {
return joueur1;
}
public Joueur getJoueur2() {
return joueur2;
}
public Joueur getJoueurCourant() {
return joueurCourant.get();
}
public ObjectProperty<Joueur> getJoueurCourantProperty(){
return joueurCourant;
}
public BooleanProperty getEstPartieTerminee(){
return estPartieTerminee;
}
public Joueur getGagnant() {
return joueurCourant.get();
}
public LinkedList<Bushi> getArmeeJ1() {
return armeeJ1;
}
public void setArmeeJ1(LinkedList<Bushi> armeeJ1) {
this.armeeJ1 = armeeJ1;
}
public LinkedList<Bushi> getArmeeJ2() {
return armeeJ2;
}
public void setArmeeJ2(LinkedList<Bushi> armeeJ2) {
this.armeeJ2 = armeeJ2;
}
public Plateau getPlateau() {
return plateau;
}
/*
* OBJECT METHODS
*/
public void nextJoueur() {
Case [][] pla = getPlateau().getDamier();
for(Case[] row : pla) {
for (Case case1 : row) {
case1.highlightOff();
}
}
getPlateau().resetTour(true);
if (testPartieTerminee()) {
estPartieTerminee.set(true);
}
else {
if(joueurCourant.get() == joueur1) {
joueurCourant.set(joueur2);
}else {
joueurCourant.set(joueur1);
}
getPlateau().clearMarquage(false);
compteurTour++;
}
}
public void doClearMarquage() {
LinkedList<Bushi> bushis = getPlateau().clearMarquage(true);
for (Bushi bushi : bushis) {
if (armeeJ1.contains(bushi)) {
armeeJ1.remove(bushi);
int scoreToAdd = (int) Math.round(((10*Math.abs((100-(Math.sqrt(compteurTour)*10))))*Math.sqrt(bushis.size()))/10)*10;
if (scoreToAdd<50) scoreToAdd = 50;
joueur2.setScore((int) (joueur2.getScore().get())+scoreToAdd);
}
else if (armeeJ2.contains(bushi)) {
int scoreToAdd = (int) Math.round(((10*Math.abs((100-(Math.sqrt(compteurTour)*10))))*Math.sqrt(bushis.size()))/10)*10;
if (scoreToAdd<50) scoreToAdd = 50;
joueur1.setScore((int) (joueur1.getScore().get())+scoreToAdd);
armeeJ2.remove(bushi);
}
}
}
public boolean testPartieTerminee() {
if (getPlateau().getActionVictorieuse()) {
// credit the 1000-point victory bonus to the winning (current) player's own score
joueurCourant.get().setScore(joueurCourant.get().getScore().get()+1000);
return true;
}
int countDragJ1 = 0;
int countDragJ2 = 0;
int countBushJ1 = 0;
int countBushJ2 = 0;
for(Bushi b : armeeJ1) {
if(b.getHauteur()==3) {
countDragJ1+=1;
}
else {
countBushJ1+=1;
}
}
for(Bushi b : armeeJ2) {
if(b.getHauteur()==3) {
countDragJ2+=1;
}
else {
countBushJ2+=1;
}
}
if(countBushJ1==0||countBushJ2==0||countDragJ1==0||countDragJ2==0) {
return true;
}
return false;
}
public void initialize() {
plateau.poseArmee(armeeJ1);
plateau.poseArmee(armeeJ2);
plateau.getCaseSelectionnee().addListener(new ChangeListener<Case>() {
public void changed(ObservableValue<? extends Case> observable, Case oldValue, Case newValue) {
plateau.testTestDeplacement(plateau.getCase(newValue.getX(), newValue.getY()).getElement());
}
});
nextJoueur();
}
}
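The capture-scoring expression in doClearMarquage is dense; isolating it as a helper makes the rule visible: captures are worth more early in the game (low compteurTour) and when more pieces fall at once, with a floor of 50 points. A sketch (the class and the name captureScore are hypothetical):

```java
public class ScoreRule {
    // scoreToAdd = round((10 * |100 - sqrt(turn) * 10| * sqrt(captured)) / 10) * 10,
    // floored at 50 points, as computed inline in Partie.doClearMarquage.
    static int captureScore(int turnCounter, int capturedCount) {
        double base = 10 * Math.abs(100 - Math.sqrt(turnCounter) * 10);
        int score = (int) Math.round((base * Math.sqrt(capturedCount)) / 10) * 10;
        return Math.max(score, 50);
    }

    public static void main(String[] args) {
        System.out.println(captureScore(0, 1));   // 1000: single capture on the first turn
        System.out.println(captureScore(100, 1)); // 50: the floor kicks in by turn 100
        System.out.println(captureScore(0, 4));   // 2000: more pieces scale by sqrt(count)
    }
}
```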
2018 has come and gone. As we proved with our list of the 100 best pop songs, there was no shortage of quality content released last year. However, today we’re turning our attention to the anthems that didn’t quite get their due. From Mariah Carey’s straightforward kiss-off “GTFO” to Anne-Marie’s nostalgia-inducing “2002,” these hidden gems really should have been bigger on the charts. We compiled a list of 50 of the year’s most under-represented bangers, ballads and bops in alphabetic order for you to peruse below!
A summery, nostalgia-inducing anthem with a writing credit from Ed Sheeran – how did Anne-Marie not dominate the Billboard Hot 100 with this for weeks on end?
Christina Aguilera kicked off the Liberation era with a demented, club-ready banger that was cruelly slept on by the public.
No slow burn here. Sabrina Carpenter’s “Almost Love” is all about taking a deep dive into a relationship, and the bouncy single goes equally hard from start to finish.
Celine Dion’s staggering contribution to the Deadpool 2 soundtrack stands toe-to-toe with some of her best ballads and surely deserved to make more than a ripple on the charts this year.
Dinah Jane threw it back to her sparkly pop roots on her debut solo single, and she should have netted a “Work From Home”-sized hit.
There is nothing subtle about Lizzo’s gloriously thirsty “Boys.” The fact that it should have been a hit is as obvious as the answer to 2+2.
MNEK and Hailee Steinfeld linked up over supercharged beats for one of the year’s most vibrant but under-loved collaborations.
OneRepublic’s “Connection” had all the ingredients to be one of the biggest songs of 2018. However, the relatable anthem failed to find the required spark.
Hayley Kiyoko launched 20gayteen with a banger that proved her more than worthy of the title of Lesbian Jesus.
A sensual anthem inspired by Janet Jackson’s “All Nite (Don’t Stop)”? Sign me right up. Troye Sivan featured pop’s reigning princess Ariana Grande on the slinky single, but even that wasn’t enough to garner an appearance on the Hot 100 because sometimes life isn’t fair.
A collaboration between J.Lo and rap's latest queen should have added another hit to both icons' discographies. It's an absolute mystery how this was not the case.
With every release, Dua Lipa further cements her status as one of pop’s most successful Club Queens. “Electricity” is yet another tribute to how good she sounds over smooth EDM productions.
No one does soul-searching balladry quite like James Arthur, and “Empty Space” is one of his best.
Lauren Jauregui’s first solo single was a stark departure from her work with one of pop’s most recognizable girl groups. Hopefully we give her latest offering – “More Than That” – the love both singles deserved.
No song off Joyride seemed more likely to propel Tinashe back up the charts than her mellow but loved-up “Faded Love.” Easily one of the best songs in her jam-packed discography, it highlights all that she is capable of if we’d give her a chance to truly shine.
There is nothing elusive about the meaning behind Mariah Carey’s massively underrated, expletive-riddled slow jam. In a perfect world, this would have become the legendary diva’s 19th number one.
Following the breathtaking success of “The Middle,” Zedd’s follow-up single surely should have been equally monumental. However, it unjustly left little more than a blip on the charts.
Avril Lavigne marked her return to music with a sweeping ballad tracking her battle with Lyme Disease. I’m still waiting for it to get the respect it truly deserves.
Relentlessly hopeful and buoyed by a euphoric production, Kim Petras’ “Heart To Break” defined pop perfection in 2018. I will never forgive The Gays for being unable to send it to the top of the charts.
From “Issues” to “Worst In Me,” Julia Michaels has time and again proven capable of capturing relatable emotions and presenting them as perfect tunes. “Jump” is yet another example.
Vanessa Hudgens did not save pop music with cult classics like “Sneakernight” and “Say OK” for us to continue sleeping on her musical career. But here we are, once again with her latest should-be hit.
To call the roll-out schedule for ZAYN’s Icarus Falls a little overwrought is an understatement. This may explain why none of the singles really had a chance on the charts. However, the silky smooth, early millennium stylings of “Let Me” should have warranted the album’s first global smash.
Meghan Trainor has more shimmery pop bops to her name than almost any other hitmaker on the scene. One of 2018’s best was the disco-tinged “Let You Be Right,” which didn’t even chart across the globe. What gives?
Ciara sparked a viral challenge with the release of a fast and furious banger. It was enough to propel her back onto the Hot 100 for the first time since 2015, but “Level Up” really should have left more of a lasting impact.
Kris Wu set his sights on American markets with the release of “Like That.” He broke onto the Hot 100 after conquering US iTunes, but the breezy cut could have been so much bigger.
A tribute to lost love, Toni Braxton’s “Long As I Live” is the defining heartbreak anthem of 2018. World-weary but gorgeous, it offers a mature glimpse into moving on without your heart’s true desire by your side.
Shawn Mendes gave us his best Justin Timberlake impression on his funk-laced “Lost In Japan,” and then he teamed up with Zedd for an even more danceable remix. The result has the staying power to compete with any of his biggest hits but inexplicably peaked outside the Top 40.
A devastating ode to a relationship on the brink of collapse, Noah Cyrus added another standout single to her discography with the Gallant collaboration. It is hard as hell to figure out why this failed to chart across the globe.
With 214 million streams and counting on Spotify, Sean Paul, David Guetta and Becky G’s club-ready banger should have been an early contender for the title of Song Of The Summer.
Janet Jackson did everything right with her anti-FOMO bop “Made For Now,” but it only peaked at number 88. Surely, the timely floor filler deserved a higher placement.
“My My My!” is the only song off Bloom to make an appearance on the Hot 100, but the frenzied banger truly should have climbed much higher than the Top 80.
From collabs like “Youth” to his Suncity EP, Khalid peppered 2018 with a slew of quality content. However, a clear highlight was his work with Martin Garrix on “Ocean.” The restrained club anthem truly should have soared so much higher.
Niall Horan proved he had what it takes to make it as a solo star with “Slow Hands,” and he hasn’t put a toe out of place since. However, it’s a mystery why his follow-up singles – particularly “On The Loose” – haven’t rocketed back up the charts.
2019 is going to be the year we stop letting the public sleep on Carly Rae Jepsen. Until then, join me in a moment of silence for her latest underrated, perfect pop release.
Despite claiming top honors in the UK, Calvin Harris, Sam Smith and Jessie Reyez’s ode to one night stands only peaked at number 65 on the Hot 100. Can anyone explain this travesty?
When are we going to stop ignoring Nick Jonas’ massive potential? The “Jealous” hitmaker has what it takes to be one of the biggest men in the game, if only we’d give him a better chance. Sadly, “Right Now” is another under-appreciated slice of quality pop.
Camila Cabello reunited with Pharrell Williams on “Sangria Wine,” which feels like a grown-up relative of “Havana.” However, the sensual bop failed to pick up enough steam to match its predecessor.
Club-ready and complete with a neon-hued video, “Say Something” is a clear highlight on David Guetta’s 7. So I really need someone to explain to me how it hasn’t even appeared on the Hot 100 yet.
Shockingly, the fourth single from Pink’s Beautiful Trauma failed to chart across the globe, and that’s just not alright.
Elle King recaptured the rollicking sass of her breakout hit "Ex's and Oh's" on this year's "Shame." It is truly a shame that we, the people, were unable to return the favor by sending her back into the Top 10.
Demi Lovato and Clean Bandit’s stuttering, self-love banger hit the top spot in the UK but peaked at 58 on the Hot 100. Excuse me while I grab a plane ticket to travel somewhere with actual taste.
Mike Posner’s “Song About You” is a brutally honest tribute to heartbreak, and the soul-bruising single deserved much more love.
MØ and Diplo once again proved to be the perfect musical pair by dropping the thinking man’s Song Of The Summer. And humanity proved that we are unworthy by failing to send their latest release up the charts.
A throwback-inspired ode to being himself, Charlie Puth’s “The Way I Am” should have netted the hitmaker another “Attention”-sized smash.
Admitting to your feelings for someone has never sounded sexier than it does on MNEK’s Language highlight. The club-ready bop proved the songwriter-for-the-stars had what it takes to front an artist project of his own.
Following the breakout success of “Love Lies,” it is shocking that Normani’s equally slinky 6LACK collab didn’t also rocket into the Top 10 (or even warrant a music video).
What other songs deserved more love in 2018? Let us know below, or by hitting us up on Facebook and Twitter! |
A highly scalable dielectric metamaterial with superior capacitor performance over a broad temperature range

Interface effects in a new class of nanocomposites generate superior energy storage performance over broad temperatures.

INTRODUCTION

Dielectric materials play a key role in electronic and electric devices and systems for controlling and storing charge and electric energy. For example, they are widely used for improving power efficiency, such as in hybrid electric vehicles (HEVs), electric grids, and networks, and for energy storage in pulse power systems. Compared with inorganic dielectrics, polymers are attractive for their low dielectric loss, low manufacturing cost owing to their thin-film roll-to-roll fabrication process, high breakdown strength, and graceful failure mode, which ensures high reliability when operating at high electric fields. A key figure of merit for dielectric materials is the energy density U_e:

U_e = (1/2) K ε₀ E²

where E is the applied electric field, K is the dielectric constant, and ε₀ is the vacuum permittivity (8.85 × 10⁻¹² F/m). Hence, a dielectric material with high K and high breakdown field E is highly desirable, in addition to low cost. Moreover, the charge/discharge (C/D) efficiency of a dielectric material at high electric fields is another critical performance parameter; it measures how efficiently the material can store, control, and deliver charge and energy, and is directly linked to the high-field dielectric loss (see fig. S1A). In dielectric polymers, the conduction loss, which is negligible at low fields, can become high at high fields (and high temperatures), because most conduction losses increase exponentially with electric field and temperature. High loss at high field also causes heating in dielectric devices, which may lead to failure of the devices.
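As a quick numerical sketch of the energy-density relation above (the function name and unit conventions below are my own, not from the paper), U_e can be evaluated for any K and field:

```python
# Energy density of a linear dielectric: U_e = (1/2) * K * eps0 * E^2
EPS0 = 8.85e-12  # vacuum permittivity, F/m

def energy_density_j_per_cm3(K, E_MV_per_m):
    """Return U_e in J/cm^3 for dielectric constant K and field E in MV/m."""
    E = E_MV_per_m * 1e6                  # MV/m -> V/m
    u_j_per_m3 = 0.5 * K * EPS0 * E ** 2  # J/m^3
    return u_j_per_m3 * 1e-6              # J/m^3 -> J/cm^3

# BOPP at its breakdown field (K = 2.2, ~700 MV/m, figures quoted below)
print(round(energy_density_j_per_cm3(2.2, 700), 2))
```

Evaluating at the full 700 MV/m breakdown field gives roughly 4.8 J/cm³; the ~3 J/cm³ quoted below for BOPP corresponds to the lower fields at which a high C/D efficiency is maintained.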
Hence, evaluating and comparing U_e of dielectric materials should take into consideration the electric field E at which the dielectric can maintain a high C/D efficiency (low loss). Therefore, despite its low K (=2.2), the exceptionally high breakdown field (~700 MV/m), low loss, and low cost of biaxially oriented polypropylene (BOPP) films enable BOPP film capacitors to reach a relatively high energy density (~3 J/cm³) with high C/D efficiency, which makes BOPP the capacitor of choice for a wide range of applications such as HEVs and high-voltage electric networks. On the other hand, the low operation temperature of BOPP (<80°C) requires external cooling to maintain a safe operating temperature for these capacitors, because many applications are in hot environments (up to 150°C). This external cooling increases system size and cost. To meet the demand for increased energy and power levels, increased functionality within limited volume, and miniaturization of modern electronic and electrical devices and systems, there are urgent needs to (i) improve the energy density while developing strategies to maintain low loss, so that a higher energy can be stored and delivered efficiently without increasing the device volume, and (ii) raise the operation temperature of polymer capacitors to above 150°C, both for high-temperature applications and for eliminating external cooling in many polymer capacitor applications. Several approaches have been investigated in the past to raise the dielectric constant K of polymer-based dielectrics and hence improve the energy density. For example, nanocomposites in which a high volume loading (>15 volume %) of high-dielectric-constant nanofillers (K > 1000) is added to a polymer matrix have been widely studied to raise the dielectric constant (7). The rationale for this approach is based on the assumption that the composites can maintain the high breakdown field E of the polymer matrix, and hence, a high U_e can be obtained.
However, the large dielectric contrast between the nanofillers and the polymer matrix, and the high volume loading of nanofillers required to raise K of the composites, result in the intensification of local electric fields in the polymer matrix, leading to a large reduction of the electric breakdown strength. To mitigate this local field effect, the surfaces of high-K nanofillers have been modified, for example, to form core-shell structures that reduce the local field strength so that the breakdown strength of the nanocomposites approaches that of the polymer matrix. By integrating nanocomposite layers with different nanofiller morphologies into multilayered films, both the dielectric constant and breakdown field can be improved compared with the polymer matrix. On the other hand, these studies have not addressed the issues of improving the C/D efficiency at high fields and raising the operation temperature to 125°C or even 150°C. For dielectric materials, low cost is one of the most important factors to consider in developing new dielectric approaches. The composite approaches developed so far add complications and cost in fabricating polymer films (typically below 5 μm thick in polymer capacitors) and are not compatible with large-scale/low-cost roll-to-roll processes. During the past decade, dielectric polymers with high glass transition temperatures (T_g > 150°C) have been examined for high-temperature polymer capacitors. It was found that although these high-T_g polymers have low dielectric loss (<1%) at low electric fields (<10 MV/m), their large conduction loss at high electric fields (>300 MV/m) and high temperatures causes high loss and a large reduction of the breakdown field, resulting in a low U_e at high temperatures. Recently, Li et al. showed that nanocomposites of the cross-linked high-T_g polymer BCB with 10 volume % boron nitride (BN) nanosheets can substantially reduce the conduction loss at high temperature while achieving a high breakdown field.
As a result, a discharged energy density of U_e ~ 2.2 J/cm³ with 90% C/D efficiency (loss < 10%) was demonstrated at 150°C under 400 MV/m. Moreover, by coating high-T_g polymer films with a thin layer of wide-bandgap (>5 eV) inorganics such as h-BN or Si₃N₄, it was demonstrated that conduction loss at high field can be substantially reduced. The directly measured C/D efficiency of h-BN-coated PEI (polyetherimide) films can reach 90% at 150°C under 400 MV/m, delivering a U_e value of ~2.3 J/cm³. However, there are substantial challenges in translating these strategies into the production of large-scale/low-cost high-performance dielectric polymer films. In nanocomposites, even a small amount of nanofillers can generate large interface areas and boundaries and induce substantial effects in polymers. For instance, in PEI, a high-temperature (T_g = 217°C) amorphous dipolar polymer that has been considered a promising candidate for high-temperature capacitor applications, it was observed that a very low volume loading (<0.5 volume %) of nanofillers can lead to a more than 50% increase in the dielectric constant K. However, as will be shown here, the presence of nanofillers in PEI films does not reduce the conduction loss at high fields, and hence does not enhance the breakdown field or generate a large improvement in the high-temperature performance. In contrast to amorphous polymers, polymer single crystals exhibit very low electric conductivity because of their large bandgap (>7 eV). Simulations by Xu et al. show that the presence of crystallinity in semicrystalline polymers can substantially reduce the high-field conduction loss. Our current paper reports the development of a highly scalable and low-cost dielectric metamaterial approach, in which nanoparticles at very low volume loading (~0.2 volume %) substantially enhance the energy density, C/D efficiency, and breakdown field of high-temperature semicrystalline dipolar polymers.
Specifically, we show that in poly(arylene ether urea) (PEEU), a high-T_g (>250°C) semicrystalline dipolar polymer, ca. 0.2 volume % of 20-nm-sized alumina nanofiller increases both the dielectric constant K and the breakdown field E over a broad temperature range up to >150°C. The dielectric constant K is raised from 4.7 (base PEEU) to 7.4. At 150°C, the nanocomposite films exhibit a breakdown field of 600 MV/m, increased from 400 MV/m for the base PEEU films. Moreover, the nanofiller at such a low loading also substantially reduces the high-field conduction loss. As a result, the PEEU films deliver a discharged U_e of 5 J/cm³ with a high C/D efficiency (>90%) at 150°C. We chose PEEU for this study because its urea unit has a high dipole moment of 4.56 D; these dipoles can serve as deep traps and reduce the conduction loss. In addition, the crystalline phase in PEEU is sensitive to processing conditions, which may be exploited for tuning the dielectric properties in dielectric metamaterials. Alumina (Al₂O₃) nanoparticles (K = 9.1; size, 20 nm; gamma phase), which have been widely used in nanocomposites, are chosen as the nanofiller. The films were fabricated using a solution casting method (see Materials and Methods for details).

Dielectric properties of PEEU nanocomposites

The dielectric constant K at 1 kHz of PEEU films at room temperature with various low nanofiller loadings is presented in Fig. 1A, showing a dielectric enhancement peak of K = 7.4 at ca. 0.21 volume %. The films containing nanofillers display very similar dielectric loss and frequency dispersion to that of PEEU (see Fig. 1B). Figure 1C presents the dielectric performance of the films with nanofillers up to high temperature, showing thermal stability with low dielectric loss up to 200°C.
For dielectric materials such as polymers, the losses at high electric fields and high temperatures (such as at 150°C) are caused mainly by high-field conduction, which can be much higher than that at low fields (<10 MV/m, for example). High conduction loss at high electric field results in a low C/D efficiency, which lowers the energy density U_e and may cause a low breakdown field. The C/D behavior of the films at room temperature and high electric fields was examined first at the compositions of interest. The data for PEEU films with different nanofiller loadings at room temperature are shown in Fig. 2 (A to C). Distinctly different from the PEI nanocomposites, PEEU films with 0.21 volume % filler exhibit a 50% higher breakdown field (900 MV/m) than the base PEEU (600 MV/m). The C/D efficiency η at high fields (conduction loss = 1 − η) is deduced from the C/D data (see fig. S1A). Owing to enhancement in both the dielectric constant K and breakdown field E, the PEEU films with 0.21 volume % nanofiller loading deliver a discharged energy density of 27 J/cm³ with a C/D efficiency of >90% under 900 MV/m, compared with 8.2 J/cm³ for the base PEEU under 600 MV/m, as shown in Fig. 2B. At higher nanofiller loading, the breakdown field of the PEEU films becomes similar to or even lower than that of the base PEEU (see Fig. 2C). That is, the enhancements in both the dielectric constant and the breakdown field in PEEU occur in a narrow nanofiller composition range. Figure 2C summarizes the breakdown field E_b, which is the highest field measured from the C/D curve, versus the filler loading at room temperature, showing a peak at ca. 0.21 volume %. At room temperature, the electric fields E_CD at 90% C/D efficiency of the nanocomposites are the same as the highest breakdown field E_b measured. The large enhancement in the breakdown field in the PEEU films with 0.21 volume % nanofillers is maintained at high temperatures. The dielectric breakdown strength enhancement peaks at ca.
0.21 volume %, which is the same as that at room temperature. The C/D curves measured at 150°C are shown in Fig. 2D. The base PEEU films display a breakdown field of 400 MV/m, while the PEEU films with 0.21 volume % nanofiller have a higher breakdown field, reaching 600 MV/m. Figure 2E compares the discharged energy density and C/D efficiency of the films with 0.21 volume % loading with those of PEEU at 150°C, revealing that the films with 0.21 volume % of nanofillers deliver a discharged energy density of 10.6 J/cm³ at 600 MV/m (the highest breakdown field measured), compared with 2.9 J/cm³ for the base PEEU at 400 MV/m. Figure 2F shows how the nanofiller loading influences the high-temperature (e.g., 150°C) C/D efficiency, i.e., the electric field E_CD at 90% C/D efficiency. In Fig. 2F, the breakdown field versus the nanofiller loading is also presented. The data reveal that PEEU films with ca. 0.21 volume % filler loading exhibit an E_CD of 400 MV/m, compared with 200 MV/m for base PEEU. The enhancement in both K and E_CD results in a discharged energy density of 5 J/cm³ at 90% C/D efficiency, which is six times that of the base PEEU (0.83 J/cm³) at 150°C. The U_e demonstrated here is more than twice the state of the art at high temperature (150°C), not to mention the highly scalable and low-cost strategy developed here. In fig. S1B, we illustrate how the C/D efficiency of PEEU films under a fixed high electric field changes with alumina nanofiller loading at both room temperature and 150°C. At room temperature, the efficiency is taken at 600 MV/m; at 150°C, it is taken at 400 MV/m. The data show that under a given electric field, the films with 0.21 volume % filler loading exhibit a much higher efficiency than PEEU, reflecting the substantially reduced conduction loss of the PEEU films at high electric fields due to 0.21 volume % of nanoparticles.
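For context, the discharged energy density and C/D efficiency quoted above come from the two branches of a unipolar P-E loop: the energy density of each branch is the integral of E dP, and η is the ratio of the discharge to the charge integral. A minimal numerical sketch using a toy lossless linear dielectric (in practice the measured loop arrays would be substituted; all names here are my own):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid-rule integral of y dx (small version-independent helper)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def loop_energies(E_up, P_up, E_down, P_down):
    """Charge/discharge energy densities (J/m^3) and C/D efficiency from
    the two branches of a unipolar P-E loop: U = integral of E dP."""
    u_charge = _trapz(E_up, P_up)
    u_discharge = abs(_trapz(E_down, P_down))
    return u_charge, u_discharge, u_discharge / u_charge

# Toy lossless linear dielectric (K = 4.7, as for base PEEU): the discharge
# branch exactly retraces the charge branch, so the efficiency comes out
# as 1 and the stored density equals (1/2)*K*eps0*E_max^2
E = np.linspace(0.0, 6e8, 200)   # field ramp to 600 MV/m, in V/m
P = 4.7 * 8.85e-12 * E           # linear response (treating P ~ D here)
uc, ud, eta = loop_energies(E, P, E[::-1], P[::-1])
```

With real loop data the discharge branch releases less energy than the charge branch stores, and 1 − η is the high-field loss discussed above.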
It should be pointed out that at low fields, nanofillers do not change the dielectric loss of the films, which is expected because of the very low nanofiller loading in the films (Fig. 1B). At high fields, the conduction loss (loss = 1 − efficiency) shows a strong dependence on the nanofiller loading and temperature. It is also noted that the C/D efficiency curves in fig. S1B resemble the dielectric breakdown curve, revealing the direct correlation between charge conduction at high electric field and dielectric breakdown in the PEEU films. In dielectric materials, the presence of deep traps may reduce the charge conduction at high fields. Here, thermally stimulated depolarization current (TSDC) is used to probe the trap states in the PEEU films with different nanofiller loadings, and the data recorded are presented in fig. S3. The PEEU films with 0.21 volume % nanofiller display a major discharge peak at 92.5°C, higher than 78°C for the base PEEU and 70°C for the films with 0.43 volume % nanofillers. The higher peak temperature and the sharper rising edge of the peak indicate that the PEEU films with 0.21 volume % filler have a deeper trap level than the base PEEU and the films with 0.43 volume % nanofillers. The presence of deep traps reduces the charge carrier mobility μ from its trap-free value μ₀. Several models have been derived, and in general it follows that μ ~ μ₀ exp(−E_t/k_B T), where E_t is the trap level, k_B is the Boltzmann constant, and T is the temperature in kelvin. These results are consistent with the direct conduction measurements on PEEU films with nanofillers. The electric conductivity measured at 100 MV/m and room temperature is σ = 1.5 × 10⁻¹⁷ S/cm for PEEU films with 0.21 volume % loading, more than one order of magnitude smaller than that of PEEU, σ = 2 × 10⁻¹⁶ S/cm.
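The trap-depth argument above can be made concrete by evaluating μ ~ μ₀ exp(−E_t/k_B T) for two trap levels. The 0.90 and 0.96 eV depths below are purely illustrative (the paper does not report absolute trap energies); the point is that a modest ~0.06 eV deepening at room temperature is enough to cut the trap-limited mobility, and hence the conduction current, by roughly the measured order of magnitude:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def mobility_ratio(trap_depth_eV, T=298.0):
    """mu/mu0 for a trap of depth E_t: mu ~ mu0 * exp(-E_t / (k_B * T))."""
    return math.exp(-trap_depth_eV / (K_B_EV * T))

# Illustrative (not measured) trap depths: a ~0.06 eV deeper trap at room
# temperature reduces the mobility by roughly one order of magnitude
ratio = mobility_ratio(0.96) / mobility_ratio(0.90)
```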
Characterizations of structural changes of PEEU nanocomposites

To examine the possible changes in polymer morphology in the PEEU films with nanofiller loading, we collected wide-angle x-ray diffraction (XRD) data for the base PEEU and nanocomposites, which are shown in Fig. 3A. PEEU with 0.21 volume % displays a distinctly different x-ray pattern from the base PEEU and PEEU with higher alumina loadings. The data were analyzed (see fig. S4) to estimate the crystallinity and the peak position change of the amorphous phase of the base PEEU and nanocomposite films. Using the peak position to estimate the mean interchain spacing shows an expansion of the interchain spacing of about 5.8% for films with 0.21 volume % of nanofillers compared with base PEEU and films with higher volume loading (0.43 and 0.63 volume %) (see table S1 for a summary of the structural changes deduced from the x-ray data). We also obtained infrared spectra of the PEEU films with different nanofiller loadings and noticed a change in the Fourier transform infrared (FT-IR) data, as presented in Fig. 3B, associated with the hydrogen bonding in the PEEU polymer (see fig. S2B for schematics of hydrogen bonding in PEEU). The data show a softening of the hydrogen bonding in the PEEU films with 0.21 volume % nanofiller loading compared with PEEU and PEEU with higher volume loading of nanofillers, indicating that 0.21 volume % of nanofillers partially disrupts the hydrogen bonding in the polymer, which reduces the constraints on the urea dipoles. Both the weakening of hydrogen bonding and the expansion of the interchain spacing, which generates local free space for dipoles, will enhance the dipolar response to the external field and increase the dielectric constant. The data in Fig. 3A also suggest that the films with 0.21 volume % have a relatively smaller amorphous peak area compared with the other films.
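The peak-area crystallinity estimate mentioned above can be sketched numerically. The two Gaussians below are synthetic stand-ins for the broad amorphous halo and the sharp crystalline reflection; their positions, widths, and amplitudes are illustrative, not the paper's fitted values:

```python
import numpy as np

def gaussian(x, amp, center, width):
    """Simple Gaussian profile used to model one diffraction feature."""
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic wide-angle pattern: broad amorphous halo + sharp crystalline peak
tt = np.linspace(5.0, 40.0, 4001)            # 2-theta grid, degrees
halo = gaussian(tt, amp=1.0, center=20.0, width=4.0)
peak = gaussian(tt, amp=3.0, center=21.5, width=0.4)

# Crystallinity = crystalline peak area / total diffracted area
dx = tt[1] - tt[0]
area_halo, area_peak = halo.sum() * dx, peak.sum() * dx
chi_c = area_peak / (area_halo + area_peak)
```

In practice the two profiles would be fitted to the measured pattern first; the area ratio then gives the crystallinity estimate reported in the inset of Fig. 3A.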
By comparing the peak areas of the broad amorphous peak and the relatively sharp crystalline peak, we estimate the crystallinity of the PEEU films with different nanofiller loadings, which is presented in the inset of Fig. 3A. The data show a slight increase in crystallinity (and a reduced crystallite size; see fig. S4D) for the PEEU films with 0.21 volume % compared with the base PEEU. The results suggest that the nanofiller-induced increase in crystallinity and reduced crystallite size in the PEEU 0.21 volume % nanocomposites reduce the distance between crystallites, reducing the mean free path of the mobile charges. This may contribute to the reduced conductivity and enhanced breakdown strength. This mechanism is supported by the experimental results of Wang et al. on low-density polyethylene (LDPE)/alumina nanocomposites, in which 0.5 weight % (wt %) (0.12 volume %) of alumina nanofiller (particles of 30 nm diameter) in LDPE enhances the breakdown field and reduces the conduction loss at high electric field, compared with the neat LDPE and composites with higher nanofiller loadings. The nanofillers at 0.5 wt % also increase the crystallinity of LDPE films. Panels A and B of Fig. 4 compare U_e at the breakdown fields and at fields with 90% C/D efficiency, respectively, of the PEEU films with 0.21 volume % filler loading with the state-of-the-art high-temperature nanocomposites in the literature. The PEEU nanocomposites deliver a U_e value of 4.8 J/cm³ at 150°C at 90% C/D efficiency, which is about six times that of the base PEEU. In addition, we also choose PEI and polyaromatic ether ketone (PAEK) for the comparison. PEI is a high-temperature (T_g = 217°C) amorphous dipolar polymer, and its nanocomposite shows a dielectric enhancement from K = 3.2 (base PEI) to K = 5 for films with 0.32 volume % of 20-nm-diameter nanoparticles.
For PEI nanocomposites, an earlier study has shown that nanofillers do not improve the breakdown field or the high-temperature performance. PAEK is a commercial high-temperature semicrystalline dipolar polymer (K = 3.6, T_g = 230°C, and T_m = 350°C) (see fig. S2D for the chemical structure). The PEI and PAEK films and their nanocomposite films were prepared here, and measurements were carried out (see figs. S5 and S6). Although the 20-nm nanofiller at 0.32 volume % increases the dielectric constant in PEI, the data show that it does not improve the C/D efficiency of the PEI films. This is different from the PEEU films. For PAEK, it was found that making high-quality films is not straightforward; hence, only one nanocomposite composition (0.35 volume % of 20-nm-sized alumina particles, which is close to the PEI composition showing high dielectric enhancement) was fabricated. Data in fig. S6 show that PAEK films with nanofiller loading display very similar features to those observed in PEEU, i.e., nanofillers at this composition enhance both the dielectric constant and the breakdown field. K is increased from 3.6 for the base PAEK to 4.3 in the nanocomposite films. The breakdown field at room temperature is increased from 400 MV/m for base PAEK to 500 MV/m for the nanocomposite. At 150°C, it is increased from 350 to 400 MV/m. In addition, at 150°C, nanofillers substantially reduce the conduction loss and improve the C/D efficiency, as shown in fig. S6 (D and E).

Comparison of capacitor performance of PEEU nanocomposites with multiple classic dielectric polymeric composites

Panels A and B of fig. S7 present the discharged energy density U_e of the base polymers at the breakdown electric fields and at fields with 90% C/D efficiency, respectively, from which the enhancement ratios of composite U_e to base polymer U_e in Fig. 4 (A and B) are derived.
As can be seen, the high-temperature semicrystalline polymers, e.g., PEEU and PAEK, exhibit a high enhancement ratio at 150°C compared with the other polymers, indicating the effectiveness of nanofillers at low volume loading in reducing the conduction loss of these polymers at high temperature. Figure S7 (C and D) presents another view of Fig. 4, i.e., with the enhancement ratio (composite film U_e / base polymer U_e) on the vertical axis and the U_e of the nanocomposite on the horizontal axis, showing the superior capacitance performance of PEEU nanocomposites. In addition to the results presented here, we note that in several widely used low-temperature (i.e., T_g near or below room temperature) semicrystalline polymers such as LDPE, polypropylene (PP), and tetrafluoroethylene-hexafluoropropylene-vinylidene fluoride (THV) terpolymer, early studies have also shown that nanofillers at very low volume loading (<0.5 volume %) enhance the breakdown field and reduce the high-field conduction loss at room temperature. On the other hand, no enhancement in the dielectric constant was observed in these low-temperature dielectric polymers. The enhancements of capacitance performance from nanocomposites with low nanofiller volume loading are summarized in table S2, including the high-temperature polymers PEEU, PEI, and PAEK and the low-temperature polymers LDPE and PP.

DISCUSSION

This paper reports that a low volume loading of nanofillers (<0.5 volume %) in high-T_g semicrystalline dipolar polymers, PEEU and PAEK, can increase the dielectric constant, reduce the high-field conduction loss, and enhance the breakdown field over a broad temperature range. The experimental data on PEEU show that nanofillers at such a low volume content generate local and nanostructural changes that weaken the hydrogen bonding and expand the interchain spacing, hence creating local "free volume" and reducing the local constraints on polymer dipoles in the glassy state.
These contribute to the increase in the dielectric constant. In addition, nanofillers at 0.21 volume % loading in PEEU films deepen the trap level, increase the crystallinity, and reduce the crystallite size, which may contribute to the reduced conduction loss and enhanced breakdown field. We note that dielectric metamaterials have been studied quite extensively in the past decades for high-frequency applications (from microwave to optical frequencies). In these dielectric metamaterials, local structures and interfaces, rather than averaged structures, play a substantial role in enhancing the material performance and generating new material responses not found in natural materials. The results presented here demonstrate that such a dielectric metamaterial strategy can also be explored at low frequencies for controlling and storing charge and electric energy in dielectric composites. The low-cost and highly scalable approach demonstrated here paves the way for the development of a new class of dielectric metamaterials with superior capacitance performance over a broad temperature range.

MATERIALS AND METHODS

Preparation of PEEU nanocomposites

PEEU powder was synthesized by Key Synthesis LLC, following the synthesis process shown in fig. S2A. The ¹H nuclear magnetic resonance spectrum of PEEU is presented in fig. S2B. The molecular weight M_n is around 20,000. Alumina (Al₂O₃) nanoparticles (gamma phase) with a mean particle diameter of 20 nm were purchased from US-Nano. To prepare the nanocomposite solution, a proper amount of PEEU powder was dissolved in dimethylformamide (DMF) at 60°C and stirred overnight to obtain a homogeneous solution. Alumina nanoparticles at the selected weight percentage were dispersed in DMF at room temperature using Elma "P" Series Ultrasonics (250 W) for 1 hour. Afterward, the PEEU solution was poured into this suspension and sonicated for 6 hours.
Then, the solution was cast onto a silicon plate, kept in a vacuum oven for 4 hours, and dried at 80°C for 12 hours to remove the solvent. The obtained film was heated to 180°C for 24 hours to further remove the solvent. Platinum (Pt) electrodes of 4 mm diameter were sputtered on the composite films for the dielectric characterization. The thickness of PEEU and its nanocomposites is in the range of 2 to 3 μm. It was found that a long ultrasonication time is critical to ensure uniform nanocomposite films. Relatively small fluctuations in filler content could compromise the homogeneity of the final product.

Preparation of PEI nanocomposites

Ultem 1000 PEI polymer resin was purchased from PolyK Technologies. To prepare the nanocomposite solution, a proper amount of PEI powder was dissolved in DMF at 60°C and stirred overnight to obtain a homogeneous solution. Alumina nanoparticles at 1 wt % were dispersed in DMF at room temperature using Elma "P" Series Ultrasonics (250 W) for 1 hour. Afterward, the PEI solution was poured into this suspension and sonicated for 12 hours. The solution was then cast onto a clean glass slide. The solution-cast films were kept in a drying oven at 70°C for 12 hours to remove the solvent and then heated to 100°C and 150°C for 1 hour each and to 200°C for 12 hours, followed by a final drying step at 225°C for 2 hours. Afterward, the films were kept in a vacuum oven at 200°C for 1 day to further remove any residual solvent. The film was peeled off from the glass substrate by placing it in deionized (DI) water; then, the film was dried at 70°C in a vacuum oven. The thickness of PEI and its nanocomposite films is in the range of 7 to 8 μm.

Preparation of PAEK nanocomposites

PAEK resin (P7000) was provided by Polymics Ltd. (State College). To prepare the nanocomposite solution, a proper amount of PAEK powder was dissolved in N-methyl pyrrolidone (NMP) at 150°C and stirred overnight to obtain a homogeneous solution.
The alumina nanoparticles at 1 wt % were dispersed in NMP at room temperature using Elma "P" Series Ultrasonics (250 W) for 1 hour. Afterward, the PAEK solution was poured into this suspension and sonicated for 12 hours. The solution was then cast onto a clean glass slide. The solution-cast films were kept in a drying oven at 120°C for 20 hours to remove the solvent and then heated to 150°C for 12 hours. Afterward, the films were kept in a vacuum oven at 150°C for 1 day to further remove any residual solvent. The film was peeled off from the glass substrate by placing it in DI water; then, the films were dried at 70°C in a vacuum oven. The thickness of PAEK and its nanocomposites is from 10 to 15 μm.

Electrical measurements

Dielectric properties at different temperatures were characterized by an HP 4284 LCR meter connected to a Delta 9023 environment chamber. The polarization-electric field unipolar loops (P-E loops) and the breakdown strength at various temperatures were measured with a modified Sawyer-Tower circuit at 10 Hz; the area of the electrode is 4.52 mm². The Weibull distribution of the dielectric breakdown of PEEU nanocomposites with 0.21 volume % alumina is presented in fig. S8. The XRD data for PEEU nanocomposites were collected at room temperature using a PANalytical XPert Pro MPD diffractometer. The TSDC test was carried out using an HP 4140B pA meter, which was connected to a Trek high-voltage amplifier (model 609 D-6) and an environment test chamber (Delta 9023). To perform the TSDC test, the samples were poled at a field of 10 MV/m (120°C poling temperature for 10 min). Then, the samples were cooled down to −80°C, and the field was removed. The depolarization currents were measured at a constant heating rate of 5°C/min. The high-field conductivity was measured on a Cascade Microtech probe station with an HP 4140B pA meter/dc voltage source and a Kepco bipolar operational power supply/amplifier (model BOP 1000M).
The measurement was carried out under an electric field of 100 MV/m. FT-IR spectra were measured at room temperature using a Bruker VERTEX 70 spectrophotometer in attenuated total reflectance mode; the resolution was 2 cm⁻¹.

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/6/4/eaax6622/DC1
Fig. S1. Calculation of C/D efficiency for PEEU composites.
Fig. S2. Chemical information of PEEU and PAEK.
Fig. S3. TSDC data of PEEU composites.
Fig. S4. Analysis of XRD curves for PEEU and its nanocomposites.
Fig. S5. Capacitor performance of PEI and its nanocomposites.
Fig. S6. Capacitor performance of PAEK and its nanocomposites.
Fig. S7. Summary of capacitor performance for the base polymers.
Fig. S8. Weibull distribution of dielectric breakdown for PEEU composites.
Table S1. Summary of structural information for PEEU and its nanocomposites.
Table S2. Summary of capacitor performance for various polymers and their nanocomposites.
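The Weibull analysis of breakdown fields referenced above (fig. S8) fits the failure probability P = 1 − exp(−(E/α)^β). A common estimation sketch, using median-rank plotting positions and a linear regression; the breakdown fields below are hypothetical values for illustration, not the paper's data:

```python
import numpy as np

def weibull_fit(fields):
    """Two-parameter Weibull fit (scale alpha, shape beta) of breakdown
    fields via median-rank plotting positions F_i = (i - 0.3)/(n + 0.4)
    and a linear regression of ln(-ln(1 - F)) against ln(E)."""
    e = np.sort(np.asarray(fields, dtype=float))
    n = e.size
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimates
    y = np.log(-np.log(1.0 - f))
    x = np.log(e)
    beta, intercept = np.polyfit(x, y, 1)         # slope = shape parameter
    alpha = np.exp(-intercept / beta)             # field at 63.2% failure
    return float(alpha), float(beta)

# Hypothetical breakdown fields in MV/m, for illustration only
alpha, beta = weibull_fit([540, 565, 580, 590, 600, 610, 620, 635])
```

The scale parameter α is the characteristic breakdown field (63.2% failure probability), and a large shape parameter β indicates a narrow breakdown distribution.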
//
// RecToolBar.h
// MainPart
//
// Created by blacksky on 2020/3/9.
// Copyright © 2020 blacksky. All rights reserved.
//
#import <UIKit/UIKit.h>

@class QMUIButton; // from QMUIKit; forward-declared so this header compiles without importing the full framework

typedef void (^ToolBarClickBlock)(NSInteger);

@interface RecToolBar : UIView
@property (nonatomic, strong, readonly) QMUIButton *favorBtn;

// Block properties use copy semantics so stack blocks are moved to the heap
@property (nonatomic, copy) ToolBarClickBlock detailBlock;
@property (nonatomic, copy) ToolBarClickBlock discountBlock;
@property (nonatomic, copy) ToolBarClickBlock favorBlock;
- (void)changeColorWithSelected:(BOOL)selected;
@end
|
//
// Task.h
// timelyManner
//
// Created by <NAME> on 7/15/13.
// Copyright (c) 2013 Rhombus Inc. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <CoreData/CoreData.h>
enum {
kStopWatchTask,
kTimerTask,
kTripTask
};
@class Instance;
@interface Task : NSManagedObject
@property (nonatomic, retain) NSNumber * avgTime;
@property (nonatomic, retain) NSDate * lastRun;
@property (nonatomic, retain) NSString * name;
@property (nonatomic, retain) NSNumber * taskType;
@property (nonatomic, retain) NSSet *instances;
@property (nonatomic, retain) NSArray *activeInstances;
@property (nonatomic, retain) NSDate *createdAt;
@end
@interface Task (CoreDataGeneratedAccessors)
- (void)addInstancesObject:(Instance *)value;
- (void)removeInstancesObject:(Instance *)value;
- (void)addInstances:(NSSet *)values;
- (void)removeInstances:(NSSet *)values;
- (BOOL)isStopWatchTask;
- (BOOL)isTripTask;
- (void)updateLastRun;
- (NSInteger)averageInstanceTime;
@end
|
Is it the biggest looming crisis that you have never heard of?
Since 1945, the world's population has tripled to seven billion, and feeding that population has relied increasingly on artificial fertilisers.
Phosphates, among the most important fertilisers, come from an ore that is in limited supply. It is mined, processed and spread on to our fields, whence it is ultimately washed away into the ocean.
So what will happen if one day we run out of the stuff?
"Crop yields will drop very, very spectacularly," chemist Andrea Sella, of University College, London, told Wednesday's Business Daily programme on the World Service.
"We will be in very, very deep trouble. We have to remember that the world's population is growing steadily, and so demand for phosphorus is growing every year."
As Dr Sella explains, phosphorus is essential for life. The element - which is so reactive that it spontaneously combusts in its pure form - is used by plant and animal cells to store energy.
It also forms the backbone of DNA, and it is an essential ingredient of our bones and teeth.
Farming without it is not a realistic option.
Underpriced and undervalued
While this may sound rather alarming, there are two important caveats.
First, the supply of phosphates is forecast to last for many decades, if not centuries, to come.
Phosphorus - key facts (source: US Geological Survey)
- Main uses: in agriculture and steel production
- Found mainly as phosphates
- Yearly global production: 198 million tonnes (2011)
- Estimated global reserves: 67 billion tonnes
- Non-metallic element
- Solid at room temperature
- Symbol: P
- Atomic number: 15
- Atomic weight: 30.97
So humanity is at no immediate risk of running out of the means to feed itself, even at the current rate at which it is gobbling up phosphates.
Second, one of the biggest problems with phosphates over the past 60 years is arguably that they have been far too cheap and abundant.
There has been no incentive to use them sparingly.
Only a small fraction is actually absorbed by plants, and much is washed off by rain.
And this glut of fertilisers being washed into river systems, both phosphates and also nitrates, has created a nasty environmental problem - eutrophication.
This is where the abundant nutrients feed algae in rivers and ponds, creating blooms that turn the water green.
The algae then die, providing a feast for microbes, which in turn multiply and suck the oxygen out of the water, killing off all the fish and other animal and plant life.
It is a common problem in the lower reaches of major rivers such as the Thames and Rhine in Europe, and the Yangtze in China.
Similar algal blooms occur in our oceans, where large areas - notably the Baltic Sea and the Gulf of Mexico - have become "dead zones".
Image caption: Fertiliser run-off encourages the growth of algal blooms at sea
Purely from an environmental perspective, the price of phosphates has clearly been too low.
'A hopelessly bad system'
Yet this now appears to be changing. The price of phosphate ores has risen fivefold over the past decade as demand, particularly from the developing world, has grown steadily.
Meanwhile, the cost of fertiliser production has also risen as the richest, cheapest phosphate seams have already been mined.
Media caption: A beach in China is engulfed by algae
"Commodities are priced on the cost of extracting the next tonne that you need," says Jeremy Grantham, of US fund managers Grantham Mayo van Otterloo. "It is a hopelessly bad system.
"As long as we can mine a vital resource cheaply, we will price it cheaply, and run through the reserves until they become very expensive. And then we'll start to conserve."
There are various options:
Modern agriculture could start using phosphates far more sparingly and capture what is currently washed away
We could breed or genetically engineer crops that are more efficient in how they take up phosphorus, and so need less fertiliser
We could regulate - for example, the European Union recently banned another common usage of phosphates in cleaning products where it is used as a water softener
Where there's muck there's brass
And then there is the sewage option.
Why not just capture the phosphorus from our own waste and recycle it? Sweden and Germany have been leading the way.
Image caption: UK water company Thames Water is now extracting phosphate from sewage waste
There is also a cottage industry among the eco-friendly in Western countries of "compost toilets".
Now the UK's Thames Water is getting in on the act, launching a new "reactor" that turns sewage sludge into nice clean fertiliser pellets.
How much of future supply could ultimately be provided by recycling is open to debate - Thames Water says 20% using the current technology.
But perhaps the more important point lies in the fact that Thames Water and Canadian partners Ostara, which developed the technology, expect to make a profit.
This should come from selling the pellets as well as from saving the cost of cleaning and replacing pipes that have become blocked by a phosphorus-based sediment called struvite.
Any benefits, as far as the environment or the long-term sustainable usage of a limited resource are concerned, are but a happy by-product.
The important point is that it is the rising price of phosphates that has made it worthwhile to start recycling the stuff.
A beautiful friendship?
So should we welcome the higher price? Well, it depends who you are.
In general, the lower your income, the more of it you spend on food and therefore the more sensitive you are to the higher food bills that might come with more expensive fertilisers.
In other words, rising phosphate prices hurt the poor most, which is hardly a recipe for social cohesion.
And that goes for whole countries too.
As Jeremy Grantham points out, many North African countries depend on food imports, and rising food prices contributed to the discontent behind the 2011 Arab Spring.
One of those countries is Morocco, which by a freak of geography controls about three-quarters of the world's remaining good-quality phosphate reserves.
"Morocco has the most impressive quasi-monopoly in the history of man," says Mr Grantham. "It makes oil look unimportant in comparison."
That could make Morocco a very rich nation in the future, one that the rest of the world will be keen to court.
And it gives the country a great responsibility in pricing its product in a way that eventually weans the world off it in a manageable way - much like Saudi Arabia and oil.
Ironically, the higher prices that monopolists like to set may actually be what the planet needs.
Strategic questions
But Morocco's unique position could also make it a centre of intrigue.
For example, much of its phosphates are actually located in the territory of Western Sahara.
It is occupied by the Moroccan military, which currently has an uneasy ceasefire in place with the local Algerian-backed Saharawi resistance.
Image caption: Morocco controls 75% of known global quality phosphate reserves
This poses moral questions for the multinational companies that mine the stuff there, as well as some obvious strategic issues for the rest of the world about securing future food supplies.
Mr Grantham points out that half of nearby Mali - admittedly the sparsely populated Saharan half - was recently briefly overrun by militants affiliated with al-Qaeda, and he warns that Morocco itself may one day become the scene of rising social tensions, terrorism or revolt.
"I would almost guarantee to you that the major militaries of this world are well aware of this problem.
"They would not allow Morocco to become a hopelessly failed state," he says reassuringly.
"You don't want to look forward to the great fertiliser wars of 2042."
You can listen to Business Daily on BBC World Service at 08:32 GMT and 15:06 GMT. |
async def _start_all_interfaces_with_random_port(self):
loop = asyncio.get_running_loop()
infos = await loop.getaddrinfo(
None,
0,
family=socket.AF_UNSPEC,
type=socket.SOCK_STREAM,
flags=socket.AI_PASSIVE,
proto=0,
)
    # Sort by address-family name so the bind order is deterministic
    # (AF_INET before AF_INET6).
    infos = sorted(infos, key=lambda x: x[0].name)
    servers = []
    port = None
try:
for res in infos:
af, socktype, proto, canonname, sa = res
try:
sock = socket.socket(af, socktype, proto)
except OSError:
continue
            if af == getattr(socket, "AF_INET6", None) and hasattr(
                socket, "IPPROTO_IPV6"
            ):
                # Stop the IPv6 socket from also claiming the IPv4
                # address space, so both families can share one port.
                sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, True)
            if port is not None:
                # Reuse the port assigned by the first successful bind.
                sa = (sa[0], port, *sa[2:])
try:
sock.bind(sa)
except OSError as err:
raise OSError(
err.errno,
"error while attempting "
"to bind on address %r: %s" % (sa, err.strerror.lower()),
) from None
            if port is None:
                # The first bind used port 0; record the OS-assigned port.
                port = sock.getsockname()[1]
server = await loop.create_server(
lambda: DaskCommProtocol(self._on_connection),
sock=sock,
**self._extra_kwargs,
)
servers.append(server)
            sock = None  # ownership transferred to the server
except BaseException:
for server in servers:
server.close()
if sock is not None:
sock.close()
raise
return servers |
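The port-sharing pattern above — bind the first socket to port 0, read back the OS-assigned port, then reuse that number for every remaining address family — can be sketched in isolation. This is a simplified illustration with a hypothetical helper name, not the actual dask API:

```python
import socket

def bind_with_shared_port(addresses):
    """Bind one listening socket per address, all on the same port.

    `addresses` is a list of (family, host) pairs. The first bind uses
    port 0 so the OS picks a free ephemeral port; every later bind
    reuses that port number.
    """
    socks, port = [], None
    try:
        for family, host in addresses:
            s = socket.socket(family, socket.SOCK_STREAM)
            s.bind((host, port or 0))
            if port is None:
                port = s.getsockname()[1]  # OS-assigned ephemeral port
            socks.append(s)
    except BaseException:
        # Mirror the original's cleanup: a half-bound set of
        # listeners must not leak on failure.
        for s in socks:
            s.close()
        raise
    return socks, port
```

As in the method above, cleanup on `BaseException` matters: if binding the second family fails, the already-bound sockets are closed before re-raising.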
The present invention relates to the field of thermal imaging cameras and in particular to improvements of such cameras for detecting medium wave infrared and long wave infrared regions of the electromagnetic spectrum.
A principal application for thermal imaging cameras is the detection, recognition and subsequent identification (DRI) of objects. Present cameras are required to render to a display screen or to an “image processor” the “shape and texture” attributes of such objects and their contexts to such a quality that a human observer or an electronic substitute may perform these tasks to a high probability of success. The resolution of such devices is limited to the ability of humans, or their electronic substitute, to recognise objects from the rendering on a display screen.
When combined with the achievable performances of cameras and human observers and processors, these requirements frequently impose limits on the camera's maximum field of view to such an extent that the ranges at which the tasks of DRI can be achieved are incompatible with many applications of the camera. Within the limits of technology and those imposed by natural laws, an increase in the “task achievement range” requires a reduction in the field of view of the camera. With this narrowing field-of-view, the probability of an object being present in the field is reduced. Furthermore, any decrease in the field of view of the camera is likely to result in an increase in the area of the optical aperture with consequent impact on the cost and vulnerability of the optics and the aerodynamic performance of any aircraft on which the camera is deployed.
If the intended application requires a minimum field-of-view, then the ability of the camera to recognise objects is adversely affected and the camera has only sufficient resolving power to detect objects. Such a camera is then limited in its ability to discriminate between objects because the context will inevitably contain multiple features such as animals, heated rocks or vegetation that have the same temperature difference as that created by the genuine object. In such a situation, the application of the camera is limited by erroneous recognition.
The prior art teaches of thermal cameras characterised by a wide field-of-view and a low erroneous recognition rate. Such devices are employed for the measurement of the spectral emissivity of natural and cultural objects in the so-called Medium Wave Infrared (MWIR), between 3.2 µm and 5.5 µm, and Long Wave Infrared (LWIR), between 7.8 µm and 11.4 µm, atmospheric windows. It is known to those skilled in the art that the use of such a camera capable of measuring these attributes enhances the observer's ability to discriminate between classes of object such as trees, rocks, grasses and vehicles.
A thermal imaging camera with such a capability is known as a hyperspectral camera. Rather than observing the scene using a single waveband and presenting the image as a plane, the scene is decomposed into a number of planes representing spectral sub-bands or spectral bins. The assembly of these planes is then known as a “hyperspectral cube”.
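A hyperspectral cube is simply a three-dimensional array: two spatial axes plus one spectral axis. The following pure-Python sketch (synthetic data, not a real camera interface) shows the two natural slices through such a cube:

```python
rows, cols, bins = 4, 6, 8
# One image plane per spectral sub-band, stacked into a cube
# indexed as cube[row][col][bin].
cube = [[[0.0 for _ in range(bins)] for _ in range(cols)]
        for _ in range(rows)]

def pixel_spectrum(cube, r, c):
    """The spectrum of one pixel: its value in every spectral bin."""
    return cube[r][c]

def band_plane(cube, b):
    """One spectral bin viewed as an ordinary 2-D image plane."""
    return [[cube[r][c][b] for c in range(len(cube[0]))]
            for r in range(len(cube))]
```

Slicing along the spectral axis recovers a conventional thermal image for one sub-band; slicing through one pixel recovers the spectrum used to discriminate between object classes.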
It is well known to those skilled in the art and science that such hyperspectral cameras present difficulties in achieving an adequate signal-to-noise ratio (SNR) against objects of interest whose temperature difference relative to the background is typically only a few degrees Celsius. In a perfect thermal imaging camera, the noise in the instrument is dominated by that from the detector. To achieve such a performance, the noise internal to the detector itself must be made extremely low. This can only be achieved in detectors sensitive to LWIR radiation by cryogenically cooling the detector. Modern detectors are integrated with a closed-cycle cooling engine which can reduce the temperature of the detector array to values lower than 80 kelvins. When fitted with such a detector, the camera is then capable of achieving "Background Limited" thermal sensitivity. This performance level indicates that the noise in the camera is created by the random arrival of photons from all objects in the field-of-view of the detector. The photon rate, and the fluctuation thereof, are determined by the temperature of the objects. As that temperature falls, so does the noise level in the detector.
This effect is exploited in modern, high performance, infrared detectors by engineering the detector package and cooling engine to cool not only the detector array but also a “cold-shield” enclosing the detector array.
The cold-shield is pierced to allow the detector to receive the scene image-forming rays from the imaging system such as a sequence of lenses or mirrors.
Inconsiderate design of this optical system leads to an instrument whose detector is exposed not only to radiation from the scene but also to that from the interior of the camera. Contributions to this additional radiation come either from the optical elements or from the enclosure, either directly or by reflections thereof from the optical components.
If the camera design is such that spectral filtering is provided prior to this process of intrusion by stray radiation, the SNR of the instrument will be adversely affected and will not achieve that possible if both the signal and noise had been spectrally filtered.
Prior designs of hyperspectral thermal cameras have solved this problem in a number of ways. A choice between the various methods is mainly influenced by the requirements of spectral resolving power and the operating waveband. The ratio of the operating waveband to the spectral resolving power is described by the term “number of channels” or “number of spectral bins”.
For a camera with only a modest number of spectral bins, a preferred method is to introduce a carousel of dielectric interference filters at the entrance window of the detector. Rotation of the carousel allows measurements of the radiation transmitted through the filter. The advantage of this method is that out-of band radiation is reflected from the filter out to the optical system and either absorbed in the camera body or reflected out of the camera. Thus, the noise from the camera optics is also filtered. Another advantage of this method is that a full spatial frame is gathered during the dwell time of the filter. The disadvantage of this method is that the behaviour of interference filters is very dependent upon the angle of arrival of rays.
Thus when used with focusing optics, the spectral bandpass of the filter is widened and the number of spectral bins is limited to less than about 8 in the LWIR band.
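The angle sensitivity described above follows the standard thin-film tilt approximation, λ(θ) ≈ λ₀·sqrt(1 − (sin θ / n_eff)²): off-normal rays in a converging beam shift the passband toward shorter wavelengths, smearing it out. A rough sketch (the effective index n_eff is a property of the particular filter stack; the value here is illustrative):

```python
import math

def tilted_center_wavelength(lam0_um, theta_deg, n_eff):
    """Blue-shift of an interference filter's centre wavelength with tilt.

    Standard approximation:
    lambda(theta) = lambda0 * sqrt(1 - (sin(theta) / n_eff)**2)
    """
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lam0_um * math.sqrt(1.0 - s * s)

# A filter centred at 10 um (n_eff ~ 2.0), illuminated 15 degrees
# off-normal, shifts to slightly below 10 um.
shifted = tilted_center_wavelength(10.0, 15.0, 2.0)
```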
Higher spectral resolving power can be achieved by using a spectrally dispersive component such as a prism or a diffraction grating. The principal disadvantage of a prism instrument is that the dispersive power of prisms is relatively low so that long focal lengths and thus bulky imaging optics are required to form a usefully sized spectrum. In addition, light from the interior of the camera is uncontrolled and will increase the noise.
Thus, it is normal for such instruments' optical components to be cooled to a very low temperature so that this intrusive radiation is reduced. In the very highest quality instruments it is normal to cool the entire instrument, which may weigh 100 kg, with a cryogenic liquid such as helium. This cooling requirement eliminates such instruments from large-scale deployment that requires manoeuvrability. The reflective diffraction grating has a very high dispersive power and is widely used in laboratory instruments, but these are also bulky. The oblique configuration of instruments using reflective diffraction gratings also limits their use to optics with relatively poor light-gathering capacity and a field-of-view at which high image quality is possible.
The highest spectral resolving power is achieved with an instrument using a variable optical path interferometer. This capability is gained at the penalty of poor light-gathering capacity and extreme sensitivity to relative mechanical motions of the camera components. |
Mr. Tong Guohua has been Chairman of the Board of Fiberhome Telecommunication Tech. Co., Ltd. since December 24, 2005, having previously served as Vice Chairman of the Board of the Company. He is also Chairman of the Board of ten other companies, including Accelink Technologies Co., Ltd. and Wuhan Topbond Technologies Co., Ltd., as well as Chairman of the Board and General Manager of another Wuhan-based company. |
Should Norepinephrine, Rather than Phenylephrine, Be Considered the Primary Vasopressor in Anesthetic Practice?

May 2016, Volume 122, Number 5, p. 1707. www.anesthesia-analgesia.org. Copyright © 2016 International Anesthesia Research Society. DOI: 10.1213/ANE.0000000000001239

The induction of general anesthesia is associated with sympatholysis1 and a decrease in circulating norepinephrine (NE) and epinephrine (E) concentrations.2,3 Yet, the associated hypotension is commonly treated with phenylephrine (PE), a synthetic vasoconstrictor.4,5 Theoretically, NE might better combat this general anesthesia-induced hypotension by restoring decreased circulating concentrations of this catecholamine and maintaining cardiac output (CO). However, NE is rarely used in these circumstances. In this article, I will propose that patients might be better served by using NE rather than PE as the primary vasopressor to combat hypotension during general anesthesia. |
#[doc = "Reader of register DCTL"]
pub type R = crate::R<u32, super::DCTL>;
#[doc = "Writer for register DCTL"]
pub type W = crate::W<u32, super::DCTL>;
#[doc = "Register DCTL `reset()`'s with value 0x02"]
impl crate::ResetValue for super::DCTL {
type Type = u32;
#[inline(always)]
fn reset_value() -> Self::Type {
0x02
}
}
#[doc = "Reader of field `RmtWkUpSig`"]
pub type RMTWKUPSIG_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RmtWkUpSig`"]
pub struct RMTWKUPSIG_W<'a> {
w: &'a mut W,
}
impl<'a> RMTWKUPSIG_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !0x01) | ((value as u32) & 0x01);
self.w
}
}
#[doc = "Soft Disconnect\n\nValue on reset: 1"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum SFTDISCON_A {
#[doc = "0: Normal operation. When this bit is cleared after a soft disconnect, the core drives a device connect event to the USB host. When the device is reconnected, the USB host restarts device enumeration."]
VALUE1 = 0,
#[doc = "1: The core drives a device disconnect event to the USB host."]
VALUE2 = 1,
}
impl From<SFTDISCON_A> for bool {
#[inline(always)]
fn from(variant: SFTDISCON_A) -> Self {
variant as u8 != 0
}
}
#[doc = "Reader of field `SftDiscon`"]
pub type SFTDISCON_R = crate::R<bool, SFTDISCON_A>;
impl SFTDISCON_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> SFTDISCON_A {
match self.bits {
false => SFTDISCON_A::VALUE1,
true => SFTDISCON_A::VALUE2,
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == SFTDISCON_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == SFTDISCON_A::VALUE2
}
}
#[doc = "Write proxy for field `SftDiscon`"]
pub struct SFTDISCON_W<'a> {
w: &'a mut W,
}
impl<'a> SFTDISCON_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: SFTDISCON_A) -> &'a mut W {
{
self.bit(variant.into())
}
}
#[doc = "Normal operation. When this bit is cleared after a soft disconnect, the core drives a device connect event to the USB host. When the device is reconnected, the USB host restarts device enumeration."]
#[inline(always)]
pub fn value1(self) -> &'a mut W {
self.variant(SFTDISCON_A::VALUE1)
}
#[doc = "The core drives a device disconnect event to the USB host."]
#[inline(always)]
pub fn value2(self) -> &'a mut W {
self.variant(SFTDISCON_A::VALUE2)
}
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 1)) | (((value as u32) & 0x01) << 1);
self.w
}
}
#[doc = "Global Non-periodic IN NAK Status\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum GNPINNAKSTS_A {
#[doc = "0: A handshake is sent out based on the data availability in the transmit FIFO."]
VALUE1 = 0,
#[doc = "1: A NAK handshake is sent out on all non-periodic IN endpoints, irrespective of the data availability in the transmit FIFO."]
VALUE2 = 1,
}
impl From<GNPINNAKSTS_A> for bool {
#[inline(always)]
fn from(variant: GNPINNAKSTS_A) -> Self {
variant as u8 != 0
}
}
#[doc = "Reader of field `GNPINNakSts`"]
pub type GNPINNAKSTS_R = crate::R<bool, GNPINNAKSTS_A>;
impl GNPINNAKSTS_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> GNPINNAKSTS_A {
match self.bits {
false => GNPINNAKSTS_A::VALUE1,
true => GNPINNAKSTS_A::VALUE2,
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == GNPINNAKSTS_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == GNPINNAKSTS_A::VALUE2
}
}
#[doc = "Global OUT NAK Status\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum GOUTNAKSTS_A {
#[doc = "0: A handshake is sent based on the FIFO Status and the NAK and STALL bit settings."]
VALUE1 = 0,
#[doc = "1: No data is written to the RxFIFO, irrespective of space availability. Sends a NAK handshake on all packets, except on SETUP transactions. All isochronous OUT packets are dropped."]
VALUE2 = 1,
}
impl From<GOUTNAKSTS_A> for bool {
#[inline(always)]
fn from(variant: GOUTNAKSTS_A) -> Self {
variant as u8 != 0
}
}
#[doc = "Reader of field `GOUTNakSts`"]
pub type GOUTNAKSTS_R = crate::R<bool, GOUTNAKSTS_A>;
impl GOUTNAKSTS_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> GOUTNAKSTS_A {
match self.bits {
false => GOUTNAKSTS_A::VALUE1,
true => GOUTNAKSTS_A::VALUE2,
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == GOUTNAKSTS_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == GOUTNAKSTS_A::VALUE2
}
}
#[doc = "Write proxy for field `SGNPInNak`"]
pub struct SGNPINNAK_W<'a> {
w: &'a mut W,
}
impl<'a> SGNPINNAK_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 7)) | (((value as u32) & 0x01) << 7);
self.w
}
}
#[doc = "Write proxy for field `CGNPInNak`"]
pub struct CGNPINNAK_W<'a> {
w: &'a mut W,
}
impl<'a> CGNPINNAK_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 8)) | (((value as u32) & 0x01) << 8);
self.w
}
}
#[doc = "Write proxy for field `SGOUTNak`"]
pub struct SGOUTNAK_W<'a> {
w: &'a mut W,
}
impl<'a> SGOUTNAK_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 9)) | (((value as u32) & 0x01) << 9);
self.w
}
}
#[doc = "Write proxy for field `CGOUTNak`"]
pub struct CGOUTNAK_W<'a> {
w: &'a mut W,
}
impl<'a> CGOUTNAK_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 10)) | (((value as u32) & 0x01) << 10);
self.w
}
}
#[doc = "Global Multi Count\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u8)]
pub enum GMC_A {
#[doc = "0: Invalid."]
VALUE1 = 0,
#[doc = "1: 1 packet."]
VALUE2 = 1,
#[doc = "2: 2 packets."]
VALUE3 = 2,
#[doc = "3: 3 packets."]
VALUE4 = 3,
}
impl From<GMC_A> for u8 {
#[inline(always)]
fn from(variant: GMC_A) -> Self {
variant as _
}
}
#[doc = "Reader of field `GMC`"]
pub type GMC_R = crate::R<u8, GMC_A>;
impl GMC_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> GMC_A {
match self.bits {
0 => GMC_A::VALUE1,
1 => GMC_A::VALUE2,
2 => GMC_A::VALUE3,
3 => GMC_A::VALUE4,
_ => unreachable!(),
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == GMC_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == GMC_A::VALUE2
}
#[doc = "Checks if the value of the field is `VALUE3`"]
#[inline(always)]
pub fn is_value3(&self) -> bool {
*self == GMC_A::VALUE3
}
#[doc = "Checks if the value of the field is `VALUE4`"]
#[inline(always)]
pub fn is_value4(&self) -> bool {
*self == GMC_A::VALUE4
}
}
#[doc = "Write proxy for field `GMC`"]
pub struct GMC_W<'a> {
w: &'a mut W,
}
impl<'a> GMC_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: GMC_A) -> &'a mut W {
{
self.bits(variant.into())
}
}
#[doc = "Invalid."]
#[inline(always)]
pub fn value1(self) -> &'a mut W {
self.variant(GMC_A::VALUE1)
}
#[doc = "1 packet."]
#[inline(always)]
pub fn value2(self) -> &'a mut W {
self.variant(GMC_A::VALUE2)
}
#[doc = "2 packets."]
#[inline(always)]
pub fn value3(self) -> &'a mut W {
self.variant(GMC_A::VALUE3)
}
#[doc = "3 packets."]
#[inline(always)]
pub fn value4(self) -> &'a mut W {
self.variant(GMC_A::VALUE4)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bits(self, value: u8) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x03 << 13)) | (((value as u32) & 0x03) << 13);
self.w
}
}
#[doc = "Ignore frame number for isochronous endpoints in case of Scatter/Gather DMA\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum IGNRFRMNUM_A {
#[doc = "0: Scatter/Gather enabled: The core transmits the packets only in the frame number in which they are intended to be transmitted. Scatter/Gather disabled: Periodic transfer interrupt feature is disabled; the application must program transfers for periodic endpoints every frame"]
VALUE1 = 0,
#[doc = "1: Scatter/Gather enabled: The core ignores the frame number, sending packets immediately as the packets are ready. Scatter/Gather disabled: Periodic transfer interrupt feature is enabled; the application can program transfers for multiple frames for periodic endpoints."]
VALUE2 = 1,
}
impl From<IGNRFRMNUM_A> for bool {
#[inline(always)]
fn from(variant: IGNRFRMNUM_A) -> Self {
variant as u8 != 0
}
}
#[doc = "Reader of field `IgnrFrmNum`"]
pub type IGNRFRMNUM_R = crate::R<bool, IGNRFRMNUM_A>;
impl IGNRFRMNUM_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> IGNRFRMNUM_A {
match self.bits {
false => IGNRFRMNUM_A::VALUE1,
true => IGNRFRMNUM_A::VALUE2,
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == IGNRFRMNUM_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == IGNRFRMNUM_A::VALUE2
}
}
#[doc = "Write proxy for field `IgnrFrmNum`"]
pub struct IGNRFRMNUM_W<'a> {
w: &'a mut W,
}
impl<'a> IGNRFRMNUM_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: IGNRFRMNUM_A) -> &'a mut W {
{
self.bit(variant.into())
}
}
#[doc = "Scatter/Gather enabled: The core transmits the packets only in the frame number in which they are intended to be transmitted. Scatter/Gather disabled: Periodic transfer interrupt feature is disabled; the application must program transfers for periodic endpoints every frame"]
#[inline(always)]
pub fn value1(self) -> &'a mut W {
self.variant(IGNRFRMNUM_A::VALUE1)
}
#[doc = "Scatter/Gather enabled: The core ignores the frame number, sending packets immediately as the packets are ready. Scatter/Gather disabled: Periodic transfer interrupt feature is enabled; the application can program transfers for multiple frames for periodic endpoints."]
#[inline(always)]
pub fn value2(self) -> &'a mut W {
self.variant(IGNRFRMNUM_A::VALUE2)
}
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 15)) | (((value as u32) & 0x01) << 15);
self.w
}
}
#[doc = "Reader of field `NakOnBble`"]
pub type NAKONBBLE_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `NakOnBble`"]
pub struct NAKONBBLE_W<'a> {
w: &'a mut W,
}
impl<'a> NAKONBBLE_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 16)) | (((value as u32) & 0x01) << 16);
self.w
}
}
#[doc = "Enable continue on BNA\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum ENCONTONBNA_A {
#[doc = "0: After receiving BNA interrupt, the core disables the endpoint. When the endpoint is re-enabled by the application, the core starts processing from the DOEPDMA descriptor."]
VALUE1 = 0,
#[doc = "1: After receiving BNA interrupt, the core disables the endpoint. When the endpoint is re-enabled by the application, the core starts processing from the descriptor that received the BNA interrupt."]
VALUE2 = 1,
}
impl From<ENCONTONBNA_A> for bool {
#[inline(always)]
fn from(variant: ENCONTONBNA_A) -> Self {
variant as u8 != 0
}
}
#[doc = "Reader of field `EnContOnBNA`"]
pub type ENCONTONBNA_R = crate::R<bool, ENCONTONBNA_A>;
impl ENCONTONBNA_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> ENCONTONBNA_A {
match self.bits {
false => ENCONTONBNA_A::VALUE1,
true => ENCONTONBNA_A::VALUE2,
}
}
#[doc = "Checks if the value of the field is `VALUE1`"]
#[inline(always)]
pub fn is_value1(&self) -> bool {
*self == ENCONTONBNA_A::VALUE1
}
#[doc = "Checks if the value of the field is `VALUE2`"]
#[inline(always)]
pub fn is_value2(&self) -> bool {
*self == ENCONTONBNA_A::VALUE2
}
}
#[doc = "Write proxy for field `EnContOnBNA`"]
pub struct ENCONTONBNA_W<'a> {
w: &'a mut W,
}
impl<'a> ENCONTONBNA_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: ENCONTONBNA_A) -> &'a mut W {
{
self.bit(variant.into())
}
}
#[doc = "After receiving BNA interrupt, the core disables the endpoint. When the endpoint is re-enabled by the application, the core starts processing from the DOEPDMA descriptor."]
#[inline(always)]
pub fn value1(self) -> &'a mut W {
self.variant(ENCONTONBNA_A::VALUE1)
}
#[doc = "After receiving BNA interrupt, the core disables the endpoint. When the endpoint is re-enabled by the application, the core starts processing from the descriptor that received the BNA interrupt."]
#[inline(always)]
pub fn value2(self) -> &'a mut W {
self.variant(ENCONTONBNA_A::VALUE2)
}
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 17)) | (((value as u32) & 0x01) << 17);
self.w
}
}
impl R {
#[doc = "Bit 0 - Remote Wakeup Signaling"]
#[inline(always)]
pub fn rmt_wk_up_sig(&self) -> RMTWKUPSIG_R {
RMTWKUPSIG_R::new((self.bits & 0x01) != 0)
}
#[doc = "Bit 1 - Soft Disconnect"]
#[inline(always)]
pub fn sft_discon(&self) -> SFTDISCON_R {
SFTDISCON_R::new(((self.bits >> 1) & 0x01) != 0)
}
#[doc = "Bit 2 - Global Non-periodic IN NAK Status"]
#[inline(always)]
pub fn gnpinnak_sts(&self) -> GNPINNAKSTS_R {
GNPINNAKSTS_R::new(((self.bits >> 2) & 0x01) != 0)
}
#[doc = "Bit 3 - Global OUT NAK Status"]
#[inline(always)]
pub fn goutnak_sts(&self) -> GOUTNAKSTS_R {
GOUTNAKSTS_R::new(((self.bits >> 3) & 0x01) != 0)
}
#[doc = "Bits 13:14 - Global Multi Count"]
#[inline(always)]
pub fn gmc(&self) -> GMC_R {
GMC_R::new(((self.bits >> 13) & 0x03) as u8)
}
#[doc = "Bit 15 - Ignore frame number for isochronous endpoints in case of Scatter/Gather DMA"]
#[inline(always)]
pub fn ignr_frm_num(&self) -> IGNRFRMNUM_R {
IGNRFRMNUM_R::new(((self.bits >> 15) & 0x01) != 0)
}
#[doc = "Bit 16 - Set NAK automatically on babble"]
#[inline(always)]
pub fn nak_on_bble(&self) -> NAKONBBLE_R {
NAKONBBLE_R::new(((self.bits >> 16) & 0x01) != 0)
}
#[doc = "Bit 17 - Enable continue on BNA"]
#[inline(always)]
pub fn en_cont_on_bna(&self) -> ENCONTONBNA_R {
ENCONTONBNA_R::new(((self.bits >> 17) & 0x01) != 0)
}
}
impl W {
#[doc = "Bit 0 - Remote Wakeup Signaling"]
#[inline(always)]
pub fn rmt_wk_up_sig(&mut self) -> RMTWKUPSIG_W {
RMTWKUPSIG_W { w: self }
}
#[doc = "Bit 1 - Soft Disconnect"]
#[inline(always)]
pub fn sft_discon(&mut self) -> SFTDISCON_W {
SFTDISCON_W { w: self }
}
#[doc = "Bit 7 - Set Global Non-periodic IN NAK"]
#[inline(always)]
pub fn sgnpin_nak(&mut self) -> SGNPINNAK_W {
SGNPINNAK_W { w: self }
}
#[doc = "Bit 8 - Clear Global Non-periodic IN NAK"]
#[inline(always)]
pub fn cgnpin_nak(&mut self) -> CGNPINNAK_W {
CGNPINNAK_W { w: self }
}
#[doc = "Bit 9 - Set Global OUT NAK"]
#[inline(always)]
pub fn sgoutnak(&mut self) -> SGOUTNAK_W {
SGOUTNAK_W { w: self }
}
#[doc = "Bit 10 - Clear Global OUT NAK"]
#[inline(always)]
pub fn cgoutnak(&mut self) -> CGOUTNAK_W {
CGOUTNAK_W { w: self }
}
#[doc = "Bits 13:14 - Global Multi Count"]
#[inline(always)]
pub fn gmc(&mut self) -> GMC_W {
GMC_W { w: self }
}
#[doc = "Bit 15 - Ignore frame number for isochronous endpoints in case of Scatter/Gather DMA"]
#[inline(always)]
pub fn ignr_frm_num(&mut self) -> IGNRFRMNUM_W {
IGNRFRMNUM_W { w: self }
}
#[doc = "Bit 16 - Set NAK automatically on babble"]
#[inline(always)]
pub fn nak_on_bble(&mut self) -> NAKONBBLE_W {
NAKONBBLE_W { w: self }
}
#[doc = "Bit 17 - Enable continue on BNA"]
#[inline(always)]
pub fn en_cont_on_bna(&mut self) -> ENCONTONBNA_W {
ENCONTONBNA_W { w: self }
}
}
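Every write proxy above updates its field with the same read-modify-write mask arithmetic: clear the field's bits, then OR in the new value masked to the field width. A standalone sketch of that arithmetic (written in Java purely for illustration; the helper name `setField` is not part of the generated API):

```java
public class BitField {
    // Same arithmetic as `(bits & !(0x01 << 15)) | (((value as u32) & 0x01) << 15)` above:
    // clear `width` bits at `offset`, then OR in `value` masked to the field width.
    static int setField(int bits, int value, int width, int offset) {
        int mask = (1 << width) - 1;
        return (bits & ~(mask << offset)) | ((value & mask) << offset);
    }

    public static void main(String[] args) {
        int reg = 0;
        reg = setField(reg, 1, 1, 15); // IGNRFRMNUM, bit 15
        reg = setField(reg, 3, 2, 13); // GMC, bits 13:14
        System.out.println(Integer.toHexString(reg)); // prints "e000"
    }
}
```

Because the old field bits are cleared first, writing a narrower value than the field width can never corrupt neighboring fields in the register.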
/**
* Array insert, delete, update, and query operations
*
* @author wangzhichao
* @date 7/27/20
*/
public class MyArray {
private int[] array;
/**
* The actual number of elements in the array
*/
private int size;
/**
* Constructor
*
* @param capacity the capacity of the array
*/
public MyArray(int capacity) {
array = new int[capacity];
size = 0;
}
/**
* Insert an element into the array
* <p>
* Supports insertion at the tail,
* at the head,
* or in the middle
*
* @param element the element to insert
* @param index the position at which to insert
* @throws Exception
*/
public void insert(int element, int index) throws Exception {
if (index < 0 || index > size) {
throw new IndexOutOfBoundsException("Insert position is outside the valid range [0, " + size + "], index: " + index);
}
if (size == array.length) {
// Grow the backing array
ensureCapacity();
}
if (index != size) {
System.arraycopy(array, index, array, index + 1, size - index);
}
array[index] = element;
size++;
}
/**
* Delete an element
* <p>
* Supports deletion at the tail,
* at the head,
* or in the middle
*
* @param index the position to delete
* @return the value of the element at the deleted position
* @throws Exception
*/
public int delete(int index) throws Exception {
if (index < 0 || index >= size) {
throw new IndexOutOfBoundsException("Delete position is outside the valid index range [0, " + (size - 1) + "], index: " + index);
}
int result = array[index];
System.arraycopy(array, index + 1, array, index, size - 1 - index);
size--;
return result;
}
/**
* Update an element
*
* @param element the new element value
* @param index the position to update
* @throws Exception
*/
public void set(int element, int index) throws Exception {
if (index < 0 || index >= size) {
throw new IndexOutOfBoundsException("Update position is outside the valid index range [0, " + (size - 1) + "], index: " + index);
}
array[index] = element;
}
/**
* Get an element
*
* @param index the position to read
* @return the value of the element at that position
* @throws Exception
*/
public int get(int index) throws Exception {
if (index < 0 || index >= size) {
throw new IndexOutOfBoundsException("Get position is outside the valid index range [0, " + (size - 1) + "], index: " + index);
}
return array[index];
}
/**
* Check whether the array is empty
*
* @return true if the array contains no elements
*/
public boolean isEmpty() {
return size == 0;
}
/**
* Get the actual number of elements in the array
*
* @return the element count
*/
public int getSize() {
return size;
}
private void ensureCapacity() {
int[] newArray = new int[array.length * 2];
System.arraycopy(array, 0, newArray, 0, array.length);
array = newArray;
}
public void print() {
StringBuilder sb = new StringBuilder("[");
for (int i = 0; i < size; i++) {
if (i > 0) {
sb.append(", ");
}
sb.append(array[i]);
}
sb.append("]");
System.out.println(sb.toString());
}
public static void main(String[] args) throws Exception {
MyArray array = new MyArray(7);
// Insert elements
System.out.println("Insert elements:");
array.insert(6, 0);
array.insert(5, 1);
array.insert(4, 2);
array.insert(1, 3);
array.insert(2, 4);
array.print();
array.insert(3, 3);
array.print();
array.insert(7, 3);
array.print();
array.insert(8, 7);
array.print();
array.insert(10, 0);
array.print();
// try {
// array.insert(11, 10);
// } catch (Exception e) {
// e.printStackTrace();
// }
// Update elements
System.out.println("Update elements:");
array.set(-1, 1);
array.print();
// Get elements
System.out.println("Get elements:");
System.out.println("Element at index 1: " + array.get(1));
// Delete elements
System.out.println("Delete elements:");
System.out.println("Deleted element at index 0: " + array.delete(0));
array.print();
System.out.println("Deleted element at index 2: " + array.delete(2));
array.print();
System.out.println("Deleted element at index 6: " + array.delete(6));
array.print();
System.out.println("Deleted element at index 4: " + array.delete(4));
array.print();
System.out.println("Deleted element at index 1: " + array.delete(1));
array.print();
System.out.println("Deleted element at index 3: " + array.delete(3));
array.print();
System.out.println("Deleted element at index 1: " + array.delete(1));
array.print();
System.out.println("Deleted element at index 1: " + array.delete(1));
array.print();
System.out.println("Deleted element at index 0: " + array.delete(0));
array.print();
}
}
Leave it to Solange Knowles to make wearing a puffer jacket of a dress look chic on the Met Gala red carpet. The A Seat at the Table singer sported a black, oversized bubble jacket with a train and matching trousers as well as a crisp white button down and matching black and white shoes, all by Thom Browne.
In an Instagram post, Knowles sent love to both Browne and the designer who inspired the night’s theme, avant-garde designer Rei Kawakubo of Comme des Garçons, but she also gave a shout-out to an unexpected source of inspiration for her outfit: Missy Elliott.
With that context in mind, Solange’s elegant puffer jacket of a gown bears a striking resemblance to Missy’s memorably inflated garbage-bag ensemble from “The Rain (Supa Dupa Fly)” video. Talk about real recognizing real.
See Solange’s tribute to Missy and then watch Missy’s OG fashion statement below.
Interaction of monoclonal antibodies with MHC class I antigens on mouse spleen cells. II. Levels of expression of H-2K, H-2D, and H-2L in different mouse strains. The numbers of MHC class I molecules expressed by spleen cells from various mouse strains were determined by using MHC-specific monoclonal antibodies and a radioactive binding assay. Although small differences were found to exist in some cases, our general conclusion is that different mice of the same strain, congenic mice of different haplotypes, and syngeneic mice of varying background all express similar numbers of class I antigens. B10.A mice (8 to 10 wk old), for example, express 5.3 X 10 Kk molecules/cell, 5.4 X 10 Dd molecules/cell, and 2.2 X 10 Ld molecules/cell. Some of the differences observed in class I antigen expression included: 1) the level of Kk expression increased to a small but significant extent with age in B10.A mice; 2) female B10.A mice expressed slightly higher amounts of Kk than male mice; and 3) B10.A(2R) and B10.A(4R) recombinant strains expressed elevated levels of K-end antigens and slightly decreased levels of D-end antigens when compared with the unrecombinant B10.A strain. In several strains, F1 mice express approximately 50% as many copies of each parental antigen as do the homozygous parents. B10 mice, which are negative for the L antigen, nevertheless express the same total number of D-end molecules as do B10.A mice. The data suggest that the levels of expression of MHC class I molecules are controlled by at least two factors: gene dosage and another factor(s) that gives rise to the small variations in class I antigen expression seen with age, sex, and strain, and to the low expression of Ld relative to Dd and Kk.
Abstract Phantom limb pain is a painful sensation that is perceived in a body part that no longer exists. To control this pain, many methods have been used, such as medication, physical treatment, nerve block, neuromodulation, surgical treatment and mirror therapy. However, until now, their effects have been uncertain. We report the successful reduction of phantom limb pain using mirror therapy when other treatments initially failed to control the pain. Keywords: amputation, mirror neurons, phantom limb pain
Phantom limb pain is a painful sensation that is perceived within a body part that no longer exists [1]. The condition was first described by Ambroise Paré, a French army surgeon, in the mid-16th century. Numerous research studies on phantom limb pain have been done since then, due in part to the enormous number of patients who lost body parts during the First and Second World Wars. However, its pathophysiology and etiology have not yet been clearly elucidated. Phantom limb pain develops after 50-85% of amputations. It can develop immediately after the amputation procedure: half of all patients suffer the pain within 24 hours after the amputation, and about 75% of patients experience it within a few days [2-4]. The frequency or severity of the pain can lessen over time; however, cases of no change and even of increased pain have also been reported [5]. Treatments for phantom limb pain include medication, physical treatment, nerve block, neuromodulation, and surgical treatment. Nevertheless, the efficacy of these methods has not yet been proven. Herein, we report a case of successful reduction of phantom limb pain using mirror therapy when other treatments had failed to control the pain.
CASE REPORT A 30-year-old male patient had received an above-elbow amputation about eight months before presenting to us, for an open fracture of the left radius and ulna due to trauma. He had been transferred from one department to another because his condition did not improve during treatment owing to constant, severe pain after the amputation surgery. We could not find any specific findings in his medical history. He complained of cramping pain in his amputated arm and of electric shock-like pain occurring once every few minutes. He also said that he could feel the entire shape of the removed arm, and that it was medially rotated. Every day, he was prescribed gabapentin (2,400 mg), oxycodone (200 mg), and amitriptyline (25 mg), along with other medications to control the pain. However, the degree of his pain relief was insignificant, and his visual analog scale (VAS) score was 8-10 out of 10. Other treatment methods, such as stellate ganglion block, thoracic sympathetic ganglion block, brachial plexus block, cervical transforaminal epidural block, and a subcutaneous infusion of ketamine, were also tried; however, they gave the patient only short-term improvement. Lastly, spinal cord stimulation (SCS) was performed, but the treatment effect was very insignificant. Finally, we performed mirror therapy. The patient visited the hospital four times a week and underwent a 15-minute treatment session each time, during which he was guided to feel the movement of his removed arm and hand as if it were his normal arm and hand moving in the mirror. After a week, the patient said that he could feel his medially rotated arm had returned to normal, and his VAS score decreased to 7 out of 10. One month later, he said that the previous cramping pain was almost gone and the phantom hand and arm felt normal; at that time, his VAS score was 5 out of 10. Three months after the initial therapy, he continues to perform mirror therapy three to four times a week at home.
However, the electric shock-like pain remains, and the VAS score is usually maintained at 4 out of 10. He is under follow-up at our outpatient department, with oxycodone decreased to 100 mg a day.
DISCUSSION No treatment for phantom limb pain has yet been clearly proven effective. Drug therapy includes narcotic drugs, anti-epileptic medications, topical anesthetics, and analgesics. An infusion of ketamine, an N-methyl-D-aspartate receptor antagonist, has also been introduced for phantom limb pain treatment [6]. Meanwhile, non-drug therapy includes sympathetic ganglion block, transforaminal epidural block, peripheral nerve block, transcutaneous electrical nerve stimulation (TENS), direct cortical or spinal cord stimulation, and mirror therapy. One of the most effective treatments is mirror therapy, introduced by Ramachandran and Rogers-Ramachandran in 1996. Under this therapy, a patient is allowed to perceive imaginary movement of the removed body part as normal body movement through a mirror [7]. The mirror image of the normal body part helps reorganize and integrate the mismatch between proprioception and visual feedback of the removed body part, thus enhancing the treatment effect for phantom limb pain. The clinical effect of mirror therapy is much more significant than that of other treatments [4,8]. Rizzolatti used mirror neurons to explain the fundamentals of mirror therapy [9]. Mirror neurons were first found in the monkey premotor cortex, and later Rossi discovered that humans also have a similar mirror neuron system [10]. A mirror neuron fires both when a person acts and when a person observes the same action performed by another; the neuron thus mirrors the behavior of the other, as though the observer were itself acting. Mirror neurons provide observers with internally recognized experiences, allowing them to understand others' behaviors, intentions, and emotional states [9,10]. Therefore, while mimicking the behavior of another, observers can experience not only the sensation but also an emotion similar to that of the other.
In this sense, a patient with phantom limb pain can feel the same sensation or emotion as in the normal body part by observing the mirror image. By doing so, pain is expected to decrease through resolution of the conflict between motor intention, proprioception, and the visual system. Not all observation is accompanied by these sensory experiences of mirror neurons. A person with no amputation and no phantom limb pain cannot feel these sensory experiences, since signals from non-mirror neurons block the mirror neurons, whereas in a patient with an amputation this non-mirror neuron system is not operating [11]. Visual observation can help evoke empathy, which explains how mirror therapy works for a patient. The effect of mirror therapy varies depending on the pain: it is reported to be more effective on deep somatic pain (e.g., pressure sense and proprioceptive pain) than on superficial pain (e.g., warmth sense and nociceptive pain), because deep tissues, compared with superficial tissues, are responsible for integrating sensorimotor input as well as creating movements [12]. Recently, mirror therapy has been used not only for patients with phantom limb pain, but also for patients with complex regional pain syndrome and stroke [13,14]. Many studies indicate that mirror therapy is only effective for upper limb treatment, but it has potential as an alternative treatment for pain that is difficult to control. In this case, mirror therapy resulted in dramatic pain relief for a patient with chronic phantom limb pain when other treatments such as medications, physical therapies, nerve blocks, and neuromodulation did not work. Mirror therapy can be expected to be widely used for the treatment of phantom limb pain, since it is easy to use both at home and in outpatient departments.
import java.io.IOException;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
/**
* This evaluates an attribute from a bean / data source.
*/
public abstract class AbstractValueOperator<DataType> extends Operator {
private static final long serialVersionUID = 1L;
private String attributeName;
private DataType fixedValue;
/**
* @param attributeName
* the name of the attribute this operator evaluates.
* @param fixedValue
* the value this operator compares the attribute against.
*/
protected AbstractValueOperator(String attributeName, DataType fixedValue) {
Objects.requireNonNull(attributeName);
this.attributeName = attributeName;
this.fixedValue = fixedValue;
}
/**
* Evaluate the one TestAtom associated with {@link #getAttribute()}.
*/
protected abstract boolean evaluateTestAtom(TestAtom atom);
@Override
protected Operator createCanonicalOperator() {
return this;
}
@Override
public final int getOperandCount() {
return 2;
}
@Override
public Object getOperand(int index) {
if (index == 0) {return attributeName;}
if (index == 1) {return fixedValue;}
throw new IllegalArgumentException("illegal operand index: " + index);
}
@SuppressWarnings("rawtypes")
@Override
protected Map<String, Collection<Class>> getAttributeTypes() {
Map<String, Collection<Class>> returnValue = new HashMap<>(1);
Collection<Class> c = new HashSet<>();
Object value = getValue();
if (value == null) {
c.add(null);
} else {
c.add(value.getClass());
}
returnValue.put(getAttribute(), c);
return returnValue;
}
@Override
protected final boolean evaluateTestAtoms(Map<String, TestAtom> values) {
TestAtom atom = values.get(getAttribute());
if (atom == null) {
throw new IllegalArgumentException("Missing TestAtom for \""
+ getAttribute() + "\"");
}
return evaluateTestAtom(atom);
}
@Override
protected int getCanonicalOrder() {
return 3;
}
/**
* Convert a value to a String.
* <p>
* This helps format objects in a Java-friendly format. For example the
* float 1 will be converted to "1f". The double 4 will be "4.0". Strings
* and characters will be encoded using Java's escape character conventions.
* <p>
* This formatting is important because the OperationParser uses a
* parser designed to pick up on these differences.
*/
protected String toString(Object value) {
StringBuilder sb = new StringBuilder();
if (value instanceof Integer) {
sb.append(value);
} else if (value instanceof Float) {
sb.append(value);
sb.append('f');
} else if (value instanceof Double) {
sb.append(value);
if (sb.indexOf(".") == -1) {
sb.append(".0");
}
} else if (value instanceof Long) {
sb.append(value);
sb.append('L');
} else if (value instanceof Character) {
sb.append('\'');
sb.append(JavaEncoding.encode(value.toString()));
sb.append('\'');
} else if (value == null) {
sb.append("null");
} else {
sb.append('\"');
sb.append(JavaEncoding.encode(value.toString()));
sb.append('\"');
}
return sb.toString();
}
private void writeObject(java.io.ObjectOutputStream out) throws IOException {
out.writeInt(0);
out.writeObject(getAttribute());
out.writeObject(getValue());
}
@SuppressWarnings("unchecked")
private void readObject(java.io.ObjectInputStream in) throws IOException,
ClassNotFoundException {
int version = in.readInt();
if (version == 0) {
attributeName = (String) in.readObject();
fixedValue = (DataType) in.readObject();
} else {
throw new IOException("Unsupported internal version: " + version);
}
}
/**
* Return the attribute name, which is the first operand.
*/
public String getAttribute() {
return (String) getOperand(0);
}
/**
* Return the value the attribute must match, which is the second operand.
*/
@SuppressWarnings("unchecked")
public DataType getValue() {
return (DataType) getOperand(1);
}
}
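The writeObject/readObject pair above guards the serialized stream with an explicit version integer, so future format changes can be read back compatibly. A minimal self-contained sketch of the same pattern (the class `VersionedPoint` is hypothetical, not part of this API):

```java
import java.io.*;

// Hypothetical example of explicit-version custom serialization,
// mirroring the pattern used by AbstractValueOperator above.
public class VersionedPoint implements Serializable {
    private static final long serialVersionUID = 1L;
    // transient: the fields are written manually in writeObject below
    private transient int x;
    private transient int y;

    public VersionedPoint(int x, int y) { this.x = x; this.y = y; }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.writeInt(0); // stream-format version, independent of serialVersionUID
        out.writeInt(x);
        out.writeInt(y);
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        int version = in.readInt();
        if (version == 0) { // each supported format version gets its own branch
            x = in.readInt();
            y = in.readInt();
        } else {
            throw new IOException("Unsupported internal version: " + version);
        }
    }

    public int x() { return x; }
    public int y() { return y; }
}
```

If a field is later added, a writer can bump the version to 1 and a reader can handle both branches, which plain default serialization does not allow as cleanly.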
package rocha.bruno.ExemploH2MySQL.repository;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import rocha.bruno.ExemploH2MySQL.model.ClientEntity;
@Repository
public interface ClientRepository extends CrudRepository<ClientEntity, Long> {
}
/*******************************************************************************
* Copyright (c) 2020 Northrop Grumman Systems Corporation.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*******************************************************************************/
package com.zeligsoft.ddk.zdl.zdlgen.util;
import java.util.Arrays;
import java.util.List;
/**
* @author prismtech
*
*/
public final class RhapsodyMetaclasses {
private RhapsodyMetaclasses() {
//nothing to do
}
public static final String ACCEPT_EVENT_ACTION = "AcceptEventAction"; //$NON-NLS-1$
public static final String ACCEPT_TIME_EVENT = "AcceptTimeEvent"; //$NON-NLS-1$
public static final String ACTIVITY_DIAGRAM = "ActivityDiagram"; //$NON-NLS-1$
public static final String ACTOR = "Actor"; //$NON-NLS-1$
public static final String ARGUMENT = "Argument"; //$NON-NLS-1$
public static final String ASSOCIATION = "Association"; //$NON-NLS-1$
public static final String ASSOCIATION_END = "AssociationEnd"; //$NON-NLS-1$
public static final String ATTRIBUTE = "Attribute"; //$NON-NLS-1$
public static final String CALL_OPERATION = "CallOperation"; //$NON-NLS-1$
public static final String CLASS = "Class"; //$NON-NLS-1$
public static final String CLASSIFIER_ROLE = "ClassifierRole"; //$NON-NLS-1$
public static final String CLEANUP = "Cleanup"; //$NON-NLS-1$
public static final String COMBINED_FRAGMENT = "CombinedFragment"; //$NON-NLS-1$
public static final String COMMENT = "Comment"; //$NON-NLS-1$
public static final String COMMUNICATION_DIAGRAM = "CommunicationDiagram"; //$NON-NLS-1$
public static final String COMPONENT = "Component"; //$NON-NLS-1$
public static final String COMPONENT_DIAGRAM = "ComponentDiagram"; //$NON-NLS-1$
public static final String COMPONENT_INSTANCE = "ComponentInstance"; //$NON-NLS-1$
public static final String CONDITION = "Condition"; //$NON-NLS-1$
public static final String CONFIGURATION = "Configuration"; //$NON-NLS-1$
public static final String CONNECTOR = "Connector"; //$NON-NLS-1$
public static final String CONSTRAINT = "Constraint"; //$NON-NLS-1$
public static final String CONSTRUCTOR = "Constructor"; //$NON-NLS-1$
public static final String CONTROLLED_FILE = "ControlledFile"; //$NON-NLS-1$
public static final String DEFAULT_TRANSITION = "DefaultTransition"; //$NON-NLS-1$
public static final String DEPENDENCY = "Dependency"; //$NON-NLS-1$
public static final String DEPLOYMENT_DIAGRAM = "DeploymentDiagram"; //$NON-NLS-1$
public static final String DESTRUCTOR = "Destructor"; //$NON-NLS-1$
public static final String ENUMERATION_LITERAL = "EnumerationLiteral"; //$NON-NLS-1$
public static final String EVENT = "Event"; //$NON-NLS-1$
public static final String EXECUTION_OCCURRENCE = "ExecutionOccurrence"; //$NON-NLS-1$
public static final String FILE = "File"; //$NON-NLS-1$
public static final String FLOW = "Flow"; //$NON-NLS-1$
public static final String FOLDER = "Folder"; //$NON-NLS-1$
public static final String GENERALIZATION = "Generalization"; //$NON-NLS-1$
public static final String HYPERLINK = "HyperLink"; //$NON-NLS-1$
public static final String INITIALIZER = "Initializer"; //$NON-NLS-1$
public static final String INSTANCESLOT = "InstanceSlot"; //$NON-NLS-1$
public static final String INSTANCE_SPECIFICATION = "InstanceSpecification"; //$NON-NLS-1$
public static final String INTERACTION_OCCURRENCE = "InteractionOccurrence"; //$NON-NLS-1$
public static final String INTERACTION_OPERAND = "InteractionOperand"; //$NON-NLS-1$
public static final String ITEM_FLOW = "ItemFlow"; //$NON-NLS-1$
public static final String LINK = "Link"; //$NON-NLS-1$
public static final String MATRIX_LAYOUT = "MatrixLayout"; //$NON-NLS-1$
public static final String MATRIX_VIEW = "MatrixView"; //$NON-NLS-1$
public static final String MESSAGE = "Message"; //$NON-NLS-1$
public static final String MODULE = "Module"; //$NON-NLS-1$
public static final String NODE = "Node"; //$NON-NLS-1$
public static final String OBJECT = "Object"; //$NON-NLS-1$
public static final String OBJECT_MODEL_DIAGRAM = "ObjectModelDiagram"; //$NON-NLS-1$
public static final String OBJECT_NODE = "ObjectNode"; //$NON-NLS-1$
public static final String OPERATION = "Operation"; //$NON-NLS-1$
public static final String PACKAGE = "Package"; //$NON-NLS-1$
public static final String PANEL_DIAGRAM = "PanelDiagram"; //$NON-NLS-1$
public static final String PIN = "Pin"; //$NON-NLS-1$
public static final String PORT = "Port"; //$NON-NLS-1$
public static final String PROFILE = "Profile"; //$NON-NLS-1$
public static final String PROJECT = "Project"; //$NON-NLS-1$
public static final String RECEPTION = "Reception"; //$NON-NLS-1$
public static final String REFERENCE_ACTIVITY = "ReferenceActivity"; //$NON-NLS-1$
public static final String REQUIREMENT = "Requirement"; //$NON-NLS-1$
public static final String SEQUENCE_DIAGRAM = "SequenceDiagram"; //$NON-NLS-1$
public static final String STATE = "State"; //$NON-NLS-1$
public static final String STATECHART = "Statechart"; //$NON-NLS-1$
public static final String STEREOTYPE = "Stereotype"; //$NON-NLS-1$
public static final String STRUCTURE_DIAGRAM = "StructureDiagram"; //$NON-NLS-1$
public static final String SWIMLANE = "Swimlane"; //$NON-NLS-1$
public static final String SYSML_PORT = "SysMLPort"; //$NON-NLS-1$
public static final String TABLE_LAYOUT = "TableLayout"; //$NON-NLS-1$
public static final String TABLE_VIEW = "TableView"; //$NON-NLS-1$
public static final String TAG = "Tag"; //$NON-NLS-1$
public static final String TRANSITION = "Transition"; //$NON-NLS-1$
public static final String TRIGGERED_OPERATION = "TriggeredOperation"; //$NON-NLS-1$
public static final String TYPE = "Type"; //$NON-NLS-1$
public static final String TEMPLATEPARAMETER = "TemplateParameter"; //$NON-NLS-1$
public static final String USE_CASE = "UseCase"; //$NON-NLS-1$
public static final String USE_CASE_DIAGRAM = "UseCaseDiagram"; //$NON-NLS-1$
public static final List<String> METACLASSES = Arrays.asList(
ACCEPT_EVENT_ACTION,
ACCEPT_TIME_EVENT,
ACTIVITY_DIAGRAM,
ACTOR,
ARGUMENT,
ASSOCIATION,
ASSOCIATION_END,
ATTRIBUTE,
CALL_OPERATION,
CLASS,
CLASSIFIER_ROLE,
CLEANUP,
COMBINED_FRAGMENT,
COMMENT,
COMMUNICATION_DIAGRAM,
COMPONENT,
COMPONENT_DIAGRAM,
COMPONENT_INSTANCE,
CONDITION,
CONFIGURATION,
CONNECTOR,
CONSTRAINT,
CONSTRUCTOR,
CONTROLLED_FILE,
DEFAULT_TRANSITION,
DEPENDENCY,
DEPLOYMENT_DIAGRAM,
DESTRUCTOR,
ENUMERATION_LITERAL,
EVENT,
EXECUTION_OCCURRENCE,
FILE,
FLOW,
FOLDER,
GENERALIZATION,
HYPERLINK,
INITIALIZER,
INSTANCESLOT,
INSTANCE_SPECIFICATION,
INTERACTION_OCCURRENCE,
INTERACTION_OPERAND,
ITEM_FLOW,
LINK,
MATRIX_LAYOUT,
MATRIX_VIEW,
MESSAGE,
MODULE,
NODE,
OBJECT,
OBJECT_MODEL_DIAGRAM,
OBJECT_NODE,
OPERATION,
PACKAGE,
PANEL_DIAGRAM,
PIN,
PORT,
PROFILE,
PROJECT,
RECEPTION,
REFERENCE_ACTIVITY,
REQUIREMENT,
SEQUENCE_DIAGRAM,
STATE,
STATECHART,
STEREOTYPE,
STRUCTURE_DIAGRAM,
SWIMLANE,
SYSML_PORT,
TABLE_LAYOUT,
TABLE_VIEW,
TAG,
TRANSITION,
TRIGGERED_OPERATION,
TYPE,
USE_CASE,
USE_CASE_DIAGRAM,
TEMPLATEPARAMETER
);
}
Self-detoxification, embodiment and masculinity: a qualitative analysis of dependent heroin users' experiences of coming off drugs in prison Abstract Not all heroin users that enter the prison estate continue to use heroin or access opiate maintenance or detoxification treatment programmes. Some prisoners decide to self-detoxify. The literature on self-detoxification is thin and focuses on the decisions and practices of self-detoxification in community settings. Less attention has been given to the role of the body and the lived experience of self-detoxification in prison settings. The aim of this paper therefore is to examine the process of self-detoxification in prison, with a particular focus on the role of the body, embodiment and prisoner social relations. This paper draws on Drew Leder's 'absent body' theoretical framework and the literature on prison masculinity to analyse qualitative interviews with recently released prisoners. It shows how the decision to self-detoxify can be understood as part of the masculine performance of keeping a low profile. Keeping a low profile helped the participants minimise the risks of victimisation. The self-detoxification techniques the participants used were underpinned by an awareness of the body as poisoned by heroin, suffering because of its presence, rather than its absence. This study has implications for prisoners' access to opiate maintenance and detoxification treatment programmes and harm reduction services upon release.
// IsPrintable reports whether the string contains only printable characters,
// as defined by unicode.PrintRanges (note: this is broader than ASCII).
func IsPrintable(s string) bool {
for _, c := range s {
if !unicode.IsOneOf(unicode.PrintRanges, c) {
return false
}
}
return true
} |
/* Tests_SRS_EVENTSYSTEM_26_003: [ This function shall destroy and free resources of the given event system. ] */
TEST_FUNCTION(EventSystem_Destroy_Basic)
{
CEventSystemMocks mocks;
EVENTSYSTEM_HANDLE event_system = EventSystem_Init();
mocks.ResetAllCalls();
expectEventSystemDestroy(mocks, false, 0);
EventSystem_Destroy(event_system);
mocks.AssertActualAndExpectedCalls();
} |
These students at Unidos Dual Language Charter School in Clayton County, Georgia are learning their science in Spanish. And there's science to suggest the approach benefits Spanish- and English-speakers alike.
Unidos Dual Language Charter School in Clayton County, Georgia is in the flight path of Atlanta’s airport. It looks like a growing number of America’s public schools: it’s a little rundown, and its students are brown — a little more than half African-American, just less than half Latino. Ninety percent are on free or reduced lunch.
But what’s happening inside its Pre-K through eighth-grade classes is anything but typical of the US or the South. Basically, Spanish-speaking Latino kids are getting much of their instruction in the language they know, and African-American kids from the neighborhood are picking up a second language.
That’s accomplished through a ratio of Spanish-to-English instruction that changes depending on the grade level. For example, Unidos kindergarteners get 70 percent Spanish and 30 percent English, in the form of one block of English Language Arts so they can learn to read and write in English. Their specials (gym, art) are in English, too. So their math, their other language arts lessons, their science, and their social studies are taught in Spanish. Practically speaking, that means kindergarteners stay with one Spanish-speaking teacher most of the day. By second grade, though, the ratio at Unidos is 50-50, and teachers have to do a lot more “trading off” — students, subjects, and languages. These arrangements are one of the reasons experts say a high degree of collaboration is key to making a dual-language school work.
With help from an eager second-grade volunteer, John Rendon teaches math in Spanish at Unidos Dual Language Charter School.
Speaking of which, in the first class I see, native Colombian John Rendon is teaching math in Spanish to a roomful of Latino and African-American second-graders. The kids are enthused, almost all of them raising their hands, eager to show their teacher what they know. Maybe even more tellingly, he makes a joke that pretty much all the kids get.
I speak one-on-one with a half-dozen African-American kids, some whom admittedly were “volunteered” for me, but others I randomly stop in the hall. With the degree of variability you’d expect from the second- through fifth-graders who spoke with me, they could all converse in Spanish. And their accents are solid, too.
In a mainly monolingual country where so many poor black students face low expectations and educational opportunity gaps, this degree of fluency surprises people.
“To see an African-American child walk up to you and speaking fluent Spanish, it takes them off for a minute, but then they smile,” says Tony McCreary, Unidos parent, PTA president, and volunteer cafeteria monitor. I ask if he gets questions or quizzical looks when he tells people that his children are not just learning Spanish, but learning their math, their science, their social studies in Spanish.
But the former college recruiter is even more enthusiastic about potential benefits he sees for all Unidos students.
And a growing body of research shows benefits not just for English-speakers. Though it’s a counterintuitive concept, it turns out that if you teach language-minorities core subjects in their native language, they do better in school, including learning English. In other words, teaching, say, Latino students math in Spanish often leads to better overall performance and better English compared to kids in English immersion programs.
Unidos academic coach and parent Jeannie Myers has seen the drawbacks of spending so much critical time on language, not content.
The reasons kids can eventually have better English after being taught core subjects in Spanish are complex; they have to do with brain development, having knowledge and concepts to attach words to, and with getting language-minorities to buy in. Myers, who’s originally from the Dominican Republic, says that motivational aspect includes parent involvement — a less intimidating endeavor when one’s language is spoken throughout the school building.
Fifth-graders Joshua Chicas, Melissa Padron, Kimani Watson, and Jose Bautista.
The way “dual-language” schools (the rough synonym “bilingual” has become a loaded word and is used less frequently) operate varies, but basically, students learn literacy and content in two languages with at least half their instruction in the non-dominant tongue. In the US that’s usually Spanish, though there are more and more programs using other languages.
Regardless, the approach seems to be working for shy fifth-grader and aspiring science teacher Melissa Padron. She says the English and the language-hopping she’s learning at Unidos helps her family.
That could be good practice for Melissa, and as we’ll see later in the series, it could be good for the future pool of bilingual teachers and for the economy as a whole.
Melissa’s school’s test scores are decent — roughly average for the demographics. But Unidos kids are becoming bilingual, so dual-language education is apparently not a zero-sum game (even when a large portions of the students’ instruction is in a language that’s not on the test). And of course, there are things that standardized tests can’t measure. Gabriela Washington tells me bilingual people get better-paying jobs, and that one of her dreams is to travel with an Unidos classmate.
Maybe all the noise from one of the world’s busiest airports isn’t just a nuisance.
This story is part of a series originally published by WBHM. Support for this series comes from The Equity Reporting Project: Restoring the Promise of Education, which was developed by Renaissance Journalism with funding from the Ford Foundation.
Share your thoughts and ideas on Facebook at our Global Nation Exchange, on Twitter @globalnation, or contact us here. Is there a question you wanted answered in this story? Let reporter Dan Carsen know. |
'''A module to initiate preferences and services properly'''
from abc import abstractmethod
from typing import Any, Iterable, List, Tuple, Union
from calculate_anything.units import UnitsService
from calculate_anything.currency.providers import CurrencyProviderFactory
from calculate_anything.currency.providers.base import CurrencyProvider
from calculate_anything.currency import CurrencyService
from calculate_anything.lang import LanguageService
from calculate_anything.time import TimezoneService
from calculate_anything.utils import (
Singleton,
get_or_default,
is_not_types,
safe_operation,
)
__all__ = ['Preferences']
class _Preferences:
def __init__(self):
self._uncomitted_keys = set()
self._uncomitted = []
self._commits = 0
def _to_commit(self, key: str, value: Any) -> None:
self._uncomitted.append((key, value))
self._uncomitted_keys.add(key)
@abstractmethod
def _commit_one(self, *args: Any, **kwargs: Any) -> None:
pass
@abstractmethod
def _pre_commit(self, *args: Any, **kwargs: Any) -> None:
pass
def commit(self) -> None:
cls_name = self.__class__.__name__
for key, value in self._uncomitted:
update_str = '{}: {} = {}'.format(cls_name, key, value)
with safe_operation(update_str):
self._commit_one(key, value)
self._pre_commit()
self._uncomitted = []
self._uncomitted_keys = set()
self._commits += 1
class LanguagePreferences(_Preferences):
'''The language preferences class
Attributes:
lang (str): The language currently in use
'''
@property
def lang(self) -> str:
return LanguageService().lang
def set(self, lang: str) -> None:
'''Language to be changed. The language is not set immediately,
but only after 'commit()' is called
Args:
lang (str): The language to set. The name must be a file from
data/lang without the extension.
'''
super()._to_commit('lang', lang)
def _commit_one(self, key: str, value: Any) -> None:
if key == 'lang':
LanguageService().set(value)
def _pre_commit(self) -> None:
# Set en_US if no lang has been specified and it's the first start
if self._commits == 0 and 'lang' not in self._uncomitted_keys:
LanguageService().set('en_US')
class TimePreferences(_Preferences):
'''The timezone preferences class
Attributes:
default_cities (str): The default cities currently in use
'''
@property
def default_cities(self) -> str:
return TimezoneService().default_cities
def set_default_cities(
self, default_cities: Union[str, Iterable[str]]
) -> None:
'''Default cities to be set. The cities are not set immediately,
but only after 'commit()' is called
Args:
default_cities (Union[str, Iterable[str]]): The default cities to
set. If str is provided it must be comma separated cities.
(i.e 'Athens GR,New York City US')
(i.e ['Athens GR', 'New York City US'])
'''
if not isinstance(default_cities, str):
default_cities = ','.join(default_cities)
default_cities = TimezoneService().parse_default_cities_str(
default_cities, save=False
)
super()._to_commit('default_cities', default_cities)
def _commit_one(self, key: str, value: Any) -> None:
if key == 'default_cities':
TimezoneService().set_default_cities(value)
def _pre_commit(self) -> None:
if self._commits == 0:
TimezoneService().start()
class CurrencyPreferences(_Preferences):
'''The currency preferences class
Attributes:
default_currencies (list of str): The default currencies currently in
use
cache_update_frequency (int): An integer representing the current
interval of cache update in seconds
cache_enabled (bool): Whether cache is currently enabled or not
providers (tuple of str): A tuple of currently enabled currency
providers.
'''
@property
def default_currencies(self) -> List[str]:
return CurrencyService().default_currencies
@property
def cache_update_frequency(self) -> int:
return CurrencyService()._cache._update_frequency
@property
def cache_enabled(self) -> bool:
return CurrencyService().cache_enabled
@property
def providers(self) -> Tuple[str]:
free_providers = CurrencyService()._provider._free_providers
api_providers = CurrencyService()._provider._api_providers
return tuple([*free_providers, *api_providers])
def set_default_currencies(
self, default_currencies: Union[str, Iterable[str]]
) -> None:
'''Default currencies to set. The currencies are not set immediately,
but only after 'commit()' is called
Args:
default_currencies (Union[str, Iterable[str]]): The default
currencies to set in iso3 format. If str is provided it must
be comma separated currencies. (i.e 'EUR,CAD,BTC,USD'),
(i.e ['EUR', 'CAD', 'BTC', 'USD'])
'''
if isinstance(default_currencies, str):
default_currencies = default_currencies.split(',')
default_currencies = map(str.strip, default_currencies)
default_currencies = map(str.upper, default_currencies)
default_currencies = list(default_currencies)
super()._to_commit('default_currencies', default_currencies)
def set_cache_update_frequency(self, update_frequency: int) -> None:
'''Update frequency to set. The update frequency is not set immediately,
but only after 'commit()' is called
Args:
update_frequency (int): An integer representing an interval
in seconds in which cache will be updated.
'''
update_frequency = get_or_default(update_frequency, int, 86400)
update_frequency = max(update_frequency, 0)
super()._to_commit('cache_update_frequency', update_frequency)
def enable_cache(self, update_frequency: int) -> None:
'''Alias of 'set_cache_update_frequency()'.'''
self.set_cache_update_frequency(update_frequency)
def disable_cache(self) -> None:
'''Disables the cache after 'commit()' is called.'''
super()._to_commit('cache_update_frequency', 0)
def _get_provider(
self, provider: Union[str, CurrencyProvider], api_key: str
) -> CurrencyProvider:
if is_not_types(CurrencyProvider)(provider):
provider = str(provider).lower()
provider = get_or_default(
provider,
str,
'internal',
CurrencyProviderFactory.get_available_providers(),
)
provider = CurrencyProviderFactory.get_provider(provider, api_key)
return provider
def add_provider(
self, provider: Union[str, CurrencyProvider], api_key: str = ''
) -> None:
'''A currency provider to be added with an associated api_key.
The provider is not set immediately, but only after 'commit()' is
called
Args:
provider (Union[str, CurrencyProvider]): If str is provided it
must represent a provider name str as returned by
'CurrencyProviderFactory.get_available_providers()'. If a
CurrencyProvider is provided, api_key is ignored.
api_key (str): The api_key to set if provider is a str.
'''
provider = self._get_provider(provider, api_key)
super()._to_commit('add_provider', provider)
def remove_provider(self, provider: Union[str, CurrencyProvider]) -> None:
'''A currency provider to be removed. The provider is not removed immediately,
but only after 'commit()' is called
Args:
provider (Union[str, CurrencyProvider]): If str is provided it
must represent a provider name str as returned by
'CurrencyProviderFactory.get_available_providers()'.
'''
provider = self._get_provider(provider, '')
super()._to_commit('remove_provider', provider)
def _commit_one(self, key: str, value: Any) -> None:
if key == 'default_currencies':
CurrencyService().set_default_currencies(value)
elif key == 'cache_update_frequency':
if value > 0:
CurrencyService().enable_cache(value)
else:
CurrencyService().disable_cache()
elif key == 'add_provider':
CurrencyService().remove_provider(value)
CurrencyService().add_provider(value)
elif key == 'remove_provider':
CurrencyService().remove_provider(value)
def _pre_commit(self) -> None:
# If first start, start service
if self._commits == 0:
CurrencyService().start()
# Else if currency_provider has been provided start with force
elif 'add_provider' in self._uncomitted_keys:
CurrencyService().start(force=True)
class UnitsPreferences(_Preferences):
'''The units preferences class
Attributes:
conversion_mode (UnitsService.ConversionMode): The conversion mode
currently in use as in UnitsService.ConversionMode.
'''
@property
def conversion_mode(self) -> UnitsService.ConversionMode:
return UnitsService().conversion_mode
def set_conversion_mode(
self, mode: Union[str, UnitsService.ConversionMode]
) -> None:
'''A conversion mode to be set. The mode is not set immediately,
but only after 'commit()' is called
Args:
mode (Union[str, UnitsService.ConversionMode]): If str is provided
it must represent a conversion mode (i.e 'normal', 'crazy').
If a UnitsService.ConversionMode is provided it must be a member of
UnitsService.ConversionMode.
'''
if isinstance(mode, str):
mode = mode.lower()
mode = get_or_default(mode, str, 'normal', ['normal', 'crazy'])
if mode == 'crazy':
mode = UnitsService.ConversionMode.CRAZY
else:
mode = UnitsService.ConversionMode.NORMAL
mode = get_or_default(
mode,
UnitsService.ConversionMode,
UnitsService.ConversionMode.NORMAL,
[
UnitsService.ConversionMode.NORMAL,
UnitsService.ConversionMode.CRAZY,
],
)
super()._to_commit('units_conversion_mode', mode)
def _commit_one(self, key: str, value: Any) -> None:
if key == 'units_conversion_mode':
UnitsService().set_conversion_mode(value)
def _pre_commit(self) -> None:
if self._commits == 0:
UnitsService().start()
class Preferences(metaclass=Singleton):
'''The Preferences class is a Singleton class which holds all other
preferences, like language, timezone, units and currency.
Attributes:
language (LanguagePreferences): The language preferences reference.
time (TimePreferences): The time preferences reference.
units (UnitsPreferences): The units preferences reference.
currency (CurrencyPreferences): The currency preferences reference.
'''
def __init__(self):
self.language = LanguagePreferences()
self.time = TimePreferences()
self.units = UnitsPreferences()
self.currency = CurrencyPreferences()
def commit(self) -> None:
'''Commits preference changes in proper order'''
self.language.commit()
self.time.commit()
self.units.commit()
self.currency.commit()
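The stage-then-commit pattern implemented by _Preferences can be illustrated with a minimal, self-contained sketch (illustrative only, not part of this module: ToyPreferences and its attributes are invented names):

```python
class ToyPreferences:
    """Minimal illustration of the stage-then-commit pattern used by
    _Preferences: set() only records changes; commit() applies them."""

    def __init__(self):
        self._uncommitted = []   # staged (key, value) pairs, in order
        self.applied = {}        # settings actually in effect
        self._commits = 0

    def set(self, key, value):
        # Stage the change; nothing takes effect yet.
        self._uncommitted.append((key, value))

    def commit(self):
        # Apply staged changes in order, then clear the staging area.
        for key, value in self._uncommitted:
            self.applied[key] = value
        self._uncommitted = []
        self._commits += 1

prefs = ToyPreferences()
prefs.set('lang', 'en_US')
prefs.set('lang', 'el_GR')    # later staged values win, as in _Preferences
assert prefs.applied == {}    # nothing applied before commit()
prefs.commit()
assert prefs.applied == {'lang': 'el_GR'}
```

Deferring application until commit() lets dependent services (language, timezone, currency) be updated together and in a fixed order, which is why Preferences.commit() above calls the sub-preference commits in sequence.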
|
package com.nio.channel;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
public class SocketChannelDemo1 {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.configureBlocking(false);
sc.connect(new InetSocketAddress("127.0.0.1", 9999));
// Non-blocking connect: poll until the TCP handshake completes.
while (!sc.finishConnect()) {
// busy-wait; a real application would register OP_CONNECT with a Selector instead
}
String str = "hello nio!";
ByteBuffer buf = ByteBuffer.wrap(str.getBytes());
while(buf.hasRemaining()){
sc.write(buf);
}
sc.close();
}
}
|
package org.kuali.coeus.propdev.api.specialreview;
import org.kuali.coeus.common.api.compliance.exemption.ExemptionTypeContract;
import org.kuali.coeus.sys.api.model.IdentifiableNumeric;
public interface ProposalSpecialReviewExemptionContract extends IdentifiableNumeric {
ExemptionTypeContract getExemptionType();
}
|
Impact of mergers on LISA parameter estimation for nonspinning black hole binaries We investigate the precision with which the parameters describing the characteristics and location of nonspinning black hole binaries can be measured with the Laser Interferometer Space Antenna (LISA). By using complete waveforms including the inspiral, merger and ringdown portions of the signals, we find that LISA will have far greater precision than previous estimates for nonspinning mergers that ignored the merger and ringdown. Our analysis covers nonspinning waveforms with moderate mass ratios, q >= 1/10, and total masses 10^5 < M/M_{Sun} < 10^7. We compare the parameter uncertainties using the Fisher matrix formalism, and establish the significance of mass asymmetry and higher-order content to the predicted parameter uncertainties resulting from inclusion of the merger. In real-time observations, the later parts of the signal lead to significant improvements in sky-position precision in the last hours and even the final minutes of observation. For comparable-mass systems with total mass M/M_{Sun} ~ 10^6, we find that the increased precision resulting from including the merger is comparable to the increase in signal-to-noise ratio. For the most precise systems under investigation, half can be localized to within O(10 arcmin), and 10% can be localized to within O(1 arcmin). I. INTRODUCTION Gravitational waves carry a tremendous amount of information through the universe. It is the goal of the emerging field of gravitational wave astronomy to access that information and bring it to bear on the problems of astrophysics and cosmology. The current generation of gravitational wave detectors, such as the Laser Interferometer Gravitational-wave Observatory (LIGO), is focused on the detection of gravitational waves from isolated astrophysical systems.
LIGO and its contemporaries will also provide some minimal information on the parameters of these systems such as their mass, luminosity distance, and approximate sky position. The quality of this information will be limited by the relatively low signal-to-noise ratios (SNR) expected for LIGO events. The Laser Interferometer Space Antenna (LISA), a space-based detector of gravitational waves in the milli-Hertz band, will detect the inspiral and merger of supermassive black hole binaries (BHBs) with very large SNRs (100 to 10^4) out to redshifts of z ~ 10 or greater. These large SNRs make it possible to extract a large amount of information from each event including mass, mass ratio, spins, orientation, luminosity distance, and sky position. Because sources of gravitational waves are strongly dominated by gravitational dynamics, and because the waves are expected to propagate through intervening matter with little interaction, these observations may provide an unusually clean and direct measurement of the source system parameters. Of particular interest are the distance and sky position parameters, which will drive LISA's ability to narrow the set of candidate source galaxies or clusters for merger events, potentially opening up a range of multi-messenger astronomy opportunities. For instance, the coincident measurement of gravitational and electromagnetic signatures from a single source galaxy (i.e. standard sirens) will allow a direct measurement of the redshift-luminosity relationship, thereby constraining the dark-energy equation of state. While cosmological models predict that the dark-energy-dominated era began fairly recently at z ~ 1, measurements for much larger redshifts are not currently possible by other methods, and the standard siren method is limited only by the achievable range of coincident electromagnetic observation. * Electronic address: Sean.T.McWilliams@nasa.gov
Extracting this information requires a waveform model that can provide templates with sufficient fidelity to distinguish between signals with different parameters in the presence of instrumental noise. For BHBs, the complete waveform signal is traditionally divided into three different regimes: the inspiral, which can be described using post-Newtonian (PN) orbital dynamics; the ringdown, which can be treated using black-hole perturbation theory; and the merger, which bridges the two and can be predicted using numerical relativity. Ideally, an estimate of LISA's ability to measure the parameters of observed BHBs would include the information contained in the complete waveform. However, the difficulty associated with modeling the merger has led the majority of such studies (with a few exceptions) to include only the inspiral portion of the waveform. Until recent advances in numerical relativity opened the door to a complete understanding of General Relativity's predictions for these signals, it was not clear whether theoretical knowledge about the final, strongest moments of the signal would be available for system parameter measurements. A naive guess at the consequences of omitting the merger would be that the loss in parameter precision would be proportional to the loss in SNR. This assumes that the two portions of the signal have equal density of information per unit SNR and that that information is independent. There are two reasons that are sometimes cited for expecting that the effect of the merger on parameter precision will be less than that on SNR. The first is that the merger encompasses very few GW cycles compared with the observed portion of the inspiral, and it is expected that information content correlates with the number of cycles.
The second is that, in the low-frequency limit, the sensitivity of LISA to parameters such as sky position is entirely generated by the orbital motion of the LISA constellation, and the merger is too short in duration to experience a significant orbital modulation. In this work we investigate the significance of the merger to LISA parameter estimation using quasianalytic waveforms that are tuned to match the results of numerical relativity. We restrict ourselves to nonspinning binaries with moderate mass ratios (q >= 1/10) and explore the parameter space around a candidate system with total redshifted mass of 1.33 × 10^6 M_⊙, mass ratio q = 1/2, and redshift z = 1. Astrophysically, we expect black holes to have spin. While including spin and mergers each separately improves parameter estimation, it is not known how these effects will combine. Therefore, including both spin effects and mergers will be an important follow-up to this investigation. In section II, we discuss the methods employed for generating models of the complete waveform signals and the instrument, and for estimating parameter measurement precision. In section III, we examine LISA's ability to measure binary black-hole system parameters. The primary novelty of our results is that we assume theoretical knowledge of the complete signal is applied in the observational analysis. We examine the impact of including the merger and higher harmonic content (III A) for comparable-mass systems near 10^6 M_⊙, and the variation of the results across a range of masses (III B) and mass ratios (III C). In (III E) we study how sky position information accumulates in time, which will impact how LISA ultimately interacts with other astronomical instruments. We summarize our key results in section IV. II. METHODOLOGY Before presenting our findings, we will briefly review the steps taken to estimate the precision with which LISA will measure astrophysical parameters using complete waveforms.
Theoretical predictions of the strong-field gravitational dynamics and radiation generation must be encoded in a parameterized waveform signal model. We then need to apply a model of the instrument, including the response to signals and the sources of noise. Finally, we need to estimate the theoretical limit on the uncertainty of the measured signal parameters that could be achieved from a measurement consisting of our realizations of the signal and noise content. A. Waveform model We assume that Einstein's theory of gravity correctly describes black hole binary systems. While numerical relativity can now treat the final moments of these events, it would be impractical to conduct simulations covering the parameter space of interest. Furthermore, general signal templates must cover the complete signal including the long-lasting inspiral signal which can not be modeled numerically, but which is well-described by the post-Newtonian (PN) approximation. Instead our complete waveform model is based on a variation of the PN treatment that has been "tuned" to approximate the numerical simulation results at late times. Such a model can be tuned using available numerical data, while providing reasonable, if unverified, model signals for arbitrary parameter sets. In order to investigate LISA's capabilities for recovering source parameters, we specifically use a waveform model tuned to match the available numerical simulations for nonspinning black hole binaries. This model, referred to as the IRS-EOB model, uses a conventional effective-one-body (EOB) Hamiltonian formalism for the adiabatic inspiral. For the merger-ringdown, a fit to a physically-motivated functional form is employed for the phasing (see Eq. 9 in ), while the amplitude is calculated using a model for the flux that is constrained both to be consistent with the inspiral flux through 3.5 PN order, and also to vanish as it approaches the ringdown frequency (referred to as "Model 2" and given by Eq. 19 in ). 
The physical motivation, that the radiation can be treated as though it were being generated by a shrinking rigid rotator, explains the IRS in IRS-EOB, which stands for "implicit rotating source". For a unit-mass system (m_1 + m_2 = 1), the source model depends only on the remaining intrinsic source parameters, the mass ratio q ≡ m_1/m_2 (where m_2 > m_1), and the spins, which we set to vanish for this initial investigation. Throughout this work, we employ waveforms that correspond to ~10^6 M of observation, or ~3 months for our fiducial case with M = 1.33 × 10^6 M_⊙. Here, we use units of G = c = 1, so that 1 M = 4.92 × 10^{-6} (M/M_⊙) seconds. Therefore, lower-mass systems require longer simulations in M. We are limited computationally from employing longer waveforms, but we have verified that our results do not change significantly (with the single exception of the uncertainty in total system mass) by doubling the waveform length for the lowest-mass cases investigated. We use a model cadence of 0.5 M. The signal is resampled when we apply the detector's response function, so that the final signal cadence corresponds to a quarter wavelength at the highest frequency reached by the ℓ = 4, m = ±4 harmonics. This is true even for cases where we restrict the calculation to have only quadrupolar content. After the source calculation, we derive the incident waveforms referenced to the solar system barycenter (SSB), at which point we can apply the response of the LISA detector.
Computation of the incident waveform in the SSB frame depends on eight additional parameters: the redshifted total system mass M = M_o(1 + z) (with M_o the rest mass and z the redshift), luminosity distance D_L, coalescence time t_c, and three angles describing the orientation of the binary, for which we use the inclination ι (using the convention that ι = 0 corresponds to the line of sight being coincident with the orbital axis of the binary), initial orbital phase φ_0, and polarization phase ψ. At the source, the emitted radiation can be decomposed in spin-weighted spherical harmonic components h_ℓm of the dimensionless gravitational wave strain (scaled for unit distance from the source). Here the strain is complex-valued to represent both polarization components, h ≡ h_+ + ih_×. Specifying the parameters (ι, φ_0, ψ, M, D_L) allows us to calculate the solar-system incident waveform h_B: h_B(t) = (M/D_L) Σ_{ℓ,m} h_{ℓm}(t) {}_{-2}Y_{ℓm}(ι, φ_0), (1) where {}_{-2}Y_{ℓm} are the spin-weight −2 spherical harmonics. The two additional parameters, the ecliptic latitude θ and longitude φ, describe the sky location of the binary in the SSB frame. The dependence on sky location is applied by the instrument response, which we discuss below. We use the vector λ_a ≡ (ln M, ln D_L, ι, θ, φ, φ_0, ψ, t_c) to denote the complete set of variable parameters. Note that the dependence on mass ratio, q, is not explicitly included in λ_a but is instead implicitly included in the h_ℓm. This is because q is not varied when computing parameter uncertainties (see section II C), a procedure consistent with that used in. This is equivalent to the assumption that there is no uncertainty in the measurement of q. We intend to relax this assumption in future investigations. B. Instrument model The instrument model consists of two components: a prescription for converting h_B into signals observed by the instrument, and a description of the instrument noise.
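Returning briefly to the waveform construction: the spin-weighted mode sum for h_B can be sketched numerically. The following minimal example is not the paper's code; the function names are invented, only the dominant ℓ = 2, m = ±2 closed-form harmonics are included, and the distance and mass scaling of the strain is omitted:

```python
import cmath
import math

def swsh_2(m, incl, phase):
    """Spin-weight -2 spherical harmonics for l = 2, m = +/-2 (closed forms)."""
    c = math.cos(incl)
    if m == 2:
        return math.sqrt(5.0 / (64.0 * math.pi)) * (1.0 + c) ** 2 * cmath.exp(2j * phase)
    if m == -2:
        return math.sqrt(5.0 / (64.0 * math.pi)) * (1.0 - c) ** 2 * cmath.exp(-2j * phase)
    raise ValueError("only m = +/-2 implemented in this sketch")

def incident_strain(h22, incl, phase):
    """Combine the dominant modes into h = h_+ + i h_x.
    For nonspinning, nonprecessing binaries h_{2,-2} = conj(h_{2,2})."""
    return (h22 * swsh_2(2, incl, phase)
            + h22.conjugate() * swsh_2(-2, incl, phase))
```

For a face-on binary (ι = 0) only the m = +2 harmonic is nonzero, so the observed strain is circularly polarized, while an edge-on binary (ι = π/2) receives equal weight from both modes and is linearly polarized.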
The LISA instrument consists of a constellation of three spacecraft located at the vertices of an approximately equilateral triangle with a side length of 5 × 10^9 m. The light travel time along each of the six one-way links is monitored using laser interferometry. These individual link measurements are then combined using a technique known as Time Delay Interferometry (TDI) to yield observables that contain gravitational wave signals and suppress instrumental noise. Of the many families of TDI observables, the ones most suitable for data analysis are the orthogonalized or "optimal" variables. For this investigation, we have developed a set of orthogonal variables we refer to as "pseudo-A, E, T" (hereafter Ā ≡ {Ā, Ē, T̄}), which are analogous to the original A ≡ {A, E, T} variables in except that they are constructed from the Michelson X ≡ {X, Y, Z} variables rather than the TDI generators α ≡ {α, β, γ}. The software package Synthetic LISA is used to generate the X variables from the incident gravitational waveforms h_B, and the Ā variables are then computed as Ā = (Z − X)/(2√2), Ē = (X − 2Y + Z)/(2√6), T̄ = (X + Y + Z)/(2√3). (2) An overall factor of 1/2 has been applied to all three formulas to make Ā and Ē agree with A and E in the low-frequency limit. Synthetic LISA can also be used to model instrument noise. However, Synthetic LISA includes statistical fluctuations in its noise-generation algorithms, whereas we are interested in studying parameter uncertainties that result from differences in the incident waveforms for typical instrumental noise levels. We therefore wish to suppress the impact of statistical fluctuations in a given realization of the noise. One way to do this is to generate an ensemble of noise realizations and average them. Satisfactory results can be achieved with a suitable number of averages at the expense of increased computational effort.
For this analysis, we have followed the procedure used in to produce analytic estimates of the mean power spectral densities of the noise in the TDI channels directly from the acceleration and optical path-length noises in the individual links. This procedure requires making assumptions such as stationarity and equal arm lengths. The expressions for the one-sided spectral densities in the Ā and Ē observables are S_Ā,Ē(f) = 2 sin^2(ωτ) [ (2 + cos(ωτ)) S_op + 2 (3 + 2 cos(ωτ) + cos(2ωτ)) S_pm ], (3) with a similar expression for T̄, where ω ≡ 2πf and τ ≡ L/c is the arm length L expressed as a light-travel time. These expressions are the analog of Eqs. 67 and 68 in and Eqs. 19 and 20 in for the noise response of the original A, and we have verified that we duplicate the results in for A using (accounting for a typographical error which appears in Eq. 20 of which, if corrected, would make it consistent with and with our results). The quantities S_pm and S_op are the one-sided spectral densities of the proof-mass acceleration and optical path-length noises, respectively, expressed as equivalent strain. They are modeled as S_pm(f) = 2.5 × 10^{-48} [1 + (f / 10^{-4} Hz)^{-1}] (f / 1 Hz)^{-2} Hz^{-1}, S_op(f) = 1.8 × 10^{-37} (f / 1 Hz)^2 Hz^{-1}. (4) The acceleration noise power spectrum in S_pm includes an additional f^{-1} reddening below 0.1 mHz to account for the unmodeled behavior of the instrument below the LISA band. As a check of the expressions in Eq. 3, we used Synthetic LISA to model an ensemble of 1000 realizations of noise in the TDI channels. Fig. 1 shows a comparison of the mean power spectra of this ensemble with the expressions in Eq. 3. In general, the agreement between the simulated noise and the analytic expressions is quite good. Deviations between the two curves indicate areas of potential concern when evaluating SNR and parameter sensitivity. For example, the analytic noise expressions in Eq. 3 contain nulls at frequencies corresponding to the inverse round-trip times of the constellation. The simulated data, be it signal or noise, is finite at these frequencies due to spectral estimation effects.
This leads to a spurious divergent contribution to the SNR and parameter uncertainties at these frequencies. To guard against this, we applied a noise floor of 10^−40 (f/1 Hz)^2, eliminating the nulls in the noise response. In addition, the "flexing" of the LISA arms due to orbital variations was disabled in Synthetic LISA to maintain consistency with the expressions in Eq. 3, which were derived assuming constant arm lengths. The other area of disagreement between the simulated and analytic noise in Fig. 1 is at the low-frequency end of the T̃ channel. The analytic expression in Eq. 3 predicts that the noise in the T̃ channel should continue to decrease with decreasing frequency, while the simulated noise levels off. As a result of using Synthetic LISA to model the signal response and Eq. 3 to model the noise, there would be a large spurious contribution to the SNR and parameter sensitivity at low frequencies, precisely the band where the T̃ channel is not expected to contribute. As the source of this discrepancy has not yet been identified, we have elected to exclude the T̃ channel in the remainder of our analysis. We note that, since the information from T̃ resides at high frequencies, its exclusion will lead us to produce conservative estimates, with systematically worse uncertainties than might otherwise be obtained for higher-mass cases where the T̃-channel response can be non-negligible. Our treatment is consistent with previous studies, which have generally neglected the details of LISA's high-frequency response. We plan to include T̃ in future work. The final component of the noise model is the foreground of gravitational waves from unresolved compact binaries. We use a previously developed model for the galactic foreground. Specifically, we add to the expressions for S_{Ã,Ẽ} in Eq. 3 a galactic foreground noise, S_gal, constructed from the confusion-noise estimate S_conf of that model (its Eq. 14). The contribution from S_gal is not included in Fig.
1 but is included in all SNR and parameter estimation calculations.

C. Parameter estimation using the Fisher matrix

To approximate the measurement precision that LISA can achieve, one approach we can take is the Fisher matrix formalism. If the LISA data stream consists of a waveform, h(λ^a), embedded in a signal, s, so that the noise, n, is given by n = s − h, then the probability that the signal contains a waveform with the parameter set λ^a is given by the likelihood function p ∝ exp[−(s − h | s − h)/2], where (·|·) is a noise-weighted inner product. The "maximum likelihood" set of parameters, λ̂^a, is the one that maximizes p. Errors in the parameters can be assessed by expanding p around λ̂^a, such that p ∝ exp[−(1/2) Γ_ab Δλ^a Δλ^b], where Δλ^a ≡ λ^a − λ̂^a. The Fisher information matrix, Γ_ab, which is the centerpiece of our subsequent analysis, is defined to be Γ_ab = (∂h/∂λ^a | ∂h/∂λ^b) (Eq. 8), where a and b are parameter indices. Throughout this work, we calculate the parameter derivatives in Eq. 8 using one-sided differencing, with a fractional step size δλ^a = Δ λ^a, where we set Δ = 10^−4 for the coalescence time and Δ = 10^−6 for all other parameters. To lowest order in an expansion in SNR^−1, the covariance matrix, Σ_ab, is just the inverse of the Fisher matrix, Σ = Γ^−1, so that σ_a ≡ √Σ_aa is the standard deviation of parameter λ^a. The covariance matrix is symmetric, with the off-diagonal terms giving the covariance between parameters and the diagonal terms giving the variance of each parameter. Because inverting the Fisher matrix to find the covariance matrix is not always valid, we verify our results by testing individual cases, selected at random for each system of interest, using Markov Chain Monte Carlo simulations. Because we use ln M and ln D_L as parameters, the resulting mass and distance uncertainties are fractional, and we express them as fractional uncertainties throughout this work. Though we will refer to the quantities σ_a as "uncertainties" throughout this work, we note that they are a measure of precision, not necessarily accuracy.
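The procedure just described (one-sided finite differences of the waveform, a noise-weighted inner product, and inversion to a covariance matrix) can be sketched as follows for a generic frequency-domain waveform model. The function and variable names, and the toy waveform in the usage note, are hypothetical illustrations, not the paper's actual code.

```python
import numpy as np

def inner_product(a, b, Sn, df):
    # Noise-weighted inner product (a|b) = 4 Re sum[ a(f) b*(f) / Sn(f) ] df,
    # evaluated on a uniform frequency grid with spacing df.
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

def fisher_matrix(h_of, params, Sn, df, frac_step=1e-6):
    """Fisher matrix via one-sided finite differences: Gamma_ab = (dh/da | dh/db).

    h_of(params) must return the frequency-domain waveform on the same grid
    as the one-sided noise PSD Sn; frac_step is the fractional step size
    (the text uses 1e-6 for most parameters and 1e-4 for t_c).
    """
    h0 = h_of(params)
    derivs = []
    for i, p in enumerate(params):
        step = frac_step * p if p != 0.0 else frac_step
        shifted = list(params)
        shifted[i] = p + step
        derivs.append((h_of(shifted) - h0) / step)
    n = len(params)
    gamma = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            gamma[i, j] = inner_product(derivs[i], derivs[j], Sn, df)
    return gamma

def parameter_sigmas(gamma):
    # Invert the Fisher matrix for the covariance matrix; the square roots
    # of the diagonal give the 1-sigma parameter uncertainties.
    cov = np.linalg.inv(gamma)
    return cov, np.sqrt(np.diag(cov))
```

For example, with a toy waveform h(f) = A exp(2πi f t_c) and white noise, the resulting 2 × 2 Fisher matrix is symmetric with positive diagonal entries, and its inverse yields σ_A and σ_tc.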
Another parameter of interest is the precision of the sky localization, expressed as the area of the uncertainty ellipse on the sky, Ω. This is sometimes referred to as ΔΩ or δΩ in the literature, but we simply use Ω, given that it is a measure of an area of uncertainty rather than a measure of the uncertainty of an area. To construct this value from our data, we make the approximation of computing the area of the error ellipse defined by the 2 × 2 covariance sub-matrix of the sky angles (Eq. 11). We note that Ω is not, in general, recoverable from the individual angle uncertainties, both because we quote median values throughout, and because we define the ellipse so that the effect of Eq. 11 is to diagonalize the Fisher sub-matrix formed by β and λ, and thereby calculate the area using the true semi-major and semi-minor axes.

D. Information Accumulation

One aspect of LISA's parameter sensitivity that is of interest is the way in which parameter uncertainty evolves with time. A simple way to investigate this is to truncate the signal at a specified sequence of times before merger. To avoid edge effects in the ensuing spectral estimation, the truncated signal is tapered with a raised-cosine window with a length of approximately one wave cycle of the quadrupole mode at the time of truncation. The Fisher matrix and covariance matrix are then computed using this truncated signal. By constructing a sequence with progressively later truncation times, one can trace the evolution of parameter uncertainty. Fig. 2 shows the time evolution of the uncertainty in the ecliptic latitude, β, for an equal-mass system with M = 1.33 × 10^6 M⊙ at z = 1. Three variants of the time curves are shown, corresponding to different tapering conventions: the taper either starts at the designated truncation time, ends at that time, or has its mid-point at that time. This distinction, while seemingly trivial, shows the potential impact of slight differences in the treatment of time-domain signals with regard to taper length, type, or placement, as clear differences can be seen in Fig. 2. For early times, the taper prescription does not matter.
At later times, the precise taper implementation becomes more important, leading to a time shift between the three curves. In our discussion of information accumulation in Sec. III E, the truncation time refers to the mid-point of the taper. For signals comprising a single harmonic (or rather, a single pair of (ℓ, ±m) modes, which are complex conjugates of each other for nonspinning waveforms), it is straightforward to repeat this analysis in the frequency domain. This provides an internal consistency check for our code. For each desired truncation time, the instantaneous frequency of the source waveform is used to compute the corresponding signal frequency. The evaluation of the inner product in Eq. 8 is then limited to frequencies below this. The "frequency" curve in Fig. 2 shows that this approach is consistent with the time-domain approach, at least up to times very near the merger. It would be possible to extend this technique by tracking each mode separately and computing a different frequency cutoff for each. However, we find the time-domain approach to be more straightforward.

FIG. 2: The curves labeled "pre-taper", "mid-taper", and "post-taper" were computed using time-domain truncation, with the truncation time corresponding to the end, middle, and beginning, respectively, of a one-radiation-cycle-long taper. The curve labeled "frequency" was computed using the full waveform but imposing an upper frequency cutoff, when evaluating the inner product in Eq. 8, corresponding to the instantaneous signal frequency at the truncation time. The vertical dash-dotted line corresponds to the Schwarzschild ISCO (see Sec. III A). The vertical thick solid line separates the times prior to merger from the times after merger.

E. Caveats

It is well known that the Fisher-matrix approach is prone to a number of potential pitfalls. In this section, we attempt to address a few of them. The first potential issue is our approximation of the parameter derivatives, ∂h/∂λ^a, using a one-sided finite-difference approach. As a rough check of the validity of this approximation, we have computed the Fisher and covariance matrices using various finite-difference step sizes and verified that the results were consistent. Table I demonstrates that an order-of-magnitude decrease in the step size changes covariance terms by a few percent at worst. A related concern is that the Fisher-matrix precision estimates assume that the relevant portion of the likelihood function can be treated as a quadratic function. This assumption should be guaranteed by the large SNR, but is not explicitly verified. Finally, actual observations will require the implementation of concrete algorithms for exploring the likelihood function, such as those pursued in the Mock LISA Data Challenges. In those Challenges, it has been demonstrated that accuracy may be impacted, for instance, by complicated structure in the likelihood function, including multiple maxima and extended shallow regions. Systematic errors are also possible, for instance, if errors in the theoretical signal predictions should exceed statistical errors. Highly accurate merger waveform predictions, and corresponding models tuned to those predictions, are currently available only for a very limited sampling of specific black-hole system configurations. However, this area of study is advancing rapidly, and it now appears that accurate information about the complete signals throughout the relevant parameter space is likely to be available at the time of LISA's operation. We therefore focus on the additional source information that may be obtainable when the full signal predictions are applied in the observational analysis. III.
RESULTS

Since there are variations in the parameter uncertainties across the parameter space, we perform Monte Carlo simulations to find the distribution of uncertainties. For specific choices of masses and luminosity distance (parameters M, q, and D_L), we conducted Monte Carlo simulations consisting of 1024 randomly generated parameter sets for all the cases shown, and have spot-checked that our results do not change significantly if we increase to 8192 parameter sets. The remaining parameters are drawn from uniform distributions, with the sky-position polar angle drawn from a distribution uniform in its cosine to give uniform sky coverage.

A. Adding the merger and higher harmonics

Table II summarizes the improvement in parameter uncertainties resulting from the addition of the merger for equal-mass systems and for mass ratio q = 1/2, each with a total mass of 1.33 × 10^6 M⊙ at a redshift z = 1. We compare the uncertainty estimates obtained with four different options for the waveform models. Two of these options consist of the ℓ = 2, m = ±2 modes only, with one tapered in time to remove the merger and the other including the full inspiral-merger-ringdown signal. The mid-point of the taper corresponds to the time when the signal reaches the innermost stable circular orbit (ISCO) frequency of a test particle orbiting a Schwarzschild black hole, f_ISCO = c^3/(6^{3/2} π G M).

TABLE II: Ratio of the waveform-model results for median variance of all the extrinsic parameters for two sets of comparable-mass physical systems. The "Numerator" and "Denominator" columns indicate the models compared in constructing the ratios for that row. The models vary by the harmonic content of the waveforms and by whether the merger is included (full) or not (ISCO). For the systems considered here, the fractional loss in estimated precision from ignoring the final merger is comparable to the corresponding loss in SNR and has a greater impact than ignoring higher harmonics. The significance of the higher harmonics is lower when full models are considered, as compared with ISCO-terminated models. The actual median fractional variances for all cases are given in Table III.

Much of the previous systematic work on parameter uncertainties with LISA observations has applied waveform models similar to the (ℓ = |m| = 2, ISCO) option. More recent work has included higher-harmonic content. Our other two waveform options include modes up to ℓ ≤ 4, where one case is again tapered to remove the merger and the other includes the complete signal. The top row for each system in Table II shows the ratio of parameter uncertainties with and without the merger when higher harmonics are included. In each case, the inclusion of the merger increases the SNR by roughly a factor of 3. In general terms, if the information contained in the merger waveform is equally rich compared with the inspiral waveform, the uncertainties should decrease by a similar factor. This is generally the case for most parameters, with the mass uncertainty showing the least improvement and the sky-position, polarization, and coalescence-time uncertainties improving most. The second and third rows for each system in Table II summarize the improvement in uncertainties resulting from the inclusion of higher-harmonic content in the waveforms. The second row shows the improvement in uncertainty when the full waveform is included in each model, while the third row shows the effect of including the higher harmonics with waveform models terminating at ISCO. The latter comparison is roughly similar to previous considerations of the impact of higher harmonics. Comparing the second and third rows provides some indication of the independence of the information in the higher harmonics and in the post-ISCO merger.
For most parameters, the marginal effect of including higher harmonics is not as great when the full-length waveforms are considered as it was for ISCO-terminated waveform models. In Figs. 3 and 4, we show histograms of our results for the four waveform-model options for the q = 1 and q = 1/2 systems, respectively. The histograms in Fig. 3 agree qualitatively with those presented in our prior work. Quantitatively there is some disagreement, which may be attributed to several factors. Chief among these are the increased duration of the signal (∼10 days in the prior work, as opposed to ∼3 months for a comparable mass in this work) and an error in the prior code that omitted a factor of the TDI cadence in the parameter uncertainty estimates. For both the equal-mass case in Fig. 3 and q = 1/2 in Fig. 4, we see a clear improvement in the level of measurement precision one can expect by including the merger waveform. In particular, the parameter t_c is localized extremely well in both Figs. 3 and 4 for cases that include the merger, relative to the timing precision without the merger. Indeed, for every mass ratio, the inclusion of the merger is estimated to result in uncertainties in t_c that are an order of magnitude or more smaller than the smallest gravitational-wave half-period reached by the signal waveforms, which is the shortest time interval over which the signal will contain information content. This dramatic improvement in timing accuracy can be heuristically explained by noting that the merger provides a sharp feature that can be well localized. The uncertainty in the total system mass, M, is essentially insensitive to the inclusion of the merger or the presence of higher harmonics, and appears to depend entirely on the number of inspiral cycles, as was anticipated in earlier work. For this reason, we instead show SNR^−1 in Fig. 4 and in subsequent histograms.
This quantity is useful, as it shows the degree of relative improvement in parameter measurement that can be explained by an increase in SNR alone. The precision of the luminosity distance D_L and polarization phase ψ measurements improves by roughly an order of magnitude over the quadrupolar inspiral case as either the merger or higher harmonics are added individually. When both features are added simultaneously, the improvement is "only" a factor of ∼30, suggesting that some of the information added by the two features is common. The inclination ι and orbital phase constant φ_o show qualitatively different behavior depending on what additional physics is added to the waveform model. For both mass ratios, we see that the addition of higher harmonics dramatically reduces the long tail of large uncertainties for these parameters in the worst cases of the quadrupole-only results. The addition of the merger, on the other hand, results in an improvement of the most precise determinations of ι and φ_o, with less effect on the uncertainty of the least accurate parameter sets. Unlike the results for luminosity distance and polarization phase, the overall improvement when both merger and harmonics are included is closer to the product of the individual improvements, indicating that the additional information brought by each is independent. For the sky angles (the ecliptic latitude β and longitude λ), the phenomenology of the response to including higher harmonics is roughly reversed from that seen for the inclination and orbital phase angles. In both Figs. 3 and 4, inclusion of higher harmonics most significantly improves the smallest uncertainties in the distribution, with less impact on the largest uncertainties. The effect of adding the merger, however, appears to be more complicated. For both mass ratios, the ℓ = |m| = 2 uncertainty distributions for both sky angles appear to improve uniformly.
In the ℓ ≤ 4 distributions, the addition of the merger shows relatively more improvement on the least-accurate side of the distribution. The reduction in uncertainty obtained by adding both features to the waveform model shows less independence than seen with ι and φ_o. For q = 1/2 in particular, there is relatively little benefit to adding the higher harmonics once the merger has been included.

B. Systems with different total masses

With redshifted mass 1.33 × 10^6 M⊙, the frequency of the inspiral-merger transition in the signals we have studied so far occurs near the optimal region of LISA's sensitivity band. Varying the mass shifts this transition frequency (in inverse proportion), thus changing LISA's relative sensitivity to the inspiral and merger-ringdown signals. In Fig. 5, we compare three cases, all with a mass ratio q = 1/2 at a redshift z = 1, and with total masses of M = 1.33 × 10^5 M⊙, 1.33 × 10^6 M⊙, and 1.33 × 10^7 M⊙, chosen in part so that the heaviest case can be compared to results in the literature. In this and subsequent figures, we do not compare results for the measurement of the mass, because our signal duration is constant in M, and therefore the lightest systems do not fully span LISA's band. This was essentially due to computational constraints on the signal length, which we intend to improve upon in future work. Scaled in units of seconds, we see the best t_c estimates for the mid-mass case, which merges closest to LISA's most sensitive band and therefore has the largest signal-to-noise ratio (SNR). On the other hand, if we were to rescale the curves to measure precision against the timescale of the source physics, M, then the largest systems would be seen as most precise. For the sky-position angles, the middle-mass case outperforms the others by a factor of 2-3, with a broad distribution for the highest-mass case.
For all other parameters, the lowest mass is easily the worst performer, with the mid-mass system marginally outperforming the largest-mass case. In prior investigations that were limited to the inspiral, a more precipitous drop in performance occurs for systems with masses approaching 10^7 M⊙. This is simply a result of the absence of signal: for such large masses, the portion of the total signal that occurs in band for LISA is increasingly dominated by the merger, so that no signal is present when the merger is excluded. This effect is exacerbated in studies that employ a more severe low-frequency cut-off in the LISA sensitivity.

C. Systems with different mass ratios

We have also examined results for mass ratios other than q = 1/2. In Fig. 6, we compare three different mass ratios, q = 1/2, q = 1/4, and q = 1/10, with all three cases again corresponding to a total system mass of 1.33 × 10^6 M⊙ at a redshift z = 1. Varying the mass ratio has surprisingly little effect on the uncertainties. We see that the inverse SNR shows more variation than the parameter uncertainties. This would seem to suggest a balance between the importance of the total signal power and the fraction of that power contained in higher harmonics. To the extent that the relatively small differences among the three cases for most parameters are statistically meaningful, the q = 1/10 case is the worst performer by a small margin for all parameters except t_c, with an insignificant difference between the q = 1/2 and q = 1/4 cases.

D. Comparing results

In Table III, we summarize our results by quoting the median parameter uncertainties for all of our data, as well as quoting results for comparable cases from the literature. We note that, because we do not include the mass ratio in our covariance calculation, we are unable to convert to uncertainties in the chirp mass, M_c, and reduced mass, which are used in most of the literature.
Furthermore, without explicit knowledge of the covariance between these parameters in the available publications, we are unable to convert the results in the literature into uncertainties in the total mass, so we leave the mass out of our comparison in Table III. We do include a comparison of the sky position, calculated using Eq. 11. Of particular note is the discrepancy between our results and the results found in one earlier study. Specifically, from their Fig. 1, their median latitude and longitude uncertainties are 0.046 deg and 0.057 deg, respectively, or ∼3 arcmin as stated in their abstract. However, when running as identical a case as possible (ten cycles prior to merger of a system with q = 1/2 and M = 1.33 × 10^7 M⊙, using the Ã and Ẽ channels only), we arrive at median estimates of 0.39 deg and 0.63 deg for the latitude and longitude, respectively. This represents an order-of-magnitude disagreement. We note that for a total mass of M = 1.33 × 10^6 M⊙, the median latitude and longitude uncertainties for all mass ratios were within a factor of a few of the localization claimed in that study, with q = 1/2 providing the best localization, yielding median latitude and longitude uncertainties of 0.09 deg (5 arcmin) and 0.18 deg (11 arcmin), respectively, and with 10% of the cases in that ensemble being localized at the ∼1 arcmin level.

E. Accumulation of information with time

In actual observations of black hole binaries with LISA, information about the system will be progressively unveiled over time. In particular, some estimate of the system parameters may be available in advance of the merger observation. As the system approaches merger, the uncertainties of these estimates are expected to decrease sharply. This real-time development is especially important in planning multi-messenger observations.
How and when sky-position estimates improve as the coalescence proceeds may impact the instrument's operational requirements, including how frequently data downlinks are required, and may inform the planning of protected observing periods near the moment of merger. In turn, these operational requirements may influence details of LISA's instrumental design. In order to compare the evolution of measured parameter precision for different systems, we have calculated the parameter uncertainty for waveforms whose ends are "turned off" via windowing, as described earlier. In this section, all the specified times correspond to the time at the mid-point of the applied taper. This procedure is analogous to a realistic procedure for measuring parameters from progressively longer segments of real-time data. In Fig. 7, we compare the uncertainty in ecliptic latitude and longitude for the equal-mass waveform and the q = 1/2 waveform, both with higher harmonics (ℓ ≤ 4) and restricted to quadrupolar modes (ℓ = 2, m = ±2). The linear-appearing decrease seen in both panels indicates that, over the last several hours before merger, our estimates for uncertainties in the sky-position angles are roughly proportional to the time remaining before merger, until a couple of minutes before "merger" (which we have defined as the moment at which the quadrupole-mode amplitude peaks). For the cases studied, the dominant ringdown radiation period is about 80 s, roughly setting the scale at which the linear trend levels off. Note that, in some cases, the parameter uncertainties may continue to improve after "merger", drawing on information in the ringdown radiation. The lower pair of curves in each panel are based on the waveform model including the higher harmonics. Consistent with the discussion in Sec. III A, including the harmonics continues to be valuable even late in the observation, after the merger is recorded. Fig. 8 shows another comparison of latitude uncertainty, for the same systems compared in Fig. 5.
Because the parameter being varied is the total system mass, and the mass rescales time, we compare these results using times measured both in M and in seconds. Because these signals are simply mass rescalings of the same signal in naturalized units, and therefore have identical harmonic content relative to their quadrupolar content, the main factor driving differences is the frequency band spanned by the signal, and where that band falls relative to the most sensitive band of the detector. The lowest-mass case, M = 1.33 × 10^5 M⊙, has the largest number of cycles in band, and therefore performs best at early times. By the time it merges, however, the signal is chirping at frequencies much higher than the most sensitive band for LISA, so the contribution after ISCO is negligible for this case. The mid-mass case, M = 1.33 × 10^6 M⊙, is outperformed by the lowest-mass case at early times. However, because it merges in LISA's most sensitive band, the contribution approaching ISCO and running through the merger and ringdown is far greater than in the other cases. Indeed, by the time the full signal has been included, this case yields a more precise estimate than the lowest-mass case by a factor of ∼2. The largest mass, M = 1.33 × 10^7 M⊙, has the fewest cycles in band, so it yields the lowest precision at early times. However, it too has a substantial gain in SNR, relative to the SNR of its inspiral, in the late inspiral through the merger and ringdown, so it too makes gains in precision relative to the lowest-mass case, although, unlike the M = 1.33 × 10^6 M⊙ case, it does not fully "catch up" and remains the worst performer of the three. We compare latitude uncertainty for four different mass ratios in Fig. 9: q = 1, q = 1/2, q = 1/4, and q = 1/10. This comparison again shows a trade-off between the number of in-band cycles and the signal power.
Because radiation reaction is weaker for more disparate mass ratios, the q = 1/10 case yields the highest precision at early times, despite having significantly less power (the inspiral SNR falls off more slowly with decreasing mass ratio than the merger SNR). Sky-position uncertainty for the smaller-q cases decreases more slowly over most of the last day than the near-linear rate seen for the equal-mass case. By ∼20 minutes before merger, approaching ISCO, the median uncertainties in β are roughly the same for all mass ratios shown. At late times, the power content becomes the more dominant factor in further decreasing the uncertainty in β. The equal-mass case contains more signal power in the merger. The q = 1/2 and q = 1/4 cases are nearly optimal, retaining some of the merger signal strength but perhaps benefiting more from stronger harmonic content at late times.

TABLE III: Median fractional variance of all the extrinsic parameters for the cases investigated in this paper, as well as results from the literature for comparable systems. Columns give (1 + z)M, m1/m2, the harmonic content, whether the merger is included, the uncertainties σ_M/M, σ_DL/D_L, Ω (deg²), the remaining extrinsic angles and t_c, and the SNR. All angles are measured in degrees, and time is measured in seconds. We separate our results into cases where all available information has been included (top portion) and cases where some information has been suppressed for testing and analysis (bottom portion). For literature results, "X PN" refers to the post-Newtonian order of the model used. All studies are set at a fixed source distance of z = 1, albeit with slightly different cosmological parameters. The results in this work correspond to ∼10^6 M of observation (∼3 months for M = 1.33 × 10^6 M⊙) unless otherwise noted, while the results from the literature correspond to 1 year of observation. Table footnotes: (a) signal duration is limited to 10 cycles, for comparison; (b) estimated from the histograms in Fig. 2 of the cited work; (c) "R" indicates that the amplitude was restricted to the leading-order term; (d) estimated from the results in Table II of the cited work.
Overall, final sky-position error estimates are nearly flat for 1 ≥ q ≥ 1/4 (see Table III). By q = 1/10, the merger signal power is significantly diminished; while there are still improvements in position estimates after merger, they are notably smaller than those in the other cases.

IV. CONCLUSION

We have investigated the precision with which black-hole binary system parameters can be measured from LISA observations, including merger waveforms in the analysis of nonspinning binaries with moderate mass ratios (q ≥ 1/10). We have further studied how the expected performance depends on mass ratio and total system mass, and the impact of including or neglecting the merger signal and higher harmonics. The luminosity distance and polarization phase uncertainties depend on both the inclusion of the merger and the presence of higher harmonics, although the improvements from including these two elements are not independent. The inclination, the orbital phase constant, the ecliptic latitude, and the ecliptic longitude also depend on both the merger and the harmonic content; for these parameters, the improvements resulting from including both the merger and higher harmonics are essentially independent. For comparable-mass systems near 10^6 M⊙, ignoring the merger reduces the SNR by a factor of ∼3 and results in a similar loss of median precision in parameter estimation, and an even greater loss for the sky-position estimates. For sky position, ignoring the merger results in a more significant loss of precision than ignoring higher harmonics. Parameter estimates are roughly independent of mass ratio through 1 ≥ q ≥ 1/4, for sky position in particular, though for smaller mass ratios, q ≲ 1/10, the precision begins to decrease. For q = 1/2, the best parameter estimates are obtained for systems near 10^6 M⊙, which merge in the middle of LISA's sensitivity band.
Decreasing the mass by an order of magnitude to ∼10^5 M⊙ results in a precision loss of roughly a factor of 5, with a diminished relative contribution from the merger. Increasing the mass to ∼10^7 M⊙ results in a similar loss. Though we have left out the effects of spin, including precession, our median sky-position precision estimates are similar to those obtained in studies that include precession but ignore the merger: each method locates the systems on the sky to within O(10 arcmin). Our best cases (roughly the top 10%) are localized at the level of O(1 arcmin). We estimate that LISA will usually be able to locate larger-mass systems (near 10^7 M⊙) quite well, in some cases better than systems with masses near 10^5 M⊙, and far better than earlier estimates based on inspirals alone. Our results for these more massive systems do not, however, reproduce the preliminary (but widely discussed) extraordinarily precise sky-localization results found in earlier work, though we do achieve such high precision for the ∼10^6 M⊙ systems, where both the inspiral and merger are in band and can contribute.
// CLFoundation/Classes/Categorys/NSDate+CL.h
//
// NSDate+CL.h
// AFNetworking
//
// Created by 秦传龙 on 2020/8/21.
//
#import <Foundation/Foundation.h>
NS_ASSUME_NONNULL_BEGIN
@interface NSDate (CL)
/// Convert a timestamp string to a formatted date string
- (NSString *)cl_timestampToFormatter:(NSString *)formatter timestamp:(NSString *)timestamp;
/// 当前时间转格式化
- (NSString *)cl_nowDateFormatter:(NSString *)formatter;
/// 时间格式化
- (NSString *)cl_formatterDate:(NSDate *)date formatter:(NSString *)formatter ;
/// 1544408230000 13位字符串
- (NSDate *)cl_timestampToDate:(NSString *)timestamp;
// 时间 -> 时间戳
- (NSTimeInterval)cl_dateToTimestamp:(NSDate *)date;
@end
NS_ASSUME_NONNULL_END
|
In many hydrocarbon well applications, electric submersible pumping (ESP) systems are used for pumping fluids, e.g. hydrocarbon-based fluids. For example, the ESP system may be conveyed downhole on a well string and used to pump oil from a downhole wellbore location to a surface collection location along a fluid flow path. The ESP system is supplied with AC electrical power from the surface via a power cable routed downhole along the well string. The power cable is coupled with a submersible motor of the ESP system via a connector sometimes referred to as a pothead. The pothead may be coupled to a motor lead extension (MLE) which is part of the overall power cable used to supply electrical power to the ESP system. Coupling existing pothead structures to the submersible motor can be difficult, and existing potheads are sometimes susceptible to leakage.
Additionally, power cable couplings may be formed between, for example, sections of the power cable and/or between the MLE and the upper portion of the overall power cable. Such couplings also may be difficult and time-consuming to form, and are sometimes susceptible to leakage. In deep well applications, sections of power cable may be spliced together to provide a power cable long enough to extend downhole to the ESP system. The splices/couplings are formed at the surface, e.g. on the rig, and splicing difficulty can increase the time and expense associated with the deployment of the well string, including the ESP system. |
Running Head: PTSD and MDD's Underlying Dimensions

Underlying dimensions of DSM-5 posttraumatic stress disorder and major depressive disorder symptoms

This study examined the relationship between the underlying latent factors of major depression symptoms and DSM-5 posttraumatic stress disorder (PTSD) symptoms (American Psychiatric Association, 2013). A non-clinical sample of 266 participants with a trauma history participated in the study. Confirmatory factor analyses were conducted to evaluate the fit of the DSM-5 PTSD model and dysphoria model, as well as a depression model comprised of somatic and non-somatic factors. The DSM-5 PTSD model demonstrated somewhat better fit over the dysphoria model. Wald tests indicated that PTSD's negative alterations in cognitions and mood factor was more strongly related to depression's non-somatic factor than its somatic factor. This study furthers a nascent line of research examining the relationship between PTSD and depression factors in order to better understand the nature of the high comorbidity rates between the two disorders. Moreover, this study provides an initial analysis of the new DSM-5 diagnostic criteria for PTSD. |
# One way to solve: use a for loop and track seen letters in a string
class UniqueChars(object):
    def has_unique_chars(self, string):
        if string is None:
            return False
        repeats = ""
        for letter in string.lower():
            if letter in repeats:
                return False
            repeats += letter
        return True
# Second way: set() -- a set drops duplicates, so the lengths match only
# when every character is unique
class UniqueChars(object):
    def has_unique_chars(self, string):
        if string is None:
            return False
        return len(string) == len(set(string))
# Third way: add letters to an (initially) empty set
class UniqueChars(object):
    def has_unique_chars(self, string):
        if string is None:
            return False
        repeats = set()
        for letter in string.lower():
            if letter in repeats:
                return False
            repeats.add(letter)
        return True
# Fourth way: iterate without an auxiliary data structure.
# Note: the original version only compared adjacent characters, which is
# correct only for sorted input; comparing against the rest of the string
# handles the general case.
class UniqueChars(object):
    def has_unique_chars(self, string):
        if string is None:
            return False
        lowered = string.lower()
        for i, letter in enumerate(lowered):
            if letter in lowered[i + 1:]:
                return False
        return True
# Fifth way: code credit for this last solution: <NAME>
class UniqueCharsInPlace(object):
def has_unique_chars(self, string):
if string is None:
return False
for char in string:
if string.count(char) > 1:
return False
return True
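A sixth approach, not among the originals: assuming the input is restricted to alphabetic characters (so every character falls in a-z after .lower()), a single integer used as a bit vector replaces the set:

```python
# Bit-vector variant. Assumption: alphabetic input only (a-z after .lower());
# other characters would produce wrong bit positions.
class UniqueCharsBitVector(object):
    def has_unique_chars(self, string):
        if string is None:
            return False
        seen = 0
        for letter in string.lower():
            bit = 1 << (ord(letter) - ord('a'))
            if seen & bit:  # this letter's bit is already set
                return False
            seen |= bit
        return True
```

This trades the O(n) auxiliary set for one integer, at the cost of only supporting a fixed alphabet.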
|
<filename>plugins/manufacturingWarehouse/vue-storefront/core/data-resolver/ReviewsService.ts
import { DataResolver } from './types/DataResolver';
import { TaskQueue } from '@vue-storefront/core/lib/sync'
import { processLocalizedURLAddress } from '@vue-storefront/core/helpers'
import config from 'config'
import Review from 'core/modules/review/types/Review';
const createReview = (review: Review): Promise<boolean> =>
TaskQueue.execute({
url: processLocalizedURLAddress(config.reviews.create_endpoint),
payload: {
method: 'POST',
mode: 'cors',
headers: {
'Accept': 'application/json, text/plain, */*',
'Content-Type': 'application/json'
},
body: JSON.stringify({ review })
}
}).then(({ code }) => code === 200)
export const ReviewsService: DataResolver.ReviewsService = {
createReview
}
|
// /src/social/add_moderator/materialiconstwotone/24px.svg
import { createSvgIcon } from './createSvgIcon';
export const SvgAddModeratorTwotone = createSvgIcon(
`<svg xmlns="http://www.w3.org/2000/svg" enable-background="new 0 0 24 24" height="24" viewBox="0 0 24 24" width="24">
<g>
<rect fill="none" height="24" width="24"/>
</g>
<g>
<g>
<g>
<path d="M12,4.14L6,6.39v4.7c0,3.33,1.76,6.44,4.33,8.04c-1.56-4.89,2.5-9.8,7.67-9.05V6.39L12,4.14z" opacity=".3"/>
<path d="M10.33,19.13C7.76,17.53,6,14.42,6,11.09v-4.7l6-2.25l6,2.25v3.69c0.71,0.1,1.38,0.31,2,0.6V5l-8-3L4,5v6.09 c0,5.05,3.41,9.76,8,10.91c0.03-0.01,0.05-0.02,0.08-0.02C11.29,21.19,10.68,20.22,10.33,19.13z"/>
</g>
<path d="M17,12c-2.76,0-5,2.24-5,5s2.24,5,5,5s5-2.24,5-5S19.76,12,17,12z M20,17.5h-2.5V20h-1v-2.5H14v-1h2.5V14h1v2.5H20V17.5z"/>
</g>
</g>
</svg>`
);
|
from typing import Any, Dict, Tuple, Optional
import matplotlib.pyplot as plt
import meep as mp
import numpy as np
import pandas as pd
import gdsfactory as gf
from meep.geom import Medium
from numpy import ndarray
from gdsfactory.component import Component
from gmeep.add_monitors import add_monitors
mp.verbosity(0)
def get_transmission_2ports(
component: Component,
extend_ports_length: Optional[float] = 4.0,
layer_core: int = 1,
layer_source: int = 110,
layer_monitor1: int = 101,
layer_monitor2: int = 102,
layer_simulation_region: int = 2,
res: int = 20,
t_clad_bot: float = 1.0,
t_core: float = 0.22,
t_clad_top: float = 1.0,
dpml: int = 1,
clad_material: Medium = mp.Medium(epsilon=2.25),
core_material: Medium = mp.Medium(epsilon=12),
is_3d: bool = False,
run: bool = True,
wavelengths: ndarray = np.linspace(1.5, 1.6, 50),
field_monitor_point: Tuple[int, int, int] = (0, 0, 0),
dfcen: float = 0.2,
) -> Dict[str, Any]:
"""Returns dict with Sparameters for a 2port gf.component
requires source and port monitors in the GDS
based on meep directional coupler example
https://meep.readthedocs.io/en/latest/Python_Tutorials/GDSII_Import/
https://support.lumerical.com/hc/en-us/articles/360042095873-Metamaterial-S-parameter-extraction
Args:
component: gf.Component
        extend_ports_length: length (um) by which to extend the component ports so they reach beyond the PML
layer_core: GDS layer for the Component material
layer_source: for the source monitor
layer_monitor1: monitor layer for port 1
layer_monitor2: monitor layer for port 2
layer_simulation_region: for simulation region
res: resolution (pixels/um) For example: (10: 100nm step size)
t_clad_bot: thickness for cladding below core
t_core: thickness of the core material
t_clad_top: thickness for cladding above core
dpml: PML thickness (um)
clad_material: material for cladding
core_material: material for core
is_3d: if True runs in 3D
run: if True runs simulation, False only build simulation
wavelengths: iterable of wavelengths to simulate
field_monitor_point: monitors the field and stops simulation after field decays by 1e-9
dfcen: delta frequency
Returns:
Dict:
sim: simulation object
    Make sure you visualize the simulation region with gf.show() before you simulate a component
.. code::
import gdsfactory as gf
import gmeep as gm
component = gf.components.bend_circular()
margin = 2
cm = gm.add_monitors(component)
cm.show()
"""
assert isinstance(
component, Component
), f"component needs to be a Component, got Type {type(component)}"
if extend_ports_length:
component = gf.components.extension.extend_ports(
component=component, length=extend_ports_length, centered=True
)
component.flatten()
gdspath = component.write_gds()
gdspath = str(gdspath)
freqs = 1 / wavelengths
fcen = np.mean(freqs)
frequency_width = dfcen * fcen
cell_thickness = dpml + t_clad_bot + t_core + t_clad_top + dpml
cell_zmax = 0.5 * cell_thickness if is_3d else 0
cell_zmin = -0.5 * cell_thickness if is_3d else 0
core_zmax = 0.5 * t_core if is_3d else 10
core_zmin = -0.5 * t_core if is_3d else -10
geometry = mp.get_GDSII_prisms(
core_material, gdspath, layer_core, core_zmin, core_zmax
)
cell = mp.GDSII_vol(gdspath, layer_core, cell_zmin, cell_zmax)
sim_region = mp.GDSII_vol(gdspath, layer_simulation_region, cell_zmin, cell_zmax)
cell.size = mp.Vector3(
sim_region.size[0] + 2 * dpml, sim_region.size[1] + 2 * dpml, sim_region.size[2]
)
cell_size = cell.size
zsim = t_core + t_clad_top + t_clad_bot + 2 * dpml
m_zmin = -zsim / 2
m_zmax = +zsim / 2
src_vol = mp.GDSII_vol(gdspath, layer_source, m_zmin, m_zmax)
sources = [
mp.EigenModeSource(
src=mp.GaussianSource(fcen, fwidth=frequency_width),
size=src_vol.size,
center=src_vol.center,
eig_band=1,
eig_parity=mp.NO_PARITY if is_3d else mp.EVEN_Y + mp.ODD_Z,
eig_match_freq=True,
)
]
sim = mp.Simulation(
resolution=res,
cell_size=cell_size,
boundary_layers=[mp.PML(dpml)],
sources=sources,
geometry=geometry,
default_material=clad_material,
)
sim_settings = dict(
resolution=res,
cell_size=cell_size,
fcen=fcen,
field_monitor_point=field_monitor_point,
layer_core=layer_core,
t_clad_bot=t_clad_bot,
t_core=t_core,
t_clad_top=t_clad_top,
is_3d=is_3d,
dmp=dpml,
)
m1_vol = mp.GDSII_vol(gdspath, layer_monitor1, m_zmin, m_zmax)
m2_vol = mp.GDSII_vol(gdspath, layer_monitor2, m_zmin, m_zmax)
m1 = sim.add_mode_monitor(
freqs,
mp.ModeRegion(center=m1_vol.center, size=m1_vol.size),
)
m1.z = 0
m2 = sim.add_mode_monitor(
freqs,
mp.ModeRegion(center=m2_vol.center, size=m2_vol.size),
)
m2.z = 0
# if 0:
# ''' Useful for debugging. '''
# sim.run(until=50)
# sim.plot2D(fields=mp.Ez)
# plt.show()
# quit()
r = dict(sim=sim, cell_size=cell_size, sim_settings=sim_settings)
if run:
sim.run(
until_after_sources=mp.stop_when_fields_decayed(
dt=50, c=mp.Ez, pt=field_monitor_point, decay_by=1e-9
)
)
        # check every dt=50 time steps whether the Ez field at
        # field_monitor_point has decayed below the 1e-9 threshold,
        # and stop the simulation once it has
# Calculate the mode overlaps
m1_results = sim.get_eigenmode_coefficients(m1, [1]).alpha
m2_results = sim.get_eigenmode_coefficients(m2, [1]).alpha
# Parse out the overlaps
a1 = m1_results[:, :, 0] # forward wave
b1 = m1_results[:, :, 1] # backward wave
a2 = m2_results[:, :, 0] # forward wave
# b2 = m2_results[:, :, 1] # backward wave
# Calculate the actual scattering parameters from the overlaps
s11 = np.squeeze(b1 / a1)
s12 = np.squeeze(a2 / a1)
s22 = s11.copy()
s21 = s12.copy()
# s22 and s21 requires another simulation, with the source on the other port
# Luckily, if the device is symmetric, we can assume that s22=s11 and s21=s12.
# visualize results
plt.figure()
plt.plot(
wavelengths,
10 * np.log10(np.abs(s11) ** 2),
"-o",
label="Reflection",
)
plt.plot(
wavelengths,
10 * np.log10(np.abs(s12) ** 2),
"-o",
label="Transmission",
)
plt.ylabel("Power (dB)")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.legend()
plt.grid(True)
r.update(dict(s11=s11, s12=s12, s21=s21, s22=s22, wavelengths=wavelengths))
    # the S-parameter keys in r are lowercase; startswith("s") would also
    # match "sim" and "sim_settings", so list them explicitly
    keys = ["s11", "s12", "s21", "s22"]
    s = {f"{key}a": list(np.unwrap(np.angle(r[key].flatten()))) for key in keys}
    s_mod = {f"{key}m": list(np.abs(r[key].flatten())) for key in keys}
    s.update(**s_mod)
    s = pd.DataFrame(s)
return r
def plot2D(results_dict, z=0):
"""Plot a 2D cut of your simulation."""
sim = results_dict["sim"]
cell_size = results_dict["cell_size"]
cell_size.z = 0
sim.plot2D(
output_plane=mp.Volume(center=mp.Vector3(), size=cell_size),
fields=mp.Ez,
field_parameters={"interpolation": "spline36", "cmap": "RdBu"},
)
def plot3D(results_dict):
"""Plots 3D simulation in Mayavi."""
sim = results_dict["sim"]
sim.plot3D()
def test_waveguide_2D() -> None:
"""Ensure >99% transmission (S21) at 1550nm."""
c = gf.components.straight(length=2)
cm = add_monitors(component=c)
# gf.show(cm)
r = get_transmission_2ports(cm, is_3d=False, run=True)
assert 0.99 < np.mean(abs(r["s21"])) < 1.01
assert 0 < np.mean(abs(r["s11"])) < 0.2
# def test_waveguide_3D() -> None:
# """Ensure >99% transmission (S21) at 1550nm."""
# c = gf.components.straight(length=2)
# cm = add_monitors(component=c)
# gf.show(cm)
# r = get_transmission_2ports(cm, is_3d=True, run=True, res=10)
# assert 0.99 < np.mean(abs(r["s21"])) < 1.01
# assert 0 < np.mean(abs(r["s11"])) < 0.2
def test_bend_2D():
"""Ensure >99% transmission (S21) at 1550nm."""
c = gf.components.bend_circular(radius=5)
cm = add_monitors(component=c)
# gf.show(cm)
r = get_transmission_2ports(cm, is_3d=False, run=True)
assert 0.97 < np.mean(abs(r["s21"])) < 1.01
assert 0 < np.mean(abs(r["s11"])) < 0.2
if __name__ == "__main__":
c = gf.components.straight(length=2)
cm = add_monitors(component=c)
gf.show(cm)
r = get_transmission_2ports(cm, run=True)
print(r)
# sim = r["sim"]
# plt.show()
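The dB conversion used in the plots above (10*log10 of |S|^2) can be sanity-checked standalone; a minimal pure-Python sketch with hypothetical coefficient values:

```python
import math

def s_param_db(s):
    """Power carried by a complex scattering coefficient, in dB: 10*log10(|s|^2)."""
    return 10 * math.log10(abs(s) ** 2)

s21 = 1.0 + 0.0j   # hypothetical lossless transmission coefficient
s11 = 0.01 + 0.0j  # hypothetical small reflection coefficient
print(round(s_param_db(s21), 6))  # -> 0.0
print(round(s_param_db(s11), 6))  # -> -40.0
```

A unit-magnitude coefficient maps to 0 dB and a 1% reflection to -40 dB, matching the scale of the Reflection/Transmission curves plotted in get_transmission_2ports.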
|
More than 100,000 people attended the funeral for Rabbi Nosson Tzvi Finkel, head of the Mir Yeshiva in Jerusalem, who died Tuesday at the age of 68 after suffering cardiac arrest at his home. He had also suffered from Parkinson’s disease.
According to Israeli news sources, the packed funeral in Jerusalem caused disruptions to the city's light rail service.
A native of Chicago, Finkel was a descendant of a rabbinic dynasty connected to the Slabodka Yeshiva in Lithuania. He assumed leadership of the Mir in 1990, following the death of his father-in-law, Rabbi Beinish Finkel, becoming one of the few American-born rabbinical leaders in Israel.
Under Finkel's leadership, the Mir became one of the largest rabbinical academies in the world, with approximately 6,000 students. He is to be succeeded by his son, Rabbi Eliezer Yehudah Finkel.
This story "100,000 Mourn Rabbi Finkel of Mir Yeshiva" was written by Forward Staff. |
Book Review: Beyond the Anti-Group: Survival and Transformation. Two more aspects: I enjoyed the personal/professional disclosures by the author, indeed the wider questions of how life development influences the therapist as he/she navigates his/her way through the uncertain and often unpredictable terrain of the psychotherapy group (p.138). The idea of group templates within our histories is a helpful one that can be used in training. Finally, I approve of his return to the subject of desire in the chapter, Falling in Love. Its vignettes, as elsewhere, are crystal clear, and, as such an important life experience, desire requires our continuing attention, including at the level of theory. The same is true of human playfulness, as expressed through art, in the wider sense of that term. So it is good that his book leads us into such areas in the latter chapters. My appreciation of the book will be apparent, and I applaud his overall case that, as a discipline, we need fresh conceptualization and firmer grounding. I do have a quibble about its title, and wondered why not something more affirmative, like The Creative Group? After all, he talks of Winnicott's squiggle game and of artist Paul Klee's taking a line for a walk. Imaginative references indeed, as is Nitsun's new contribution. It will be interesting to see where his ideas (and ours, in engagement, hopefully) walk next. |
from queue import Queue, Empty
from threading import Thread


class IteratorWithAggregation:
"""
An iterable over an iterable which also makes an aggregate of the values available asap
It iterates over the iterable in a separate thread.
A use case is a generator which collects information about resources,
which might be relatively fast but still take time. While we are iterating over it,
we could perform other operations on yielded records, but we would also like to have access to
the "summary" object as soon as that iterator completes but while we might still be
iterating over items in the outside loop.
Use case: iterate over remote resource for downloads, and get "Total" size/number as
soon as it becomes known inside the underlying iterator.
TODO: probably could be more elegant etc if implemented via async/coroutines.
Attributes
----------
.total:
Aggregated value as known to the moment. None if nothing was aggregated.
It is a final value if `finished` is True.
.finished: bool
Set to True upon completion of iteration
    ._exc: BaseException or None
        If not None -- the exception which was raised
Example
-------
Very simplistic example, since typically (not range) it would be taking some time to
iterate for the nested iteration::
it = IteratorWithAggregation(range(3), lambda v, t=0: v+t)
for v in it:
print(it.total, it.finished, v)
sleep(0.02) # doing smth heavy, but we would know .total as soon as it is known
would produce (so 3 is known right away, again since it is just range)
3 True 0
3 True 1
3 True 2
"""
def __init__(self, gen, agg, reraise_immediately=False):
"""
Parameters
----------
gen: iterable
Generator (but could be any iterable, but it would not make much sense)
to yield from
agg: callable
A callable with two args: new_value[, total=None] which should return adjusted
total. Upon first iteration, no prior `total` is provided
reraise_immediately: bool, optional
If True, it would stop yielding values as soon as it detects that some
exception has occurred (although there might still be values in the queue to be yielded
which were collected before the exception was raised)
"""
self.gen = gen
self.agg = agg
self.reraise_immediately = reraise_immediately
self.total = None
self.finished = None
self._exc = None
def __iter__(self):
self.finished = False
self._exc = None
queue = Queue()
def worker():
"""That is the one which interrogates gen and places total
into queue_total upon completion"""
total = None
try:
for value in self.gen:
queue.put(value)
self.total = total = (
self.agg(value, total) if total is not None else self.agg(value)
)
except BaseException as e:
self._exc = e
finally:
self.finished = True
t = Thread(target=worker)
t.start()
# yield from the queue (.total and .finished could be accessed meanwhile)
while True:
if self.reraise_immediately and self._exc is not None:
break
            # race condition HERE between checking self.finished and the worker
            # thread finishing; the short get() timeout below keeps us from blocking
if self.finished and queue.empty():
break
# in general queue should not be empty, but if it is, e.g. due to race
# condition with above check
try:
yield queue.get(timeout=0.001)
except Empty:
continue
t.join()
if self._exc is not None:
raise self._exc |
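A self-contained sketch of the same worker-thread/queue pattern (standalone, so it does not depend on the class above; exception propagation is omitted for brevity):

```python
from queue import Queue, Empty
from threading import Thread

def iterate_with_total(gen, agg):
    """Minimal sketch of the pattern above: consume `gen` in a worker thread,
    yield its values from a queue, and keep a running aggregate in `state`."""
    q = Queue()
    state = {"total": None, "finished": False}

    def worker():
        total = None
        for value in gen:
            q.put(value)
            total = agg(value, total) if total is not None else agg(value)
            state["total"] = total
        state["finished"] = True

    Thread(target=worker).start()
    while not (state["finished"] and q.empty()):
        try:
            yield q.get(timeout=0.001)
        except Empty:
            continue

values = list(iterate_with_total(iter([5, 10, 20]), lambda v, t=0: v + t))
print(values)  # -> [5, 10, 20]
```

As in the class above, the aggregate (here state["total"]) typically reaches its final value well before the consumer finishes iterating.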
Like high-rise architecture, Britpop and Twitter, plastic football pitches made everyone feel very modern and oh so pleased with themselves when they first came out. But like high-rise architecture, Britpop and Twitter, it was not long before it became apparent that plastic football pitches were dreadfully regressive, systematically ruining everything they purported to be trying to improve – society, music, society, sport – an empty epoch, a despicable sham.
The football played on plastic pitches during the 1980s was egregious reductive nonsense, the surface immediately giving the artless a morally dubious advantage over the artisan. In 1989-90, for example, a poor Luton Town side could boast a better home record than both Manchester United and fifth-placed Chelsea, yet on their travels from plastic-clad Kenilworth Road, had the second-worst away record in the entire division and only stayed up on goal difference.
Speaking of Luton, here's a little karmic payback for all those sides turned over on the special surface at Kenilworth. Roy Wegerle has put the home side one up with as good a piece of pure striking skill as you'll see, a simple shift inside from the left and a pinpoint belter into the bottom-right corner, Aston Villa's goalkeeper Nigel Spink given no chance. But late in the day Spink launches a long punt upfield, forcing Marvin Johnson into a panicked attempt at a backpass as the ball rears up erratically. The resulting arc, looping over a perplexed Les Sealey, is as unnatural as they come. Or, indeed, as the pitch itself.
But for proper nuclear-power karmic retribution, full-scale fire-and-brimstone stuff, look no further than the greatest tournament of them all. After 116 minutes of a tumultuous 1986 World Cup quarter-final, Brazil and France were tied at 1-1 and facing penalties. It was at this point that Michel Platini stroked a pass down the middle to set Bruno Bellone free on the Brazil goal. The goalkeeper Carlos, about to be rounded on the edge of his area, raced from the box and embraced both Bellone and moral turpitude, intricate samba rhythms making way for Oi! music. Carlos should have been sent packing, but the referee Ioan Igna merely shrugged his shoulders and waved play on.
Brazil had survived to make the penalty shoot-out, but the reckoning was not long in coming. Bellone was one of France's penalty takers, and he dispatched his effort against the bottom of the right-hand post. Unfortunately for Brazil, the ball came bouncing back inside, whereupon it met the back of the diving Carlos, who had guessed the right way but was way too late. The ball pinged back into the net, justice done. The freakiest goal in a major international fixture.
Or was it? Because the 1978 World Cup threw up two humdingers to compare, though neither of them led to success for the lucky recipients. The tournament was only a day old when Italy scored a classic of the kind in their opener against France, Roberto Bettega redirecting a wild sliced shot on to the woodwork with his head, before Marius Tresor's clearance ricocheted towards Paolo Rossi, who eventually prodded home. Italy had conceded in the opening 40 seconds of their campaign, but this goal settled their nerves and sent them on a journey that would take them to within 41 minutes of the final, Holland eventually seeing them off with two long-range rakes.
Potentially more crucial was the goal hammered in by the splendidly monikered Roberto Dinamite in Brazil's final game of the second group stage that year. The player had been drafted into the team midway through the tournament over the head of the manager Cláudio Coutinho by the CBF's technical committee, and for once the suits seem to have made a decent decision, the player scoring twice in a 3-1 win over Poland. His second came at the end of an amazing sequence, the Polish post being hit twice by Mendonça and the crossbar once by Gil within the space of a few seconds, before our man exploded into action and fired the ball into the net. The goal meant Argentina had to score at least four goals in their final game against Peru to reach the final, and we all know how that ended up, rendering Dinamite's goal effectively a damp squib.
What goes round comes around: this is fast becoming a theme of the freaky goal, and a slightly disconcerting one now we come to think about it. Here's Tim Flowers being hoist by his own petard after hacking lumps out of his own penalty area in order to orientate himself. Stan Collymore advances towards the area and unleashes a shot at a speed that would shame a pensioner playing carpet bowls. No matter: just before the ball reaches the waiting Flowers, it rears up off one of the keeper's divots, over his shoulder, and into the net.
Later in the year, Steve McManaman hit another David Bryant-style effort against Tottenham, a lame effort that bucked up off the surface just before it reached the diving Ian Walker and apologetically settled into the net. "It's all thanks to our portable divot," laughed the manager Roy Evans, knowing full well Liverpool's luck was in, but would no doubt come back to bite them on the buttocks at some point. It took a while – nearly 13 years, in fact – but Liverpool finally found themselves on the receiving end with Sunderland striker Darren Bent's infamous billiard shot off a beach ball and past a heel-rocking Pepe Reina.
Collymore, incidentally, was never the most self-aware of players, but at least he had the good grace to look sheepish when his effort went in. Compare and contrast to the shameless Bent, who raced off as though he'd just recreated Eddie Gray's second against Burnley in 1970, only using a house brick while wearing slippers.
This is just stupid.
Roberto Dinamite's strike apart – and his was a standard-issue culmination to a strange move, rather than a freak scene of its own – all our goals have been scrappy affairs. We finish with a real peach, though, a once-in-a-million outcome that was both skilful and aesthetically satisfying.
Perfectly timed, too. England had been suffering a hellish 1995, which started with the Lansdowne Road riot, followed by a dire goalless draw at Wembley against Uruguay in March. Come the summer, and the four-team Umbro Cup tournament, a dress rehearsal for the following year's European Championships. They didn't start well for England, who required a late winner to see off Japan. Then, in their second game, they went 2-0 then 3-1 down against Sweden at Elland Road, and with two minutes to go were facing their worst defeat on home soil since the infamous thrashing by West Germany in 1972.
Paul Gascoigne set David Platt up to ensure England avoided that ignominy, and then in the final throes of the game, Darren Anderton stepped up and lashed a shot towards the left-hand side of goal. The ball twanged off the post, flew in mid-air along the route of the goalline, hit the right-hand post flush, before spinning into the net at speed.
England would still take time to improve for Euro 96 – indeed they didn't find anything approaching top gear until 135 minutes of their tournament had elapsed – but this strike snapped the nation out of its depression and gave it a sense of wonder again. A geometric pleasure, one of the loveliest goals ever scored by England. |
Remember the offseason? The Miami Dolphins organization was flying high in the offseason.
The afterglow of a 10-6 run to the playoffs in the winter of 2016 was still fresh and the possibilities for greater things in 2017 seemed within reach. So the Dolphins went into the 2017 offseason with a mission of consolidating if not building on 2016’s success.
Except — and here’s the point of this post — that didn’t happen.
The two dozen or so offseason moves the team made between March and the start of the regular season have paid off handsomely in only a few instances. Mostly those moves have led to decreased salary cap space, uneven individual performances on the field, and obviously a disappointing 4-6 record.
It needs to be said that of the Dolphins’ 10 highest salary cap hits this season, eight belong to players addressed this offseason.
The team did work to sign, re-sign or renegotiate deals for Jay Cutler, Cameron Wake, Julius Thomas, Andre Branch, William Hayes, Kenny Stills, Reshad Jones and Lawrence Timmons. The magnificent seven plus one — sort of.
Yes, there were other moves but let’s first look at these eight that are atop the Dolphins salary cap structure.
Jay Cutler: When Ryan Tannehill blew out his knee (again) the first week of training camp the Dolphins faced a season-defining decision. They could turn the team over to backup Matt Moore and sign a viable backup for $2-$3 million, or chase Cutler to become the starter. Coach Adam Gase wanted Cutler and no one in the organization pushed back hard enough to dissuade him.
So the Dolphins signed Cutler to be the new starter. The problem is the signing cost $10 million against the cap this year. And regardless of whether that is a bargain price for a starting quarterback or not, the Dolphins decided to cash all their checks for 2017 because it left them with practically no cap room.
The Dolphins right now have $607,148 in salary cap room according to the NFL Players Association and that is the least cap room of any AFC team and the second least in the NFL behind only the Seattle Seahawks.
That hasn’t necessarily affected the team’s ability to do business this year — except at the October trade deadline when Miami couldn’t afford to take on any salary burden from an incoming veteran.
But the issue will matter for 2018 because cap room can be carried over. And so if the Dolphins had not spent those $10 million on Cutler and signed a backup instead, for example, they would have an extra $7-$8 million in cap space to carry over next year.
The New York Jets will be carrying over approximately $17 million from this season to next.
The Buffalo Bills will be carrying over approximately $12 million from this season to next.
The New England Patriots will be carrying over approximately $3.8 million.
The Dolphins will be carrying over that $600,000 or so they currently have.
So that one move, the Cutler signing, might affect the Dolphins ability to add a couple of starters in free agency next season. Said another way, adding Cutler this season might hurt Miami’s ability to compete both in free agency and on the field next season.
It would all be worthwhile if Cutler were leading a Dolphins playoff charge this season, or at least having a great one. But neither of those is true.
Cutler, like most of the rest of the players in the Miami locker room, has had an inconsistent season. Up. And down.
The Miami offense is 31st in points scored.
Cutler is ranked 25th among 35 quarterbacks who have started games this season.
So has Cutler delivered good return on the investment? That speaks for itself.
Before Cutler was added, the Dolphins did some heavy lifting to improve the tight end position in the offseason. The team signed Anthony Fasano to a one-year deal. And Miami traded away a seventh-round draft pick to Jacksonville in exchange for Julius Thomas and then redid the tight end’s contract.
Fasano has been a good addition. It’s a winning offseason move. He’s only caught seven passes and scored one touchdown but he’s a good blocker and was actually the team’s best tight end early in the year. So the team is more or less getting its $2.75 million worth from Fasano.
Not so with Thomas.
Julius Thomas: He is costing $5.6 million against the cap, which is the fifth-highest cap number on the active roster, and has not lived up to expectations so far.
He has caught 29 passes for 290 yards with two touchdowns. But he has dropped a handful of passes, he has time and again failed to win one-on-one matchups, particularly in the red zone even against smaller defensive backs. He also rarely breaks a tackle.
Multiple times Gase has said Thomas hasn’t had better games because the coverage has prevented it. It seems more like Thomas simply doesn’t have the kind of speed he had when Gase coached him in Denver as late as 2014.
This weekend the Dolphins play the New England Patriots, who will come into the game with one and perhaps two productive tight ends.
Coverages don’t seem to often limit Rob Gronkowski and Martellus Bennett.
The Dolphins may argue Thomas was worth the cost. Actually, they don’t argue this at all because the team already believes he won’t be on the roster in 2018 for the final season of his contract.
At age 30, Thomas can be on the Miami 2018 roster at a cost of $6.6 million. The Dolphins can also cut Thomas and save the entire amount.
I’ll give you one guess which the Dolphins will do.
Andre Branch, Miami Dolphins DE, says the sound of pads hitting pads is the sound of the start of football season; today was the first day of full pads.
Andre Branch: The Dolphins did an absolutely fabulous job adding Branch late in the 2016 offseason. They paid him $2.75 million and he rewarded them with 5.5 sacks, two forced fumbles, and 49 total tackles. He went from part-time player through six games to full-time starter afterward.
Branch had single-game tackle highs of six against Arizona, five against Buffalo and San Francisco, and four against San Diego. He factored.
So the Dolphins rewarded the defensive end with a three-year, $24 million contract this offseason. The team obviously believed a player in a contract year delivering career-best numbers was more what Branch is than his previous seasons in Jacksonville when he had been something of a disappointment.
And to the apparent surprise of everyone within the Dolphins organization, Branch has regressed to the mean now that he has his new contract.
He has 16 total tackles in nine games. He has three sacks. He hasn’t had a sack since Oct. 8.
Branch this year is producing much like the player that was a disappointment in Jacksonville in 2014 and ’15 except now he’s making $8 million per season.
Miami Dolphins LB Lawrence Timmons (94) returns to practice at the Miami Dolphins training facility in Davie, Fla., Sept. 27, 2017. Timmons disappeared at the beginning of the regular season. CHARLES TRAINOR JR ctrainor@miamiherald.com
Lawrence Timmons: Forget that Timmons embarrassed the Dolphins and himself by leaving the team one day before the season opener. That alone casts doubt on the two-year, $12 million contract the 31-year-old Pittsburgh Steelers discard got from Miami.
The bigger trouble is Timmons is not a long-term solution to any issue and he is failing as a short-term solution because, lately, he is playing as if he’s worn out.
Timmons was signed to be a three-down linebacker. The problem is when he finally got back from his AWOL episode, which included a one-game suspension, Timmons played like a two-down linebacker who could get by on run downs but was exposed in pass coverage.
The issue with that? Timmons was exposed in pass coverage last season with the Steelers. And somehow the Dolphins thought that was a fluke or correctable.
The Steelers liked Timmons for his toughness, which he still has, and his occasional ability to blitz, which he can still do. But cover running backs or slot receivers or tight ends?
Nope.
Timmons last week became a two-down player as the Dolphins tried to introduce Stephone Anthony into the pass coverage role on third down.
The problem with the Timmons deal is next year, at age 32, he’s not likely to be a better player. He’s trending in the wrong direction. But the Dolphins are on the hook for an $8.2 million cap hit if he’s on the team.
Now this is where the Dolphins have gotten lucky: With Timmons going AWOL, he had to forfeit the guaranteed portion of his 2018 salary to get back on the team, a fact I reported first several months ago. So the Dolphins can cut Timmons this offseason and because earlier this season he left the team without permission, he would cost the team only $2.75 million in dead money rather than the $7.25 million he would have cost otherwise.
Timmons, a mistake signing last offseason, will likely be cut in this coming offseason.
Miami Dolphins DE William Hayes 95, runs a defensive drill during training camp at the Miami Dolphins training facility in Davie, Fla., July 31, 2017. CHARLES TRAINOR JR ctrainor@miamiherald.com
William Hayes: It was a good trade for the Dolphins. Hayes has filled the role he was hired to fill — as an expert run stopper — to virtual perfection. He is a free agent after the season. My guess is the Dolphins would like to re-sign Hayes to do the same job after this season is over.
That would be a good move as long as, you know, the team doesn’t overpay a player who will be 33 next season.
Dilly, Dilly!
Miami Dolphins wide receivers Kenny Stills and Jarvis Landry talk about their loss to the Tampa Bay Bucs.
Kenny Stills: I didn’t love this re-signing at this price because I thought the Dolphins could get about the same production out of, say, Marquise Goodwin — the player I suggested as an option at the time — for about 1/8th the price. And, make no mistake, Goodwin has been solid for the San Francisco 49ers while having no real quarterback and playing for less than $2 million.
But I was nonetheless wrong.
Miami Dolphins Kenny Stills (10) stiff arms Tampa Bay Buccaneers Ryan Smith (29) in the second quarter at Hard Rock Stadium in Miami Gardens, Fla., Nov. 19, 2017. CHARLES TRAINOR JR ctrainor@miamiherald.com
Stills has been good on and off the field for the Dolphins.
Stills leads the team in receiving yards (588) and yards per catch (14.7). His five touchdowns are on pace with the nine he scored last season. And with six games to play he has 40 catches compared to the 42 he had all last year.
All told this was a good football signing considering one other team offered Stills $10 million per season and the Dolphins got him for $8 million per season.
Off the field, Stills is a quiet leader. He took it upon himself to tutor Jakeem Grant and Leonte Carroo in the offseason. No, it hasn’t paid dividends but it’s admirable. He’s always at the team facility and enthusiastically tried to convince visiting free agents to sign with the Dolphins.
Stills also is good in the community and although that does not factor one iota into the win-loss record, it should be acknowledged. On a team that had a player go AWOL and another who was arrested at a club when he was still out partying at 8:30 in the morning, Stills offers a strong counter argument to that influence within the locker room.
So this move has been a good one by the Dolphins so far.
Miami Dolphins Kiko Alonso (47) tackles Tennessee Titans Rishard Mathews (18) in the second quarter at Hard Rock Stadium in Miami Gardens, Fla., Oct. 8, 2017. ROY VIERA For The Herald
Extending Kiko Alonso and Reshad Jones: The Dolphins didn't have to make either of these moves. But I understand the reasoning, including the idea that owner Stephen Ross promised Jones a new deal the previous year. Sometimes moves get done from higher sources.
Anyway, Alonso would have been a restricted free agent. The Dolphins could have put a first-round tender, worth about $3.91 million, on Alonso and any team wanting to sign him would have had to give Miami a first-round pick if the Dolphins didn’t match the offer.
The Dolphins, however, valued Alonso so much they signed him to a four-year contract costing $28.9 million. Cool. Good value overall.
Alonso leads the team in tackles and is second in tackles for losses. But he’s not as good in coverage this season as last season and he’s not delivering turnovers like he did last season when he was responsible for seven, including a game-winning interception at San Diego. This season Alonso has played a role in two turnovers.
The Dolphins believe Alonso’s coverage problems are a result of his desire to cover up failings by Timmons, so there’s that. They consider Alonso a cornerstone on the defense.
But that doesn’t change the fact the Dolphins are paying more while getting fewer big plays from Alonso so far this season.
Miami Dolphins Reshad Jones (20) tackles a New York Jets player at Hard Rock Stadium in Miami Gardens, Fla., Oct. 22, 2017. CHARLES TRAINOR JR ctrainor@miamiherald.com
Same story with Jones. He is the team’s second-leading tackler. He scored a touchdown on a fumble return against Tennessee and sealed the victory in Atlanta with a last-minute interception of Matt Ryan.
Big plays.
And that’s what Jones has promised and often delivered the last few years.
But that and more is what’s supposed to happen because Jones had his contract renegotiated this offseason and he went from an $8 million-a-year player to a $12 million-a-year player.
Jones is having a good season. No issue there.
But is he having a season that’s $4 million better than his 2015 or even injury-shortened 2016?
Both the Alonso and Jones extensions were not wrong. They were not bad.
But great work? Awesome? Prescient?
Debatable.
(OK, it’s Thanksgiving. Give yourself a break. Go get some turkey and stuffing or something. Then come back and read the rest. If you’re reading this after your meal and haven’t gotten indigestion, then read on).
The offensive line moves and philosophy behind them: This is a big one because a majority of the problems the oft-troubled Dolphins offense is enduring have roots in the offensive line.
Let’s begin with the move to cut Branden Albert and move Laremy Tunsil from left guard to left tackle. The Dolphins felt this was a no-brainer because Albert was declining but his cap number was not. So they saved approximately $7 million in cap space by cutting Albert.
I have no issue whatsoever with this so far.
But here’s my problem: Knowing that Tunsil, a second-year player and first-year starting left tackle, was still young and inexperienced, the team undervalued the need to help him by pairing him with an excellent guard.
The Dolphins actually undervalued guards, period. They didn’t want to pay for the position. And they didn’t use a premium draft pick for the position, either.
So they paid Ted Larsen approximately $5.65 million for two years as almost an afterthought move. The team was clearly blindsided by how fast and how high guard salaries soared in free agency.
The move that should have been made is either spending the $7 million saved on Albert on a premium guard or drafting a guard with a premium pick. I would have opted for drafting that guard.
Instead, Larsen seems like an afterthought signing and the drafting of Isaac Asiata on the third day of the draft seems like an afterthought selection. Larsen is serviceable at best and Asiata is getting a “redshirt” year to improve his strength even though he’s going to be 25 years old next month.
The result of what the Dolphins did? Tunsil, a young and unestablished NFL left tackle, hasn’t had a quality veteran presence next to him to guide him as Albert did last year. And so the left side of the Miami offensive line, expected to be a strength, has not been.
The Dolphins at the end of last season expected to move on from Jermon Bushrod. That was the plan. But, again, the price of free agent guards soared.
And so suddenly, Bushrod for $3 million seemed like a good idea. Bushrod is a stopgap. He’s great in the locker room but not as much on the field where he should be getting more double-team help in pass protection but isn’t.
One more offensive line issue that didn’t come up in offseason moves but was an offseason conversation: The Dolphins banked center Mike Pouncey would be the same player this year as he’s been in the past. He’s not. His pass protection remains excellent. His run blocking has declined.
He’s due to cost $9 million against the cap next season but the team can save $7 million by cutting him. That is going to require a very lengthy and difficult conversation among the team’s braintrust.
T.J. McDonald: The grade here is incomplete because the guy has played all of two games. The Dolphins believe they’ve added an elite safety at a bargain $500,000 for eight games this season and $6 million per season for four years after that. Safeties of his caliber can get $8 million per year so the Dolphins believe they got a bargain.
If McDonald plays as advertised and stays out of trouble after serving his suspension, then yes, enlightened move.
The 2017 draft: First the really, really, really good: Davon Godchaux has been very good when one measures … A. The team’s need at the position. B. The fact he was found in the fifth round. C. The fact Godchaux should be the team’s rookie of the year and on some plays is the team’s best defensive lineman.
Davon Godchaux is at this stage the team’s best draft pick in 2017.
The Dolphins boast they have had up to three rookies starting on defense this year and up to six rookies (including undrafted rookies) actually playing snaps. They see activity as a good thing.
I see achievement as a good thing.
Defensive end Charles Harris, drafted in the first round, has one sack. He flashes on occasion and other times not so much. The coaches love him.
Fans will love him when/if he’s a double-digit sack guy. And I side with the fans because, again, achievement is more important than activity.
Linebacker Raekwon McMillan, drafted in the second round, is out for the season. Incomplete grade.
Cornerback Cordrea Tankersley, drafted in the third round, is starting now. He’s coming off a good game against Tampa Bay. He’s struggled in multiple other games. In my view, he’s ahead of schedule so this seems like a good pick.
Guard Isaac Asiata, drafted in the fifth round, was mentioned above.
Defensive tackle Vincent Taylor is not as far along as Godchaux but he’s shown good value as a part-time player. Very good third day of the draft selection.
Overall: The point of an offseason is to improve a team. Judging it by another way is spin. The Dolphins earlier this season thought they used the offseason to “fix” the defense only to find their defense declining the past five games. The team used the offseason to keep players it valued at home. Except several of those players are delivering less than they did last season.
Yes, some things were done well.
And some moves probably shouldn’t be judged too harshly because a turnaround is still possible and we don’t know what might happen long term.
But there is no doubt the Dolphins, looking to hit a home run last offseason, are having to settle for much less. |
/*****************************************************************************
* *
* OpenNI 1.x Alpha *
* Copyright (C) 2012 PrimeSense Ltd. *
* *
* This file is part of OpenNI. *
* *
* Licensed under the Apache License, Version 2.0 (the "License"); *
* you may not use this file except in compliance with the License. *
* You may obtain a copy of the License at *
* *
* http://www.apache.org/licenses/LICENSE-2.0 *
* *
* Unless required by applicable law or agreed to in writing, software *
* distributed under the License is distributed on an "AS IS" BASIS, *
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *
* See the License for the specific language governing permissions and *
* limitations under the License. *
* *
*****************************************************************************/
//---------------------------------------------------------------------------
// Includes
//---------------------------------------------------------------------------
#include <XnOS.h>
#include <XnLog.h>
#include "XnOSWin32Internal.h"
//---------------------------------------------------------------------------
// Code
//---------------------------------------------------------------------------
XN_C_API XnStatus XN_C_DECL xnOSCreateEvent(XN_EVENT_HANDLE* pEventHandle, XnBool bManualReset)
{
return (xnOSCreateNamedEvent(pEventHandle, NULL, bManualReset));
}
XN_C_API XnStatus XN_C_DECL xnOSCreateNamedEvent(XN_EVENT_HANDLE* pEventHandle, const XnChar* cpEventName, XnBool bManualReset)
{
return xnOSCreateNamedEventEx(pEventHandle, cpEventName, bManualReset, FALSE);
}
XN_C_API XnStatus XN_C_DECL xnOSCreateNamedEventEx(XN_EVENT_HANDLE* pEventHandle, const XnChar* cpEventName, XnBool bManualReset, XnBool bAllowOtherUsers)
{
// Local function variables
XnStatus nRetVal = XN_STATUS_OK;
// Validate the input/output pointers (to make sure none of them is NULL)
XN_VALIDATE_OUTPUT_PTR(pEventHandle);
XnChar strEventOSName[MAX_PATH];
XnChar* pEventOSName = NULL;
SECURITY_ATTRIBUTES* pSecurityAttributes = NULL;
if (cpEventName != NULL)
{
nRetVal = XnWin32CreateKernelObjectName(strEventOSName, MAX_PATH, cpEventName, bAllowOtherUsers);
if (nRetVal != XN_STATUS_OK)
{
return XN_STATUS_OS_EVENT_CREATION_FAILED;
}
pEventOSName = strEventOSName;
nRetVal = XnWin32GetSecurityAttributes(bAllowOtherUsers, &pSecurityAttributes);
if (nRetVal != XN_STATUS_OK)
{
// Note: this is event creation, so report an event error, not a mutex one
return XN_STATUS_OS_EVENT_CREATION_FAILED;
}
}
// Create a named event via the OS
*pEventHandle = CreateEvent(pSecurityAttributes, bManualReset, FALSE, pEventOSName);
// Make sure it succeeded (return value is not null)
if (*pEventHandle == NULL)
{
xnLogError(XN_MASK_OS, "CreateEvent() failed with error %u", GetLastError());
return XN_STATUS_OS_EVENT_CREATION_FAILED;
}
// All is good...
return (XN_STATUS_OK);
}
XN_C_API XnStatus XN_C_DECL xnOSOpenNamedEvent(XN_EVENT_HANDLE* pEventHandle, const XnChar* cpEventName)
{
return xnOSOpenNamedEventEx(pEventHandle, cpEventName, FALSE);
}
XN_C_API XnStatus XN_C_DECL xnOSOpenNamedEventEx(XN_EVENT_HANDLE* pEventHandle, const XnChar* cpEventName, XnBool bAllowOtherUsers)
{
XnStatus nRetVal = XN_STATUS_OK;
XN_VALIDATE_INPUT_PTR(cpEventName);
XN_VALIDATE_OUTPUT_PTR(pEventHandle);
XnChar strEventOSName[MAX_PATH];
nRetVal = XnWin32CreateKernelObjectName(strEventOSName, MAX_PATH, cpEventName, bAllowOtherUsers);
if (nRetVal != XN_STATUS_OK)
{
return XN_STATUS_OS_EVENT_CREATION_FAILED;
}
// Open using the constructed kernel object name (strEventOSName), not the raw input name
*pEventHandle = OpenEvent(EVENT_MODIFY_STATE | SYNCHRONIZE, FALSE, strEventOSName);
if (*pEventHandle == NULL)
{
return XN_STATUS_OS_EVENT_OPEN_FAILED;
}
return (XN_STATUS_OK);
}
XN_C_API XnStatus xnOSCloseEvent(XN_EVENT_HANDLE* pEventHandle)
{
// Local function variables
XnBool bRetVal = FALSE;
// Validate the input/output pointers (to make sure none of them is NULL)
XN_VALIDATE_INPUT_PTR(pEventHandle);
// Make sure the actual event handle isn't NULL
XN_RET_IF_NULL(*pEventHandle, XN_STATUS_OS_INVALID_EVENT);
// Close the event via the OS
bRetVal = CloseHandle(*pEventHandle);
// Make sure it succeeded (return value is true)
if (bRetVal != TRUE)
{
xnLogVerbose(XN_MASK_OS, "CloseHandle() failed with error %u", GetLastError());
return (XN_STATUS_OS_EVENT_CLOSE_FAILED);
}
// Null the output event
*pEventHandle = NULL;
// All is good...
return (XN_STATUS_OK);
}
XN_C_API XnStatus xnOSSetEvent(const XN_EVENT_HANDLE EventHandle)
{
// Local function variables
XnBool bRetVal = FALSE;
// Make sure the actual event handle isn't NULL
XN_RET_IF_NULL(EventHandle, XN_STATUS_OS_INVALID_EVENT);
// Set the event via the OS
bRetVal = SetEvent(EventHandle);
// Make sure it succeeded (return value is true)
if (bRetVal != TRUE)
{
xnLogVerbose(XN_MASK_OS, "SetEvent() failed with error %u", GetLastError());
return (XN_STATUS_OS_EVENT_SET_FAILED);
}
// All is good...
return (XN_STATUS_OK);
}
XN_C_API XnStatus xnOSResetEvent(const XN_EVENT_HANDLE EventHandle)
{
// Local function variables
XnBool bRetVal = FALSE;
// Make sure the actual event handle isn't NULL
XN_RET_IF_NULL(EventHandle, XN_STATUS_OS_INVALID_EVENT);
// Reset the event via the OS
bRetVal = ResetEvent(EventHandle);
// Make sure it succeeded (return value is true)
if (bRetVal != TRUE)
{
xnLogVerbose(XN_MASK_OS, "ResetEvent() failed with error %u", GetLastError());
return (XN_STATUS_OS_EVENT_RESET_FAILED);
}
// All is good...
return (XN_STATUS_OK);
}
XN_C_API XnStatus xnOSWaitEvent(const XN_EVENT_HANDLE EventHandle, XnUInt32 nMilliseconds)
{
// Local function variables
DWORD nRetVal = 0;
// Make sure the actual event handle isn't NULL
XN_RET_IF_NULL(EventHandle, XN_STATUS_OS_INVALID_EVENT);
// Wait for the event for a period if time (can be infinite)
nRetVal = WaitForSingleObject(EventHandle, nMilliseconds);
// Check the return value (WAIT_OBJECT_0 is OK)
if (nRetVal != WAIT_OBJECT_0)
{
// Handle the timeout failure
if (nRetVal == WAIT_TIMEOUT)
{
return (XN_STATUS_OS_EVENT_TIMEOUT);
}
else
{
xnLogVerbose(XN_MASK_OS, "WaitForSingleObject() failed with error %u", GetLastError());
return (XN_STATUS_OS_EVENT_WAIT_FAILED);
}
}
// All is good...
return (XN_STATUS_OK);
}
XN_C_API XnBool xnOSIsEventSet(const XN_EVENT_HANDLE EventHandle)
{
return (xnOSWaitEvent(EventHandle, 0) == XN_STATUS_OK);
}
|
#include <stdio.h>
#include "phli.h"

int php3_rshutdown_xml(void);

int main(int argc, char* argv[]) {
	const char *result_dir = "/data/phli/results/php3_rshutdown_xml/";
	int result;

	(void)argc;
	(void)argv;
	(void)result_dir;  /* kept from the original harness; unused here */

	/* Invoke the function under test and report its status */
	result = php3_rshutdown_xml();
	printf("php3_rshutdown_xml() returned %d\n", result);
	return 0;
}
// ATTiny support code is from https://github.com/jscrane/RF24
/**
* @file spi.h
* \cond HIDDEN_SYMBOLS
* Class declaration for SPI helper files
*/
#include <stdio.h>
#include <Arduino.h>
#include <avr/pgmspace.h>
#define SPI_CLOCK_DIV4 0x00
#define SPI_CLOCK_DIV16 0x01
#define SPI_CLOCK_DIV64 0x02
#define SPI_CLOCK_DIV128 0x03
#define SPI_CLOCK_DIV2 0x04
#define SPI_CLOCK_DIV8 0x05
#define SPI_CLOCK_DIV32 0x06
//#define SPI_CLOCK_DIV64 0x07
#define SPI_MODE0 0x00
#define SPI_MODE1 0x04
#define SPI_MODE2 0x08
#define SPI_MODE3 0x0C
#define SPI_MODE_MASK 0x0C // CPOL = bit 3, CPHA = bit 2 on SPCR
#define SPI_CLOCK_MASK 0x03 // SPR1 = bit 1, SPR0 = bit 0 on SPCR
#define SPI_2XCLOCK_MASK 0x01 // SPI2X = bit 0 on SPSR
class SPIClass {
public:
static byte transfer(byte _data);
// SPI Configuration methods
inline static void attachInterrupt();
inline static void detachInterrupt(); // Default
static void begin(); // Default
static void end();
static void setBitOrder(uint8_t);
static void setDataMode(uint8_t);
static void setClockDivider(uint8_t);
};
extern SPIClass SPI;
/**
* \endcond
*/ |